Subject: Excessive stall times on ext4 in 3.9-rc2
From: Mel Gorman @ 2013-04-02 14:27 UTC
  To: linux-ext4; +Cc: LKML, Linux-MM, Jiri Slaby

I'm testing a page-reclaim-related series on my laptop that is partially
aimed at fixing long stalls during metadata-intensive operations, such as
a git checkout, when memory is low. I've been running 3.9-rc2 with the
series applied but found that interactive performance was awful even when
there was plenty of free memory.

I activated a monitor from mmtests that logs when a process is stuck in
D state for a long time and found that there are a lot of stalls in ext4.
The report first states that processes have been stalled on IO for a
total of 6498 seconds, which seems like a lot. A breakdown of the
recorded events is included below.
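
For illustration, the sort of data it gathers can be approximated by
polling /proc for tasks stuck in uninterruptible sleep. The following is
only a rough Python sketch under the assumption that /proc/<pid>/stat is
readable, not the actual mmtests monitor (which also records the function
the task was sleeping in and the full stack):

import os, time

THRESHOLD_MS = 1000   # report stalls longer than this (arbitrary cutoff)
INTERVAL     = 0.1    # polling interval in seconds

entered = {}          # pid -> (comm, time first seen in D state)

def task_state(pid):
    # Return (comm, state) from /proc/<pid>/stat, or None if the task is gone.
    try:
        with open('/proc/%s/stat' % pid) as f:
            data = f.read()
    except IOError:
        return None
    # comm is wrapped in parentheses and may contain spaces, so split on ')'
    comm = data[data.find('(') + 1:data.rfind(')')]
    state = data[data.rfind(')') + 2:].split()[0]
    return comm, state

while True:
    now = time.time()
    still_stalled = set()
    for pid in filter(str.isdigit, os.listdir('/proc')):
        info = task_state(pid)
        if info is None:
            continue
        comm, state = info
        if state == 'D':
            still_stalled.add(pid)
            entered.setdefault(pid, (comm, now))
    # Report tasks that have left D state (or exited) after a long stall.
    for pid in list(entered):
        if pid not in still_stalled:
            comm, start = entered.pop(pid)
            stalled_ms = (now - start) * 1000
            if stalled_ms >= THRESHOLD_MS:
                print('%-20s stalled %6.0f ms in D state' % (comm, stalled_ms))
    time.sleep(INTERVAL)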

Time stalled in this event:   566745 ms
Event count:                     181
git                  sleep_on_buffer        1236 ms
git                  sleep_on_buffer        1161 ms
imapd                sleep_on_buffer        3111 ms
cp                   sleep_on_buffer       10745 ms
cp                   sleep_on_buffer        5036 ms
cp                   sleep_on_buffer        4370 ms
cp                   sleep_on_buffer        1682 ms
cp                   sleep_on_buffer        8207 ms
cp                   sleep_on_buffer        5312 ms
cp                   sleep_on_buffer        1563 ms
patch                sleep_on_buffer        1172 ms
patch                sleep_on_buffer        4585 ms
patch                sleep_on_buffer        3541 ms
patch                sleep_on_buffer        4155 ms
patch                sleep_on_buffer        3120 ms
cc1                  sleep_on_buffer        1107 ms
cc1                  sleep_on_buffer        1291 ms
cc1                  sleep_on_buffer        1125 ms
cc1                  sleep_on_buffer        1257 ms
imapd                sleep_on_buffer        1424 ms
patch                sleep_on_buffer        1126 ms
mutt                 sleep_on_buffer        4804 ms
patch                sleep_on_buffer        3489 ms
patch                sleep_on_buffer        4242 ms
cp                   sleep_on_buffer        1942 ms
cp                   sleep_on_buffer        2670 ms
cp                   sleep_on_buffer        1071 ms
cp                   sleep_on_buffer        1676 ms
cp                   sleep_on_buffer        1058 ms
cp                   sleep_on_buffer        1382 ms
cp                   sleep_on_buffer        2196 ms
cp                   sleep_on_buffer        1017 ms
cp                   sleep_on_buffer        1096 ms
cp                   sleep_on_buffer        1203 ms
cp                   sleep_on_buffer        1307 ms
cp                   sleep_on_buffer        1676 ms
cp                   sleep_on_buffer        1024 ms
cp                   sleep_on_buffer        1270 ms
cp                   sleep_on_buffer        1200 ms
cp                   sleep_on_buffer        1674 ms
cp                   sleep_on_buffer        1202 ms
cp                   sleep_on_buffer        2260 ms
cp                   sleep_on_buffer        1685 ms
cp                   sleep_on_buffer        1921 ms
cp                   sleep_on_buffer        1434 ms
cp                   sleep_on_buffer        1346 ms
cp                   sleep_on_buffer        2132 ms
cp                   sleep_on_buffer        1304 ms
cp                   sleep_on_buffer        1328 ms
cp                   sleep_on_buffer        1419 ms
cp                   sleep_on_buffer        1882 ms
cp                   sleep_on_buffer        1172 ms
cp                   sleep_on_buffer        1299 ms
cp                   sleep_on_buffer        1806 ms
cp                   sleep_on_buffer        1297 ms
cp                   sleep_on_buffer        1484 ms
cp                   sleep_on_buffer        1313 ms
cp                   sleep_on_buffer        1342 ms
cp                   sleep_on_buffer        1320 ms
cp                   sleep_on_buffer        1147 ms
cp                   sleep_on_buffer        1346 ms
cp                   sleep_on_buffer        2391 ms
cp                   sleep_on_buffer        1128 ms
cp                   sleep_on_buffer        1386 ms
cp                   sleep_on_buffer        1505 ms
cp                   sleep_on_buffer        1664 ms
cp                   sleep_on_buffer        1290 ms
cp                   sleep_on_buffer        1532 ms
cp                   sleep_on_buffer        1719 ms
cp                   sleep_on_buffer        1149 ms
cp                   sleep_on_buffer        1364 ms
cp                   sleep_on_buffer        1397 ms
cp                   sleep_on_buffer        1213 ms
cp                   sleep_on_buffer        1171 ms
cp                   sleep_on_buffer        1352 ms
cp                   sleep_on_buffer        3000 ms
cp                   sleep_on_buffer        4866 ms
cp                   sleep_on_buffer        5863 ms
cp                   sleep_on_buffer        3951 ms
cp                   sleep_on_buffer        3469 ms
cp                   sleep_on_buffer        2172 ms
cp                   sleep_on_buffer       21366 ms
cp                   sleep_on_buffer       28856 ms
cp                   sleep_on_buffer        1212 ms
cp                   sleep_on_buffer        2326 ms
cp                   sleep_on_buffer        1357 ms
cp                   sleep_on_buffer        1482 ms
cp                   sleep_on_buffer        1372 ms
cp                   sleep_on_buffer        1475 ms
cp                   sleep_on_buffer        1540 ms
cp                   sleep_on_buffer        2993 ms
cp                   sleep_on_buffer        1269 ms
cp                   sleep_on_buffer        1478 ms
cp                   sleep_on_buffer        1137 ms
cp                   sleep_on_buffer        1114 ms
cp                   sleep_on_buffer        1137 ms
cp                   sleep_on_buffer        1616 ms
cp                   sleep_on_buffer        1291 ms
cp                   sleep_on_buffer        1336 ms
cp                   sleep_on_buffer        2440 ms
cp                   sleep_on_buffer        1058 ms
cp                   sleep_on_buffer        1825 ms
cp                   sleep_on_buffer        1320 ms
cp                   sleep_on_buffer        2556 ms
cp                   sleep_on_buffer        2463 ms
cp                   sleep_on_buffer        2563 ms
cp                   sleep_on_buffer        1218 ms
cp                   sleep_on_buffer        2862 ms
cp                   sleep_on_buffer        1484 ms
cp                   sleep_on_buffer        1039 ms
cp                   sleep_on_buffer        5180 ms
cp                   sleep_on_buffer        2584 ms
cp                   sleep_on_buffer        1357 ms
cp                   sleep_on_buffer        4492 ms
cp                   sleep_on_buffer        1111 ms
cp                   sleep_on_buffer        3992 ms
cp                   sleep_on_buffer        4205 ms
cp                   sleep_on_buffer        4980 ms
cp                   sleep_on_buffer        6303 ms
imapd                sleep_on_buffer        8473 ms
cp                   sleep_on_buffer        7128 ms
cp                   sleep_on_buffer        4740 ms
cp                   sleep_on_buffer       10236 ms
cp                   sleep_on_buffer        1210 ms
cp                   sleep_on_buffer        2670 ms
cp                   sleep_on_buffer       11461 ms
cp                   sleep_on_buffer        5946 ms
cp                   sleep_on_buffer        7144 ms
cp                   sleep_on_buffer        2205 ms
cp                   sleep_on_buffer       25904 ms
cp                   sleep_on_buffer        1766 ms
cp                   sleep_on_buffer        9823 ms
cp                   sleep_on_buffer        1849 ms
cp                   sleep_on_buffer        1380 ms
cp                   sleep_on_buffer        2524 ms
cp                   sleep_on_buffer        2389 ms
cp                   sleep_on_buffer        1996 ms
cp                   sleep_on_buffer       10396 ms
cp                   sleep_on_buffer        2020 ms
cp                   sleep_on_buffer        1132 ms
cc1                  sleep_on_buffer        1182 ms
cp                   sleep_on_buffer        1195 ms
cp                   sleep_on_buffer        1179 ms
cp                   sleep_on_buffer        7301 ms
cp                   sleep_on_buffer        8328 ms
cp                   sleep_on_buffer        6922 ms
cp                   sleep_on_buffer       10555 ms
Cache I/O            sleep_on_buffer       11963 ms
cp                   sleep_on_buffer        2368 ms
cp                   sleep_on_buffer        6905 ms
cp                   sleep_on_buffer        1686 ms
cp                   sleep_on_buffer        1219 ms
cp                   sleep_on_buffer        1793 ms
cp                   sleep_on_buffer        1899 ms
cp                   sleep_on_buffer        6412 ms
cp                   sleep_on_buffer        2799 ms
cp                   sleep_on_buffer        1316 ms
cp                   sleep_on_buffer        1211 ms
git                  sleep_on_buffer        1328 ms
imapd                sleep_on_buffer        4242 ms
imapd                sleep_on_buffer        2754 ms
imapd                sleep_on_buffer        4496 ms
imapd                sleep_on_buffer        4603 ms
imapd                sleep_on_buffer        7929 ms
imapd                sleep_on_buffer        8851 ms
imapd                sleep_on_buffer        2016 ms
imapd                sleep_on_buffer        1019 ms
imapd                sleep_on_buffer        1138 ms
git                  sleep_on_buffer        1510 ms
git                  sleep_on_buffer        1366 ms
git                  sleep_on_buffer        3445 ms
git                  sleep_on_buffer        2704 ms
git                  sleep_on_buffer        2057 ms
git                  sleep_on_buffer        1202 ms
git                  sleep_on_buffer        1293 ms
cat                  sleep_on_buffer        1505 ms
imapd                sleep_on_buffer        1263 ms
imapd                sleep_on_buffer        1347 ms
imapd                sleep_on_buffer        2910 ms
git                  sleep_on_buffer        1210 ms
git                  sleep_on_buffer        1199 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f51b1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff8118b789>] update_time+0x79/0xc0
[<ffffffff8118ba31>] touch_atime+0x161/0x170
[<ffffffff811105e3>] do_generic_file_read.constprop.35+0x363/0x440
[<ffffffff81111359>] generic_file_aio_read+0xd9/0x220
[<ffffffff81172b53>] do_sync_read+0xa3/0xe0
[<ffffffff8117327b>] vfs_read+0xab/0x170
[<ffffffff8117338d>] sys_read+0x4d/0x90
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Some of those stalls are awful -- 28 seconds to update atime seems
excessive. This is with relatime in use:

mel@machina:~ > mount | grep sd
/dev/sda8 on / type ext4 (rw,relatime,nobarrier,data=ordered)
/dev/sda6 on /home type ext4 (rw,relatime,nobarrier,data=ordered)
/dev/sda5 on /usr/src type ext4 (rw,relatime,nobarrier,data=ordered)
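
As I understand the relatime heuristic (sketched below in Python as a
paraphrase, not the kernel code), a read still dirties the inode whenever
the file was modified since the last atime update or the atime is more
than about a day old, so reads of recently-changed files such as the mail
folders imapd walks will still go through touch_atime -> ext4_dirty_inode
-> jbd2 as in the trace above:

import os, time

def relatime_needs_atime_update(path, max_age=24 * 60 * 60):
    # A read updates atime only if the file changed since the last atime
    # update, or the recorded atime is older than roughly a day.
    st = os.stat(path)
    if st.st_mtime >= st.st_atime:        # data modified since last read
        return True
    if st.st_ctime >= st.st_atime:        # inode changed since last read
        return True
    return (time.time() - st.st_atime) > max_age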

/tmp is mounted as tmpfs so I doubt it's a small write problem.

Time stalled in this event:   466201 ms
Event count:                      45
git                  sleep_on_buffer        1011 ms
git                  sleep_on_buffer       29540 ms
git                  sleep_on_buffer        1485 ms
git                  sleep_on_buffer        1244 ms
git                  sleep_on_buffer       17896 ms
git                  sleep_on_buffer        1882 ms
git                  sleep_on_buffer       18249 ms
mv                   sleep_on_buffer        2107 ms
mv                   sleep_on_buffer       12655 ms
mv                   sleep_on_buffer        4290 ms
mv                   sleep_on_buffer        2640 ms
patch                sleep_on_buffer        2433 ms
patch                sleep_on_buffer        2305 ms
patch                sleep_on_buffer        3672 ms
git                  sleep_on_buffer       16663 ms
git                  sleep_on_buffer       16516 ms
git                  sleep_on_buffer       16168 ms
git                  sleep_on_buffer        1382 ms
git                  sleep_on_buffer        1695 ms
git                  sleep_on_buffer        1301 ms
git                  sleep_on_buffer       22039 ms
git                  sleep_on_buffer       19077 ms
git                  sleep_on_buffer        1208 ms
git                  sleep_on_buffer       20237 ms
git                  sleep_on_buffer        1284 ms
git                  sleep_on_buffer       19518 ms
git                  sleep_on_buffer        1959 ms
git                  sleep_on_buffer       27574 ms
git                  sleep_on_buffer        9708 ms
git                  sleep_on_buffer        1968 ms
git                  sleep_on_buffer       23600 ms
git                  sleep_on_buffer       12578 ms
git                  sleep_on_buffer       19573 ms
git                  sleep_on_buffer        2257 ms
git                  sleep_on_buffer       19068 ms
git                  sleep_on_buffer        2833 ms
git                  sleep_on_buffer        3182 ms
git                  sleep_on_buffer       22496 ms
git                  sleep_on_buffer       14030 ms
git                  sleep_on_buffer        1722 ms
git                  sleep_on_buffer       25652 ms
git                  sleep_on_buffer       15730 ms
git                  sleep_on_buffer       19096 ms
git                  sleep_on_buffer        1529 ms
git                  sleep_on_buffer        3149 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f9505>] ext4_find_entry+0x325/0x4f0
[<ffffffff811f96f9>] ext4_lookup.part.31+0x29/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff8117ca63>] __lookup_hash+0x33/0x40
[<ffffffff8158464f>] lookup_slow+0x40/0xa4
[<ffffffff8117efb2>] path_lookupat+0x222/0x780
[<ffffffff8117f53f>] filename_lookup+0x2f/0xc0
[<ffffffff81182074>] user_path_at_empty+0x54/0xa0
[<ffffffff811820cc>] user_path_at+0xc/0x10
[<ffffffff81177b39>] vfs_fstatat+0x49/0xa0
[<ffffffff81177ba9>] vfs_lstat+0x19/0x20
[<ffffffff81177d15>] sys_newlstat+0x15/0x30
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

These are directory lookups, which might be a bit more reasonable to
stall on, but stalls of 30 seconds seem way out of order. Unfortunately
I do not have a comparison with older kernels but even when interactive
performance was bad on older kernels, it did not feel *this* bad.

The rest of the mail is just the remaining recorded stalls. There are a
lot of them and they are all really high. Is this a known issue? It's
not necessarily an ext4 issue and could be an IO scheduler or some other
writeback change too. I've been offline for a while so could have missed
similar bug reports and/or fixes.
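
In case the scheduler or writeback tuning is a factor, the relevant knobs
can be dumped with something like the sketch below (assuming the usual
sysfs/procfs locations):

import glob

# Active IO scheduler per disk; the entry in [brackets] is the one in use.
for path in glob.glob('/sys/block/sd*/queue/scheduler'):
    with open(path) as f:
        print('%s: %s' % (path, f.read().strip()))

# Global writeback tunables controlling how aggressively dirty data is flushed.
for knob in ('dirty_ratio', 'dirty_background_ratio',
             'dirty_expire_centisecs', 'dirty_writeback_centisecs'):
    with open('/proc/sys/vm/' + knob) as f:
        print('vm.%s = %s' % (knob, f.read().strip()))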

Time stalled in this event:   437040 ms
Event count:                     106
git                  wait_on_page_bit       1517 ms
git                  wait_on_page_bit       2694 ms
git                  wait_on_page_bit       2829 ms
git                  wait_on_page_bit       2796 ms
git                  wait_on_page_bit       2625 ms
git                  wait_on_page_bit      14350 ms
git                  wait_on_page_bit       4529 ms
xchat                wait_on_page_bit       1928 ms
akregator            wait_on_page_bit       1116 ms
akregator            wait_on_page_bit       3556 ms
cat                  wait_on_page_bit       5311 ms
sequence-patch.      wait_on_page_bit       2555 ms
pool                 wait_on_page_bit       1485 ms
git                  wait_on_page_bit       6778 ms
git                  wait_on_page_bit       3464 ms
git                  wait_on_page_bit       2189 ms
pool                 wait_on_page_bit       3657 ms
compare-kernels      wait_on_page_bit       5729 ms
compare-kernels      wait_on_page_bit       4446 ms
git                  wait_on_page_bit       2011 ms
xchat                wait_on_page_bit       6250 ms
git                  wait_on_page_bit       2761 ms
git                  wait_on_page_bit       1157 ms
xchat                wait_on_page_bit       2670 ms
pool                 wait_on_page_bit       5964 ms
xchat                wait_on_page_bit       1805 ms
play                 wait_on_page_bit       1800 ms
xchat                wait_on_page_bit      12008 ms
cat                  wait_on_page_bit       3642 ms
sequence-patch.      wait_on_page_bit       2309 ms
sequence-patch.      wait_on_page_bit       5430 ms
cat                  wait_on_page_bit       2614 ms
sequence-patch.      wait_on_page_bit       2220 ms
git                  wait_on_page_bit       3505 ms
git                  wait_on_page_bit       4181 ms
mozStorage #2        wait_on_page_bit       1012 ms
xchat                wait_on_page_bit       1966 ms
pool                 wait_on_page_bit      14217 ms
pool                 wait_on_page_bit       3728 ms
xchat                wait_on_page_bit       1896 ms
play                 wait_on_page_bit       8731 ms
mutt                 wait_on_page_bit      14378 ms
play                 wait_on_page_bit       1208 ms
Cache I/O            wait_on_page_bit       1174 ms
xchat                wait_on_page_bit       1141 ms
mozStorage #2        wait_on_page_bit       1161 ms
mozStorage #2        wait_on_page_bit       6727 ms
Cache I/O            wait_on_page_bit       7559 ms
mozStorage #2        wait_on_page_bit       4630 ms
Cache I/O            wait_on_page_bit       4642 ms
mozStorage #2        wait_on_page_bit       1764 ms
mozStorage #2        wait_on_page_bit       2357 ms
Cache I/O            wait_on_page_bit       3694 ms
xchat                wait_on_page_bit       8484 ms
mozStorage #2        wait_on_page_bit       3958 ms
mozStorage #2        wait_on_page_bit       2067 ms
Cache I/O            wait_on_page_bit       2728 ms
xchat                wait_on_page_bit       4115 ms
Cache I/O            wait_on_page_bit       7738 ms
xchat                wait_on_page_bit       7279 ms
Cache I/O            wait_on_page_bit       4366 ms
mozStorage #2        wait_on_page_bit       2040 ms
mozStorage #2        wait_on_page_bit       1102 ms
mozStorage #2        wait_on_page_bit       4628 ms
Cache I/O            wait_on_page_bit       5127 ms
akregator            wait_on_page_bit       2897 ms
Cache I/O            wait_on_page_bit       1429 ms
mozStorage #3        wait_on_page_bit       1465 ms
git                  wait_on_page_bit       2830 ms
git                  wait_on_page_bit       2508 ms
mutt                 wait_on_page_bit       4955 ms
pool                 wait_on_page_bit       4495 ms
mutt                 wait_on_page_bit       7429 ms
akregator            wait_on_page_bit       3744 ms
mutt                 wait_on_page_bit      11632 ms
pool                 wait_on_page_bit      11632 ms
sshd                 wait_on_page_bit      16035 ms
mutt                 wait_on_page_bit      16254 ms
mutt                 wait_on_page_bit       3253 ms
mutt                 wait_on_page_bit       3254 ms
git                  wait_on_page_bit       2644 ms
git                  wait_on_page_bit       2434 ms
git                  wait_on_page_bit       8364 ms
git                  wait_on_page_bit       1618 ms
git                  wait_on_page_bit       5990 ms
git                  wait_on_page_bit       2663 ms
git                  wait_on_page_bit       1102 ms
git                  wait_on_page_bit       1160 ms
git                  wait_on_page_bit       1161 ms
git                  wait_on_page_bit       1608 ms
git                  wait_on_page_bit       2100 ms
git                  wait_on_page_bit       2215 ms
git                  wait_on_page_bit       1231 ms
git                  wait_on_page_bit       2274 ms
git                  wait_on_page_bit       6081 ms
git                  wait_on_page_bit       6877 ms
git                  wait_on_page_bit       2035 ms
git                  wait_on_page_bit       2568 ms
git                  wait_on_page_bit       4475 ms
pool                 wait_on_page_bit       1253 ms
mv                   sleep_on_buffer        1036 ms
git                  wait_on_page_bit       1876 ms
git                  wait_on_page_bit       2332 ms
git                  wait_on_page_bit       2840 ms
git                  wait_on_page_bit       1850 ms
git                  wait_on_page_bit       3943 ms
[<ffffffff8110f0e0>] wait_on_page_bit+0x70/0x80
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff8110f3aa>] generic_perform_write+0xca/0x210
[<ffffffff8110f548>] generic_file_buffered_write+0x58/0x90
[<ffffffff81110f96>] __generic_file_aio_write+0x1b6/0x3b0
[<ffffffff8111120a>] generic_file_aio_write+0x7a/0xf0
[<ffffffff811ea3a3>] ext4_file_write+0x83/0xd0
[<ffffffff81172a73>] do_sync_write+0xa3/0xe0
[<ffffffff811730fe>] vfs_write+0xae/0x180
[<ffffffff8117341d>] sys_write+0x4d/0x90
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:   417840 ms
Event count:                      56
xchat                sleep_on_buffer        8571 ms
xchat                sleep_on_buffer        1772 ms
xchat                sleep_on_buffer        4063 ms
xchat                sleep_on_buffer       16290 ms
xchat                sleep_on_buffer        3201 ms
compare-kernels      sleep_on_buffer        1698 ms
xchat                sleep_on_buffer       14631 ms
xchat                sleep_on_buffer       12970 ms
xchat                sleep_on_buffer        4182 ms
xchat                sleep_on_buffer        5449 ms
Cache I/O            sleep_on_buffer        4079 ms
xchat                sleep_on_buffer        8246 ms
xchat                sleep_on_buffer        6530 ms
xchat                sleep_on_buffer        2041 ms
xchat                sleep_on_buffer       15815 ms
pool                 sleep_on_buffer        4115 ms
tee                  sleep_on_buffer        2057 ms
xchat                sleep_on_buffer        4814 ms
tee                  sleep_on_buffer       66037 ms
Cache I/O            sleep_on_buffer        6601 ms
xchat                sleep_on_buffer       10208 ms
tee                  sleep_on_buffer        6064 ms
Cache I/O            sleep_on_buffer        2008 ms
xchat                sleep_on_buffer        5257 ms
git                  sleep_on_buffer        2032 ms
xchat                sleep_on_buffer        2313 ms
tee                  sleep_on_buffer        5287 ms
Cache I/O            sleep_on_buffer        1650 ms
akregator            sleep_on_buffer        1154 ms
tee                  sleep_on_buffer       10362 ms
xchat                sleep_on_buffer        6208 ms
xchat                sleep_on_buffer        4405 ms
Cache I/O            sleep_on_buffer        8580 ms
mozStorage #2        sleep_on_buffer        6573 ms
tee                  sleep_on_buffer       10180 ms
Cache I/O            sleep_on_buffer        7691 ms
mozStorage #3        sleep_on_buffer        5502 ms
xchat                sleep_on_buffer        2339 ms
Cache I/O            sleep_on_buffer        3819 ms
sshd                 sleep_on_buffer        7252 ms
tee                  sleep_on_buffer       11422 ms
Cache I/O            sleep_on_buffer        1661 ms
bash                 sleep_on_buffer       10905 ms
git                  sleep_on_buffer        1277 ms
git                  sleep_on_buffer       18599 ms
git                  sleep_on_buffer        1189 ms
git                  sleep_on_buffer       22945 ms
pool                 sleep_on_buffer       17753 ms
git                  sleep_on_buffer        1367 ms
git                  sleep_on_buffer        2223 ms
git                  sleep_on_buffer        1280 ms
git                  sleep_on_buffer        2061 ms
git                  sleep_on_buffer        1034 ms
pool                 sleep_on_buffer       18189 ms
git                  sleep_on_buffer        1344 ms
xchat                sleep_on_buffer        2545 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f51b1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff8118b789>] update_time+0x79/0xc0
[<ffffffff8118b868>] file_update_time+0x98/0x100
[<ffffffff81110f5c>] __generic_file_aio_write+0x17c/0x3b0
[<ffffffff8111120a>] generic_file_aio_write+0x7a/0xf0
[<ffffffff811ea3a3>] ext4_file_write+0x83/0xd0
[<ffffffff81172a73>] do_sync_write+0xa3/0xe0
[<ffffffff811730fe>] vfs_write+0xae/0x180
[<ffffffff8117341d>] sys_write+0x4d/0x90
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:   283964 ms
Event count:                      27
git                  sleep_on_buffer       19088 ms
git                  sleep_on_buffer        1177 ms
git                  sleep_on_buffer       30745 ms
git                  sleep_on_buffer        4782 ms
git                  sleep_on_buffer       11435 ms
git                  sleep_on_buffer        2816 ms
git                  sleep_on_buffer        5088 ms
git-merge            sleep_on_buffer       18801 ms
git                  sleep_on_buffer        1415 ms
git                  sleep_on_buffer       16005 ms
git                  sleep_on_buffer        2178 ms
git                  sleep_on_buffer       14354 ms
git                  sleep_on_buffer       12612 ms
git                  sleep_on_buffer        2785 ms
git                  sleep_on_buffer       15498 ms
git                  sleep_on_buffer       15331 ms
git                  sleep_on_buffer        1151 ms
git                  sleep_on_buffer        1320 ms
git                  sleep_on_buffer        8787 ms
git                  sleep_on_buffer        2199 ms
git                  sleep_on_buffer        1006 ms
git                  sleep_on_buffer       23644 ms
git                  sleep_on_buffer        2407 ms
git                  sleep_on_buffer        1169 ms
git                  sleep_on_buffer       25022 ms
git                  sleep_on_buffer       18651 ms
git                  sleep_on_buffer       24498 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811fb6cf>] ext4_orphan_add+0x10f/0x1f0
[<ffffffff811fc6cb>] ext4_unlink+0x32b/0x350
[<ffffffff8117daef>] vfs_unlink.part.31+0x7f/0xe0
[<ffffffff8117f9d7>] vfs_unlink+0x37/0x50
[<ffffffff8117fbff>] do_unlinkat+0x20f/0x260
[<ffffffff81182611>] sys_unlink+0x11/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:   266300 ms
Event count:                      69
git                  sleep_on_buffer        2773 ms
akregator            sleep_on_buffer        1957 ms
git                  sleep_on_buffer        1417 ms
imapd                sleep_on_buffer        9532 ms
imapd                sleep_on_buffer       57801 ms
pool                 sleep_on_buffer        7761 ms
imapd                sleep_on_buffer        1444 ms
patch                sleep_on_buffer        3872 ms
imapd                sleep_on_buffer        6422 ms
imapd                sleep_on_buffer        1748 ms
pool                 sleep_on_buffer       10552 ms
imapd                sleep_on_buffer       10114 ms
imapd                sleep_on_buffer        7575 ms
mutt                 sleep_on_buffer        3901 ms
bzip2                sleep_on_buffer        1104 ms
imapd                sleep_on_buffer        4983 ms
imapd                sleep_on_buffer        1746 ms
mutt                 sleep_on_buffer        1881 ms
imapd                sleep_on_buffer        1067 ms
imapd                sleep_on_buffer        1863 ms
imapd                sleep_on_buffer        1508 ms
imapd                sleep_on_buffer        1508 ms
offlineimap          sleep_on_buffer        1385 ms
imapd                sleep_on_buffer        1653 ms
imapd                sleep_on_buffer        1179 ms
imapd                sleep_on_buffer        3473 ms
imapd                sleep_on_buffer       10130 ms
vim                  sleep_on_buffer        1690 ms
imapd                sleep_on_buffer        3102 ms
dconf-service        sleep_on_buffer        5097 ms
imapd                sleep_on_buffer        2888 ms
cp                   sleep_on_buffer        1036 ms
imapd                sleep_on_buffer       22501 ms
rsync                sleep_on_buffer        5026 ms
imapd                sleep_on_buffer        2897 ms
rsync                sleep_on_buffer        1200 ms
akregator            sleep_on_buffer        4780 ms
Cache I/O            sleep_on_buffer        1433 ms
imapd                sleep_on_buffer        2588 ms
akregator            sleep_on_buffer        1576 ms
vi                   sleep_on_buffer        2086 ms
firefox              sleep_on_buffer        4718 ms
imapd                sleep_on_buffer        1158 ms
git                  sleep_on_buffer        2073 ms
git                  sleep_on_buffer        1017 ms
git                  sleep_on_buffer        1616 ms
git                  sleep_on_buffer        1043 ms
imapd                sleep_on_buffer        1746 ms
imapd                sleep_on_buffer        1007 ms
git                  sleep_on_buffer        1146 ms
git                  sleep_on_buffer        1916 ms
git                  sleep_on_buffer        1059 ms
git                  sleep_on_buffer        1801 ms
git                  sleep_on_buffer        1208 ms
git                  sleep_on_buffer        1486 ms
git                  sleep_on_buffer        1806 ms
git                  sleep_on_buffer        1295 ms
git                  sleep_on_buffer        1461 ms
git                  sleep_on_buffer        1371 ms
git                  sleep_on_buffer        2010 ms
git                  sleep_on_buffer        1622 ms
git                  sleep_on_buffer        1453 ms
git                  sleep_on_buffer        1392 ms
git                  sleep_on_buffer        1329 ms
git                  sleep_on_buffer        1773 ms
git                  sleep_on_buffer        1750 ms
git                  sleep_on_buffer        2354 ms
imapd                sleep_on_buffer        3201 ms
imapd                sleep_on_buffer        2240 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811ebe24>] __ext4_new_inode+0x294/0x10c0
[<ffffffff811fac5b>] ext4_create+0xbb/0x190
[<ffffffff81180aa5>] vfs_create+0xb5/0x120
[<ffffffff81180c4e>] lookup_open+0x13e/0x1d0
[<ffffffff81180fe7>] do_last+0x307/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:   240174 ms
Event count:                      34
systemd-journal      sleep_on_buffer        1321 ms
systemd-journal      sleep_on_buffer        4851 ms
systemd-journal      sleep_on_buffer        3341 ms
systemd-journal      sleep_on_buffer       17219 ms
systemd-journal      sleep_on_buffer        3190 ms
systemd-journal      sleep_on_buffer       13420 ms
systemd-journal      sleep_on_buffer       23421 ms
systemd-journal      sleep_on_buffer        4987 ms
systemd-journal      sleep_on_buffer       16358 ms
systemd-journal      sleep_on_buffer        2734 ms
mozStorage #2        sleep_on_buffer        1454 ms
systemd-journal      sleep_on_buffer        4524 ms
mozStorage #2        sleep_on_buffer        1211 ms
systemd-journal      sleep_on_buffer        1711 ms
systemd-journal      sleep_on_buffer        2158 ms
mkdir                wait_on_page_bit_killable   1084 ms
systemd-journal      sleep_on_buffer        5673 ms
mozStorage #2        sleep_on_buffer        1800 ms
systemd-journal      sleep_on_buffer        5586 ms
mozStorage #2        sleep_on_buffer        3199 ms
nm-dhcp-client.      wait_on_page_bit_killable   1060 ms
mozStorage #2        sleep_on_buffer        6669 ms
systemd-journal      sleep_on_buffer        3603 ms
systemd-journal      sleep_on_buffer        7666 ms
systemd-journal      sleep_on_buffer       13961 ms
systemd-journal      sleep_on_buffer        9063 ms
systemd-journal      sleep_on_buffer        4120 ms
systemd-journal      sleep_on_buffer        3328 ms
systemd-journal      sleep_on_buffer       12093 ms
systemd-journal      sleep_on_buffer        5464 ms
systemd-journal      sleep_on_buffer       12649 ms
systemd-journal      sleep_on_buffer       23460 ms
systemd-journal      sleep_on_buffer       13123 ms
systemd-journal      sleep_on_buffer        4673 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f51b1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff8118b789>] update_time+0x79/0xc0
[<ffffffff8118b868>] file_update_time+0x98/0x100
[<ffffffff811f539c>] ext4_page_mkwrite+0x5c/0x470
[<ffffffff8113740e>] do_wp_page+0x5ce/0x7d0
[<ffffffff81139598>] handle_pte_fault+0x1c8/0x200
[<ffffffff8113a731>] handle_mm_fault+0x271/0x390
[<ffffffff81597959>] __do_page_fault+0x169/0x520
[<ffffffff81597d19>] do_page_fault+0x9/0x10
[<ffffffff81594488>] page_fault+0x28/0x30
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:   212304 ms
Event count:                      41
pool                 sleep_on_buffer        1216 ms
pool                 sleep_on_buffer       36361 ms
cp                   sleep_on_buffer        5034 ms
git                  sleep_on_buffer        2344 ms
gnuplot              sleep_on_buffer        1733 ms
gnuplot              sleep_on_buffer        2303 ms
gnuplot              sleep_on_buffer        1982 ms
gnuplot              sleep_on_buffer        2491 ms
gnuplot              sleep_on_buffer        1520 ms
gnuplot              sleep_on_buffer        1209 ms
gnuplot              sleep_on_buffer        1188 ms
gnuplot              sleep_on_buffer        1654 ms
gnuplot              sleep_on_buffer        1403 ms
gnuplot              sleep_on_buffer        1386 ms
gnuplot              sleep_on_buffer        1899 ms
gnuplot              sleep_on_buffer        2673 ms
gnuplot              sleep_on_buffer        2158 ms
gnuplot              sleep_on_buffer        1780 ms
gnuplot              sleep_on_buffer        1624 ms
gnuplot              sleep_on_buffer        1704 ms
gnuplot              sleep_on_buffer        2207 ms
gnuplot              sleep_on_buffer        2557 ms
gnuplot              sleep_on_buffer        1692 ms
gnuplot              sleep_on_buffer        1686 ms
gnuplot              sleep_on_buffer        1258 ms
offlineimap          sleep_on_buffer        1217 ms
pool                 sleep_on_buffer       13434 ms
offlineimap          sleep_on_buffer       30091 ms
offlineimap          sleep_on_buffer        9048 ms
offlineimap          sleep_on_buffer       13754 ms
offlineimap          sleep_on_buffer       36560 ms
offlineimap          sleep_on_buffer        1465 ms
cp                   sleep_on_buffer        1525 ms
cp                   sleep_on_buffer        2193 ms
DOM Worker           sleep_on_buffer        5563 ms
DOM Worker           sleep_on_buffer        3597 ms
cp                   sleep_on_buffer        1261 ms
git                  sleep_on_buffer        1427 ms
git                  sleep_on_buffer        1097 ms
git                  sleep_on_buffer        1232 ms
offlineimap          sleep_on_buffer        5778 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f9505>] ext4_find_entry+0x325/0x4f0
[<ffffffff811f96f9>] ext4_lookup.part.31+0x29/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff81180bd8>] lookup_open+0xc8/0x1d0
[<ffffffff81180fe7>] do_last+0x307/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:   211510 ms
Event count:                      20
flush-8:0            sleep_on_buffer       29387 ms
flush-8:0            sleep_on_buffer        2159 ms
flush-8:0            sleep_on_buffer        8593 ms
flush-8:0            sleep_on_buffer        3143 ms
flush-8:0            sleep_on_buffer        4641 ms
flush-8:0            sleep_on_buffer       17279 ms
flush-8:0            sleep_on_buffer        2210 ms
flush-8:0            sleep_on_buffer       15948 ms
flush-8:0            sleep_on_buffer        4686 ms
flush-8:0            sleep_on_buffer        7027 ms
flush-8:0            sleep_on_buffer       17871 ms
flush-8:0            sleep_on_buffer        3262 ms
flush-8:0            sleep_on_buffer        7311 ms
flush-8:0            sleep_on_buffer       11255 ms
flush-8:0            sleep_on_buffer        5693 ms
flush-8:0            sleep_on_buffer        8628 ms
flush-8:0            sleep_on_buffer       10917 ms
flush-8:0            sleep_on_buffer       17497 ms
flush-8:0            sleep_on_buffer       15750 ms
flush-8:0            sleep_on_buffer       18253 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff8121a8f2>] ext4_ext_get_access.isra.39+0x22/0x30
[<ffffffff8121d506>] ext4_split_extent_at+0xb6/0x390
[<ffffffff8121e038>] ext4_split_extent.isra.47+0x108/0x130
[<ffffffff8121e3ae>] ext4_ext_convert_to_initialized+0x15e/0x590
[<ffffffff8121ee7b>] ext4_ext_handle_uninitialized_extents+0x2fb/0x3c0
[<ffffffff8121f547>] ext4_ext_map_blocks+0x5d7/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119c5d7>] wb_check_old_data_flush+0x97/0xa0
[<ffffffff8119de49>] wb_do_writeback+0x149/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:   201192 ms
Event count:                      23
imapd                sleep_on_buffer        3770 ms
imapd                sleep_on_buffer       37050 ms
make                 sleep_on_buffer        5342 ms
compare-mmtests      sleep_on_buffer        1774 ms
scp                  sleep_on_buffer        2478 ms
scp                  sleep_on_buffer        2368 ms
imapd                sleep_on_buffer        3163 ms
pool                 sleep_on_buffer        2033 ms
imapd                sleep_on_buffer        1311 ms
imapd                sleep_on_buffer       11011 ms
imapd                sleep_on_buffer        1345 ms
imapd                sleep_on_buffer       20545 ms
imapd                sleep_on_buffer       19511 ms
imapd                sleep_on_buffer       20863 ms
imapd                sleep_on_buffer       32313 ms
imapd                sleep_on_buffer        6984 ms
imapd                sleep_on_buffer        8152 ms
imapd                sleep_on_buffer        3038 ms
imapd                sleep_on_buffer        8032 ms
imapd                sleep_on_buffer        3649 ms
imapd                sleep_on_buffer        2195 ms
imapd                sleep_on_buffer        1848 ms
mv                   sleep_on_buffer        2417 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f9505>] ext4_find_entry+0x325/0x4f0
[<ffffffff811f96f9>] ext4_lookup.part.31+0x29/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff8117ca63>] __lookup_hash+0x33/0x40
[<ffffffff8158464f>] lookup_slow+0x40/0xa4
[<ffffffff8117efb2>] path_lookupat+0x222/0x780
[<ffffffff8117f53f>] filename_lookup+0x2f/0xc0
[<ffffffff81182074>] user_path_at_empty+0x54/0xa0
[<ffffffff811820cc>] user_path_at+0xc/0x10
[<ffffffff81177b39>] vfs_fstatat+0x49/0xa0
[<ffffffff81177bc6>] vfs_stat+0x16/0x20
[<ffffffff81177ce5>] sys_newstat+0x15/0x30
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:   169878 ms
Event count:                      56
git                  wait_on_page_bit       8573 ms
git                  wait_on_page_bit       2986 ms
git                  wait_on_page_bit       1811 ms
git                  wait_on_page_bit       2623 ms
git                  wait_on_page_bit       1419 ms
git                  wait_on_page_bit       1244 ms
git                  wait_on_page_bit       1134 ms
git                  wait_on_page_bit       5825 ms
git                  wait_on_page_bit       3567 ms
git                  wait_on_page_bit       1119 ms
git                  wait_on_page_bit       1375 ms
git                  wait_on_page_bit       3726 ms
git                  wait_on_page_bit       2670 ms
git                  wait_on_page_bit       4141 ms
git                  wait_on_page_bit       3858 ms
git                  wait_on_page_bit       6684 ms
git                  wait_on_page_bit       5355 ms
gen-report.sh        wait_on_page_bit       4747 ms
git                  wait_on_page_bit       6752 ms
git                  wait_on_page_bit       1229 ms
git                  wait_on_page_bit       4409 ms
git                  wait_on_page_bit       3101 ms
git                  wait_on_page_bit       1817 ms
git                  wait_on_page_bit       1687 ms
git                  wait_on_page_bit       3683 ms
git                  wait_on_page_bit       2031 ms
git                  wait_on_page_bit       2138 ms
git                  wait_on_page_bit       1513 ms
git                  wait_on_page_bit       1804 ms
git                  wait_on_page_bit       2559 ms
git                  wait_on_page_bit       7958 ms
git                  wait_on_page_bit       6265 ms
git                  wait_on_page_bit       1261 ms
git                  wait_on_page_bit       4018 ms
git                  wait_on_page_bit       1450 ms
git                  wait_on_page_bit       1821 ms
git                  wait_on_page_bit       3186 ms
git                  wait_on_page_bit       1513 ms
git                  wait_on_page_bit       3215 ms
git                  wait_on_page_bit       1262 ms
git                  wait_on_page_bit       8188 ms
git                  sleep_on_buffer        1019 ms
git                  wait_on_page_bit       5233 ms
git                  wait_on_page_bit       1842 ms
git                  wait_on_page_bit       1378 ms
git                  wait_on_page_bit       1386 ms
git                  wait_on_page_bit       2016 ms
git                  wait_on_page_bit       1901 ms
git                  wait_on_page_bit       2750 ms
git                  sleep_on_buffer        1152 ms
git                  wait_on_page_bit       1169 ms
git                  wait_on_page_bit       1371 ms
git                  wait_on_page_bit       1916 ms
git                  wait_on_page_bit       1630 ms
git                  wait_on_page_bit       8286 ms
git                  wait_on_page_bit       1112 ms
[<ffffffff8110f0e0>] wait_on_page_bit+0x70/0x80
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff8111d620>] truncate_inode_pages+0x10/0x20
[<ffffffff8111d677>] truncate_pagecache+0x47/0x70
[<ffffffff811f2f4d>] ext4_setattr+0x17d/0x640
[<ffffffff8118d132>] notify_change+0x1f2/0x3c0
[<ffffffff811715d9>] do_truncate+0x59/0xa0
[<ffffffff8117d186>] handle_truncate+0x66/0xa0
[<ffffffff81181306>] do_last+0x626/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:   167244 ms
Event count:                     118
folder-markup.s      sleep_on_buffer        2055 ms
folder-markup.s      sleep_on_buffer        3917 ms
mv                   sleep_on_buffer        1025 ms
folder-markup.s      sleep_on_buffer        1670 ms
folder-markup.s      sleep_on_buffer        1144 ms
folder-markup.s      sleep_on_buffer        1063 ms
folder-markup.s      sleep_on_buffer        1385 ms
folder-markup.s      sleep_on_buffer        1753 ms
folder-markup.s      sleep_on_buffer        1351 ms
folder-markup.s      sleep_on_buffer        1143 ms
folder-markup.s      sleep_on_buffer        1581 ms
folder-markup.s      sleep_on_buffer        1747 ms
folder-markup.s      sleep_on_buffer        1241 ms
folder-markup.s      sleep_on_buffer        1419 ms
folder-markup.s      sleep_on_buffer        1429 ms
folder-markup.s      sleep_on_buffer        1112 ms
git                  sleep_on_buffer        1190 ms
git                  sleep_on_buffer        1190 ms
git                  sleep_on_buffer        1050 ms
git                  sleep_on_buffer        1463 ms
git                  sleep_on_buffer        1376 ms
folder-markup.s      sleep_on_buffer        1481 ms
folder-markup.s      sleep_on_buffer        1424 ms
folder-markup.s      sleep_on_buffer        1633 ms
folder-markup.s      sleep_on_buffer        1012 ms
folder-markup.s      sleep_on_buffer        1706 ms
folder-markup.s      sleep_on_buffer        1246 ms
folder-markup.s      sleep_on_buffer        1275 ms
git                  sleep_on_buffer        1484 ms
git                  sleep_on_buffer        1216 ms
git                  sleep_on_buffer        1065 ms
git                  sleep_on_buffer        1455 ms
folder-markup.s      sleep_on_buffer        1063 ms
folder-markup.s      sleep_on_buffer        3059 ms
folder-markup.s      sleep_on_buffer        1140 ms
folder-markup.s      sleep_on_buffer        1353 ms
mv                   sleep_on_buffer        1050 ms
folder-markup.s      sleep_on_buffer        1209 ms
git                  sleep_on_buffer        1341 ms
scp                  sleep_on_buffer        4975 ms
folder-markup.s      sleep_on_buffer        1743 ms
folder-markup.s      sleep_on_buffer        1280 ms
folder-markup.s      sleep_on_buffer        2140 ms
folder-markup.s      sleep_on_buffer        1138 ms
folder-markup.s      sleep_on_buffer        1140 ms
folder-markup.s      sleep_on_buffer        1162 ms
folder-markup.s      sleep_on_buffer        1023 ms
git                  sleep_on_buffer        2174 ms
git                  sleep_on_buffer        1306 ms
git                  sleep_on_buffer        1224 ms
git                  sleep_on_buffer        1359 ms
git                  sleep_on_buffer        1551 ms
git                  sleep_on_buffer        1068 ms
git                  sleep_on_buffer        1367 ms
git                  sleep_on_buffer        1292 ms
git                  sleep_on_buffer        1369 ms
git                  sleep_on_buffer        1554 ms
git                  sleep_on_buffer        1273 ms
git                  sleep_on_buffer        1365 ms
mv                   sleep_on_buffer        1107 ms
folder-markup.s      sleep_on_buffer        1519 ms
folder-markup.s      sleep_on_buffer        1253 ms
folder-markup.s      sleep_on_buffer        1195 ms
mv                   sleep_on_buffer        1091 ms
git                  sleep_on_buffer        1147 ms
git                  sleep_on_buffer        1271 ms
git                  sleep_on_buffer        1056 ms
git                  sleep_on_buffer        1134 ms
git                  sleep_on_buffer        1252 ms
git                  sleep_on_buffer        1352 ms
git                  sleep_on_buffer        1449 ms
folder-markup.s      sleep_on_buffer        1732 ms
folder-markup.s      sleep_on_buffer        1332 ms
folder-markup.s      sleep_on_buffer        1450 ms
git                  sleep_on_buffer        1102 ms
git                  sleep_on_buffer        1771 ms
git                  sleep_on_buffer        1225 ms
git                  sleep_on_buffer        1089 ms
git                  sleep_on_buffer        1083 ms
folder-markup.s      sleep_on_buffer        1071 ms
folder-markup.s      sleep_on_buffer        1186 ms
folder-markup.s      sleep_on_buffer        1170 ms
git                  sleep_on_buffer        1249 ms
git                  sleep_on_buffer        1255 ms
folder-markup.s      sleep_on_buffer        1563 ms
folder-markup.s      sleep_on_buffer        1258 ms
git                  sleep_on_buffer        2066 ms
git                  sleep_on_buffer        1493 ms
git                  sleep_on_buffer        1515 ms
git                  sleep_on_buffer        1380 ms
git                  sleep_on_buffer        1238 ms
git                  sleep_on_buffer        1393 ms
git                  sleep_on_buffer        1040 ms
git                  sleep_on_buffer        1986 ms
git                  sleep_on_buffer        1293 ms
git                  sleep_on_buffer        1209 ms
git                  sleep_on_buffer        1098 ms
git                  sleep_on_buffer        1091 ms
git                  sleep_on_buffer        1701 ms
git                  sleep_on_buffer        2237 ms
git                  sleep_on_buffer        1810 ms
folder-markup.s      sleep_on_buffer        1166 ms
folder-markup.s      sleep_on_buffer        2064 ms
folder-markup.s      sleep_on_buffer        1285 ms
folder-markup.s      sleep_on_buffer        1129 ms
folder-markup.s      sleep_on_buffer        1080 ms
git                  sleep_on_buffer        1277 ms
git                  sleep_on_buffer        1280 ms
folder-markup.s      sleep_on_buffer        1298 ms
folder-markup.s      sleep_on_buffer        1355 ms
folder-markup.s      sleep_on_buffer        1043 ms
folder-markup.s      sleep_on_buffer        1204 ms
git                  sleep_on_buffer        1068 ms
git                  sleep_on_buffer        1654 ms
git                  sleep_on_buffer        1380 ms
git                  sleep_on_buffer        1289 ms
git                  sleep_on_buffer        1442 ms
git                  sleep_on_buffer        1299 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f9dd2>] ext4_dx_add_entry+0xc2/0x590
[<ffffffff811fa925>] ext4_add_entry+0x265/0x2d0
[<ffffffff811fa9b6>] ext4_add_nondir+0x26/0x80
[<ffffffff811fac9f>] ext4_create+0xff/0x190
[<ffffffff81180aa5>] vfs_create+0xb5/0x120
[<ffffffff81180c4e>] lookup_open+0x13e/0x1d0
[<ffffffff81180fe7>] do_last+0x307/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:   135113 ms
Event count:                     116
flush-8:16           get_request            1274 ms
flush-8:16           get_request            1098 ms
flush-8:16           get_request            1079 ms
flush-8:16           get_request            1234 ms
flush-8:16           get_request            1229 ms
flush-8:16           get_request            1056 ms
flush-8:16           get_request            1096 ms
flush-8:16           get_request            1092 ms
flush-8:16           get_request            1099 ms
flush-8:16           get_request            1057 ms
flush-8:16           get_request            1103 ms
flush-8:16           get_request            1207 ms
flush-8:16           get_request            1087 ms
flush-8:16           get_request            1060 ms
flush-8:16           get_request            1080 ms
flush-8:16           get_request            1196 ms
flush-8:16           get_request            1453 ms
flush-8:16           get_request            1084 ms
flush-8:16           get_request            1051 ms
flush-8:16           get_request            1084 ms
flush-8:16           get_request            1132 ms
flush-8:16           get_request            1164 ms
flush-8:16           get_request            1063 ms
flush-8:16           get_request            1221 ms
flush-8:16           get_request            1074 ms
flush-8:16           get_request            1099 ms
flush-8:16           get_request            1077 ms
flush-8:16           get_request            1243 ms
flush-8:16           get_request            1080 ms
flush-8:16           get_request            1078 ms
flush-8:16           get_request            1101 ms
flush-8:16           get_request            1080 ms
flush-8:16           get_request            1056 ms
flush-8:16           get_request            1333 ms
flush-8:16           get_request            1103 ms
flush-8:16           get_request            1216 ms
flush-8:16           get_request            1108 ms
flush-8:16           get_request            1109 ms
flush-8:16           get_request            1113 ms
flush-8:16           get_request            1349 ms
flush-8:16           get_request            1086 ms
flush-8:16           get_request            1070 ms
flush-8:16           get_request            1064 ms
flush-8:16           get_request            1091 ms
flush-8:16           get_request            1064 ms
flush-8:16           get_request            1222 ms
flush-8:16           get_request            1103 ms
flush-8:16           get_request            1434 ms
flush-8:16           get_request            1124 ms
flush-8:16           get_request            1359 ms
flush-8:16           get_request            1060 ms
flush-8:16           get_request            1057 ms
flush-8:16           get_request            1066 ms
flush-8:16           get_request            1357 ms
flush-8:16           get_request            1089 ms
flush-8:16           get_request            1071 ms
flush-8:16           get_request            1196 ms
flush-8:16           get_request            1091 ms
flush-8:16           get_request            1203 ms
flush-8:16           get_request            1100 ms
flush-8:16           get_request            1208 ms
flush-8:16           get_request            1113 ms
flush-8:16           get_request            1260 ms
flush-8:16           get_request            1480 ms
flush-8:16           get_request            1054 ms
flush-8:16           get_request            1211 ms
flush-8:16           get_request            1101 ms
flush-8:16           get_request            1098 ms
flush-8:16           get_request            1190 ms
flush-8:16           get_request            1046 ms
flush-8:16           get_request            1066 ms
flush-8:16           get_request            1204 ms
flush-8:16           get_request            1076 ms
flush-8:16           get_request            1094 ms
flush-8:16           get_request            1094 ms
flush-8:16           get_request            1081 ms
flush-8:16           get_request            1080 ms
flush-8:16           get_request            1193 ms
flush-8:16           get_request            1066 ms
flush-8:16           get_request            1069 ms
flush-8:16           get_request            1081 ms
flush-8:16           get_request            1107 ms
flush-8:16           get_request            1375 ms
flush-8:16           get_request            1080 ms
flush-8:16           get_request            1068 ms
flush-8:16           get_request            1077 ms
flush-8:16           get_request            1108 ms
flush-8:16           get_request            1080 ms
flush-8:16           get_request            1098 ms
flush-8:16           get_request            1063 ms
flush-8:16           get_request            1074 ms
flush-8:16           get_request            1072 ms
flush-8:16           get_request            1038 ms
flush-8:16           get_request            1058 ms
flush-8:16           get_request            1202 ms
flush-8:16           get_request            1359 ms
flush-8:16           get_request            1190 ms
flush-8:16           get_request            1497 ms
flush-8:16           get_request            2173 ms
flush-8:16           get_request            1199 ms
flush-8:16           get_request            1358 ms
flush-8:16           get_request            1384 ms
flush-8:16           get_request            1355 ms
flush-8:16           get_request            1327 ms
flush-8:16           get_request            1312 ms
flush-8:16           get_request            1318 ms
flush-8:16           get_request            1093 ms
flush-8:16           get_request            1265 ms
flush-8:16           get_request            1155 ms
flush-8:16           get_request            1107 ms
flush-8:16           get_request            1263 ms
flush-8:16           get_request            1104 ms
flush-8:16           get_request            1122 ms
flush-8:16           get_request            1578 ms
flush-8:16           get_request            1089 ms
flush-8:16           get_request            1075 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff812a4b1f>] generic_make_request.part.59+0x6f/0xa0
[<ffffffff812a5050>] generic_make_request+0x60/0x70
[<ffffffff812a50c7>] submit_bio+0x67/0x130
[<ffffffff811f6014>] ext4_io_submit+0x24/0x60
[<ffffffff811f2265>] ext4_writepage+0x135/0x220
[<ffffffff81119292>] __writepage+0x12/0x40
[<ffffffff81119a96>] write_cache_pages+0x206/0x460
[<ffffffff81119d35>] generic_writepages+0x45/0x70
[<ffffffff8111ac15>] do_writepages+0x25/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119de90>] wb_do_writeback+0x190/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:   115939 ms
Event count:                      23
bash                 sleep_on_buffer        3076 ms
du                   sleep_on_buffer        2364 ms
du                   sleep_on_buffer        1515 ms
git                  sleep_on_buffer        1706 ms
rm                   sleep_on_buffer       10595 ms
find                 sleep_on_buffer        2048 ms
rm                   sleep_on_buffer        9146 ms
rm                   sleep_on_buffer        8220 ms
rm                   sleep_on_buffer        6080 ms
cp                   sleep_on_buffer        6302 ms
ls                   sleep_on_buffer        1225 ms
cp                   sleep_on_buffer        6279 ms
cp                   sleep_on_buffer        1164 ms
cp                   sleep_on_buffer        3365 ms
cp                   sleep_on_buffer        2191 ms
cp                   sleep_on_buffer        1367 ms
du                   sleep_on_buffer        4155 ms
cp                   sleep_on_buffer        3906 ms
cp                   sleep_on_buffer        4758 ms
rsync                sleep_on_buffer        6575 ms
git                  sleep_on_buffer        1688 ms
git                  sleep_on_buffer       26470 ms
git                  sleep_on_buffer        1744 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f51b1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff8118b789>] update_time+0x79/0xc0
[<ffffffff8118ba31>] touch_atime+0x161/0x170
[<ffffffff811849b2>] vfs_readdir+0xc2/0xe0
[<ffffffff81184ae9>] sys_getdents+0x89/0x110
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:   101122 ms
Event count:                      18
flush-8:0            sleep_on_buffer       21732 ms
flush-8:0            sleep_on_buffer        2211 ms
flush-8:0            sleep_on_buffer        1480 ms
flush-8:0            sleep_on_buffer       16292 ms
flush-8:0            sleep_on_buffer        2975 ms
flush-8:0            sleep_on_buffer        7025 ms
flush-8:0            sleep_on_buffer        5535 ms
flush-8:0            sleep_on_buffer        1885 ms
flush-8:0            sleep_on_buffer        1329 ms
flush-8:0            sleep_on_buffer        1374 ms
flush-8:0            sleep_on_buffer        1490 ms
flush-8:0            sleep_on_buffer       16341 ms
flush-8:0            sleep_on_buffer       14939 ms
flush-8:0            sleep_on_buffer        1202 ms
flush-8:0            sleep_on_buffer        1262 ms
flush-8:0            sleep_on_buffer        1121 ms
flush-8:0            sleep_on_buffer        1571 ms
flush-8:0            sleep_on_buffer        1358 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff8121a8f2>] ext4_ext_get_access.isra.39+0x22/0x30
[<ffffffff8121d506>] ext4_split_extent_at+0xb6/0x390
[<ffffffff8121e038>] ext4_split_extent.isra.47+0x108/0x130
[<ffffffff8121e3ae>] ext4_ext_convert_to_initialized+0x15e/0x590
[<ffffffff8121ee7b>] ext4_ext_handle_uninitialized_extents+0x2fb/0x3c0
[<ffffffff8121f547>] ext4_ext_map_blocks+0x5d7/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119de90>] wb_do_writeback+0x190/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    98613 ms
Event count:                       8
git                  sleep_on_buffer       14529 ms
git                  sleep_on_buffer        4477 ms
git                  sleep_on_buffer       10045 ms
git                  sleep_on_buffer       11068 ms
git                  sleep_on_buffer       18777 ms
git                  sleep_on_buffer        9434 ms
git                  sleep_on_buffer       12262 ms
git                  sleep_on_buffer       18021 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f4e95>] ext4_evict_inode+0x1e5/0x4c0
[<ffffffff8118bcbf>] evict+0xaf/0x1b0
[<ffffffff8118c543>] iput_final+0xd3/0x160
[<ffffffff8118c609>] iput+0x39/0x50
[<ffffffff81187248>] dentry_iput+0x98/0xe0
[<ffffffff81188ac8>] dput+0x128/0x230
[<ffffffff81182c4a>] sys_renameat+0x33a/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    94944 ms
Event count:                      11
git                  sleep_on_buffer       16110 ms
git                  sleep_on_buffer        6508 ms
git                  sleep_on_buffer       23186 ms
git                  sleep_on_buffer       25228 ms
git-merge            sleep_on_buffer        1672 ms
konqueror            sleep_on_buffer        1411 ms
git                  sleep_on_buffer        1803 ms
git                  sleep_on_buffer       15397 ms
git                  sleep_on_buffer        1276 ms
git                  sleep_on_buffer        1012 ms
git                  sleep_on_buffer        1341 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f9505>] ext4_find_entry+0x325/0x4f0
[<ffffffff811fc3e1>] ext4_unlink+0x41/0x350
[<ffffffff8117daef>] vfs_unlink.part.31+0x7f/0xe0
[<ffffffff8117f9d7>] vfs_unlink+0x37/0x50
[<ffffffff8117fbff>] do_unlinkat+0x20f/0x260
[<ffffffff81182611>] sys_unlink+0x11/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    93658 ms
Event count:                      26
flush-8:0            sleep_on_buffer        1294 ms
flush-8:0            sleep_on_buffer        2856 ms
flush-8:0            sleep_on_buffer        3764 ms
flush-8:0            sleep_on_buffer        5086 ms
flush-8:0            sleep_on_buffer        1203 ms
flush-8:0            sleep_on_buffer        1289 ms
flush-8:0            sleep_on_buffer        1264 ms
flush-8:0            sleep_on_buffer        1252 ms
flush-8:0            sleep_on_buffer        2997 ms
flush-8:0            sleep_on_buffer        2765 ms
flush-8:0            sleep_on_buffer        4235 ms
flush-8:0            sleep_on_buffer        5205 ms
flush-8:0            sleep_on_buffer        6971 ms
flush-8:0            sleep_on_buffer        4155 ms
ps                   wait_on_page_bit_killable   1054 ms
flush-8:0            sleep_on_buffer        3719 ms
flush-8:0            sleep_on_buffer       10283 ms
flush-8:0            sleep_on_buffer        3068 ms
flush-8:0            sleep_on_buffer        2000 ms
flush-8:0            sleep_on_buffer        2264 ms
flush-8:0            sleep_on_buffer        3623 ms
flush-8:0            sleep_on_buffer       12954 ms
flush-8:0            sleep_on_buffer        6579 ms
flush-8:0            sleep_on_buffer        1245 ms
flush-8:0            sleep_on_buffer        1293 ms
flush-8:0            sleep_on_buffer        1240 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff81227556>] ext4_mb_mark_diskspace_used+0x76/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119c5d7>] wb_check_old_data_flush+0x97/0xa0
[<ffffffff8119de49>] wb_do_writeback+0x149/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    92353 ms
Event count:                      11
flush-8:0            sleep_on_buffer        2192 ms
flush-8:0            sleep_on_buffer        2088 ms
flush-8:0            sleep_on_buffer        1460 ms
flush-8:0            sleep_on_buffer        1241 ms
flush-8:0            sleep_on_buffer        1986 ms
flush-8:0            sleep_on_buffer        1331 ms
flush-8:0            sleep_on_buffer        2192 ms
flush-8:0            sleep_on_buffer        3327 ms
flush-8:0            sleep_on_buffer       73408 ms
flush-8:0            sleep_on_buffer        1229 ms
flush-253:0          sleep_on_buffer        1899 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff81227556>] ext4_mb_mark_diskspace_used+0x76/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119de90>] wb_do_writeback+0x190/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    91515 ms
Event count:                       7
flush-8:0            sleep_on_buffer        7128 ms
flush-8:0            sleep_on_buffer       18731 ms
flush-8:0            sleep_on_buffer       12643 ms
flush-8:0            sleep_on_buffer       28149 ms
flush-8:0            sleep_on_buffer        5728 ms
flush-8:0            sleep_on_buffer       18040 ms
git                  wait_on_page_bit       1096 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff8121a8f2>] ext4_ext_get_access.isra.39+0x22/0x30
[<ffffffff8121e658>] ext4_ext_convert_to_initialized+0x408/0x590
[<ffffffff8121ee7b>] ext4_ext_handle_uninitialized_extents+0x2fb/0x3c0
[<ffffffff8121f547>] ext4_ext_map_blocks+0x5d7/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4601>] write_cache_pages_da+0x421/0x4b0
[<ffffffff811f49e5>] ext4_da_writepages+0x355/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119c5d7>] wb_check_old_data_flush+0x97/0xa0
[<ffffffff8119de49>] wb_do_writeback+0x149/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    86251 ms
Event count:                      76
imapd                wait_on_page_bit_killable   1088 ms
imapd                wait_on_page_bit_killable   1092 ms
git                  wait_on_page_bit_killable   1616 ms
git                  wait_on_page_bit_killable   1114 ms
play                 wait_on_page_bit_killable   1019 ms
play                 wait_on_page_bit_killable   1012 ms
play                 wait_on_page_bit_killable   1223 ms
play                 wait_on_page_bit_killable   1223 ms
play                 wait_on_page_bit_killable   1034 ms
play                 wait_on_page_bit_killable   1034 ms
play                 wait_on_page_bit_killable   1096 ms
play                 wait_on_page_bit_killable   1096 ms
play                 wait_on_page_bit_killable   1093 ms
play                 wait_on_page_bit_killable   1093 ms
vim                  wait_on_page_bit_killable   1084 ms
dbus-daemon-lau      wait_on_page_bit_killable   1076 ms
play                 wait_on_page_bit_killable   1097 ms
play                 wait_on_page_bit_killable   1097 ms
git                  wait_on_page_bit_killable   1005 ms
systemd-journal      wait_on_page_bit_killable   1252 ms
systemd-journal      wait_on_page_bit_killable   1158 ms
git                  wait_on_page_bit_killable   1237 ms
git                  wait_on_page_bit_killable   1043 ms
git                  wait_on_page_bit_killable   1068 ms
git                  wait_on_page_bit_killable   1070 ms
git                  wait_on_page_bit_killable   1070 ms
git                  wait_on_page_bit_killable   1097 ms
git                  wait_on_page_bit_killable   1055 ms
git                  wait_on_page_bit_killable   1252 ms
git                  wait_on_page_bit_killable   1187 ms
git                  wait_on_page_bit_killable   1069 ms
git                  wait_on_page_bit_killable   1194 ms
git                  wait_on_page_bit_killable   1035 ms
git                  wait_on_page_bit_killable   1046 ms
git                  wait_on_page_bit_killable   1024 ms
git                  wait_on_page_bit_killable   1124 ms
git                  wait_on_page_bit_killable   1293 ms
git                  wait_on_page_bit_killable   1184 ms
git                  wait_on_page_bit_killable   1269 ms
git                  wait_on_page_bit_killable   1268 ms
git                  wait_on_page_bit_killable   1088 ms
git                  wait_on_page_bit_killable   1093 ms
git                  wait_on_page_bit_killable   1013 ms
git                  wait_on_page_bit_killable   1034 ms
git                  wait_on_page_bit_killable   1018 ms
git                  wait_on_page_bit_killable   1185 ms
git                  wait_on_page_bit_killable   1258 ms
git                  wait_on_page_bit_killable   1006 ms
git                  wait_on_page_bit_killable   1061 ms
git                  wait_on_page_bit_killable   1108 ms
git                  wait_on_page_bit_killable   1006 ms
git                  wait_on_page_bit_killable   1012 ms
git                  wait_on_page_bit_killable   1210 ms
git                  wait_on_page_bit_killable   1239 ms
git                  wait_on_page_bit_killable   1146 ms
git                  wait_on_page_bit_killable   1106 ms
git                  wait_on_page_bit_killable   1063 ms
git                  wait_on_page_bit_killable   1070 ms
git                  wait_on_page_bit_killable   1041 ms
git                  wait_on_page_bit_killable   1052 ms
git                  wait_on_page_bit_killable   1237 ms
git                  wait_on_page_bit_killable   1117 ms
git                  wait_on_page_bit_killable   1086 ms
git                  wait_on_page_bit_killable   1051 ms
git                  wait_on_page_bit_killable   1029 ms
runlevel             wait_on_page_bit_killable   1019 ms
evolution            wait_on_page_bit_killable   1384 ms
evolution            wait_on_page_bit_killable   1144 ms
firefox              wait_on_page_bit_killable   1537 ms
git                  wait_on_page_bit_killable   1017 ms
evolution            wait_on_page_bit_killable   1015 ms
evolution            wait_on_page_bit_killable   1523 ms
ps                   wait_on_page_bit_killable   1394 ms
kio_http             wait_on_page_bit_killable   1010 ms
plugin-containe      wait_on_page_bit_killable   1522 ms
qmmp                 wait_on_page_bit_killable   1170 ms
[<ffffffff811115c8>] wait_on_page_bit_killable+0x78/0x80
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff81111a78>] filemap_fault+0x3d8/0x410
[<ffffffff8113599a>] __do_fault+0x6a/0x530
[<ffffffff811394be>] handle_pte_fault+0xee/0x200
[<ffffffff8113a731>] handle_mm_fault+0x271/0x390
[<ffffffff81597959>] __do_page_fault+0x169/0x520
[<ffffffff81597d19>] do_page_fault+0x9/0x10
[<ffffffff81594488>] page_fault+0x28/0x30
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    78888 ms
Event count:                      10
git                  sleep_on_buffer        1019 ms
git                  sleep_on_buffer        2031 ms
git                  sleep_on_buffer        2109 ms
git                  sleep_on_buffer        5858 ms
git                  sleep_on_buffer       15181 ms
git                  sleep_on_buffer       22771 ms
git                  sleep_on_buffer        2331 ms
git                  sleep_on_buffer        1341 ms
git                  sleep_on_buffer       24648 ms
git                  sleep_on_buffer        1599 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811fb052>] ext4_delete_entry+0x62/0x120
[<ffffffff811fc495>] ext4_unlink+0xf5/0x350
[<ffffffff8117daef>] vfs_unlink.part.31+0x7f/0xe0
[<ffffffff8117f9d7>] vfs_unlink+0x37/0x50
[<ffffffff8117fbff>] do_unlinkat+0x20f/0x260
[<ffffffff81182611>] sys_unlink+0x11/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    77568 ms
Event count:                      12
git                  sleep_on_buffer        2592 ms
git                  sleep_on_buffer        1312 ms
git                  sleep_on_buffer        1974 ms
git                  sleep_on_buffer        2508 ms
git                  sleep_on_buffer        1245 ms
git                  sleep_on_buffer       20990 ms
git                  sleep_on_buffer       14782 ms
git                  sleep_on_buffer        2026 ms
git                  sleep_on_buffer        1880 ms
git                  sleep_on_buffer        2174 ms
git                  sleep_on_buffer       24451 ms
git                  sleep_on_buffer        1634 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811fc633>] ext4_unlink+0x293/0x350
[<ffffffff8117daef>] vfs_unlink.part.31+0x7f/0xe0
[<ffffffff8117f9d7>] vfs_unlink+0x37/0x50
[<ffffffff8117fbff>] do_unlinkat+0x20f/0x260
[<ffffffff81182611>] sys_unlink+0x11/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    73950 ms
Event count:                      21
pool                 wait_on_page_bit       5626 ms
git                  sleep_on_buffer        1077 ms
pool                 wait_on_page_bit       1040 ms
offlineimap          wait_on_page_bit       1083 ms
pool                 wait_on_page_bit       1044 ms
pool                 wait_on_page_bit       7268 ms
pool                 wait_on_page_bit       9900 ms
pool                 wait_on_page_bit       3530 ms
offlineimap          wait_on_page_bit      18212 ms
git                  wait_on_page_bit       1101 ms
git                  wait_on_page_bit       1402 ms
git                  sleep_on_buffer        1037 ms
pool                 wait_on_page_bit       1107 ms
git                  sleep_on_buffer        1106 ms
pool                 wait_on_page_bit      11643 ms
pool                 wait_on_page_bit       1272 ms
evolution            wait_on_page_bit       1471 ms
pool                 wait_on_page_bit       1458 ms
pool                 wait_on_page_bit       1331 ms
git                  sleep_on_buffer        1082 ms
offlineimap          wait_on_page_bit       1160 ms
[<ffffffff8110f0e0>] wait_on_page_bit+0x70/0x80
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff81110c50>] filemap_write_and_wait_range+0x60/0x70
[<ffffffff811ea54a>] ext4_sync_file+0x6a/0x2d0
[<ffffffff811a1758>] do_fsync+0x58/0x80
[<ffffffff811a1abb>] sys_fsync+0xb/0x10
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    70700 ms
Event count:                      27
flush-8:0            sleep_on_buffer        1735 ms
flush-8:0            sleep_on_buffer        1720 ms
flush-8:0            sleep_on_buffer        3099 ms
flush-8:0            sleep_on_buffer        1321 ms
flush-8:0            sleep_on_buffer        3276 ms
flush-8:0            sleep_on_buffer        4215 ms
flush-8:0            sleep_on_buffer        1412 ms
flush-8:0            sleep_on_buffer        1049 ms
flush-8:0            sleep_on_buffer        2320 ms
flush-8:0            sleep_on_buffer        8076 ms
flush-8:0            sleep_on_buffer        2210 ms
flush-8:0            sleep_on_buffer        1204 ms
flush-8:0            sleep_on_buffer        1262 ms
flush-8:0            sleep_on_buffer        1995 ms
flush-8:0            sleep_on_buffer        1675 ms
flush-8:0            sleep_on_buffer        4219 ms
flush-8:0            sleep_on_buffer        4027 ms
flush-8:0            sleep_on_buffer        3452 ms
flush-8:0            sleep_on_buffer        6020 ms
flush-8:0            sleep_on_buffer        1318 ms
flush-8:0            sleep_on_buffer        1065 ms
flush-8:0            sleep_on_buffer        1148 ms
flush-8:0            sleep_on_buffer        1230 ms
flush-8:0            sleep_on_buffer        4479 ms
flush-8:0            sleep_on_buffer        1580 ms
flush-8:0            sleep_on_buffer        4551 ms
git                  sleep_on_buffer        1042 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff8121a97b>] __ext4_ext_dirty.isra.40+0x7b/0x80
[<ffffffff8121d34a>] ext4_ext_insert_extent+0x31a/0x420
[<ffffffff8121f60a>] ext4_ext_map_blocks+0x69a/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119c5d7>] wb_check_old_data_flush+0x97/0xa0
[<ffffffff8119de49>] wb_do_writeback+0x149/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    59807 ms
Event count:                      37
mv                   sleep_on_buffer        1439 ms
mv                   sleep_on_buffer        1490 ms
mv                   sleep_on_buffer        1876 ms
mv                   sleep_on_buffer        1240 ms
mv                   sleep_on_buffer        1897 ms
mv                   sleep_on_buffer        2089 ms
mv                   sleep_on_buffer        1375 ms
mv                   sleep_on_buffer        1386 ms
mv                   sleep_on_buffer        1442 ms
mv                   sleep_on_buffer        1682 ms
mv                   sleep_on_buffer        1188 ms
offlineimap          sleep_on_buffer        2247 ms
mv                   sleep_on_buffer        1262 ms
mv                   sleep_on_buffer        8930 ms
mv                   sleep_on_buffer        1392 ms
mv                   sleep_on_buffer        1536 ms
mv                   sleep_on_buffer        1064 ms
mv                   sleep_on_buffer        1303 ms
mv                   sleep_on_buffer        1487 ms
mv                   sleep_on_buffer        1331 ms
mv                   sleep_on_buffer        1757 ms
mv                   sleep_on_buffer        1069 ms
mv                   sleep_on_buffer        1183 ms
mv                   sleep_on_buffer        1548 ms
mv                   sleep_on_buffer        1090 ms
mv                   sleep_on_buffer        1770 ms
mv                   sleep_on_buffer        1002 ms
mv                   sleep_on_buffer        1199 ms
mv                   sleep_on_buffer        1066 ms
mv                   sleep_on_buffer        1275 ms
mv                   sleep_on_buffer        1198 ms
mv                   sleep_on_buffer        1653 ms
mv                   sleep_on_buffer        1197 ms
mv                   sleep_on_buffer        1275 ms
mv                   sleep_on_buffer        1317 ms
mv                   sleep_on_buffer        1025 ms
mv                   sleep_on_buffer        1527 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811fba26>] ext4_rename+0x276/0x980
[<ffffffff8117d4ed>] vfs_rename_other+0xcd/0x120
[<ffffffff81180126>] vfs_rename+0xb6/0x240
[<ffffffff81182c96>] sys_renameat+0x386/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    59307 ms
Event count:                      15
git                  sleep_on_buffer        3293 ms
git                  sleep_on_buffer        1350 ms
git                  sleep_on_buffer        2132 ms
git                  sleep_on_buffer        1018 ms
git                  sleep_on_buffer       16069 ms
git                  sleep_on_buffer        5478 ms
offlineimap          sleep_on_buffer        1138 ms
imapd                sleep_on_buffer        1927 ms
imapd                sleep_on_buffer        6417 ms
offlineimap          sleep_on_buffer        6241 ms
offlineimap          sleep_on_buffer        1549 ms
rsync                sleep_on_buffer        3776 ms
rsync                sleep_on_buffer        2516 ms
git                  sleep_on_buffer        1025 ms
git                  sleep_on_buffer        5378 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff8121c4ad>] ext4_ext_tree_init+0x2d/0x40
[<ffffffff811ecc06>] __ext4_new_inode+0x1076/0x10c0
[<ffffffff811fac5b>] ext4_create+0xbb/0x190
[<ffffffff81180aa5>] vfs_create+0xb5/0x120
[<ffffffff81180c4e>] lookup_open+0x13e/0x1d0
[<ffffffff81180fe7>] do_last+0x307/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    58651 ms
Event count:                       4
git                  sleep_on_buffer       13070 ms
git                  sleep_on_buffer       18222 ms
git                  sleep_on_buffer       13508 ms
git                  sleep_on_buffer       13851 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811fc898>] ext4_orphan_del+0x1a8/0x1e0
[<ffffffff811f4fbb>] ext4_evict_inode+0x30b/0x4c0
[<ffffffff8118bcbf>] evict+0xaf/0x1b0
[<ffffffff8118c543>] iput_final+0xd3/0x160
[<ffffffff8118c609>] iput+0x39/0x50
[<ffffffff8117fbe1>] do_unlinkat+0x1f1/0x260
[<ffffffff81182611>] sys_unlink+0x11/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    56275 ms
Event count:                      14
git                  sleep_on_buffer        1116 ms
git                  sleep_on_buffer        1347 ms
git                  sleep_on_buffer        1258 ms
git                  sleep_on_buffer        3471 ms
git                  sleep_on_buffer        3348 ms
git                  sleep_on_buffer        1185 ms
git                  sleep_on_buffer        1423 ms
git                  sleep_on_buffer        2662 ms
git                  sleep_on_buffer        8693 ms
git                  sleep_on_buffer        8223 ms
git                  sleep_on_buffer        4792 ms
git                  sleep_on_buffer        2553 ms
git                  sleep_on_buffer        2550 ms
git                  sleep_on_buffer       13654 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f9c6e>] add_dirent_to_buf+0x12e/0x1d0
[<ffffffff811f9e38>] ext4_dx_add_entry+0x128/0x590
[<ffffffff811fa925>] ext4_add_entry+0x265/0x2d0
[<ffffffff811fa9b6>] ext4_add_nondir+0x26/0x80
[<ffffffff811fac9f>] ext4_create+0xff/0x190
[<ffffffff81180aa5>] vfs_create+0xb5/0x120
[<ffffffff81180c4e>] lookup_open+0x13e/0x1d0
[<ffffffff81180fe7>] do_last+0x307/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    55128 ms
Event count:                      12
dconf-service        sleep_on_buffer        1918 ms
pool                 sleep_on_buffer       10558 ms
pool                 sleep_on_buffer        1957 ms
pool                 sleep_on_buffer        1903 ms
pool                 sleep_on_buffer        1187 ms
offlineimap          sleep_on_buffer        2077 ms
URL Classifier       sleep_on_buffer        3924 ms
offlineimap          sleep_on_buffer        2573 ms
StreamT~ns #343      sleep_on_buffer       11686 ms
DOM Worker           sleep_on_buffer        2215 ms
pool                 sleep_on_buffer        4513 ms
offlineimap          sleep_on_buffer       10617 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff81227556>] ext4_mb_mark_diskspace_used+0x76/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff81110c3a>] filemap_write_and_wait_range+0x4a/0x70
[<ffffffff811ea54a>] ext4_sync_file+0x6a/0x2d0
[<ffffffff811a1758>] do_fsync+0x58/0x80
[<ffffffff811a1abb>] sys_fsync+0xb/0x10
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    53464 ms
Event count:                       4
play                 sleep_on_buffer        6853 ms
play                 sleep_on_buffer       15340 ms
play                 sleep_on_buffer       24793 ms
play                 sleep_on_buffer        6478 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f51b1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff811f313c>] ext4_setattr+0x36c/0x640
[<ffffffff8118d132>] notify_change+0x1f2/0x3c0
[<ffffffff811712bd>] chown_common+0xbd/0xd0
[<ffffffff81172417>] sys_fchown+0xb7/0xd0
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    51867 ms
Event count:                       3
flush-8:0            sleep_on_buffer       42842 ms
flush-8:0            sleep_on_buffer        2026 ms
flush-8:0            sleep_on_buffer        6999 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812275bf>] ext4_mb_mark_diskspace_used+0xdf/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119de90>] wb_do_writeback+0x190/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    49716 ms
Event count:                       8
pool                 sleep_on_buffer        4642 ms
offlineimap          sleep_on_buffer        4279 ms
evolution            sleep_on_buffer        5182 ms
rsync                sleep_on_buffer        5599 ms
git                  sleep_on_buffer        8338 ms
StreamT~ns #343      sleep_on_buffer        2216 ms
git                  sleep_on_buffer        2844 ms
git                  sleep_on_buffer       16616 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812291ba>] ext4_free_blocks+0x36a/0xc10
[<ffffffff8121bd16>] ext4_remove_blocks+0x256/0x2d0
[<ffffffff8121bf95>] ext4_ext_rm_leaf+0x205/0x520
[<ffffffff8121dcbc>] ext4_ext_remove_space+0x4dc/0x750
[<ffffffff8121fb0b>] ext4_ext_truncate+0x19b/0x1e0
[<ffffffff811ef535>] ext4_truncate.part.59+0xd5/0xf0
[<ffffffff811f0614>] ext4_truncate+0x34/0x90
[<ffffffff811f513e>] ext4_evict_inode+0x48e/0x4c0
[<ffffffff8118bcbf>] evict+0xaf/0x1b0
[<ffffffff8118c543>] iput_final+0xd3/0x160
[<ffffffff8118c609>] iput+0x39/0x50
[<ffffffff81187248>] dentry_iput+0x98/0xe0
[<ffffffff81188ac8>] dput+0x128/0x230
[<ffffffff81182c4a>] sys_renameat+0x33a/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    42396 ms
Event count:                       5
git                  sleep_on_buffer        1115 ms
git                  sleep_on_buffer       15407 ms
git                  sleep_on_buffer        9114 ms
git                  sleep_on_buffer        1076 ms
git                  sleep_on_buffer       15684 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f4e95>] ext4_evict_inode+0x1e5/0x4c0
[<ffffffff8118bcbf>] evict+0xaf/0x1b0
[<ffffffff8118c543>] iput_final+0xd3/0x160
[<ffffffff8118c609>] iput+0x39/0x50
[<ffffffff8117fbe1>] do_unlinkat+0x1f1/0x260
[<ffffffff81182611>] sys_unlink+0x11/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    41836 ms
Event count:                      29
git                  sleep_on_buffer        1326 ms
git                  sleep_on_buffer        1017 ms
git                  sleep_on_buffer        1077 ms
git                  sleep_on_buffer        2618 ms
git                  sleep_on_buffer        1058 ms
git                  sleep_on_buffer        1321 ms
git                  sleep_on_buffer        1199 ms
git                  sleep_on_buffer        1067 ms
git                  sleep_on_buffer        1227 ms
git                  sleep_on_buffer        1101 ms
git                  sleep_on_buffer        1105 ms
git                  sleep_on_buffer        1048 ms
git                  sleep_on_buffer        1254 ms
git                  sleep_on_buffer        1866 ms
git                  sleep_on_buffer        1768 ms
git                  sleep_on_buffer        1613 ms
git                  sleep_on_buffer        1690 ms
git                  sleep_on_buffer        1189 ms
git                  sleep_on_buffer        1063 ms
git                  sleep_on_buffer        1022 ms
git                  sleep_on_buffer        2039 ms
git                  sleep_on_buffer        1898 ms
git                  sleep_on_buffer        1422 ms
git                  sleep_on_buffer        1678 ms
git                  sleep_on_buffer        1285 ms
git                  sleep_on_buffer        2058 ms
git                  sleep_on_buffer        1336 ms
git                  sleep_on_buffer        1364 ms
git                  sleep_on_buffer        2127 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f9dd2>] ext4_dx_add_entry+0xc2/0x590
[<ffffffff811fa925>] ext4_add_entry+0x265/0x2d0
[<ffffffff811fae2c>] ext4_link+0xfc/0x1b0
[<ffffffff81181e33>] vfs_link+0x113/0x1c0
[<ffffffff811828a4>] sys_linkat+0x174/0x1c0
[<ffffffff81182909>] sys_link+0x19/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    41493 ms
Event count:                       2
flush-8:0            sleep_on_buffer       28180 ms
flush-8:0            sleep_on_buffer       13313 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff8121a8f2>] ext4_ext_get_access.isra.39+0x22/0x30
[<ffffffff8121d506>] ext4_split_extent_at+0xb6/0x390
[<ffffffff8121e038>] ext4_split_extent.isra.47+0x108/0x130
[<ffffffff8121e3ae>] ext4_ext_convert_to_initialized+0x15e/0x590
[<ffffffff8121ee7b>] ext4_ext_handle_uninitialized_extents+0x2fb/0x3c0
[<ffffffff8121f547>] ext4_ext_map_blocks+0x5d7/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4601>] write_cache_pages_da+0x421/0x4b0
[<ffffffff811f49e5>] ext4_da_writepages+0x355/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119c5d7>] wb_check_old_data_flush+0x97/0xa0
[<ffffffff8119de49>] wb_do_writeback+0x149/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    40644 ms
Event count:                      30
flush-8:16           get_request            1797 ms
flush-8:16           get_request            1334 ms
flush-8:16           get_request            1288 ms
flush-8:16           get_request            1741 ms
flush-8:16           get_request            2518 ms
flush-8:16           get_request            1752 ms
flush-8:16           get_request            1069 ms
flush-8:16           get_request            1487 ms
flush-8:16           get_request            1000 ms
flush-8:16           get_request            1270 ms
flush-8:16           get_request            1223 ms
flush-8:16           get_request            1384 ms
flush-8:16           get_request            1082 ms
flush-8:16           get_request            1195 ms
flush-8:16           get_request            1163 ms
flush-8:16           get_request            1605 ms
flush-8:16           get_request            1110 ms
flush-8:16           get_request            1249 ms
flush-8:16           get_request            2064 ms
flush-8:16           get_request            1073 ms
flush-8:16           get_request            1238 ms
flush-8:16           get_request            1215 ms
flush-8:16           get_request            1075 ms
flush-8:16           get_request            1532 ms
flush-8:16           get_request            1586 ms
flush-8:16           get_request            1165 ms
flush-8:16           get_request            1129 ms
flush-8:16           get_request            1098 ms
flush-8:16           get_request            1099 ms
flush-8:16           get_request            1103 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff812a4b1f>] generic_make_request.part.59+0x6f/0xa0
[<ffffffff812a5050>] generic_make_request+0x60/0x70
[<ffffffff812a50c7>] submit_bio+0x67/0x130
[<ffffffff811f6014>] ext4_io_submit+0x24/0x60
[<ffffffff811f2265>] ext4_writepage+0x135/0x220
[<ffffffff81119292>] __writepage+0x12/0x40
[<ffffffff81119a96>] write_cache_pages+0x206/0x460
[<ffffffff81119d35>] generic_writepages+0x45/0x70
[<ffffffff8111ac15>] do_writepages+0x25/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119c5d7>] wb_check_old_data_flush+0x97/0xa0
[<ffffffff8119de49>] wb_do_writeback+0x149/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    39571 ms
Event count:                       8
kio_http             sleep_on_buffer       23133 ms
vi                   sleep_on_buffer        4288 ms
git                  sleep_on_buffer        1410 ms
mutt                 sleep_on_buffer        2302 ms
mutt                 sleep_on_buffer        2299 ms
Cache I/O            sleep_on_buffer        1283 ms
gpg                  sleep_on_buffer        3265 ms
git                  sleep_on_buffer        1591 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811fb67b>] ext4_orphan_add+0xbb/0x1f0
[<ffffffff811fc6cb>] ext4_unlink+0x32b/0x350
[<ffffffff8117daef>] vfs_unlink.part.31+0x7f/0xe0
[<ffffffff8117f9d7>] vfs_unlink+0x37/0x50
[<ffffffff8117fbff>] do_unlinkat+0x20f/0x260
[<ffffffff81182611>] sys_unlink+0x11/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    38769 ms
Event count:                       6
rsync                sleep_on_buffer        3513 ms
rsync                sleep_on_buffer        3570 ms
git                  sleep_on_buffer       26211 ms
git                  sleep_on_buffer        1657 ms
git                  sleep_on_buffer        2184 ms
git                  sleep_on_buffer        1634 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811fbd0c>] ext4_rename+0x55c/0x980
[<ffffffff8117d4ed>] vfs_rename_other+0xcd/0x120
[<ffffffff81180126>] vfs_rename+0xb6/0x240
[<ffffffff81182c96>] sys_renameat+0x386/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    34812 ms
Event count:                       4
acroread             wait_on_page_bit      11968 ms
acroread             wait_on_page_bit       7121 ms
acroread             wait_on_page_bit       3126 ms
acroread             wait_on_page_bit      12597 ms
[<ffffffff8110f0e0>] wait_on_page_bit+0x70/0x80
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff8111d620>] truncate_inode_pages+0x10/0x20
[<ffffffff8111d677>] truncate_pagecache+0x47/0x70
[<ffffffff811f2f4d>] ext4_setattr+0x17d/0x640
[<ffffffff8118d132>] notify_change+0x1f2/0x3c0
[<ffffffff811715d9>] do_truncate+0x59/0xa0
[<ffffffff8117d186>] handle_truncate+0x66/0xa0
[<ffffffff81181306>] do_last+0x626/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff811c2996>] compat_sys_open+0x16/0x20
[<ffffffff8159d81c>] sysenter_dispatch+0x7/0x21
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    34740 ms
Event count:                       4
systemd-journal      sleep_on_buffer        1126 ms
systemd-journal      sleep_on_buffer       29206 ms
systemd-journal      sleep_on_buffer        1787 ms
systemd-journal      sleep_on_buffer        2621 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff81227556>] ext4_mb_mark_diskspace_used+0x76/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff8121fd1f>] ext4_fallocate+0x1cf/0x420
[<ffffffff81171b32>] do_fallocate+0x112/0x190
[<ffffffff81171c02>] sys_fallocate+0x52/0x90
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    33158 ms
Event count:                      32
mv                   sleep_on_buffer        1043 ms
git                  wait_on_page_bit       1150 ms
cc1                  sleep_on_buffer        1062 ms
git                  wait_on_page_bit       1055 ms
flush-8:16           get_request            1091 ms
mktexlsr             sleep_on_buffer        1152 ms
imapd                sleep_on_buffer        1004 ms
flush-8:16           get_request            1087 ms
flush-8:16           get_request            1104 ms
sleep                wait_on_page_bit_killable   1142 ms
git                  wait_on_page_bit_killable   1108 ms
git                  wait_on_page_bit_killable   1007 ms
git                  wait_on_page_bit_killable   1074 ms
git                  wait_on_page_bit_killable   1050 ms
nm-dhcp-client.      wait_on_page_bit_killable   1069 ms
uname                wait_on_page_bit_killable   1086 ms
sed                  wait_on_page_bit_killable   1101 ms
git                  wait_on_page_bit_killable   1057 ms
grep                 wait_on_page_bit_killable   1045 ms
imapd                sleep_on_buffer        1032 ms
git                  sleep_on_buffer        1015 ms
folder-markup.s      sleep_on_buffer        1048 ms
git                  wait_on_page_bit       1086 ms
git                  sleep_on_buffer        1041 ms
git                  sleep_on_buffer        1048 ms
git                  wait_on_page_bit       1063 ms
git                  sleep_on_buffer        1083 ms
series2git           sleep_on_buffer        1073 ms
git                  wait_on_page_bit       1093 ms
git                  wait_on_page_bit       1071 ms
git                  wait_on_page_bit       1018 ms

Time stalled in this event:    32109 ms
Event count:                      23
flush-8:16           get_request            1475 ms
flush-8:16           get_request            1431 ms
flush-8:16           get_request            1027 ms
flush-8:16           get_request            2019 ms
flush-8:16           get_request            1021 ms
flush-8:16           get_request            1013 ms
flush-8:16           get_request            1093 ms
flush-8:16           get_request            1178 ms
flush-8:16           get_request            1051 ms
flush-8:16           get_request            1296 ms
flush-8:16           get_request            1525 ms
flush-8:16           get_request            1083 ms
flush-8:16           get_request            1654 ms
flush-8:16           get_request            1583 ms
flush-8:16           get_request            1405 ms
flush-8:16           get_request            2004 ms
flush-8:16           get_request            2203 ms
flush-8:16           get_request            1980 ms
flush-8:16           get_request            1211 ms
flush-8:16           get_request            1116 ms
flush-8:16           get_request            1071 ms
flush-8:16           get_request            1255 ms
flush-8:16           get_request            1415 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff812a4b1f>] generic_make_request.part.59+0x6f/0xa0
[<ffffffff812a5050>] generic_make_request+0x60/0x70
[<ffffffff812a50c7>] submit_bio+0x67/0x130
[<ffffffff811a30fb>] submit_bh+0xfb/0x130
[<ffffffff811a6058>] __block_write_full_page+0x1c8/0x340
[<ffffffff811a62a3>] block_write_full_page_endio+0xd3/0x110
[<ffffffff811a62f0>] block_write_full_page+0x10/0x20
[<ffffffff811aa0c3>] blkdev_writepage+0x13/0x20
[<ffffffff81119292>] __writepage+0x12/0x40
[<ffffffff81119a96>] write_cache_pages+0x206/0x460
[<ffffffff81119d35>] generic_writepages+0x45/0x70
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119c5d7>] wb_check_old_data_flush+0x97/0xa0
[<ffffffff8119de49>] wb_do_writeback+0x149/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    31440 ms
Event count:                       6
pool                 sleep_on_buffer       13120 ms
scp                  sleep_on_buffer        5297 ms
scp                  sleep_on_buffer        3769 ms
scp                  sleep_on_buffer        2870 ms
cp                   sleep_on_buffer        5153 ms
git                  sleep_on_buffer        1231 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811ebe24>] __ext4_new_inode+0x294/0x10c0
[<ffffffff811fb456>] ext4_mkdir+0x146/0x2b0
[<ffffffff81181b42>] vfs_mkdir+0xa2/0x120
[<ffffffff81182533>] sys_mkdirat+0xa3/0xf0
[<ffffffff81182594>] sys_mkdir+0x14/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    30241 ms
Event count:                       4
git                  sleep_on_buffer       10480 ms
evince               sleep_on_buffer        1309 ms
git                  sleep_on_buffer       17269 ms
git                  sleep_on_buffer        1183 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811eb7cf>] ext4_free_inode+0x22f/0x5f0
[<ffffffff811f4fe1>] ext4_evict_inode+0x331/0x4c0
[<ffffffff8118bcbf>] evict+0xaf/0x1b0
[<ffffffff8118c543>] iput_final+0xd3/0x160
[<ffffffff8118c609>] iput+0x39/0x50
[<ffffffff81187248>] dentry_iput+0x98/0xe0
[<ffffffff81188ac8>] dput+0x128/0x230
[<ffffffff81182c4a>] sys_renameat+0x33a/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    28375 ms
Event count:                       4
flush-8:0            sleep_on_buffer        7042 ms
flush-8:0            sleep_on_buffer        1900 ms
flush-8:0            sleep_on_buffer        1746 ms
flush-8:0            sleep_on_buffer       17687 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff8121a8f2>] ext4_ext_get_access.isra.39+0x22/0x30
[<ffffffff8121e658>] ext4_ext_convert_to_initialized+0x408/0x590
[<ffffffff8121ee7b>] ext4_ext_handle_uninitialized_extents+0x2fb/0x3c0
[<ffffffff8121f547>] ext4_ext_map_blocks+0x5d7/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4601>] write_cache_pages_da+0x421/0x4b0
[<ffffffff811f49e5>] ext4_da_writepages+0x355/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119de90>] wb_do_writeback+0x190/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    26600 ms
Event count:                       4
systemd-journal      sleep_on_buffer        2463 ms
systemd-journal      sleep_on_buffer        2988 ms
systemd-journal      sleep_on_buffer       19520 ms
systemd-journal      sleep_on_buffer        1629 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811fb67b>] ext4_orphan_add+0xbb/0x1f0
[<ffffffff8121f9e1>] ext4_ext_truncate+0x71/0x1e0
[<ffffffff811ef535>] ext4_truncate.part.59+0xd5/0xf0
[<ffffffff811f0614>] ext4_truncate+0x34/0x90
[<ffffffff811f2f5d>] ext4_setattr+0x18d/0x640
[<ffffffff8118d132>] notify_change+0x1f2/0x3c0
[<ffffffff811715d9>] do_truncate+0x59/0xa0
[<ffffffff81171979>] do_sys_ftruncate.constprop.14+0x109/0x170
[<ffffffff81171a09>] sys_ftruncate+0x9/0x10
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    25557 ms
Event count:                       2
flush-253:0          sleep_on_buffer        2782 ms
flush-253:0          sleep_on_buffer       22775 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff81227556>] ext4_mb_mark_diskspace_used+0x76/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119ddb2>] wb_do_writeback+0xb2/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    25493 ms
Event count:                       5
git                  sleep_on_buffer       15264 ms
git                  sleep_on_buffer        2091 ms
git                  sleep_on_buffer        2507 ms
git                  sleep_on_buffer        1218 ms
git                  sleep_on_buffer        4413 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811eb7cf>] ext4_free_inode+0x22f/0x5f0
[<ffffffff811f4fe1>] ext4_evict_inode+0x331/0x4c0
[<ffffffff8118bcbf>] evict+0xaf/0x1b0
[<ffffffff8118c543>] iput_final+0xd3/0x160
[<ffffffff8118c609>] iput+0x39/0x50
[<ffffffff8117fbe1>] do_unlinkat+0x1f1/0x260
[<ffffffff81182611>] sys_unlink+0x11/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    25420 ms
Event count:                       8
Cache I/O            sleep_on_buffer        8766 ms
pool                 sleep_on_buffer        1851 ms
rsync                sleep_on_buffer        2738 ms
imapd                sleep_on_buffer        1697 ms
evolution            sleep_on_buffer        2829 ms
pool                 sleep_on_buffer        2854 ms
firefox              sleep_on_buffer        2326 ms
imapd                sleep_on_buffer        2359 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811ec0e8>] __ext4_new_inode+0x558/0x10c0
[<ffffffff811fac5b>] ext4_create+0xbb/0x190
[<ffffffff81180aa5>] vfs_create+0xb5/0x120
[<ffffffff81180c4e>] lookup_open+0x13e/0x1d0
[<ffffffff81180fe7>] do_last+0x307/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    24833 ms
Event count:                       9
kswapd0              wait_on_page_bit       2147 ms
kswapd0              wait_on_page_bit       1483 ms
kswapd0              wait_on_page_bit       1393 ms
kswapd0              wait_on_page_bit       1844 ms
kswapd0              wait_on_page_bit       1920 ms
kswapd0              wait_on_page_bit       3606 ms
kswapd0              wait_on_page_bit       7155 ms
kswapd0              wait_on_page_bit       1189 ms
kswapd0              wait_on_page_bit       4096 ms
[<ffffffff8110f0e0>] wait_on_page_bit+0x70/0x80
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811228cf>] shrink_inactive_list+0x15f/0x4a0
[<ffffffff811230cc>] shrink_lruvec+0x13c/0x260
[<ffffffff81123256>] shrink_zone+0x66/0x180
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff8112451b>] balance_pgdat+0x33b/0x4b0
[<ffffffff811247a6>] kswapd+0x116/0x230
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    23799 ms
Event count:                      19
jbd2/sdb1-8          wait_on_page_bit       1077 ms
jbd2/sdb1-8          wait_on_page_bit       1126 ms
jbd2/sdb1-8          wait_on_page_bit       1197 ms
jbd2/sdb1-8          wait_on_page_bit       1101 ms
jbd2/sdb1-8          wait_on_page_bit       1160 ms
jbd2/sdb1-8          wait_on_page_bit       1594 ms
jbd2/sdb1-8          wait_on_page_bit       1364 ms
jbd2/sdb1-8          wait_on_page_bit       1094 ms
jbd2/sdb1-8          wait_on_page_bit       1141 ms
jbd2/sdb1-8          wait_on_page_bit       1309 ms
jbd2/sdb1-8          wait_on_page_bit       1325 ms
jbd2/sdb1-8          wait_on_page_bit       1415 ms
jbd2/sdb1-8          wait_on_page_bit       1331 ms
jbd2/sdb1-8          wait_on_page_bit       1372 ms
jbd2/sdb1-8          wait_on_page_bit       1187 ms
jbd2/sdb1-8          wait_on_page_bit       1472 ms
jbd2/sdb1-8          wait_on_page_bit       1192 ms
jbd2/sdb1-8          wait_on_page_bit       1080 ms
jbd2/sdb1-8          wait_on_page_bit       1262 ms
[<ffffffff8110f0e0>] wait_on_page_bit+0x70/0x80
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff8110f2a3>] filemap_fdatawait+0x23/0x30
[<ffffffff8123a78c>] journal_finish_inode_data_buffers+0x6c/0x170
[<ffffffff8123b376>] jbd2_journal_commit_transaction+0x706/0x13c0
[<ffffffff81240513>] kjournald2+0xb3/0x240
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    22392 ms
Event count:                       2
rsync                sleep_on_buffer        3595 ms
git                  sleep_on_buffer       18797 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff81227556>] ext4_mb_mark_diskspace_used+0x76/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4185>] mpage_add_bh_to_extent+0x45/0xa0
[<ffffffff811f4505>] write_cache_pages_da+0x325/0x4b0
[<ffffffff811f49e5>] ext4_da_writepages+0x355/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff811114b7>] filemap_flush+0x17/0x20
[<ffffffff811f0354>] ext4_alloc_da_blocks+0x44/0xa0
[<ffffffff811fb960>] ext4_rename+0x1b0/0x980
[<ffffffff8117d4ed>] vfs_rename_other+0xcd/0x120
[<ffffffff81180126>] vfs_rename+0xb6/0x240
[<ffffffff81182c96>] sys_renameat+0x386/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    21612 ms
Event count:                       3
flush-8:0            sleep_on_buffer       13971 ms
flush-8:0            sleep_on_buffer        3795 ms
flush-8:0            sleep_on_buffer        3846 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f51b1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff811efadd>] ext4_da_update_reserve_space+0x1cd/0x280
[<ffffffff8121f88a>] ext4_ext_map_blocks+0x91a/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119c5d7>] wb_check_old_data_flush+0x97/0xa0
[<ffffffff8119de49>] wb_do_writeback+0x149/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    21313 ms
Event count:                       6
git                  sleep_on_buffer        1261 ms
git                  sleep_on_buffer        2135 ms
systemd-journal      sleep_on_buffer       13451 ms
git                  sleep_on_buffer        1203 ms
git                  sleep_on_buffer        1180 ms
git                  sleep_on_buffer        2083 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f51b1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff8118b789>] update_time+0x79/0xc0
[<ffffffff8118ba31>] touch_atime+0x161/0x170
[<ffffffff811e99fd>] ext4_file_mmap+0x3d/0x50
[<ffffffff81140175>] mmap_region+0x325/0x590
[<ffffffff811406f8>] do_mmap_pgoff+0x318/0x440
[<ffffffff8112ba05>] vm_mmap_pgoff+0xa5/0xd0
[<ffffffff8113ee84>] sys_mmap_pgoff+0xa4/0x180
[<ffffffff81006b8d>] sys_mmap+0x1d/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    19298 ms
Event count:                       3
flush-8:0            sleep_on_buffer       14371 ms
flush-8:0            sleep_on_buffer        1545 ms
flush-8:0            sleep_on_buffer        3382 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff8121a8f2>] ext4_ext_get_access.isra.39+0x22/0x30
[<ffffffff8121d24c>] ext4_ext_insert_extent+0x21c/0x420
[<ffffffff8121f60a>] ext4_ext_map_blocks+0x69a/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119de90>] wb_do_writeback+0x190/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    19044 ms
Event count:                       2
akregator            sleep_on_buffer       12495 ms
imapd                sleep_on_buffer        6549 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811fb67b>] ext4_orphan_add+0xbb/0x1f0
[<ffffffff8121f9e1>] ext4_ext_truncate+0x71/0x1e0
[<ffffffff811ef535>] ext4_truncate.part.59+0xd5/0xf0
[<ffffffff811f0614>] ext4_truncate+0x34/0x90
[<ffffffff811f2f5d>] ext4_setattr+0x18d/0x640
[<ffffffff8118d132>] notify_change+0x1f2/0x3c0
[<ffffffff811715d9>] do_truncate+0x59/0xa0
[<ffffffff8117d186>] handle_truncate+0x66/0xa0
[<ffffffff81181306>] do_last+0x626/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    18957 ms
Event count:                       5
flush-8:0            sleep_on_buffer        2120 ms
flush-8:0            sleep_on_buffer        1668 ms
flush-8:0            sleep_on_buffer        2679 ms
flush-8:0            sleep_on_buffer        4561 ms
flush-8:0            sleep_on_buffer        7929 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff8121a97b>] __ext4_ext_dirty.isra.40+0x7b/0x80
[<ffffffff8121d34a>] ext4_ext_insert_extent+0x31a/0x420
[<ffffffff8121f60a>] ext4_ext_map_blocks+0x69a/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119de90>] wb_do_writeback+0x190/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    18341 ms
Event count:                       6
imapd                sleep_on_buffer        5018 ms
imapd                sleep_on_buffer        1541 ms
acroread             sleep_on_buffer        5963 ms
git                  sleep_on_buffer        3274 ms
git                  sleep_on_buffer        1387 ms
git                  sleep_on_buffer        1158 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff81227556>] ext4_mb_mark_diskspace_used+0x76/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff811114b7>] filemap_flush+0x17/0x20
[<ffffffff811f0354>] ext4_alloc_da_blocks+0x44/0xa0
[<ffffffff811ea201>] ext4_release_file+0x61/0xd0
[<ffffffff811742a0>] __fput+0xb0/0x240
[<ffffffff81174439>] ____fput+0x9/0x10
[<ffffffff81065dc7>] task_work_run+0x97/0xd0
[<ffffffff81002cbc>] do_notify_resume+0x9c/0xb0
[<ffffffff8159c46a>] int_signal+0x12/0x17
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    18310 ms
Event count:                      17
cp                   sleep_on_buffer        1061 ms
cp                   sleep_on_buffer        1032 ms
cp                   sleep_on_buffer        1072 ms
cp                   sleep_on_buffer        1039 ms
cp                   sleep_on_buffer        1035 ms
cp                   sleep_on_buffer        1167 ms
cp                   sleep_on_buffer        1029 ms
cp                   sleep_on_buffer        1108 ms
cp                   sleep_on_buffer        1009 ms
cp                   sleep_on_buffer        1113 ms
cp                   sleep_on_buffer        1113 ms
cp                   sleep_on_buffer        1029 ms
free                 wait_on_page_bit_killable   1067 ms
imapd                sleep_on_buffer        1103 ms
cat                  sleep_on_buffer        1180 ms
imapd                sleep_on_buffer        1005 ms
git                  sleep_on_buffer        1148 ms
[<ffffffff8110ef12>] __lock_page_killable+0x62/0x70
[<ffffffff81110507>] do_generic_file_read.constprop.35+0x287/0x440
[<ffffffff81111359>] generic_file_aio_read+0xd9/0x220
[<ffffffff81172b53>] do_sync_read+0xa3/0xe0
[<ffffffff8117327b>] vfs_read+0xab/0x170
[<ffffffff8117338d>] sys_read+0x4d/0x90
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    18275 ms
Event count:                       2
systemd-journal      sleep_on_buffer        1594 ms
systemd-journal      sleep_on_buffer       16681 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f51b1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff81228bbd>] ext4_mb_new_blocks+0x1fd/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff8121fd1f>] ext4_fallocate+0x1cf/0x420
[<ffffffff81171b32>] do_fallocate+0x112/0x190
[<ffffffff81171c02>] sys_fallocate+0x52/0x90
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    17970 ms
Event count:                       2
pool                 sleep_on_buffer       12739 ms
pool                 sleep_on_buffer        5231 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f9bc4>] add_dirent_to_buf+0x84/0x1d0
[<ffffffff811fa7e4>] ext4_add_entry+0x124/0x2d0
[<ffffffff811fa9b6>] ext4_add_nondir+0x26/0x80
[<ffffffff811fac9f>] ext4_create+0xff/0x190
[<ffffffff81180aa5>] vfs_create+0xb5/0x120
[<ffffffff81180c4e>] lookup_open+0x13e/0x1d0
[<ffffffff81180fe7>] do_last+0x307/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    17925 ms
Event count:                       1
git                  sleep_on_buffer       17925 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff81227556>] ext4_mb_mark_diskspace_used+0x76/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f003b>] ext4_getblk+0x5b/0x1f0
[<ffffffff811f01e1>] ext4_bread+0x11/0x80
[<ffffffff811f758d>] ext4_append+0x5d/0x120
[<ffffffff811fb243>] ext4_init_new_dir+0x83/0x150
[<ffffffff811fb48d>] ext4_mkdir+0x17d/0x2b0
[<ffffffff81181b42>] vfs_mkdir+0xa2/0x120
[<ffffffff81182533>] sys_mkdirat+0xa3/0xf0
[<ffffffff81182594>] sys_mkdir+0x14/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    17421 ms
Event count:                       1
git                  sleep_on_buffer       17421 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f9c6e>] add_dirent_to_buf+0x12e/0x1d0
[<ffffffff811fa7e4>] ext4_add_entry+0x124/0x2d0
[<ffffffff811fb4bd>] ext4_mkdir+0x1ad/0x2b0
[<ffffffff81181b42>] vfs_mkdir+0xa2/0x120
[<ffffffff81182533>] sys_mkdirat+0xa3/0xf0
[<ffffffff81182594>] sys_mkdir+0x14/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    17385 ms
Event count:                       7
git                  sleep_on_buffer        1409 ms
git                  sleep_on_buffer        1128 ms
git                  sleep_on_buffer        6323 ms
rsync                sleep_on_buffer        4503 ms
git                  sleep_on_buffer        1204 ms
mv                   sleep_on_buffer        1190 ms
git                  sleep_on_buffer        1628 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff81227556>] ext4_mb_mark_diskspace_used+0x76/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff811114b7>] filemap_flush+0x17/0x20
[<ffffffff811f0354>] ext4_alloc_da_blocks+0x44/0xa0
[<ffffffff811fb960>] ext4_rename+0x1b0/0x980
[<ffffffff8117d4ed>] vfs_rename_other+0xcd/0x120
[<ffffffff81180126>] vfs_rename+0xb6/0x240
[<ffffffff81182c96>] sys_renameat+0x386/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    13983 ms
Event count:                       3
patch                sleep_on_buffer        1511 ms
cp                   sleep_on_buffer        2096 ms
git                  sleep_on_buffer       10376 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811ec0e8>] __ext4_new_inode+0x558/0x10c0
[<ffffffff811fb456>] ext4_mkdir+0x146/0x2b0
[<ffffffff81181b42>] vfs_mkdir+0xa2/0x120
[<ffffffff81182533>] sys_mkdirat+0xa3/0xf0
[<ffffffff81182594>] sys_mkdir+0x14/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    13603 ms
Event count:                       4
git                  sleep_on_buffer        2160 ms
gen-report.sh        sleep_on_buffer        4730 ms
evolution            sleep_on_buffer        4697 ms
git                  sleep_on_buffer        2016 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811fb6cf>] ext4_orphan_add+0x10f/0x1f0
[<ffffffff811f31a4>] ext4_setattr+0x3d4/0x640
[<ffffffff8118d132>] notify_change+0x1f2/0x3c0
[<ffffffff811715d9>] do_truncate+0x59/0xa0
[<ffffffff8117d186>] handle_truncate+0x66/0xa0
[<ffffffff81181306>] do_last+0x626/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    13264 ms
Event count:                       8
ls                   sleep_on_buffer        1116 ms
ls                   sleep_on_buffer        1756 ms
ls                   sleep_on_buffer        1901 ms
ls                   sleep_on_buffer        2033 ms
ls                   sleep_on_buffer        1373 ms
ls                   sleep_on_buffer        3046 ms
offlineimap          sleep_on_buffer        1011 ms
imapd                sleep_on_buffer        1028 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811eefee>] __ext4_get_inode_loc+0x1be/0x3f0
[<ffffffff811f0d2e>] ext4_iget+0x7e/0x940
[<ffffffff811f9796>] ext4_lookup.part.31+0xc6/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff8117ca63>] __lookup_hash+0x33/0x40
[<ffffffff8158464f>] lookup_slow+0x40/0xa4
[<ffffffff8117efb2>] path_lookupat+0x222/0x780
[<ffffffff8117f53f>] filename_lookup+0x2f/0xc0
[<ffffffff81182074>] user_path_at_empty+0x54/0xa0
[<ffffffff811820cc>] user_path_at+0xc/0x10
[<ffffffff81177b39>] vfs_fstatat+0x49/0xa0
[<ffffffff81177bc6>] vfs_stat+0x16/0x20
[<ffffffff81177ce5>] sys_newstat+0x15/0x30
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    12710 ms
Event count:                       6
git                  sleep_on_buffer        1364 ms
git                  sleep_on_buffer        1612 ms
git                  sleep_on_buffer        4321 ms
git                  sleep_on_buffer        2185 ms
git                  sleep_on_buffer        2126 ms
git                  sleep_on_buffer        1102 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811e7818>] ext4_wait_block_bitmap+0xb8/0xc0
[<ffffffff8122462e>] ext4_mb_init_cache+0x1ce/0x730
[<ffffffff8122509a>] ext4_mb_load_buddy+0x26a/0x350
[<ffffffff8122676b>] ext4_mb_find_by_goal+0x9b/0x2d0
[<ffffffff81227109>] ext4_mb_regular_allocator+0x59/0x430
[<ffffffff81228db6>] ext4_mb_new_blocks+0x3f6/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4185>] mpage_add_bh_to_extent+0x45/0xa0
[<ffffffff811f4505>] write_cache_pages_da+0x325/0x4b0
[<ffffffff811f49e5>] ext4_da_writepages+0x355/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff81110c3a>] filemap_write_and_wait_range+0x4a/0x70
[<ffffffff811ea54a>] ext4_sync_file+0x6a/0x2d0
[<ffffffff811a1758>] do_fsync+0x58/0x80
[<ffffffff811a1abb>] sys_fsync+0xb/0x10
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    12397 ms
Event count:                       7
jbd2/dm-0-8          sleep_on_buffer        1516 ms
jbd2/dm-0-8          sleep_on_buffer        1153 ms
jbd2/dm-0-8          sleep_on_buffer        1307 ms
jbd2/dm-0-8          sleep_on_buffer        1518 ms
jbd2/dm-0-8          sleep_on_buffer        1513 ms
jbd2/dm-0-8          sleep_on_buffer        1516 ms
jbd2/dm-0-8          sleep_on_buffer        3874 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff8123b488>] jbd2_journal_commit_transaction+0x818/0x13c0
[<ffffffff81240513>] kjournald2+0xb3/0x240
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    12361 ms
Event count:                       4
git                  sleep_on_buffer        1076 ms
scp                  sleep_on_buffer        1517 ms
rsync                sleep_on_buffer        5018 ms
rsync                sleep_on_buffer        4750 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f9c6e>] add_dirent_to_buf+0x12e/0x1d0
[<ffffffff811fa7e4>] ext4_add_entry+0x124/0x2d0
[<ffffffff811fa9b6>] ext4_add_nondir+0x26/0x80
[<ffffffff811fac9f>] ext4_create+0xff/0x190
[<ffffffff81180aa5>] vfs_create+0xb5/0x120
[<ffffffff81180c4e>] lookup_open+0x13e/0x1d0
[<ffffffff81180fe7>] do_last+0x307/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    12175 ms
Event count:                       3
patch                sleep_on_buffer        1546 ms
patch                sleep_on_buffer        7218 ms
patch                sleep_on_buffer        3411 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f9dd2>] ext4_dx_add_entry+0xc2/0x590
[<ffffffff811fa925>] ext4_add_entry+0x265/0x2d0
[<ffffffff811fb4bd>] ext4_mkdir+0x1ad/0x2b0
[<ffffffff81181b42>] vfs_mkdir+0xa2/0x120
[<ffffffff81182533>] sys_mkdirat+0xa3/0xf0
[<ffffffff81182594>] sys_mkdir+0x14/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    11862 ms
Event count:                       4
bash                 sleep_on_buffer        5441 ms
offlineimap          sleep_on_buffer        2780 ms
pool                 sleep_on_buffer        1529 ms
pool                 sleep_on_buffer        2112 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811fb67b>] ext4_orphan_add+0xbb/0x1f0
[<ffffffff811f31a4>] ext4_setattr+0x3d4/0x640
[<ffffffff8118d132>] notify_change+0x1f2/0x3c0
[<ffffffff811715d9>] do_truncate+0x59/0xa0
[<ffffffff8117d186>] handle_truncate+0x66/0xa0
[<ffffffff81181306>] do_last+0x626/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    11695 ms
Event count:                       1
git                  sleep_on_buffer       11695 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812275bf>] ext4_mb_mark_diskspace_used+0xdf/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4185>] mpage_add_bh_to_extent+0x45/0xa0
[<ffffffff811f4505>] write_cache_pages_da+0x325/0x4b0
[<ffffffff811f49e5>] ext4_da_writepages+0x355/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff811114b7>] filemap_flush+0x17/0x20
[<ffffffff811f0354>] ext4_alloc_da_blocks+0x44/0xa0
[<ffffffff811fb960>] ext4_rename+0x1b0/0x980
[<ffffffff8117d4ed>] vfs_rename_other+0xcd/0x120
[<ffffffff81180126>] vfs_rename+0xb6/0x240
[<ffffffff81182c96>] sys_renameat+0x386/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    11452 ms
Event count:                       8
compare-mmtests      sleep_on_buffer        1407 ms
compare-mmtests      sleep_on_buffer        1439 ms
find                 sleep_on_buffer        2063 ms
git                  sleep_on_buffer        1128 ms
cp                   sleep_on_buffer        1041 ms
rsync                sleep_on_buffer        1533 ms
rsync                sleep_on_buffer        1070 ms
FileLoader           sleep_on_buffer        1771 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f0227>] ext4_bread+0x57/0x80
[<ffffffff811f7b21>] __ext4_read_dirblock+0x41/0x1d0
[<ffffffff811f849b>] htree_dirblock_to_tree+0x3b/0x1a0
[<ffffffff811f8d7f>] ext4_htree_fill_tree+0x7f/0x220
[<ffffffff811e8d67>] ext4_dx_readdir+0x1a7/0x440
[<ffffffff811e9572>] ext4_readdir+0x422/0x4e0
[<ffffffff811849a0>] vfs_readdir+0xb0/0xe0
[<ffffffff81184ae9>] sys_getdents+0x89/0x110
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     9483 ms
Event count:                       3
offlineimap          sleep_on_buffer        1768 ms
dconf-service        sleep_on_buffer        6600 ms
git                  sleep_on_buffer        1115 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f9505>] ext4_find_entry+0x325/0x4f0
[<ffffffff811fb8b5>] ext4_rename+0x105/0x980
[<ffffffff8117d4ed>] vfs_rename_other+0xcd/0x120
[<ffffffff81180126>] vfs_rename+0xb6/0x240
[<ffffffff81182c96>] sys_renameat+0x386/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     8201 ms
Event count:                       1
systemd-journal      sleep_on_buffer        8201 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812275bf>] ext4_mb_mark_diskspace_used+0xdf/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff8121fd1f>] ext4_fallocate+0x1cf/0x420
[<ffffffff81171b32>] do_fallocate+0x112/0x190
[<ffffffff81171c02>] sys_fallocate+0x52/0x90
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     7699 ms
Event count:                       2
git                  sleep_on_buffer        3475 ms
git                  sleep_on_buffer        4224 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811fc898>] ext4_orphan_del+0x1a8/0x1e0
[<ffffffff811f4fbb>] ext4_evict_inode+0x30b/0x4c0
[<ffffffff8118bcbf>] evict+0xaf/0x1b0
[<ffffffff8118c543>] iput_final+0xd3/0x160
[<ffffffff8118c609>] iput+0x39/0x50
[<ffffffff81187248>] dentry_iput+0x98/0xe0
[<ffffffff81188ac8>] dput+0x128/0x230
[<ffffffff81182c4a>] sys_renameat+0x33a/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     7564 ms
Event count:                       2
tar                  sleep_on_buffer        1286 ms
rm                   sleep_on_buffer        6278 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f9505>] ext4_find_entry+0x325/0x4f0
[<ffffffff811fc3e1>] ext4_unlink+0x41/0x350
[<ffffffff8117daef>] vfs_unlink.part.31+0x7f/0xe0
[<ffffffff8117f9d7>] vfs_unlink+0x37/0x50
[<ffffffff8117fbff>] do_unlinkat+0x20f/0x260
[<ffffffff811825dd>] sys_unlinkat+0x1d/0x40
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     6596 ms
Event count:                       1
acroread             sleep_on_buffer        6596 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f51b1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff8118b789>] update_time+0x79/0xc0
[<ffffffff8118ba31>] touch_atime+0x161/0x170
[<ffffffff811105e3>] do_generic_file_read.constprop.35+0x363/0x440
[<ffffffff81111359>] generic_file_aio_read+0xd9/0x220
[<ffffffff81172b53>] do_sync_read+0xa3/0xe0
[<ffffffff8117327b>] vfs_read+0xab/0x170
[<ffffffff8117338d>] sys_read+0x4d/0x90
[<ffffffff8159d81c>] sysenter_dispatch+0x7/0x21
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     6589 ms
Event count:                       1
tar                  sleep_on_buffer        6589 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f9dd2>] ext4_dx_add_entry+0xc2/0x590
[<ffffffff811fa925>] ext4_add_entry+0x265/0x2d0
[<ffffffff811fb4bd>] ext4_mkdir+0x1ad/0x2b0
[<ffffffff81181b42>] vfs_mkdir+0xa2/0x120
[<ffffffff81182533>] sys_mkdirat+0xa3/0xf0
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     6272 ms
Event count:                       6
pool                 wait_on_page_bit       1005 ms
pool                 wait_on_page_bit       1015 ms
StreamT~ns #908      sleep_on_buffer        1086 ms
Cache I/O            wait_on_page_bit       1091 ms
StreamT~ns #138      wait_on_page_bit       1046 ms
offlineimap          sleep_on_buffer        1029 ms
[<ffffffff810a04ed>] futex_wait+0x17d/0x270
[<ffffffff810a21ac>] do_futex+0x7c/0x1b0
[<ffffffff810a241d>] sys_futex+0x13d/0x190
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     6237 ms
Event count:                       1
offlineimap          sleep_on_buffer        6237 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812291ba>] ext4_free_blocks+0x36a/0xc10
[<ffffffff8121bd16>] ext4_remove_blocks+0x256/0x2d0
[<ffffffff8121bf95>] ext4_ext_rm_leaf+0x205/0x520
[<ffffffff8121dcbc>] ext4_ext_remove_space+0x4dc/0x750
[<ffffffff8121fb0b>] ext4_ext_truncate+0x19b/0x1e0
[<ffffffff811ef535>] ext4_truncate.part.59+0xd5/0xf0
[<ffffffff811f0614>] ext4_truncate+0x34/0x90
[<ffffffff811f2f5d>] ext4_setattr+0x18d/0x640
[<ffffffff8118d132>] notify_change+0x1f2/0x3c0
[<ffffffff811715d9>] do_truncate+0x59/0xa0
[<ffffffff8117d186>] handle_truncate+0x66/0xa0
[<ffffffff81181306>] do_last+0x626/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     6192 ms
Event count:                       4
ls                   sleep_on_buffer        1679 ms
ls                   sleep_on_buffer        1746 ms
ls                   sleep_on_buffer        1076 ms
ls                   sleep_on_buffer        1691 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff811ef20d>] __ext4_get_inode_loc+0x3dd/0x3f0
[<ffffffff811f0d2e>] ext4_iget+0x7e/0x940
[<ffffffff811f9796>] ext4_lookup.part.31+0xc6/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff8117ca63>] __lookup_hash+0x33/0x40
[<ffffffff8158464f>] lookup_slow+0x40/0xa4
[<ffffffff8117efb2>] path_lookupat+0x222/0x780
[<ffffffff8117f53f>] filename_lookup+0x2f/0xc0
[<ffffffff81182074>] user_path_at_empty+0x54/0xa0
[<ffffffff811820cc>] user_path_at+0xc/0x10
[<ffffffff81177b39>] vfs_fstatat+0x49/0xa0
[<ffffffff81177bc6>] vfs_stat+0x16/0x20
[<ffffffff81177ce5>] sys_newstat+0x15/0x30
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     5989 ms
Event count:                       3
flush-8:0            sleep_on_buffer        1184 ms
flush-8:0            sleep_on_buffer        1548 ms
flush-8:0            sleep_on_buffer        3257 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff81227556>] ext4_mb_mark_diskspace_used+0x76/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4185>] mpage_add_bh_to_extent+0x45/0xa0
[<ffffffff811f4505>] write_cache_pages_da+0x325/0x4b0
[<ffffffff811f49e5>] ext4_da_writepages+0x355/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119de90>] wb_do_writeback+0x190/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     5770 ms
Event count:                       1
git                  sleep_on_buffer        5770 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff8121a8f2>] ext4_ext_get_access.isra.39+0x22/0x30
[<ffffffff8121bf74>] ext4_ext_rm_leaf+0x1e4/0x520
[<ffffffff8121dcbc>] ext4_ext_remove_space+0x4dc/0x750
[<ffffffff8121fb0b>] ext4_ext_truncate+0x19b/0x1e0
[<ffffffff811ef535>] ext4_truncate.part.59+0xd5/0xf0
[<ffffffff811f0614>] ext4_truncate+0x34/0x90
[<ffffffff811f513e>] ext4_evict_inode+0x48e/0x4c0
[<ffffffff8118bcbf>] evict+0xaf/0x1b0
[<ffffffff8118c543>] iput_final+0xd3/0x160
[<ffffffff8118c609>] iput+0x39/0x50
[<ffffffff81187248>] dentry_iput+0x98/0xe0
[<ffffffff81188ac8>] dput+0x128/0x230
[<ffffffff81182c4a>] sys_renameat+0x33a/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     4477 ms
Event count:                       2
offlineimap          sleep_on_buffer        2154 ms
DOM Worker           sleep_on_buffer        2323 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f9505>] ext4_find_entry+0x325/0x4f0
[<ffffffff811f96f9>] ext4_lookup.part.31+0x29/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff81180bd8>] lookup_open+0xc8/0x1d0
[<ffffffff81180fe7>] do_last+0x307/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     4428 ms
Event count:                       3
compare-mmtests      sleep_on_buffer        1725 ms
compare-mmtests      sleep_on_buffer        1634 ms
cp                   sleep_on_buffer        1069 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811eefee>] __ext4_get_inode_loc+0x1be/0x3f0
[<ffffffff811f0d2e>] ext4_iget+0x7e/0x940
[<ffffffff811f9796>] ext4_lookup.part.31+0xc6/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff8117ca63>] __lookup_hash+0x33/0x40
[<ffffffff8158464f>] lookup_slow+0x40/0xa4
[<ffffffff8117efb2>] path_lookupat+0x222/0x780
[<ffffffff8117f53f>] filename_lookup+0x2f/0xc0
[<ffffffff81182074>] user_path_at_empty+0x54/0xa0
[<ffffffff811820cc>] user_path_at+0xc/0x10
[<ffffffff81177b39>] vfs_fstatat+0x49/0xa0
[<ffffffff81177ba9>] vfs_lstat+0x19/0x20
[<ffffffff81177d15>] sys_newlstat+0x15/0x30
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     4168 ms
Event count:                       3
git                  sleep_on_buffer        1866 ms
git                  sleep_on_buffer        1070 ms
git                  sleep_on_buffer        1232 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811e7818>] ext4_wait_block_bitmap+0xb8/0xc0
[<ffffffff8122462e>] ext4_mb_init_cache+0x1ce/0x730
[<ffffffff8122509a>] ext4_mb_load_buddy+0x26a/0x350
[<ffffffff81227247>] ext4_mb_regular_allocator+0x197/0x430
[<ffffffff81228db6>] ext4_mb_new_blocks+0x3f6/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4185>] mpage_add_bh_to_extent+0x45/0xa0
[<ffffffff811f4505>] write_cache_pages_da+0x325/0x4b0
[<ffffffff811f49e5>] ext4_da_writepages+0x355/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff81110c3a>] filemap_write_and_wait_range+0x4a/0x70
[<ffffffff811ea54a>] ext4_sync_file+0x6a/0x2d0
[<ffffffff811a1758>] do_fsync+0x58/0x80
[<ffffffff811a1abb>] sys_fsync+0xb/0x10
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     3940 ms
Event count:                       2
evolution            sleep_on_buffer        1978 ms
git                  sleep_on_buffer        1962 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812291ba>] ext4_free_blocks+0x36a/0xc10
[<ffffffff8121bd16>] ext4_remove_blocks+0x256/0x2d0
[<ffffffff8121bf95>] ext4_ext_rm_leaf+0x205/0x520
[<ffffffff8121dcbc>] ext4_ext_remove_space+0x4dc/0x750
[<ffffffff8121fb0b>] ext4_ext_truncate+0x19b/0x1e0
[<ffffffff811ef535>] ext4_truncate.part.59+0xd5/0xf0
[<ffffffff811f0614>] ext4_truncate+0x34/0x90
[<ffffffff811f2f5d>] ext4_setattr+0x18d/0x640
[<ffffffff8118d132>] notify_change+0x1f2/0x3c0
[<ffffffff811715d9>] do_truncate+0x59/0xa0
[<ffffffff8117d186>] handle_truncate+0x66/0xa0
[<ffffffff81181306>] do_last+0x626/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     3802 ms
Event count:                       2
git                  sleep_on_buffer        1933 ms
git                  sleep_on_buffer        1869 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f9505>] ext4_find_entry+0x325/0x4f0
[<ffffffff811f96f9>] ext4_lookup.part.31+0x29/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff8117ca63>] __lookup_hash+0x33/0x40
[<ffffffff8158464f>] lookup_slow+0x40/0xa4
[<ffffffff8117efb2>] path_lookupat+0x222/0x780
[<ffffffff8117f53f>] filename_lookup+0x2f/0xc0
[<ffffffff81182074>] user_path_at_empty+0x54/0xa0
[<ffffffff811820cc>] user_path_at+0xc/0x10
[<ffffffff81171cd7>] sys_faccessat+0x97/0x220
[<ffffffff81171e73>] sys_access+0x13/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     3792 ms
Event count:                       3
cc1                  sleep_on_buffer        1161 ms
compare-mmtests      sleep_on_buffer        1088 ms
cc1                  sleep_on_buffer        1543 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f9505>] ext4_find_entry+0x325/0x4f0
[<ffffffff811f96f9>] ext4_lookup.part.31+0x29/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff8117ca63>] __lookup_hash+0x33/0x40
[<ffffffff8158464f>] lookup_slow+0x40/0xa4
[<ffffffff8117e76a>] link_path_walk+0x7ca/0x8e0
[<ffffffff81181596>] path_openat+0x96/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     3783 ms
Event count:                       2
compare-mmtests      sleep_on_buffer        2237 ms
compare-mmtests      sleep_on_buffer        1546 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811eefee>] __ext4_get_inode_loc+0x1be/0x3f0
[<ffffffff811f0d2e>] ext4_iget+0x7e/0x940
[<ffffffff811f9796>] ext4_lookup.part.31+0xc6/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff8117ca63>] __lookup_hash+0x33/0x40
[<ffffffff8158464f>] lookup_slow+0x40/0xa4
[<ffffffff8117e76a>] link_path_walk+0x7ca/0x8e0
[<ffffffff81181596>] path_openat+0x96/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117285f>] sys_openat+0xf/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     3692 ms
Event count:                       2
git                  sleep_on_buffer        1667 ms
git                  sleep_on_buffer        2025 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811e7818>] ext4_wait_block_bitmap+0xb8/0xc0
[<ffffffff8122462e>] ext4_mb_init_cache+0x1ce/0x730
[<ffffffff8122509a>] ext4_mb_load_buddy+0x26a/0x350
[<ffffffff8122676b>] ext4_mb_find_by_goal+0x9b/0x2d0
[<ffffffff81227109>] ext4_mb_regular_allocator+0x59/0x430
[<ffffffff81228db6>] ext4_mb_new_blocks+0x3f6/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff81110c3a>] filemap_write_and_wait_range+0x4a/0x70
[<ffffffff811ea54a>] ext4_sync_file+0x6a/0x2d0
[<ffffffff811a1758>] do_fsync+0x58/0x80
[<ffffffff811a1abb>] sys_fsync+0xb/0x10
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     3533 ms
Event count:                       1
pool                 sleep_on_buffer        3533 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f9dd2>] ext4_dx_add_entry+0xc2/0x590
[<ffffffff811fa925>] ext4_add_entry+0x265/0x2d0
[<ffffffff811fbf16>] ext4_rename+0x766/0x980
[<ffffffff8117d4ed>] vfs_rename_other+0xcd/0x120
[<ffffffff81180126>] vfs_rename+0xb6/0x240
[<ffffffff81182c96>] sys_renameat+0x386/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     3329 ms
Event count:                       3
folder-markup.s      sleep_on_buffer        1147 ms
imapd                sleep_on_buffer        1053 ms
gnuplot              sleep_on_buffer        1129 ms
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2861 ms
Event count:                       2
chmod                sleep_on_buffer        1227 ms
chmod                sleep_on_buffer        1634 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f51b1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff811f313c>] ext4_setattr+0x36c/0x640
[<ffffffff8118d132>] notify_change+0x1f2/0x3c0
[<ffffffff8117137b>] chmod_common+0xab/0xb0
[<ffffffff811721a1>] sys_fchmodat+0x41/0xa0
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2822 ms
Event count:                       1
gnome-terminal       sleep_on_buffer        2822 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811eb856>] ext4_free_inode+0x2b6/0x5f0
[<ffffffff811f4fe1>] ext4_evict_inode+0x331/0x4c0
[<ffffffff8118bcbf>] evict+0xaf/0x1b0
[<ffffffff8118c543>] iput_final+0xd3/0x160
[<ffffffff8118c609>] iput+0x39/0x50
[<ffffffff81187248>] dentry_iput+0x98/0xe0
[<ffffffff81188ac8>] dput+0x128/0x230
[<ffffffff81174368>] __fput+0x178/0x240
[<ffffffff81174439>] ____fput+0x9/0x10
[<ffffffff81065dc7>] task_work_run+0x97/0xd0
[<ffffffff81002cbc>] do_notify_resume+0x9c/0xb0
[<ffffffff8159c46a>] int_signal+0x12/0x17
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2769 ms
Event count:                       1
imapd                sleep_on_buffer        2769 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f51b1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff811f313c>] ext4_setattr+0x36c/0x640
[<ffffffff8118d132>] notify_change+0x1f2/0x3c0
[<ffffffff8117137b>] chmod_common+0xab/0xb0
[<ffffffff811721a1>] sys_fchmodat+0x41/0xa0
[<ffffffff81172214>] sys_chmod+0x14/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2727 ms
Event count:                       1
mv                   sleep_on_buffer        2727 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f9c6e>] add_dirent_to_buf+0x12e/0x1d0
[<ffffffff811fa7e4>] ext4_add_entry+0x124/0x2d0
[<ffffffff811fbf16>] ext4_rename+0x766/0x980
[<ffffffff8117d4ed>] vfs_rename_other+0xcd/0x120
[<ffffffff81180126>] vfs_rename+0xb6/0x240
[<ffffffff81182c96>] sys_renameat+0x386/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2675 ms
Event count:                       1
flush-8:0            sleep_on_buffer        2675 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812275bf>] ext4_mb_mark_diskspace_used+0xdf/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4185>] mpage_add_bh_to_extent+0x45/0xa0
[<ffffffff811f4505>] write_cache_pages_da+0x325/0x4b0
[<ffffffff811f49e5>] ext4_da_writepages+0x355/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119de90>] wb_do_writeback+0x190/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2658 ms
Event count:                       1
patch                sleep_on_buffer        2658 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff8121c4ad>] ext4_ext_tree_init+0x2d/0x40
[<ffffffff811ecc06>] __ext4_new_inode+0x1076/0x10c0
[<ffffffff811fb456>] ext4_mkdir+0x146/0x2b0
[<ffffffff81181b42>] vfs_mkdir+0xa2/0x120
[<ffffffff81182533>] sys_mkdirat+0xa3/0xf0
[<ffffffff81182594>] sys_mkdir+0x14/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2603 ms
Event count:                       2
flush-8:0            sleep_on_buffer        1162 ms
flush-8:0            sleep_on_buffer        1441 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff8121a8f2>] ext4_ext_get_access.isra.39+0x22/0x30
[<ffffffff8121d24c>] ext4_ext_insert_extent+0x21c/0x420
[<ffffffff8121f60a>] ext4_ext_map_blocks+0x69a/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119c5d7>] wb_check_old_data_flush+0x97/0xa0
[<ffffffff8119de49>] wb_do_writeback+0x149/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2580 ms
Event count:                       2
rm                   sleep_on_buffer        1265 ms
rm                   sleep_on_buffer        1315 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811e7818>] ext4_wait_block_bitmap+0xb8/0xc0
[<ffffffff811e8265>] ext4_read_block_bitmap+0x35/0x60
[<ffffffff8122908c>] ext4_free_blocks+0x23c/0xc10
[<ffffffff8121bd16>] ext4_remove_blocks+0x256/0x2d0
[<ffffffff8121bf95>] ext4_ext_rm_leaf+0x205/0x520
[<ffffffff8121dcbc>] ext4_ext_remove_space+0x4dc/0x750
[<ffffffff8121fb0b>] ext4_ext_truncate+0x19b/0x1e0
[<ffffffff811ef535>] ext4_truncate.part.59+0xd5/0xf0
[<ffffffff811f0614>] ext4_truncate+0x34/0x90
[<ffffffff811f513e>] ext4_evict_inode+0x48e/0x4c0
[<ffffffff8118bcbf>] evict+0xaf/0x1b0
[<ffffffff8118c543>] iput_final+0xd3/0x160
[<ffffffff8118c609>] iput+0x39/0x50
[<ffffffff8117fbe1>] do_unlinkat+0x1f1/0x260
[<ffffffff811825dd>] sys_unlinkat+0x1d/0x40
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2542 ms
Event count:                       2
flush-8:16           get_request            1316 ms
flush-8:16           get_request            1226 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff812a4b1f>] generic_make_request.part.59+0x6f/0xa0
[<ffffffff812a5050>] generic_make_request+0x60/0x70
[<ffffffff812a50c7>] submit_bio+0x67/0x130
[<ffffffff811a30fb>] submit_bh+0xfb/0x130
[<ffffffff811a6058>] __block_write_full_page+0x1c8/0x340
[<ffffffff811a62a3>] block_write_full_page_endio+0xd3/0x110
[<ffffffff811a62f0>] block_write_full_page+0x10/0x20
[<ffffffff811aa0c3>] blkdev_writepage+0x13/0x20
[<ffffffff81119292>] __writepage+0x12/0x40
[<ffffffff81119a96>] write_cache_pages+0x206/0x460
[<ffffffff81119d35>] generic_writepages+0x45/0x70
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119de90>] wb_do_writeback+0x190/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2504 ms
Event count:                       1
acroread             sleep_on_buffer        2504 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f51b1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff8118b789>] update_time+0x79/0xc0
[<ffffffff8118ba31>] touch_atime+0x161/0x170
[<ffffffff811105e3>] do_generic_file_read.constprop.35+0x363/0x440
[<ffffffff81111359>] generic_file_aio_read+0xd9/0x220
[<ffffffff81172b53>] do_sync_read+0xa3/0xe0
[<ffffffff8117327b>] vfs_read+0xab/0x170
[<ffffffff8117338d>] sys_read+0x4d/0x90
[<ffffffff8159dc79>] ia32_sysret+0x0/0x5
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2477 ms
Event count:                       2
git                  sleep_on_buffer        1200 ms
firefox              sleep_on_buffer        1277 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812291ba>] ext4_free_blocks+0x36a/0xc10
[<ffffffff8121bd16>] ext4_remove_blocks+0x256/0x2d0
[<ffffffff8121bf95>] ext4_ext_rm_leaf+0x205/0x520
[<ffffffff8121dcbc>] ext4_ext_remove_space+0x4dc/0x750
[<ffffffff8121fb0b>] ext4_ext_truncate+0x19b/0x1e0
[<ffffffff811ef535>] ext4_truncate.part.59+0xd5/0xf0
[<ffffffff811f0614>] ext4_truncate+0x34/0x90
[<ffffffff811f513e>] ext4_evict_inode+0x48e/0x4c0
[<ffffffff8118bcbf>] evict+0xaf/0x1b0
[<ffffffff8118c543>] iput_final+0xd3/0x160
[<ffffffff8118c609>] iput+0x39/0x50
[<ffffffff8117fbe1>] do_unlinkat+0x1f1/0x260
[<ffffffff81182611>] sys_unlink+0x11/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2168 ms
Event count:                       2
xchat                sleep_on_buffer        1096 ms
xchat                sleep_on_buffer        1072 ms
[<ffffffff81185476>] do_poll.isra.7+0x1c6/0x290
[<ffffffff81186331>] do_sys_poll+0x191/0x200
[<ffffffff81186466>] sys_poll+0x66/0x100
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2156 ms
Event count:                       2
git                  sleep_on_buffer        1076 ms
git                  sleep_on_buffer        1080 ms
[<ffffffff811383b2>] unmap_single_vma+0x82/0x100
[<ffffffff81138c2c>] unmap_vmas+0x4c/0xa0
[<ffffffff811408f0>] exit_mmap+0x90/0x170
[<ffffffff81043ee5>] mmput.part.27+0x45/0x110
[<ffffffff81043fcd>] mmput+0x1d/0x30
[<ffffffff8104be22>] exit_mm+0x132/0x180
[<ffffffff8104bfc5>] do_exit+0x155/0x460
[<ffffffff8104c34f>] do_group_exit+0x3f/0xa0
[<ffffffff8104c3c2>] sys_exit_group+0x12/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2141 ms
Event count:                       2
imapd                sleep_on_buffer        1057 ms
ntpd                 wait_on_page_bit_killable   1084 ms
[<ffffffff81185a99>] do_select+0x4c9/0x5d0
[<ffffffff81185d58>] core_sys_select+0x1b8/0x2f0
[<ffffffff811860d6>] sys_select+0xb6/0x100
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2130 ms
Event count:                       2
git                  sleep_on_buffer        1110 ms
git                  sleep_on_buffer        1020 ms
[<ffffffff811f4ccb>] ext4_evict_inode+0x1b/0x4c0
[<ffffffff8118bcbf>] evict+0xaf/0x1b0
[<ffffffff8118c543>] iput_final+0xd3/0x160
[<ffffffff8118c609>] iput+0x39/0x50
[<ffffffff81187248>] dentry_iput+0x98/0xe0
[<ffffffff81188ac8>] dput+0x128/0x230
[<ffffffff81182c4a>] sys_renameat+0x33a/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2092 ms
Event count:                       1
flush-8:0            sleep_on_buffer        2092 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff8121a97b>] __ext4_ext_dirty.isra.40+0x7b/0x80
[<ffffffff8121d69b>] ext4_split_extent_at+0x24b/0x390
[<ffffffff8121e038>] ext4_split_extent.isra.47+0x108/0x130
[<ffffffff8121e3ae>] ext4_ext_convert_to_initialized+0x15e/0x590
[<ffffffff8121ee7b>] ext4_ext_handle_uninitialized_extents+0x2fb/0x3c0
[<ffffffff8121f547>] ext4_ext_map_blocks+0x5d7/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119de90>] wb_do_writeback+0x190/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2079 ms
Event count:                       2
offlineimap          sleep_on_buffer        1030 ms
pool                 wait_on_page_bit       1049 ms
[<ffffffff811ea6e5>] ext4_sync_file+0x205/0x2d0
[<ffffffff811a1758>] do_fsync+0x58/0x80
[<ffffffff811a1abb>] sys_fsync+0xb/0x10
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2066 ms
Event count:                       2
folder-markup.s      sleep_on_buffer        1024 ms
tee                  sleep_on_buffer        1042 ms
[<ffffffff8117b90e>] pipe_read+0x20e/0x340
[<ffffffff81172b53>] do_sync_read+0xa3/0xe0
[<ffffffff8117327b>] vfs_read+0xab/0x170
[<ffffffff8117338d>] sys_read+0x4d/0x90
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2047 ms
Event count:                       1
Cache I/O            sleep_on_buffer        2047 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812291e1>] ext4_free_blocks+0x391/0xc10
[<ffffffff8121bd16>] ext4_remove_blocks+0x256/0x2d0
[<ffffffff8121bf95>] ext4_ext_rm_leaf+0x205/0x520
[<ffffffff8121dcbc>] ext4_ext_remove_space+0x4dc/0x750
[<ffffffff8121fb0b>] ext4_ext_truncate+0x19b/0x1e0
[<ffffffff811ef535>] ext4_truncate.part.59+0xd5/0xf0
[<ffffffff811f0614>] ext4_truncate+0x34/0x90
[<ffffffff811f2f5d>] ext4_setattr+0x18d/0x640
[<ffffffff8118d132>] notify_change+0x1f2/0x3c0
[<ffffffff811715d9>] do_truncate+0x59/0xa0
[<ffffffff81171979>] do_sys_ftruncate.constprop.14+0x109/0x170
[<ffffffff81171a09>] sys_ftruncate+0x9/0x10
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1977 ms
Event count:                       1
patch                sleep_on_buffer        1977 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f9505>] ext4_find_entry+0x325/0x4f0
[<ffffffff811f96f9>] ext4_lookup.part.31+0x29/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff8117ca63>] __lookup_hash+0x33/0x40
[<ffffffff8158464f>] lookup_slow+0x40/0xa4
[<ffffffff8117e76a>] link_path_walk+0x7ca/0x8e0
[<ffffffff8117ede3>] path_lookupat+0x53/0x780
[<ffffffff8117f53f>] filename_lookup+0x2f/0xc0
[<ffffffff81182074>] user_path_at_empty+0x54/0xa0
[<ffffffff811820cc>] user_path_at+0xc/0x10
[<ffffffff81177b39>] vfs_fstatat+0x49/0xa0
[<ffffffff81177ba9>] vfs_lstat+0x19/0x20
[<ffffffff81177d15>] sys_newlstat+0x15/0x30
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1839 ms
Event count:                       1
compare-mmtests      sleep_on_buffer        1839 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811eefee>] __ext4_get_inode_loc+0x1be/0x3f0
[<ffffffff811f0d2e>] ext4_iget+0x7e/0x940
[<ffffffff811f9796>] ext4_lookup.part.31+0xc6/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff8117ca63>] __lookup_hash+0x33/0x40
[<ffffffff8158464f>] lookup_slow+0x40/0xa4
[<ffffffff8117e76a>] link_path_walk+0x7ca/0x8e0
[<ffffffff81181596>] path_openat+0x96/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1819 ms
Event count:                       1
cp                   sleep_on_buffer        1819 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff811a4bc7>] write_dirty_buffer+0x67/0x70
[<ffffffff8123d035>] __flush_batch+0x45/0xa0
[<ffffffff8123dad6>] jbd2_log_do_checkpoint+0x1d6/0x220
[<ffffffff8123dba1>] __jbd2_log_wait_for_space+0x81/0x190
[<ffffffff812382d0>] start_this_handle+0x2e0/0x3e0
[<ffffffff81238590>] jbd2__journal_start.part.8+0x90/0x190
[<ffffffff812386d5>] jbd2__journal_start+0x45/0x50
[<ffffffff812205d1>] __ext4_journal_start_sb+0x81/0x170
[<ffffffff811ebf61>] __ext4_new_inode+0x3d1/0x10c0
[<ffffffff811fac5b>] ext4_create+0xbb/0x190
[<ffffffff81180aa5>] vfs_create+0xb5/0x120
[<ffffffff81180c4e>] lookup_open+0x13e/0x1d0
[<ffffffff81180fe7>] do_last+0x307/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1664 ms
Event count:                       1
flush-8:0            sleep_on_buffer        1664 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff8121a8f2>] ext4_ext_get_access.isra.39+0x22/0x30
[<ffffffff8121d125>] ext4_ext_insert_extent+0xf5/0x420
[<ffffffff8121f60a>] ext4_ext_map_blocks+0x69a/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119c5d7>] wb_check_old_data_flush+0x97/0xa0
[<ffffffff8119de49>] wb_do_writeback+0x149/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1635 ms
Event count:                       1
flush-8:0            sleep_on_buffer        1635 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812275bf>] ext4_mb_mark_diskspace_used+0xdf/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4185>] mpage_add_bh_to_extent+0x45/0xa0
[<ffffffff811f4505>] write_cache_pages_da+0x325/0x4b0
[<ffffffff811f49e5>] ext4_da_writepages+0x355/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119c5d7>] wb_check_old_data_flush+0x97/0xa0
[<ffffffff8119de49>] wb_do_writeback+0x149/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1591 ms
Event count:                       1
imapd                sleep_on_buffer        1591 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f9505>] ext4_find_entry+0x325/0x4f0
[<ffffffff811f96f9>] ext4_lookup.part.31+0x29/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff8117ca63>] __lookup_hash+0x33/0x40
[<ffffffff8117ca84>] lookup_hash+0x14/0x20
[<ffffffff8117fae3>] do_unlinkat+0xf3/0x260
[<ffffffff81182611>] sys_unlink+0x11/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1529 ms
Event count:                       1
ls                   sleep_on_buffer        1529 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f0227>] ext4_bread+0x57/0x80
[<ffffffff811f7b21>] __ext4_read_dirblock+0x41/0x1d0
[<ffffffff811f7f3d>] dx_probe+0x3d/0x410
[<ffffffff811f8dce>] ext4_htree_fill_tree+0xce/0x220
[<ffffffff811e8d67>] ext4_dx_readdir+0x1a7/0x440
[<ffffffff811e9572>] ext4_readdir+0x422/0x4e0
[<ffffffff811849a0>] vfs_readdir+0xb0/0xe0
[<ffffffff81184ae9>] sys_getdents+0x89/0x110
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1523 ms
Event count:                       1
gnuplot              sleep_on_buffer        1523 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff8121fad7>] ext4_ext_truncate+0x167/0x1e0
[<ffffffff811ef535>] ext4_truncate.part.59+0xd5/0xf0
[<ffffffff811f0614>] ext4_truncate+0x34/0x90
[<ffffffff811f2f5d>] ext4_setattr+0x18d/0x640
[<ffffffff8118d132>] notify_change+0x1f2/0x3c0
[<ffffffff811715d9>] do_truncate+0x59/0xa0
[<ffffffff8117d186>] handle_truncate+0x66/0xa0
[<ffffffff81181306>] do_last+0x626/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1519 ms
Event count:                       1
find                 sleep_on_buffer        1519 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f0227>] ext4_bread+0x57/0x80
[<ffffffff811f7b21>] __ext4_read_dirblock+0x41/0x1d0
[<ffffffff811f849b>] htree_dirblock_to_tree+0x3b/0x1a0
[<ffffffff811f8e42>] ext4_htree_fill_tree+0x142/0x220
[<ffffffff811e8d67>] ext4_dx_readdir+0x1a7/0x440
[<ffffffff811e9572>] ext4_readdir+0x422/0x4e0
[<ffffffff811849a0>] vfs_readdir+0xb0/0xe0
[<ffffffff81184ae9>] sys_getdents+0x89/0x110
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1509 ms
Event count:                       1
git                  sleep_on_buffer        1509 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811e7818>] ext4_wait_block_bitmap+0xb8/0xc0
[<ffffffff811e8265>] ext4_read_block_bitmap+0x35/0x60
[<ffffffff81227533>] ext4_mb_mark_diskspace_used+0x53/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff81110c3a>] filemap_write_and_wait_range+0x4a/0x70
[<ffffffff811ea54a>] ext4_sync_file+0x6a/0x2d0
[<ffffffff811a1758>] do_fsync+0x58/0x80
[<ffffffff811a1abb>] sys_fsync+0xb/0x10
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1470 ms
Event count:                       1
rm                   sleep_on_buffer        1470 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811eb4d0>] ext4_read_inode_bitmap+0x400/0x4d0
[<ffffffff811eb7ab>] ext4_free_inode+0x20b/0x5f0
[<ffffffff811f4fe1>] ext4_evict_inode+0x331/0x4c0
[<ffffffff8118bcbf>] evict+0xaf/0x1b0
[<ffffffff8118c543>] iput_final+0xd3/0x160
[<ffffffff8118c609>] iput+0x39/0x50
[<ffffffff8117fbe1>] do_unlinkat+0x1f1/0x260
[<ffffffff811825dd>] sys_unlinkat+0x1d/0x40
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1462 ms
Event count:                       1
imapd                sleep_on_buffer        1462 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811fbb37>] ext4_rename+0x387/0x980
[<ffffffff8117d4ed>] vfs_rename_other+0xcd/0x120
[<ffffffff81180126>] vfs_rename+0xb6/0x240
[<ffffffff81182c96>] sys_renameat+0x386/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1457 ms
Event count:                       1
git                  sleep_on_buffer        1457 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811e7818>] ext4_wait_block_bitmap+0xb8/0xc0
[<ffffffff811e8265>] ext4_read_block_bitmap+0x35/0x60
[<ffffffff81227533>] ext4_mb_mark_diskspace_used+0x53/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4185>] mpage_add_bh_to_extent+0x45/0xa0
[<ffffffff811f4505>] write_cache_pages_da+0x325/0x4b0
[<ffffffff811f49e5>] ext4_da_writepages+0x355/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff81110c3a>] filemap_write_and_wait_range+0x4a/0x70
[<ffffffff811ea54a>] ext4_sync_file+0x6a/0x2d0
[<ffffffff811a1758>] do_fsync+0x58/0x80
[<ffffffff811a1abb>] sys_fsync+0xb/0x10
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1395 ms
Event count:                       1
flush-8:0            sleep_on_buffer        1395 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812275bf>] ext4_mb_mark_diskspace_used+0xdf/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119c5d7>] wb_check_old_data_flush+0x97/0xa0
[<ffffffff8119de49>] wb_do_writeback+0x149/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1387 ms
Event count:                       1
git                  sleep_on_buffer        1387 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812275bf>] ext4_mb_mark_diskspace_used+0xdf/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff81110c3a>] filemap_write_and_wait_range+0x4a/0x70
[<ffffffff811ea54a>] ext4_sync_file+0x6a/0x2d0
[<ffffffff811a1758>] do_fsync+0x58/0x80
[<ffffffff811a1abb>] sys_fsync+0xb/0x10
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1378 ms
Event count:                       1
gnuplot              sleep_on_buffer        1378 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812275bf>] ext4_mb_mark_diskspace_used+0xdf/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff811114b7>] filemap_flush+0x17/0x20
[<ffffffff811f0354>] ext4_alloc_da_blocks+0x44/0xa0
[<ffffffff811ea201>] ext4_release_file+0x61/0xd0
[<ffffffff811742a0>] __fput+0xb0/0x240
[<ffffffff81174439>] ____fput+0x9/0x10
[<ffffffff81065de4>] task_work_run+0xb4/0xd0
[<ffffffff8104bffa>] do_exit+0x18a/0x460
[<ffffffff8104c34f>] do_group_exit+0x3f/0xa0
[<ffffffff8104c3c2>] sys_exit_group+0x12/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1337 ms
Event count:                       1
git                  sleep_on_buffer        1337 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811e7818>] ext4_wait_block_bitmap+0xb8/0xc0
[<ffffffff8122462e>] ext4_mb_init_cache+0x1ce/0x730
[<ffffffff81224c2e>] ext4_mb_init_group+0x9e/0x100
[<ffffffff81224d97>] ext4_mb_good_group+0x107/0x1a0
[<ffffffff81227233>] ext4_mb_regular_allocator+0x183/0x430
[<ffffffff81228db6>] ext4_mb_new_blocks+0x3f6/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4185>] mpage_add_bh_to_extent+0x45/0xa0
[<ffffffff811f4505>] write_cache_pages_da+0x325/0x4b0
[<ffffffff811f49e5>] ext4_da_writepages+0x355/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff81110c3a>] filemap_write_and_wait_range+0x4a/0x70
[<ffffffff811ea54a>] ext4_sync_file+0x6a/0x2d0
[<ffffffff811a1758>] do_fsync+0x58/0x80
[<ffffffff811a1abb>] sys_fsync+0xb/0x10
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1309 ms
Event count:                       1
flush-8:0            sleep_on_buffer        1309 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff8121a97b>] __ext4_ext_dirty.isra.40+0x7b/0x80
[<ffffffff8121d69b>] ext4_split_extent_at+0x24b/0x390
[<ffffffff8121e038>] ext4_split_extent.isra.47+0x108/0x130
[<ffffffff8121e3ae>] ext4_ext_convert_to_initialized+0x15e/0x590
[<ffffffff8121ee7b>] ext4_ext_handle_uninitialized_extents+0x2fb/0x3c0
[<ffffffff8121f547>] ext4_ext_map_blocks+0x5d7/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119c5d7>] wb_check_old_data_flush+0x97/0xa0
[<ffffffff8119de49>] wb_do_writeback+0x149/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1284 ms
Event count:                       1
cp                   sleep_on_buffer        1284 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f51b1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff8118b789>] update_time+0x79/0xc0
[<ffffffff8118ba31>] touch_atime+0x161/0x170
[<ffffffff81177e71>] sys_readlinkat+0xe1/0x120
[<ffffffff81177ec6>] sys_readlink+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1277 ms
Event count:                       1
git                  sleep_on_buffer        1277 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811e7818>] ext4_wait_block_bitmap+0xb8/0xc0
[<ffffffff8122462e>] ext4_mb_init_cache+0x1ce/0x730
[<ffffffff81224c2e>] ext4_mb_init_group+0x9e/0x100
[<ffffffff81224d97>] ext4_mb_good_group+0x107/0x1a0
[<ffffffff81227233>] ext4_mb_regular_allocator+0x183/0x430
[<ffffffff81228db6>] ext4_mb_new_blocks+0x3f6/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff81110c3a>] filemap_write_and_wait_range+0x4a/0x70
[<ffffffff811ea54a>] ext4_sync_file+0x6a/0x2d0
[<ffffffff811a1758>] do_fsync+0x58/0x80
[<ffffffff811a1abb>] sys_fsync+0xb/0x10
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1235 ms
Event count:                       1
cp                   sleep_on_buffer        1235 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812275bf>] ext4_mb_mark_diskspace_used+0xdf/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8122d7d0>] ext4_alloc_blocks+0x140/0x2b0
[<ffffffff8122d995>] ext4_alloc_branch+0x55/0x2c0
[<ffffffff8122ecb9>] ext4_ind_map_blocks+0x299/0x500
[<ffffffff811efd43>] ext4_map_blocks+0x1b3/0x450
[<ffffffff811f23e7>] _ext4_get_block+0x87/0x170
[<ffffffff811f2501>] ext4_get_block+0x11/0x20
[<ffffffff811a65bf>] __block_write_begin+0x1af/0x4d0
[<ffffffff811f1969>] ext4_write_begin+0x159/0x410
[<ffffffff8110f3aa>] generic_perform_write+0xca/0x210
[<ffffffff8110f548>] generic_file_buffered_write+0x58/0x90
[<ffffffff81110f96>] __generic_file_aio_write+0x1b6/0x3b0
[<ffffffff8111120a>] generic_file_aio_write+0x7a/0xf0
[<ffffffff811ea3a3>] ext4_file_write+0x83/0xd0
[<ffffffff81172a73>] do_sync_write+0xa3/0xe0
[<ffffffff811730fe>] vfs_write+0xae/0x180
[<ffffffff8117341d>] sys_write+0x4d/0x90
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1182 ms
Event count:                       1
imapd                sleep_on_buffer        1182 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811fb052>] ext4_delete_entry+0x62/0x120
[<ffffffff811fbfea>] ext4_rename+0x83a/0x980
[<ffffffff8117d4ed>] vfs_rename_other+0xcd/0x120
[<ffffffff81180126>] vfs_rename+0xb6/0x240
[<ffffffff81182c96>] sys_renameat+0x386/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1181 ms
Event count:                       1
systemd-journal      sleep_on_buffer        1181 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff8121a8f2>] ext4_ext_get_access.isra.39+0x22/0x30
[<ffffffff8121d125>] ext4_ext_insert_extent+0xf5/0x420
[<ffffffff8121f60a>] ext4_ext_map_blocks+0x69a/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff8121fd1f>] ext4_fallocate+0x1cf/0x420
[<ffffffff81171b32>] do_fallocate+0x112/0x190
[<ffffffff81171c02>] sys_fallocate+0x52/0x90
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1160 ms
Event count:                       1
rm                   sleep_on_buffer        1160 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f9505>] ext4_find_entry+0x325/0x4f0
[<ffffffff811fc169>] ext4_rmdir+0x39/0x270
[<ffffffff8117dbf8>] vfs_rmdir.part.32+0xa8/0xf0
[<ffffffff8117fc8a>] vfs_rmdir+0x3a/0x50
[<ffffffff8117fe63>] do_rmdir+0x1c3/0x1e0
[<ffffffff811825ed>] sys_unlinkat+0x2d/0x40
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1108 ms
Event count:                       1
mutt                 sleep_on_buffer        1108 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811eb7cf>] ext4_free_inode+0x22f/0x5f0
[<ffffffff811f4fe1>] ext4_evict_inode+0x331/0x4c0
[<ffffffff8118bcbf>] evict+0xaf/0x1b0
[<ffffffff8118c543>] iput_final+0xd3/0x160
[<ffffffff8118c609>] iput+0x39/0x50
[<ffffffff81187248>] dentry_iput+0x98/0xe0
[<ffffffff81188ac8>] dput+0x128/0x230
[<ffffffff81174368>] __fput+0x178/0x240
[<ffffffff81174439>] ____fput+0x9/0x10
[<ffffffff81065dc7>] task_work_run+0x97/0xd0
[<ffffffff81002cbc>] do_notify_resume+0x9c/0xb0
[<ffffffff8159c46a>] int_signal+0x12/0x17
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1106 ms
Event count:                       1
flush-8:0            sleep_on_buffer        1106 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f51b1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff811efadd>] ext4_da_update_reserve_space+0x1cd/0x280
[<ffffffff8121f88a>] ext4_ext_map_blocks+0x91a/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119de90>] wb_do_writeback+0x190/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1081 ms
Event count:                       1
imapd                sleep_on_buffer        1081 ms
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811fb67b>] ext4_orphan_add+0xbb/0x1f0
[<ffffffff8121f9e1>] ext4_ext_truncate+0x71/0x1e0
[<ffffffff811ef535>] ext4_truncate.part.59+0xd5/0xf0
[<ffffffff811f0614>] ext4_truncate+0x34/0x90
[<ffffffff811f2f5d>] ext4_setattr+0x18d/0x640
[<ffffffff8118d132>] notify_change+0x1f2/0x3c0
[<ffffffff811715d9>] do_truncate+0x59/0xa0
[<ffffffff8117d186>] handle_truncate+0x66/0xa0
[<ffffffff81181306>] do_last+0x626/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1079 ms
Event count:                       1
git                  sleep_on_buffer        1079 ms
[<ffffffff812a5050>] generic_make_request+0x60/0x70
[<ffffffff812a50c7>] submit_bio+0x67/0x130
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1074 ms
Event count:                       1
cp                   sleep_on_buffer        1074 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811ec749>] __ext4_new_inode+0xbb9/0x10c0
[<ffffffff811fac5b>] ext4_create+0xbb/0x190
[<ffffffff81180aa5>] vfs_create+0xb5/0x120
[<ffffffff81180c4e>] lookup_open+0x13e/0x1d0
[<ffffffff81180fe7>] do_last+0x307/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1072 ms
Event count:                       1
du                   sleep_on_buffer        1072 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811eefee>] __ext4_get_inode_loc+0x1be/0x3f0
[<ffffffff811f0d2e>] ext4_iget+0x7e/0x940
[<ffffffff811f9796>] ext4_lookup.part.31+0xc6/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff8117ca63>] __lookup_hash+0x33/0x40
[<ffffffff8158464f>] lookup_slow+0x40/0xa4
[<ffffffff8117efb2>] path_lookupat+0x222/0x780
[<ffffffff8117f53f>] filename_lookup+0x2f/0xc0
[<ffffffff81182074>] user_path_at_empty+0x54/0xa0
[<ffffffff811820cc>] user_path_at+0xc/0x10
[<ffffffff81177b39>] vfs_fstatat+0x49/0xa0
[<ffffffff81177d45>] sys_newfstatat+0x15/0x30
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1034 ms
Event count:                       1
git                  sleep_on_buffer        1034 ms
[<ffffffff8110ef82>] __lock_page+0x62/0x70
[<ffffffff8110fe71>] find_lock_page+0x51/0x80
[<ffffffff8110ff04>] grab_cache_page_write_begin+0x64/0xd0
[<ffffffff811f1ca4>] ext4_da_write_begin+0x84/0x2e0
[<ffffffff8110f3aa>] generic_perform_write+0xca/0x210
[<ffffffff8110f548>] generic_file_buffered_write+0x58/0x90
[<ffffffff81110f96>] __generic_file_aio_write+0x1b6/0x3b0
[<ffffffff8111120a>] generic_file_aio_write+0x7a/0xf0
[<ffffffff811ea3a3>] ext4_file_write+0x83/0xd0
[<ffffffff81172a73>] do_sync_write+0xa3/0xe0
[<ffffffff811730fe>] vfs_write+0xae/0x180
[<ffffffff8117341d>] sys_write+0x4d/0x90
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1031 ms
Event count:                       1
git                  sleep_on_buffer        1031 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811e7818>] ext4_wait_block_bitmap+0xb8/0xc0
[<ffffffff8122462e>] ext4_mb_init_cache+0x1ce/0x730
[<ffffffff8122509a>] ext4_mb_load_buddy+0x26a/0x350
[<ffffffff81227247>] ext4_mb_regular_allocator+0x197/0x430
[<ffffffff81228db6>] ext4_mb_new_blocks+0x3f6/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff811114b7>] filemap_flush+0x17/0x20
[<ffffffff811f0354>] ext4_alloc_da_blocks+0x44/0xa0
[<ffffffff811fb960>] ext4_rename+0x1b0/0x980
[<ffffffff8117d4ed>] vfs_rename_other+0xcd/0x120
[<ffffffff81180126>] vfs_rename+0xb6/0x240
[<ffffffff81182c96>] sys_renameat+0x386/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1029 ms
Event count:                       1
git                  wait_on_page_bit_killable   1029 ms
[<ffffffff815966d9>] kretprobe_trampoline+0x25/0x4c
[<ffffffff81111728>] filemap_fault+0x88/0x410
[<ffffffff81135d69>] __do_fault+0x439/0x530
[<ffffffff811394be>] handle_pte_fault+0xee/0x200
[<ffffffff8113a731>] handle_mm_fault+0x271/0x390
[<ffffffff81597a20>] __do_page_fault+0x230/0x520
[<ffffffff81594ec5>] do_device_not_available+0x15/0x20
[<ffffffff8159d50e>] device_not_available+0x1e/0x30
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1017 ms
Event count:                       1
npviewer.bin         sleep_on_buffer        1017 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811eefee>] __ext4_get_inode_loc+0x1be/0x3f0
[<ffffffff811f0d2e>] ext4_iget+0x7e/0x940
[<ffffffff811f9796>] ext4_lookup.part.31+0xc6/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff81180bd8>] lookup_open+0xc8/0x1d0
[<ffffffff81180fe7>] do_last+0x307/0x820
[<ffffffff8118182a>] path_openat+0x32a/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff811c2996>] compat_sys_open+0x16/0x20
[<ffffffff8159dc79>] ia32_sysret+0x0/0x5
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1016 ms
Event count:                       1
rm                   sleep_on_buffer        1016 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff8123d8f0>] __wait_cp_io+0xd0/0xe0
[<ffffffff8123da23>] jbd2_log_do_checkpoint+0x123/0x220
[<ffffffff8123dba1>] __jbd2_log_wait_for_space+0x81/0x190
[<ffffffff812382d0>] start_this_handle+0x2e0/0x3e0
[<ffffffff81238590>] jbd2__journal_start.part.8+0x90/0x190
[<ffffffff812386d5>] jbd2__journal_start+0x45/0x50
[<ffffffff812205d1>] __ext4_journal_start_sb+0x81/0x170
[<ffffffff811fc44c>] ext4_unlink+0xac/0x350
[<ffffffff8117daef>] vfs_unlink.part.31+0x7f/0xe0
[<ffffffff8117f9d7>] vfs_unlink+0x37/0x50
[<ffffffff8117fbff>] do_unlinkat+0x20f/0x260
[<ffffffff811825dd>] sys_unlinkat+0x1d/0x40
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff


-- 
Mel Gorman
SUSE Labs

cp                   sleep_on_buffer       25904 ms
cp                   sleep_on_buffer        1766 ms
cp                   sleep_on_buffer        9823 ms
cp                   sleep_on_buffer        1849 ms
cp                   sleep_on_buffer        1380 ms
cp                   sleep_on_buffer        2524 ms
cp                   sleep_on_buffer        2389 ms
cp                   sleep_on_buffer        1996 ms
cp                   sleep_on_buffer       10396 ms
cp                   sleep_on_buffer        2020 ms
cp                   sleep_on_buffer        1132 ms
cc1                  sleep_on_buffer        1182 ms
cp                   sleep_on_buffer        1195 ms
cp                   sleep_on_buffer        1179 ms
cp                   sleep_on_buffer        7301 ms
cp                   sleep_on_buffer        8328 ms
cp                   sleep_on_buffer        6922 ms
cp                   sleep_on_buffer       10555 ms
Cache I/O            sleep_on_buffer       11963 ms
cp                   sleep_on_buffer        2368 ms
cp                   sleep_on_buffer        6905 ms
cp                   sleep_on_buffer        1686 ms
cp                   sleep_on_buffer        1219 ms
cp                   sleep_on_buffer        1793 ms
cp                   sleep_on_buffer        1899 ms
cp                   sleep_on_buffer        6412 ms
cp                   sleep_on_buffer        2799 ms
cp                   sleep_on_buffer        1316 ms
cp                   sleep_on_buffer        1211 ms
git                  sleep_on_buffer        1328 ms
imapd                sleep_on_buffer        4242 ms
imapd                sleep_on_buffer        2754 ms
imapd                sleep_on_buffer        4496 ms
imapd                sleep_on_buffer        4603 ms
imapd                sleep_on_buffer        7929 ms
imapd                sleep_on_buffer        8851 ms
imapd                sleep_on_buffer        2016 ms
imapd                sleep_on_buffer        1019 ms
imapd                sleep_on_buffer        1138 ms
git                  sleep_on_buffer        1510 ms
git                  sleep_on_buffer        1366 ms
git                  sleep_on_buffer        3445 ms
git                  sleep_on_buffer        2704 ms
git                  sleep_on_buffer        2057 ms
git                  sleep_on_buffer        1202 ms
git                  sleep_on_buffer        1293 ms
cat                  sleep_on_buffer        1505 ms
imapd                sleep_on_buffer        1263 ms
imapd                sleep_on_buffer        1347 ms
imapd                sleep_on_buffer        2910 ms
git                  sleep_on_buffer        1210 ms
git                  sleep_on_buffer        1199 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f51b1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff8118b789>] update_time+0x79/0xc0
[<ffffffff8118ba31>] touch_atime+0x161/0x170
[<ffffffff811105e3>] do_generic_file_read.constprop.35+0x363/0x440
[<ffffffff81111359>] generic_file_aio_read+0xd9/0x220
[<ffffffff81172b53>] do_sync_read+0xa3/0xe0
[<ffffffff8117327b>] vfs_read+0xab/0x170
[<ffffffff8117338d>] sys_read+0x4d/0x90
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Some of those stalls are awful -- 28 seconds to update atime seems
excessive. This is with relatime in use:

mel@machina:~ > mount | grep sd
/dev/sda8 on / type ext4 (rw,relatime,nobarrier,data=ordered)
/dev/sda6 on /home type ext4 (rw,relatime,nobarrier,data=ordered)
/dev/sda5 on /usr/src type ext4 (rw,relatime,nobarrier,data=ordered)

/tmp is mounted as tmpfs so I doubt it's a small write problem.
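
If the atime side is suspect despite relatime, it should be easy enough
to rule in or out by remounting noatime and seeing whether the
touch_atime stack above goes away. I have not tried this yet, so treat
it as a suggestion rather than a result:

mount -o remount,noatime /home
mount -o remount,noatime /usr/src

That would only take out the atime path though; the same
jbd2_journal_get_write_access wait shows up further down from file
creation and mtime updates on ordinary writes as well.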

Time stalled in this event:   466201 ms
Event count:                      45
git                  sleep_on_buffer        1011 ms
git                  sleep_on_buffer       29540 ms
git                  sleep_on_buffer        1485 ms
git                  sleep_on_buffer        1244 ms
git                  sleep_on_buffer       17896 ms
git                  sleep_on_buffer        1882 ms
git                  sleep_on_buffer       18249 ms
mv                   sleep_on_buffer        2107 ms
mv                   sleep_on_buffer       12655 ms
mv                   sleep_on_buffer        4290 ms
mv                   sleep_on_buffer        2640 ms
patch                sleep_on_buffer        2433 ms
patch                sleep_on_buffer        2305 ms
patch                sleep_on_buffer        3672 ms
git                  sleep_on_buffer       16663 ms
git                  sleep_on_buffer       16516 ms
git                  sleep_on_buffer       16168 ms
git                  sleep_on_buffer        1382 ms
git                  sleep_on_buffer        1695 ms
git                  sleep_on_buffer        1301 ms
git                  sleep_on_buffer       22039 ms
git                  sleep_on_buffer       19077 ms
git                  sleep_on_buffer        1208 ms
git                  sleep_on_buffer       20237 ms
git                  sleep_on_buffer        1284 ms
git                  sleep_on_buffer       19518 ms
git                  sleep_on_buffer        1959 ms
git                  sleep_on_buffer       27574 ms
git                  sleep_on_buffer        9708 ms
git                  sleep_on_buffer        1968 ms
git                  sleep_on_buffer       23600 ms
git                  sleep_on_buffer       12578 ms
git                  sleep_on_buffer       19573 ms
git                  sleep_on_buffer        2257 ms
git                  sleep_on_buffer       19068 ms
git                  sleep_on_buffer        2833 ms
git                  sleep_on_buffer        3182 ms
git                  sleep_on_buffer       22496 ms
git                  sleep_on_buffer       14030 ms
git                  sleep_on_buffer        1722 ms
git                  sleep_on_buffer       25652 ms
git                  sleep_on_buffer       15730 ms
git                  sleep_on_buffer       19096 ms
git                  sleep_on_buffer        1529 ms
git                  sleep_on_buffer        3149 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f9505>] ext4_find_entry+0x325/0x4f0
[<ffffffff811f96f9>] ext4_lookup.part.31+0x29/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff8117ca63>] __lookup_hash+0x33/0x40
[<ffffffff8158464f>] lookup_slow+0x40/0xa4
[<ffffffff8117efb2>] path_lookupat+0x222/0x780
[<ffffffff8117f53f>] filename_lookup+0x2f/0xc0
[<ffffffff81182074>] user_path_at_empty+0x54/0xa0
[<ffffffff811820cc>] user_path_at+0xc/0x10
[<ffffffff81177b39>] vfs_fstatat+0x49/0xa0
[<ffffffff81177ba9>] vfs_lstat+0x19/0x20
[<ffffffff81177d15>] sys_newlstat+0x15/0x30
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

These are directory lookups which might be a bit more reasonable to
stall on, but stalls of 30 seconds seem way out of order. Unfortunately
I do not have a comparison with older kernels but even when interactive
performance was bad on older kernels, it did not feel *this* bad.
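
For a rough comparison on an older kernel without setting up mmtests,
the stock facilities should be enough to tell whether 20-30 second
lookups are new behaviour. Assuming CONFIG_DETECT_HUNG_TASK and sysrq
are enabled, something like the following would do:

# warn in dmesg about anything stuck in D state for more than 10 seconds
echo 10 > /proc/sys/kernel/hung_task_timeout_secs

# or dump the stacks of currently blocked (D state) tasks on demand
echo w > /proc/sysrq-trigger

Neither gives the per-event millisecond accounting above, but the stacks
would be comparable.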

The rest of the mail is just the remaining stalls recorded. There are a
lot of them and they are all really high. Is this a known issue? It's
not necessarily an ext4 issue and could be an IO scheduler or some other
writeback change too. I've been offline for a while so could have missed
similar bug reports and/or fixes.
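
In case the IO scheduler theory is worth chasing, the scheduler and
queue depth in use are visible through the usual sysfs knobs, e.g.

cat /sys/block/sda/queue/scheduler
cat /sys/block/sda/queue/nr_requests

I have not dug into that side yet.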

Time stalled in this event:   437040 ms
Event count:                     106
git                  wait_on_page_bit       1517 ms
git                  wait_on_page_bit       2694 ms
git                  wait_on_page_bit       2829 ms
git                  wait_on_page_bit       2796 ms
git                  wait_on_page_bit       2625 ms
git                  wait_on_page_bit      14350 ms
git                  wait_on_page_bit       4529 ms
xchat                wait_on_page_bit       1928 ms
akregator            wait_on_page_bit       1116 ms
akregator            wait_on_page_bit       3556 ms
cat                  wait_on_page_bit       5311 ms
sequence-patch.      wait_on_page_bit       2555 ms
pool                 wait_on_page_bit       1485 ms
git                  wait_on_page_bit       6778 ms
git                  wait_on_page_bit       3464 ms
git                  wait_on_page_bit       2189 ms
pool                 wait_on_page_bit       3657 ms
compare-kernels      wait_on_page_bit       5729 ms
compare-kernels      wait_on_page_bit       4446 ms
git                  wait_on_page_bit       2011 ms
xchat                wait_on_page_bit       6250 ms
git                  wait_on_page_bit       2761 ms
git                  wait_on_page_bit       1157 ms
xchat                wait_on_page_bit       2670 ms
pool                 wait_on_page_bit       5964 ms
xchat                wait_on_page_bit       1805 ms
play                 wait_on_page_bit       1800 ms
xchat                wait_on_page_bit      12008 ms
cat                  wait_on_page_bit       3642 ms
sequence-patch.      wait_on_page_bit       2309 ms
sequence-patch.      wait_on_page_bit       5430 ms
cat                  wait_on_page_bit       2614 ms
sequence-patch.      wait_on_page_bit       2220 ms
git                  wait_on_page_bit       3505 ms
git                  wait_on_page_bit       4181 ms
mozStorage #2        wait_on_page_bit       1012 ms
xchat                wait_on_page_bit       1966 ms
pool                 wait_on_page_bit      14217 ms
pool                 wait_on_page_bit       3728 ms
xchat                wait_on_page_bit       1896 ms
play                 wait_on_page_bit       8731 ms
mutt                 wait_on_page_bit      14378 ms
play                 wait_on_page_bit       1208 ms
Cache I/O            wait_on_page_bit       1174 ms
xchat                wait_on_page_bit       1141 ms
mozStorage #2        wait_on_page_bit       1161 ms
mozStorage #2        wait_on_page_bit       6727 ms
Cache I/O            wait_on_page_bit       7559 ms
mozStorage #2        wait_on_page_bit       4630 ms
Cache I/O            wait_on_page_bit       4642 ms
mozStorage #2        wait_on_page_bit       1764 ms
mozStorage #2        wait_on_page_bit       2357 ms
Cache I/O            wait_on_page_bit       3694 ms
xchat                wait_on_page_bit       8484 ms
mozStorage #2        wait_on_page_bit       3958 ms
mozStorage #2        wait_on_page_bit       2067 ms
Cache I/O            wait_on_page_bit       2728 ms
xchat                wait_on_page_bit       4115 ms
Cache I/O            wait_on_page_bit       7738 ms
xchat                wait_on_page_bit       7279 ms
Cache I/O            wait_on_page_bit       4366 ms
mozStorage #2        wait_on_page_bit       2040 ms
mozStorage #2        wait_on_page_bit       1102 ms
mozStorage #2        wait_on_page_bit       4628 ms
Cache I/O            wait_on_page_bit       5127 ms
akregator            wait_on_page_bit       2897 ms
Cache I/O            wait_on_page_bit       1429 ms
mozStorage #3        wait_on_page_bit       1465 ms
git                  wait_on_page_bit       2830 ms
git                  wait_on_page_bit       2508 ms
mutt                 wait_on_page_bit       4955 ms
pool                 wait_on_page_bit       4495 ms
mutt                 wait_on_page_bit       7429 ms
akregator            wait_on_page_bit       3744 ms
mutt                 wait_on_page_bit      11632 ms
pool                 wait_on_page_bit      11632 ms
sshd                 wait_on_page_bit      16035 ms
mutt                 wait_on_page_bit      16254 ms
mutt                 wait_on_page_bit       3253 ms
mutt                 wait_on_page_bit       3254 ms
git                  wait_on_page_bit       2644 ms
git                  wait_on_page_bit       2434 ms
git                  wait_on_page_bit       8364 ms
git                  wait_on_page_bit       1618 ms
git                  wait_on_page_bit       5990 ms
git                  wait_on_page_bit       2663 ms
git                  wait_on_page_bit       1102 ms
git                  wait_on_page_bit       1160 ms
git                  wait_on_page_bit       1161 ms
git                  wait_on_page_bit       1608 ms
git                  wait_on_page_bit       2100 ms
git                  wait_on_page_bit       2215 ms
git                  wait_on_page_bit       1231 ms
git                  wait_on_page_bit       2274 ms
git                  wait_on_page_bit       6081 ms
git                  wait_on_page_bit       6877 ms
git                  wait_on_page_bit       2035 ms
git                  wait_on_page_bit       2568 ms
git                  wait_on_page_bit       4475 ms
pool                 wait_on_page_bit       1253 ms
mv                   sleep_on_buffer        1036 ms
git                  wait_on_page_bit       1876 ms
git                  wait_on_page_bit       2332 ms
git                  wait_on_page_bit       2840 ms
git                  wait_on_page_bit       1850 ms
git                  wait_on_page_bit       3943 ms
[<ffffffff8110f0e0>] wait_on_page_bit+0x70/0x80
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff8110f3aa>] generic_perform_write+0xca/0x210
[<ffffffff8110f548>] generic_file_buffered_write+0x58/0x90
[<ffffffff81110f96>] __generic_file_aio_write+0x1b6/0x3b0
[<ffffffff8111120a>] generic_file_aio_write+0x7a/0xf0
[<ffffffff811ea3a3>] ext4_file_write+0x83/0xd0
[<ffffffff81172a73>] do_sync_write+0xa3/0xe0
[<ffffffff811730fe>] vfs_write+0xae/0x180
[<ffffffff8117341d>] sys_write+0x4d/0x90
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:   417840 ms
Event count:                      56
xchat                sleep_on_buffer        8571 ms
xchat                sleep_on_buffer        1772 ms
xchat                sleep_on_buffer        4063 ms
xchat                sleep_on_buffer       16290 ms
xchat                sleep_on_buffer        3201 ms
compare-kernels      sleep_on_buffer        1698 ms
xchat                sleep_on_buffer       14631 ms
xchat                sleep_on_buffer       12970 ms
xchat                sleep_on_buffer        4182 ms
xchat                sleep_on_buffer        5449 ms
Cache I/O            sleep_on_buffer        4079 ms
xchat                sleep_on_buffer        8246 ms
xchat                sleep_on_buffer        6530 ms
xchat                sleep_on_buffer        2041 ms
xchat                sleep_on_buffer       15815 ms
pool                 sleep_on_buffer        4115 ms
tee                  sleep_on_buffer        2057 ms
xchat                sleep_on_buffer        4814 ms
tee                  sleep_on_buffer       66037 ms
Cache I/O            sleep_on_buffer        6601 ms
xchat                sleep_on_buffer       10208 ms
tee                  sleep_on_buffer        6064 ms
Cache I/O            sleep_on_buffer        2008 ms
xchat                sleep_on_buffer        5257 ms
git                  sleep_on_buffer        2032 ms
xchat                sleep_on_buffer        2313 ms
tee                  sleep_on_buffer        5287 ms
Cache I/O            sleep_on_buffer        1650 ms
akregator            sleep_on_buffer        1154 ms
tee                  sleep_on_buffer       10362 ms
xchat                sleep_on_buffer        6208 ms
xchat                sleep_on_buffer        4405 ms
Cache I/O            sleep_on_buffer        8580 ms
mozStorage #2        sleep_on_buffer        6573 ms
tee                  sleep_on_buffer       10180 ms
Cache I/O            sleep_on_buffer        7691 ms
mozStorage #3        sleep_on_buffer        5502 ms
xchat                sleep_on_buffer        2339 ms
Cache I/O            sleep_on_buffer        3819 ms
sshd                 sleep_on_buffer        7252 ms
tee                  sleep_on_buffer       11422 ms
Cache I/O            sleep_on_buffer        1661 ms
bash                 sleep_on_buffer       10905 ms
git                  sleep_on_buffer        1277 ms
git                  sleep_on_buffer       18599 ms
git                  sleep_on_buffer        1189 ms
git                  sleep_on_buffer       22945 ms
pool                 sleep_on_buffer       17753 ms
git                  sleep_on_buffer        1367 ms
git                  sleep_on_buffer        2223 ms
git                  sleep_on_buffer        1280 ms
git                  sleep_on_buffer        2061 ms
git                  sleep_on_buffer        1034 ms
pool                 sleep_on_buffer       18189 ms
git                  sleep_on_buffer        1344 ms
xchat                sleep_on_buffer        2545 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f51b1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff8118b789>] update_time+0x79/0xc0
[<ffffffff8118b868>] file_update_time+0x98/0x100
[<ffffffff81110f5c>] __generic_file_aio_write+0x17c/0x3b0
[<ffffffff8111120a>] generic_file_aio_write+0x7a/0xf0
[<ffffffff811ea3a3>] ext4_file_write+0x83/0xd0
[<ffffffff81172a73>] do_sync_write+0xa3/0xe0
[<ffffffff811730fe>] vfs_write+0xae/0x180
[<ffffffff8117341d>] sys_write+0x4d/0x90
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:   283964 ms
Event count:                      27
git                  sleep_on_buffer       19088 ms
git                  sleep_on_buffer        1177 ms
git                  sleep_on_buffer       30745 ms
git                  sleep_on_buffer        4782 ms
git                  sleep_on_buffer       11435 ms
git                  sleep_on_buffer        2816 ms
git                  sleep_on_buffer        5088 ms
git-merge            sleep_on_buffer       18801 ms
git                  sleep_on_buffer        1415 ms
git                  sleep_on_buffer       16005 ms
git                  sleep_on_buffer        2178 ms
git                  sleep_on_buffer       14354 ms
git                  sleep_on_buffer       12612 ms
git                  sleep_on_buffer        2785 ms
git                  sleep_on_buffer       15498 ms
git                  sleep_on_buffer       15331 ms
git                  sleep_on_buffer        1151 ms
git                  sleep_on_buffer        1320 ms
git                  sleep_on_buffer        8787 ms
git                  sleep_on_buffer        2199 ms
git                  sleep_on_buffer        1006 ms
git                  sleep_on_buffer       23644 ms
git                  sleep_on_buffer        2407 ms
git                  sleep_on_buffer        1169 ms
git                  sleep_on_buffer       25022 ms
git                  sleep_on_buffer       18651 ms
git                  sleep_on_buffer       24498 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811fb6cf>] ext4_orphan_add+0x10f/0x1f0
[<ffffffff811fc6cb>] ext4_unlink+0x32b/0x350
[<ffffffff8117daef>] vfs_unlink.part.31+0x7f/0xe0
[<ffffffff8117f9d7>] vfs_unlink+0x37/0x50
[<ffffffff8117fbff>] do_unlinkat+0x20f/0x260
[<ffffffff81182611>] sys_unlink+0x11/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:   266300 ms
Event count:                      69
git                  sleep_on_buffer        2773 ms
akregator            sleep_on_buffer        1957 ms
git                  sleep_on_buffer        1417 ms
imapd                sleep_on_buffer        9532 ms
imapd                sleep_on_buffer       57801 ms
pool                 sleep_on_buffer        7761 ms
imapd                sleep_on_buffer        1444 ms
patch                sleep_on_buffer        3872 ms
imapd                sleep_on_buffer        6422 ms
imapd                sleep_on_buffer        1748 ms
pool                 sleep_on_buffer       10552 ms
imapd                sleep_on_buffer       10114 ms
imapd                sleep_on_buffer        7575 ms
mutt                 sleep_on_buffer        3901 ms
bzip2                sleep_on_buffer        1104 ms
imapd                sleep_on_buffer        4983 ms
imapd                sleep_on_buffer        1746 ms
mutt                 sleep_on_buffer        1881 ms
imapd                sleep_on_buffer        1067 ms
imapd                sleep_on_buffer        1863 ms
imapd                sleep_on_buffer        1508 ms
imapd                sleep_on_buffer        1508 ms
offlineimap          sleep_on_buffer        1385 ms
imapd                sleep_on_buffer        1653 ms
imapd                sleep_on_buffer        1179 ms
imapd                sleep_on_buffer        3473 ms
imapd                sleep_on_buffer       10130 ms
vim                  sleep_on_buffer        1690 ms
imapd                sleep_on_buffer        3102 ms
dconf-service        sleep_on_buffer        5097 ms
imapd                sleep_on_buffer        2888 ms
cp                   sleep_on_buffer        1036 ms
imapd                sleep_on_buffer       22501 ms
rsync                sleep_on_buffer        5026 ms
imapd                sleep_on_buffer        2897 ms
rsync                sleep_on_buffer        1200 ms
akregator            sleep_on_buffer        4780 ms
Cache I/O            sleep_on_buffer        1433 ms
imapd                sleep_on_buffer        2588 ms
akregator            sleep_on_buffer        1576 ms
vi                   sleep_on_buffer        2086 ms
firefox              sleep_on_buffer        4718 ms
imapd                sleep_on_buffer        1158 ms
git                  sleep_on_buffer        2073 ms
git                  sleep_on_buffer        1017 ms
git                  sleep_on_buffer        1616 ms
git                  sleep_on_buffer        1043 ms
imapd                sleep_on_buffer        1746 ms
imapd                sleep_on_buffer        1007 ms
git                  sleep_on_buffer        1146 ms
git                  sleep_on_buffer        1916 ms
git                  sleep_on_buffer        1059 ms
git                  sleep_on_buffer        1801 ms
git                  sleep_on_buffer        1208 ms
git                  sleep_on_buffer        1486 ms
git                  sleep_on_buffer        1806 ms
git                  sleep_on_buffer        1295 ms
git                  sleep_on_buffer        1461 ms
git                  sleep_on_buffer        1371 ms
git                  sleep_on_buffer        2010 ms
git                  sleep_on_buffer        1622 ms
git                  sleep_on_buffer        1453 ms
git                  sleep_on_buffer        1392 ms
git                  sleep_on_buffer        1329 ms
git                  sleep_on_buffer        1773 ms
git                  sleep_on_buffer        1750 ms
git                  sleep_on_buffer        2354 ms
imapd                sleep_on_buffer        3201 ms
imapd                sleep_on_buffer        2240 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811ebe24>] __ext4_new_inode+0x294/0x10c0
[<ffffffff811fac5b>] ext4_create+0xbb/0x190
[<ffffffff81180aa5>] vfs_create+0xb5/0x120
[<ffffffff81180c4e>] lookup_open+0x13e/0x1d0
[<ffffffff81180fe7>] do_last+0x307/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:   240174 ms
Event count:                      34
systemd-journal      sleep_on_buffer        1321 ms
systemd-journal      sleep_on_buffer        4851 ms
systemd-journal      sleep_on_buffer        3341 ms
systemd-journal      sleep_on_buffer       17219 ms
systemd-journal      sleep_on_buffer        3190 ms
systemd-journal      sleep_on_buffer       13420 ms
systemd-journal      sleep_on_buffer       23421 ms
systemd-journal      sleep_on_buffer        4987 ms
systemd-journal      sleep_on_buffer       16358 ms
systemd-journal      sleep_on_buffer        2734 ms
mozStorage #2        sleep_on_buffer        1454 ms
systemd-journal      sleep_on_buffer        4524 ms
mozStorage #2        sleep_on_buffer        1211 ms
systemd-journal      sleep_on_buffer        1711 ms
systemd-journal      sleep_on_buffer        2158 ms
mkdir                wait_on_page_bit_killable   1084 ms
systemd-journal      sleep_on_buffer        5673 ms
mozStorage #2        sleep_on_buffer        1800 ms
systemd-journal      sleep_on_buffer        5586 ms
mozStorage #2        sleep_on_buffer        3199 ms
nm-dhcp-client.      wait_on_page_bit_killable   1060 ms
mozStorage #2        sleep_on_buffer        6669 ms
systemd-journal      sleep_on_buffer        3603 ms
systemd-journal      sleep_on_buffer        7666 ms
systemd-journal      sleep_on_buffer       13961 ms
systemd-journal      sleep_on_buffer        9063 ms
systemd-journal      sleep_on_buffer        4120 ms
systemd-journal      sleep_on_buffer        3328 ms
systemd-journal      sleep_on_buffer       12093 ms
systemd-journal      sleep_on_buffer        5464 ms
systemd-journal      sleep_on_buffer       12649 ms
systemd-journal      sleep_on_buffer       23460 ms
systemd-journal      sleep_on_buffer       13123 ms
systemd-journal      sleep_on_buffer        4673 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f51b1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff8118b789>] update_time+0x79/0xc0
[<ffffffff8118b868>] file_update_time+0x98/0x100
[<ffffffff811f539c>] ext4_page_mkwrite+0x5c/0x470
[<ffffffff8113740e>] do_wp_page+0x5ce/0x7d0
[<ffffffff81139598>] handle_pte_fault+0x1c8/0x200
[<ffffffff8113a731>] handle_mm_fault+0x271/0x390
[<ffffffff81597959>] __do_page_fault+0x169/0x520
[<ffffffff81597d19>] do_page_fault+0x9/0x10
[<ffffffff81594488>] page_fault+0x28/0x30
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:   212304 ms
Event count:                      41
pool                 sleep_on_buffer        1216 ms
pool                 sleep_on_buffer       36361 ms
cp                   sleep_on_buffer        5034 ms
git                  sleep_on_buffer        2344 ms
gnuplot              sleep_on_buffer        1733 ms
gnuplot              sleep_on_buffer        2303 ms
gnuplot              sleep_on_buffer        1982 ms
gnuplot              sleep_on_buffer        2491 ms
gnuplot              sleep_on_buffer        1520 ms
gnuplot              sleep_on_buffer        1209 ms
gnuplot              sleep_on_buffer        1188 ms
gnuplot              sleep_on_buffer        1654 ms
gnuplot              sleep_on_buffer        1403 ms
gnuplot              sleep_on_buffer        1386 ms
gnuplot              sleep_on_buffer        1899 ms
gnuplot              sleep_on_buffer        2673 ms
gnuplot              sleep_on_buffer        2158 ms
gnuplot              sleep_on_buffer        1780 ms
gnuplot              sleep_on_buffer        1624 ms
gnuplot              sleep_on_buffer        1704 ms
gnuplot              sleep_on_buffer        2207 ms
gnuplot              sleep_on_buffer        2557 ms
gnuplot              sleep_on_buffer        1692 ms
gnuplot              sleep_on_buffer        1686 ms
gnuplot              sleep_on_buffer        1258 ms
offlineimap          sleep_on_buffer        1217 ms
pool                 sleep_on_buffer       13434 ms
offlineimap          sleep_on_buffer       30091 ms
offlineimap          sleep_on_buffer        9048 ms
offlineimap          sleep_on_buffer       13754 ms
offlineimap          sleep_on_buffer       36560 ms
offlineimap          sleep_on_buffer        1465 ms
cp                   sleep_on_buffer        1525 ms
cp                   sleep_on_buffer        2193 ms
DOM Worker           sleep_on_buffer        5563 ms
DOM Worker           sleep_on_buffer        3597 ms
cp                   sleep_on_buffer        1261 ms
git                  sleep_on_buffer        1427 ms
git                  sleep_on_buffer        1097 ms
git                  sleep_on_buffer        1232 ms
offlineimap          sleep_on_buffer        5778 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f9505>] ext4_find_entry+0x325/0x4f0
[<ffffffff811f96f9>] ext4_lookup.part.31+0x29/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff81180bd8>] lookup_open+0xc8/0x1d0
[<ffffffff81180fe7>] do_last+0x307/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:   211510 ms
Event count:                      20
flush-8:0            sleep_on_buffer       29387 ms
flush-8:0            sleep_on_buffer        2159 ms
flush-8:0            sleep_on_buffer        8593 ms
flush-8:0            sleep_on_buffer        3143 ms
flush-8:0            sleep_on_buffer        4641 ms
flush-8:0            sleep_on_buffer       17279 ms
flush-8:0            sleep_on_buffer        2210 ms
flush-8:0            sleep_on_buffer       15948 ms
flush-8:0            sleep_on_buffer        4686 ms
flush-8:0            sleep_on_buffer        7027 ms
flush-8:0            sleep_on_buffer       17871 ms
flush-8:0            sleep_on_buffer        3262 ms
flush-8:0            sleep_on_buffer        7311 ms
flush-8:0            sleep_on_buffer       11255 ms
flush-8:0            sleep_on_buffer        5693 ms
flush-8:0            sleep_on_buffer        8628 ms
flush-8:0            sleep_on_buffer       10917 ms
flush-8:0            sleep_on_buffer       17497 ms
flush-8:0            sleep_on_buffer       15750 ms
flush-8:0            sleep_on_buffer       18253 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff8121a8f2>] ext4_ext_get_access.isra.39+0x22/0x30
[<ffffffff8121d506>] ext4_split_extent_at+0xb6/0x390
[<ffffffff8121e038>] ext4_split_extent.isra.47+0x108/0x130
[<ffffffff8121e3ae>] ext4_ext_convert_to_initialized+0x15e/0x590
[<ffffffff8121ee7b>] ext4_ext_handle_uninitialized_extents+0x2fb/0x3c0
[<ffffffff8121f547>] ext4_ext_map_blocks+0x5d7/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119c5d7>] wb_check_old_data_flush+0x97/0xa0
[<ffffffff8119de49>] wb_do_writeback+0x149/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:   201192 ms
Event count:                      23
imapd                sleep_on_buffer        3770 ms
imapd                sleep_on_buffer       37050 ms
make                 sleep_on_buffer        5342 ms
compare-mmtests      sleep_on_buffer        1774 ms
scp                  sleep_on_buffer        2478 ms
scp                  sleep_on_buffer        2368 ms
imapd                sleep_on_buffer        3163 ms
pool                 sleep_on_buffer        2033 ms
imapd                sleep_on_buffer        1311 ms
imapd                sleep_on_buffer       11011 ms
imapd                sleep_on_buffer        1345 ms
imapd                sleep_on_buffer       20545 ms
imapd                sleep_on_buffer       19511 ms
imapd                sleep_on_buffer       20863 ms
imapd                sleep_on_buffer       32313 ms
imapd                sleep_on_buffer        6984 ms
imapd                sleep_on_buffer        8152 ms
imapd                sleep_on_buffer        3038 ms
imapd                sleep_on_buffer        8032 ms
imapd                sleep_on_buffer        3649 ms
imapd                sleep_on_buffer        2195 ms
imapd                sleep_on_buffer        1848 ms
mv                   sleep_on_buffer        2417 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f9505>] ext4_find_entry+0x325/0x4f0
[<ffffffff811f96f9>] ext4_lookup.part.31+0x29/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff8117ca63>] __lookup_hash+0x33/0x40
[<ffffffff8158464f>] lookup_slow+0x40/0xa4
[<ffffffff8117efb2>] path_lookupat+0x222/0x780
[<ffffffff8117f53f>] filename_lookup+0x2f/0xc0
[<ffffffff81182074>] user_path_at_empty+0x54/0xa0
[<ffffffff811820cc>] user_path_at+0xc/0x10
[<ffffffff81177b39>] vfs_fstatat+0x49/0xa0
[<ffffffff81177bc6>] vfs_stat+0x16/0x20
[<ffffffff81177ce5>] sys_newstat+0x15/0x30
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:   169878 ms
Event count:                      56
git                  wait_on_page_bit       8573 ms
git                  wait_on_page_bit       2986 ms
git                  wait_on_page_bit       1811 ms
git                  wait_on_page_bit       2623 ms
git                  wait_on_page_bit       1419 ms
git                  wait_on_page_bit       1244 ms
git                  wait_on_page_bit       1134 ms
git                  wait_on_page_bit       5825 ms
git                  wait_on_page_bit       3567 ms
git                  wait_on_page_bit       1119 ms
git                  wait_on_page_bit       1375 ms
git                  wait_on_page_bit       3726 ms
git                  wait_on_page_bit       2670 ms
git                  wait_on_page_bit       4141 ms
git                  wait_on_page_bit       3858 ms
git                  wait_on_page_bit       6684 ms
git                  wait_on_page_bit       5355 ms
gen-report.sh        wait_on_page_bit       4747 ms
git                  wait_on_page_bit       6752 ms
git                  wait_on_page_bit       1229 ms
git                  wait_on_page_bit       4409 ms
git                  wait_on_page_bit       3101 ms
git                  wait_on_page_bit       1817 ms
git                  wait_on_page_bit       1687 ms
git                  wait_on_page_bit       3683 ms
git                  wait_on_page_bit       2031 ms
git                  wait_on_page_bit       2138 ms
git                  wait_on_page_bit       1513 ms
git                  wait_on_page_bit       1804 ms
git                  wait_on_page_bit       2559 ms
git                  wait_on_page_bit       7958 ms
git                  wait_on_page_bit       6265 ms
git                  wait_on_page_bit       1261 ms
git                  wait_on_page_bit       4018 ms
git                  wait_on_page_bit       1450 ms
git                  wait_on_page_bit       1821 ms
git                  wait_on_page_bit       3186 ms
git                  wait_on_page_bit       1513 ms
git                  wait_on_page_bit       3215 ms
git                  wait_on_page_bit       1262 ms
git                  wait_on_page_bit       8188 ms
git                  sleep_on_buffer        1019 ms
git                  wait_on_page_bit       5233 ms
git                  wait_on_page_bit       1842 ms
git                  wait_on_page_bit       1378 ms
git                  wait_on_page_bit       1386 ms
git                  wait_on_page_bit       2016 ms
git                  wait_on_page_bit       1901 ms
git                  wait_on_page_bit       2750 ms
git                  sleep_on_buffer        1152 ms
git                  wait_on_page_bit       1169 ms
git                  wait_on_page_bit       1371 ms
git                  wait_on_page_bit       1916 ms
git                  wait_on_page_bit       1630 ms
git                  wait_on_page_bit       8286 ms
git                  wait_on_page_bit       1112 ms
[<ffffffff8110f0e0>] wait_on_page_bit+0x70/0x80
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff8111d620>] truncate_inode_pages+0x10/0x20
[<ffffffff8111d677>] truncate_pagecache+0x47/0x70
[<ffffffff811f2f4d>] ext4_setattr+0x17d/0x640
[<ffffffff8118d132>] notify_change+0x1f2/0x3c0
[<ffffffff811715d9>] do_truncate+0x59/0xa0
[<ffffffff8117d186>] handle_truncate+0x66/0xa0
[<ffffffff81181306>] do_last+0x626/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:   167244 ms
Event count:                     118
folder-markup.s      sleep_on_buffer        2055 ms
folder-markup.s      sleep_on_buffer        3917 ms
mv                   sleep_on_buffer        1025 ms
folder-markup.s      sleep_on_buffer        1670 ms
folder-markup.s      sleep_on_buffer        1144 ms
folder-markup.s      sleep_on_buffer        1063 ms
folder-markup.s      sleep_on_buffer        1385 ms
folder-markup.s      sleep_on_buffer        1753 ms
folder-markup.s      sleep_on_buffer        1351 ms
folder-markup.s      sleep_on_buffer        1143 ms
folder-markup.s      sleep_on_buffer        1581 ms
folder-markup.s      sleep_on_buffer        1747 ms
folder-markup.s      sleep_on_buffer        1241 ms
folder-markup.s      sleep_on_buffer        1419 ms
folder-markup.s      sleep_on_buffer        1429 ms
folder-markup.s      sleep_on_buffer        1112 ms
git                  sleep_on_buffer        1190 ms
git                  sleep_on_buffer        1190 ms
git                  sleep_on_buffer        1050 ms
git                  sleep_on_buffer        1463 ms
git                  sleep_on_buffer        1376 ms
folder-markup.s      sleep_on_buffer        1481 ms
folder-markup.s      sleep_on_buffer        1424 ms
folder-markup.s      sleep_on_buffer        1633 ms
folder-markup.s      sleep_on_buffer        1012 ms
folder-markup.s      sleep_on_buffer        1706 ms
folder-markup.s      sleep_on_buffer        1246 ms
folder-markup.s      sleep_on_buffer        1275 ms
git                  sleep_on_buffer        1484 ms
git                  sleep_on_buffer        1216 ms
git                  sleep_on_buffer        1065 ms
git                  sleep_on_buffer        1455 ms
folder-markup.s      sleep_on_buffer        1063 ms
folder-markup.s      sleep_on_buffer        3059 ms
folder-markup.s      sleep_on_buffer        1140 ms
folder-markup.s      sleep_on_buffer        1353 ms
mv                   sleep_on_buffer        1050 ms
folder-markup.s      sleep_on_buffer        1209 ms
git                  sleep_on_buffer        1341 ms
scp                  sleep_on_buffer        4975 ms
folder-markup.s      sleep_on_buffer        1743 ms
folder-markup.s      sleep_on_buffer        1280 ms
folder-markup.s      sleep_on_buffer        2140 ms
folder-markup.s      sleep_on_buffer        1138 ms
folder-markup.s      sleep_on_buffer        1140 ms
folder-markup.s      sleep_on_buffer        1162 ms
folder-markup.s      sleep_on_buffer        1023 ms
git                  sleep_on_buffer        2174 ms
git                  sleep_on_buffer        1306 ms
git                  sleep_on_buffer        1224 ms
git                  sleep_on_buffer        1359 ms
git                  sleep_on_buffer        1551 ms
git                  sleep_on_buffer        1068 ms
git                  sleep_on_buffer        1367 ms
git                  sleep_on_buffer        1292 ms
git                  sleep_on_buffer        1369 ms
git                  sleep_on_buffer        1554 ms
git                  sleep_on_buffer        1273 ms
git                  sleep_on_buffer        1365 ms
mv                   sleep_on_buffer        1107 ms
folder-markup.s      sleep_on_buffer        1519 ms
folder-markup.s      sleep_on_buffer        1253 ms
folder-markup.s      sleep_on_buffer        1195 ms
mv                   sleep_on_buffer        1091 ms
git                  sleep_on_buffer        1147 ms
git                  sleep_on_buffer        1271 ms
git                  sleep_on_buffer        1056 ms
git                  sleep_on_buffer        1134 ms
git                  sleep_on_buffer        1252 ms
git                  sleep_on_buffer        1352 ms
git                  sleep_on_buffer        1449 ms
folder-markup.s      sleep_on_buffer        1732 ms
folder-markup.s      sleep_on_buffer        1332 ms
folder-markup.s      sleep_on_buffer        1450 ms
git                  sleep_on_buffer        1102 ms
git                  sleep_on_buffer        1771 ms
git                  sleep_on_buffer        1225 ms
git                  sleep_on_buffer        1089 ms
git                  sleep_on_buffer        1083 ms
folder-markup.s      sleep_on_buffer        1071 ms
folder-markup.s      sleep_on_buffer        1186 ms
folder-markup.s      sleep_on_buffer        1170 ms
git                  sleep_on_buffer        1249 ms
git                  sleep_on_buffer        1255 ms
folder-markup.s      sleep_on_buffer        1563 ms
folder-markup.s      sleep_on_buffer        1258 ms
git                  sleep_on_buffer        2066 ms
git                  sleep_on_buffer        1493 ms
git                  sleep_on_buffer        1515 ms
git                  sleep_on_buffer        1380 ms
git                  sleep_on_buffer        1238 ms
git                  sleep_on_buffer        1393 ms
git                  sleep_on_buffer        1040 ms
git                  sleep_on_buffer        1986 ms
git                  sleep_on_buffer        1293 ms
git                  sleep_on_buffer        1209 ms
git                  sleep_on_buffer        1098 ms
git                  sleep_on_buffer        1091 ms
git                  sleep_on_buffer        1701 ms
git                  sleep_on_buffer        2237 ms
git                  sleep_on_buffer        1810 ms
folder-markup.s      sleep_on_buffer        1166 ms
folder-markup.s      sleep_on_buffer        2064 ms
folder-markup.s      sleep_on_buffer        1285 ms
folder-markup.s      sleep_on_buffer        1129 ms
folder-markup.s      sleep_on_buffer        1080 ms
git                  sleep_on_buffer        1277 ms
git                  sleep_on_buffer        1280 ms
folder-markup.s      sleep_on_buffer        1298 ms
folder-markup.s      sleep_on_buffer        1355 ms
folder-markup.s      sleep_on_buffer        1043 ms
folder-markup.s      sleep_on_buffer        1204 ms
git                  sleep_on_buffer        1068 ms
git                  sleep_on_buffer        1654 ms
git                  sleep_on_buffer        1380 ms
git                  sleep_on_buffer        1289 ms
git                  sleep_on_buffer        1442 ms
git                  sleep_on_buffer        1299 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f9dd2>] ext4_dx_add_entry+0xc2/0x590
[<ffffffff811fa925>] ext4_add_entry+0x265/0x2d0
[<ffffffff811fa9b6>] ext4_add_nondir+0x26/0x80
[<ffffffff811fac9f>] ext4_create+0xff/0x190
[<ffffffff81180aa5>] vfs_create+0xb5/0x120
[<ffffffff81180c4e>] lookup_open+0x13e/0x1d0
[<ffffffff81180fe7>] do_last+0x307/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:   135113 ms
Event count:                     116
flush-8:16           get_request            1274 ms
flush-8:16           get_request            1098 ms
flush-8:16           get_request            1079 ms
flush-8:16           get_request            1234 ms
flush-8:16           get_request            1229 ms
flush-8:16           get_request            1056 ms
flush-8:16           get_request            1096 ms
flush-8:16           get_request            1092 ms
flush-8:16           get_request            1099 ms
flush-8:16           get_request            1057 ms
flush-8:16           get_request            1103 ms
flush-8:16           get_request            1207 ms
flush-8:16           get_request            1087 ms
flush-8:16           get_request            1060 ms
flush-8:16           get_request            1080 ms
flush-8:16           get_request            1196 ms
flush-8:16           get_request            1453 ms
flush-8:16           get_request            1084 ms
flush-8:16           get_request            1051 ms
flush-8:16           get_request            1084 ms
flush-8:16           get_request            1132 ms
flush-8:16           get_request            1164 ms
flush-8:16           get_request            1063 ms
flush-8:16           get_request            1221 ms
flush-8:16           get_request            1074 ms
flush-8:16           get_request            1099 ms
flush-8:16           get_request            1077 ms
flush-8:16           get_request            1243 ms
flush-8:16           get_request            1080 ms
flush-8:16           get_request            1078 ms
flush-8:16           get_request            1101 ms
flush-8:16           get_request            1080 ms
flush-8:16           get_request            1056 ms
flush-8:16           get_request            1333 ms
flush-8:16           get_request            1103 ms
flush-8:16           get_request            1216 ms
flush-8:16           get_request            1108 ms
flush-8:16           get_request            1109 ms
flush-8:16           get_request            1113 ms
flush-8:16           get_request            1349 ms
flush-8:16           get_request            1086 ms
flush-8:16           get_request            1070 ms
flush-8:16           get_request            1064 ms
flush-8:16           get_request            1091 ms
flush-8:16           get_request            1064 ms
flush-8:16           get_request            1222 ms
flush-8:16           get_request            1103 ms
flush-8:16           get_request            1434 ms
flush-8:16           get_request            1124 ms
flush-8:16           get_request            1359 ms
flush-8:16           get_request            1060 ms
flush-8:16           get_request            1057 ms
flush-8:16           get_request            1066 ms
flush-8:16           get_request            1357 ms
flush-8:16           get_request            1089 ms
flush-8:16           get_request            1071 ms
flush-8:16           get_request            1196 ms
flush-8:16           get_request            1091 ms
flush-8:16           get_request            1203 ms
flush-8:16           get_request            1100 ms
flush-8:16           get_request            1208 ms
flush-8:16           get_request            1113 ms
flush-8:16           get_request            1260 ms
flush-8:16           get_request            1480 ms
flush-8:16           get_request            1054 ms
flush-8:16           get_request            1211 ms
flush-8:16           get_request            1101 ms
flush-8:16           get_request            1098 ms
flush-8:16           get_request            1190 ms
flush-8:16           get_request            1046 ms
flush-8:16           get_request            1066 ms
flush-8:16           get_request            1204 ms
flush-8:16           get_request            1076 ms
flush-8:16           get_request            1094 ms
flush-8:16           get_request            1094 ms
flush-8:16           get_request            1081 ms
flush-8:16           get_request            1080 ms
flush-8:16           get_request            1193 ms
flush-8:16           get_request            1066 ms
flush-8:16           get_request            1069 ms
flush-8:16           get_request            1081 ms
flush-8:16           get_request            1107 ms
flush-8:16           get_request            1375 ms
flush-8:16           get_request            1080 ms
flush-8:16           get_request            1068 ms
flush-8:16           get_request            1077 ms
flush-8:16           get_request            1108 ms
flush-8:16           get_request            1080 ms
flush-8:16           get_request            1098 ms
flush-8:16           get_request            1063 ms
flush-8:16           get_request            1074 ms
flush-8:16           get_request            1072 ms
flush-8:16           get_request            1038 ms
flush-8:16           get_request            1058 ms
flush-8:16           get_request            1202 ms
flush-8:16           get_request            1359 ms
flush-8:16           get_request            1190 ms
flush-8:16           get_request            1497 ms
flush-8:16           get_request            2173 ms
flush-8:16           get_request            1199 ms
flush-8:16           get_request            1358 ms
flush-8:16           get_request            1384 ms
flush-8:16           get_request            1355 ms
flush-8:16           get_request            1327 ms
flush-8:16           get_request            1312 ms
flush-8:16           get_request            1318 ms
flush-8:16           get_request            1093 ms
flush-8:16           get_request            1265 ms
flush-8:16           get_request            1155 ms
flush-8:16           get_request            1107 ms
flush-8:16           get_request            1263 ms
flush-8:16           get_request            1104 ms
flush-8:16           get_request            1122 ms
flush-8:16           get_request            1578 ms
flush-8:16           get_request            1089 ms
flush-8:16           get_request            1075 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff812a4b1f>] generic_make_request.part.59+0x6f/0xa0
[<ffffffff812a5050>] generic_make_request+0x60/0x70
[<ffffffff812a50c7>] submit_bio+0x67/0x130
[<ffffffff811f6014>] ext4_io_submit+0x24/0x60
[<ffffffff811f2265>] ext4_writepage+0x135/0x220
[<ffffffff81119292>] __writepage+0x12/0x40
[<ffffffff81119a96>] write_cache_pages+0x206/0x460
[<ffffffff81119d35>] generic_writepages+0x45/0x70
[<ffffffff8111ac15>] do_writepages+0x25/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119de90>] wb_do_writeback+0x190/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:   115939 ms
Event count:                      23
bash                 sleep_on_buffer        3076 ms
du                   sleep_on_buffer        2364 ms
du                   sleep_on_buffer        1515 ms
git                  sleep_on_buffer        1706 ms
rm                   sleep_on_buffer       10595 ms
find                 sleep_on_buffer        2048 ms
rm                   sleep_on_buffer        9146 ms
rm                   sleep_on_buffer        8220 ms
rm                   sleep_on_buffer        6080 ms
cp                   sleep_on_buffer        6302 ms
ls                   sleep_on_buffer        1225 ms
cp                   sleep_on_buffer        6279 ms
cp                   sleep_on_buffer        1164 ms
cp                   sleep_on_buffer        3365 ms
cp                   sleep_on_buffer        2191 ms
cp                   sleep_on_buffer        1367 ms
du                   sleep_on_buffer        4155 ms
cp                   sleep_on_buffer        3906 ms
cp                   sleep_on_buffer        4758 ms
rsync                sleep_on_buffer        6575 ms
git                  sleep_on_buffer        1688 ms
git                  sleep_on_buffer       26470 ms
git                  sleep_on_buffer        1744 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f51b1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff8118b789>] update_time+0x79/0xc0
[<ffffffff8118ba31>] touch_atime+0x161/0x170
[<ffffffff811849b2>] vfs_readdir+0xc2/0xe0
[<ffffffff81184ae9>] sys_getdents+0x89/0x110
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:   101122 ms
Event count:                      18
flush-8:0            sleep_on_buffer       21732 ms
flush-8:0            sleep_on_buffer        2211 ms
flush-8:0            sleep_on_buffer        1480 ms
flush-8:0            sleep_on_buffer       16292 ms
flush-8:0            sleep_on_buffer        2975 ms
flush-8:0            sleep_on_buffer        7025 ms
flush-8:0            sleep_on_buffer        5535 ms
flush-8:0            sleep_on_buffer        1885 ms
flush-8:0            sleep_on_buffer        1329 ms
flush-8:0            sleep_on_buffer        1374 ms
flush-8:0            sleep_on_buffer        1490 ms
flush-8:0            sleep_on_buffer       16341 ms
flush-8:0            sleep_on_buffer       14939 ms
flush-8:0            sleep_on_buffer        1202 ms
flush-8:0            sleep_on_buffer        1262 ms
flush-8:0            sleep_on_buffer        1121 ms
flush-8:0            sleep_on_buffer        1571 ms
flush-8:0            sleep_on_buffer        1358 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff8121a8f2>] ext4_ext_get_access.isra.39+0x22/0x30
[<ffffffff8121d506>] ext4_split_extent_at+0xb6/0x390
[<ffffffff8121e038>] ext4_split_extent.isra.47+0x108/0x130
[<ffffffff8121e3ae>] ext4_ext_convert_to_initialized+0x15e/0x590
[<ffffffff8121ee7b>] ext4_ext_handle_uninitialized_extents+0x2fb/0x3c0
[<ffffffff8121f547>] ext4_ext_map_blocks+0x5d7/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119de90>] wb_do_writeback+0x190/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    98613 ms
Event count:                       8
git                  sleep_on_buffer       14529 ms
git                  sleep_on_buffer        4477 ms
git                  sleep_on_buffer       10045 ms
git                  sleep_on_buffer       11068 ms
git                  sleep_on_buffer       18777 ms
git                  sleep_on_buffer        9434 ms
git                  sleep_on_buffer       12262 ms
git                  sleep_on_buffer       18021 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f4e95>] ext4_evict_inode+0x1e5/0x4c0
[<ffffffff8118bcbf>] evict+0xaf/0x1b0
[<ffffffff8118c543>] iput_final+0xd3/0x160
[<ffffffff8118c609>] iput+0x39/0x50
[<ffffffff81187248>] dentry_iput+0x98/0xe0
[<ffffffff81188ac8>] dput+0x128/0x230
[<ffffffff81182c4a>] sys_renameat+0x33a/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    94944 ms
Event count:                      11
git                  sleep_on_buffer       16110 ms
git                  sleep_on_buffer        6508 ms
git                  sleep_on_buffer       23186 ms
git                  sleep_on_buffer       25228 ms
git-merge            sleep_on_buffer        1672 ms
konqueror            sleep_on_buffer        1411 ms
git                  sleep_on_buffer        1803 ms
git                  sleep_on_buffer       15397 ms
git                  sleep_on_buffer        1276 ms
git                  sleep_on_buffer        1012 ms
git                  sleep_on_buffer        1341 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f9505>] ext4_find_entry+0x325/0x4f0
[<ffffffff811fc3e1>] ext4_unlink+0x41/0x350
[<ffffffff8117daef>] vfs_unlink.part.31+0x7f/0xe0
[<ffffffff8117f9d7>] vfs_unlink+0x37/0x50
[<ffffffff8117fbff>] do_unlinkat+0x20f/0x260
[<ffffffff81182611>] sys_unlink+0x11/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    93658 ms
Event count:                      26
flush-8:0            sleep_on_buffer        1294 ms
flush-8:0            sleep_on_buffer        2856 ms
flush-8:0            sleep_on_buffer        3764 ms
flush-8:0            sleep_on_buffer        5086 ms
flush-8:0            sleep_on_buffer        1203 ms
flush-8:0            sleep_on_buffer        1289 ms
flush-8:0            sleep_on_buffer        1264 ms
flush-8:0            sleep_on_buffer        1252 ms
flush-8:0            sleep_on_buffer        2997 ms
flush-8:0            sleep_on_buffer        2765 ms
flush-8:0            sleep_on_buffer        4235 ms
flush-8:0            sleep_on_buffer        5205 ms
flush-8:0            sleep_on_buffer        6971 ms
flush-8:0            sleep_on_buffer        4155 ms
ps                   wait_on_page_bit_killable   1054 ms
flush-8:0            sleep_on_buffer        3719 ms
flush-8:0            sleep_on_buffer       10283 ms
flush-8:0            sleep_on_buffer        3068 ms
flush-8:0            sleep_on_buffer        2000 ms
flush-8:0            sleep_on_buffer        2264 ms
flush-8:0            sleep_on_buffer        3623 ms
flush-8:0            sleep_on_buffer       12954 ms
flush-8:0            sleep_on_buffer        6579 ms
flush-8:0            sleep_on_buffer        1245 ms
flush-8:0            sleep_on_buffer        1293 ms
flush-8:0            sleep_on_buffer        1240 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff81227556>] ext4_mb_mark_diskspace_used+0x76/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119c5d7>] wb_check_old_data_flush+0x97/0xa0
[<ffffffff8119de49>] wb_do_writeback+0x149/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    92353 ms
Event count:                      11
flush-8:0            sleep_on_buffer        2192 ms
flush-8:0            sleep_on_buffer        2088 ms
flush-8:0            sleep_on_buffer        1460 ms
flush-8:0            sleep_on_buffer        1241 ms
flush-8:0            sleep_on_buffer        1986 ms
flush-8:0            sleep_on_buffer        1331 ms
flush-8:0            sleep_on_buffer        2192 ms
flush-8:0            sleep_on_buffer        3327 ms
flush-8:0            sleep_on_buffer       73408 ms
flush-8:0            sleep_on_buffer        1229 ms
flush-253:0          sleep_on_buffer        1899 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff81227556>] ext4_mb_mark_diskspace_used+0x76/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119de90>] wb_do_writeback+0x190/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    91515 ms
Event count:                       7
flush-8:0            sleep_on_buffer        7128 ms
flush-8:0            sleep_on_buffer       18731 ms
flush-8:0            sleep_on_buffer       12643 ms
flush-8:0            sleep_on_buffer       28149 ms
flush-8:0            sleep_on_buffer        5728 ms
flush-8:0            sleep_on_buffer       18040 ms
git                  wait_on_page_bit       1096 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff8121a8f2>] ext4_ext_get_access.isra.39+0x22/0x30
[<ffffffff8121e658>] ext4_ext_convert_to_initialized+0x408/0x590
[<ffffffff8121ee7b>] ext4_ext_handle_uninitialized_extents+0x2fb/0x3c0
[<ffffffff8121f547>] ext4_ext_map_blocks+0x5d7/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4601>] write_cache_pages_da+0x421/0x4b0
[<ffffffff811f49e5>] ext4_da_writepages+0x355/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119c5d7>] wb_check_old_data_flush+0x97/0xa0
[<ffffffff8119de49>] wb_do_writeback+0x149/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    86251 ms
Event count:                      76
imapd                wait_on_page_bit_killable   1088 ms
imapd                wait_on_page_bit_killable   1092 ms
git                  wait_on_page_bit_killable   1616 ms
git                  wait_on_page_bit_killable   1114 ms
play                 wait_on_page_bit_killable   1019 ms
play                 wait_on_page_bit_killable   1012 ms
play                 wait_on_page_bit_killable   1223 ms
play                 wait_on_page_bit_killable   1223 ms
play                 wait_on_page_bit_killable   1034 ms
play                 wait_on_page_bit_killable   1034 ms
play                 wait_on_page_bit_killable   1096 ms
play                 wait_on_page_bit_killable   1096 ms
play                 wait_on_page_bit_killable   1093 ms
play                 wait_on_page_bit_killable   1093 ms
vim                  wait_on_page_bit_killable   1084 ms
dbus-daemon-lau      wait_on_page_bit_killable   1076 ms
play                 wait_on_page_bit_killable   1097 ms
play                 wait_on_page_bit_killable   1097 ms
git                  wait_on_page_bit_killable   1005 ms
systemd-journal      wait_on_page_bit_killable   1252 ms
systemd-journal      wait_on_page_bit_killable   1158 ms
git                  wait_on_page_bit_killable   1237 ms
git                  wait_on_page_bit_killable   1043 ms
git                  wait_on_page_bit_killable   1068 ms
git                  wait_on_page_bit_killable   1070 ms
git                  wait_on_page_bit_killable   1070 ms
git                  wait_on_page_bit_killable   1097 ms
git                  wait_on_page_bit_killable   1055 ms
git                  wait_on_page_bit_killable   1252 ms
git                  wait_on_page_bit_killable   1187 ms
git                  wait_on_page_bit_killable   1069 ms
git                  wait_on_page_bit_killable   1194 ms
git                  wait_on_page_bit_killable   1035 ms
git                  wait_on_page_bit_killable   1046 ms
git                  wait_on_page_bit_killable   1024 ms
git                  wait_on_page_bit_killable   1124 ms
git                  wait_on_page_bit_killable   1293 ms
git                  wait_on_page_bit_killable   1184 ms
git                  wait_on_page_bit_killable   1269 ms
git                  wait_on_page_bit_killable   1268 ms
git                  wait_on_page_bit_killable   1088 ms
git                  wait_on_page_bit_killable   1093 ms
git                  wait_on_page_bit_killable   1013 ms
git                  wait_on_page_bit_killable   1034 ms
git                  wait_on_page_bit_killable   1018 ms
git                  wait_on_page_bit_killable   1185 ms
git                  wait_on_page_bit_killable   1258 ms
git                  wait_on_page_bit_killable   1006 ms
git                  wait_on_page_bit_killable   1061 ms
git                  wait_on_page_bit_killable   1108 ms
git                  wait_on_page_bit_killable   1006 ms
git                  wait_on_page_bit_killable   1012 ms
git                  wait_on_page_bit_killable   1210 ms
git                  wait_on_page_bit_killable   1239 ms
git                  wait_on_page_bit_killable   1146 ms
git                  wait_on_page_bit_killable   1106 ms
git                  wait_on_page_bit_killable   1063 ms
git                  wait_on_page_bit_killable   1070 ms
git                  wait_on_page_bit_killable   1041 ms
git                  wait_on_page_bit_killable   1052 ms
git                  wait_on_page_bit_killable   1237 ms
git                  wait_on_page_bit_killable   1117 ms
git                  wait_on_page_bit_killable   1086 ms
git                  wait_on_page_bit_killable   1051 ms
git                  wait_on_page_bit_killable   1029 ms
runlevel             wait_on_page_bit_killable   1019 ms
evolution            wait_on_page_bit_killable   1384 ms
evolution            wait_on_page_bit_killable   1144 ms
firefox              wait_on_page_bit_killable   1537 ms
git                  wait_on_page_bit_killable   1017 ms
evolution            wait_on_page_bit_killable   1015 ms
evolution            wait_on_page_bit_killable   1523 ms
ps                   wait_on_page_bit_killable   1394 ms
kio_http             wait_on_page_bit_killable   1010 ms
plugin-containe      wait_on_page_bit_killable   1522 ms
qmmp                 wait_on_page_bit_killable   1170 ms
[<ffffffff811115c8>] wait_on_page_bit_killable+0x78/0x80
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff81111a78>] filemap_fault+0x3d8/0x410
[<ffffffff8113599a>] __do_fault+0x6a/0x530
[<ffffffff811394be>] handle_pte_fault+0xee/0x200
[<ffffffff8113a731>] handle_mm_fault+0x271/0x390
[<ffffffff81597959>] __do_page_fault+0x169/0x520
[<ffffffff81597d19>] do_page_fault+0x9/0x10
[<ffffffff81594488>] page_fault+0x28/0x30
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    78888 ms
Event count:                      10
git                  sleep_on_buffer        1019 ms
git                  sleep_on_buffer        2031 ms
git                  sleep_on_buffer        2109 ms
git                  sleep_on_buffer        5858 ms
git                  sleep_on_buffer       15181 ms
git                  sleep_on_buffer       22771 ms
git                  sleep_on_buffer        2331 ms
git                  sleep_on_buffer        1341 ms
git                  sleep_on_buffer       24648 ms
git                  sleep_on_buffer        1599 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811fb052>] ext4_delete_entry+0x62/0x120
[<ffffffff811fc495>] ext4_unlink+0xf5/0x350
[<ffffffff8117daef>] vfs_unlink.part.31+0x7f/0xe0
[<ffffffff8117f9d7>] vfs_unlink+0x37/0x50
[<ffffffff8117fbff>] do_unlinkat+0x20f/0x260
[<ffffffff81182611>] sys_unlink+0x11/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    77568 ms
Event count:                      12
git                  sleep_on_buffer        2592 ms
git                  sleep_on_buffer        1312 ms
git                  sleep_on_buffer        1974 ms
git                  sleep_on_buffer        2508 ms
git                  sleep_on_buffer        1245 ms
git                  sleep_on_buffer       20990 ms
git                  sleep_on_buffer       14782 ms
git                  sleep_on_buffer        2026 ms
git                  sleep_on_buffer        1880 ms
git                  sleep_on_buffer        2174 ms
git                  sleep_on_buffer       24451 ms
git                  sleep_on_buffer        1634 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811fc633>] ext4_unlink+0x293/0x350
[<ffffffff8117daef>] vfs_unlink.part.31+0x7f/0xe0
[<ffffffff8117f9d7>] vfs_unlink+0x37/0x50
[<ffffffff8117fbff>] do_unlinkat+0x20f/0x260
[<ffffffff81182611>] sys_unlink+0x11/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    73950 ms
Event count:                      21
pool                 wait_on_page_bit       5626 ms
git                  sleep_on_buffer        1077 ms
pool                 wait_on_page_bit       1040 ms
offlineimap          wait_on_page_bit       1083 ms
pool                 wait_on_page_bit       1044 ms
pool                 wait_on_page_bit       7268 ms
pool                 wait_on_page_bit       9900 ms
pool                 wait_on_page_bit       3530 ms
offlineimap          wait_on_page_bit      18212 ms
git                  wait_on_page_bit       1101 ms
git                  wait_on_page_bit       1402 ms
git                  sleep_on_buffer        1037 ms
pool                 wait_on_page_bit       1107 ms
git                  sleep_on_buffer        1106 ms
pool                 wait_on_page_bit      11643 ms
pool                 wait_on_page_bit       1272 ms
evolution            wait_on_page_bit       1471 ms
pool                 wait_on_page_bit       1458 ms
pool                 wait_on_page_bit       1331 ms
git                  sleep_on_buffer        1082 ms
offlineimap          wait_on_page_bit       1160 ms
[<ffffffff8110f0e0>] wait_on_page_bit+0x70/0x80
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff81110c50>] filemap_write_and_wait_range+0x60/0x70
[<ffffffff811ea54a>] ext4_sync_file+0x6a/0x2d0
[<ffffffff811a1758>] do_fsync+0x58/0x80
[<ffffffff811a1abb>] sys_fsync+0xb/0x10
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    70700 ms
Event count:                      27
flush-8:0            sleep_on_buffer        1735 ms
flush-8:0            sleep_on_buffer        1720 ms
flush-8:0            sleep_on_buffer        3099 ms
flush-8:0            sleep_on_buffer        1321 ms
flush-8:0            sleep_on_buffer        3276 ms
flush-8:0            sleep_on_buffer        4215 ms
flush-8:0            sleep_on_buffer        1412 ms
flush-8:0            sleep_on_buffer        1049 ms
flush-8:0            sleep_on_buffer        2320 ms
flush-8:0            sleep_on_buffer        8076 ms
flush-8:0            sleep_on_buffer        2210 ms
flush-8:0            sleep_on_buffer        1204 ms
flush-8:0            sleep_on_buffer        1262 ms
flush-8:0            sleep_on_buffer        1995 ms
flush-8:0            sleep_on_buffer        1675 ms
flush-8:0            sleep_on_buffer        4219 ms
flush-8:0            sleep_on_buffer        4027 ms
flush-8:0            sleep_on_buffer        3452 ms
flush-8:0            sleep_on_buffer        6020 ms
flush-8:0            sleep_on_buffer        1318 ms
flush-8:0            sleep_on_buffer        1065 ms
flush-8:0            sleep_on_buffer        1148 ms
flush-8:0            sleep_on_buffer        1230 ms
flush-8:0            sleep_on_buffer        4479 ms
flush-8:0            sleep_on_buffer        1580 ms
flush-8:0            sleep_on_buffer        4551 ms
git                  sleep_on_buffer        1042 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff8121a97b>] __ext4_ext_dirty.isra.40+0x7b/0x80
[<ffffffff8121d34a>] ext4_ext_insert_extent+0x31a/0x420
[<ffffffff8121f60a>] ext4_ext_map_blocks+0x69a/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119c5d7>] wb_check_old_data_flush+0x97/0xa0
[<ffffffff8119de49>] wb_do_writeback+0x149/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    59807 ms
Event count:                      37
mv                   sleep_on_buffer        1439 ms
mv                   sleep_on_buffer        1490 ms
mv                   sleep_on_buffer        1876 ms
mv                   sleep_on_buffer        1240 ms
mv                   sleep_on_buffer        1897 ms
mv                   sleep_on_buffer        2089 ms
mv                   sleep_on_buffer        1375 ms
mv                   sleep_on_buffer        1386 ms
mv                   sleep_on_buffer        1442 ms
mv                   sleep_on_buffer        1682 ms
mv                   sleep_on_buffer        1188 ms
offlineimap          sleep_on_buffer        2247 ms
mv                   sleep_on_buffer        1262 ms
mv                   sleep_on_buffer        8930 ms
mv                   sleep_on_buffer        1392 ms
mv                   sleep_on_buffer        1536 ms
mv                   sleep_on_buffer        1064 ms
mv                   sleep_on_buffer        1303 ms
mv                   sleep_on_buffer        1487 ms
mv                   sleep_on_buffer        1331 ms
mv                   sleep_on_buffer        1757 ms
mv                   sleep_on_buffer        1069 ms
mv                   sleep_on_buffer        1183 ms
mv                   sleep_on_buffer        1548 ms
mv                   sleep_on_buffer        1090 ms
mv                   sleep_on_buffer        1770 ms
mv                   sleep_on_buffer        1002 ms
mv                   sleep_on_buffer        1199 ms
mv                   sleep_on_buffer        1066 ms
mv                   sleep_on_buffer        1275 ms
mv                   sleep_on_buffer        1198 ms
mv                   sleep_on_buffer        1653 ms
mv                   sleep_on_buffer        1197 ms
mv                   sleep_on_buffer        1275 ms
mv                   sleep_on_buffer        1317 ms
mv                   sleep_on_buffer        1025 ms
mv                   sleep_on_buffer        1527 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811fba26>] ext4_rename+0x276/0x980
[<ffffffff8117d4ed>] vfs_rename_other+0xcd/0x120
[<ffffffff81180126>] vfs_rename+0xb6/0x240
[<ffffffff81182c96>] sys_renameat+0x386/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    59307 ms
Event count:                      15
git                  sleep_on_buffer        3293 ms
git                  sleep_on_buffer        1350 ms
git                  sleep_on_buffer        2132 ms
git                  sleep_on_buffer        1018 ms
git                  sleep_on_buffer       16069 ms
git                  sleep_on_buffer        5478 ms
offlineimap          sleep_on_buffer        1138 ms
imapd                sleep_on_buffer        1927 ms
imapd                sleep_on_buffer        6417 ms
offlineimap          sleep_on_buffer        6241 ms
offlineimap          sleep_on_buffer        1549 ms
rsync                sleep_on_buffer        3776 ms
rsync                sleep_on_buffer        2516 ms
git                  sleep_on_buffer        1025 ms
git                  sleep_on_buffer        5378 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff8121c4ad>] ext4_ext_tree_init+0x2d/0x40
[<ffffffff811ecc06>] __ext4_new_inode+0x1076/0x10c0
[<ffffffff811fac5b>] ext4_create+0xbb/0x190
[<ffffffff81180aa5>] vfs_create+0xb5/0x120
[<ffffffff81180c4e>] lookup_open+0x13e/0x1d0
[<ffffffff81180fe7>] do_last+0x307/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    58651 ms
Event count:                       4
git                  sleep_on_buffer       13070 ms
git                  sleep_on_buffer       18222 ms
git                  sleep_on_buffer       13508 ms
git                  sleep_on_buffer       13851 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811fc898>] ext4_orphan_del+0x1a8/0x1e0
[<ffffffff811f4fbb>] ext4_evict_inode+0x30b/0x4c0
[<ffffffff8118bcbf>] evict+0xaf/0x1b0
[<ffffffff8118c543>] iput_final+0xd3/0x160
[<ffffffff8118c609>] iput+0x39/0x50
[<ffffffff8117fbe1>] do_unlinkat+0x1f1/0x260
[<ffffffff81182611>] sys_unlink+0x11/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    56275 ms
Event count:                      14
git                  sleep_on_buffer        1116 ms
git                  sleep_on_buffer        1347 ms
git                  sleep_on_buffer        1258 ms
git                  sleep_on_buffer        3471 ms
git                  sleep_on_buffer        3348 ms
git                  sleep_on_buffer        1185 ms
git                  sleep_on_buffer        1423 ms
git                  sleep_on_buffer        2662 ms
git                  sleep_on_buffer        8693 ms
git                  sleep_on_buffer        8223 ms
git                  sleep_on_buffer        4792 ms
git                  sleep_on_buffer        2553 ms
git                  sleep_on_buffer        2550 ms
git                  sleep_on_buffer       13654 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f9c6e>] add_dirent_to_buf+0x12e/0x1d0
[<ffffffff811f9e38>] ext4_dx_add_entry+0x128/0x590
[<ffffffff811fa925>] ext4_add_entry+0x265/0x2d0
[<ffffffff811fa9b6>] ext4_add_nondir+0x26/0x80
[<ffffffff811fac9f>] ext4_create+0xff/0x190
[<ffffffff81180aa5>] vfs_create+0xb5/0x120
[<ffffffff81180c4e>] lookup_open+0x13e/0x1d0
[<ffffffff81180fe7>] do_last+0x307/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    55128 ms
Event count:                      12
dconf-service        sleep_on_buffer        1918 ms
pool                 sleep_on_buffer       10558 ms
pool                 sleep_on_buffer        1957 ms
pool                 sleep_on_buffer        1903 ms
pool                 sleep_on_buffer        1187 ms
offlineimap          sleep_on_buffer        2077 ms
URL Classifier       sleep_on_buffer        3924 ms
offlineimap          sleep_on_buffer        2573 ms
StreamT~ns #343      sleep_on_buffer       11686 ms
DOM Worker           sleep_on_buffer        2215 ms
pool                 sleep_on_buffer        4513 ms
offlineimap          sleep_on_buffer       10617 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff81227556>] ext4_mb_mark_diskspace_used+0x76/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff81110c3a>] filemap_write_and_wait_range+0x4a/0x70
[<ffffffff811ea54a>] ext4_sync_file+0x6a/0x2d0
[<ffffffff811a1758>] do_fsync+0x58/0x80
[<ffffffff811a1abb>] sys_fsync+0xb/0x10
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    53464 ms
Event count:                       4
play                 sleep_on_buffer        6853 ms
play                 sleep_on_buffer       15340 ms
play                 sleep_on_buffer       24793 ms
play                 sleep_on_buffer        6478 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f51b1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff811f313c>] ext4_setattr+0x36c/0x640
[<ffffffff8118d132>] notify_change+0x1f2/0x3c0
[<ffffffff811712bd>] chown_common+0xbd/0xd0
[<ffffffff81172417>] sys_fchown+0xb7/0xd0
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    51867 ms
Event count:                       3
flush-8:0            sleep_on_buffer       42842 ms
flush-8:0            sleep_on_buffer        2026 ms
flush-8:0            sleep_on_buffer        6999 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812275bf>] ext4_mb_mark_diskspace_used+0xdf/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119de90>] wb_do_writeback+0x190/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    49716 ms
Event count:                       8
pool                 sleep_on_buffer        4642 ms
offlineimap          sleep_on_buffer        4279 ms
evolution            sleep_on_buffer        5182 ms
rsync                sleep_on_buffer        5599 ms
git                  sleep_on_buffer        8338 ms
StreamT~ns #343      sleep_on_buffer        2216 ms
git                  sleep_on_buffer        2844 ms
git                  sleep_on_buffer       16616 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812291ba>] ext4_free_blocks+0x36a/0xc10
[<ffffffff8121bd16>] ext4_remove_blocks+0x256/0x2d0
[<ffffffff8121bf95>] ext4_ext_rm_leaf+0x205/0x520
[<ffffffff8121dcbc>] ext4_ext_remove_space+0x4dc/0x750
[<ffffffff8121fb0b>] ext4_ext_truncate+0x19b/0x1e0
[<ffffffff811ef535>] ext4_truncate.part.59+0xd5/0xf0
[<ffffffff811f0614>] ext4_truncate+0x34/0x90
[<ffffffff811f513e>] ext4_evict_inode+0x48e/0x4c0
[<ffffffff8118bcbf>] evict+0xaf/0x1b0
[<ffffffff8118c543>] iput_final+0xd3/0x160
[<ffffffff8118c609>] iput+0x39/0x50
[<ffffffff81187248>] dentry_iput+0x98/0xe0
[<ffffffff81188ac8>] dput+0x128/0x230
[<ffffffff81182c4a>] sys_renameat+0x33a/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    42396 ms
Event count:                       5
git                  sleep_on_buffer        1115 ms
git                  sleep_on_buffer       15407 ms
git                  sleep_on_buffer        9114 ms
git                  sleep_on_buffer        1076 ms
git                  sleep_on_buffer       15684 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f4e95>] ext4_evict_inode+0x1e5/0x4c0
[<ffffffff8118bcbf>] evict+0xaf/0x1b0
[<ffffffff8118c543>] iput_final+0xd3/0x160
[<ffffffff8118c609>] iput+0x39/0x50
[<ffffffff8117fbe1>] do_unlinkat+0x1f1/0x260
[<ffffffff81182611>] sys_unlink+0x11/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    41836 ms
Event count:                      29
git                  sleep_on_buffer        1326 ms
git                  sleep_on_buffer        1017 ms
git                  sleep_on_buffer        1077 ms
git                  sleep_on_buffer        2618 ms
git                  sleep_on_buffer        1058 ms
git                  sleep_on_buffer        1321 ms
git                  sleep_on_buffer        1199 ms
git                  sleep_on_buffer        1067 ms
git                  sleep_on_buffer        1227 ms
git                  sleep_on_buffer        1101 ms
git                  sleep_on_buffer        1105 ms
git                  sleep_on_buffer        1048 ms
git                  sleep_on_buffer        1254 ms
git                  sleep_on_buffer        1866 ms
git                  sleep_on_buffer        1768 ms
git                  sleep_on_buffer        1613 ms
git                  sleep_on_buffer        1690 ms
git                  sleep_on_buffer        1189 ms
git                  sleep_on_buffer        1063 ms
git                  sleep_on_buffer        1022 ms
git                  sleep_on_buffer        2039 ms
git                  sleep_on_buffer        1898 ms
git                  sleep_on_buffer        1422 ms
git                  sleep_on_buffer        1678 ms
git                  sleep_on_buffer        1285 ms
git                  sleep_on_buffer        2058 ms
git                  sleep_on_buffer        1336 ms
git                  sleep_on_buffer        1364 ms
git                  sleep_on_buffer        2127 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f9dd2>] ext4_dx_add_entry+0xc2/0x590
[<ffffffff811fa925>] ext4_add_entry+0x265/0x2d0
[<ffffffff811fae2c>] ext4_link+0xfc/0x1b0
[<ffffffff81181e33>] vfs_link+0x113/0x1c0
[<ffffffff811828a4>] sys_linkat+0x174/0x1c0
[<ffffffff81182909>] sys_link+0x19/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    41493 ms
Event count:                       2
flush-8:0            sleep_on_buffer       28180 ms
flush-8:0            sleep_on_buffer       13313 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff8121a8f2>] ext4_ext_get_access.isra.39+0x22/0x30
[<ffffffff8121d506>] ext4_split_extent_at+0xb6/0x390
[<ffffffff8121e038>] ext4_split_extent.isra.47+0x108/0x130
[<ffffffff8121e3ae>] ext4_ext_convert_to_initialized+0x15e/0x590
[<ffffffff8121ee7b>] ext4_ext_handle_uninitialized_extents+0x2fb/0x3c0
[<ffffffff8121f547>] ext4_ext_map_blocks+0x5d7/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4601>] write_cache_pages_da+0x421/0x4b0
[<ffffffff811f49e5>] ext4_da_writepages+0x355/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119c5d7>] wb_check_old_data_flush+0x97/0xa0
[<ffffffff8119de49>] wb_do_writeback+0x149/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    40644 ms
Event count:                      30
flush-8:16           get_request            1797 ms
flush-8:16           get_request            1334 ms
flush-8:16           get_request            1288 ms
flush-8:16           get_request            1741 ms
flush-8:16           get_request            2518 ms
flush-8:16           get_request            1752 ms
flush-8:16           get_request            1069 ms
flush-8:16           get_request            1487 ms
flush-8:16           get_request            1000 ms
flush-8:16           get_request            1270 ms
flush-8:16           get_request            1223 ms
flush-8:16           get_request            1384 ms
flush-8:16           get_request            1082 ms
flush-8:16           get_request            1195 ms
flush-8:16           get_request            1163 ms
flush-8:16           get_request            1605 ms
flush-8:16           get_request            1110 ms
flush-8:16           get_request            1249 ms
flush-8:16           get_request            2064 ms
flush-8:16           get_request            1073 ms
flush-8:16           get_request            1238 ms
flush-8:16           get_request            1215 ms
flush-8:16           get_request            1075 ms
flush-8:16           get_request            1532 ms
flush-8:16           get_request            1586 ms
flush-8:16           get_request            1165 ms
flush-8:16           get_request            1129 ms
flush-8:16           get_request            1098 ms
flush-8:16           get_request            1099 ms
flush-8:16           get_request            1103 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff812a4b1f>] generic_make_request.part.59+0x6f/0xa0
[<ffffffff812a5050>] generic_make_request+0x60/0x70
[<ffffffff812a50c7>] submit_bio+0x67/0x130
[<ffffffff811f6014>] ext4_io_submit+0x24/0x60
[<ffffffff811f2265>] ext4_writepage+0x135/0x220
[<ffffffff81119292>] __writepage+0x12/0x40
[<ffffffff81119a96>] write_cache_pages+0x206/0x460
[<ffffffff81119d35>] generic_writepages+0x45/0x70
[<ffffffff8111ac15>] do_writepages+0x25/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119c5d7>] wb_check_old_data_flush+0x97/0xa0
[<ffffffff8119de49>] wb_do_writeback+0x149/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    39571 ms
Event count:                       8
kio_http             sleep_on_buffer       23133 ms
vi                   sleep_on_buffer        4288 ms
git                  sleep_on_buffer        1410 ms
mutt                 sleep_on_buffer        2302 ms
mutt                 sleep_on_buffer        2299 ms
Cache I/O            sleep_on_buffer        1283 ms
gpg                  sleep_on_buffer        3265 ms
git                  sleep_on_buffer        1591 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811fb67b>] ext4_orphan_add+0xbb/0x1f0
[<ffffffff811fc6cb>] ext4_unlink+0x32b/0x350
[<ffffffff8117daef>] vfs_unlink.part.31+0x7f/0xe0
[<ffffffff8117f9d7>] vfs_unlink+0x37/0x50
[<ffffffff8117fbff>] do_unlinkat+0x20f/0x260
[<ffffffff81182611>] sys_unlink+0x11/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    38769 ms
Event count:                       6
rsync                sleep_on_buffer        3513 ms
rsync                sleep_on_buffer        3570 ms
git                  sleep_on_buffer       26211 ms
git                  sleep_on_buffer        1657 ms
git                  sleep_on_buffer        2184 ms
git                  sleep_on_buffer        1634 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811fbd0c>] ext4_rename+0x55c/0x980
[<ffffffff8117d4ed>] vfs_rename_other+0xcd/0x120
[<ffffffff81180126>] vfs_rename+0xb6/0x240
[<ffffffff81182c96>] sys_renameat+0x386/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    34812 ms
Event count:                       4
acroread             wait_on_page_bit      11968 ms
acroread             wait_on_page_bit       7121 ms
acroread             wait_on_page_bit       3126 ms
acroread             wait_on_page_bit      12597 ms
[<ffffffff8110f0e0>] wait_on_page_bit+0x70/0x80
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff8111d620>] truncate_inode_pages+0x10/0x20
[<ffffffff8111d677>] truncate_pagecache+0x47/0x70
[<ffffffff811f2f4d>] ext4_setattr+0x17d/0x640
[<ffffffff8118d132>] notify_change+0x1f2/0x3c0
[<ffffffff811715d9>] do_truncate+0x59/0xa0
[<ffffffff8117d186>] handle_truncate+0x66/0xa0
[<ffffffff81181306>] do_last+0x626/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff811c2996>] compat_sys_open+0x16/0x20
[<ffffffff8159d81c>] sysenter_dispatch+0x7/0x21
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    34740 ms
Event count:                       4
systemd-journal      sleep_on_buffer        1126 ms
systemd-journal      sleep_on_buffer       29206 ms
systemd-journal      sleep_on_buffer        1787 ms
systemd-journal      sleep_on_buffer        2621 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff81227556>] ext4_mb_mark_diskspace_used+0x76/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff8121fd1f>] ext4_fallocate+0x1cf/0x420
[<ffffffff81171b32>] do_fallocate+0x112/0x190
[<ffffffff81171c02>] sys_fallocate+0x52/0x90
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    33158 ms
Event count:                      32
mv                   sleep_on_buffer        1043 ms
git                  wait_on_page_bit       1150 ms
cc1                  sleep_on_buffer        1062 ms
git                  wait_on_page_bit       1055 ms
flush-8:16           get_request            1091 ms
mktexlsr             sleep_on_buffer        1152 ms
imapd                sleep_on_buffer        1004 ms
flush-8:16           get_request            1087 ms
flush-8:16           get_request            1104 ms
sleep                wait_on_page_bit_killable   1142 ms
git                  wait_on_page_bit_killable   1108 ms
git                  wait_on_page_bit_killable   1007 ms
git                  wait_on_page_bit_killable   1074 ms
git                  wait_on_page_bit_killable   1050 ms
nm-dhcp-client.      wait_on_page_bit_killable   1069 ms
uname                wait_on_page_bit_killable   1086 ms
sed                  wait_on_page_bit_killable   1101 ms
git                  wait_on_page_bit_killable   1057 ms
grep                 wait_on_page_bit_killable   1045 ms
imapd                sleep_on_buffer        1032 ms
git                  sleep_on_buffer        1015 ms
folder-markup.s      sleep_on_buffer        1048 ms
git                  wait_on_page_bit       1086 ms
git                  sleep_on_buffer        1041 ms
git                  sleep_on_buffer        1048 ms
git                  wait_on_page_bit       1063 ms
git                  sleep_on_buffer        1083 ms
series2git           sleep_on_buffer        1073 ms
git                  wait_on_page_bit       1093 ms
git                  wait_on_page_bit       1071 ms
git                  wait_on_page_bit       1018 ms

Time stalled in this event:    32109 ms
Event count:                      23
flush-8:16           get_request            1475 ms
flush-8:16           get_request            1431 ms
flush-8:16           get_request            1027 ms
flush-8:16           get_request            2019 ms
flush-8:16           get_request            1021 ms
flush-8:16           get_request            1013 ms
flush-8:16           get_request            1093 ms
flush-8:16           get_request            1178 ms
flush-8:16           get_request            1051 ms
flush-8:16           get_request            1296 ms
flush-8:16           get_request            1525 ms
flush-8:16           get_request            1083 ms
flush-8:16           get_request            1654 ms
flush-8:16           get_request            1583 ms
flush-8:16           get_request            1405 ms
flush-8:16           get_request            2004 ms
flush-8:16           get_request            2203 ms
flush-8:16           get_request            1980 ms
flush-8:16           get_request            1211 ms
flush-8:16           get_request            1116 ms
flush-8:16           get_request            1071 ms
flush-8:16           get_request            1255 ms
flush-8:16           get_request            1415 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff812a4b1f>] generic_make_request.part.59+0x6f/0xa0
[<ffffffff812a5050>] generic_make_request+0x60/0x70
[<ffffffff812a50c7>] submit_bio+0x67/0x130
[<ffffffff811a30fb>] submit_bh+0xfb/0x130
[<ffffffff811a6058>] __block_write_full_page+0x1c8/0x340
[<ffffffff811a62a3>] block_write_full_page_endio+0xd3/0x110
[<ffffffff811a62f0>] block_write_full_page+0x10/0x20
[<ffffffff811aa0c3>] blkdev_writepage+0x13/0x20
[<ffffffff81119292>] __writepage+0x12/0x40
[<ffffffff81119a96>] write_cache_pages+0x206/0x460
[<ffffffff81119d35>] generic_writepages+0x45/0x70
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119c5d7>] wb_check_old_data_flush+0x97/0xa0
[<ffffffff8119de49>] wb_do_writeback+0x149/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    31440 ms
Event count:                       6
pool                 sleep_on_buffer       13120 ms
scp                  sleep_on_buffer        5297 ms
scp                  sleep_on_buffer        3769 ms
scp                  sleep_on_buffer        2870 ms
cp                   sleep_on_buffer        5153 ms
git                  sleep_on_buffer        1231 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811ebe24>] __ext4_new_inode+0x294/0x10c0
[<ffffffff811fb456>] ext4_mkdir+0x146/0x2b0
[<ffffffff81181b42>] vfs_mkdir+0xa2/0x120
[<ffffffff81182533>] sys_mkdirat+0xa3/0xf0
[<ffffffff81182594>] sys_mkdir+0x14/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    30241 ms
Event count:                       4
git                  sleep_on_buffer       10480 ms
evince               sleep_on_buffer        1309 ms
git                  sleep_on_buffer       17269 ms
git                  sleep_on_buffer        1183 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811eb7cf>] ext4_free_inode+0x22f/0x5f0
[<ffffffff811f4fe1>] ext4_evict_inode+0x331/0x4c0
[<ffffffff8118bcbf>] evict+0xaf/0x1b0
[<ffffffff8118c543>] iput_final+0xd3/0x160
[<ffffffff8118c609>] iput+0x39/0x50
[<ffffffff81187248>] dentry_iput+0x98/0xe0
[<ffffffff81188ac8>] dput+0x128/0x230
[<ffffffff81182c4a>] sys_renameat+0x33a/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    28375 ms
Event count:                       4
flush-8:0            sleep_on_buffer        7042 ms
flush-8:0            sleep_on_buffer        1900 ms
flush-8:0            sleep_on_buffer        1746 ms
flush-8:0            sleep_on_buffer       17687 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff8121a8f2>] ext4_ext_get_access.isra.39+0x22/0x30
[<ffffffff8121e658>] ext4_ext_convert_to_initialized+0x408/0x590
[<ffffffff8121ee7b>] ext4_ext_handle_uninitialized_extents+0x2fb/0x3c0
[<ffffffff8121f547>] ext4_ext_map_blocks+0x5d7/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4601>] write_cache_pages_da+0x421/0x4b0
[<ffffffff811f49e5>] ext4_da_writepages+0x355/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119de90>] wb_do_writeback+0x190/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    26600 ms
Event count:                       4
systemd-journal      sleep_on_buffer        2463 ms
systemd-journal      sleep_on_buffer        2988 ms
systemd-journal      sleep_on_buffer       19520 ms
systemd-journal      sleep_on_buffer        1629 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811fb67b>] ext4_orphan_add+0xbb/0x1f0
[<ffffffff8121f9e1>] ext4_ext_truncate+0x71/0x1e0
[<ffffffff811ef535>] ext4_truncate.part.59+0xd5/0xf0
[<ffffffff811f0614>] ext4_truncate+0x34/0x90
[<ffffffff811f2f5d>] ext4_setattr+0x18d/0x640
[<ffffffff8118d132>] notify_change+0x1f2/0x3c0
[<ffffffff811715d9>] do_truncate+0x59/0xa0
[<ffffffff81171979>] do_sys_ftruncate.constprop.14+0x109/0x170
[<ffffffff81171a09>] sys_ftruncate+0x9/0x10
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    25557 ms
Event count:                       2
flush-253:0          sleep_on_buffer        2782 ms
flush-253:0          sleep_on_buffer       22775 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff81227556>] ext4_mb_mark_diskspace_used+0x76/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119ddb2>] wb_do_writeback+0xb2/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    25493 ms
Event count:                       5
git                  sleep_on_buffer       15264 ms
git                  sleep_on_buffer        2091 ms
git                  sleep_on_buffer        2507 ms
git                  sleep_on_buffer        1218 ms
git                  sleep_on_buffer        4413 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811eb7cf>] ext4_free_inode+0x22f/0x5f0
[<ffffffff811f4fe1>] ext4_evict_inode+0x331/0x4c0
[<ffffffff8118bcbf>] evict+0xaf/0x1b0
[<ffffffff8118c543>] iput_final+0xd3/0x160
[<ffffffff8118c609>] iput+0x39/0x50
[<ffffffff8117fbe1>] do_unlinkat+0x1f1/0x260
[<ffffffff81182611>] sys_unlink+0x11/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    25420 ms
Event count:                       8
Cache I/O            sleep_on_buffer        8766 ms
pool                 sleep_on_buffer        1851 ms
rsync                sleep_on_buffer        2738 ms
imapd                sleep_on_buffer        1697 ms
evolution            sleep_on_buffer        2829 ms
pool                 sleep_on_buffer        2854 ms
firefox              sleep_on_buffer        2326 ms
imapd                sleep_on_buffer        2359 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811ec0e8>] __ext4_new_inode+0x558/0x10c0
[<ffffffff811fac5b>] ext4_create+0xbb/0x190
[<ffffffff81180aa5>] vfs_create+0xb5/0x120
[<ffffffff81180c4e>] lookup_open+0x13e/0x1d0
[<ffffffff81180fe7>] do_last+0x307/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    24833 ms
Event count:                       9
kswapd0              wait_on_page_bit       2147 ms
kswapd0              wait_on_page_bit       1483 ms
kswapd0              wait_on_page_bit       1393 ms
kswapd0              wait_on_page_bit       1844 ms
kswapd0              wait_on_page_bit       1920 ms
kswapd0              wait_on_page_bit       3606 ms
kswapd0              wait_on_page_bit       7155 ms
kswapd0              wait_on_page_bit       1189 ms
kswapd0              wait_on_page_bit       4096 ms
[<ffffffff8110f0e0>] wait_on_page_bit+0x70/0x80
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811228cf>] shrink_inactive_list+0x15f/0x4a0
[<ffffffff811230cc>] shrink_lruvec+0x13c/0x260
[<ffffffff81123256>] shrink_zone+0x66/0x180
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff8112451b>] balance_pgdat+0x33b/0x4b0
[<ffffffff811247a6>] kswapd+0x116/0x230
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    23799 ms
Event count:                      19
jbd2/sdb1-8          wait_on_page_bit       1077 ms
jbd2/sdb1-8          wait_on_page_bit       1126 ms
jbd2/sdb1-8          wait_on_page_bit       1197 ms
jbd2/sdb1-8          wait_on_page_bit       1101 ms
jbd2/sdb1-8          wait_on_page_bit       1160 ms
jbd2/sdb1-8          wait_on_page_bit       1594 ms
jbd2/sdb1-8          wait_on_page_bit       1364 ms
jbd2/sdb1-8          wait_on_page_bit       1094 ms
jbd2/sdb1-8          wait_on_page_bit       1141 ms
jbd2/sdb1-8          wait_on_page_bit       1309 ms
jbd2/sdb1-8          wait_on_page_bit       1325 ms
jbd2/sdb1-8          wait_on_page_bit       1415 ms
jbd2/sdb1-8          wait_on_page_bit       1331 ms
jbd2/sdb1-8          wait_on_page_bit       1372 ms
jbd2/sdb1-8          wait_on_page_bit       1187 ms
jbd2/sdb1-8          wait_on_page_bit       1472 ms
jbd2/sdb1-8          wait_on_page_bit       1192 ms
jbd2/sdb1-8          wait_on_page_bit       1080 ms
jbd2/sdb1-8          wait_on_page_bit       1262 ms
[<ffffffff8110f0e0>] wait_on_page_bit+0x70/0x80
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff8110f2a3>] filemap_fdatawait+0x23/0x30
[<ffffffff8123a78c>] journal_finish_inode_data_buffers+0x6c/0x170
[<ffffffff8123b376>] jbd2_journal_commit_transaction+0x706/0x13c0
[<ffffffff81240513>] kjournald2+0xb3/0x240
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    22392 ms
Event count:                       2
rsync                sleep_on_buffer        3595 ms
git                  sleep_on_buffer       18797 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff81227556>] ext4_mb_mark_diskspace_used+0x76/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4185>] mpage_add_bh_to_extent+0x45/0xa0
[<ffffffff811f4505>] write_cache_pages_da+0x325/0x4b0
[<ffffffff811f49e5>] ext4_da_writepages+0x355/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff811114b7>] filemap_flush+0x17/0x20
[<ffffffff811f0354>] ext4_alloc_da_blocks+0x44/0xa0
[<ffffffff811fb960>] ext4_rename+0x1b0/0x980
[<ffffffff8117d4ed>] vfs_rename_other+0xcd/0x120
[<ffffffff81180126>] vfs_rename+0xb6/0x240
[<ffffffff81182c96>] sys_renameat+0x386/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    21612 ms
Event count:                       3
flush-8:0            sleep_on_buffer       13971 ms
flush-8:0            sleep_on_buffer        3795 ms
flush-8:0            sleep_on_buffer        3846 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f51b1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff811efadd>] ext4_da_update_reserve_space+0x1cd/0x280
[<ffffffff8121f88a>] ext4_ext_map_blocks+0x91a/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119c5d7>] wb_check_old_data_flush+0x97/0xa0
[<ffffffff8119de49>] wb_do_writeback+0x149/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    21313 ms
Event count:                       6
git                  sleep_on_buffer        1261 ms
git                  sleep_on_buffer        2135 ms
systemd-journal      sleep_on_buffer       13451 ms
git                  sleep_on_buffer        1203 ms
git                  sleep_on_buffer        1180 ms
git                  sleep_on_buffer        2083 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f51b1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff8118b789>] update_time+0x79/0xc0
[<ffffffff8118ba31>] touch_atime+0x161/0x170
[<ffffffff811e99fd>] ext4_file_mmap+0x3d/0x50
[<ffffffff81140175>] mmap_region+0x325/0x590
[<ffffffff811406f8>] do_mmap_pgoff+0x318/0x440
[<ffffffff8112ba05>] vm_mmap_pgoff+0xa5/0xd0
[<ffffffff8113ee84>] sys_mmap_pgoff+0xa4/0x180
[<ffffffff81006b8d>] sys_mmap+0x1d/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    19298 ms
Event count:                       3
flush-8:0            sleep_on_buffer       14371 ms
flush-8:0            sleep_on_buffer        1545 ms
flush-8:0            sleep_on_buffer        3382 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff8121a8f2>] ext4_ext_get_access.isra.39+0x22/0x30
[<ffffffff8121d24c>] ext4_ext_insert_extent+0x21c/0x420
[<ffffffff8121f60a>] ext4_ext_map_blocks+0x69a/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119de90>] wb_do_writeback+0x190/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    19044 ms
Event count:                       2
akregator            sleep_on_buffer       12495 ms
imapd                sleep_on_buffer        6549 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811fb67b>] ext4_orphan_add+0xbb/0x1f0
[<ffffffff8121f9e1>] ext4_ext_truncate+0x71/0x1e0
[<ffffffff811ef535>] ext4_truncate.part.59+0xd5/0xf0
[<ffffffff811f0614>] ext4_truncate+0x34/0x90
[<ffffffff811f2f5d>] ext4_setattr+0x18d/0x640
[<ffffffff8118d132>] notify_change+0x1f2/0x3c0
[<ffffffff811715d9>] do_truncate+0x59/0xa0
[<ffffffff8117d186>] handle_truncate+0x66/0xa0
[<ffffffff81181306>] do_last+0x626/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    18957 ms
Event count:                       5
flush-8:0            sleep_on_buffer        2120 ms
flush-8:0            sleep_on_buffer        1668 ms
flush-8:0            sleep_on_buffer        2679 ms
flush-8:0            sleep_on_buffer        4561 ms
flush-8:0            sleep_on_buffer        7929 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff8121a97b>] __ext4_ext_dirty.isra.40+0x7b/0x80
[<ffffffff8121d34a>] ext4_ext_insert_extent+0x31a/0x420
[<ffffffff8121f60a>] ext4_ext_map_blocks+0x69a/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119de90>] wb_do_writeback+0x190/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    18341 ms
Event count:                       6
imapd                sleep_on_buffer        5018 ms
imapd                sleep_on_buffer        1541 ms
acroread             sleep_on_buffer        5963 ms
git                  sleep_on_buffer        3274 ms
git                  sleep_on_buffer        1387 ms
git                  sleep_on_buffer        1158 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff81227556>] ext4_mb_mark_diskspace_used+0x76/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff811114b7>] filemap_flush+0x17/0x20
[<ffffffff811f0354>] ext4_alloc_da_blocks+0x44/0xa0
[<ffffffff811ea201>] ext4_release_file+0x61/0xd0
[<ffffffff811742a0>] __fput+0xb0/0x240
[<ffffffff81174439>] ____fput+0x9/0x10
[<ffffffff81065dc7>] task_work_run+0x97/0xd0
[<ffffffff81002cbc>] do_notify_resume+0x9c/0xb0
[<ffffffff8159c46a>] int_signal+0x12/0x17
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    18310 ms
Event count:                      17
cp                   sleep_on_buffer        1061 ms
cp                   sleep_on_buffer        1032 ms
cp                   sleep_on_buffer        1072 ms
cp                   sleep_on_buffer        1039 ms
cp                   sleep_on_buffer        1035 ms
cp                   sleep_on_buffer        1167 ms
cp                   sleep_on_buffer        1029 ms
cp                   sleep_on_buffer        1108 ms
cp                   sleep_on_buffer        1009 ms
cp                   sleep_on_buffer        1113 ms
cp                   sleep_on_buffer        1113 ms
cp                   sleep_on_buffer        1029 ms
free                 wait_on_page_bit_killable   1067 ms
imapd                sleep_on_buffer        1103 ms
cat                  sleep_on_buffer        1180 ms
imapd                sleep_on_buffer        1005 ms
git                  sleep_on_buffer        1148 ms
[<ffffffff8110ef12>] __lock_page_killable+0x62/0x70
[<ffffffff81110507>] do_generic_file_read.constprop.35+0x287/0x440
[<ffffffff81111359>] generic_file_aio_read+0xd9/0x220
[<ffffffff81172b53>] do_sync_read+0xa3/0xe0
[<ffffffff8117327b>] vfs_read+0xab/0x170
[<ffffffff8117338d>] sys_read+0x4d/0x90
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    18275 ms
Event count:                       2
systemd-journal      sleep_on_buffer        1594 ms
systemd-journal      sleep_on_buffer       16681 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f51b1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff81228bbd>] ext4_mb_new_blocks+0x1fd/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff8121fd1f>] ext4_fallocate+0x1cf/0x420
[<ffffffff81171b32>] do_fallocate+0x112/0x190
[<ffffffff81171c02>] sys_fallocate+0x52/0x90
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    17970 ms
Event count:                       2
pool                 sleep_on_buffer       12739 ms
pool                 sleep_on_buffer        5231 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f9bc4>] add_dirent_to_buf+0x84/0x1d0
[<ffffffff811fa7e4>] ext4_add_entry+0x124/0x2d0
[<ffffffff811fa9b6>] ext4_add_nondir+0x26/0x80
[<ffffffff811fac9f>] ext4_create+0xff/0x190
[<ffffffff81180aa5>] vfs_create+0xb5/0x120
[<ffffffff81180c4e>] lookup_open+0x13e/0x1d0
[<ffffffff81180fe7>] do_last+0x307/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    17925 ms
Event count:                       1
git                  sleep_on_buffer       17925 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff81227556>] ext4_mb_mark_diskspace_used+0x76/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f003b>] ext4_getblk+0x5b/0x1f0
[<ffffffff811f01e1>] ext4_bread+0x11/0x80
[<ffffffff811f758d>] ext4_append+0x5d/0x120
[<ffffffff811fb243>] ext4_init_new_dir+0x83/0x150
[<ffffffff811fb48d>] ext4_mkdir+0x17d/0x2b0
[<ffffffff81181b42>] vfs_mkdir+0xa2/0x120
[<ffffffff81182533>] sys_mkdirat+0xa3/0xf0
[<ffffffff81182594>] sys_mkdir+0x14/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    17421 ms
Event count:                       1
git                  sleep_on_buffer       17421 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f9c6e>] add_dirent_to_buf+0x12e/0x1d0
[<ffffffff811fa7e4>] ext4_add_entry+0x124/0x2d0
[<ffffffff811fb4bd>] ext4_mkdir+0x1ad/0x2b0
[<ffffffff81181b42>] vfs_mkdir+0xa2/0x120
[<ffffffff81182533>] sys_mkdirat+0xa3/0xf0
[<ffffffff81182594>] sys_mkdir+0x14/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    17385 ms
Event count:                       7
git                  sleep_on_buffer        1409 ms
git                  sleep_on_buffer        1128 ms
git                  sleep_on_buffer        6323 ms
rsync                sleep_on_buffer        4503 ms
git                  sleep_on_buffer        1204 ms
mv                   sleep_on_buffer        1190 ms
git                  sleep_on_buffer        1628 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff81227556>] ext4_mb_mark_diskspace_used+0x76/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff811114b7>] filemap_flush+0x17/0x20
[<ffffffff811f0354>] ext4_alloc_da_blocks+0x44/0xa0
[<ffffffff811fb960>] ext4_rename+0x1b0/0x980
[<ffffffff8117d4ed>] vfs_rename_other+0xcd/0x120
[<ffffffff81180126>] vfs_rename+0xb6/0x240
[<ffffffff81182c96>] sys_renameat+0x386/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    13983 ms
Event count:                       3
patch                sleep_on_buffer        1511 ms
cp                   sleep_on_buffer        2096 ms
git                  sleep_on_buffer       10376 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811ec0e8>] __ext4_new_inode+0x558/0x10c0
[<ffffffff811fb456>] ext4_mkdir+0x146/0x2b0
[<ffffffff81181b42>] vfs_mkdir+0xa2/0x120
[<ffffffff81182533>] sys_mkdirat+0xa3/0xf0
[<ffffffff81182594>] sys_mkdir+0x14/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    13603 ms
Event count:                       4
git                  sleep_on_buffer        2160 ms
gen-report.sh        sleep_on_buffer        4730 ms
evolution            sleep_on_buffer        4697 ms
git                  sleep_on_buffer        2016 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811fb6cf>] ext4_orphan_add+0x10f/0x1f0
[<ffffffff811f31a4>] ext4_setattr+0x3d4/0x640
[<ffffffff8118d132>] notify_change+0x1f2/0x3c0
[<ffffffff811715d9>] do_truncate+0x59/0xa0
[<ffffffff8117d186>] handle_truncate+0x66/0xa0
[<ffffffff81181306>] do_last+0x626/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    13264 ms
Event count:                       8
ls                   sleep_on_buffer        1116 ms
ls                   sleep_on_buffer        1756 ms
ls                   sleep_on_buffer        1901 ms
ls                   sleep_on_buffer        2033 ms
ls                   sleep_on_buffer        1373 ms
ls                   sleep_on_buffer        3046 ms
offlineimap          sleep_on_buffer        1011 ms
imapd                sleep_on_buffer        1028 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811eefee>] __ext4_get_inode_loc+0x1be/0x3f0
[<ffffffff811f0d2e>] ext4_iget+0x7e/0x940
[<ffffffff811f9796>] ext4_lookup.part.31+0xc6/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff8117ca63>] __lookup_hash+0x33/0x40
[<ffffffff8158464f>] lookup_slow+0x40/0xa4
[<ffffffff8117efb2>] path_lookupat+0x222/0x780
[<ffffffff8117f53f>] filename_lookup+0x2f/0xc0
[<ffffffff81182074>] user_path_at_empty+0x54/0xa0
[<ffffffff811820cc>] user_path_at+0xc/0x10
[<ffffffff81177b39>] vfs_fstatat+0x49/0xa0
[<ffffffff81177bc6>] vfs_stat+0x16/0x20
[<ffffffff81177ce5>] sys_newstat+0x15/0x30
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    12710 ms
Event count:                       6
git                  sleep_on_buffer        1364 ms
git                  sleep_on_buffer        1612 ms
git                  sleep_on_buffer        4321 ms
git                  sleep_on_buffer        2185 ms
git                  sleep_on_buffer        2126 ms
git                  sleep_on_buffer        1102 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811e7818>] ext4_wait_block_bitmap+0xb8/0xc0
[<ffffffff8122462e>] ext4_mb_init_cache+0x1ce/0x730
[<ffffffff8122509a>] ext4_mb_load_buddy+0x26a/0x350
[<ffffffff8122676b>] ext4_mb_find_by_goal+0x9b/0x2d0
[<ffffffff81227109>] ext4_mb_regular_allocator+0x59/0x430
[<ffffffff81228db6>] ext4_mb_new_blocks+0x3f6/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4185>] mpage_add_bh_to_extent+0x45/0xa0
[<ffffffff811f4505>] write_cache_pages_da+0x325/0x4b0
[<ffffffff811f49e5>] ext4_da_writepages+0x355/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff81110c3a>] filemap_write_and_wait_range+0x4a/0x70
[<ffffffff811ea54a>] ext4_sync_file+0x6a/0x2d0
[<ffffffff811a1758>] do_fsync+0x58/0x80
[<ffffffff811a1abb>] sys_fsync+0xb/0x10
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    12397 ms
Event count:                       7
jbd2/dm-0-8          sleep_on_buffer        1516 ms
jbd2/dm-0-8          sleep_on_buffer        1153 ms
jbd2/dm-0-8          sleep_on_buffer        1307 ms
jbd2/dm-0-8          sleep_on_buffer        1518 ms
jbd2/dm-0-8          sleep_on_buffer        1513 ms
jbd2/dm-0-8          sleep_on_buffer        1516 ms
jbd2/dm-0-8          sleep_on_buffer        3874 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff8123b488>] jbd2_journal_commit_transaction+0x818/0x13c0
[<ffffffff81240513>] kjournald2+0xb3/0x240
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    12361 ms
Event count:                       4
git                  sleep_on_buffer        1076 ms
scp                  sleep_on_buffer        1517 ms
rsync                sleep_on_buffer        5018 ms
rsync                sleep_on_buffer        4750 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f9c6e>] add_dirent_to_buf+0x12e/0x1d0
[<ffffffff811fa7e4>] ext4_add_entry+0x124/0x2d0
[<ffffffff811fa9b6>] ext4_add_nondir+0x26/0x80
[<ffffffff811fac9f>] ext4_create+0xff/0x190
[<ffffffff81180aa5>] vfs_create+0xb5/0x120
[<ffffffff81180c4e>] lookup_open+0x13e/0x1d0
[<ffffffff81180fe7>] do_last+0x307/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    12175 ms
Event count:                       3
patch                sleep_on_buffer        1546 ms
patch                sleep_on_buffer        7218 ms
patch                sleep_on_buffer        3411 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f9dd2>] ext4_dx_add_entry+0xc2/0x590
[<ffffffff811fa925>] ext4_add_entry+0x265/0x2d0
[<ffffffff811fb4bd>] ext4_mkdir+0x1ad/0x2b0
[<ffffffff81181b42>] vfs_mkdir+0xa2/0x120
[<ffffffff81182533>] sys_mkdirat+0xa3/0xf0
[<ffffffff81182594>] sys_mkdir+0x14/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    11862 ms
Event count:                       4
bash                 sleep_on_buffer        5441 ms
offlineimap          sleep_on_buffer        2780 ms
pool                 sleep_on_buffer        1529 ms
pool                 sleep_on_buffer        2112 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811fb67b>] ext4_orphan_add+0xbb/0x1f0
[<ffffffff811f31a4>] ext4_setattr+0x3d4/0x640
[<ffffffff8118d132>] notify_change+0x1f2/0x3c0
[<ffffffff811715d9>] do_truncate+0x59/0xa0
[<ffffffff8117d186>] handle_truncate+0x66/0xa0
[<ffffffff81181306>] do_last+0x626/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    11695 ms
Event count:                       1
git                  sleep_on_buffer       11695 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812275bf>] ext4_mb_mark_diskspace_used+0xdf/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4185>] mpage_add_bh_to_extent+0x45/0xa0
[<ffffffff811f4505>] write_cache_pages_da+0x325/0x4b0
[<ffffffff811f49e5>] ext4_da_writepages+0x355/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff811114b7>] filemap_flush+0x17/0x20
[<ffffffff811f0354>] ext4_alloc_da_blocks+0x44/0xa0
[<ffffffff811fb960>] ext4_rename+0x1b0/0x980
[<ffffffff8117d4ed>] vfs_rename_other+0xcd/0x120
[<ffffffff81180126>] vfs_rename+0xb6/0x240
[<ffffffff81182c96>] sys_renameat+0x386/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    11452 ms
Event count:                       8
compare-mmtests      sleep_on_buffer        1407 ms
compare-mmtests      sleep_on_buffer        1439 ms
find                 sleep_on_buffer        2063 ms
git                  sleep_on_buffer        1128 ms
cp                   sleep_on_buffer        1041 ms
rsync                sleep_on_buffer        1533 ms
rsync                sleep_on_buffer        1070 ms
FileLoader           sleep_on_buffer        1771 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f0227>] ext4_bread+0x57/0x80
[<ffffffff811f7b21>] __ext4_read_dirblock+0x41/0x1d0
[<ffffffff811f849b>] htree_dirblock_to_tree+0x3b/0x1a0
[<ffffffff811f8d7f>] ext4_htree_fill_tree+0x7f/0x220
[<ffffffff811e8d67>] ext4_dx_readdir+0x1a7/0x440
[<ffffffff811e9572>] ext4_readdir+0x422/0x4e0
[<ffffffff811849a0>] vfs_readdir+0xb0/0xe0
[<ffffffff81184ae9>] sys_getdents+0x89/0x110
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     9483 ms
Event count:                       3
offlineimap          sleep_on_buffer        1768 ms
dconf-service        sleep_on_buffer        6600 ms
git                  sleep_on_buffer        1115 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f9505>] ext4_find_entry+0x325/0x4f0
[<ffffffff811fb8b5>] ext4_rename+0x105/0x980
[<ffffffff8117d4ed>] vfs_rename_other+0xcd/0x120
[<ffffffff81180126>] vfs_rename+0xb6/0x240
[<ffffffff81182c96>] sys_renameat+0x386/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     8201 ms
Event count:                       1
systemd-journal      sleep_on_buffer        8201 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812275bf>] ext4_mb_mark_diskspace_used+0xdf/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff8121fd1f>] ext4_fallocate+0x1cf/0x420
[<ffffffff81171b32>] do_fallocate+0x112/0x190
[<ffffffff81171c02>] sys_fallocate+0x52/0x90
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     7699 ms
Event count:                       2
git                  sleep_on_buffer        3475 ms
git                  sleep_on_buffer        4224 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811fc898>] ext4_orphan_del+0x1a8/0x1e0
[<ffffffff811f4fbb>] ext4_evict_inode+0x30b/0x4c0
[<ffffffff8118bcbf>] evict+0xaf/0x1b0
[<ffffffff8118c543>] iput_final+0xd3/0x160
[<ffffffff8118c609>] iput+0x39/0x50
[<ffffffff81187248>] dentry_iput+0x98/0xe0
[<ffffffff81188ac8>] dput+0x128/0x230
[<ffffffff81182c4a>] sys_renameat+0x33a/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     7564 ms
Event count:                       2
tar                  sleep_on_buffer        1286 ms
rm                   sleep_on_buffer        6278 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f9505>] ext4_find_entry+0x325/0x4f0
[<ffffffff811fc3e1>] ext4_unlink+0x41/0x350
[<ffffffff8117daef>] vfs_unlink.part.31+0x7f/0xe0
[<ffffffff8117f9d7>] vfs_unlink+0x37/0x50
[<ffffffff8117fbff>] do_unlinkat+0x20f/0x260
[<ffffffff811825dd>] sys_unlinkat+0x1d/0x40
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     6596 ms
Event count:                       1
acroread             sleep_on_buffer        6596 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f51b1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff8118b789>] update_time+0x79/0xc0
[<ffffffff8118ba31>] touch_atime+0x161/0x170
[<ffffffff811105e3>] do_generic_file_read.constprop.35+0x363/0x440
[<ffffffff81111359>] generic_file_aio_read+0xd9/0x220
[<ffffffff81172b53>] do_sync_read+0xa3/0xe0
[<ffffffff8117327b>] vfs_read+0xab/0x170
[<ffffffff8117338d>] sys_read+0x4d/0x90
[<ffffffff8159d81c>] sysenter_dispatch+0x7/0x21
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     6589 ms
Event count:                       1
tar                  sleep_on_buffer        6589 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f9dd2>] ext4_dx_add_entry+0xc2/0x590
[<ffffffff811fa925>] ext4_add_entry+0x265/0x2d0
[<ffffffff811fb4bd>] ext4_mkdir+0x1ad/0x2b0
[<ffffffff81181b42>] vfs_mkdir+0xa2/0x120
[<ffffffff81182533>] sys_mkdirat+0xa3/0xf0
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     6272 ms
Event count:                       6
pool                 wait_on_page_bit       1005 ms
pool                 wait_on_page_bit       1015 ms
StreamT~ns #908      sleep_on_buffer        1086 ms
Cache I/O            wait_on_page_bit       1091 ms
StreamT~ns #138      wait_on_page_bit       1046 ms
offlineimap          sleep_on_buffer        1029 ms
[<ffffffff810a04ed>] futex_wait+0x17d/0x270
[<ffffffff810a21ac>] do_futex+0x7c/0x1b0
[<ffffffff810a241d>] sys_futex+0x13d/0x190
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     6237 ms
Event count:                       1
offlineimap          sleep_on_buffer        6237 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812291ba>] ext4_free_blocks+0x36a/0xc10
[<ffffffff8121bd16>] ext4_remove_blocks+0x256/0x2d0
[<ffffffff8121bf95>] ext4_ext_rm_leaf+0x205/0x520
[<ffffffff8121dcbc>] ext4_ext_remove_space+0x4dc/0x750
[<ffffffff8121fb0b>] ext4_ext_truncate+0x19b/0x1e0
[<ffffffff811ef535>] ext4_truncate.part.59+0xd5/0xf0
[<ffffffff811f0614>] ext4_truncate+0x34/0x90
[<ffffffff811f2f5d>] ext4_setattr+0x18d/0x640
[<ffffffff8118d132>] notify_change+0x1f2/0x3c0
[<ffffffff811715d9>] do_truncate+0x59/0xa0
[<ffffffff8117d186>] handle_truncate+0x66/0xa0
[<ffffffff81181306>] do_last+0x626/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     6192 ms
Event count:                       4
ls                   sleep_on_buffer        1679 ms
ls                   sleep_on_buffer        1746 ms
ls                   sleep_on_buffer        1076 ms
ls                   sleep_on_buffer        1691 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff811ef20d>] __ext4_get_inode_loc+0x3dd/0x3f0
[<ffffffff811f0d2e>] ext4_iget+0x7e/0x940
[<ffffffff811f9796>] ext4_lookup.part.31+0xc6/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff8117ca63>] __lookup_hash+0x33/0x40
[<ffffffff8158464f>] lookup_slow+0x40/0xa4
[<ffffffff8117efb2>] path_lookupat+0x222/0x780
[<ffffffff8117f53f>] filename_lookup+0x2f/0xc0
[<ffffffff81182074>] user_path_at_empty+0x54/0xa0
[<ffffffff811820cc>] user_path_at+0xc/0x10
[<ffffffff81177b39>] vfs_fstatat+0x49/0xa0
[<ffffffff81177bc6>] vfs_stat+0x16/0x20
[<ffffffff81177ce5>] sys_newstat+0x15/0x30
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     5989 ms
Event count:                       3
flush-8:0            sleep_on_buffer        1184 ms
flush-8:0            sleep_on_buffer        1548 ms
flush-8:0            sleep_on_buffer        3257 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff81227556>] ext4_mb_mark_diskspace_used+0x76/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4185>] mpage_add_bh_to_extent+0x45/0xa0
[<ffffffff811f4505>] write_cache_pages_da+0x325/0x4b0
[<ffffffff811f49e5>] ext4_da_writepages+0x355/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119de90>] wb_do_writeback+0x190/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     5770 ms
Event count:                       1
git                  sleep_on_buffer        5770 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff8121a8f2>] ext4_ext_get_access.isra.39+0x22/0x30
[<ffffffff8121bf74>] ext4_ext_rm_leaf+0x1e4/0x520
[<ffffffff8121dcbc>] ext4_ext_remove_space+0x4dc/0x750
[<ffffffff8121fb0b>] ext4_ext_truncate+0x19b/0x1e0
[<ffffffff811ef535>] ext4_truncate.part.59+0xd5/0xf0
[<ffffffff811f0614>] ext4_truncate+0x34/0x90
[<ffffffff811f513e>] ext4_evict_inode+0x48e/0x4c0
[<ffffffff8118bcbf>] evict+0xaf/0x1b0
[<ffffffff8118c543>] iput_final+0xd3/0x160
[<ffffffff8118c609>] iput+0x39/0x50
[<ffffffff81187248>] dentry_iput+0x98/0xe0
[<ffffffff81188ac8>] dput+0x128/0x230
[<ffffffff81182c4a>] sys_renameat+0x33a/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     4477 ms
Event count:                       2
offlineimap          sleep_on_buffer        2154 ms
DOM Worker           sleep_on_buffer        2323 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f9505>] ext4_find_entry+0x325/0x4f0
[<ffffffff811f96f9>] ext4_lookup.part.31+0x29/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff81180bd8>] lookup_open+0xc8/0x1d0
[<ffffffff81180fe7>] do_last+0x307/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     4428 ms
Event count:                       3
compare-mmtests      sleep_on_buffer        1725 ms
compare-mmtests      sleep_on_buffer        1634 ms
cp                   sleep_on_buffer        1069 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811eefee>] __ext4_get_inode_loc+0x1be/0x3f0
[<ffffffff811f0d2e>] ext4_iget+0x7e/0x940
[<ffffffff811f9796>] ext4_lookup.part.31+0xc6/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff8117ca63>] __lookup_hash+0x33/0x40
[<ffffffff8158464f>] lookup_slow+0x40/0xa4
[<ffffffff8117efb2>] path_lookupat+0x222/0x780
[<ffffffff8117f53f>] filename_lookup+0x2f/0xc0
[<ffffffff81182074>] user_path_at_empty+0x54/0xa0
[<ffffffff811820cc>] user_path_at+0xc/0x10
[<ffffffff81177b39>] vfs_fstatat+0x49/0xa0
[<ffffffff81177ba9>] vfs_lstat+0x19/0x20
[<ffffffff81177d15>] sys_newlstat+0x15/0x30
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     4168 ms
Event count:                       3
git                  sleep_on_buffer        1866 ms
git                  sleep_on_buffer        1070 ms
git                  sleep_on_buffer        1232 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811e7818>] ext4_wait_block_bitmap+0xb8/0xc0
[<ffffffff8122462e>] ext4_mb_init_cache+0x1ce/0x730
[<ffffffff8122509a>] ext4_mb_load_buddy+0x26a/0x350
[<ffffffff81227247>] ext4_mb_regular_allocator+0x197/0x430
[<ffffffff81228db6>] ext4_mb_new_blocks+0x3f6/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4185>] mpage_add_bh_to_extent+0x45/0xa0
[<ffffffff811f4505>] write_cache_pages_da+0x325/0x4b0
[<ffffffff811f49e5>] ext4_da_writepages+0x355/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff81110c3a>] filemap_write_and_wait_range+0x4a/0x70
[<ffffffff811ea54a>] ext4_sync_file+0x6a/0x2d0
[<ffffffff811a1758>] do_fsync+0x58/0x80
[<ffffffff811a1abb>] sys_fsync+0xb/0x10
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     3940 ms
Event count:                       2
evolution            sleep_on_buffer        1978 ms
git                  sleep_on_buffer        1962 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812291ba>] ext4_free_blocks+0x36a/0xc10
[<ffffffff8121bd16>] ext4_remove_blocks+0x256/0x2d0
[<ffffffff8121bf95>] ext4_ext_rm_leaf+0x205/0x520
[<ffffffff8121dcbc>] ext4_ext_remove_space+0x4dc/0x750
[<ffffffff8121fb0b>] ext4_ext_truncate+0x19b/0x1e0
[<ffffffff811ef535>] ext4_truncate.part.59+0xd5/0xf0
[<ffffffff811f0614>] ext4_truncate+0x34/0x90
[<ffffffff811f2f5d>] ext4_setattr+0x18d/0x640
[<ffffffff8118d132>] notify_change+0x1f2/0x3c0
[<ffffffff811715d9>] do_truncate+0x59/0xa0
[<ffffffff8117d186>] handle_truncate+0x66/0xa0
[<ffffffff81181306>] do_last+0x626/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     3802 ms
Event count:                       2
git                  sleep_on_buffer        1933 ms
git                  sleep_on_buffer        1869 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f9505>] ext4_find_entry+0x325/0x4f0
[<ffffffff811f96f9>] ext4_lookup.part.31+0x29/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff8117ca63>] __lookup_hash+0x33/0x40
[<ffffffff8158464f>] lookup_slow+0x40/0xa4
[<ffffffff8117efb2>] path_lookupat+0x222/0x780
[<ffffffff8117f53f>] filename_lookup+0x2f/0xc0
[<ffffffff81182074>] user_path_at_empty+0x54/0xa0
[<ffffffff811820cc>] user_path_at+0xc/0x10
[<ffffffff81171cd7>] sys_faccessat+0x97/0x220
[<ffffffff81171e73>] sys_access+0x13/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     3792 ms
Event count:                       3
cc1                  sleep_on_buffer        1161 ms
compare-mmtests      sleep_on_buffer        1088 ms
cc1                  sleep_on_buffer        1543 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f9505>] ext4_find_entry+0x325/0x4f0
[<ffffffff811f96f9>] ext4_lookup.part.31+0x29/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff8117ca63>] __lookup_hash+0x33/0x40
[<ffffffff8158464f>] lookup_slow+0x40/0xa4
[<ffffffff8117e76a>] link_path_walk+0x7ca/0x8e0
[<ffffffff81181596>] path_openat+0x96/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     3783 ms
Event count:                       2
compare-mmtests      sleep_on_buffer        2237 ms
compare-mmtests      sleep_on_buffer        1546 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811eefee>] __ext4_get_inode_loc+0x1be/0x3f0
[<ffffffff811f0d2e>] ext4_iget+0x7e/0x940
[<ffffffff811f9796>] ext4_lookup.part.31+0xc6/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff8117ca63>] __lookup_hash+0x33/0x40
[<ffffffff8158464f>] lookup_slow+0x40/0xa4
[<ffffffff8117e76a>] link_path_walk+0x7ca/0x8e0
[<ffffffff81181596>] path_openat+0x96/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117285f>] sys_openat+0xf/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     3692 ms
Event count:                       2
git                  sleep_on_buffer        1667 ms
git                  sleep_on_buffer        2025 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811e7818>] ext4_wait_block_bitmap+0xb8/0xc0
[<ffffffff8122462e>] ext4_mb_init_cache+0x1ce/0x730
[<ffffffff8122509a>] ext4_mb_load_buddy+0x26a/0x350
[<ffffffff8122676b>] ext4_mb_find_by_goal+0x9b/0x2d0
[<ffffffff81227109>] ext4_mb_regular_allocator+0x59/0x430
[<ffffffff81228db6>] ext4_mb_new_blocks+0x3f6/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff81110c3a>] filemap_write_and_wait_range+0x4a/0x70
[<ffffffff811ea54a>] ext4_sync_file+0x6a/0x2d0
[<ffffffff811a1758>] do_fsync+0x58/0x80
[<ffffffff811a1abb>] sys_fsync+0xb/0x10
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     3533 ms
Event count:                       1
pool                 sleep_on_buffer        3533 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f9dd2>] ext4_dx_add_entry+0xc2/0x590
[<ffffffff811fa925>] ext4_add_entry+0x265/0x2d0
[<ffffffff811fbf16>] ext4_rename+0x766/0x980
[<ffffffff8117d4ed>] vfs_rename_other+0xcd/0x120
[<ffffffff81180126>] vfs_rename+0xb6/0x240
[<ffffffff81182c96>] sys_renameat+0x386/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     3329 ms
Event count:                       3
folder-markup.s      sleep_on_buffer        1147 ms
imapd                sleep_on_buffer        1053 ms
gnuplot              sleep_on_buffer        1129 ms
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2861 ms
Event count:                       2
chmod                sleep_on_buffer        1227 ms
chmod                sleep_on_buffer        1634 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f51b1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff811f313c>] ext4_setattr+0x36c/0x640
[<ffffffff8118d132>] notify_change+0x1f2/0x3c0
[<ffffffff8117137b>] chmod_common+0xab/0xb0
[<ffffffff811721a1>] sys_fchmodat+0x41/0xa0
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2822 ms
Event count:                       1
gnome-terminal       sleep_on_buffer        2822 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811eb856>] ext4_free_inode+0x2b6/0x5f0
[<ffffffff811f4fe1>] ext4_evict_inode+0x331/0x4c0
[<ffffffff8118bcbf>] evict+0xaf/0x1b0
[<ffffffff8118c543>] iput_final+0xd3/0x160
[<ffffffff8118c609>] iput+0x39/0x50
[<ffffffff81187248>] dentry_iput+0x98/0xe0
[<ffffffff81188ac8>] dput+0x128/0x230
[<ffffffff81174368>] __fput+0x178/0x240
[<ffffffff81174439>] ____fput+0x9/0x10
[<ffffffff81065dc7>] task_work_run+0x97/0xd0
[<ffffffff81002cbc>] do_notify_resume+0x9c/0xb0
[<ffffffff8159c46a>] int_signal+0x12/0x17
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2769 ms
Event count:                       1
imapd                sleep_on_buffer        2769 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f51b1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff811f313c>] ext4_setattr+0x36c/0x640
[<ffffffff8118d132>] notify_change+0x1f2/0x3c0
[<ffffffff8117137b>] chmod_common+0xab/0xb0
[<ffffffff811721a1>] sys_fchmodat+0x41/0xa0
[<ffffffff81172214>] sys_chmod+0x14/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2727 ms
Event count:                       1
mv                   sleep_on_buffer        2727 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f9c6e>] add_dirent_to_buf+0x12e/0x1d0
[<ffffffff811fa7e4>] ext4_add_entry+0x124/0x2d0
[<ffffffff811fbf16>] ext4_rename+0x766/0x980
[<ffffffff8117d4ed>] vfs_rename_other+0xcd/0x120
[<ffffffff81180126>] vfs_rename+0xb6/0x240
[<ffffffff81182c96>] sys_renameat+0x386/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2675 ms
Event count:                       1
flush-8:0            sleep_on_buffer        2675 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812275bf>] ext4_mb_mark_diskspace_used+0xdf/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4185>] mpage_add_bh_to_extent+0x45/0xa0
[<ffffffff811f4505>] write_cache_pages_da+0x325/0x4b0
[<ffffffff811f49e5>] ext4_da_writepages+0x355/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119de90>] wb_do_writeback+0x190/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2658 ms
Event count:                       1
patch                sleep_on_buffer        2658 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff8121c4ad>] ext4_ext_tree_init+0x2d/0x40
[<ffffffff811ecc06>] __ext4_new_inode+0x1076/0x10c0
[<ffffffff811fb456>] ext4_mkdir+0x146/0x2b0
[<ffffffff81181b42>] vfs_mkdir+0xa2/0x120
[<ffffffff81182533>] sys_mkdirat+0xa3/0xf0
[<ffffffff81182594>] sys_mkdir+0x14/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2603 ms
Event count:                       2
flush-8:0            sleep_on_buffer        1162 ms
flush-8:0            sleep_on_buffer        1441 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff8121a8f2>] ext4_ext_get_access.isra.39+0x22/0x30
[<ffffffff8121d24c>] ext4_ext_insert_extent+0x21c/0x420
[<ffffffff8121f60a>] ext4_ext_map_blocks+0x69a/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119c5d7>] wb_check_old_data_flush+0x97/0xa0
[<ffffffff8119de49>] wb_do_writeback+0x149/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2580 ms
Event count:                       2
rm                   sleep_on_buffer        1265 ms
rm                   sleep_on_buffer        1315 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811e7818>] ext4_wait_block_bitmap+0xb8/0xc0
[<ffffffff811e8265>] ext4_read_block_bitmap+0x35/0x60
[<ffffffff8122908c>] ext4_free_blocks+0x23c/0xc10
[<ffffffff8121bd16>] ext4_remove_blocks+0x256/0x2d0
[<ffffffff8121bf95>] ext4_ext_rm_leaf+0x205/0x520
[<ffffffff8121dcbc>] ext4_ext_remove_space+0x4dc/0x750
[<ffffffff8121fb0b>] ext4_ext_truncate+0x19b/0x1e0
[<ffffffff811ef535>] ext4_truncate.part.59+0xd5/0xf0
[<ffffffff811f0614>] ext4_truncate+0x34/0x90
[<ffffffff811f513e>] ext4_evict_inode+0x48e/0x4c0
[<ffffffff8118bcbf>] evict+0xaf/0x1b0
[<ffffffff8118c543>] iput_final+0xd3/0x160
[<ffffffff8118c609>] iput+0x39/0x50
[<ffffffff8117fbe1>] do_unlinkat+0x1f1/0x260
[<ffffffff811825dd>] sys_unlinkat+0x1d/0x40
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2542 ms
Event count:                       2
flush-8:16           get_request            1316 ms
flush-8:16           get_request            1226 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff812a4b1f>] generic_make_request.part.59+0x6f/0xa0
[<ffffffff812a5050>] generic_make_request+0x60/0x70
[<ffffffff812a50c7>] submit_bio+0x67/0x130
[<ffffffff811a30fb>] submit_bh+0xfb/0x130
[<ffffffff811a6058>] __block_write_full_page+0x1c8/0x340
[<ffffffff811a62a3>] block_write_full_page_endio+0xd3/0x110
[<ffffffff811a62f0>] block_write_full_page+0x10/0x20
[<ffffffff811aa0c3>] blkdev_writepage+0x13/0x20
[<ffffffff81119292>] __writepage+0x12/0x40
[<ffffffff81119a96>] write_cache_pages+0x206/0x460
[<ffffffff81119d35>] generic_writepages+0x45/0x70
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119de90>] wb_do_writeback+0x190/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2504 ms
Event count:                       1
acroread             sleep_on_buffer        2504 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f51b1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff8118b789>] update_time+0x79/0xc0
[<ffffffff8118ba31>] touch_atime+0x161/0x170
[<ffffffff811105e3>] do_generic_file_read.constprop.35+0x363/0x440
[<ffffffff81111359>] generic_file_aio_read+0xd9/0x220
[<ffffffff81172b53>] do_sync_read+0xa3/0xe0
[<ffffffff8117327b>] vfs_read+0xab/0x170
[<ffffffff8117338d>] sys_read+0x4d/0x90
[<ffffffff8159dc79>] ia32_sysret+0x0/0x5
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2477 ms
Event count:                       2
git                  sleep_on_buffer        1200 ms
firefox              sleep_on_buffer        1277 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812291ba>] ext4_free_blocks+0x36a/0xc10
[<ffffffff8121bd16>] ext4_remove_blocks+0x256/0x2d0
[<ffffffff8121bf95>] ext4_ext_rm_leaf+0x205/0x520
[<ffffffff8121dcbc>] ext4_ext_remove_space+0x4dc/0x750
[<ffffffff8121fb0b>] ext4_ext_truncate+0x19b/0x1e0
[<ffffffff811ef535>] ext4_truncate.part.59+0xd5/0xf0
[<ffffffff811f0614>] ext4_truncate+0x34/0x90
[<ffffffff811f513e>] ext4_evict_inode+0x48e/0x4c0
[<ffffffff8118bcbf>] evict+0xaf/0x1b0
[<ffffffff8118c543>] iput_final+0xd3/0x160
[<ffffffff8118c609>] iput+0x39/0x50
[<ffffffff8117fbe1>] do_unlinkat+0x1f1/0x260
[<ffffffff81182611>] sys_unlink+0x11/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2168 ms
Event count:                       2
xchat                sleep_on_buffer        1096 ms
xchat                sleep_on_buffer        1072 ms
[<ffffffff81185476>] do_poll.isra.7+0x1c6/0x290
[<ffffffff81186331>] do_sys_poll+0x191/0x200
[<ffffffff81186466>] sys_poll+0x66/0x100
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2156 ms
Event count:                       2
git                  sleep_on_buffer        1076 ms
git                  sleep_on_buffer        1080 ms
[<ffffffff811383b2>] unmap_single_vma+0x82/0x100
[<ffffffff81138c2c>] unmap_vmas+0x4c/0xa0
[<ffffffff811408f0>] exit_mmap+0x90/0x170
[<ffffffff81043ee5>] mmput.part.27+0x45/0x110
[<ffffffff81043fcd>] mmput+0x1d/0x30
[<ffffffff8104be22>] exit_mm+0x132/0x180
[<ffffffff8104bfc5>] do_exit+0x155/0x460
[<ffffffff8104c34f>] do_group_exit+0x3f/0xa0
[<ffffffff8104c3c2>] sys_exit_group+0x12/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2141 ms
Event count:                       2
imapd                sleep_on_buffer        1057 ms
ntpd                 wait_on_page_bit_killable   1084 ms
[<ffffffff81185a99>] do_select+0x4c9/0x5d0
[<ffffffff81185d58>] core_sys_select+0x1b8/0x2f0
[<ffffffff811860d6>] sys_select+0xb6/0x100
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2130 ms
Event count:                       2
git                  sleep_on_buffer        1110 ms
git                  sleep_on_buffer        1020 ms
[<ffffffff811f4ccb>] ext4_evict_inode+0x1b/0x4c0
[<ffffffff8118bcbf>] evict+0xaf/0x1b0
[<ffffffff8118c543>] iput_final+0xd3/0x160
[<ffffffff8118c609>] iput+0x39/0x50
[<ffffffff81187248>] dentry_iput+0x98/0xe0
[<ffffffff81188ac8>] dput+0x128/0x230
[<ffffffff81182c4a>] sys_renameat+0x33a/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2092 ms
Event count:                       1
flush-8:0            sleep_on_buffer        2092 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff8121a97b>] __ext4_ext_dirty.isra.40+0x7b/0x80
[<ffffffff8121d69b>] ext4_split_extent_at+0x24b/0x390
[<ffffffff8121e038>] ext4_split_extent.isra.47+0x108/0x130
[<ffffffff8121e3ae>] ext4_ext_convert_to_initialized+0x15e/0x590
[<ffffffff8121ee7b>] ext4_ext_handle_uninitialized_extents+0x2fb/0x3c0
[<ffffffff8121f547>] ext4_ext_map_blocks+0x5d7/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119de90>] wb_do_writeback+0x190/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2079 ms
Event count:                       2
offlineimap          sleep_on_buffer        1030 ms
pool                 wait_on_page_bit       1049 ms
[<ffffffff811ea6e5>] ext4_sync_file+0x205/0x2d0
[<ffffffff811a1758>] do_fsync+0x58/0x80
[<ffffffff811a1abb>] sys_fsync+0xb/0x10
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2066 ms
Event count:                       2
folder-markup.s      sleep_on_buffer        1024 ms
tee                  sleep_on_buffer        1042 ms
[<ffffffff8117b90e>] pipe_read+0x20e/0x340
[<ffffffff81172b53>] do_sync_read+0xa3/0xe0
[<ffffffff8117327b>] vfs_read+0xab/0x170
[<ffffffff8117338d>] sys_read+0x4d/0x90
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2047 ms
Event count:                       1
Cache I/O            sleep_on_buffer        2047 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812291e1>] ext4_free_blocks+0x391/0xc10
[<ffffffff8121bd16>] ext4_remove_blocks+0x256/0x2d0
[<ffffffff8121bf95>] ext4_ext_rm_leaf+0x205/0x520
[<ffffffff8121dcbc>] ext4_ext_remove_space+0x4dc/0x750
[<ffffffff8121fb0b>] ext4_ext_truncate+0x19b/0x1e0
[<ffffffff811ef535>] ext4_truncate.part.59+0xd5/0xf0
[<ffffffff811f0614>] ext4_truncate+0x34/0x90
[<ffffffff811f2f5d>] ext4_setattr+0x18d/0x640
[<ffffffff8118d132>] notify_change+0x1f2/0x3c0
[<ffffffff811715d9>] do_truncate+0x59/0xa0
[<ffffffff81171979>] do_sys_ftruncate.constprop.14+0x109/0x170
[<ffffffff81171a09>] sys_ftruncate+0x9/0x10
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1977 ms
Event count:                       1
patch                sleep_on_buffer        1977 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f9505>] ext4_find_entry+0x325/0x4f0
[<ffffffff811f96f9>] ext4_lookup.part.31+0x29/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff8117ca63>] __lookup_hash+0x33/0x40
[<ffffffff8158464f>] lookup_slow+0x40/0xa4
[<ffffffff8117e76a>] link_path_walk+0x7ca/0x8e0
[<ffffffff8117ede3>] path_lookupat+0x53/0x780
[<ffffffff8117f53f>] filename_lookup+0x2f/0xc0
[<ffffffff81182074>] user_path_at_empty+0x54/0xa0
[<ffffffff811820cc>] user_path_at+0xc/0x10
[<ffffffff81177b39>] vfs_fstatat+0x49/0xa0
[<ffffffff81177ba9>] vfs_lstat+0x19/0x20
[<ffffffff81177d15>] sys_newlstat+0x15/0x30
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1839 ms
Event count:                       1
compare-mmtests      sleep_on_buffer        1839 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811eefee>] __ext4_get_inode_loc+0x1be/0x3f0
[<ffffffff811f0d2e>] ext4_iget+0x7e/0x940
[<ffffffff811f9796>] ext4_lookup.part.31+0xc6/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff8117ca63>] __lookup_hash+0x33/0x40
[<ffffffff8158464f>] lookup_slow+0x40/0xa4
[<ffffffff8117e76a>] link_path_walk+0x7ca/0x8e0
[<ffffffff81181596>] path_openat+0x96/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1819 ms
Event count:                       1
cp                   sleep_on_buffer        1819 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff811a4bc7>] write_dirty_buffer+0x67/0x70
[<ffffffff8123d035>] __flush_batch+0x45/0xa0
[<ffffffff8123dad6>] jbd2_log_do_checkpoint+0x1d6/0x220
[<ffffffff8123dba1>] __jbd2_log_wait_for_space+0x81/0x190
[<ffffffff812382d0>] start_this_handle+0x2e0/0x3e0
[<ffffffff81238590>] jbd2__journal_start.part.8+0x90/0x190
[<ffffffff812386d5>] jbd2__journal_start+0x45/0x50
[<ffffffff812205d1>] __ext4_journal_start_sb+0x81/0x170
[<ffffffff811ebf61>] __ext4_new_inode+0x3d1/0x10c0
[<ffffffff811fac5b>] ext4_create+0xbb/0x190
[<ffffffff81180aa5>] vfs_create+0xb5/0x120
[<ffffffff81180c4e>] lookup_open+0x13e/0x1d0
[<ffffffff81180fe7>] do_last+0x307/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1664 ms
Event count:                       1
flush-8:0            sleep_on_buffer        1664 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff8121a8f2>] ext4_ext_get_access.isra.39+0x22/0x30
[<ffffffff8121d125>] ext4_ext_insert_extent+0xf5/0x420
[<ffffffff8121f60a>] ext4_ext_map_blocks+0x69a/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119c5d7>] wb_check_old_data_flush+0x97/0xa0
[<ffffffff8119de49>] wb_do_writeback+0x149/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1635 ms
Event count:                       1
flush-8:0            sleep_on_buffer        1635 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812275bf>] ext4_mb_mark_diskspace_used+0xdf/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4185>] mpage_add_bh_to_extent+0x45/0xa0
[<ffffffff811f4505>] write_cache_pages_da+0x325/0x4b0
[<ffffffff811f49e5>] ext4_da_writepages+0x355/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119c5d7>] wb_check_old_data_flush+0x97/0xa0
[<ffffffff8119de49>] wb_do_writeback+0x149/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1591 ms
Event count:                       1
imapd                sleep_on_buffer        1591 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f9505>] ext4_find_entry+0x325/0x4f0
[<ffffffff811f96f9>] ext4_lookup.part.31+0x29/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff8117ca63>] __lookup_hash+0x33/0x40
[<ffffffff8117ca84>] lookup_hash+0x14/0x20
[<ffffffff8117fae3>] do_unlinkat+0xf3/0x260
[<ffffffff81182611>] sys_unlink+0x11/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1529 ms
Event count:                       1
ls                   sleep_on_buffer        1529 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f0227>] ext4_bread+0x57/0x80
[<ffffffff811f7b21>] __ext4_read_dirblock+0x41/0x1d0
[<ffffffff811f7f3d>] dx_probe+0x3d/0x410
[<ffffffff811f8dce>] ext4_htree_fill_tree+0xce/0x220
[<ffffffff811e8d67>] ext4_dx_readdir+0x1a7/0x440
[<ffffffff811e9572>] ext4_readdir+0x422/0x4e0
[<ffffffff811849a0>] vfs_readdir+0xb0/0xe0
[<ffffffff81184ae9>] sys_getdents+0x89/0x110
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1523 ms
Event count:                       1
gnuplot              sleep_on_buffer        1523 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff8121fad7>] ext4_ext_truncate+0x167/0x1e0
[<ffffffff811ef535>] ext4_truncate.part.59+0xd5/0xf0
[<ffffffff811f0614>] ext4_truncate+0x34/0x90
[<ffffffff811f2f5d>] ext4_setattr+0x18d/0x640
[<ffffffff8118d132>] notify_change+0x1f2/0x3c0
[<ffffffff811715d9>] do_truncate+0x59/0xa0
[<ffffffff8117d186>] handle_truncate+0x66/0xa0
[<ffffffff81181306>] do_last+0x626/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1519 ms
Event count:                       1
find                 sleep_on_buffer        1519 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f0227>] ext4_bread+0x57/0x80
[<ffffffff811f7b21>] __ext4_read_dirblock+0x41/0x1d0
[<ffffffff811f849b>] htree_dirblock_to_tree+0x3b/0x1a0
[<ffffffff811f8e42>] ext4_htree_fill_tree+0x142/0x220
[<ffffffff811e8d67>] ext4_dx_readdir+0x1a7/0x440
[<ffffffff811e9572>] ext4_readdir+0x422/0x4e0
[<ffffffff811849a0>] vfs_readdir+0xb0/0xe0
[<ffffffff81184ae9>] sys_getdents+0x89/0x110
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1509 ms
Event count:                       1
git                  sleep_on_buffer        1509 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811e7818>] ext4_wait_block_bitmap+0xb8/0xc0
[<ffffffff811e8265>] ext4_read_block_bitmap+0x35/0x60
[<ffffffff81227533>] ext4_mb_mark_diskspace_used+0x53/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff81110c3a>] filemap_write_and_wait_range+0x4a/0x70
[<ffffffff811ea54a>] ext4_sync_file+0x6a/0x2d0
[<ffffffff811a1758>] do_fsync+0x58/0x80
[<ffffffff811a1abb>] sys_fsync+0xb/0x10
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1470 ms
Event count:                       1
rm                   sleep_on_buffer        1470 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811eb4d0>] ext4_read_inode_bitmap+0x400/0x4d0
[<ffffffff811eb7ab>] ext4_free_inode+0x20b/0x5f0
[<ffffffff811f4fe1>] ext4_evict_inode+0x331/0x4c0
[<ffffffff8118bcbf>] evict+0xaf/0x1b0
[<ffffffff8118c543>] iput_final+0xd3/0x160
[<ffffffff8118c609>] iput+0x39/0x50
[<ffffffff8117fbe1>] do_unlinkat+0x1f1/0x260
[<ffffffff811825dd>] sys_unlinkat+0x1d/0x40
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1462 ms
Event count:                       1
imapd                sleep_on_buffer        1462 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811fbb37>] ext4_rename+0x387/0x980
[<ffffffff8117d4ed>] vfs_rename_other+0xcd/0x120
[<ffffffff81180126>] vfs_rename+0xb6/0x240
[<ffffffff81182c96>] sys_renameat+0x386/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1457 ms
Event count:                       1
git                  sleep_on_buffer        1457 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811e7818>] ext4_wait_block_bitmap+0xb8/0xc0
[<ffffffff811e8265>] ext4_read_block_bitmap+0x35/0x60
[<ffffffff81227533>] ext4_mb_mark_diskspace_used+0x53/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4185>] mpage_add_bh_to_extent+0x45/0xa0
[<ffffffff811f4505>] write_cache_pages_da+0x325/0x4b0
[<ffffffff811f49e5>] ext4_da_writepages+0x355/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff81110c3a>] filemap_write_and_wait_range+0x4a/0x70
[<ffffffff811ea54a>] ext4_sync_file+0x6a/0x2d0
[<ffffffff811a1758>] do_fsync+0x58/0x80
[<ffffffff811a1abb>] sys_fsync+0xb/0x10
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1395 ms
Event count:                       1
flush-8:0            sleep_on_buffer        1395 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812275bf>] ext4_mb_mark_diskspace_used+0xdf/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119c5d7>] wb_check_old_data_flush+0x97/0xa0
[<ffffffff8119de49>] wb_do_writeback+0x149/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1387 ms
Event count:                       1
git                  sleep_on_buffer        1387 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812275bf>] ext4_mb_mark_diskspace_used+0xdf/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff81110c3a>] filemap_write_and_wait_range+0x4a/0x70
[<ffffffff811ea54a>] ext4_sync_file+0x6a/0x2d0
[<ffffffff811a1758>] do_fsync+0x58/0x80
[<ffffffff811a1abb>] sys_fsync+0xb/0x10
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1378 ms
Event count:                       1
gnuplot              sleep_on_buffer        1378 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812275bf>] ext4_mb_mark_diskspace_used+0xdf/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff811114b7>] filemap_flush+0x17/0x20
[<ffffffff811f0354>] ext4_alloc_da_blocks+0x44/0xa0
[<ffffffff811ea201>] ext4_release_file+0x61/0xd0
[<ffffffff811742a0>] __fput+0xb0/0x240
[<ffffffff81174439>] ____fput+0x9/0x10
[<ffffffff81065de4>] task_work_run+0xb4/0xd0
[<ffffffff8104bffa>] do_exit+0x18a/0x460
[<ffffffff8104c34f>] do_group_exit+0x3f/0xa0
[<ffffffff8104c3c2>] sys_exit_group+0x12/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1337 ms
Event count:                       1
git                  sleep_on_buffer        1337 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811e7818>] ext4_wait_block_bitmap+0xb8/0xc0
[<ffffffff8122462e>] ext4_mb_init_cache+0x1ce/0x730
[<ffffffff81224c2e>] ext4_mb_init_group+0x9e/0x100
[<ffffffff81224d97>] ext4_mb_good_group+0x107/0x1a0
[<ffffffff81227233>] ext4_mb_regular_allocator+0x183/0x430
[<ffffffff81228db6>] ext4_mb_new_blocks+0x3f6/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4185>] mpage_add_bh_to_extent+0x45/0xa0
[<ffffffff811f4505>] write_cache_pages_da+0x325/0x4b0
[<ffffffff811f49e5>] ext4_da_writepages+0x355/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff81110c3a>] filemap_write_and_wait_range+0x4a/0x70
[<ffffffff811ea54a>] ext4_sync_file+0x6a/0x2d0
[<ffffffff811a1758>] do_fsync+0x58/0x80
[<ffffffff811a1abb>] sys_fsync+0xb/0x10
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1309 ms
Event count:                       1
flush-8:0            sleep_on_buffer        1309 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff8121a97b>] __ext4_ext_dirty.isra.40+0x7b/0x80
[<ffffffff8121d69b>] ext4_split_extent_at+0x24b/0x390
[<ffffffff8121e038>] ext4_split_extent.isra.47+0x108/0x130
[<ffffffff8121e3ae>] ext4_ext_convert_to_initialized+0x15e/0x590
[<ffffffff8121ee7b>] ext4_ext_handle_uninitialized_extents+0x2fb/0x3c0
[<ffffffff8121f547>] ext4_ext_map_blocks+0x5d7/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119c5d7>] wb_check_old_data_flush+0x97/0xa0
[<ffffffff8119de49>] wb_do_writeback+0x149/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1284 ms
Event count:                       1
cp                   sleep_on_buffer        1284 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f51b1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff8118b789>] update_time+0x79/0xc0
[<ffffffff8118ba31>] touch_atime+0x161/0x170
[<ffffffff81177e71>] sys_readlinkat+0xe1/0x120
[<ffffffff81177ec6>] sys_readlink+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1277 ms
Event count:                       1
git                  sleep_on_buffer        1277 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811e7818>] ext4_wait_block_bitmap+0xb8/0xc0
[<ffffffff8122462e>] ext4_mb_init_cache+0x1ce/0x730
[<ffffffff81224c2e>] ext4_mb_init_group+0x9e/0x100
[<ffffffff81224d97>] ext4_mb_good_group+0x107/0x1a0
[<ffffffff81227233>] ext4_mb_regular_allocator+0x183/0x430
[<ffffffff81228db6>] ext4_mb_new_blocks+0x3f6/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff81110c3a>] filemap_write_and_wait_range+0x4a/0x70
[<ffffffff811ea54a>] ext4_sync_file+0x6a/0x2d0
[<ffffffff811a1758>] do_fsync+0x58/0x80
[<ffffffff811a1abb>] sys_fsync+0xb/0x10
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1235 ms
Event count:                       1
cp                   sleep_on_buffer        1235 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812275bf>] ext4_mb_mark_diskspace_used+0xdf/0x4d0
[<ffffffff81228c6f>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8122d7d0>] ext4_alloc_blocks+0x140/0x2b0
[<ffffffff8122d995>] ext4_alloc_branch+0x55/0x2c0
[<ffffffff8122ecb9>] ext4_ind_map_blocks+0x299/0x500
[<ffffffff811efd43>] ext4_map_blocks+0x1b3/0x450
[<ffffffff811f23e7>] _ext4_get_block+0x87/0x170
[<ffffffff811f2501>] ext4_get_block+0x11/0x20
[<ffffffff811a65bf>] __block_write_begin+0x1af/0x4d0
[<ffffffff811f1969>] ext4_write_begin+0x159/0x410
[<ffffffff8110f3aa>] generic_perform_write+0xca/0x210
[<ffffffff8110f548>] generic_file_buffered_write+0x58/0x90
[<ffffffff81110f96>] __generic_file_aio_write+0x1b6/0x3b0
[<ffffffff8111120a>] generic_file_aio_write+0x7a/0xf0
[<ffffffff811ea3a3>] ext4_file_write+0x83/0xd0
[<ffffffff81172a73>] do_sync_write+0xa3/0xe0
[<ffffffff811730fe>] vfs_write+0xae/0x180
[<ffffffff8117341d>] sys_write+0x4d/0x90
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1182 ms
Event count:                       1
imapd                sleep_on_buffer        1182 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811fb052>] ext4_delete_entry+0x62/0x120
[<ffffffff811fbfea>] ext4_rename+0x83a/0x980
[<ffffffff8117d4ed>] vfs_rename_other+0xcd/0x120
[<ffffffff81180126>] vfs_rename+0xb6/0x240
[<ffffffff81182c96>] sys_renameat+0x386/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1181 ms
Event count:                       1
systemd-journal      sleep_on_buffer        1181 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff8121a8f2>] ext4_ext_get_access.isra.39+0x22/0x30
[<ffffffff8121d125>] ext4_ext_insert_extent+0xf5/0x420
[<ffffffff8121f60a>] ext4_ext_map_blocks+0x69a/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff8121fd1f>] ext4_fallocate+0x1cf/0x420
[<ffffffff81171b32>] do_fallocate+0x112/0x190
[<ffffffff81171c02>] sys_fallocate+0x52/0x90
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1160 ms
Event count:                       1
rm                   sleep_on_buffer        1160 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811f9505>] ext4_find_entry+0x325/0x4f0
[<ffffffff811fc169>] ext4_rmdir+0x39/0x270
[<ffffffff8117dbf8>] vfs_rmdir.part.32+0xa8/0xf0
[<ffffffff8117fc8a>] vfs_rmdir+0x3a/0x50
[<ffffffff8117fe63>] do_rmdir+0x1c3/0x1e0
[<ffffffff811825ed>] sys_unlinkat+0x2d/0x40
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1108 ms
Event count:                       1
mutt                 sleep_on_buffer        1108 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811eb7cf>] ext4_free_inode+0x22f/0x5f0
[<ffffffff811f4fe1>] ext4_evict_inode+0x331/0x4c0
[<ffffffff8118bcbf>] evict+0xaf/0x1b0
[<ffffffff8118c543>] iput_final+0xd3/0x160
[<ffffffff8118c609>] iput+0x39/0x50
[<ffffffff81187248>] dentry_iput+0x98/0xe0
[<ffffffff81188ac8>] dput+0x128/0x230
[<ffffffff81174368>] __fput+0x178/0x240
[<ffffffff81174439>] ____fput+0x9/0x10
[<ffffffff81065dc7>] task_work_run+0x97/0xd0
[<ffffffff81002cbc>] do_notify_resume+0x9c/0xb0
[<ffffffff8159c46a>] int_signal+0x12/0x17
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1106 ms
Event count:                       1
flush-8:0            sleep_on_buffer        1106 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f51b1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff811efadd>] ext4_da_update_reserve_space+0x1cd/0x280
[<ffffffff8121f88a>] ext4_ext_map_blocks+0x91a/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119de90>] wb_do_writeback+0x190/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff810690eb>] kthread+0xbb/0xc0
[<ffffffff8159c0fc>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1081 ms
Event count:                       1
imapd                sleep_on_buffer        1081 ms
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811fb67b>] ext4_orphan_add+0xbb/0x1f0
[<ffffffff8121f9e1>] ext4_ext_truncate+0x71/0x1e0
[<ffffffff811ef535>] ext4_truncate.part.59+0xd5/0xf0
[<ffffffff811f0614>] ext4_truncate+0x34/0x90
[<ffffffff811f2f5d>] ext4_setattr+0x18d/0x640
[<ffffffff8118d132>] notify_change+0x1f2/0x3c0
[<ffffffff811715d9>] do_truncate+0x59/0xa0
[<ffffffff8117d186>] handle_truncate+0x66/0xa0
[<ffffffff81181306>] do_last+0x626/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1079 ms
Event count:                       1
git                  sleep_on_buffer        1079 ms
[<ffffffff812a5050>] generic_make_request+0x60/0x70
[<ffffffff812a50c7>] submit_bio+0x67/0x130
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1074 ms
Event count:                       1
cp                   sleep_on_buffer        1074 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a35ae>] __lock_buffer+0x2e/0x30
[<ffffffff81239def>] do_get_write_access+0x43f/0x4b0
[<ffffffff81239fab>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220839>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f2b88>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f2bf9>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811ec749>] __ext4_new_inode+0xbb9/0x10c0
[<ffffffff811fac5b>] ext4_create+0xbb/0x190
[<ffffffff81180aa5>] vfs_create+0xb5/0x120
[<ffffffff81180c4e>] lookup_open+0x13e/0x1d0
[<ffffffff81180fe7>] do_last+0x307/0x820
[<ffffffff811815b3>] path_openat+0xb3/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff8117284c>] sys_open+0x1c/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1072 ms
Event count:                       1
du                   sleep_on_buffer        1072 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811eefee>] __ext4_get_inode_loc+0x1be/0x3f0
[<ffffffff811f0d2e>] ext4_iget+0x7e/0x940
[<ffffffff811f9796>] ext4_lookup.part.31+0xc6/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff8117ca63>] __lookup_hash+0x33/0x40
[<ffffffff8158464f>] lookup_slow+0x40/0xa4
[<ffffffff8117efb2>] path_lookupat+0x222/0x780
[<ffffffff8117f53f>] filename_lookup+0x2f/0xc0
[<ffffffff81182074>] user_path_at_empty+0x54/0xa0
[<ffffffff811820cc>] user_path_at+0xc/0x10
[<ffffffff81177b39>] vfs_fstatat+0x49/0xa0
[<ffffffff81177d45>] sys_newfstatat+0x15/0x30
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1034 ms
Event count:                       1
git                  sleep_on_buffer        1034 ms
[<ffffffff8110ef82>] __lock_page+0x62/0x70
[<ffffffff8110fe71>] find_lock_page+0x51/0x80
[<ffffffff8110ff04>] grab_cache_page_write_begin+0x64/0xd0
[<ffffffff811f1ca4>] ext4_da_write_begin+0x84/0x2e0
[<ffffffff8110f3aa>] generic_perform_write+0xca/0x210
[<ffffffff8110f548>] generic_file_buffered_write+0x58/0x90
[<ffffffff81110f96>] __generic_file_aio_write+0x1b6/0x3b0
[<ffffffff8111120a>] generic_file_aio_write+0x7a/0xf0
[<ffffffff811ea3a3>] ext4_file_write+0x83/0xd0
[<ffffffff81172a73>] do_sync_write+0xa3/0xe0
[<ffffffff811730fe>] vfs_write+0xae/0x180
[<ffffffff8117341d>] sys_write+0x4d/0x90
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1031 ms
Event count:                       1
git                  sleep_on_buffer        1031 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811e7818>] ext4_wait_block_bitmap+0xb8/0xc0
[<ffffffff8122462e>] ext4_mb_init_cache+0x1ce/0x730
[<ffffffff8122509a>] ext4_mb_load_buddy+0x26a/0x350
[<ffffffff81227247>] ext4_mb_regular_allocator+0x197/0x430
[<ffffffff81228db6>] ext4_mb_new_blocks+0x3f6/0x490
[<ffffffff8121f471>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811efe65>] ext4_map_blocks+0x2d5/0x450
[<ffffffff811f3f0a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4a10>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac0b>] do_writepages+0x1b/0x30
[<ffffffff81110be9>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff811114b7>] filemap_flush+0x17/0x20
[<ffffffff811f0354>] ext4_alloc_da_blocks+0x44/0xa0
[<ffffffff811fb960>] ext4_rename+0x1b0/0x980
[<ffffffff8117d4ed>] vfs_rename_other+0xcd/0x120
[<ffffffff81180126>] vfs_rename+0xb6/0x240
[<ffffffff81182c96>] sys_renameat+0x386/0x3d0
[<ffffffff81182cf6>] sys_rename+0x16/0x20
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1029 ms
Event count:                       1
git                  wait_on_page_bit_killable   1029 ms
[<ffffffff815966d9>] kretprobe_trampoline+0x25/0x4c
[<ffffffff81111728>] filemap_fault+0x88/0x410
[<ffffffff81135d69>] __do_fault+0x439/0x530
[<ffffffff811394be>] handle_pte_fault+0xee/0x200
[<ffffffff8113a731>] handle_mm_fault+0x271/0x390
[<ffffffff81597a20>] __do_page_fault+0x230/0x520
[<ffffffff81594ec5>] do_device_not_available+0x15/0x20
[<ffffffff8159d50e>] device_not_available+0x1e/0x30
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1017 ms
Event count:                       1
npviewer.bin         sleep_on_buffer        1017 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff811eefee>] __ext4_get_inode_loc+0x1be/0x3f0
[<ffffffff811f0d2e>] ext4_iget+0x7e/0x940
[<ffffffff811f9796>] ext4_lookup.part.31+0xc6/0x140
[<ffffffff811f9835>] ext4_lookup+0x25/0x30
[<ffffffff8117c628>] lookup_real+0x18/0x50
[<ffffffff81180bd8>] lookup_open+0xc8/0x1d0
[<ffffffff81180fe7>] do_last+0x307/0x820
[<ffffffff8118182a>] path_openat+0x32a/0x4a0
[<ffffffff8118210d>] do_filp_open+0x3d/0xa0
[<ffffffff81172749>] do_sys_open+0xf9/0x1e0
[<ffffffff811c2996>] compat_sys_open+0x16/0x20
[<ffffffff8159dc79>] ia32_sysret+0x0/0x5
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1016 ms
Event count:                       1
rm                   sleep_on_buffer        1016 ms
[<ffffffff815966b4>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3566>] __wait_on_buffer+0x26/0x30
[<ffffffff8123d8f0>] __wait_cp_io+0xd0/0xe0
[<ffffffff8123da23>] jbd2_log_do_checkpoint+0x123/0x220
[<ffffffff8123dba1>] __jbd2_log_wait_for_space+0x81/0x190
[<ffffffff812382d0>] start_this_handle+0x2e0/0x3e0
[<ffffffff81238590>] jbd2__journal_start.part.8+0x90/0x190
[<ffffffff812386d5>] jbd2__journal_start+0x45/0x50
[<ffffffff812205d1>] __ext4_journal_start_sb+0x81/0x170
[<ffffffff811fc44c>] ext4_unlink+0xac/0x350
[<ffffffff8117daef>] vfs_unlink.part.31+0x7f/0xe0
[<ffffffff8117f9d7>] vfs_unlink+0x37/0x50
[<ffffffff8117fbff>] do_unlinkat+0x20f/0x260
[<ffffffff811825dd>] sys_unlinkat+0x1d/0x40
[<ffffffff8159c1ad>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff


-- 
Mel Gorman
SUSE Labs

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-02 14:27 ` Mel Gorman
@ 2013-04-02 15:00   ` Jiri Slaby
  -1 siblings, 0 replies; 105+ messages in thread
From: Jiri Slaby @ 2013-04-02 15:00 UTC (permalink / raw)
  To: Mel Gorman, linux-ext4; +Cc: LKML, Linux-MM

On 04/02/2013 04:27 PM, Mel Gorman wrote:
> I'm testing a page-reclaim-related series on my laptop that is partially
> aimed at fixing long stalls when doing metadata-intensive operations on
> low memory such as a git checkout. I've been running 3.9-rc2 with the
> series applied but found that the interactive performance was awful even
> when there was plenty of free memory.
> 
> I activated a monitor from mmtests that logs when a process is stuck for
> a long time in D state and found that there are a lot of stalls in ext4.
> The report first states that processes have been stalled for a total of
> 6498 seconds on IO which seems like a lot. Here is a breakdown of the
> recorded events.

Just a note that I am indeed using ext4 on the affected machine for all
filesystems I have except for an EFI partition...

-- 
js
suse labs

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-02 14:27 ` Mel Gorman
@ 2013-04-02 15:03   ` Zheng Liu
  -1 siblings, 0 replies; 105+ messages in thread
From: Zheng Liu @ 2013-04-02 15:03 UTC (permalink / raw)
  To: Mel Gorman; +Cc: linux-ext4, LKML, Linux-MM, Jiri Slaby

Hi Mel,

Thanks for reporting it.

On 04/02/2013 10:27 PM, Mel Gorman wrote:
> I'm testing a page-reclaim-related series on my laptop that is partially
> aimed at fixing long stalls when doing metadata-intensive operations on
> low memory such as a git checkout. I've been running 3.9-rc2 with the
> series applied but found that the interactive performance was awful even
> when there was plenty of free memory.
> 
> I activated a monitor from mmtests that logs when a process is stuck for
> a long time in D state and found that there are a lot of stalls in ext4.
> The report first states that processes have been stalled for a total of
> 6498 seconds on IO which seems like a lot. Here is a breakdown of the
> recorded events.

In this merge window we added an extent status tree that acts as an
extent cache.  Meanwhile an es_cache shrinker is registered to try to
reclaim from this cache when we are under high memory pressure.  So I
suspect that the root cause is this shrinker.  Could you please tell me
how to reproduce this problem?  If I understand correctly, I can run
mmtests to reproduce it, right?
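
To give a rough idea of the mechanism: on the 3.9 shrinker API this is
a single callback registered with register_shrinker(), which the VM
invokes both to ask how many objects are cached and to trim some of
them.  The sketch below is only illustrative (the cache bookkeeping
names are made up, it is not the actual fs/ext4/extents_status.c code):

#include <linux/shrinker.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(cache_lock);
static unsigned long nr_cached;		/* objects currently cached */

static void evict_one_entry(void)
{
	/* placeholder: free one cached object */
	nr_cached--;
}

/* ->shrink: sc->nr_to_scan == 0 means "just report the object count" */
static int example_cache_shrink(struct shrinker *shrink,
				struct shrink_control *sc)
{
	unsigned long nr = sc->nr_to_scan;
	unsigned long remaining;

	spin_lock(&cache_lock);
	while (nr && nr_cached) {
		evict_one_entry();
		nr--;
	}
	remaining = nr_cached;
	spin_unlock(&cache_lock);

	/* the VM uses the return value to decide how hard to keep pushing */
	return remaining;
}

static struct shrinker example_cache_shrinker = {
	.shrink	= example_cache_shrink,
	.seeks	= DEFAULT_SEEKS,
};

/* called once at registration time, e.g. during mount */
void example_register_cache_shrinker(void)
{
	register_shrinker(&example_cache_shrinker);
}

Under memory pressure the VM can call that hook very often, so anything
slow or blocking inside it feeds straight into reclaim latency.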

Thanks,
						- Zheng

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-02 14:27 ` Mel Gorman
@ 2013-04-02 15:06   ` Theodore Ts'o
  -1 siblings, 0 replies; 105+ messages in thread
From: Theodore Ts'o @ 2013-04-02 15:06 UTC (permalink / raw)
  To: Mel Gorman; +Cc: linux-ext4, LKML, Linux-MM, Jiri Slaby

On Tue, Apr 02, 2013 at 03:27:17PM +0100, Mel Gorman wrote:
> I'm testing a page-reclaim-related series on my laptop that is partially
> aimed at fixing long stalls when doing metadata-intensive operations on
> low memory such as a git checkout. I've been running 3.9-rc2 with the
> series applied but found that the interactive performance was awful even
> when there was plenty of free memory.

Can you try 3.9-rc4 or later and see if the problem still persists?
There were a number of ext4 issues especially around low memory
performance which weren't resolved until -rc4.

Thanks,

						- Ted

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-02 15:06   ` Theodore Ts'o
@ 2013-04-02 15:14     ` Theodore Ts'o
  -1 siblings, 0 replies; 105+ messages in thread
From: Theodore Ts'o @ 2013-04-02 15:14 UTC (permalink / raw)
  To: Mel Gorman, linux-ext4, LKML, Linux-MM, Jiri Slaby

On Tue, Apr 02, 2013 at 11:06:51AM -0400, Theodore Ts'o wrote:
> 
> Can you try 3.9-rc4 or later and see if the problem still persists?
> There were a number of ext4 issues especially around low memory
> performance which weren't resolved until -rc4.

Actually, sorry, I took a closer look and I'm not so sure going to
-rc4 is going to help (although we did have some ext4 patches to fix a
number of bugs that flowed in as late as -rc4).

Can you send us the patch that you used to record these long stall
times?  And I assume you're using a laptop drive?  5400RPM or 7200RPM?

	      	     	    	    	   - Ted

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-02 15:03   ` Zheng Liu
@ 2013-04-02 15:15     ` Mel Gorman
  -1 siblings, 0 replies; 105+ messages in thread
From: Mel Gorman @ 2013-04-02 15:15 UTC (permalink / raw)
  To: Zheng Liu; +Cc: linux-ext4, LKML, Linux-MM, Jiri Slaby

On Tue, Apr 02, 2013 at 11:03:36PM +0800, Zheng Liu wrote:
> Hi Mel,
> 
> Thanks for reporting it.
> 
> On 04/02/2013 10:27 PM, Mel Gorman wrote:
> > I'm testing a page-reclaim-related series on my laptop that is partially
> > aimed at fixing long stalls when doing metadata-intensive operations on
> > low memory such as a git checkout. I've been running 3.9-rc2 with the
> > series applied but found that the interactive performance was awful even
> > when there was plenty of free memory.
> > 
> > I activated a monitor from mmtests that logs when a process is stuck for
> > a long time in D state and found that there are a lot of stalls in ext4.
> > The report first states that processes have been stalled for a total of
> > 6498 seconds on IO which seems like a lot. Here is a breakdown of the
> > recorded events.
> 
> In this merge window, we added an extent status tree as an extent cache.
> Meanwhile an es_cache shrinker is registered to try to reclaim from this
> cache when we are under high memory pressure. 

Ok.

> So I suspect that the root cause
> is this shrinker.  Could you please tell me how to reproduce this
> problem?  If I understand correctly, I can run mmtests to reproduce it,
> right?
> 

This is normal desktop usage with some development thrown in; nothing
spectacular, but nothing obviously reproducible either, unfortunately. I
just noticed that some git operations were taking abnormally long, mutt
was very slow opening mail, applications like mozilla were very slow to
launch, etc., and dug a little further. I haven't checked whether the
regression tests under mmtests captured something similar yet.

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-02 15:14     ` Theodore Ts'o
@ 2013-04-02 18:19       ` Theodore Ts'o
  -1 siblings, 0 replies; 105+ messages in thread
From: Theodore Ts'o @ 2013-04-02 18:19 UTC (permalink / raw)
  To: Mel Gorman, linux-ext4, LKML, Linux-MM, Jiri Slaby

So I tried to reproduce the problem: I installed systemtap (bleeding
edge, since otherwise it won't work with a development kernel), and
then rebuilt a kernel with all of the necessary CONFIG options enabled:

	CONFIG_DEBUG_INFO, CONFIG_KPROBES, CONFIG_RELAY, CONFIG_DEBUG_FS,
	CONFIG_MODULES, CONFIG_MODULE_UNLOAD
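
For reference, a quick way to sanity-check those options on the running
kernel is something like this (the config path is distro-dependent):

	grep -E 'CONFIG_(DEBUG_INFO|KPROBES|RELAY|DEBUG_FS|MODULES|MODULE_UNLOAD)=' /boot/config-$(uname -r)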

I then pulled down mmtests and tried running watch-dstate.pl, which
is what I assume you were using, and I got a reminder of why I've
tried very hard to avoid using systemtap:

semantic error: while resolving probe point: identifier 'kprobe' at /tmp/stapdjN4_l:18:7
        source: probe kprobe.function("get_request_wait")
                      ^

semantic error: no match
semantic error: while resolving probe point: identifier 'kprobe' at :74:8
        source: }probe kprobe.function("get_request_wait").return
                       ^

Pass 2: analysis failed.  [man error::pass2]
Unexpected exit of STAP script at ./watch-dstate.pl line 296.

I have no clue what to do next.  Can you give me a hint?

Thanks,

						- Ted

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-02 14:27 ` Mel Gorman
@ 2013-04-02 23:16   ` Theodore Ts'o
  -1 siblings, 0 replies; 105+ messages in thread
From: Theodore Ts'o @ 2013-04-02 23:16 UTC (permalink / raw)
  To: Mel Gorman; +Cc: linux-ext4, LKML, Linux-MM, Jiri Slaby

I've tried doing some quick timing, and if it is a performance
regression, it's not a recent one --- or I haven't been able to
reproduce what Mel is seeing.  I tried the following commands while
booted into 3.2, 3.8, and 3.9-rc3 kernels:

time git clone ...
rm .git/index ; time git reset

I did this on a number of git repos, including one that was freshly
cloned, and one that had around 3 dozen patches applied via git am (so
there were a bunch of loose objects).  And I tried doing this on an
SSD and a 5400rpm HDD, and I did it with all of the in-memory caches
flushed via "echo 3 > /proc/sys/vm/drop_caches".  The worst case was
doing a "time git reset" after deleting the .git/index file in a tree
with all of Kent Overstreet's recent AIO patches applied (the ones that
had been sent out for review).  It took around 55 seconds on 3.2, 3.8
and 3.9-rc3.  That is pretty horrible, but for me that's the reason why
I use SSDs.
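
Roughly, each data point was gathered with a sequence like the following
(the repository path is just an example):

	cd /path/to/test-repo
	sync
	echo 3 > /proc/sys/vm/drop_caches	# drop page/dentry/inode caches (as root)
	rm .git/index
	time git reset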

Mel, how bad are the various git commands that you are trying?  Have you
tried using time to get estimates of how long a git clone or other git
operation is taking?

						- Ted

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-02 15:14     ` Theodore Ts'o
@ 2013-04-03 10:19       ` Mel Gorman
  -1 siblings, 0 replies; 105+ messages in thread
From: Mel Gorman @ 2013-04-03 10:19 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: linux-ext4, LKML, Linux-MM, Jiri Slaby

On Tue, Apr 02, 2013 at 11:14:36AM -0400, Theodore Ts'o wrote:
> On Tue, Apr 02, 2013 at 11:06:51AM -0400, Theodore Ts'o wrote:
> > 
> > Can you try 3.9-rc4 or later and see if the problem still persists?
> > There were a number of ext4 issues especially around low memory
> > performance which weren't resolved until -rc4.
> 
> Actually, sorry, I took a closer look and I'm not so sure going to
> -rc4 is going to help (although we did have some ext4 patches to fix a
> number of bugs that flowed in as late as -rc4).
> 

I'm running with -rc5 now. I have not noticed many interactivity problems
as such but the stall detection script reported that mutt stalled for
20 seconds opening an inbox and imapd blocked for 59 seconds doing path
lookups, imaps blocked again for 12 seconds doing an atime update, an RSS
reader blocked for 3.5 seconds writing a file, etc.

There has been no reclaim activity in the system yet and 2G is still free
so it's very unlikely to be a page or slab reclaim problem.
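
For anyone following along, a quick way to double-check that is the usual
counters:

	grep -E 'pgscan|pgsteal|allocstall' /proc/vmstat	# reclaim activity
	grep MemFree /proc/meminfo				# free memory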

> Can you send us the patch that you used to record these long stall
> times? 

No patch, but it depends on systemtap which, as you are already aware, is
a wreck to work with and frequently breaks between kernel versions for a
variety of reasons. Minimally, it is necessary to revert commit ba6fdda4
(profiling: Remove unused timer hook) to get systemtap working.  I've
reported this problem to the patch author and the systemtap mailing list.
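
The revert itself is just the obvious operation against the kernel tree
(short SHA as above; it may need trivial fixups depending on the tree):

	git revert ba6fdda4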

Other workarounds are necessary so I updated mmtests in git and at
http://www.csn.ul.ie/~mel/projects/mmtests/mmtests-0.10-mmtests-0.01.tar.gz
. Download and untar it

1. stap can be "fixed" by running bin/stap-fix.sh.  It will try to run
   a one-liner stap script and if that fails it'll try very crude workarounds.
   Your mileage may vary considerably.

2. If you want to run the monitor script yourself, it's
   sudo monitors/watch-dstate.pl | tee /tmp/foo.log

   but be aware the log may be truncated due to buffering.  Optionally you
   can avoid the buffered write problem by running mmtests as

   sudo ./run-mmtests.sh --config configs/config-monitor-interactive stall-debug

   and the log will be in work/log/dstate-stall-debug-monitor.gz

3. Summarise the report with

   cat /tmp/foo.log | subreport/stap-dstate-frequency

I'll be digging through other mmtests results shortly to see if I already
have a better reproduction case that is eligible for bisection but those
results are based on different machines so no guarantees of success.

> And I assume you're using a laptop drive?  5400RPM or 7200RPM?
> 

Yes, laptop drive, 7200RPM. CFQ scheduler. Drive queue depth is 32. 

/dev/sda:

ATA device, with non-removable media
	Model Number:       ST9320423AS                             
	Serial Number:      5VH5M0LY
	Firmware Revision:  0003LVM1
	Transport:          Serial
Standards:
	Used: unknown (minor revision code 0x0029) 
	Supported: 8 7 6 5 
	Likely used: 8
Configuration:
	Logical		max	current
	cylinders	16383	16383
	heads		16	16
	sectors/track	63	63
	--
	CHS current addressable sectors:   16514064
	LBA    user addressable sectors:  268435455
	LBA48  user addressable sectors:  625142448
	Logical  Sector size:                   512 bytes
	Physical Sector size:                   512 bytes
	device size with M = 1024*1024:      305245 MBytes
	device size with M = 1000*1000:      320072 MBytes (320 GB)
	cache/buffer size  = 16384 KBytes
	Nominal Media Rotation Rate: 7200
Capabilities:
	LBA, IORDY(can be disabled)
	Queue depth: 32
	Standby timer values: spec'd by Standard, no device specific minimum
	R/W multiple sector transfer: Max = 16	Current = 16
	Advanced power management level: 128
	Recommended acoustic management value: 254, current value: 0
	DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 *udma5 
	     Cycle time: min=120ns recommended=120ns
	PIO: pio0 pio1 pio2 pio3 pio4 
	     Cycle time: no flow control=120ns  IORDY flow control=120ns
Commands/features:
	Enabled	Supported:
	   *	SMART feature set
	    	Security Mode feature set
	   *	Power Management feature set
	   *	Write cache
	   *	Look-ahead
	   *	Host Protected Area feature set
	   *	WRITE_BUFFER command
	   *	READ_BUFFER command
	   *	DOWNLOAD_MICROCODE
	   *	Advanced Power Management feature set
	    	SET_MAX security extension
	   *	48-bit Address feature set
	   *	Device Configuration Overlay feature set
	   *	Mandatory FLUSH_CACHE
	   *	FLUSH_CACHE_EXT
	   *	SMART error logging
	   *	SMART self-test
	   *	General Purpose Logging feature set
	   *	64-bit World wide name
	   *	IDLE_IMMEDIATE with UNLOAD
	   *	Write-Read-Verify feature set
	   *	WRITE_UNCORRECTABLE_EXT command
	   *	{READ,WRITE}_DMA_EXT_GPL commands
	   *	Segmented DOWNLOAD_MICROCODE
	   *	Gen1 signaling speed (1.5Gb/s)
	   *	Gen2 signaling speed (3.0Gb/s)
	   *	Native Command Queueing (NCQ)
	   *	Phy event counters
	    	Device-initiated interface power management
	   *	Software settings preservation
	   *	SMART Command Transport (SCT) feature set
	   *	SCT Read/Write Long (AC1), obsolete
	   *	SCT Error Recovery Control (AC3)
	   *	SCT Features Control (AC4)
	   *	SCT Data Tables (AC5)
	    	unknown 206[12] (vendor specific)
Security: 
	Master password revision code = 65534
		supported
	not	enabled
	not	locked
		frozen
	not	expired: security count
		supported: enhanced erase
	70min for SECURITY ERASE UNIT. 70min for ENHANCED SECURITY ERASE UNIT. 
Logical Unit WWN Device Identifier: 5000c5002f2d395d
	NAA		: 5
	IEEE OUI	: 000c50
	Unique ID	: 02f2d395d
Checksum: correct

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-03 10:19       ` Mel Gorman
@ 2013-04-03 12:05         ` Theodore Ts'o
  -1 siblings, 0 replies; 105+ messages in thread
From: Theodore Ts'o @ 2013-04-03 12:05 UTC (permalink / raw)
  To: Mel Gorman; +Cc: linux-ext4, LKML, Linux-MM, Jiri Slaby

On Wed, Apr 03, 2013 at 11:19:25AM +0100, Mel Gorman wrote:
> 
> I'm running with -rc5 now. I have not noticed many interactivity problems
> as such but the stall detection script reported that mutt stalled for
> 20 seconds opening an inbox and imapd blocked for 59 seconds doing path
> lookups, imaps blocked again for 12 seconds doing an atime update, an RSS
> reader blocked for 3.5 seconds writing a file, etc.

If imaps blocked for 12 seconds during an atime update, combined with
everything else, at a guess it got caught by something holding up a
journal commit.  Could you try enabling the jbd2_run_stats tracepoint
and grabbing the trace log?  This will give you statistics on how long
(in milliseconds) each of the various phases of a jbd2 commit is
taking, i.e.:

    jbd2/sdb1-8-327   [002] .... 39681.874661: jbd2_run_stats: dev 8,17 tid 7163786 wait 0 request_delay 0 running 3530 locked 0 flushing 0 logging 0 handle_count 75 blocks 8 blocks_logged 9
     jbd2/sdb1-8-327   [003] .... 39682.514153: jbd2_run_stats: dev 8,17 tid 7163787 wait 0 request_delay 0 running 640 locked 0 flushing 0 logging 0 handle_count 39 blocks 12 blocks_logged 13
     jbd2/sdb1-8-327   [000] .... 39687.665609: jbd2_run_stats: dev 8,17 tid 7163788 wait 0 request_delay 0 running 5150 locked 0 flushing 0 logging 0 handle_count 60 blocks 13 blocks_logged 14
     jbd2/sdb1-8-327   [000] .... 39693.200453: jbd2_run_stats: dev 8,17 tid 7163789 wait 0 request_delay 0 running 4840 locked 0 flushing 0 logging 0 handle_count 53 blocks 10 blocks_logged 11
     jbd2/sdb1-8-327   [001] .... 39695.061657: jbd2_run_stats: dev 8,17 tid 7163790 wait 0 request_delay 0 running 1860 locked 0 flushing 0 logging 0 handle_count 124 blocks 19 blocks_logged 20

In the above sample each journal commit is running for no more than 5
seconds or so (since that's the default jbd2 commit timeout; if a
transaction is running for less than 5 seconds, then either we ran out
of room in the journal, and the blocks_logged number will be high, or
a commit was forced by something such as an fsync call).  
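
(For reference, that 5 second default is the ext4 "commit" mount interval,
so it can be changed for experiments with something along these lines, where
the device and mount point are only placeholders:

	mount -o remount,commit=1 /dev/sdXN /mnt/point

though the default should be fine here.)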

If an atime update is getting blocked by 12 seconds, then it would be
interesting to see if a journal commit is running for significantly
longer than 5 seconds, or if one of the other commit phases is taking
significant amounts of time.  (On the example above they are all
taking no time, since I ran this on a relatively uncontended system;
only a single git operation taking place.)
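
Enabling the tracepoint and capturing the log is just the usual ftrace
incantation (paths assume debugfs is mounted at /sys/kernel/debug):

	echo 1 > /sys/kernel/debug/tracing/events/jbd2/jbd2_run_stats/enable
	cat /sys/kernel/debug/tracing/trace_pipe > jbd2-run-stats.log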

Something else that might be worth trying is grabbing a lock_stat
report and see if something is sitting on an ext4 or jbd2 mutex for a
long time.
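
Assuming the kernel is built with CONFIG_LOCK_STAT=y, grabbing that report
is roughly:

	echo 0 > /proc/lock_stat		# clear old statistics
	echo 1 > /proc/sys/kernel/lock_stat	# enable collection
	# ... reproduce the stalls ...
	cat /proc/lock_stat > lock_stat.log
	echo 0 > /proc/sys/kernel/lock_stat	# disable again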

Finally, as I mentioned I tried some rather simplistic tests and I
didn't notice any difference between a 3.2 kernel and a 3.8/3.9-rc5
kernel.  Assuming you can get a version of systemtap that
simultaneously works on 3.2 and 3.9-rc5 :-P, any chance you could do a
quick experiment and see if you're seeing a difference on your setup?

Thanks!!

					 - Ted

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-03 12:05         ` Theodore Ts'o
  (?)
@ 2013-04-03 15:15         ` Mel Gorman
  -1 siblings, 0 replies; 105+ messages in thread
From: Mel Gorman @ 2013-04-03 15:15 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: linux-ext4, LKML, Linux-MM, Jiri Slaby

[-- Attachment #1: Type: text/plain, Size: 5304 bytes --]

On Wed, Apr 03, 2013 at 08:05:30AM -0400, Theodore Ts'o wrote:
> On Wed, Apr 03, 2013 at 11:19:25AM +0100, Mel Gorman wrote:
> > 
> > I'm running with -rc5 now. I have not noticed many interactivity problems
> > as such but the stall detection script reported that mutt stalled for
> > 20 seconds opening an inbox and imapd blocked for 59 seconds doing path
> > lookups, imaps blocked again for 12 seconds doing an atime update, an RSS
> > reader blocked for 3.5 seconds writing a file, etc.
> 
> If imaps blocked for 12 seconds during an atime update, combined with
> everything else, at a guess it got caught by something holding up a
> journal commit. 

It's a possibility.

I apologise but I forgot that mail is stored on an encrypted (dm-crypt)
partition on this machine. It's formatted ext4 but dmcrypt could be making
this problem worse if it's stalling ext4 waiting to encrypt/decrypt data
due to either a scheduler or workqueue change.

> Could you try enabling the jbd2_run_stats tracepoint
> and grabbing the trace log?  This will give you statistics on how long
> (in milliseconds) each of the various phases of a jbd2 commit is
> taking, i.e.:
> 
>     jbd2/sdb1-8-327   [002] .... 39681.874661: jbd2_run_stats: dev 8,17 tid 7163786 wait 0 request_delay 0 running 3530 locked 0 flushing 0 logging 0 handle_count 75 blocks 8 blocks_logged 9
>      jbd2/sdb1-8-327   [003] .... 39682.514153: jbd2_run_stats: dev 8,17 tid 7163787 wait 0 request_delay 0 running 640 locked 0 flushing 0 logging 0 handle_count 39 blocks 12 blocks_logged 13
>      jbd2/sdb1-8-327   [000] .... 39687.665609: jbd2_run_stats: dev 8,17 tid 7163788 wait 0 request_delay 0 running 5150 locked 0 flushing 0 logging 0 handle_count 60 blocks 13 blocks_logged 14
>      jbd2/sdb1-8-327   [000] .... 39693.200453: jbd2_run_stats: dev 8,17 tid 7163789 wait 0 request_delay 0 running 4840 locked 0 flushing 0 logging 0 handle_count 53 blocks 10 blocks_logged 11
>      jbd2/sdb1-8-327   [001] .... 39695.061657: jbd2_run_stats: dev 8,17 tid 7163790 wait 0 request_delay 0 running 1860 locked 0 flushing 0 logging 0 handle_count 124 blocks 19 blocks_logged 20
> 

Attached, along with the dstate summary that was recorded at the same
time. It's not quite as compelling but I'll keep the monitor running and
see if something falls out. I didn't find anything useful in the existing
mmtests tests that could be used to bisect this, but not many of them are
focused on IO.

> In the above sample each journal commit is running for no more than 5
> seconds or so (since that's the default jbd2 commit timeout; if a
> transaction is running for less than 5 seconds, then either we ran out
> of room in the journal, and the blocks_logged number will be high, or
> a commit was forced by something such as an fsync call).  
> 

I didn't see anything majorly compelling in the jbd2 tracepoints but I'm
not 100% sure I'm looking for the right thing either. I also recorded
/proc/latency_stat and there were some bad sync latencies in it, as you
can see here:

3 4481 1586 jbd2_log_wait_commit ext4_sync_file vfs_fsync sys_msync system_call_fastpath
3 11325 4373 sleep_on_page wait_on_page_bit kretprobe_trampoline filemap_write_and_wait_range ext4_sync_file vfs_fsync sys_msync system_call_fastpath
85 1130707 14904 jbd2_journal_stop jbd2_journal_force_commit ext4_force_commit ext4_sync_file do_fsync sys_fsync system_call_fastpath
1 2161073 2161073 start_this_handle jbd2__journal_start.part.8 jbd2__journal_start __ext4_journal_start_sb ext4_da_writepages do_writepages __filemap_fdatawrite_range filemap_write_and_wait_range ext4_sync_file do_fsync sys_fsync system_call_fastpath
118 7798435 596184 jbd2_log_wait_commit jbd2_journal_stop jbd2_journal_force_commit ext4_force_commit ext4_sync_file do_fsync sys_fsync system_call_fastpath
599 15496449 3405822 sleep_on_page wait_on_page_bit kretprobe_trampoline filemap_write_and_wait_range ext4_sync_file do_fsync sys_fsync system_call_fastpath
405 28572881 2619592 jbd2_log_wait_commit ext4_sync_file do_fsync sys_fsync system_call_fastpath


> If an atime update is getting blocked by 12 seconds, then it would be
> interesting to see if a journal commit is running for significantly
> longer than 5 seconds, or if one of the other commit phases is taking
> significant amounts of time.  (On the example above they are all
> taking no time, since I ran this on a relatively uncontended system;
> only a single git operation taking place.)
> 
> Something else that might be worth trying is grabbing a lock_stat
> report and see if something is sitting on an ext4 or jbd2 mutex for a
> long time.
> 

Ok, if nothing useful falls out in this session I'll enable lock
debugging. latency_stat on its own would not be enough to conclude that
a problem was related to lock contention.

> Finally, as I mentioned I tried some rather simplistic tests and I
> didn't notice any difference between a 3.2 kernel and a 3.8/3.9-rc5
> kernel.  Assuming you can get a version of systemtap that
> simultaneously works on 3.2 and 3.9-rc5 :-P, any chance you could do a
> quick experiment and see if you're seeing a difference on your setup?
> 

stap-fix.sh should be able to kick systemtap sufficiently hard for
either 3.2 or 3.9-rc5 to keep it working. I'll keep digging when I can.

-- 
Mel Gorman
SUSE Labs

[-- Attachment #2: dstate-summary.txt --]
[-- Type: text/plain, Size: 20392 bytes --]

Overall stalled time: 242940 ms

Time stalled in this event:    59077 ms
Event count:                       4
mutt                 sleep_on_buffer        1980 ms
latency-output       sleep_on_buffer       20272 ms
latency-output       sleep_on_buffer       19789 ms
tclsh                sleep_on_buffer       17036 ms
[<ffffffff81597b84>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a39de>] __lock_buffer+0x2e/0x30
[<ffffffff8123a60f>] do_get_write_access+0x43f/0x4b0
[<ffffffff8123a7cb>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220f79>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f3198>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f3209>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f57d1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119ac3e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff8118b9b9>] update_time+0x79/0xc0
[<ffffffff8118ba98>] file_update_time+0x98/0x100
[<ffffffff81110ffc>] __generic_file_aio_write+0x17c/0x3b0
[<ffffffff811112aa>] generic_file_aio_write+0x7a/0xf0
[<ffffffff811ea853>] ext4_file_write+0x83/0xd0
[<ffffffff81172b23>] do_sync_write+0xa3/0xe0
[<ffffffff811731ae>] vfs_write+0xae/0x180
[<ffffffff8117361d>] sys_write+0x4d/0x90
[<ffffffff8159d62d>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    50129 ms
Event count:                       1
offlineimap          sleep_on_buffer       50129 ms
[<ffffffff81597b84>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3996>] __wait_on_buffer+0x26/0x30
[<ffffffff811f9b45>] ext4_find_entry+0x325/0x4f0
[<ffffffff811f9d39>] ext4_lookup.part.31+0x29/0x140
[<ffffffff811f9e75>] ext4_lookup+0x25/0x30
[<ffffffff8117c828>] lookup_real+0x18/0x50
[<ffffffff8117cc63>] __lookup_hash+0x33/0x40
[<ffffffff81585a23>] lookup_slow+0x40/0xa4
[<ffffffff8117f1b2>] path_lookupat+0x222/0x780
[<ffffffff8117f73f>] filename_lookup+0x2f/0xc0
[<ffffffff81182274>] user_path_at_empty+0x54/0xa0
[<ffffffff811822cc>] user_path_at+0xc/0x10
[<ffffffff81177d39>] vfs_fstatat+0x49/0xa0
[<ffffffff81177dc6>] vfs_stat+0x16/0x20
[<ffffffff81177ee5>] sys_newstat+0x15/0x30
[<ffffffff8159d62d>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    29283 ms
Event count:                       5
latency-output       wait_on_page_bit       6482 ms
tclsh                wait_on_page_bit       7756 ms
mutt                 wait_on_page_bit       7702 ms
latency-output       wait_on_page_bit       6017 ms
latency-output       wait_on_page_bit       1326 ms
[<ffffffff8110f180>] wait_on_page_bit+0x70/0x80
[<ffffffff81597b84>] kretprobe_trampoline+0x0/0x4c
[<ffffffff8110f44a>] generic_perform_write+0xca/0x210
[<ffffffff8110f5e8>] generic_file_buffered_write+0x58/0x90
[<ffffffff81111036>] __generic_file_aio_write+0x1b6/0x3b0
[<ffffffff811112aa>] generic_file_aio_write+0x7a/0xf0
[<ffffffff811ea853>] ext4_file_write+0x83/0xd0
[<ffffffff81172b23>] do_sync_write+0xa3/0xe0
[<ffffffff811731ae>] vfs_write+0xae/0x180
[<ffffffff8117361d>] sys_write+0x4d/0x90
[<ffffffff8159d62d>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    21871 ms
Event count:                       2
imapd                sleep_on_buffer       18495 ms
imapd                sleep_on_buffer        3376 ms
[<ffffffff81597b84>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a39de>] __lock_buffer+0x2e/0x30
[<ffffffff8123a60f>] do_get_write_access+0x43f/0x4b0
[<ffffffff8123a7cb>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220f79>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f3198>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f3209>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f57d1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119ac3e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff8118b9b9>] update_time+0x79/0xc0
[<ffffffff8118bc61>] touch_atime+0x161/0x170
[<ffffffff81110683>] do_generic_file_read.constprop.35+0x363/0x440
[<ffffffff811113f9>] generic_file_aio_read+0xd9/0x220
[<ffffffff81172c03>] do_sync_read+0xa3/0xe0
[<ffffffff8117332b>] vfs_read+0xab/0x170
[<ffffffff8117358d>] sys_read+0x4d/0x90
[<ffffffff8159d62d>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:    20849 ms
Event count:                       1
awesome              sleep_on_buffer       20849 ms
[<ffffffff81597b84>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a39de>] __lock_buffer+0x2e/0x30
[<ffffffff8123a60f>] do_get_write_access+0x43f/0x4b0
[<ffffffff8123a7cb>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220f79>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f3198>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811fbd0f>] ext4_orphan_add+0x10f/0x1f0
[<ffffffff811f37b4>] ext4_setattr+0x3d4/0x640
[<ffffffff8118d362>] notify_change+0x1f2/0x3c0
[<ffffffff81171689>] do_truncate+0x59/0xa0
[<ffffffff8117d386>] handle_truncate+0x66/0xa0
[<ffffffff81181506>] do_last+0x626/0x820
[<ffffffff811817b3>] path_openat+0xb3/0x4a0
[<ffffffff8118230d>] do_filp_open+0x3d/0xa0
[<ffffffff811727f9>] do_sys_open+0xf9/0x1e0
[<ffffffff811728fc>] sys_open+0x1c/0x20
[<ffffffff8159d62d>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     7872 ms
Event count:                       1
dconf-service        sleep_on_buffer        7872 ms
[<ffffffff81597b84>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3996>] __wait_on_buffer+0x26/0x30
[<ffffffff811f9b45>] ext4_find_entry+0x325/0x4f0
[<ffffffff811fbef5>] ext4_rename+0x105/0x980
[<ffffffff8117d6ed>] vfs_rename_other+0xcd/0x120
[<ffffffff81180326>] vfs_rename+0xb6/0x240
[<ffffffff81182e96>] sys_renameat+0x386/0x3d0
[<ffffffff81182ef6>] sys_rename+0x16/0x20
[<ffffffff8159d62d>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     6265 ms
Event count:                       3
dconf-service        wait_on_page_bit       3486 ms
pool                 wait_on_page_bit       1059 ms
Cache I/O            wait_on_page_bit       1720 ms
[<ffffffff8110f180>] wait_on_page_bit+0x70/0x80
[<ffffffff81597b84>] kretprobe_trampoline+0x0/0x4c
[<ffffffff81110cf0>] filemap_write_and_wait_range+0x60/0x70
[<ffffffff811ea9fa>] ext4_sync_file+0x6a/0x2d0
[<ffffffff811a1b88>] do_fsync+0x58/0x80
[<ffffffff811a1eeb>] sys_fsync+0xb/0x10
[<ffffffff8159d62d>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     5349 ms
Event count:                       1
dconf-service        sleep_on_buffer        5349 ms
[<ffffffff81597b84>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a39de>] __lock_buffer+0x2e/0x30
[<ffffffff8123a60f>] do_get_write_access+0x43f/0x4b0
[<ffffffff8123a7cb>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220f79>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff81227c94>] ext4_mb_mark_diskspace_used+0x74/0x4d0
[<ffffffff812293af>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121fbb1>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811f0455>] ext4_map_blocks+0x2d5/0x470
[<ffffffff811f451a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f5020>] ext4_da_writepages+0x380/0x620
[<ffffffff8111aceb>] do_writepages+0x1b/0x30
[<ffffffff81110c89>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff81110cda>] filemap_write_and_wait_range+0x4a/0x70
[<ffffffff811ea9fa>] ext4_sync_file+0x6a/0x2d0
[<ffffffff811a1b88>] do_fsync+0x58/0x80
[<ffffffff811a1eeb>] sys_fsync+0xb/0x10
[<ffffffff8159d62d>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     5168 ms
Event count:                       2
evolution            wait_on_page_bit_killable   1177 ms
firefox              wait_on_page_bit_killable   3991 ms
[<ffffffff81111668>] wait_on_page_bit_killable+0x78/0x80
[<ffffffff81597b84>] kretprobe_trampoline+0x0/0x4c
[<ffffffff81111b18>] filemap_fault+0x3d8/0x410
[<ffffffff81135b2a>] __do_fault+0x6a/0x530
[<ffffffff8113964e>] handle_pte_fault+0xee/0x200
[<ffffffff8113a8c1>] handle_mm_fault+0x271/0x390
[<ffffffff81598e29>] __do_page_fault+0x169/0x520
[<ffffffff815991e9>] do_page_fault+0x9/0x10
[<ffffffff81595948>] page_fault+0x28/0x30
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     4929 ms
Event count:                       1
flush-253:0          sleep_on_buffer        4929 ms
[<ffffffff81597b84>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a39de>] __lock_buffer+0x2e/0x30
[<ffffffff8123a60f>] do_get_write_access+0x43f/0x4b0
[<ffffffff8123a7cb>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220f79>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff81227c94>] ext4_mb_mark_diskspace_used+0x74/0x4d0
[<ffffffff812293af>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121fbb1>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811f0455>] ext4_map_blocks+0x2d5/0x470
[<ffffffff811f451a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f5020>] ext4_da_writepages+0x380/0x620
[<ffffffff8111aceb>] do_writepages+0x1b/0x30
[<ffffffff81199ce0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119c38a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c5d6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c87b>] wb_writeback+0x27b/0x330
[<ffffffff8119e280>] wb_do_writeback+0x190/0x1d0
[<ffffffff8119e343>] bdi_writeback_thread+0x83/0x280
[<ffffffff8106901b>] kthread+0xbb/0xc0
[<ffffffff8159d57c>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     4840 ms
Event count:                       1
systemd-journal      sleep_on_buffer        4840 ms
[<ffffffff81597b84>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a39de>] __lock_buffer+0x2e/0x30
[<ffffffff8123a60f>] do_get_write_access+0x43f/0x4b0
[<ffffffff8123a7cb>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220f79>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff81227c94>] ext4_mb_mark_diskspace_used+0x74/0x4d0
[<ffffffff812293af>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121fbb1>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811f0455>] ext4_map_blocks+0x2d5/0x470
[<ffffffff8122045f>] ext4_fallocate+0x1cf/0x420
[<ffffffff81171be2>] do_fallocate+0x112/0x190
[<ffffffff81171cb2>] sys_fallocate+0x52/0x90
[<ffffffff8159d62d>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     4739 ms
Event count:                       1
pool                 sleep_on_buffer        4739 ms
[<ffffffff81597b84>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a39de>] __lock_buffer+0x2e/0x30
[<ffffffff8123a60f>] do_get_write_access+0x43f/0x4b0
[<ffffffff8123a7cb>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220f79>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f3198>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f3209>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811fb47e>] ext4_link+0x10e/0x1b0
[<ffffffff81182033>] vfs_link+0x113/0x1c0
[<ffffffff81182aa4>] sys_linkat+0x174/0x1c0
[<ffffffff81182b09>] sys_link+0x19/0x20
[<ffffffff8159d62d>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     3358 ms
Event count:                       2
imapd                wait_on_page_bit       1726 ms
imapd                wait_on_page_bit       1632 ms
[<ffffffff8110f180>] wait_on_page_bit+0x70/0x80
[<ffffffff81597b84>] kretprobe_trampoline+0x0/0x4c
[<ffffffff8111d700>] truncate_inode_pages+0x10/0x20
[<ffffffff811f53cf>] ext4_evict_inode+0x10f/0x4d0
[<ffffffff8118beef>] evict+0xaf/0x1b0
[<ffffffff8118c771>] iput_final+0xd1/0x160
[<ffffffff8118c839>] iput+0x39/0x50
[<ffffffff81187418>] dentry_iput+0x98/0xe0
[<ffffffff81188cb8>] dput+0x128/0x230
[<ffffffff81182e4a>] sys_renameat+0x33a/0x3d0
[<ffffffff81182ef6>] sys_rename+0x16/0x20
[<ffffffff8159d62d>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     3294 ms
Event count:                       1
imapd                sleep_on_buffer        3294 ms
[<ffffffff81597b84>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3996>] __wait_on_buffer+0x26/0x30
[<ffffffff811f9b45>] ext4_find_entry+0x325/0x4f0
[<ffffffff811fca21>] ext4_unlink+0x41/0x350
[<ffffffff8117dcef>] vfs_unlink.part.31+0x7f/0xe0
[<ffffffff8117fbd7>] vfs_unlink+0x37/0x50
[<ffffffff8117fdff>] do_unlinkat+0x20f/0x260
[<ffffffff81182811>] sys_unlink+0x11/0x20
[<ffffffff8159d62d>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2608 ms
Event count:                       1
pool                 sleep_on_buffer        2608 ms
[<ffffffff81597b84>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a39de>] __lock_buffer+0x2e/0x30
[<ffffffff8123a60f>] do_get_write_access+0x43f/0x4b0
[<ffffffff8123a7cb>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220f79>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812298fa>] ext4_free_blocks+0x36a/0xbe0
[<ffffffff8121c3b6>] ext4_remove_blocks+0x256/0x2d0
[<ffffffff8121c635>] ext4_ext_rm_leaf+0x205/0x520
[<ffffffff8121e37c>] ext4_ext_remove_space+0x4dc/0x750
[<ffffffff8122024b>] ext4_ext_truncate+0x19b/0x1e0
[<ffffffff811efb25>] ext4_truncate.part.60+0xd5/0xf0
[<ffffffff811f0c24>] ext4_truncate+0x34/0x90
[<ffffffff811f356d>] ext4_setattr+0x18d/0x640
[<ffffffff8118d362>] notify_change+0x1f2/0x3c0
[<ffffffff81171689>] do_truncate+0x59/0xa0
[<ffffffff8117d386>] handle_truncate+0x66/0xa0
[<ffffffff81181506>] do_last+0x626/0x820
[<ffffffff811817b3>] path_openat+0xb3/0x4a0
[<ffffffff8118230d>] do_filp_open+0x3d/0xa0
[<ffffffff811727f9>] do_sys_open+0xf9/0x1e0
[<ffffffff811728fc>] sys_open+0x1c/0x20
[<ffffffff8159d62d>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2546 ms
Event count:                       1
offlineimap          sleep_on_buffer        2546 ms
[<ffffffff81597b84>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a39de>] __lock_buffer+0x2e/0x30
[<ffffffff8123a60f>] do_get_write_access+0x43f/0x4b0
[<ffffffff8123a7cb>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220f79>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811fa412>] ext4_dx_add_entry+0xc2/0x590
[<ffffffff811faf65>] ext4_add_entry+0x265/0x2d0
[<ffffffff811fc556>] ext4_rename+0x766/0x980
[<ffffffff8117d6ed>] vfs_rename_other+0xcd/0x120
[<ffffffff81180326>] vfs_rename+0xb6/0x240
[<ffffffff81182e96>] sys_renameat+0x386/0x3d0
[<ffffffff81182ef6>] sys_rename+0x16/0x20
[<ffffffff8159d62d>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2199 ms
Event count:                       1
folder-markup.s      sleep_on_buffer        2199 ms
[<ffffffff81597b84>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a39de>] __lock_buffer+0x2e/0x30
[<ffffffff8123a60f>] do_get_write_access+0x43f/0x4b0
[<ffffffff8123a7cb>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220f79>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811fa412>] ext4_dx_add_entry+0xc2/0x590
[<ffffffff811faf65>] ext4_add_entry+0x265/0x2d0
[<ffffffff811faff6>] ext4_add_nondir+0x26/0x80
[<ffffffff811fb2df>] ext4_create+0xff/0x190
[<ffffffff81180ca5>] vfs_create+0xb5/0x120
[<ffffffff81180e4e>] lookup_open+0x13e/0x1d0
[<ffffffff811811e7>] do_last+0x307/0x820
[<ffffffff811817b3>] path_openat+0xb3/0x4a0
[<ffffffff8118230d>] do_filp_open+0x3d/0xa0
[<ffffffff811727f9>] do_sys_open+0xf9/0x1e0
[<ffffffff811728fc>] sys_open+0x1c/0x20
[<ffffffff8159d62d>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     2124 ms
Event count:                       2
evolution            sleep_on_buffer        1088 ms
imapd                sleep_on_buffer        1036 ms
[<ffffffff8110efb2>] __lock_page_killable+0x62/0x70
[<ffffffff811105a7>] do_generic_file_read.constprop.35+0x287/0x440
[<ffffffff811113f9>] generic_file_aio_read+0xd9/0x220
[<ffffffff81172c03>] do_sync_read+0xa3/0xe0
[<ffffffff8117332b>] vfs_read+0xab/0x170
[<ffffffff8117358d>] sys_read+0x4d/0x90
[<ffffffff8159d62d>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1220 ms
Event count:                       1
jbd2/dm-0-8          sleep_on_buffer        1220 ms
[<ffffffff81597b84>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3996>] __wait_on_buffer+0x26/0x30
[<ffffffff8123c6d1>] jbd2_journal_commit_transaction+0x1241/0x13c0
[<ffffffff81240d33>] kjournald2+0xb3/0x240
[<ffffffff8106901b>] kthread+0xbb/0xc0
[<ffffffff8159d57c>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1088 ms
Event count:                       1
firefox              sleep_on_buffer        1088 ms
[<ffffffff81597b84>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3996>] __wait_on_buffer+0x26/0x30
[<ffffffff811ef5de>] __ext4_get_inode_loc+0x1be/0x3f0
[<ffffffff811f133e>] ext4_iget+0x7e/0x940
[<ffffffff811f9dd6>] ext4_lookup.part.31+0xc6/0x140
[<ffffffff811f9e75>] ext4_lookup+0x25/0x30
[<ffffffff8117c828>] lookup_real+0x18/0x50
[<ffffffff8117cc63>] __lookup_hash+0x33/0x40
[<ffffffff81585a23>] lookup_slow+0x40/0xa4
[<ffffffff8117f1b2>] path_lookupat+0x222/0x780
[<ffffffff8117f73f>] filename_lookup+0x2f/0xc0
[<ffffffff81182274>] user_path_at_empty+0x54/0xa0
[<ffffffff811822cc>] user_path_at+0xc/0x10
[<ffffffff81171d87>] sys_faccessat+0x97/0x220
[<ffffffff81171f23>] sys_access+0x13/0x20
[<ffffffff8159d62d>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1076 ms
Event count:                       1
imapd                sleep_on_buffer        1076 ms
[<ffffffff81597b84>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3996>] __wait_on_buffer+0x26/0x30
[<ffffffff811e7cc8>] ext4_wait_block_bitmap+0xb8/0xc0
[<ffffffff81224d6e>] ext4_mb_init_cache+0x1ce/0x730
[<ffffffff8122536e>] ext4_mb_init_group+0x9e/0x100
[<ffffffff812254d7>] ext4_mb_good_group+0x107/0x1a0
[<ffffffff81227973>] ext4_mb_regular_allocator+0x183/0x430
[<ffffffff812294f6>] ext4_mb_new_blocks+0x3f6/0x490
[<ffffffff8121fbb1>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811f0455>] ext4_map_blocks+0x2d5/0x470
[<ffffffff811f451a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f5020>] ext4_da_writepages+0x380/0x620
[<ffffffff8111aceb>] do_writepages+0x1b/0x30
[<ffffffff81110c89>] __filemap_fdatawrite_range+0x49/0x50
[<ffffffff81111557>] filemap_flush+0x17/0x20
[<ffffffff811f0964>] ext4_alloc_da_blocks+0x44/0xa0
[<ffffffff811ea6b1>] ext4_release_file+0x61/0xd0
[<ffffffff811744a0>] __fput+0xb0/0x240
[<ffffffff81174639>] ____fput+0x9/0x10
[<ffffffff81065cf7>] task_work_run+0x97/0xd0
[<ffffffff81002cbc>] do_notify_resume+0x9c/0xb0
[<ffffffff8159d8ea>] int_signal+0x12/0x17
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1042 ms
Event count:                       1
offlineimap          wait_on_page_bit       1042 ms
[<ffffffff811eab95>] ext4_sync_file+0x205/0x2d0
[<ffffffff811a1b88>] do_fsync+0x58/0x80
[<ffffffff811a1eeb>] sys_fsync+0xb/0x10
[<ffffffff8159d62d>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1011 ms
Event count:                       1
firefox              sleep_on_buffer        1011 ms
[<ffffffff81597b84>] kretprobe_trampoline+0x0/0x4c
[<ffffffff811a3996>] __wait_on_buffer+0x26/0x30
[<ffffffff811ef5de>] __ext4_get_inode_loc+0x1be/0x3f0
[<ffffffff811f133e>] ext4_iget+0x7e/0x940
[<ffffffff811f9dd6>] ext4_lookup.part.31+0xc6/0x140
[<ffffffff811f9e75>] ext4_lookup+0x25/0x30
[<ffffffff8117c828>] lookup_real+0x18/0x50
[<ffffffff8117cc63>] __lookup_hash+0x33/0x40
[<ffffffff81585a23>] lookup_slow+0x40/0xa4
[<ffffffff8117f1b2>] path_lookupat+0x222/0x780
[<ffffffff8117f73f>] filename_lookup+0x2f/0xc0
[<ffffffff81182274>] user_path_at_empty+0x54/0xa0
[<ffffffff811822cc>] user_path_at+0xc/0x10
[<ffffffff81177d39>] vfs_fstatat+0x49/0xa0
[<ffffffff81177dc6>] vfs_stat+0x16/0x20
[<ffffffff81177ee5>] sys_newstat+0x15/0x30
[<ffffffff8159d62d>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:     1003 ms
Event count:                       1
folder-markup.s      sleep_on_buffer        1003 ms
[<ffffffff8117bb0e>] pipe_read+0x20e/0x340
[<ffffffff81172c03>] do_sync_read+0xa3/0xe0
[<ffffffff8117332b>] vfs_read+0xab/0x170
[<ffffffff8117358d>] sys_read+0x4d/0x90
[<ffffffff8159d62d>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

Time stalled in this event:        0 ms
Event count:                       1


[-- Attachment #3: ftrace-debug-stalls-monitor.gz --]
[-- Type: application/x-gzip, Size: 13681 bytes --]

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-02 23:16   ` Theodore Ts'o
@ 2013-04-03 15:22     ` Mel Gorman
  -1 siblings, 0 replies; 105+ messages in thread
From: Mel Gorman @ 2013-04-03 15:22 UTC (permalink / raw)
  To: Theodore Ts'o, linux-ext4, LKML, Linux-MM, Jiri Slaby

On Tue, Apr 02, 2013 at 07:16:13PM -0400, Theodore Ts'o wrote:
> I've tried doing some quick timing, and if it is a performance
> regression, it's not a recent one --- or I haven't been able to
> reproduce what Mel is seeing.  I tried the following commands while
> booted into 3.2, 3.8, and 3.9-rc3 kernels:
> 
> time git clone ...
> rm .git/index ; time git reset
> 

FWIW, I had run a number of git checkout based tests over time and none
of them revealed anything useful. Granted, it was on other machines, but
I don't think it's git on its own; it's some combination that leads to
this problem. Maybe it's really an IO scheduler problem and I need to
figure out what combination triggers it.

> <SNIP>
>
> Mel, how bad is various git commands that you are trying?  Have you
> tried using time to get estimates of how long a git clone or other git
> operation is taking?
> 

Unfortunately, the mileage varies considerably and it's not always
possible to time the operation. It may be that on one occasion opening
a mail takes an abnormally long time, with git operations occasionally
making it far worse.

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-03 10:19       ` Mel Gorman
@ 2013-04-05 22:18         ` Jiri Slaby
  -1 siblings, 0 replies; 105+ messages in thread
From: Jiri Slaby @ 2013-04-05 22:18 UTC (permalink / raw)
  To: Mel Gorman, Theodore Ts'o; +Cc: linux-ext4, LKML, Linux-MM

On 04/03/2013 12:19 PM, Mel Gorman wrote:
> On Tue, Apr 02, 2013 at 11:14:36AM -0400, Theodore Ts'o wrote:
>> On Tue, Apr 02, 2013 at 11:06:51AM -0400, Theodore Ts'o wrote:
>>>
>>> Can you try 3.9-rc4 or later and see if the problem still persists?
>>> There were a number of ext4 issues especially around low memory
>>> performance which weren't resolved until -rc4.
>>
>> Actually, sorry, I took a closer look and I'm not as sure going to
>> -rc4 is going to help (although we did have some ext4 patches to fix a
>> number of bugs that flowed in as late as -rc4).
>>
> 
> I'm running with -rc5 now. I have not noticed much interactivity problems
> as such but the stall detection script reported that mutt stalled for
> 20 seconds opening an inbox and imapd blocked for 59 seconds doing path
> lookups, imaps blocked again for 12 seconds doing an atime update, an RSS
> reader blocked for 3.5 seconds writing a file. etc.
> 
> There has been no reclaim activity in the system yet and 2G is still free
> so it's very unlikely to be a page or slab reclaim problem.

Ok, so now I'm runnning 3.9.0-rc5-next-20130404, it's not that bad, but
it still sucks. Updating a kernel in a VM still results in "Your system
is too SLOW to play this!" by mplayer and frame dropping.

3.5G out of 6G memory used, the rest is I/O cache.

I have 7200RPM disks in my desktop.

-- 
js
suse labs

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-05 22:18         ` Jiri Slaby
@ 2013-04-05 23:16           ` Theodore Ts'o
  -1 siblings, 0 replies; 105+ messages in thread
From: Theodore Ts'o @ 2013-04-05 23:16 UTC (permalink / raw)
  To: Jiri Slaby; +Cc: Mel Gorman, linux-ext4, LKML, Linux-MM

On Sat, Apr 06, 2013 at 12:18:11AM +0200, Jiri Slaby wrote:
> Ok, so now I'm runnning 3.9.0-rc5-next-20130404, it's not that bad, but
> it still sucks. Updating a kernel in a VM still results in "Your system
> is too SLOW to play this!" by mplayer and frame dropping.

What was the first kernel where you didn't have the problem?  Were you
using the 3.8 kernel earlier, and did you see the interactivity
problems there?

What else was running in on your desktop at the same time?  How was
the file system mounted, and can you send me the output of dumpe2fs -h
/dev/XXX?  Oh, and what options were you using to when you kicked off
the VM?

The other thing that would be useful was to enable the jbd2_run_stats
tracepoint and to send the output of the trace log when you notice the
interactivity problems.
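
(For reference, a minimal sketch of doing that through the ftrace
events interface, assuming debugfs is mounted at /sys/kernel/debug and
using an illustrative output file name:

  echo 1 > /sys/kernel/debug/tracing/events/jbd2/jbd2_run_stats/enable
  # reproduce the stall, then capture the buffer:
  cat /sys/kernel/debug/tracing/trace > jbd2-trace.txt

trace-cmd record -e jbd2:jbd2_run_stats would do the same job if the
trace-cmd tool is installed.)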

Thanks,

						- Ted

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-05 23:16           ` Theodore Ts'o
@ 2013-04-06  7:29             ` Jiri Slaby
  -1 siblings, 0 replies; 105+ messages in thread
From: Jiri Slaby @ 2013-04-06  7:29 UTC (permalink / raw)
  To: Theodore Ts'o, Mel Gorman, linux-ext4, LKML, Linux-MM

On 04/06/2013 01:16 AM, Theodore Ts'o wrote:
> On Sat, Apr 06, 2013 at 12:18:11AM +0200, Jiri Slaby wrote:
>> Ok, so now I'm runnning 3.9.0-rc5-next-20130404, it's not that bad, but
>> it still sucks. Updating a kernel in a VM still results in "Your system
>> is too SLOW to play this!" by mplayer and frame dropping.
> 
> What was the first kernel where you didn't have the problem?  Were you
> using the 3.8 kernel earlier, and did you see the interactivity
> problems there?

I'm not sure, as I am using -next like for ever. But sure, there was a
kernel which didn't ahve this problem.

> What else was running in on your desktop at the same time?

Nothing, just VM (kernel update from console) and mplayer2 on the host.
This is more-or-less reproducible with these two.

> How was
> the file system mounted,

Both are actually a single device /dev/sda5:
/dev/sda5 on /win type ext4 (rw,noatime,data=ordered)

Should I try writeback?
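
(If you do: ext4 refuses to change the data= mode on a plain remount,
so, as a sketch, it would need a brief unmount first:

  umount /win
  mount -o noatime,data=writeback /dev/sda5 /win

whether that makes any difference here is of course the open question.)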

> and can you send me the output of dumpe2fs -h
> /dev/XXX?

dumpe2fs 1.42.7 (21-Jan-2013)
Filesystem volume name:   <none>
Last mounted on:          /win
Filesystem UUID:          cd4bf4d2-bc32-4777-a437-ee24c4ee5f1b
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index
filetype needs_recovery extent flex_bg sparse_super large_file huge_file
uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              30507008
Block count:              122012416
Reserved block count:     0
Free blocks:              72021328
Free inodes:              30474619
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      994
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
RAID stride:              32747
Flex block group size:    16
Filesystem created:       Fri Sep  7 20:44:21 2012
Last mount time:          Thu Apr  4 12:22:01 2013
Last write time:          Thu Apr  4 12:22:01 2013
Mount count:              256
Maximum mount count:      -1
Last checked:             Sat Sep  8 21:13:28 2012
Check interval:           0 (<none>)
Lifetime writes:          1011 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      b6ad3f8b-72ce-49d6-92cb-abccd7dbe98e
Journal backup:           inode blocks
Journal features:         journal_incompat_revoke
Journal size:             128M
Journal length:           32768
Journal sequence:         0x00054dc7
Journal start:            8193

> Oh, and what options were you using to when you kicked off
> the VM?

qemu-kvm -k en-us -smp 2 -m 1200 -soundhw hda -usb -usbdevice tablet
-net user -net nic,model=e1000 -serial pty -balloon virtio -hda x.img

> The other thing that would be useful was to enable the jbd2_run_stats
> tracepoint and to send the output of the trace log when you notice the
> interactivity problems.

Ok, I will try.

thanks,
-- 
js
suse labs

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-06  7:29             ` Jiri Slaby
@ 2013-04-06  7:37               ` Jiri Slaby
  -1 siblings, 0 replies; 105+ messages in thread
From: Jiri Slaby @ 2013-04-06  7:37 UTC (permalink / raw)
  To: Theodore Ts'o, Mel Gorman, linux-ext4, LKML, Linux-MM

On 04/06/2013 09:29 AM, Jiri Slaby wrote:
> On 04/06/2013 01:16 AM, Theodore Ts'o wrote:
>> On Sat, Apr 06, 2013 at 12:18:11AM +0200, Jiri Slaby wrote:
>>> Ok, so now I'm runnning 3.9.0-rc5-next-20130404, it's not that bad, but
>>> it still sucks. Updating a kernel in a VM still results in "Your system
>>> is too SLOW to play this!" by mplayer and frame dropping.
>>
>> What was the first kernel where you didn't have the problem?  Were you
>> using the 3.8 kernel earlier, and did you see the interactivity
>> problems there?
> 
> I'm not sure, as I am using -next like for ever. But sure, there was a
> kernel which didn't ahve this problem.
> 
>> What else was running in on your desktop at the same time?
> 
> Nothing, just VM (kernel update from console) and mplayer2 on the host.
> This is more-or-less reproducible with these two.

Ok,
  dd if=/dev/zero of=xxx
is enough instead of "kernel update".

Writeback mount doesn't help.

>> How was
>> the file system mounted,
> 
> Both are actually a single device /dev/sda5:
> /dev/sda5 on /win type ext4 (rw,noatime,data=ordered)
> 
> Should I try writeback?
> 
>> and can you send me the output of dumpe2fs -h
>> /dev/XXX?
> 
> dumpe2fs 1.42.7 (21-Jan-2013)
> Filesystem volume name:   <none>
> Last mounted on:          /win
> Filesystem UUID:          cd4bf4d2-bc32-4777-a437-ee24c4ee5f1b
> Filesystem magic number:  0xEF53
> Filesystem revision #:    1 (dynamic)
> Filesystem features:      has_journal ext_attr resize_inode dir_index
> filetype needs_recovery extent flex_bg sparse_super large_file huge_file
> uninit_bg dir_nlink extra_isize
> Filesystem flags:         signed_directory_hash
> Default mount options:    user_xattr acl
> Filesystem state:         clean
> Errors behavior:          Continue
> Filesystem OS type:       Linux
> Inode count:              30507008
> Block count:              122012416
> Reserved block count:     0
> Free blocks:              72021328
> Free inodes:              30474619
> First block:              0
> Block size:               4096
> Fragment size:            4096
> Reserved GDT blocks:      994
> Blocks per group:         32768
> Fragments per group:      32768
> Inodes per group:         8192
> Inode blocks per group:   512
> RAID stride:              32747
> Flex block group size:    16
> Filesystem created:       Fri Sep  7 20:44:21 2012
> Last mount time:          Thu Apr  4 12:22:01 2013
> Last write time:          Thu Apr  4 12:22:01 2013
> Mount count:              256
> Maximum mount count:      -1
> Last checked:             Sat Sep  8 21:13:28 2012
> Check interval:           0 (<none>)
> Lifetime writes:          1011 GB
> Reserved blocks uid:      0 (user root)
> Reserved blocks gid:      0 (group root)
> First inode:              11
> Inode size:               256
> Required extra isize:     28
> Desired extra isize:      28
> Journal inode:            8
> Default directory hash:   half_md4
> Directory Hash Seed:      b6ad3f8b-72ce-49d6-92cb-abccd7dbe98e
> Journal backup:           inode blocks
> Journal features:         journal_incompat_revoke
> Journal size:             128M
> Journal length:           32768
> Journal sequence:         0x00054dc7
> Journal start:            8193
> 
>> Oh, and what options were you using to when you kicked off
>> the VM?
> 
> qemu-kvm -k en-us -smp 2 -m 1200 -soundhw hda -usb -usbdevice tablet
> -net user -net nic,model=e1000 -serial pty -balloon virtio -hda x.img
> 
>> The other thing that would be useful was to enable the jbd2_run_stats
>> tracepoint and to send the output of the trace log when you notice the
>> interactivity problems.
> 
> Ok, I will try.
> 
> thanks,
> 


-- 
js
suse labs

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-06  7:37               ` Jiri Slaby
  (?)
@ 2013-04-06  8:19               ` Jiri Slaby
  -1 siblings, 0 replies; 105+ messages in thread
From: Jiri Slaby @ 2013-04-06  8:19 UTC (permalink / raw)
  To: Theodore Ts'o, Mel Gorman, linux-ext4, LKML, Linux-MM

[-- Attachment #1: Type: text/plain, Size: 13411 bytes --]

On 04/06/2013 09:37 AM, Jiri Slaby wrote:
> On 04/06/2013 09:29 AM, Jiri Slaby wrote:
>> On 04/06/2013 01:16 AM, Theodore Ts'o wrote:
>>> On Sat, Apr 06, 2013 at 12:18:11AM +0200, Jiri Slaby wrote:
>>>> Ok, so now I'm runnning 3.9.0-rc5-next-20130404, it's not that bad, but
>>>> it still sucks. Updating a kernel in a VM still results in "Your system
>>>> is too SLOW to play this!" by mplayer and frame dropping.
>>>
>>> What was the first kernel where you didn't have the problem?  Were you
>>> using the 3.8 kernel earlier, and did you see the interactivity
>>> problems there?
>>
>> I'm not sure, as I am using -next like for ever. But sure, there was a
>> kernel which didn't ahve this problem.
>>
>>> What else was running in on your desktop at the same time?
>>
>> Nothing, just VM (kernel update from console) and mplayer2 on the host.
>> This is more-or-less reproducible with these two.
> 
> Ok,
>   dd if=/dev/zero of=xxx
> is enough instead of "kernel update".
> 
> Writeback mount doesn't help.
> 
>>> How was
>>> the file system mounted,
>>
>> Both are actually a single device /dev/sda5:
>> /dev/sda5 on /win type ext4 (rw,noatime,data=ordered)
>>
>> Should I try writeback?
>>
>>> and can you send me the output of dumpe2fs -h
>>> /dev/XXX?
>>
>> dumpe2fs 1.42.7 (21-Jan-2013)
>> Filesystem volume name:   <none>
>> Last mounted on:          /win
>> Filesystem UUID:          cd4bf4d2-bc32-4777-a437-ee24c4ee5f1b
>> Filesystem magic number:  0xEF53
>> Filesystem revision #:    1 (dynamic)
>> Filesystem features:      has_journal ext_attr resize_inode dir_index
>> filetype needs_recovery extent flex_bg sparse_super large_file huge_file
>> uninit_bg dir_nlink extra_isize
>> Filesystem flags:         signed_directory_hash
>> Default mount options:    user_xattr acl
>> Filesystem state:         clean
>> Errors behavior:          Continue
>> Filesystem OS type:       Linux
>> Inode count:              30507008
>> Block count:              122012416
>> Reserved block count:     0
>> Free blocks:              72021328
>> Free inodes:              30474619
>> First block:              0
>> Block size:               4096
>> Fragment size:            4096
>> Reserved GDT blocks:      994
>> Blocks per group:         32768
>> Fragments per group:      32768
>> Inodes per group:         8192
>> Inode blocks per group:   512
>> RAID stride:              32747
>> Flex block group size:    16
>> Filesystem created:       Fri Sep  7 20:44:21 2012
>> Last mount time:          Thu Apr  4 12:22:01 2013
>> Last write time:          Thu Apr  4 12:22:01 2013
>> Mount count:              256
>> Maximum mount count:      -1
>> Last checked:             Sat Sep  8 21:13:28 2012
>> Check interval:           0 (<none>)
>> Lifetime writes:          1011 GB
>> Reserved blocks uid:      0 (user root)
>> Reserved blocks gid:      0 (group root)
>> First inode:              11
>> Inode size:               256
>> Required extra isize:     28
>> Desired extra isize:      28
>> Journal inode:            8
>> Default directory hash:   half_md4
>> Directory Hash Seed:      b6ad3f8b-72ce-49d6-92cb-abccd7dbe98e
>> Journal backup:           inode blocks
>> Journal features:         journal_incompat_revoke
>> Journal size:             128M
>> Journal length:           32768
>> Journal sequence:         0x00054dc7
>> Journal start:            8193
>>
>>> Oh, and what options were you using to when you kicked off
>>> the VM?
>>
>> qemu-kvm -k en-us -smp 2 -m 1200 -soundhw hda -usb -usbdevice tablet
>> -net user -net nic,model=e1000 -serial pty -balloon virtio -hda x.img
>>
>>> The other thing that would be useful was to enable the jbd2_run_stats
>>> tracepoint and to send the output of the trace log when you notice the
>>> interactivity problems.
>>
>> Ok, I will try.

Inline here, as well as attached:
# tracer: nop
#
# entries-in-buffer/entries-written: 46/46   #P:2
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
     jbd2/sda5-8-10969 [000] ....   387.054319: jbd2_run_stats: dev
259,655360 tid 348892 wait 0 request_delay 0 running 5728 locked 0
flushing 0 logging 28 handle_count 10 blocks 1 blocks_logged 2
     jbd2/sda5-8-10969 [000] ....   392.594132: jbd2_run_stats: dev
259,655360 tid 348893 wait 0 request_delay 0 running 5300 locked 0
flushing 0 logging 64 handle_count 75944 blocks 1 blocks_logged 2
      jbd2/md2-8-959   [000] ....   396.249990: jbd2_run_stats: dev 9,2
tid 382990 wait 0 request_delay 0 running 5500 locked 0 flushing 0
logging 220 handle_count 3 blocks 1 blocks_logged 2
      jbd2/md1-8-1826  [000] ....   397.205670: jbd2_run_stats: dev 9,1
tid 1081270 wait 0 request_delay 0 running 5760 locked 0 flushing 0
logging 200 handle_count 2 blocks 0 blocks_logged 0
     jbd2/sda5-8-10969 [000] ....   397.563660: jbd2_run_stats: dev
259,655360 tid 348894 wait 0 request_delay 0 running 5000 locked 0
flushing 0 logging 32 handle_count 89397 blocks 1 blocks_logged 2
     jbd2/sda5-8-10969 [000] ....   403.679552: jbd2_run_stats: dev
259,655360 tid 348895 wait 0 request_delay 0 running 5000 locked 1040
flushing 0 logging 112 handle_count 148224 blocks 1 blocks_logged 2
      jbd2/md1-8-1826  [000] ....   407.981693: jbd2_run_stats: dev 9,1
tid 1081271 wait 0 request_delay 0 running 5064 locked 0 flushing 0
logging 152 handle_count 198 blocks 20 blocks_logged 21
      jbd2/md2-8-959   [000] ....   408.111339: jbd2_run_stats: dev 9,2
tid 382991 wait 0 request_delay 0 running 5156 locked 2268 flushing 0
logging 124 handle_count 5 blocks 1 blocks_logged 2
     jbd2/sda5-8-10969 [000] ....   408.823650: jbd2_run_stats: dev
259,655360 tid 348896 wait 0 request_delay 0 running 5156 locked 0
flushing 0 logging 100 handle_count 63257 blocks 1 blocks_logged 2
      jbd2/md1-8-1826  [000] ....   411.385104: jbd2_run_stats: dev 9,1
tid 1081272 wait 0 request_delay 0 running 3236 locked 0 flushing 0
logging 116 handle_count 42 blocks 7 blocks_logged 8
      jbd2/md1-8-1826  [000] ....   412.590289: jbd2_run_stats: dev 9,1
tid 1081273 wait 0 request_delay 0 running 124 locked 0 flushing 0
logging 740 handle_count 7 blocks 5 blocks_logged 6
      jbd2/md2-8-959   [000] ....   413.087300: jbd2_run_stats: dev 9,2
tid 382992 wait 0 request_delay 0 running 5012 locked 0 flushing 0
logging 92 handle_count 12 blocks 1 blocks_logged 2
     jbd2/sda5-8-10969 [000] ....   414.047500: jbd2_run_stats: dev
259,655360 tid 348897 wait 0 request_delay 0 running 5004 locked 32
flushing 0 logging 292 handle_count 104485 blocks 4 blocks_logged 5
      jbd2/md2-8-959   [000] ....   418.301823: jbd2_run_stats: dev 9,2
tid 382993 wait 0 request_delay 0 running 5024 locked 0 flushing 0
logging 284 handle_count 4 blocks 0 blocks_logged 0
      jbd2/md1-8-1826  [001] ....   418.384624: jbd2_run_stats: dev 9,1
tid 1081274 wait 0 request_delay 0 running 5416 locked 0 flushing 0
logging 384 handle_count 393 blocks 14 blocks_logged 15
     jbd2/sda5-8-10969 [000] ....   418.599524: jbd2_run_stats: dev
259,655360 tid 348898 wait 0 request_delay 0 running 4736 locked 0
flushing 0 logging 112 handle_count 43360 blocks 17 blocks_logged 18
      jbd2/md1-8-1826  [001] ....   418.711491: jbd2_run_stats: dev 9,1
tid 1081275 wait 0 request_delay 0 running 40 locked 0 flushing 0
logging 48 handle_count 4 blocks 1 blocks_logged 2
     jbd2/sda5-8-10969 [000] ....   422.444437: jbd2_run_stats: dev
259,655360 tid 348899 wait 0 request_delay 0 running 3684 locked 0
flushing 0 logging 144 handle_count 62564 blocks 22 blocks_logged 23
     jbd2/sda5-8-10969 [000] ....   427.903435: jbd2_run_stats: dev
259,655360 tid 348900 wait 0 request_delay 0 running 5332 locked 0
flushing 0 logging 128 handle_count 118362 blocks 19 blocks_logged 20
     jbd2/sda5-8-10969 [000] ....   431.981049: jbd2_run_stats: dev
259,655360 tid 348901 wait 0 request_delay 0 running 3976 locked 0
flushing 0 logging 100 handle_count 88833 blocks 13 blocks_logged 14
      jbd2/md1-8-1826  [001] ....   437.291566: jbd2_run_stats: dev 9,1
tid 1081276 wait 0 request_delay 0 running 244 locked 0 flushing 0
logging 380 handle_count 5 blocks 6 blocks_logged 7
     jbd2/sda5-8-10969 [000] ....   437.342205: jbd2_run_stats: dev
259,655360 tid 348902 wait 0 request_delay 0 running 5016 locked 0
flushing 0 logging 344 handle_count 134290 blocks 13 blocks_logged 14
     jbd2/sda5-8-10969 [000] ....   441.879748: jbd2_run_stats: dev
259,655360 tid 348903 wait 0 request_delay 0 running 3624 locked 0
flushing 0 logging 76 handle_count 81013 blocks 13 blocks_logged 14
     jbd2/sda5-8-10969 [000] ....   447.059645: jbd2_run_stats: dev
259,655360 tid 348904 wait 0 request_delay 0 running 5048 locked 0
flushing 0 logging 128 handle_count 127735 blocks 13 blocks_logged 14
     jbd2/sda5-8-10969 [001] ....   447.667205: jbd2_run_stats: dev
259,655360 tid 348905 wait 0 request_delay 0 running 580 locked 0
flushing 0 logging 156 handle_count 131 blocks 4 blocks_logged 5
     jbd2/sda5-8-10969 [001] ....   453.156101: jbd2_run_stats: dev
259,655360 tid 348906 wait 0 request_delay 0 running 5308 locked 0
flushing 0 logging 184 handle_count 109134 blocks 16 blocks_logged 17
     jbd2/sda5-8-10969 [001] ....   456.546335: jbd2_run_stats: dev
259,655360 tid 348907 wait 0 request_delay 0 running 3248 locked 0
flushing 0 logging 228 handle_count 66315 blocks 10 blocks_logged 11
      jbd2/md2-8-959   [001] ....   458.812838: jbd2_run_stats: dev 9,2
tid 382994 wait 0 request_delay 0 running 5052 locked 92 flushing 0
logging 232 handle_count 8 blocks 1 blocks_logged 2
     jbd2/sda5-8-10969 [000] ....   462.113411: jbd2_run_stats: dev
259,655360 tid 348908 wait 0 request_delay 0 running 5292 locked 4
flushing 0 logging 268 handle_count 139470 blocks 14 blocks_logged 15
      jbd2/md2-8-959   [001] ....   463.012109: jbd2_run_stats: dev 9,2
tid 382995 wait 0 request_delay 0 running 4380 locked 0 flushing 0
logging 52 handle_count 3 blocks 0 blocks_logged 0
     jbd2/sda5-8-10969 [000] ....   463.012121: jbd2_run_stats: dev
259,655360 tid 348909 wait 0 request_delay 0 running 1116 locked 0
flushing 0 logging 52 handle_count 5 blocks 4 blocks_logged 5
     jbd2/sda5-8-10969 [001] ....   468.229949: jbd2_run_stats: dev
259,655360 tid 348910 wait 0 request_delay 0 running 5012 locked 0
flushing 0 logging 204 handle_count 134170 blocks 18 blocks_logged 19
      jbd2/md2-8-959   [000] ....   473.230180: jbd2_run_stats: dev 9,2
tid 382996 wait 0 request_delay 0 running 5116 locked 0 flushing 0
logging 268 handle_count 3 blocks 1 blocks_logged 2
     jbd2/sda5-8-10969 [000] ....   473.422616: jbd2_run_stats: dev
259,655360 tid 348911 wait 0 request_delay 0 running 5292 locked 0
flushing 0 logging 108 handle_count 84844 blocks 15 blocks_logged 16
      jbd2/md1-8-1826  [000] ....   477.503164: jbd2_run_stats: dev 9,1
tid 1081277 wait 0 request_delay 0 running 5580 locked 0 flushing 0
logging 852 handle_count 124 blocks 4 blocks_logged 5
     jbd2/sda5-8-10969 [000] ....   479.048020: jbd2_run_stats: dev
259,655360 tid 348912 wait 0 request_delay 0 running 5000 locked 212
flushing 0 logging 416 handle_count 139926 blocks 17 blocks_logged 18
      jbd2/md1-8-1826  [000] ....   482.570545: jbd2_run_stats: dev 9,1
tid 1081278 wait 0 request_delay 0 running 5316 locked 0 flushing 0
logging 604 handle_count 11 blocks 0 blocks_logged 0
     jbd2/sda5-8-10969 [001] ....   484.456879: jbd2_run_stats: dev
259,655360 tid 348913 wait 0 request_delay 0 running 5284 locked 0
flushing 0 logging 544 handle_count 40620 blocks 11 blocks_logged 12
     jbd2/sda5-8-10969 [001] ....   486.014655: jbd2_run_stats: dev
259,655360 tid 348914 wait 0 request_delay 0 running 1540 locked 108
flushing 0 logging 456 handle_count 55965 blocks 4 blocks_logged 5
     jbd2/sda5-8-10969 [001] ....   491.082420: jbd2_run_stats: dev
259,655360 tid 348915 wait 0 request_delay 0 running 5160 locked 0
flushing 0 logging 368 handle_count 33509 blocks 12 blocks_logged 13
      jbd2/md1-8-1826  [000] ....   494.688094: jbd2_run_stats: dev 9,1
tid 1081279 wait 0 request_delay 0 running 5828 locked 0 flushing 0
logging 716 handle_count 2 blocks 1 blocks_logged 2
     jbd2/sda5-8-10969 [000] ....   497.548126: jbd2_run_stats: dev
259,655360 tid 348916 wait 0 request_delay 0 running 5020 locked 36
flushing 0 logging 1780 handle_count 1481 blocks 13 blocks_logged 14
      jbd2/md2-8-959   [000] ....   500.647267: jbd2_run_stats: dev 9,2
tid 382997 wait 0 request_delay 0 running 5272 locked 244 flushing 0
logging 432 handle_count 5 blocks 1 blocks_logged 2
     jbd2/sda5-8-10969 [000] ....   501.134535: jbd2_run_stats: dev
259,655360 tid 348917 wait 0 request_delay 0 running 5040 locked 0
flushing 0 logging 328 handle_count 755 blocks 4 blocks_logged 5
      jbd2/md1-8-1826  [001] ....   502.020846: jbd2_run_stats: dev 9,1
tid 1081280 wait 0 request_delay 0 running 5896 locked 0 flushing 0
logging 52 handle_count 20 blocks 5 blocks_logged 6
      jbd2/md2-8-959   [000] ....   505.989307: jbd2_run_stats: dev 9,2
tid 382998 wait 0 request_delay 0 running 5756 locked 0 flushing 0
logging 20 handle_count 8 blocks 1 blocks_logged 2

thanks,
-- 
js
suse labs

[-- Attachment #2: trace --]
[-- Type: text/plain, Size: 9561 bytes --]

# tracer: nop
#
# entries-in-buffer/entries-written: 46/46   #P:2
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
     jbd2/sda5-8-10969 [000] ....   387.054319: jbd2_run_stats: dev 259,655360 tid 348892 wait 0 request_delay 0 running 5728 locked 0 flushing 0 logging 28 handle_count 10 blocks 1 blocks_logged 2
     jbd2/sda5-8-10969 [000] ....   392.594132: jbd2_run_stats: dev 259,655360 tid 348893 wait 0 request_delay 0 running 5300 locked 0 flushing 0 logging 64 handle_count 75944 blocks 1 blocks_logged 2
      jbd2/md2-8-959   [000] ....   396.249990: jbd2_run_stats: dev 9,2 tid 382990 wait 0 request_delay 0 running 5500 locked 0 flushing 0 logging 220 handle_count 3 blocks 1 blocks_logged 2
      jbd2/md1-8-1826  [000] ....   397.205670: jbd2_run_stats: dev 9,1 tid 1081270 wait 0 request_delay 0 running 5760 locked 0 flushing 0 logging 200 handle_count 2 blocks 0 blocks_logged 0
     jbd2/sda5-8-10969 [000] ....   397.563660: jbd2_run_stats: dev 259,655360 tid 348894 wait 0 request_delay 0 running 5000 locked 0 flushing 0 logging 32 handle_count 89397 blocks 1 blocks_logged 2
     jbd2/sda5-8-10969 [000] ....   403.679552: jbd2_run_stats: dev 259,655360 tid 348895 wait 0 request_delay 0 running 5000 locked 1040 flushing 0 logging 112 handle_count 148224 blocks 1 blocks_logged 2
      jbd2/md1-8-1826  [000] ....   407.981693: jbd2_run_stats: dev 9,1 tid 1081271 wait 0 request_delay 0 running 5064 locked 0 flushing 0 logging 152 handle_count 198 blocks 20 blocks_logged 21
      jbd2/md2-8-959   [000] ....   408.111339: jbd2_run_stats: dev 9,2 tid 382991 wait 0 request_delay 0 running 5156 locked 2268 flushing 0 logging 124 handle_count 5 blocks 1 blocks_logged 2
     jbd2/sda5-8-10969 [000] ....   408.823650: jbd2_run_stats: dev 259,655360 tid 348896 wait 0 request_delay 0 running 5156 locked 0 flushing 0 logging 100 handle_count 63257 blocks 1 blocks_logged 2
      jbd2/md1-8-1826  [000] ....   411.385104: jbd2_run_stats: dev 9,1 tid 1081272 wait 0 request_delay 0 running 3236 locked 0 flushing 0 logging 116 handle_count 42 blocks 7 blocks_logged 8
      jbd2/md1-8-1826  [000] ....   412.590289: jbd2_run_stats: dev 9,1 tid 1081273 wait 0 request_delay 0 running 124 locked 0 flushing 0 logging 740 handle_count 7 blocks 5 blocks_logged 6
      jbd2/md2-8-959   [000] ....   413.087300: jbd2_run_stats: dev 9,2 tid 382992 wait 0 request_delay 0 running 5012 locked 0 flushing 0 logging 92 handle_count 12 blocks 1 blocks_logged 2
     jbd2/sda5-8-10969 [000] ....   414.047500: jbd2_run_stats: dev 259,655360 tid 348897 wait 0 request_delay 0 running 5004 locked 32 flushing 0 logging 292 handle_count 104485 blocks 4 blocks_logged 5
      jbd2/md2-8-959   [000] ....   418.301823: jbd2_run_stats: dev 9,2 tid 382993 wait 0 request_delay 0 running 5024 locked 0 flushing 0 logging 284 handle_count 4 blocks 0 blocks_logged 0
      jbd2/md1-8-1826  [001] ....   418.384624: jbd2_run_stats: dev 9,1 tid 1081274 wait 0 request_delay 0 running 5416 locked 0 flushing 0 logging 384 handle_count 393 blocks 14 blocks_logged 15
     jbd2/sda5-8-10969 [000] ....   418.599524: jbd2_run_stats: dev 259,655360 tid 348898 wait 0 request_delay 0 running 4736 locked 0 flushing 0 logging 112 handle_count 43360 blocks 17 blocks_logged 18
      jbd2/md1-8-1826  [001] ....   418.711491: jbd2_run_stats: dev 9,1 tid 1081275 wait 0 request_delay 0 running 40 locked 0 flushing 0 logging 48 handle_count 4 blocks 1 blocks_logged 2
     jbd2/sda5-8-10969 [000] ....   422.444437: jbd2_run_stats: dev 259,655360 tid 348899 wait 0 request_delay 0 running 3684 locked 0 flushing 0 logging 144 handle_count 62564 blocks 22 blocks_logged 23
     jbd2/sda5-8-10969 [000] ....   427.903435: jbd2_run_stats: dev 259,655360 tid 348900 wait 0 request_delay 0 running 5332 locked 0 flushing 0 logging 128 handle_count 118362 blocks 19 blocks_logged 20
     jbd2/sda5-8-10969 [000] ....   431.981049: jbd2_run_stats: dev 259,655360 tid 348901 wait 0 request_delay 0 running 3976 locked 0 flushing 0 logging 100 handle_count 88833 blocks 13 blocks_logged 14
      jbd2/md1-8-1826  [001] ....   437.291566: jbd2_run_stats: dev 9,1 tid 1081276 wait 0 request_delay 0 running 244 locked 0 flushing 0 logging 380 handle_count 5 blocks 6 blocks_logged 7
     jbd2/sda5-8-10969 [000] ....   437.342205: jbd2_run_stats: dev 259,655360 tid 348902 wait 0 request_delay 0 running 5016 locked 0 flushing 0 logging 344 handle_count 134290 blocks 13 blocks_logged 14
     jbd2/sda5-8-10969 [000] ....   441.879748: jbd2_run_stats: dev 259,655360 tid 348903 wait 0 request_delay 0 running 3624 locked 0 flushing 0 logging 76 handle_count 81013 blocks 13 blocks_logged 14
     jbd2/sda5-8-10969 [000] ....   447.059645: jbd2_run_stats: dev 259,655360 tid 348904 wait 0 request_delay 0 running 5048 locked 0 flushing 0 logging 128 handle_count 127735 blocks 13 blocks_logged 14
     jbd2/sda5-8-10969 [001] ....   447.667205: jbd2_run_stats: dev 259,655360 tid 348905 wait 0 request_delay 0 running 580 locked 0 flushing 0 logging 156 handle_count 131 blocks 4 blocks_logged 5
     jbd2/sda5-8-10969 [001] ....   453.156101: jbd2_run_stats: dev 259,655360 tid 348906 wait 0 request_delay 0 running 5308 locked 0 flushing 0 logging 184 handle_count 109134 blocks 16 blocks_logged 17
     jbd2/sda5-8-10969 [001] ....   456.546335: jbd2_run_stats: dev 259,655360 tid 348907 wait 0 request_delay 0 running 3248 locked 0 flushing 0 logging 228 handle_count 66315 blocks 10 blocks_logged 11
      jbd2/md2-8-959   [001] ....   458.812838: jbd2_run_stats: dev 9,2 tid 382994 wait 0 request_delay 0 running 5052 locked 92 flushing 0 logging 232 handle_count 8 blocks 1 blocks_logged 2
     jbd2/sda5-8-10969 [000] ....   462.113411: jbd2_run_stats: dev 259,655360 tid 348908 wait 0 request_delay 0 running 5292 locked 4 flushing 0 logging 268 handle_count 139470 blocks 14 blocks_logged 15
      jbd2/md2-8-959   [001] ....   463.012109: jbd2_run_stats: dev 9,2 tid 382995 wait 0 request_delay 0 running 4380 locked 0 flushing 0 logging 52 handle_count 3 blocks 0 blocks_logged 0
     jbd2/sda5-8-10969 [000] ....   463.012121: jbd2_run_stats: dev 259,655360 tid 348909 wait 0 request_delay 0 running 1116 locked 0 flushing 0 logging 52 handle_count 5 blocks 4 blocks_logged 5
     jbd2/sda5-8-10969 [001] ....   468.229949: jbd2_run_stats: dev 259,655360 tid 348910 wait 0 request_delay 0 running 5012 locked 0 flushing 0 logging 204 handle_count 134170 blocks 18 blocks_logged 19
      jbd2/md2-8-959   [000] ....   473.230180: jbd2_run_stats: dev 9,2 tid 382996 wait 0 request_delay 0 running 5116 locked 0 flushing 0 logging 268 handle_count 3 blocks 1 blocks_logged 2
     jbd2/sda5-8-10969 [000] ....   473.422616: jbd2_run_stats: dev 259,655360 tid 348911 wait 0 request_delay 0 running 5292 locked 0 flushing 0 logging 108 handle_count 84844 blocks 15 blocks_logged 16
      jbd2/md1-8-1826  [000] ....   477.503164: jbd2_run_stats: dev 9,1 tid 1081277 wait 0 request_delay 0 running 5580 locked 0 flushing 0 logging 852 handle_count 124 blocks 4 blocks_logged 5
     jbd2/sda5-8-10969 [000] ....   479.048020: jbd2_run_stats: dev 259,655360 tid 348912 wait 0 request_delay 0 running 5000 locked 212 flushing 0 logging 416 handle_count 139926 blocks 17 blocks_logged 18
      jbd2/md1-8-1826  [000] ....   482.570545: jbd2_run_stats: dev 9,1 tid 1081278 wait 0 request_delay 0 running 5316 locked 0 flushing 0 logging 604 handle_count 11 blocks 0 blocks_logged 0
     jbd2/sda5-8-10969 [001] ....   484.456879: jbd2_run_stats: dev 259,655360 tid 348913 wait 0 request_delay 0 running 5284 locked 0 flushing 0 logging 544 handle_count 40620 blocks 11 blocks_logged 12
     jbd2/sda5-8-10969 [001] ....   486.014655: jbd2_run_stats: dev 259,655360 tid 348914 wait 0 request_delay 0 running 1540 locked 108 flushing 0 logging 456 handle_count 55965 blocks 4 blocks_logged 5
     jbd2/sda5-8-10969 [001] ....   491.082420: jbd2_run_stats: dev 259,655360 tid 348915 wait 0 request_delay 0 running 5160 locked 0 flushing 0 logging 368 handle_count 33509 blocks 12 blocks_logged 13
      jbd2/md1-8-1826  [000] ....   494.688094: jbd2_run_stats: dev 9,1 tid 1081279 wait 0 request_delay 0 running 5828 locked 0 flushing 0 logging 716 handle_count 2 blocks 1 blocks_logged 2
     jbd2/sda5-8-10969 [000] ....   497.548126: jbd2_run_stats: dev 259,655360 tid 348916 wait 0 request_delay 0 running 5020 locked 36 flushing 0 logging 1780 handle_count 1481 blocks 13 blocks_logged 14
      jbd2/md2-8-959   [000] ....   500.647267: jbd2_run_stats: dev 9,2 tid 382997 wait 0 request_delay 0 running 5272 locked 244 flushing 0 logging 432 handle_count 5 blocks 1 blocks_logged 2
     jbd2/sda5-8-10969 [000] ....   501.134535: jbd2_run_stats: dev 259,655360 tid 348917 wait 0 request_delay 0 running 5040 locked 0 flushing 0 logging 328 handle_count 755 blocks 4 blocks_logged 5
      jbd2/md1-8-1826  [001] ....   502.020846: jbd2_run_stats: dev 9,1 tid 1081280 wait 0 request_delay 0 running 5896 locked 0 flushing 0 logging 52 handle_count 20 blocks 5 blocks_logged 6
      jbd2/md2-8-959   [000] ....   505.989307: jbd2_run_stats: dev 9,2 tid 382998 wait 0 request_delay 0 running 5756 locked 0 flushing 0 logging 20 handle_count 8 blocks 1 blocks_logged 2

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-06  7:29             ` Jiri Slaby
@ 2013-04-06 13:15               ` Theodore Ts'o
  -1 siblings, 0 replies; 105+ messages in thread
From: Theodore Ts'o @ 2013-04-06 13:15 UTC (permalink / raw)
  To: Jiri Slaby; +Cc: Mel Gorman, linux-ext4, LKML, Linux-MM

On Sat, Apr 06, 2013 at 09:29:48AM +0200, Jiri Slaby wrote:
> 
> I'm not sure, as I am using -next like for ever. But sure, there was a
> kernel which didn't ahve this problem.

Any chance you could try rolling back to 3.2 or 3.5 to see if you can
get a starting point?  Even a high-level bisection search would be
helpful to give us a hint.
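
(A coarse sketch of what such a bisection could look like, assuming a
mainline tag like v3.5 turns out to be good and the current -rc is bad;
the endpoints here are only a guess:

  git bisect start
  git bisect bad v3.9-rc5
  git bisect good v3.5
  # build, boot, try to reproduce the stalls, then mark the result:
  git bisect good    # or: git bisect bad

Even a handful of steps would narrow it down to one merge window.)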

>Ok,
>  dd if=/dev/zero of=xxx
>is enough instead of "kernel update".

Was the dd running in the VM or in the host OS?  Basically, is running
the VM required?

> Nothing, just VM (kernel update from console) and mplayer2 on the host.
> This is more-or-less reproducible with these two.

No browser or anything else running that might be introducing a stream
of fsync()'s?

>     jbd2/sda5-8-10969 [000] ....   403.679552: jbd2_run_stats: dev
>259,655360 tid 348895 wait 0 request_delay 0 running 5000 locked 1040
>flushing 0 logging 112 handle_count 148224 blocks 1 blocks_logged 2

>      jbd2/md2-8-959   [000] ....   408.111339: jbd2_run_stats: dev 9,2
>tid 382991 wait 0 request_delay 0 running 5156 locked 2268 flushing 0
>logging 124 handle_count 5 blocks 1 blocks_logged 2

OK, so this is interesting.  The transaction commit is stalling for 1
second on sda5, and then very shortly thereafter for 2.2 seconds on md2,
while we are trying to lock down the transaction.  What that means is
that we are waiting for all of the transaction handles opened against
that particular commit to complete before we can let the commit proceed.

Is md2 sharing the same disk spindle as sda5?  And to which disk were
you doing the "dd if=/dev/zero of=/dev/XXX" command?

If I had to guess what's going on, the disk is accepting a huge amount
of writes to its track buffer, and then occasionally it is going out
to lunch trying to write all of this data to the disk platter.  This
is not (always) happening when we do the commit (with its attendant
cache flush command), but in a few cases, we are doing a read command
which is getting stalled.  There are a few cases where we start a
transaction handle, and then discover that we need to read in a disk
block, and if that read stalls for a long period of time, it will hold
the transaction handle open, and this will in turn stall the commit.

If you were to grab a blocktrace, I suspect that is what you will
find; that it's actually a read command which is stalling at some
point, correlated with when we are trying to start transaction commit.
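
(A rough sketch of gathering that, assuming the blktrace tools are
installed, debugfs is mounted, and sda is the disk backing both
filesystems:

  blktrace -d /dev/sda -o stall -w 60 &
  # reproduce the problem within that window, then:
  blkparse -i stall > stall.txt

Long gaps between the D (dispatch) and C (complete) events of read
requests around the time of a stall would be the thing to look for.)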

       		       	       	   	     - Ted

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-02 18:19       ` Theodore Ts'o
@ 2013-04-07 21:59         ` Frank Ch. Eigler
  -1 siblings, 0 replies; 105+ messages in thread
From: Frank Ch. Eigler @ 2013-04-07 21:59 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: Mel Gorman, linux-ext4, LKML, Linux-MM, Jiri Slaby


Hi -


tytso wrote:

> So I tried to reproduce the problem, and so I installed systemtap
> (bleeding edge, since otherwise it won't work with development
> kernel), and then rebuilt a kernel with all of the necessary CONFIG
> options enabled:
>
> 	CONFIG_DEBUG_INFO, CONFIG_KPROBES, CONFIG_RELAY, CONFIG_DEBUG_FS,
> 	CONFIG_MODULES, CONFIG_MODULE_UNLOAD
> [...]

That sounds about right.


> I then pulled down mmtests, and tried running watch-dstate.pl, which
> is what I assume you were using [...]

I just took a look at the mmtests, particularly the stap-fix.sh stuff.
The heroics therein are really not called for.  git kernel developers
should use git systemtap, as has always been the case.  All
compatibility hacks in stap-fix.sh have already been merged, in many
cases for months.
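
(Roughly, assuming the usual sourceware git URL and that the elfutils
development headers are available:

  git clone git://sourceware.org/git/systemtap.git
  cd systemtap
  ./configure && make && sudo make install

after which the stap-fix.sh workarounds should not be needed.)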


> [...]
> semantic error: while resolving probe point: identifier 'kprobe' at /tmp/stapdjN4_l:18:7
>         source: probe kprobe.function("get_request_wait")
>                       ^
> Pass 2: analysis failed.  [man error::pass2]
> Unexpected exit of STAP script at ./watch-dstate.pl line 296.
> I have no clue what to do next.  Can you give me a hint?

You should see the error::pass2 man page, which refers to
error::reporting, which refers to involving stap folks and running
stap-report to gather needed info.

But in this case, that's unnecessary: the problem is most likely that
the get_request_wait function does not actually exist any longer, since

commit a06e05e6afab70b4b23c0a7975aaeae24b195cd6
Author: Tejun Heo <tj@kernel.org>
Date:   Mon Jun 4 20:40:55 2012 -0700

    block: refactor get_request[_wait]()


Systemtap could endeavour to list roughly-matching functions that do
exist, if you think that'd be helpful.


- FChE

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-07 21:59         ` Frank Ch. Eigler
@ 2013-04-08  8:36           ` Mel Gorman
  -1 siblings, 0 replies; 105+ messages in thread
From: Mel Gorman @ 2013-04-08  8:36 UTC (permalink / raw)
  To: Frank Ch. Eigler
  Cc: Theodore Ts'o, linux-ext4, LKML, Linux-MM, Jiri Slaby

On Sun, Apr 07, 2013 at 05:59:06PM -0400, Frank Ch. Eigler wrote:
> 
> Hi -
> 
> 
> tytso wrote:
> 
> > So I tried to reproduce the problem, and so I installed systemtap
> > (bleeding edge, since otherwise it won't work with development
> > kernel), and then rebuilt a kernel with all of the necessary CONFIG
> > options enabled:
> >
> > 	CONFIG_DEBUG_INFO, CONFIG_KPROBES, CONFIG_RELAY, CONFIG_DEBUG_FS,
> > 	CONFIG_MODULES, CONFIG_MODULE_UNLOAD
> > [...]
> 
> That sounds about right.
> 
> 
> > I then pulled down mmtests, and tried running watch-dstate.pl, which
> > is what I assume you were using [...]
> 
> I just took a look at the mmtests, particularly the stap-fix.sh stuff.
> The heroics therein are really not called for.  git kernel developers
> should use git systemtap, as has always been the case.  All
> compatibility hacks in stap-fix.sh have already been merged, in many
> cases for months.
> 

This used to be the case at one point, but then systemtap had to be
compiled as part of automated tests across different kernel versions. It
could have been worked around in various ways, or systemtap could have been
installed manually when machines were deployed, but stap-fix.sh generally
took less time to keep working.

> 
> > [...]
> > semantic error: while resolving probe point: identifier 'kprobe' at /tmp/stapdjN4_l:18:7
> >         source: probe kprobe.function("get_request_wait")
> >                       ^
> > Pass 2: analysis failed.  [man error::pass2]
> > Unexpected exit of STAP script at ./watch-dstate.pl line 296.
> > I have no clue what to do next.  Can you give me a hint?
> 
> You should see the error::pass2 man page, which refers to
> error::reporting, which refers to involving stap folks and running
> stap-report to gather needed info.
> 
> But in this case, that's unnecessary: the problem is most likely that
> the get_request_wait function does not actually exist any longer, since
> 
> commit a06e05e6afab70b4b23c0a7975aaeae24b195cd6
> Author: Tejun Heo <tj@kernel.org>
> Date:   Mon Jun 4 20:40:55 2012 -0700
> 
>     block: refactor get_request[_wait]()
> 

Yes, this was indeed the problem. The next version of watch-dstate.pl
treats get_request_wait() as a function that may or may not exist. It
uses /proc/kallsyms to figure it out.
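
Conceptually the check is nothing more than scanning /proc/kallsyms for
the symbol before the probe is generated. A standalone sketch of the idea
in C (watch-dstate.pl itself does the equivalent in Perl; the code below
is only an illustration, not what the script actually contains):

	#include <stdio.h>
	#include <string.h>

	/* Return 1 if the named symbol appears in /proc/kallsyms. */
	static int kernel_symbol_exists(const char *name)
	{
		char line[512], addr[64], type[8], sym[256];
		FILE *fp = fopen("/proc/kallsyms", "r");
		int found = 0;

		if (!fp)
			return 0;
		while (fgets(line, sizeof(line), fp)) {
			if (sscanf(line, "%63s %7s %255s", addr, type, sym) == 3 &&
			    strcmp(sym, name) == 0) {
				found = 1;
				break;
			}
		}
		fclose(fp);
		return found;
	}

	int main(void)
	{
		/* Only emit the probe when the function still exists. */
		if (kernel_symbol_exists("get_request_wait"))
			puts("probe kprobe.function(\"get_request_wait\") { ... }");
		return 0;
	}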

Thanks.

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-08  8:36           ` Mel Gorman
@ 2013-04-08 10:52             ` Frank Ch. Eigler
  -1 siblings, 0 replies; 105+ messages in thread
From: Frank Ch. Eigler @ 2013-04-08 10:52 UTC (permalink / raw)
  To: Mel Gorman; +Cc: Theodore Ts'o, linux-ext4, LKML, Linux-MM, Jiri Slaby

Hi, Mel -

> > [...]  git kernel developers
> > should use git systemtap, as has always been the case.  [...]
> 
> This used to be the case at one point, but then systemtap had to be
> compiled as part of automated tests across different kernel versions. It
> could have been worked around in various ways, or systemtap could have been
> installed manually when machines were deployed, but stap-fix.sh generally
> took less time to keep working.

OK, if that works for you.  Keep in mind though that newer versions of
systemtap retain backward-compatibility for ancient versions of the
kernel, so git systemtap should work on those older versions just
fine.


> [...]
> Yes, this was indeed the problem. The next version of watch-dstate.pl
> treats get_request_wait() as a function that may or may not exist. It
> uses /proc/kallsyms to figure it out.

... or you can use the "?" punctuation in the script to have
systemtap adapt:

    probe kprobe.function("get_request_wait") ?  { ... }


- FChE

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-07 21:59         ` Frank Ch. Eigler
@ 2013-04-08 11:01           ` Theodore Ts'o
  -1 siblings, 0 replies; 105+ messages in thread
From: Theodore Ts'o @ 2013-04-08 11:01 UTC (permalink / raw)
  To: Frank Ch. Eigler; +Cc: Mel Gorman, linux-ext4, LKML, Linux-MM, Jiri Slaby

On Sun, Apr 07, 2013 at 05:59:06PM -0400, Frank Ch. Eigler wrote:
> > semantic error: while resolving probe point: identifier 'kprobe' at /tmp/stapdjN4_l:18:7
> >         source: probe kprobe.function("get_request_wait")
> >                       ^
> > Pass 2: analysis failed.  [man error::pass2]
> > Unexpected exit of STAP script at ./watch-dstate.pl line 296.
> > I have no clue what to do next.  Can you give me a hint?

Is there any reason why the error message couldn't be simplified to
something like "kernel symbol not found"?  I wasn't sure whether the
problem was some incompatibility between a recent kprobe change and
systemtap, a parse failure in the systemtap script, etc.

> Systemtap could endeavour to list roughly-matching functions that do
> exist, if you think that'd be helpful.

If the goal is ease of use, I suspect the more important thing that
systemtap could do is to make its error messages more easily
understandable, instead of pointing the user to read a man page where
the user then has to figure out which one of a number of failure
scenarios was caused by a particularly opaque error message.  (The
man page doesn't even say that "semantic error while resolving probe
point" means that a kernel function doesn't exist, and complaining
about the kprobe identifier in particular points the user in the
wrong direction.)

							- Ted

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-02 15:06   ` Theodore Ts'o
@ 2013-04-10 10:56     ` Mel Gorman
  -1 siblings, 0 replies; 105+ messages in thread
From: Mel Gorman @ 2013-04-10 10:56 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: linux-ext4, LKML, Linux-MM, Jiri Slaby

On Tue, Apr 02, 2013 at 11:06:51AM -0400, Theodore Ts'o wrote:
> On Tue, Apr 02, 2013 at 03:27:17PM +0100, Mel Gorman wrote:
> > I'm testing a page-reclaim-related series on my laptop that is partially
> > aimed at fixing long stalls when doing metadata-intensive operations on
> > low memory such as a git checkout. I've been running 3.9-rc2 with the
> > series applied but found that the interactive performance was awful even
> > when there was plenty of free memory.
> 
> Can you try 3.9-rc4 or later and see if the problem still persists?
> There were a number of ext4 issues especially around low memory
> performance which weren't resolved until -rc4.
> 

I experimented with this for a while. -rc6 "feels" much better than -rc2,
which felt like it stalled for prolonged periods of time, but that could be
my imagination too. It does appear that queue depth and await times are
slowly increasing for various reasons.

It's always been the case for me that metadata-intensive and write activities
in the background (opening a maildir + a cache-cold git checkout) would stall
the machine for periods of time. This time around, I timed how long it
takes gnome-terminal to open, run find on a directory and exit again while
a cache-cold git checkout was running and a maildir folder was being opened:

v3.0.66
  count	time
    471 5
     23 10
     11 15
     14 20
      4 25
      8 30
      3 35

v3.7
  count time
    636 5
     20 10
     13 15
     11 20
      7 25
      1 30
      3 35
      1 40
      1 45

v3.8
  count time
    394 5
     10 10
     12 15
      8 20
      9 25
      6 30
      2 35
      3 40

v3.9-rc6
  count time
    481 5
     14 10
      9 15
     12 20
      8 25
      4 30
      2 35
      3 40
      1 45
      1 50
      1 140

This shows that kernel v3.7 was able to open the terminal in 5 seconds or
less 636 times during the test. Very broadly speaking, v3.0.66 is snappier
and generally able to open the terminal and do some work faster. v3.9-rc6 is
sometimes much slower, such as when it took 140 seconds to open the terminal,
but not consistently slow enough for the problem to be reliably bisected.

Further, whatever my perceptions are telling me, the fact is that git
checkouts are not obviously worse. However, queue depth and IO wait
times are higher, though the increase is gradual and would not obviously
make a very bad impression. See here:

 v3.0.66  checkout:278 depth:387.36 await: 878.97 launch:29.39 max_launch:34.20
 v3.7     checkout:268 depth:439.96 await: 971.39 launch:29.46 max_launch:40.42
 v3.8     checkout:275 depth:598.12 await:1280.62 launch:31.95 max_launch:38.50
 v3.9-rc6 checkout:266 depth:540.74 await:1182.10 launch:45.39 max_launch:138.14

Cache-cold git checkout times are roughly comparable but average queue depth
has been increasing and average wait times in v3.8 and v3.9-rc6 are higher
in comparison to v3.0.66. The average time it takes to launch a terminal
and do something with it is also increasing. Unfortunately, these results
are not always perfectly reproducible and the regression cannot be reliably
bisected.

That said, the worst IO wait times (in milliseconds) are getting higher
       
               await      r_await      w_await
 v3.0.66     5811.24        39.19     28309.72 
    v3.7     7508.79        46.36     36318.96 
    v3.8     7083.35        47.55     35305.46 
v3.9-rc2     9211.14        35.25     34560.08 
v3.9-rc6     7499.53        95.21    122780.43 

Worst-case small read times have almost doubled. A worst case write
delay was 122 seconds in v3.9-rc6!

The average wait times are also not painting a pretty picture

               await      r_await      w_await
 v3.0.66      878.97         7.79      6975.51 
    v3.7      971.39         7.84      7745.57 
    v3.8     1280.63         7.75     10306.62 
v3.9-rc2     1280.37         7.55      7687.20 
v3.9-rc6     1182.11         8.11     13869.67 

That indicates that average wait times have almost doubled since
v3.7. Even though -rc2 felt bad, this is not obviously reflected in the
await figures, which is partially what makes bisecting this difficult. At
least you can get an impression of the wait times from this smoothed graph
showing await times from iostat:

http://www.csn.ul.ie/~mel/postings/interactivity-20130410/await-times-smooth.png

Again, while one can see the wait times are worse, they are not generally
bad enough to pinpoint a single change.

Other observations

On my laptop, pm-utils was setting dirty_background_ratio to 5% and
dirty_ratio to 10% rather than the expected defaults of 10% and 20%. Any
of the changes related to dirty balancing could have affected how often
processes get dirty rate-limited.

During major activity there is likely to be "good" behaviour
with stalls roughly every 30 seconds roughly corresponding to
dirty_expire_centiseconds. As you'd expect, the flusher thread is stuck
when this happens.

  237 ?        00:00:00 flush-8:0
[<ffffffff811a35b9>] sleep_on_buffer+0x9/0x10
[<ffffffff811a35ee>] __lock_buffer+0x2e/0x30
[<ffffffff8123a21f>] do_get_write_access+0x43f/0x4b0
[<ffffffff8123a3db>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220b89>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff812278a4>] ext4_mb_mark_diskspace_used+0x74/0x4d0
[<ffffffff81228fbf>] ext4_mb_new_blocks+0x2af/0x490
[<ffffffff8121f7c1>] ext4_ext_map_blocks+0x501/0xa00
[<ffffffff811f0065>] ext4_map_blocks+0x2d5/0x470
[<ffffffff811f412a>] mpage_da_map_and_submit+0xba/0x2f0
[<ffffffff811f4c30>] ext4_da_writepages+0x380/0x620
[<ffffffff8111ac3b>] do_writepages+0x1b/0x30
[<ffffffff811998f0>] __writeback_single_inode+0x40/0x1b0
[<ffffffff8119bf9a>] writeback_sb_inodes+0x19a/0x350
[<ffffffff8119c1e6>] __writeback_inodes_wb+0x96/0xc0
[<ffffffff8119c48b>] wb_writeback+0x27b/0x330
[<ffffffff8119c5d7>] wb_check_old_data_flush+0x97/0xa0
[<ffffffff8119de49>] wb_do_writeback+0x149/0x1d0
[<ffffffff8119df53>] bdi_writeback_thread+0x83/0x280
[<ffffffff8106901b>] kthread+0xbb/0xc0
[<ffffffff8159d47c>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

For other stalls it looks like collisions on the journal, like this:

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
mel       9593  4.9  0.2 583212 20576 pts/2    Dl+  11:49   0:00 gnome-terminal --disable-
[<ffffffff81238693>] start_this_handle+0x2c3/0x3e0
[<ffffffff81238970>] jbd2__journal_start.part.8+0x90/0x190
[<ffffffff81238ab5>] jbd2__journal_start+0x45/0x50
[<ffffffff81220921>] __ext4_journal_start_sb+0x81/0x170
[<ffffffff811f53cb>] ext4_dirty_inode+0x2b/0x60
[<ffffffff8119a84e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff811f335c>] ext4_setattr+0x36c/0x640
[<ffffffff8118cf72>] notify_change+0x1f2/0x3c0
[<ffffffff81170f7d>] chown_common+0xbd/0xd0
[<ffffffff811720d7>] sys_fchown+0xb7/0xd0
[<ffffffff8159d52d>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root       758  0.0  0.0      0     0 ?        D    11:16   0:00
[jbd2/sda6-8]
[<ffffffff8123b28a>] jbd2_journal_commit_transaction+0x1ea/0x13c0
[<ffffffff81240943>] kjournald2+0xb3/0x240
[<ffffffff8106901b>] kthread+0xbb/0xc0
[<ffffffff8159d47c>] ret_from_fork+0x7c/0xb0
[<ffffffffffffffff>] 0xffffffffffffffff

So for myself, I can increase the dirty limits and the writeback expire times,
and maybe raise the journal commit interval from the default of 5 seconds, and
see what that "feels" like over the next few days. That still leaves the fact
that worst-case IO wait times in default configurations appear to be getting
worse over time.

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-10 10:56     ` Mel Gorman
@ 2013-04-10 13:12       ` Theodore Ts'o
  -1 siblings, 0 replies; 105+ messages in thread
From: Theodore Ts'o @ 2013-04-10 13:12 UTC (permalink / raw)
  To: Mel Gorman; +Cc: linux-ext4, LKML, Linux-MM, Jiri Slaby

On Wed, Apr 10, 2013 at 11:56:08AM +0100, Mel Gorman wrote:
> During major activity there is likely to be "good" behaviour
> with stalls roughly every 30 seconds roughly corresponding to
> dirty_expire_centiseconds. As you'd expect, the flusher thread is stuck
> when this happens.
> 
>   237 ?        00:00:00 flush-8:0
> [<ffffffff811a35b9>] sleep_on_buffer+0x9/0x10
> [<ffffffff811a35ee>] __lock_buffer+0x2e/0x30
> [<ffffffff8123a21f>] do_get_write_access+0x43f/0x4b0

If we're stalling on lock_buffer(), that implies that buffer was being
written, and for some reason it was taking a very long time to
complete.

It might be worthwhile to put a timestamp in struct dm_crypt_io, and
record the time when a particular I/O encryption/decryption is getting
queued to the kcryptd workqueues, and when they finally squirt out.

Something else that might be worth trying is to add WQ_HIGHPRI to the
workqueue flags and see if that makes a difference.
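
Roughly something like this; a sketch only, where the io_queued field,
the 100ms threshold and the exact workqueue flags are illustrative
assumptions rather than the actual dm-crypt code:

	struct dm_crypt_io {
		...
		unsigned long io_queued;	/* jiffies when queued to kcryptd */
	};

	static void kcryptd_queue_crypt(struct dm_crypt_io *io)
	{
		struct crypt_config *cc = io->cc;	/* per my reading of dm-crypt */

		io->io_queued = jiffies;
		INIT_WORK(&io->work, kcryptd_crypt);
		queue_work(cc->crypt_queue, &io->work);
	}

	static void kcryptd_crypt(struct work_struct *work)
	{
		struct dm_crypt_io *io = container_of(work, struct dm_crypt_io, work);
		unsigned long delay = jiffies_to_msecs(jiffies - io->io_queued);

		if (delay > 100)
			printk(KERN_INFO "dm-crypt: crypt work delayed %lu ms\n", delay);
		/* ... existing encryption/decryption work ... */
	}

and, for the second experiment, something like:

	cc->crypt_queue = alloc_workqueue("kcryptd",
					  WQ_HIGHPRI | WQ_MEM_RECLAIM, 1);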

	  	    	   	- Ted

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-10 13:12       ` Theodore Ts'o
@ 2013-04-11 17:04         ` Mel Gorman
  -1 siblings, 0 replies; 105+ messages in thread
From: Mel Gorman @ 2013-04-11 17:04 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: linux-ext4, LKML, Linux-MM, Jiri Slaby

On Wed, Apr 10, 2013 at 09:12:45AM -0400, Theodore Ts'o wrote:
> On Wed, Apr 10, 2013 at 11:56:08AM +0100, Mel Gorman wrote:
> > During major activity there is likely to be "good" behaviour
> > with stalls roughly every 30 seconds roughly corresponding to
> > dirty_expire_centiseconds. As you'd expect, the flusher thread is stuck
> > when this happens.
> > 
> >   237 ?        00:00:00 flush-8:0
> > [<ffffffff811a35b9>] sleep_on_buffer+0x9/0x10
> > [<ffffffff811a35ee>] __lock_buffer+0x2e/0x30
> > [<ffffffff8123a21f>] do_get_write_access+0x43f/0x4b0
> 
> If we're stalling on lock_buffer(), that implies that buffer was being
> written, and for some reason it was taking a very long time to
> complete.
> 

Yes.

> It might be worthwhile to put a timestamp in struct dm_crypt_io, and
> record the time when a particular I/O encryption/decryption is getting
> queued to the kcryptd workqueues, and when they finally squirt out.
> 

That somewhat assumes that dm_crypt was at fault, which is not unreasonable,
but I was skeptical as the workload on dm_crypt was opening a maildir and
mostly involved reads.

I used a tracepoint in jbd2 to get an idea of what device the buffer_head
was managing and dm did not show up on the list. This is what a trace-cmd
log of the test told me.

       flush-8:0-240   [005]   236.655363: jbd2_lock_buffer_stall: dev 8,8 stall_ms 1096
         awesome-1364  [005]   290.594396: jbd2_lock_buffer_stall: dev 8,6 stall_ms 7312
 gnome-pty-helpe-2256  [005]   290.836952: jbd2_lock_buffer_stall: dev 8,8 stall_ms 7528
       flush-8:0-240   [003]   304.012424: jbd2_lock_buffer_stall: dev 8,8 stall_ms 4472
  gnome-terminal-2332  [005]   308.290879: jbd2_lock_buffer_stall: dev 8,6 stall_ms 3060
         awesome-1364  [006]   308.291318: jbd2_lock_buffer_stall: dev 8,6 stall_ms 3048
       flush-8:0-240   [005]   331.525996: jbd2_lock_buffer_stall: dev 8,5 stall_ms 8732
       flush-8:0-240   [005]   332.353526: jbd2_lock_buffer_stall: dev 8,5 stall_ms 472
       flush-8:0-240   [005]   345.341547: jbd2_lock_buffer_stall: dev 8,5 stall_ms 10024
  gnome-terminal-2418  [005]   347.166876: jbd2_lock_buffer_stall: dev 8,6 stall_ms 11852
         awesome-1364  [005]   347.167082: jbd2_lock_buffer_stall: dev 8,6 stall_ms 11844
       flush-8:0-240   [005]   347.424520: jbd2_lock_buffer_stall: dev 8,5 stall_ms 2012
       flush-8:0-240   [005]   347.583752: jbd2_lock_buffer_stall: dev 8,5 stall_ms 156
       flush-8:0-240   [005]   390.079682: jbd2_lock_buffer_stall: dev 8,8 stall_ms 396
       flush-8:0-240   [002]   407.882385: jbd2_lock_buffer_stall: dev 8,8 stall_ms 12244
       flush-8:0-240   [005]   408.003976: jbd2_lock_buffer_stall: dev 8,8 stall_ms 124
  gnome-terminal-2610  [005]   413.613365: jbd2_lock_buffer_stall: dev 8,6 stall_ms 3400
         awesome-1364  [006]   413.613605: jbd2_lock_buffer_stall: dev 8,6 stall_ms 3736
       flush-8:0-240   [002]   430.706475: jbd2_lock_buffer_stall: dev 8,5 stall_ms 9748
       flush-8:0-240   [005]   458.188896: jbd2_lock_buffer_stall: dev 8,5 stall_ms 7748
       flush-8:0-240   [005]   458.828143: jbd2_lock_buffer_stall: dev 8,5 stall_ms 348
       flush-8:0-240   [006]   459.163814: jbd2_lock_buffer_stall: dev 8,5 stall_ms 252
       flush-8:0-240   [005]   462.340173: jbd2_lock_buffer_stall: dev 8,5 stall_ms 3160
       flush-8:0-240   [005]   469.917705: jbd2_lock_buffer_stall: dev 8,5 stall_ms 6340
       flush-8:0-240   [005]   474.434206: jbd2_lock_buffer_stall: dev 8,5 stall_ms 4512
             tar-2315  [005]   510.043613: jbd2_lock_buffer_stall: dev 8,5 stall_ms 4316
           tclsh-1780  [005]   773.336488: jbd2_lock_buffer_stall: dev 8,5 stall_ms 736
             git-3100  [005]   775.933506: jbd2_lock_buffer_stall: dev 8,5 stall_ms 3664
             git-4763  [005]   864.093317: jbd2_lock_buffer_stall: dev 8,5 stall_ms 140
       flush-8:0-240   [005]   864.242068: jbd2_lock_buffer_stall: dev 8,6 stall_ms 280
             git-4763  [005]   864.264157: jbd2_lock_buffer_stall: dev 8,5 stall_ms 148
       flush-8:0-240   [005]   865.200004: jbd2_lock_buffer_stall: dev 8,5 stall_ms 464
             git-4763  [000]   865.602469: jbd2_lock_buffer_stall: dev 8,5 stall_ms 300
       flush-8:0-240   [005]   865.705448: jbd2_lock_buffer_stall: dev 8,5 stall_ms 500
       flush-8:0-240   [005]   885.367576: jbd2_lock_buffer_stall: dev 8,8 stall_ms 11024
       flush-8:0-240   [005]   895.339697: jbd2_lock_buffer_stall: dev 8,5 stall_ms 120
       flush-8:0-240   [005]   895.765488: jbd2_lock_buffer_stall: dev 8,5 stall_ms 424
 systemd-journal-265   [005]   915.687201: jbd2_lock_buffer_stall: dev 8,8 stall_ms 14844
       flush-8:0-240   [005]   915.690529: jbd2_lock_buffer_stall: dev 8,6 stall_ms 19656
             git-5442  [005]  1034.845674: jbd2_lock_buffer_stall: dev 8,5 stall_ms 344
             git-5442  [005]  1035.157389: jbd2_lock_buffer_stall: dev 8,5 stall_ms 264
       flush-8:0-240   [005]  1035.875478: jbd2_lock_buffer_stall: dev 8,8 stall_ms 1368
       flush-8:0-240   [005]  1036.189218: jbd2_lock_buffer_stall: dev 8,8 stall_ms 312
  gnome-terminal-5592  [005]  1037.318594: jbd2_lock_buffer_stall: dev 8,6 stall_ms 2628
         awesome-1364  [000]  1037.318913: jbd2_lock_buffer_stall: dev 8,6 stall_ms 2632
             git-5789  [005]  1076.805405: jbd2_lock_buffer_stall: dev 8,5 stall_ms 184
       flush-8:0-240   [005]  1078.401721: jbd2_lock_buffer_stall: dev 8,5 stall_ms 700
       flush-8:0-240   [005]  1078.784200: jbd2_lock_buffer_stall: dev 8,5 stall_ms 356
             git-5789  [005]  1079.722683: jbd2_lock_buffer_stall: dev 8,5 stall_ms 1452
       flush-8:0-240   [005]  1109.928552: jbd2_lock_buffer_stall: dev 8,5 stall_ms 976
       flush-8:0-240   [005]  1111.762280: jbd2_lock_buffer_stall: dev 8,5 stall_ms 1832
       flush-8:0-240   [005]  1260.197720: jbd2_lock_buffer_stall: dev 8,5 stall_ms 344
       flush-8:0-240   [005]  1260.403556: jbd2_lock_buffer_stall: dev 8,5 stall_ms 204
       flush-8:0-240   [005]  1260.550904: jbd2_lock_buffer_stall: dev 8,5 stall_ms 108
             git-6598  [005]  1260.832948: jbd2_lock_buffer_stall: dev 8,5 stall_ms 1084
       flush-8:0-240   [005]  1311.736367: jbd2_lock_buffer_stall: dev 8,5 stall_ms 260
       flush-8:0-240   [005]  1313.689297: jbd2_lock_buffer_stall: dev 8,5 stall_ms 412
       flush-8:0-240   [005]  1314.230420: jbd2_lock_buffer_stall: dev 8,5 stall_ms 540
             git-7022  [006]  1314.241607: jbd2_lock_buffer_stall: dev 8,5 stall_ms 668
       flush-8:0-240   [000]  1347.980425: jbd2_lock_buffer_stall: dev 8,5 stall_ms 308
       flush-8:0-240   [005]  1348.164598: jbd2_lock_buffer_stall: dev 8,5 stall_ms 104
             git-7998  [005]  1547.755328: jbd2_lock_buffer_stall: dev 8,5 stall_ms 304
       flush-8:0-240   [006]  1547.764209: jbd2_lock_buffer_stall: dev 8,5 stall_ms 208
       flush-8:0-240   [005]  1548.653365: jbd2_lock_buffer_stall: dev 8,5 stall_ms 844
       flush-8:0-240   [005]  1549.255022: jbd2_lock_buffer_stall: dev 8,5 stall_ms 460
       flush-8:0-240   [005]  1725.036408: jbd2_lock_buffer_stall: dev 8,6 stall_ms 156
             git-8743  [005]  1740.492630: jbd2_lock_buffer_stall: dev 8,5 stall_ms 15032
             git-8743  [005]  1749.485214: jbd2_lock_buffer_stall: dev 8,5 stall_ms 8648
       flush-8:0-240   [005]  1775.937819: jbd2_lock_buffer_stall: dev 8,5 stall_ms 4268
       flush-8:0-240   [006]  1776.335682: jbd2_lock_buffer_stall: dev 8,5 stall_ms 336
       flush-8:0-240   [006]  1776.446799: jbd2_lock_buffer_stall: dev 8,5 stall_ms 112
       flush-8:0-240   [005]  1802.593183: jbd2_lock_buffer_stall: dev 8,6 stall_ms 108
       flush-8:0-240   [006]  1802.809237: jbd2_lock_buffer_stall: dev 8,8 stall_ms 208
       flush-8:0-240   [005]  2012.041976: jbd2_lock_buffer_stall: dev 8,6 stall_ms 292
           tclsh-1778  [005]  2012.055139: jbd2_lock_buffer_stall: dev 8,5 stall_ms 424
  latency-output-1933  [002]  2012.055147: jbd2_lock_buffer_stall: dev 8,5 stall_ms 136
             git-10209 [005]  2012.074584: jbd2_lock_buffer_stall: dev 8,5 stall_ms 164
       flush-8:0-240   [005]  2012.177241: jbd2_lock_buffer_stall: dev 8,5 stall_ms 128
             git-10209 [005]  2012.297472: jbd2_lock_buffer_stall: dev 8,5 stall_ms 216
       flush-8:0-240   [005]  2012.299828: jbd2_lock_buffer_stall: dev 8,5 stall_ms 120

dm is not obviously at fault there. sda5 is /usr/src (git checkout
running there with some logging), sda6 is /home and sda8 is / . This is
the tracepoint patch used.

---8<---
jbd2: Trace when lock_buffer at the start of a journal write takes a long time

While investigating interactivity problems it was clear that processes
sometimes stall for long periods of time if an attempt is made to lock
a buffer that is already part of a transaction. The stall would be in a
trace looking something like:

[<ffffffff811a39de>] __lock_buffer+0x2e/0x30
[<ffffffff8123a60f>] do_get_write_access+0x43f/0x4b0
[<ffffffff8123a7cb>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220f79>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f3198>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f3209>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f57d1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119ac3e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff8118b9b9>] update_time+0x79/0xc0
[<ffffffff8118ba98>] file_update_time+0x98/0x100
[<ffffffff81110ffc>] __generic_file_aio_write+0x17c/0x3b0
[<ffffffff811112aa>] generic_file_aio_write+0x7a/0xf0
[<ffffffff811ea853>] ext4_file_write+0x83/0xd0
[<ffffffff81172b23>] do_sync_write+0xa3/0xe0
[<ffffffff811731ae>] vfs_write+0xae/0x180
[<ffffffff8117361d>] sys_write+0x4d/0x90
[<ffffffff8159d62d>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

There was a suspicion that dm_crypt might be partly responsible, so this
patch adds a tracepoint that fires when lock_buffer() takes too long in
do_get_write_access(), logging which device is being written and how long
the stall was.

Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 fs/jbd2/transaction.c       |  8 ++++++++
 include/trace/events/jbd2.h | 21 +++++++++++++++++++++
 2 files changed, 29 insertions(+)

diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
index 325bc01..1be0ccb 100644
--- a/fs/jbd2/transaction.c
+++ b/fs/jbd2/transaction.c
@@ -640,6 +640,7 @@ do_get_write_access(handle_t *handle, struct journal_head *jh,
 	int error;
 	char *frozen_buffer = NULL;
 	int need_copy = 0;
+	unsigned long start_lock, time_lock;
 
 	if (is_handle_aborted(handle))
 		return -EROFS;
@@ -655,9 +656,16 @@ repeat:
 
 	/* @@@ Need to check for errors here at some point. */
 
+ 	start_lock = jiffies;
 	lock_buffer(bh);
 	jbd_lock_bh_state(bh);
 
+	/* If it takes too long to lock the buffer, trace it */
+	time_lock = jbd2_time_diff(start_lock, jiffies);
+	if (time_lock > HZ/10)
+		trace_jbd2_lock_buffer_stall(bh->b_bdev->bd_dev,
+			jiffies_to_msecs(time_lock));
+
 	/* We now hold the buffer lock so it is safe to query the buffer
 	 * state.  Is the buffer dirty?
 	 *
diff --git a/include/trace/events/jbd2.h b/include/trace/events/jbd2.h
index 070df49..c1d1f3e 100644
--- a/include/trace/events/jbd2.h
+++ b/include/trace/events/jbd2.h
@@ -358,6 +358,27 @@ TRACE_EVENT(jbd2_write_superblock,
 		  MINOR(__entry->dev), __entry->write_op)
 );
 
+TRACE_EVENT(jbd2_lock_buffer_stall,
+
+	TP_PROTO(dev_t dev, unsigned long stall_ms),
+
+	TP_ARGS(dev, stall_ms),
+
+	TP_STRUCT__entry(
+		__field(        dev_t, dev	)
+		__field(unsigned long, stall_ms	)
+	),
+
+	TP_fast_assign(
+		__entry->dev		= dev;
+		__entry->stall_ms	= stall_ms;
+	),
+
+	TP_printk("dev %d,%d stall_ms %lu",
+		MAJOR(__entry->dev), MINOR(__entry->dev),
+		__entry->stall_ms)
+);
+
 #endif /* _TRACE_JBD2_H */
 
 /* This part must be outside protection */

^ permalink raw reply related	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
@ 2013-04-11 17:04         ` Mel Gorman
  0 siblings, 0 replies; 105+ messages in thread
From: Mel Gorman @ 2013-04-11 17:04 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: linux-ext4, LKML, Linux-MM, Jiri Slaby

On Wed, Apr 10, 2013 at 09:12:45AM -0400, Theodore Ts'o wrote:
> On Wed, Apr 10, 2013 at 11:56:08AM +0100, Mel Gorman wrote:
> > During major activity there is likely to be "good" behaviour
> > with stalls roughly every 30 seconds roughly corresponding to
> > dirty_expire_centiseconds. As you'd expect, the flusher thread is stuck
> > when this happens.
> > 
> >   237 ?        00:00:00 flush-8:0
> > [<ffffffff811a35b9>] sleep_on_buffer+0x9/0x10
> > [<ffffffff811a35ee>] __lock_buffer+0x2e/0x30
> > [<ffffffff8123a21f>] do_get_write_access+0x43f/0x4b0
> 
> If we're stalling on lock_buffer(), that implies that buffer was being
> written, and for some reason it was taking a very long time to
> complete.
> 

Yes.

> It might be worthwhile to put a timestamp in struct dm_crypt_io, and
> record the time when a particular I/O encryption/decryption is getting
> queued to the kcryptd workqueues, and when they finally squirt out.
> 

That somewhat assumes that dm_crypt was at fault which is not unreasonable
but I was skeptical as the workload on dm_crypt was opening a maildir
and mostly reads.

I used a tracepoint in jbd2 to get an idea of what device the buffer_head
was managing and dm did not show up on the list. This is what a trace-cmd
log of the test told me.

       flush-8:0-240   [005]   236.655363: jbd2_lock_buffer_stall: dev 8,8 stall_ms 1096
         awesome-1364  [005]   290.594396: jbd2_lock_buffer_stall: dev 8,6 stall_ms 7312
 gnome-pty-helpe-2256  [005]   290.836952: jbd2_lock_buffer_stall: dev 8,8 stall_ms 7528
       flush-8:0-240   [003]   304.012424: jbd2_lock_buffer_stall: dev 8,8 stall_ms 4472
  gnome-terminal-2332  [005]   308.290879: jbd2_lock_buffer_stall: dev 8,6 stall_ms 3060
         awesome-1364  [006]   308.291318: jbd2_lock_buffer_stall: dev 8,6 stall_ms 3048
       flush-8:0-240   [005]   331.525996: jbd2_lock_buffer_stall: dev 8,5 stall_ms 8732
       flush-8:0-240   [005]   332.353526: jbd2_lock_buffer_stall: dev 8,5 stall_ms 472
       flush-8:0-240   [005]   345.341547: jbd2_lock_buffer_stall: dev 8,5 stall_ms 10024
  gnome-terminal-2418  [005]   347.166876: jbd2_lock_buffer_stall: dev 8,6 stall_ms 11852
         awesome-1364  [005]   347.167082: jbd2_lock_buffer_stall: dev 8,6 stall_ms 11844
       flush-8:0-240   [005]   347.424520: jbd2_lock_buffer_stall: dev 8,5 stall_ms 2012
       flush-8:0-240   [005]   347.583752: jbd2_lock_buffer_stall: dev 8,5 stall_ms 156
       flush-8:0-240   [005]   390.079682: jbd2_lock_buffer_stall: dev 8,8 stall_ms 396
       flush-8:0-240   [002]   407.882385: jbd2_lock_buffer_stall: dev 8,8 stall_ms 12244
       flush-8:0-240   [005]   408.003976: jbd2_lock_buffer_stall: dev 8,8 stall_ms 124
  gnome-terminal-2610  [005]   413.613365: jbd2_lock_buffer_stall: dev 8,6 stall_ms 3400
         awesome-1364  [006]   413.613605: jbd2_lock_buffer_stall: dev 8,6 stall_ms 3736
       flush-8:0-240   [002]   430.706475: jbd2_lock_buffer_stall: dev 8,5 stall_ms 9748
       flush-8:0-240   [005]   458.188896: jbd2_lock_buffer_stall: dev 8,5 stall_ms 7748
       flush-8:0-240   [005]   458.828143: jbd2_lock_buffer_stall: dev 8,5 stall_ms 348
       flush-8:0-240   [006]   459.163814: jbd2_lock_buffer_stall: dev 8,5 stall_ms 252
       flush-8:0-240   [005]   462.340173: jbd2_lock_buffer_stall: dev 8,5 stall_ms 3160
       flush-8:0-240   [005]   469.917705: jbd2_lock_buffer_stall: dev 8,5 stall_ms 6340
       flush-8:0-240   [005]   474.434206: jbd2_lock_buffer_stall: dev 8,5 stall_ms 4512
             tar-2315  [005]   510.043613: jbd2_lock_buffer_stall: dev 8,5 stall_ms 4316
           tclsh-1780  [005]   773.336488: jbd2_lock_buffer_stall: dev 8,5 stall_ms 736
             git-3100  [005]   775.933506: jbd2_lock_buffer_stall: dev 8,5 stall_ms 3664
             git-4763  [005]   864.093317: jbd2_lock_buffer_stall: dev 8,5 stall_ms 140
       flush-8:0-240   [005]   864.242068: jbd2_lock_buffer_stall: dev 8,6 stall_ms 280
             git-4763  [005]   864.264157: jbd2_lock_buffer_stall: dev 8,5 stall_ms 148
       flush-8:0-240   [005]   865.200004: jbd2_lock_buffer_stall: dev 8,5 stall_ms 464
             git-4763  [000]   865.602469: jbd2_lock_buffer_stall: dev 8,5 stall_ms 300
       flush-8:0-240   [005]   865.705448: jbd2_lock_buffer_stall: dev 8,5 stall_ms 500
       flush-8:0-240   [005]   885.367576: jbd2_lock_buffer_stall: dev 8,8 stall_ms 11024
       flush-8:0-240   [005]   895.339697: jbd2_lock_buffer_stall: dev 8,5 stall_ms 120
       flush-8:0-240   [005]   895.765488: jbd2_lock_buffer_stall: dev 8,5 stall_ms 424
 systemd-journal-265   [005]   915.687201: jbd2_lock_buffer_stall: dev 8,8 stall_ms 14844
       flush-8:0-240   [005]   915.690529: jbd2_lock_buffer_stall: dev 8,6 stall_ms 19656
             git-5442  [005]  1034.845674: jbd2_lock_buffer_stall: dev 8,5 stall_ms 344
             git-5442  [005]  1035.157389: jbd2_lock_buffer_stall: dev 8,5 stall_ms 264
       flush-8:0-240   [005]  1035.875478: jbd2_lock_buffer_stall: dev 8,8 stall_ms 1368
       flush-8:0-240   [005]  1036.189218: jbd2_lock_buffer_stall: dev 8,8 stall_ms 312
  gnome-terminal-5592  [005]  1037.318594: jbd2_lock_buffer_stall: dev 8,6 stall_ms 2628
         awesome-1364  [000]  1037.318913: jbd2_lock_buffer_stall: dev 8,6 stall_ms 2632
             git-5789  [005]  1076.805405: jbd2_lock_buffer_stall: dev 8,5 stall_ms 184
       flush-8:0-240   [005]  1078.401721: jbd2_lock_buffer_stall: dev 8,5 stall_ms 700
       flush-8:0-240   [005]  1078.784200: jbd2_lock_buffer_stall: dev 8,5 stall_ms 356
             git-5789  [005]  1079.722683: jbd2_lock_buffer_stall: dev 8,5 stall_ms 1452
       flush-8:0-240   [005]  1109.928552: jbd2_lock_buffer_stall: dev 8,5 stall_ms 976
       flush-8:0-240   [005]  1111.762280: jbd2_lock_buffer_stall: dev 8,5 stall_ms 1832
       flush-8:0-240   [005]  1260.197720: jbd2_lock_buffer_stall: dev 8,5 stall_ms 344
       flush-8:0-240   [005]  1260.403556: jbd2_lock_buffer_stall: dev 8,5 stall_ms 204
       flush-8:0-240   [005]  1260.550904: jbd2_lock_buffer_stall: dev 8,5 stall_ms 108
             git-6598  [005]  1260.832948: jbd2_lock_buffer_stall: dev 8,5 stall_ms 1084
       flush-8:0-240   [005]  1311.736367: jbd2_lock_buffer_stall: dev 8,5 stall_ms 260
       flush-8:0-240   [005]  1313.689297: jbd2_lock_buffer_stall: dev 8,5 stall_ms 412
       flush-8:0-240   [005]  1314.230420: jbd2_lock_buffer_stall: dev 8,5 stall_ms 540
             git-7022  [006]  1314.241607: jbd2_lock_buffer_stall: dev 8,5 stall_ms 668
       flush-8:0-240   [000]  1347.980425: jbd2_lock_buffer_stall: dev 8,5 stall_ms 308
       flush-8:0-240   [005]  1348.164598: jbd2_lock_buffer_stall: dev 8,5 stall_ms 104
             git-7998  [005]  1547.755328: jbd2_lock_buffer_stall: dev 8,5 stall_ms 304
       flush-8:0-240   [006]  1547.764209: jbd2_lock_buffer_stall: dev 8,5 stall_ms 208
       flush-8:0-240   [005]  1548.653365: jbd2_lock_buffer_stall: dev 8,5 stall_ms 844
       flush-8:0-240   [005]  1549.255022: jbd2_lock_buffer_stall: dev 8,5 stall_ms 460
       flush-8:0-240   [005]  1725.036408: jbd2_lock_buffer_stall: dev 8,6 stall_ms 156
             git-8743  [005]  1740.492630: jbd2_lock_buffer_stall: dev 8,5 stall_ms 15032
             git-8743  [005]  1749.485214: jbd2_lock_buffer_stall: dev 8,5 stall_ms 8648
       flush-8:0-240   [005]  1775.937819: jbd2_lock_buffer_stall: dev 8,5 stall_ms 4268
       flush-8:0-240   [006]  1776.335682: jbd2_lock_buffer_stall: dev 8,5 stall_ms 336
       flush-8:0-240   [006]  1776.446799: jbd2_lock_buffer_stall: dev 8,5 stall_ms 112
       flush-8:0-240   [005]  1802.593183: jbd2_lock_buffer_stall: dev 8,6 stall_ms 108
       flush-8:0-240   [006]  1802.809237: jbd2_lock_buffer_stall: dev 8,8 stall_ms 208
       flush-8:0-240   [005]  2012.041976: jbd2_lock_buffer_stall: dev 8,6 stall_ms 292
           tclsh-1778  [005]  2012.055139: jbd2_lock_buffer_stall: dev 8,5 stall_ms 424
  latency-output-1933  [002]  2012.055147: jbd2_lock_buffer_stall: dev 8,5 stall_ms 136
             git-10209 [005]  2012.074584: jbd2_lock_buffer_stall: dev 8,5 stall_ms 164
       flush-8:0-240   [005]  2012.177241: jbd2_lock_buffer_stall: dev 8,5 stall_ms 128
             git-10209 [005]  2012.297472: jbd2_lock_buffer_stall: dev 8,5 stall_ms 216
       flush-8:0-240   [005]  2012.299828: jbd2_lock_buffer_stall: dev 8,5 stall_ms 120

dm is not obviously at fault there. sda5 is /usr/src (git checkout
running there with some logging), sda6 is /home and sda8 is / . This is
the tracepoint patch used.

---8<---
jbd2: Trace when lock_buffer at the start of a journal write takes a long time

While investigating interactivity problems it was clear that processes
sometimes stall for long periods of times if an attempt is made to lock
a buffer that is already part of a transaction. It would stall in a
trace looking something like

[<ffffffff811a39de>] __lock_buffer+0x2e/0x30
[<ffffffff8123a60f>] do_get_write_access+0x43f/0x4b0
[<ffffffff8123a7cb>] jbd2_journal_get_write_access+0x2b/0x50
[<ffffffff81220f79>] __ext4_journal_get_write_access+0x39/0x80
[<ffffffff811f3198>] ext4_reserve_inode_write+0x78/0xa0
[<ffffffff811f3209>] ext4_mark_inode_dirty+0x49/0x220
[<ffffffff811f57d1>] ext4_dirty_inode+0x41/0x60
[<ffffffff8119ac3e>] __mark_inode_dirty+0x4e/0x2d0
[<ffffffff8118b9b9>] update_time+0x79/0xc0
[<ffffffff8118ba98>] file_update_time+0x98/0x100
[<ffffffff81110ffc>] __generic_file_aio_write+0x17c/0x3b0
[<ffffffff811112aa>] generic_file_aio_write+0x7a/0xf0
[<ffffffff811ea853>] ext4_file_write+0x83/0xd0
[<ffffffff81172b23>] do_sync_write+0xa3/0xe0
[<ffffffff811731ae>] vfs_write+0xae/0x180
[<ffffffff8117361d>] sys_write+0x4d/0x90
[<ffffffff8159d62d>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

There was a suspicion that dm_crypt might be part responsible so this
patch adds a tracepoint capturing when lock_buffer takes too long
in do_get_write_access() that logs what device is being written and
how long the stall was for.

Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 fs/jbd2/transaction.c       |  8 ++++++++
 include/trace/events/jbd2.h | 21 +++++++++++++++++++++
 2 files changed, 29 insertions(+)

diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
index 325bc01..1be0ccb 100644
--- a/fs/jbd2/transaction.c
+++ b/fs/jbd2/transaction.c
@@ -640,6 +640,7 @@ do_get_write_access(handle_t *handle, struct journal_head *jh,
 	int error;
 	char *frozen_buffer = NULL;
 	int need_copy = 0;
+	unsigned long start_lock, time_lock;
 
 	if (is_handle_aborted(handle))
 		return -EROFS;
@@ -655,9 +656,16 @@ repeat:
 
 	/* @@@ Need to check for errors here at some point. */
 
+ 	start_lock = jiffies;
 	lock_buffer(bh);
 	jbd_lock_bh_state(bh);
 
+	/* If it takes too long to lock the buffer, trace it */
+	time_lock = jbd2_time_diff(start_lock, jiffies);
+	if (time_lock > HZ/10)
+		trace_jbd2_lock_buffer_stall(bh->b_bdev->bd_dev,
+			jiffies_to_msecs(time_lock));
+
 	/* We now hold the buffer lock so it is safe to query the buffer
 	 * state.  Is the buffer dirty?
 	 *
diff --git a/include/trace/events/jbd2.h b/include/trace/events/jbd2.h
index 070df49..c1d1f3e 100644
--- a/include/trace/events/jbd2.h
+++ b/include/trace/events/jbd2.h
@@ -358,6 +358,27 @@ TRACE_EVENT(jbd2_write_superblock,
 		  MINOR(__entry->dev), __entry->write_op)
 );
 
+TRACE_EVENT(jbd2_lock_buffer_stall,
+
+	TP_PROTO(dev_t dev, unsigned long stall_ms),
+
+	TP_ARGS(dev, stall_ms),
+
+	TP_STRUCT__entry(
+		__field(        dev_t, dev	)
+		__field(unsigned long, stall_ms	)
+	),
+
+	TP_fast_assign(
+		__entry->dev		= dev;
+		__entry->stall_ms	= stall_ms;
+	),
+
+	TP_printk("dev %d,%d stall_ms %lu",
+		MAJOR(__entry->dev), MINOR(__entry->dev),
+		__entry->stall_ms)
+);
+
 #endif /* _TRACE_JBD2_H */
 
 /* This part must be outside protection */

^ permalink raw reply related	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-11 17:04         ` Mel Gorman
@ 2013-04-11 18:35           ` Theodore Ts'o
  -1 siblings, 0 replies; 105+ messages in thread
From: Theodore Ts'o @ 2013-04-11 18:35 UTC (permalink / raw)
  To: Mel Gorman; +Cc: linux-ext4, LKML, Linux-MM, Jiri Slaby

On Thu, Apr 11, 2013 at 06:04:02PM +0100, Mel Gorman wrote:
> > If we're stalling on lock_buffer(), that implies that buffer was being
> > written, and for some reason it was taking a very long time to
> > complete.
> > 
> 
> Yes.
> 
> > It might be worthwhile to put a timestamp in struct dm_crypt_io, and
> > record the time when a particular I/O encryption/decryption is getting
> > queued to the kcryptd workqueues, and when they finally squirt out.
> > 
> 
> That somewhat assumes that dm_crypt was at fault which is not unreasonable
> but I was skeptical as the workload on dm_crypt was opening a maildir
> and mostly reads.

Hmm... well, I've reviewed all of the places in the ext4 and jbd2
layer where we call lock_buffer(), and with one exception[1] we're not
holding the bh locked any longer than necessary.  There are a few
places where we grab a spinlock or two before we can do what we need
to do and then release the locked buffer head, but the only time we
hold the bh locked for long periods of time is when we submit metadata
blocks for I/O.

[1] There is one exception in ext4_xattr_release_block() where I
believe we should move the call to unlock_buffer(bh) before the call
to ext4_free_blocks(), since we've already elevated the bh count and
ext4_free_blocks() does not need to have the bh locked.  It's not
related to any of the stalls you've reported, though, as near as I can
tell (none of the stack traces include the ext4 xattr code, and this
would only affect external extended attribute blocks).


Could you add code which checks the hold time of lock_buffer(), measuring
from when the lock is successfully grabbed, to see whether I missed some
code path in ext4 or jbd2 where the bh is locked and then there is some
call to some function which needs to block for some random reason?  What
I'd suggest is putting a timestamp in the buffer_head structure, which is
set by lock_buffer() once it has successfully grabbed the lock, and then
in unlock_buffer(), if the lock was held for more than a second or some
such, dumping out the stack trace.
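
Something like the following completely untested sketch is what I have in
mind.  The b_lock_time field name is made up, buffers locked directly via
trylock_buffer() are not tracked, and for buffers under async writeback
the stack dump will come from the I/O completion path rather than from
whoever submitted the write:

/* include/linux/buffer_head.h: new field in struct buffer_head */
	unsigned long b_lock_time;	/* jiffies when BH_Lock was taken */

static inline void lock_buffer(struct buffer_head *bh)
{
	might_sleep();
	if (!trylock_buffer(bh))
		__lock_buffer(bh);
	bh->b_lock_time = jiffies;	/* remember when we got the lock */
}

/* fs/buffer.c */
void unlock_buffer(struct buffer_head *bh)
{
	/* yell if the buffer was held locked for more than a second */
	if (bh->b_lock_time && time_after(jiffies, bh->b_lock_time + HZ))
		dump_stack();
	bh->b_lock_time = 0;
	clear_bit_unlock(BH_Lock, &bh->b_state);
	smp_mb__after_clear_bit();
	wake_up_bit(&bh->b_state, BH_Lock);
}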

Because at this point, either I'm missing something or I'm beginning
to suspect that your hard drive (or maybe something in the block layer?)
is simply taking a long time to service an I/O request.  Putting in
this check should let us very quickly determine what code path
and/or which subsystem we should be focused upon.

Thanks,

					- Ted

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-11 18:35           ` Theodore Ts'o
@ 2013-04-11 21:33             ` Jan Kara
  -1 siblings, 0 replies; 105+ messages in thread
From: Jan Kara @ 2013-04-11 21:33 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: Mel Gorman, linux-ext4, LKML, Linux-MM, Jiri Slaby

On Thu 11-04-13 14:35:12, Ted Tso wrote:
> On Thu, Apr 11, 2013 at 06:04:02PM +0100, Mel Gorman wrote:
> > > If we're stalling on lock_buffer(), that implies that buffer was being
> > > written, and for some reason it was taking a very long time to
> > > complete.
> > > 
> > 
> > Yes.
> > 
> > > It might be worthwhile to put a timestamp in struct dm_crypt_io, and
> > > record the time when a particular I/O encryption/decryption is getting
> > > queued to the kcryptd workqueues, and when they finally squirt out.
> > > 
> > 
> > That somewhat assumes that dm_crypt was at fault which is not unreasonable
> > but I was skeptical as the workload on dm_crypt was opening a maildir
> > and mostly reads.
> 
> Hmm... well, I've reviewed all of the places in the ext4 and jbd2
> layer where we call lock_buffer(), and with one exception[1] we're not
> holding the bh locked any longer than necessary.  There are a few
> places where we grab a spinlock or two before we can do what we need
> to do and then release the locked buffer head, but the only time we
> hold the bh locked for long periods of time is when we submit metadata
> blocks for I/O.
> 
> [1] There is one exception in ext4_xattr_release_block() where I
> believe we should move the call to unlock_buffer(bh) before the call
> to ext4_free_blocks(), since we've already elevated the bh count and
> ext4_free_blocks() does not need to have the bh locked.  It's not
> related to any of the stalls you've reported, though, as near as I can
> tell (none of the stack traces include the ext4 xattr code, and this
> would only affect external extended attribute blocks).
> 
> 
> Could you add code which checks the hold time of lock_buffer(), measuring
> from when the lock is successfully grabbed, to see whether I missed some
> code path in ext4 or jbd2 where the bh is locked and then there is some
> call to some function which needs to block for some random reason?  What
> I'd suggest is putting a timestamp in the buffer_head structure, which is
> set by lock_buffer() once it has successfully grabbed the lock, and then
> in unlock_buffer(), if the lock was held for more than a second or some
> such, dumping out the stack trace.
> 
> Because at this point, either I'm missing something or I'm beginning
> to suspect that your hard drive (or maybe something in the block layer?)
> is simply taking a long time to service an I/O request.  Putting in
> this check should let us very quickly determine what code path
> and/or which subsystem we should be focused upon.
  I think it might be more enlightening if Mel traced which process in
which function is holding the buffer lock. I suspect we'll find out that
the flusher thread has submitted the buffer for IO as an async write and
thus it takes a long time to complete in the presence of reads, which have
higher priority.
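
  Something like the following rough, untested illustration would do as a
first cut.  The field names are made up, buffers locked directly via
trylock_buffer() are not tracked, and the recorded task may already have
exited by the time the stall is reported:

/* include/linux/buffer_head.h: new debug fields in struct buffer_head */
	struct task_struct *b_lock_owner;	/* who last took BH_Lock */
	unsigned long b_lock_ip;		/* roughly where it was taken */

static inline void lock_buffer(struct buffer_head *bh)
{
	might_sleep();
	if (!trylock_buffer(bh))
		__lock_buffer(bh);
	bh->b_lock_owner = current;
	bh->b_lock_ip = _THIS_IP_;	/* lands inside the caller once inlined */
}

/* in do_get_write_access(), next to the stall tracepoint above */
	if (time_lock > HZ/10 && bh->b_lock_owner)
		printk(KERN_INFO "bh lock was held by %s/%d from %pS\n",
		       bh->b_lock_owner->comm, bh->b_lock_owner->pid,
		       (void *)bh->b_lock_ip);

  If the theory is right, the owner reported for the stalling buffers
should mostly be the flusher thread and the call site somewhere in the
writeback path.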

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-11 21:33             ` Jan Kara
@ 2013-04-12  2:57               ` Theodore Ts'o
  -1 siblings, 0 replies; 105+ messages in thread
From: Theodore Ts'o @ 2013-04-12  2:57 UTC (permalink / raw)
  To: Jan Kara; +Cc: Mel Gorman, linux-ext4, LKML, Linux-MM, Jiri Slaby

On Thu, Apr 11, 2013 at 11:33:35PM +0200, Jan Kara wrote:
>   I think it might be more enlightening if Mel traced which process in
> which function is holding the buffer lock. I suspect we'll find out that
> the flusher thread has submitted the buffer for IO as an async write and
> thus it takes a long time to complete in the presence of reads, which have
> higher priority.

That's an interesting theory.  If the workload is one which is very
heavy on reads and writes, that could explain the high latency.  It
would also explain why those of us who are using primarily SSDs aren't
seeing the problems, because reads are nice and fast.

If that is the case, one possible solution that comes to mind would be
to mark buffer_heads that contain metadata with a flag, so that the
flusher thread can write them back at the same priority as reads.

The only problem I can see with this hypothesis is that if this is the
explanation for what Mel and Jiri are seeing, it's something that
would have been around for a long time, and would affect ext3 as well
as ext4.  That isn't quite consistent, however, with Mel's observation
that this is a problem which has gotten worse relatively recently.

	  	    	   	    	       - Ted

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-12  2:57               ` Theodore Ts'o
@ 2013-04-12  4:50                 ` Dave Chinner
  -1 siblings, 0 replies; 105+ messages in thread
From: Dave Chinner @ 2013-04-12  4:50 UTC (permalink / raw)
  To: Theodore Ts'o, Jan Kara, Mel Gorman, linux-ext4, LKML,
	Linux-MM, Jiri Slaby

On Thu, Apr 11, 2013 at 10:57:08PM -0400, Theodore Ts'o wrote:
> On Thu, Apr 11, 2013 at 11:33:35PM +0200, Jan Kara wrote:
> >   I think it might be more enlightening if Mel traced which process in
> > which function is holding the buffer lock. I suspect we'll find out that
> > the flusher thread has submitted the buffer for IO as an async write and
> > thus it takes a long time to complete in the presence of reads, which have
> > higher priority.
> 
> That's an interesting theory.  If the workload is one which is very
> heavy on reads and writes, that could explain the high latency.  It
> would also explain why those of us who are using primarily SSDs aren't
> seeing the problems, because reads are nice and fast.
> 
> If that is the case, one possible solution that comes to mind would be
> to mark buffer_heads that contain metadata with a flag, so that the
> flusher thread can write them back at the same priority as reads.

Ext4 is already using REQ_META for this purpose.

I'm surprised that no-one has suggested "change the IO elevator"
yet.....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-11 18:35           ` Theodore Ts'o
@ 2013-04-12  9:45             ` Mel Gorman
  -1 siblings, 0 replies; 105+ messages in thread
From: Mel Gorman @ 2013-04-12  9:45 UTC (permalink / raw)
  To: Theodore Ts'o, linux-ext4, LKML, Linux-MM, Jiri Slaby

On Thu, Apr 11, 2013 at 02:35:12PM -0400, Theodore Ts'o wrote:
> On Thu, Apr 11, 2013 at 06:04:02PM +0100, Mel Gorman wrote:
> > > If we're stalling on lock_buffer(), that implies that buffer was being
> > > written, and for some reason it was taking a very long time to
> > > complete.
> > > 
> > 
> > Yes.
> > 
> > > It might be worthwhile to put a timestamp in struct dm_crypt_io, and
> > > record the time when a particular I/O encryption/decryption is getting
> > > queued to the kcryptd workqueues, and when they finally squirt out.
> > > 
> > 
> > That somewhat assumes that dm_crypt was at fault which is not unreasonable
> > but I was skeptical as the workload on dm_crypt was opening a maildir
> > and mostly reads.
> 
> Hmm... well, I've reviewed all of the places in the ext4 and jbd2
> layer where we call lock_buffer(), and with one exception[1] we're not
> holding the bh locked any longer than necessary.  There are a few
> places where we grab a spinlock or two before we can do what we need
> to do and then release the locked buffer head, but the only time we
> hold the bh locked for long periods of time is when we submit metadata
> blocks for I/O.
> 

Yeah, ok. This is not the answer I was hoping for but it's the answer I
expected.

> Could you add code which checks the hold time of lock_buffer(), measuring
> from when the lock is successfully grabbed, to see whether I
> missed some code path in ext4 or jbd2 where the bh is locked and then
> there is some call to some function which needs to block for some
> random reason?
>
> What I'd suggest is putting a timestamp in the buffer_head
> structure, which is set by lock_buffer() once it has successfully grabbed
> the lock, and then in unlock_buffer(), if the lock was held for more than a
> second or some such, dumping out the stack trace.
> 

I can do that but the results might lack meaning. What I could do instead
is use a variation of the page owner tracking patch (current iteration at
https://lkml.org/lkml/2012/12/7/487) to record a stack trace in lock_buffer
and dump it from jbd2/transaction.c if it stalls for too long. I'll report
if I find something useful.
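
The shape of it would be something like this (untested, assumes
CONFIG_STACKTRACE; the field names and the eight-entry depth are
arbitrary):

/* include/linux/buffer_head.h: debug-only fields in struct buffer_head */
	unsigned long b_lock_entries[8];
	struct stack_trace b_lock_trace;

static inline void lock_buffer(struct buffer_head *bh)
{
	might_sleep();
	if (!trylock_buffer(bh))
		__lock_buffer(bh);
	/* record who is taking the lock and from where */
	bh->b_lock_trace.nr_entries = 0;
	bh->b_lock_trace.max_entries = ARRAY_SIZE(bh->b_lock_entries);
	bh->b_lock_trace.entries = bh->b_lock_entries;
	bh->b_lock_trace.skip = 0;
	save_stack_trace(&bh->b_lock_trace);
}

/* in do_get_write_access(), next to the existing stall tracepoint */
	if (time_lock > HZ/10)
		print_stack_trace(&bh->b_lock_trace, 2);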

> Because at this point, either I'm missing something or I'm beginning
> to suspect that your hard drive (or maybe something in the block layer?)
> is simply taking a long time to service an I/O request. 

It could be because the drive is a piece of crap.

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-12  2:57               ` Theodore Ts'o
@ 2013-04-12  9:47                 ` Mel Gorman
  -1 siblings, 0 replies; 105+ messages in thread
From: Mel Gorman @ 2013-04-12  9:47 UTC (permalink / raw)
  To: Theodore Ts'o, Jan Kara, linux-ext4, LKML, Linux-MM, Jiri Slaby

On Thu, Apr 11, 2013 at 10:57:08PM -0400, Theodore Ts'o wrote:
> On Thu, Apr 11, 2013 at 11:33:35PM +0200, Jan Kara wrote:
> >   I think it might be more enlightening if Mel traced which process in
> > which function is holding the buffer lock. I suspect we'll find out that
> > the flusher thread has submitted the buffer for IO as an async write and
> > thus it takes a long time to complete in the presence of reads, which have
> > higher priority.
> 
> That's an interesting theory.  If the workload is one which is very
> heavy on reads and writes, that could explain the high latency.  It
> would also explain why those of us who are using primarily SSDs aren't
> seeing the problems, because reads are nice and fast.
> 
> If that is the case, one possible solution that comes to mind would be
> to mark buffer_heads that contain metadata with a flag, so that the
> flusher thread can write them back at the same priority as reads.
> 
> The only problem I can see with this hypothesis is that if this is the
> explanation for what Mel and Jiri are seeing, it's something that
> would have been around for a long time, and would affect ext3 as well
> as ext4.  That isn't quite consistent, however, with Mel's observation
> that this is a problem which has gotten worse relatively recently.
> 

According to the tests I've run, multi-second stalls have been a problem for
a while but never really bothered me. I'm not sure why it felt particularly
bad around -rc2 or why it seems to be better now. Maybe I just had my
cranky pants on.

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-12  2:57               ` Theodore Ts'o
@ 2013-04-12 10:18                 ` Tvrtko Ursulin
  -1 siblings, 0 replies; 105+ messages in thread
From: Tvrtko Ursulin @ 2013-04-12 10:18 UTC (permalink / raw)
  To: Theodore Ts'o
  Cc: Jan Kara, Mel Gorman, linux-ext4, LKML, Linux-MM, Jiri Slaby


Hi all,

On Thursday 11 April 2013 22:57:08 Theodore Ts'o wrote:
> That's an interesting theory.  If the workload is one which is very
> heavy on reads and writes, that could explain the high latency.  It
> would also explain why those of us who are using primarily SSDs aren't
> seeing the problems, because reads are nice and fast.
> 
> If that is the case, one possible solution that comes to mind would be
> to mark buffer_heads that contain metadata with a flag, so that the
> flusher thread can write them back at the same priority as reads.
> 
> The only problem I can see with this hypothesis is that if this is the
> explanation for what Mel and Jiri are seeing, it's something that
> would have been around for a long time, and would affect ext3 as well
> as ext4.  That isn't quite consistent, however, with Mel's observation
> that this is a problem which has gotten worse relatively recently.

Dropping in as a casual observer, having missed the start of the thread,
at the risk of just muddying the waters for you.

I had a similar problem with ext4 for quite a while; at least that was my
conclusion, since migrating one filesystem to XFS fixed it for me. I
observed this between the 3.5 and 3.7 kernels.

The situation was that I had an ext4 filesystem (on top of LVM, on top of
MD RAID 1, on top of two mechanical hard drives) which was dedicated to
holding a large SVN check-out. The other filesystems were also ext4, on
different logical volumes (but the same spindles).

The symptoms were long stalls of everything (including window management!)
on a relatively heavily loaded desktop (KDE). Stalls would last anything
from five to maybe even 30 seconds; I'm not sure exactly, but long enough
that you think the system has actually crashed. I couldn't even switch
away to a different virtual terminal during the stall, nothing.

Eventually I traced it down to kdesvn (a Subversion client) periodically
refreshing (or something) its metadata and hence generating some IO on
that dedicated filesystem. That, combined with some other desktop
activity, had the effect of stalling everything else. I thought it was
very weird, but I suppose KDE and all the rest nowadays do too much IO in
everything they do.

Following a hunch, I reformatted that filesystem as XFS, which fixed the
problem.

I can't reproduce this now to run any tests, so I know this is not very
helpful. But perhaps some of the info will be useful to someone.

Tvrtko


^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-12  4:50                 ` Dave Chinner
@ 2013-04-12 15:19                   ` Theodore Ts'o
  -1 siblings, 0 replies; 105+ messages in thread
From: Theodore Ts'o @ 2013-04-12 15:19 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Jan Kara, Mel Gorman, linux-ext4, LKML, Linux-MM, Jiri Slaby

On Fri, Apr 12, 2013 at 02:50:42PM +1000, Dave Chinner wrote:
> > If that is the case, one possible solution that comes to mind would be
> > to mark buffer_heads that contain metadata with a flag, so that the
> > flusher thread can write them back at the same priority as reads.
> 
> Ext4 is already using REQ_META for this purpose.

We're using REQ_META | REQ_PRIO for reads, not writes.

> I'm surprised that no-one has suggested "change the IO elevator"
> yet.....

Well, testing to see if the stalls go away with the noop scheduler is a
good thing to try just to validate the theory.

The thing is, we do want to make ext4 work well with cfq, and
prioritizing non-readahead read requests ahead of data writeback does
make sense.  The issue is that metadata writes going through
the block device could in some cases effectively cause a priority
inversion when what had previously been an asynchronous writeback
starts blocking a foreground, user-visible process.

At least, that's the theory; we should confirm that this is indeed
what is causing the data stalls which Mel is reporting on HDD's before
we start figuring out how to fix this problem.

   	 	      	     	 - Ted

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-12 15:19                   ` Theodore Ts'o
@ 2013-04-13  1:23                     ` Dave Chinner
  -1 siblings, 0 replies; 105+ messages in thread
From: Dave Chinner @ 2013-04-13  1:23 UTC (permalink / raw)
  To: Theodore Ts'o, Jan Kara, Mel Gorman, linux-ext4, LKML,
	Linux-MM, Jiri Slaby

On Fri, Apr 12, 2013 at 11:19:52AM -0400, Theodore Ts'o wrote:
> On Fri, Apr 12, 2013 at 02:50:42PM +1000, Dave Chinner wrote:
> > > If that is the case, one possible solution that comes to mind would be
> > > to mark buffer_heads that contain metadata with a flag, so that the
> > > flusher thread can write them back at the same priority as reads.
> > 
> > Ext4 is already using REQ_META for this purpose.
> 
> We're using REQ_META | REQ_PRIO for reads, not writes.
> 
> > I'm surprised that no-one has suggested "change the IO elevator"
> > yet.....
> 
> Well, testing to see if the stalls go away with the noop scheduler is a
> good thing to try just to validate the theory.

Exactly.

> The thing is, we do want to make ext4 work well with cfq, and
> prioritizing non-readahead read requests ahead of data writeback does
> make sense.  The issue is that metadata writes going through
> the block device could in some cases effectively cause a priority
> inversion when what had previously been an asynchronous writeback
> starts blocking a foreground, user-visible process.

Here's the historic problem with CFQ: its scheduling algorithms
change from release to release, and so what you tune the filesystem
to for this release is likely to cause different behaviour
in a few releases' time.

We've had this problem time and time again with CFQ+XFS, so we
stopped trying to "tune" to a particular elevator long ago.  The
best you can do it tag the Io as appropriately as possible (e.g.
metadata with REQ_META, sync IO with ?_SYNC, etc), and then hope CFQ
hasn't been broken since the last release....

> At least, that's the theory; we should confirm that this is indeed
> what is causing the data stalls which Mel is reporting on HDD's before
> we start figuring out how to fix this problem.

*nod*.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-12  9:47                 ` Mel Gorman
@ 2013-04-21  0:05                   ` Theodore Ts'o
  -1 siblings, 0 replies; 105+ messages in thread
From: Theodore Ts'o @ 2013-04-21  0:05 UTC (permalink / raw)
  To: Mel Gorman; +Cc: Jan Kara, linux-ext4, LKML, Linux-MM, Jiri Slaby

As an update to this thread, we brought up this issue at LSF/MM, and
there is a thought that we should be able to solve this problem by
having lock_buffer() check to see if the buffer is locked due to a
queued write and, if so, have the priority of that write bumped up in the
write queues to resolve the priority inversion.  I believe Jeff Moyer
was going to look into this, if I remember correctly.

An alternate solution which I've been playing around with adds buffer_head
flags so we can indicate that a buffer contains metadata and/or should
have I/O submitted with the REQ_PRIO flag set.

Adding a buffer_head flag for at least BH_Meta is probably a good
thing, since that way the blktrace will be properly annotated.
Whether we should keep the BH_Prio flag or rely on lock_buffer()
automatically raising the priority is less clear; my feeling is that if
lock_buffer() can do the right thing, we should probably do it via
lock_buffer().  I have a feeling this might be decidedly non-trivial,
though, so perhaps we should just do it via BH flags?

	   	      	     	  	- Ted

^ permalink raw reply	[flat|nested] 105+ messages in thread

* [PATCH 1/3] ext4: mark all metadata I/O with REQ_META
  2013-04-21  0:05                   ` Theodore Ts'o
@ 2013-04-21  0:07                     ` Theodore Ts'o
  -1 siblings, 0 replies; 105+ messages in thread
From: Theodore Ts'o @ 2013-04-21  0:07 UTC (permalink / raw)
  To: Ext4 Developers List
  Cc: linux-mm, Linux Kernel Developers List, mgorman, Theodore Ts'o

As Dave Chinner pointed out at the 2013 LSF/MM workshop, it's
important that metadata I/O requests are marked as such to avoid
priority inversions caused by I/O bandwidth throttling.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
---
 fs/ext4/balloc.c | 2 +-
 fs/ext4/ialloc.c | 2 +-
 fs/ext4/mmp.c    | 4 ++--
 fs/ext4/super.c  | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c
index 8dcaea6..d0f13ea 100644
--- a/fs/ext4/balloc.c
+++ b/fs/ext4/balloc.c
@@ -441,7 +441,7 @@ ext4_read_block_bitmap_nowait(struct super_block *sb, ext4_group_t block_group)
 	trace_ext4_read_block_bitmap_load(sb, block_group);
 	bh->b_end_io = ext4_end_bitmap_read;
 	get_bh(bh);
-	submit_bh(READ, bh);
+	submit_bh(READ | REQ_META | REQ_PRIO, bh);
 	return bh;
 verify:
 	ext4_validate_block_bitmap(sb, desc, block_group, bh);
diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
index 18d36d8..00a818d 100644
--- a/fs/ext4/ialloc.c
+++ b/fs/ext4/ialloc.c
@@ -166,7 +166,7 @@ ext4_read_inode_bitmap(struct super_block *sb, ext4_group_t block_group)
 	trace_ext4_load_inode_bitmap(sb, block_group);
 	bh->b_end_io = ext4_end_bitmap_read;
 	get_bh(bh);
-	submit_bh(READ, bh);
+	submit_bh(READ | REQ_META | REQ_PRIO, bh);
 	wait_on_buffer(bh);
 	if (!buffer_uptodate(bh)) {
 		put_bh(bh);
diff --git a/fs/ext4/mmp.c b/fs/ext4/mmp.c
index b3b1f7d..214461e 100644
--- a/fs/ext4/mmp.c
+++ b/fs/ext4/mmp.c
@@ -54,7 +54,7 @@ static int write_mmp_block(struct super_block *sb, struct buffer_head *bh)
 	lock_buffer(bh);
 	bh->b_end_io = end_buffer_write_sync;
 	get_bh(bh);
-	submit_bh(WRITE_SYNC, bh);
+	submit_bh(WRITE_SYNC | REQ_META | REQ_PRIO, bh);
 	wait_on_buffer(bh);
 	sb_end_write(sb);
 	if (unlikely(!buffer_uptodate(bh)))
@@ -86,7 +86,7 @@ static int read_mmp_block(struct super_block *sb, struct buffer_head **bh,
 		get_bh(*bh);
 		lock_buffer(*bh);
 		(*bh)->b_end_io = end_buffer_read_sync;
-		submit_bh(READ_SYNC, *bh);
+		submit_bh(READ_SYNC | REQ_META | REQ_PRIO, *bh);
 		wait_on_buffer(*bh);
 		if (!buffer_uptodate(*bh)) {
 			brelse(*bh);
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index bfa29ec..dbc7c09 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -4252,7 +4252,7 @@ static journal_t *ext4_get_dev_journal(struct super_block *sb,
 		goto out_bdev;
 	}
 	journal->j_private = sb;
-	ll_rw_block(READ, 1, &journal->j_sb_buffer);
+	ll_rw_block(READ | REQ_META | REQ_PRIO, 1, &journal->j_sb_buffer);
 	wait_on_buffer(journal->j_sb_buffer);
 	if (!buffer_uptodate(journal->j_sb_buffer)) {
 		ext4_msg(sb, KERN_ERR, "I/O error on journal device");
-- 
1.7.12.rc0.22.gcdd159b


^ permalink raw reply related	[flat|nested] 105+ messages in thread

* [PATCH 2/3] buffer: add BH_Prio and BH_Meta flags
  2013-04-21  0:07                     ` Theodore Ts'o
@ 2013-04-21  0:07                       ` Theodore Ts'o
  -1 siblings, 0 replies; 105+ messages in thread
From: Theodore Ts'o @ 2013-04-21  0:07 UTC (permalink / raw)
  To: Ext4 Developers List
  Cc: linux-mm, Linux Kernel Developers List, mgorman, Theodore Ts'o

Add buffer_head flags so that buffer cache writebacks can be marked
with the appropriate request flags, so that metadata blocks can be
marked appropriately in blktrace.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
---
 fs/buffer.c                 | 5 +++++
 include/linux/buffer_head.h | 4 ++++
 2 files changed, 9 insertions(+)

diff --git a/fs/buffer.c b/fs/buffer.c
index b4dcb34..a15575c 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -2988,6 +2988,11 @@ int submit_bh(int rw, struct buffer_head * bh)
 	/* Take care of bh's that straddle the end of the device */
 	guard_bh_eod(rw, bio, bh);
 
+	if (buffer_meta(bh))
+		rw |= REQ_META;
+	if (buffer_prio(bh))
+		rw |= REQ_PRIO;
+
 	bio_get(bio);
 	submit_bio(rw, bio);
 
diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index 5afc4f9..33c0f81 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -34,6 +34,8 @@ enum bh_state_bits {
 	BH_Write_EIO,	/* I/O error on write */
 	BH_Unwritten,	/* Buffer is allocated on disk but not written */
 	BH_Quiet,	/* Buffer Error Prinks to be quiet */
+	BH_Meta,	/* Buffer contains metadata */
+	BH_Prio,	/* Buffer should be submitted with REQ_PRIO */
 
 	BH_PrivateStart,/* not a state bit, but the first bit available
 			 * for private allocation by other entities
@@ -124,6 +126,8 @@ BUFFER_FNS(Delay, delay)
 BUFFER_FNS(Boundary, boundary)
 BUFFER_FNS(Write_EIO, write_io_error)
 BUFFER_FNS(Unwritten, unwritten)
+BUFFER_FNS(Meta, meta)
+BUFFER_FNS(Prio, prio)
 
 #define bh_offset(bh)		((unsigned long)(bh)->b_data & ~PAGE_MASK)
 
-- 
1.7.12.rc0.22.gcdd159b


^ permalink raw reply related	[flat|nested] 105+ messages in thread

* [PATCH 3/3] ext4: mark metadata blocks using bh flags
  2013-04-21  0:07                     ` Theodore Ts'o
@ 2013-04-21  0:07                       ` Theodore Ts'o
  -1 siblings, 0 replies; 105+ messages in thread
From: Theodore Ts'o @ 2013-04-21  0:07 UTC (permalink / raw)
  To: Ext4 Developers List
  Cc: linux-mm, Linux Kernel Developers List, mgorman, Theodore Ts'o

This allows metadata writebacks which are issued via block device
writeback to be sent with the current write request flags.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
---
 fs/ext4/ext4_jbd2.c | 2 ++
 fs/ext4/inode.c     | 6 +++++-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/fs/ext4/ext4_jbd2.c b/fs/ext4/ext4_jbd2.c
index 0e1dc9e..fd97b81 100644
--- a/fs/ext4/ext4_jbd2.c
+++ b/fs/ext4/ext4_jbd2.c
@@ -215,6 +215,8 @@ int __ext4_handle_dirty_metadata(const char *where, unsigned int line,
 
 	might_sleep();
 
+	mark_buffer_meta(bh);
+	mark_buffer_prio(bh);
 	if (ext4_handle_valid(handle)) {
 		err = jbd2_journal_dirty_metadata(handle, bh);
 		if (err) {
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 62492e9..d7518e2 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1080,10 +1080,14 @@ retry_journal:
 /* For write_end() in data=journal mode */
 static int write_end_fn(handle_t *handle, struct buffer_head *bh)
 {
+	int ret;
 	if (!buffer_mapped(bh) || buffer_freed(bh))
 		return 0;
 	set_buffer_uptodate(bh);
-	return ext4_handle_dirty_metadata(handle, NULL, bh);
+	ret = ext4_handle_dirty_metadata(handle, NULL, bh);
+	clear_buffer_meta(bh);
+	clear_buffer_prio(bh);
+	return ret;
 }
 
 /*
-- 
1.7.12.rc0.22.gcdd159b


^ permalink raw reply related	[flat|nested] 105+ messages in thread

* Re: [PATCH 3/3] ext4: mark metadata blocks using bh flags
  2013-04-21  0:07                       ` Theodore Ts'o
  (?)
@ 2013-04-21  6:09                         ` Jiri Slaby
  -1 siblings, 0 replies; 105+ messages in thread
From: Jiri Slaby @ 2013-04-21  6:09 UTC (permalink / raw)
  To: Theodore Ts'o, Ext4 Developers List
  Cc: linux-mm, Linux Kernel Developers List, mgorman

On 04/21/2013 02:07 AM, Theodore Ts'o wrote:
> This allows metadata writebacks which are issued via block device
> writeback to be sent with the current write request flags.

Hi, where do these come from?
fs/ext4/ext4_jbd2.c: In function ‘__ext4_handle_dirty_metadata’:
fs/ext4/ext4_jbd2.c:218:2: error: implicit declaration of function
‘mark_buffer_meta’ [-Werror=implicit-function-declaration]
fs/ext4/ext4_jbd2.c:219:2: error: implicit declaration of function
‘mark_buffer_prio’ [-Werror=implicit-function-declaration]
cc1: some warnings being treated as errors

> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
> ---
>  fs/ext4/ext4_jbd2.c | 2 ++
>  fs/ext4/inode.c     | 6 +++++-
>  2 files changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/ext4/ext4_jbd2.c b/fs/ext4/ext4_jbd2.c
> index 0e1dc9e..fd97b81 100644
> --- a/fs/ext4/ext4_jbd2.c
> +++ b/fs/ext4/ext4_jbd2.c
> @@ -215,6 +215,8 @@ int __ext4_handle_dirty_metadata(const char *where, unsigned int line,
>  
>  	might_sleep();
>  
> +	mark_buffer_meta(bh);
> +	mark_buffer_prio(bh);
>  	if (ext4_handle_valid(handle)) {
>  		err = jbd2_journal_dirty_metadata(handle, bh);
>  		if (err) {
> diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
> index 62492e9..d7518e2 100644
> --- a/fs/ext4/inode.c
> +++ b/fs/ext4/inode.c
> @@ -1080,10 +1080,14 @@ retry_journal:
>  /* For write_end() in data=journal mode */
>  static int write_end_fn(handle_t *handle, struct buffer_head *bh)
>  {
> +	int ret;
>  	if (!buffer_mapped(bh) || buffer_freed(bh))
>  		return 0;
>  	set_buffer_uptodate(bh);
> -	return ext4_handle_dirty_metadata(handle, NULL, bh);
> +	ret = ext4_handle_dirty_metadata(handle, NULL, bh);
> +	clear_buffer_meta(bh);
> +	clear_buffer_prio(bh);
> +	return ret;
>  }
>  
>  /*
> 


-- 
js
suse labs

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [PATCH 3/3] ext4: mark metadata blocks using bh flags
  2013-04-21  6:09                         ` Jiri Slaby
  (?)
@ 2013-04-21 19:55                           ` Theodore Ts'o
  -1 siblings, 0 replies; 105+ messages in thread
From: Theodore Ts'o @ 2013-04-21 19:55 UTC (permalink / raw)
  To: Jiri Slaby
  Cc: Ext4 Developers List, linux-mm, Linux Kernel Developers List, mgorman

On Sun, Apr 21, 2013 at 08:09:14AM +0200, Jiri Slaby wrote:
> On 04/21/2013 02:07 AM, Theodore Ts'o wrote:
> > This allows metadata writebacks which are issued via block device
> > writeback to be sent with the current write request flags.
> 
> Hi, where do these come from?
> fs/ext4/ext4_jbd2.c: In function ‘__ext4_handle_dirty_metadata’:
> fs/ext4/ext4_jbd2.c:218:2: error: implicit declaration of function
> ‘mark_buffer_meta’ [-Werror=implicit-function-declaration]
> fs/ext4/ext4_jbd2.c:219:2: error: implicit declaration of function
> ‘mark_buffer_prio’ [-Werror=implicit-function-declaration]
> cc1: some warnings being treated as errors

They are defined by "[PATCH 2/3] buffer: add BH_Prio and BH_Meta flags" here:

+BUFFER_FNS(Meta, meta)
+BUFFER_FNS(Prio, prio)

When you tried applying this patch, did you try applying all three
patches in the patch series?

						- Ted
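
For anyone hitting the same build error: a rough sketch of what
BUFFER_FNS(Meta, meta) expands to (paraphrased from include/linux/buffer_head.h
of this era, so treat the exact expansion as an approximation). It generates
set_/clear_/test-style helpers, which is why the corrected patch below uses
set_buffer_meta()/set_buffer_prio() rather than the mark_buffer_*() names from
the first posting.

	/* Approximate expansion of BUFFER_FNS(Meta, meta) in buffer_head.h: */
	static inline void set_buffer_meta(struct buffer_head *bh)
	{
		set_bit(BH_Meta, &(bh)->b_state);
	}
	static inline void clear_buffer_meta(struct buffer_head *bh)
	{
		clear_bit(BH_Meta, &(bh)->b_state);
	}
	static inline int buffer_meta(const struct buffer_head *bh)
	{
		return test_bit(BH_Meta, &(bh)->b_state);
	}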

^ permalink raw reply	[flat|nested] 105+ messages in thread

* [PATCH 3/3 -v2] ext4: mark metadata blocks using bh flags
  2013-04-21 19:55                           ` Theodore Ts'o
  (?)
@ 2013-04-21 20:48                             ` Theodore Ts'o
  -1 siblings, 0 replies; 105+ messages in thread
From: Theodore Ts'o @ 2013-04-21 20:48 UTC (permalink / raw)
  To: Jiri Slaby, Ext4 Developers List, linux-mm,
	Linux Kernel Developers List, mgorman

Whoops, here's the right version of the patch.

>From 13fca323e9a8b63c08de7a4e05d3c702516b535d Mon Sep 17 00:00:00 2001
From: Theodore Ts'o <tytso@mit.edu>
Date: Sun, 21 Apr 2013 16:45:54 -0400
Subject: [PATCH 3/3] ext4: mark metadata blocks using bh flags

This allows metadata writebacks which are issued via block device
writeback to be sent with the current write request flags.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
---
 fs/ext4/ext4_jbd2.c | 2 ++
 fs/ext4/inode.c     | 6 +++++-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/fs/ext4/ext4_jbd2.c b/fs/ext4/ext4_jbd2.c
index 0e1dc9e..451eb40 100644
--- a/fs/ext4/ext4_jbd2.c
+++ b/fs/ext4/ext4_jbd2.c
@@ -215,6 +215,8 @@ int __ext4_handle_dirty_metadata(const char *where, unsigned int line,
 
 	might_sleep();
 
+	set_buffer_meta(bh);
+	set_buffer_prio(bh);
 	if (ext4_handle_valid(handle)) {
 		err = jbd2_journal_dirty_metadata(handle, bh);
 		if (err) {
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 62492e9..d7518e2 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1080,10 +1080,14 @@ retry_journal:
 /* For write_end() in data=journal mode */
 static int write_end_fn(handle_t *handle, struct buffer_head *bh)
 {
+	int ret;
 	if (!buffer_mapped(bh) || buffer_freed(bh))
 		return 0;
 	set_buffer_uptodate(bh);
-	return ext4_handle_dirty_metadata(handle, NULL, bh);
+	ret = ext4_handle_dirty_metadata(handle, NULL, bh);
+	clear_buffer_meta(bh);
+	clear_buffer_prio(bh);
+	return ret;
 }
 
 /*
-- 
1.7.12.rc0.22.gcdd159b
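
Reading the series as a whole (my summary, so a sketch rather than anything
authoritative): metadata buffers dirtied through __ext4_handle_dirty_metadata()
now carry BH_Meta/BH_Prio, block device writeback picks the bits up in
submit_bh() from patch 2/3, and write_end_fn() clears them again because in
data=journal mode ordinary data buffers also pass through
ext4_handle_dirty_metadata() and should not stay marked as metadata. The
effective request flags at writeback time would be roughly:

	#include <linux/fs.h>
	#include <linux/buffer_head.h>

	/* Sketch, not tree code: flags a marked bh gets when written back. */
	static int effective_write_flags(struct buffer_head *bh)
	{
		int rw = WRITE;

		if (buffer_meta(bh))	/* set via __ext4_handle_dirty_metadata() */
			rw |= REQ_META;
		if (buffer_prio(bh))
			rw |= REQ_PRIO;

		return rw;	/* what submit_bh() hands down to submit_bio() */
	}

Such writes should then show an 'M' in the blkparse RWBS column (e.g. "WM"
rather than plain "W"), matching the "RM" reads already visible in the
blktrace excerpts later in this thread.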


^ permalink raw reply related	[flat|nested] 105+ messages in thread

* Re: [PATCH 1/3] ext4: mark all metadata I/O with REQ_META
  2013-04-21  0:07                     ` Theodore Ts'o
@ 2013-04-22 12:06                       ` Zheng Liu
  -1 siblings, 0 replies; 105+ messages in thread
From: Zheng Liu @ 2013-04-22 12:06 UTC (permalink / raw)
  To: Theodore Ts'o
  Cc: Ext4 Developers List, linux-mm, Linux Kernel Developers List, mgorman

On Sat, Apr 20, 2013 at 08:07:06PM -0400, Theodore Ts'o wrote:
> As Dave Chinner pointed out at the 2013 LSF/MM workshop, it's
> important that metadata I/O requests are marked as such to avoid
> priority inversions caused by I/O bandwidth throttling.
> 
> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>

Reviewed-by: Zheng Liu <wenqing.lz@taobao.com>

Regards,
                                                - Zheng
> ---
>  fs/ext4/balloc.c | 2 +-
>  fs/ext4/ialloc.c | 2 +-
>  fs/ext4/mmp.c    | 4 ++--
>  fs/ext4/super.c  | 2 +-
>  4 files changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c
> index 8dcaea6..d0f13ea 100644
> --- a/fs/ext4/balloc.c
> +++ b/fs/ext4/balloc.c
> @@ -441,7 +441,7 @@ ext4_read_block_bitmap_nowait(struct super_block *sb, ext4_group_t block_group)
>  	trace_ext4_read_block_bitmap_load(sb, block_group);
>  	bh->b_end_io = ext4_end_bitmap_read;
>  	get_bh(bh);
> -	submit_bh(READ, bh);
> +	submit_bh(READ | REQ_META | REQ_PRIO, bh);
>  	return bh;
>  verify:
>  	ext4_validate_block_bitmap(sb, desc, block_group, bh);
> diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
> index 18d36d8..00a818d 100644
> --- a/fs/ext4/ialloc.c
> +++ b/fs/ext4/ialloc.c
> @@ -166,7 +166,7 @@ ext4_read_inode_bitmap(struct super_block *sb, ext4_group_t block_group)
>  	trace_ext4_load_inode_bitmap(sb, block_group);
>  	bh->b_end_io = ext4_end_bitmap_read;
>  	get_bh(bh);
> -	submit_bh(READ, bh);
> +	submit_bh(READ | REQ_META | REQ_PRIO, bh);
>  	wait_on_buffer(bh);
>  	if (!buffer_uptodate(bh)) {
>  		put_bh(bh);
> diff --git a/fs/ext4/mmp.c b/fs/ext4/mmp.c
> index b3b1f7d..214461e 100644
> --- a/fs/ext4/mmp.c
> +++ b/fs/ext4/mmp.c
> @@ -54,7 +54,7 @@ static int write_mmp_block(struct super_block *sb, struct buffer_head *bh)
>  	lock_buffer(bh);
>  	bh->b_end_io = end_buffer_write_sync;
>  	get_bh(bh);
> -	submit_bh(WRITE_SYNC, bh);
> +	submit_bh(WRITE_SYNC | REQ_META | REQ_PRIO, bh);
>  	wait_on_buffer(bh);
>  	sb_end_write(sb);
>  	if (unlikely(!buffer_uptodate(bh)))
> @@ -86,7 +86,7 @@ static int read_mmp_block(struct super_block *sb, struct buffer_head **bh,
>  		get_bh(*bh);
>  		lock_buffer(*bh);
>  		(*bh)->b_end_io = end_buffer_read_sync;
> -		submit_bh(READ_SYNC, *bh);
> +		submit_bh(READ_SYNC | REQ_META | REQ_PRIO, *bh);
>  		wait_on_buffer(*bh);
>  		if (!buffer_uptodate(*bh)) {
>  			brelse(*bh);
> diff --git a/fs/ext4/super.c b/fs/ext4/super.c
> index bfa29ec..dbc7c09 100644
> --- a/fs/ext4/super.c
> +++ b/fs/ext4/super.c
> @@ -4252,7 +4252,7 @@ static journal_t *ext4_get_dev_journal(struct super_block *sb,
>  		goto out_bdev;
>  	}
>  	journal->j_private = sb;
> -	ll_rw_block(READ, 1, &journal->j_sb_buffer);
> +	ll_rw_block(READ | REQ_META | REQ_PRIO, 1, &journal->j_sb_buffer);
>  	wait_on_buffer(journal->j_sb_buffer);
>  	if (!buffer_uptodate(journal->j_sb_buffer)) {
>  		ext4_msg(sb, KERN_ERR, "I/O error on journal device");
> -- 
> 1.7.12.rc0.22.gcdd159b
> 

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-12 15:19                   ` Theodore Ts'o
@ 2013-04-22 14:38                     ` Mel Gorman
  -1 siblings, 0 replies; 105+ messages in thread
From: Mel Gorman @ 2013-04-22 14:38 UTC (permalink / raw)
  To: Theodore Ts'o
  Cc: Jeff Moyer, Dave Chinner, Jan Kara, linux-ext4, LKML, Linux-MM,
	Jiri Slaby

(Adding Jeff Moyer to the cc as I'm told he is interested in the blktrace)

On Fri, Apr 12, 2013 at 11:19:52AM -0400, Theodore Ts'o wrote:
> On Fri, Apr 12, 2013 at 02:50:42PM +1000, Dave Chinner wrote:
> > > If that is the case, one possible solution that comes to mind would be
> > > to mark buffer_heads that contain metadata with a flag, so that the
> > > flusher thread can write them back at the same priority as reads.
> > 
> > Ext4 is already using REQ_META for this purpose.
> 
> We're using REQ_META | REQ_PRIO for reads, not writes.
> 
> > I'm surprised that no-one has suggested "change the IO elevator"
> > yet.....
> 
> Well, testing to see if the stalls go away with the noop schedule is a
> good thing to try just to validate the theory.
> 

I still haven't tested with a different elevator. While this bug is
relatively high priority for me, there are still other issues in the way.

TLDR: Flusher writes pages very quickly after processes dirty a buffer. Reads
starve flusher writes.

Now the ugliness and being a windbag.

I collected blktrace and some other logs and they are available at
http://www.csn.ul.ie/~mel/postings/stalls-20130419/log.tar.gz and there
is a lot of stuff in there.  The unix time the test started is in the
first line of the file tests-timestamp-bisect, which can help figure out
how far into the test some of the other timestamped logs are.

The kernel log with information from the lock_buffer debugging patch is
in dmesg-bisect-gitcheckout . The information in it is race-prone and
cannot be 100% trusted but it's still useful.

iostat is in iostat-bisect-gitcheckout 

Here are a few observations I got from the data.

1. flushers appear to acquire buffer locks *very* quickly after an
   application writes. Look for lines that look like

   "foo failed trylock without holder released 0 ms ago acquired 0 ms ago by bar"

   There are a lot of entries like this

	jbd2 239 flush-8:0 failed trylock without holder released 0 ms ago acquired 0 ms ago by 2124 tar
	jbd2 239 flush-8:0 failed trylock without holder released 0 ms ago acquired 0 ms ago by 2124 tar
	jbd2 2124 tar failed trylock without holder released 0 ms ago acquired 0 ms ago by 239 flush-8:0
	jbd2 239 flush-8:0 failed trylock without holder released 0 ms ago acquired 0 ms ago by 2124 tar
	jbd2 239 flush-8:0 failed trylock without holder released 0 ms ago acquired 0 ms ago by 2124 tar
	jbd2 239 flush-8:0 failed trylock without holder released 0 ms ago acquired 0 ms ago by 2124 tar
	jbd2 239 flush-8:0 failed trylock without holder released 0 ms ago acquired 0 ms ago by 2124 tar

   I expected flushers to be writing back the buffers just released in about
   5 seconds' time, not immediately.  It may indicate that when flushers
   wake to clean expired inodes, they keep cleaning inodes as they are
   being dirtied.

2. The flush thread can prevent a process making forward progress for
   a long time. Take this as an example

         jbd2 stalled dev 8,8 for 8168 ms lock holdtime 20692 ms
         Last Owner 239 flush-8:0 Acquired Stack
          [<ffffffff8100fd8a>] save_stack_trace+0x2a/0x50
          [<ffffffff811a3ad6>] set_lock_buffer_owner+0x86/0x90
          [<ffffffff811a72ee>] __block_write_full_page+0x16e/0x360
          [<ffffffff811a75b3>] block_write_full_page_endio+0xd3/0x110
          [<ffffffff811a7600>] block_write_full_page+0x10/0x20
          [<ffffffff811aa7f3>] blkdev_writepage+0x13/0x20
          [<ffffffff81119352>] __writepage+0x12/0x40
          [<ffffffff81119b56>] write_cache_pages+0x206/0x460
          [<ffffffff81119df5>] generic_writepages+0x45/0x70
          [<ffffffff8111accb>] do_writepages+0x1b/0x30
          [<ffffffff81199d60>] __writeback_single_inode+0x40/0x1b0
          [<ffffffff8119c40a>] writeback_sb_inodes+0x19a/0x350
          [<ffffffff8119c656>] __writeback_inodes_wb+0x96/0xc0
          [<ffffffff8119c8fb>] wb_writeback+0x27b/0x330
          [<ffffffff8119e300>] wb_do_writeback+0x190/0x1d0
          [<ffffffff8119e3c3>] bdi_writeback_thread+0x83/0x280
          [<ffffffff8106901b>] kthread+0xbb/0xc0
          [<ffffffff8159e1fc>] ret_from_fork+0x7c/0xb0
          [<ffffffffffffffff>] 0xffffffffffffffff

   This part is saying that we locked the buffer due to blkdev_writepage,
   which I assume must be a metadata update. Based on where we lock the
   buffer, the only reason we would leave the buffer locked is if this was
   an asynchronous write request, leaving the buffer to be unlocked by
   end_buffer_async_write at some time in the future.

         Last Owner Activity Stack: 239 flush-8:0
          [<ffffffff812aee61>] __blkdev_issue_zeroout+0x191/0x1a0
          [<ffffffff812aef51>] blkdev_issue_zeroout+0xe1/0xf0
          [<ffffffff8121abe9>] ext4_ext_zeroout.isra.30+0x49/0x60
          [<ffffffff8121ee47>] ext4_ext_convert_to_initialized+0x227/0x5f0
          [<ffffffff8121f8a3>] ext4_ext_handle_uninitialized_extents+0x2f3/0x3a0
          [<ffffffff8121ff57>] ext4_ext_map_blocks+0x5d7/0xa00
          [<ffffffff811f0715>] ext4_map_blocks+0x2d5/0x470
          [<ffffffff811f47da>] mpage_da_map_and_submit+0xba/0x2f0
          [<ffffffff811f52e0>] ext4_da_writepages+0x380/0x620
          [<ffffffff8111accb>] do_writepages+0x1b/0x30
          [<ffffffff81199d60>] __writeback_single_inode+0x40/0x1b0
          [<ffffffff8119c40a>] writeback_sb_inodes+0x19a/0x350
          [<ffffffff8119c656>] __writeback_inodes_wb+0x96/0xc0
          [<ffffffff8119c8fb>] wb_writeback+0x27b/0x330

   This part is indicating that, at the time a process tried to acquire
   the buffer lock, the flusher was off doing something else entirely.
   That points again to the metadata write being asynchronous.

         Current Owner 1829 stap
          [<ffffffff8100fd8a>] save_stack_trace+0x2a/0x50
          [<ffffffff811a3ad6>] set_lock_buffer_owner+0x86/0x90
          [<ffffffff8123a5b2>] do_get_write_access+0xd2/0x800
          [<ffffffff8123ae2b>] jbd2_journal_get_write_access+0x2b/0x50
          [<ffffffff81221249>] __ext4_journal_get_write_access+0x39/0x80
          [<ffffffff81229bca>] ext4_free_blocks+0x36a/0xbe0
          [<ffffffff8121c686>] ext4_remove_blocks+0x256/0x2d0
          [<ffffffff8121c905>] ext4_ext_rm_leaf+0x205/0x520
          [<ffffffff8121e64c>] ext4_ext_remove_space+0x4dc/0x750
          [<ffffffff8122051b>] ext4_ext_truncate+0x19b/0x1e0
          [<ffffffff811efde5>] ext4_truncate.part.61+0xd5/0xf0
          [<ffffffff811f0ee4>] ext4_truncate+0x34/0x90
          [<ffffffff811f382d>] ext4_setattr+0x18d/0x640
          [<ffffffff8118d3e2>] notify_change+0x1f2/0x3c0
          [<ffffffff811716f9>] do_truncate+0x59/0xa0
          [<ffffffff8117d3f6>] handle_truncate+0x66/0xa0
          [<ffffffff81181576>] do_last+0x626/0x820
          [<ffffffff81181823>] path_openat+0xb3/0x4a0
          [<ffffffff8118237d>] do_filp_open+0x3d/0xa0
          [<ffffffff81172869>] do_sys_open+0xf9/0x1e0
          [<ffffffff8117296c>] sys_open+0x1c/0x20
          [<ffffffff8159e2ad>] system_call_fastpath+0x1a/0x1f
          [<ffffffffffffffff>] 0xffffffffffffffff

   This is just showing where stap was trying to acquire the buffer lock
   while truncating data.

3. The blktrace indicates that reads can starve writes from flusher

   While there are people who can look at a blktrace and find problems
   like they are rain man, I'm more like an ADHD squirrel when looking at
   a trace.  I wrote a script to look for what unrelated requests completed
   while a request was stalled for over a second. It seemed like something
   a tool should already exist for, but I didn't find one unless btt
   can give the information somehow.

   The entry for each delayed request is quite long, but here is the first
   example discovered by the script:

Request 4174 took 1.060828037 to complete
  239    W    260608696  [flush-8:0]
Request started time index 4.731862902
Inflight while queued
  239    W    260608696  [flush-8:0]
    239    W    260608072  [flush-8:0]
    239    W    260607872  [flush-8:0]
    239    W    260608488  [flush-8:0]
    239    W    260608472  [flush-8:0]
    239    W    260608568  [flush-8:0]
    239    W    260608008  [flush-8:0]
    239    W    260607728  [flush-8:0]
    239    W    260607112  [flush-8:0]
    239    W    260608544  [flush-8:0]
    239    W    260622168  [flush-8:0]
    239    W    271863816  [flush-8:0]
    239    W    260608672  [flush-8:0]
    239    W    260607944  [flush-8:0]
    239    W    203833687  [flush-8:0]
   1676    R    541999743 [watch-inbox-ope]
    239    W    260608240  [flush-8:0]
    239    W    203851359  [flush-8:0]
    239    W    272019768  [flush-8:0]
    239    W    260607272  [flush-8:0]
    239    W    260607992  [flush-8:0]
    239    W    483478791  [flush-8:0]
    239    W    260608528  [flush-8:0]
    239    W    260607456  [flush-8:0]
    239    W    261310704  [flush-8:0]
    239    W    260608200  [flush-8:0]
    239    W    260607744  [flush-8:0]
    239    W    204729015  [flush-8:0]
    239    W    204728927  [flush-8:0]
    239    W    260608584  [flush-8:0]
    239    W    260608352  [flush-8:0]
    239    W    270532504  [flush-8:0]
    239    W    260608600  [flush-8:0]
    239    W    260607152  [flush-8:0]
    239    W    260607888  [flush-8:0]
    239    W    260607192  [flush-8:0]
    239    W    260607568  [flush-8:0]
    239    W    260607632  [flush-8:0]
    239    W    271831080  [flush-8:0]
    239    W    260608312  [flush-8:0]
    239    W    260607440  [flush-8:0]
    239    W    204729023  [flush-8:0]
    239    W    260608056  [flush-8:0]
    239    W    272019776  [flush-8:0]
    239    W    260608632  [flush-8:0]
    239    W    260607704  [flush-8:0]
    239    W    271827168  [flush-8:0]
    239    W    260607208  [flush-8:0]
    239    W    260607384  [flush-8:0]
    239    W    260607856  [flush-8:0]
    239    W    260607320  [flush-8:0]
    239    W    271827160  [flush-8:0]
    239    W    260608152  [flush-8:0]
    239    W    261271552  [flush-8:0]
    239    W    260607168  [flush-8:0]
    239    W    260608088  [flush-8:0]
    239    W    260607480  [flush-8:0]
    239    W    260608424  [flush-8:0]
    239    W    260608040  [flush-8:0]
    239    W    260608400  [flush-8:0]
    239    W    260608224  [flush-8:0]
    239    W    260607680  [flush-8:0]
    239    W    260607808  [flush-8:0]
    239    W    266347440  [flush-8:0]
    239    W    260607776  [flush-8:0]
    239    W    260607512  [flush-8:0]
    239    W    266347280  [flush-8:0]
    239    W    260607424  [flush-8:0]
    239    W    260607656  [flush-8:0]
    239    W    260607976  [flush-8:0]
    239    W    260608440  [flush-8:0]
    239    W    260608272  [flush-8:0]
    239    W    260607536  [flush-8:0]
    239    W    260607920  [flush-8:0]
    239    W    260608456  [flush-8:0]
Complete since queueing
 1676    R    541999743 [watch-inbox-ope]
  239    W    203833687  [flush-8:0]
 1676    R    541999759 [watch-inbox-ope]
 1676    R    541999791 [watch-inbox-ope]
 1676    R    541999807 [watch-inbox-ope]
 1676    R    541999839 [watch-inbox-ope]
 1676    R    541999855 [watch-inbox-ope]
 1676    R    542210351 [watch-inbox-ope]
 1676    R    542210367 [watch-inbox-ope]
 1676    R    541999887 [watch-inbox-ope]
 1676    R    541999911 [watch-inbox-ope]
 1676    R    541999935 [watch-inbox-ope]
 1676    R    541999967 [watch-inbox-ope]
 1676   RM    540448791 [watch-inbox-ope]
 1676    R    541999983 [watch-inbox-ope]
 1676    R    542051791 [watch-inbox-ope]
 1676    R    541999999 [watch-inbox-ope]
 1676    R    541949839 [watch-inbox-ope]
 1676    R    541949871 [watch-inbox-ope]
 1676    R    541949903 [watch-inbox-ope]
 1676    R    541949935 [watch-inbox-ope]
 1676    R    541949887 [watch-inbox-ope]
 1676    R    542051823 [watch-inbox-ope]
 1676    R    541949967 [watch-inbox-ope]
 1676    R    542051839 [watch-inbox-ope]
 1676    R    541949999 [watch-inbox-ope]
 1676    R    541950015 [watch-inbox-ope]
 1676    R    541950031 [watch-inbox-ope]
 1676    R    541950047 [watch-inbox-ope]
 1676    R    541950063 [watch-inbox-ope]
 1676    R    542112079 [watch-inbox-ope]
 1676    R    542112095 [watch-inbox-ope]
 1676    R    542112111 [watch-inbox-ope]
 1676    R    542112127 [watch-inbox-ope]
 1676    R    542112847 [watch-inbox-ope]
 1676    R    542112863 [watch-inbox-ope]
 1676   RM    540461311 [watch-inbox-ope]
 1676   RM    540448799 [watch-inbox-ope]
 1676    R    542112879 [watch-inbox-ope]
 1676    R    541950087 [watch-inbox-ope]
 1676    R    541950111 [watch-inbox-ope]
 1676    R    542112895 [watch-inbox-ope]
 1676    R    541950127 [watch-inbox-ope]
 1676    R    541950159 [watch-inbox-ope]
 1676    R    541950175 [watch-inbox-ope]
 1676    R    541950191 [watch-inbox-ope]
 1676    R    541950207 [watch-inbox-ope]
 1676    R    541950239 [watch-inbox-ope]
 1676    R    541950255 [watch-inbox-ope]
 1676    R    541950287 [watch-inbox-ope]
 1676    R    541950303 [watch-inbox-ope]
 1676    R    541950319 [watch-inbox-ope]
 1676    R    542113103 [watch-inbox-ope]
 1676    R    541950343 [watch-inbox-ope]
 1676    R    541950367 [watch-inbox-ope]
 1676    R    541950399 [watch-inbox-ope]
 1676    R    542113119 [watch-inbox-ope]
 1676    R    542113135 [watch-inbox-ope]
 1676    R    541950415 [watch-inbox-ope]
 1676   RM    540448815 [watch-inbox-ope]
 1676    R    542113151 [watch-inbox-ope]
 1676    R    541950447 [watch-inbox-ope]
 1676    R    541950463 [watch-inbox-ope]
 1676    R    542113743 [watch-inbox-ope]
 1676    R    542113759 [watch-inbox-ope]
 1676    R    542113775 [watch-inbox-ope]
 1676    R    542113791 [watch-inbox-ope]
  239    W    203851359  [flush-8:0]
  239    W    204729015  [flush-8:0]
  239    W    204728927  [flush-8:0]
  239    W    204729023  [flush-8:0]
  239    W    260569008  [flush-8:0]
 1676    R    542145871 [watch-inbox-ope]
 1676    R    542145903 [watch-inbox-ope]
 1676    R    542145887 [watch-inbox-ope]
 1676    R    542154639 [watch-inbox-ope]
 1676    R    542154655 [watch-inbox-ope]
 1676    R    542154671 [watch-inbox-ope]
 1676    R    542154687 [watch-inbox-ope]
 1676    R    542154831 [watch-inbox-ope]
 1676    R    542154863 [watch-inbox-ope]
 1676    R    542157647 [watch-inbox-ope]
 1676    R    542157663 [watch-inbox-ope]
 1676    R    541950479 [watch-inbox-ope]
 1676    R    541950503 [watch-inbox-ope]
 1676    R    541950535 [watch-inbox-ope]
 1676    R    541950599 [watch-inbox-ope]
 1676    R    541950727 [watch-inbox-ope]
 1676    R    541950751 [watch-inbox-ope]
 1676    R    541950767 [watch-inbox-ope]
 1676   RM    540448823 [watch-inbox-ope]
 1676    R    541950783 [watch-inbox-ope]
 1676    R    541950807 [watch-inbox-ope]
 1676    R    541950839 [watch-inbox-ope]
 1676    R    541950855 [watch-inbox-ope]
 1676    R    541950879 [watch-inbox-ope]
 1676    R    541950895 [watch-inbox-ope]
 1676    R    541950919 [watch-inbox-ope]
 1676    R    541950951 [watch-inbox-ope]
 1676    R    541950959 [watch-inbox-ope]
 1676    R    541950975 [watch-inbox-ope]
 1676    R    541951007 [watch-inbox-ope]
 1676    R    541951023 [watch-inbox-ope]
 1676    R    541951055 [watch-inbox-ope]
 1676    R    541951087 [watch-inbox-ope]
 1676    R    541951103 [watch-inbox-ope]
 1676    R    541951119 [watch-inbox-ope]
 1676    R    541951143 [watch-inbox-ope]
 1676    R    541951167 [watch-inbox-ope]
 1676    R    542157679 [watch-inbox-ope]
 1676    R    542157695 [watch-inbox-ope]
 1676    R    541951183 [watch-inbox-ope]
 1676    R    541951215 [watch-inbox-ope]
 1676    R    541951231 [watch-inbox-ope]
 1676    R    542158223 [watch-inbox-ope]
 1676   RM    540448831 [watch-inbox-ope]
 1676    R    541951247 [watch-inbox-ope]
 1676    R    541951271 [watch-inbox-ope]
 1676    R    541951295 [watch-inbox-ope]
 1676    R    542158239 [watch-inbox-ope]
 1676    R    542158255 [watch-inbox-ope]
 1676    R    541951311 [watch-inbox-ope]
 1676    R    542158271 [watch-inbox-ope]
 1676    R    541951343 [watch-inbox-ope]
 1676    R    541951359 [watch-inbox-ope]
 1676    R    541951391 [watch-inbox-ope]
 1676    R    541951407 [watch-inbox-ope]
 1676    R    541951423 [watch-inbox-ope]
 1676    R    541951439 [watch-inbox-ope]
 1676    R    541951471 [watch-inbox-ope]
 1676    R    542158607 [watch-inbox-ope]
 1676    R    541951487 [watch-inbox-ope]
 1676    R    542158639 [watch-inbox-ope]
 1676    R    542158655 [watch-inbox-ope]
 1676    R    542187215 [watch-inbox-ope]
 1676    R    542187231 [watch-inbox-ope]
 1676    R    542187247 [watch-inbox-ope]
 1676    R    541951503 [watch-inbox-ope]
 1676   RM    540448839 [watch-inbox-ope]
 1676    R    542187263 [watch-inbox-ope]
 1676    R    541951535 [watch-inbox-ope]
 1676    R    541951551 [watch-inbox-ope]
 1676    R    541951599 [watch-inbox-ope]
 1676    R    541951575 [watch-inbox-ope]
 1676    R    542190607 [watch-inbox-ope]
  239    W    261310704  [flush-8:0]
  239    W    266347280  [flush-8:0]
  239    W    266347440  [flush-8:0]
 1676    R    542190623 [watch-inbox-ope]
 1676    R    542190639 [watch-inbox-ope]
 1676    R    542190655 [watch-inbox-ope]
 1676    R    542193999 [watch-inbox-ope]
 1676    R    542194015 [watch-inbox-ope]
 1676    R    541951631 [watch-inbox-ope]
 1676    R    541951663 [watch-inbox-ope]
 1676    R    541951679 [watch-inbox-ope]
 1676    R    541951711 [watch-inbox-ope]
 1676    R    541951727 [watch-inbox-ope]
 1676    R    541951743 [watch-inbox-ope]
 1676    R    542194031 [watch-inbox-ope]
 1676    R    542194047 [watch-inbox-ope]
 1676    R    542197711 [watch-inbox-ope]
 1676   RM    540448847 [watch-inbox-ope]
 1676    R    541951759 [watch-inbox-ope]
 1676    R    541951783 [watch-inbox-ope]
 1676    R    541951807 [watch-inbox-ope]
 1676    R    542197727 [watch-inbox-ope]
 1676    R    542197743 [watch-inbox-ope]
 1676    R    542197759 [watch-inbox-ope]
 1676    R    541951823 [watch-inbox-ope]
 1676    R    541951855 [watch-inbox-ope]
 1676    R    541951871 [watch-inbox-ope]
 1676    R    541951895 [watch-inbox-ope]
 1676    R    541951919 [watch-inbox-ope]
 1676    R    541951935 [watch-inbox-ope]
 1676    R    541951951 [watch-inbox-ope]
 1676    R    541951967 [watch-inbox-ope]
 1676    R    541951983 [watch-inbox-ope]
 1676    R    542207567 [watch-inbox-ope]
 1676    R    542207599 [watch-inbox-ope]
 1676    R    542210383 [watch-inbox-ope]
 1676    R    542210399 [watch-inbox-ope]
 1676    R    542210415 [watch-inbox-ope]
 1676    R    542210431 [watch-inbox-ope]
 1676   RM    540448855 [watch-inbox-ope]
 1676    R    541952015 [watch-inbox-ope]
 1676    R    541952047 [watch-inbox-ope]
 1676    R    541952063 [watch-inbox-ope]
 1676    R    541952079 [watch-inbox-ope]
 1676    R    541952103 [watch-inbox-ope]
 1676    R    541952127 [watch-inbox-ope]
 1676    R    541952159 [watch-inbox-ope]
 1676    R    541952175 [watch-inbox-ope]
 1676    R    541952207 [watch-inbox-ope]
 1676    R    541952223 [watch-inbox-ope]
 1676    R    541952255 [watch-inbox-ope]
 1676    R    541952303 [watch-inbox-ope]
 1676    R    541952319 [watch-inbox-ope]
 1676    R    541952335 [watch-inbox-ope]
 1676    R    541952351 [watch-inbox-ope]
 1676    R    541952383 [watch-inbox-ope]
 1676    R    542051855 [watch-inbox-ope]
 1676    R    542051871 [watch-inbox-ope]
 1676    R    542051887 [watch-inbox-ope]
 1676    R    542051903 [watch-inbox-ope]
 1676    R    542051919 [watch-inbox-ope]
 1676    R    541952391 [watch-inbox-ope]
 1676    R    541952415 [watch-inbox-ope]
 1676   RM    540448863 [watch-inbox-ope]
 1676    R    542051935 [watch-inbox-ope]
 1676    R    541952431 [watch-inbox-ope]
 1676    R    541952447 [watch-inbox-ope]
 1676    R    541952463 [watch-inbox-ope]
 1676    R    541952487 [watch-inbox-ope]
 1676    R    541952511 [watch-inbox-ope]
 1676    R    541952527 [watch-inbox-ope]
 1676    R    541952559 [watch-inbox-ope]
 1676    R    541952607 [watch-inbox-ope]
 1676    R    541952623 [watch-inbox-ope]
 1676    R    542051951 [watch-inbox-ope]
 1676    R    541952639 [watch-inbox-ope]
 1676    R    542112271 [watch-inbox-ope]
  239    W    261271552  [flush-8:0]
  239    W    270532504  [flush-8:0]
  239    W    271827168  [flush-8:0]
  239    W    271827160  [flush-8:0]
  239    W    271831080  [flush-8:0]
 1676    R    542112287 [watch-inbox-ope]
 1676    R    542112303 [watch-inbox-ope]
 1676    R    542112319 [watch-inbox-ope]
 1676    R    542112335 [watch-inbox-ope]
 1676    R    542112351 [watch-inbox-ope]
 1676    R    542112367 [watch-inbox-ope]
 1676    R    542112383 [watch-inbox-ope]
 1676    R    542112655 [watch-inbox-ope]
 1676   RM    540448871 [watch-inbox-ope]
 1676    R    542112671 [watch-inbox-ope]
 1676    R    542112687 [watch-inbox-ope]
 1676    R    542112703 [watch-inbox-ope]
 1676    R    542112719 [watch-inbox-ope]
 1676    R    542112735 [watch-inbox-ope]
 1676    R    541952655 [watch-inbox-ope]
 1676    R    541952687 [watch-inbox-ope]
 1676    R    541952703 [watch-inbox-ope]
 1676    R    541952735 [watch-inbox-ope]
 1676    R    541952751 [watch-inbox-ope]
 1676    R    542112751 [watch-inbox-ope]
 1676    R    541952767 [watch-inbox-ope]
 1676    R    541952783 [watch-inbox-ope]
 1676    R    541952799 [watch-inbox-ope]
 1676    R    541952815 [watch-inbox-ope]
 1676    R    541952831 [watch-inbox-ope]
 1676    R    541952863 [watch-inbox-ope]
 1676    R    541952879 [watch-inbox-ope]
 1676    R    542113807 [watch-inbox-ope]
 1676    R    541952903 [watch-inbox-ope]
 1676    R    541952935 [watch-inbox-ope]
 1676   RM    540448879 [watch-inbox-ope]
 1676    R    541952959 [watch-inbox-ope]
 1676    R    542113823 [watch-inbox-ope]
 1676    R    542113839 [watch-inbox-ope]
 1676    R    542113855 [watch-inbox-ope]
 1676    R    541952975 [watch-inbox-ope]
 1676    R    541952991 [watch-inbox-ope]
 1676    R    541953007 [watch-inbox-ope]
 1676    R    541953023 [watch-inbox-ope]
 1676    R    541953055 [watch-inbox-ope]
 1676    R    541953071 [watch-inbox-ope]
 1676    R    541953103 [watch-inbox-ope]
 1676    R    541953119 [watch-inbox-ope]
 1676    R    541953135 [watch-inbox-ope]
 1676    R    542113871 [watch-inbox-ope]
 1676    R    542113887 [watch-inbox-ope]
 1676    R    541953167 [watch-inbox-ope]
 1676    R    541953191 [watch-inbox-ope]
 1676    R    541953223 [watch-inbox-ope]
 1676    R    541953247 [watch-inbox-ope]
 1676    R    541953263 [watch-inbox-ope]
 1676    R    541953279 [watch-inbox-ope]
 1676   RM    540448887 [watch-inbox-ope]
 1676    R    541953303 [watch-inbox-ope]
 1676    R    541953327 [watch-inbox-ope]
 1676    R    542113903 [watch-inbox-ope]
 1676    R    542113919 [watch-inbox-ope]
 1676    R    541953359 [watch-inbox-ope]
 1676    R    541953375 [watch-inbox-ope]
 1676    R    541953391 [watch-inbox-ope]
 1676    R    541953407 [watch-inbox-ope]
 1676    R    542145679 [watch-inbox-ope]
 1676    R    542145695 [watch-inbox-ope]
 1676    R    542145711 [watch-inbox-ope]
 1676    R    542145727 [watch-inbox-ope]
 1676    R    542145743 [watch-inbox-ope]
 1676    R    541953423 [watch-inbox-ope]
 1676    R    542145759 [watch-inbox-ope]
 1676    R    541953455 [watch-inbox-ope]
 1676    R    541953471 [watch-inbox-ope]
 1676    R    542145775 [watch-inbox-ope]
 1676    R    542145791 [watch-inbox-ope]
 1676    R    541953487 [watch-inbox-ope]
 1676    R    541953519 [watch-inbox-ope]
 1676   RM    540448895 [watch-inbox-ope]
 1676    R    541953535 [watch-inbox-ope]
 1676    R    541953551 [watch-inbox-ope]
 1676    R    541953567 [watch-inbox-ope]
 1676    R    541953599 [watch-inbox-ope]
 1676    R    541953615 [watch-inbox-ope]
 1676    R    541953631 [watch-inbox-ope]
 1676    R    541953647 [watch-inbox-ope]
 1676    R    542157455 [watch-inbox-ope]
 1676    R    542157471 [watch-inbox-ope]
 1676    R    542157487 [watch-inbox-ope]
 1676    R    541953671 [watch-inbox-ope]
 1676   RA    540386719 [watch-inbox-ope]
 1676   RA    540386727 [watch-inbox-ope]
 1676   RA    540386735 [watch-inbox-ope]
 1676   RA    540386743 [watch-inbox-ope]
 1676   RA    540386751 [watch-inbox-ope]
 1676   RA    540386759 [watch-inbox-ope]
 1676   RA    540386767 [watch-inbox-ope]
 1676   RA    540386775 [watch-inbox-ope]
 1676   RA    540386783 [watch-inbox-ope]
 1676   RA    540386791 [watch-inbox-ope]
 1676   RA    540386799 [watch-inbox-ope]
 1676   RA    540386807 [watch-inbox-ope]
 1676   RA    540386815 [watch-inbox-ope]
 1676   RA    540386823 [watch-inbox-ope]
 1676   RA    540386831 [watch-inbox-ope]
 1676   RA    540386839 [watch-inbox-ope]
 1676   RA    540386847 [watch-inbox-ope]
 1676   RA    540386855 [watch-inbox-ope]
 1676   RA    540386863 [watch-inbox-ope]
 1676   RA    540386871 [watch-inbox-ope]
 1676   RA    540386879 [watch-inbox-ope]
 1676   RA    540386887 [watch-inbox-ope]
 1676   RA    540386895 [watch-inbox-ope]
 1676   RA    540386903 [watch-inbox-ope]
 1676   RA    540386911 [watch-inbox-ope]
 1676   RA    540386919 [watch-inbox-ope]
 1676   RA    540386927 [watch-inbox-ope]
 1676   RA    540386935 [watch-inbox-ope]
 1676   RA    540386943 [watch-inbox-ope]
 1676   RA    540386951 [watch-inbox-ope]
 1676   RA    540386959 [watch-inbox-ope]
 1676   RM    540386711 [watch-inbox-ope]
  239    W    271863816  [flush-8:0]
  239    W    272019768  [flush-8:0]
  239    W    272019776  [flush-8:0]
  239    W    483478791  [flush-8:0]
  239    W    260578312  [flush-8:0]
  239    W    260578400  [flush-8:0]
 1676    R    541953695 [watch-inbox-ope]
 1676    R    541953711 [watch-inbox-ope]
 1676    R    542157503 [watch-inbox-ope]
 1676    R    541953743 [watch-inbox-ope]
 1676    R    541953759 [watch-inbox-ope]
 1676    R    541953775 [watch-inbox-ope]
 1676    R    542157519 [watch-inbox-ope]
 1676    R    541953791 [watch-inbox-ope]
 1676    R    542157551 [watch-inbox-ope]
 1676    R    541953807 [watch-inbox-ope]
 1676    R    541953831 [watch-inbox-ope]
 1676    R    541953863 [watch-inbox-ope]
 1676    R    541953927 [watch-inbox-ope]
 1676    R    541954055 [watch-inbox-ope]
 1676   RM    540448903 [watch-inbox-ope]
 1676    R    542157567 [watch-inbox-ope]
 1676    R    541954127 [watch-inbox-ope]
 1676    R    541954143 [watch-inbox-ope]
 1676    R    541954159 [watch-inbox-ope]
 1676    R    541954183 [watch-inbox-ope]
 1676    R    541954207 [watch-inbox-ope]
 1676    R    541954223 [watch-inbox-ope]
 1676    R    541954239 [watch-inbox-ope]
 1676    R    541954255 [watch-inbox-ope]
 1676    R    541954271 [watch-inbox-ope]
 1676    R    541954287 [watch-inbox-ope]
 1676    R    541954319 [watch-inbox-ope]
 1676    R    541954335 [watch-inbox-ope]
 1676    R    541954351 [watch-inbox-ope]
 1676    R    541954367 [watch-inbox-ope]
 1676    R    541954391 [watch-inbox-ope]
 1676    R    541954415 [watch-inbox-ope]
 1676    R    541954431 [watch-inbox-ope]
 1676    R    541954455 [watch-inbox-ope]
 1676    R    541954479 [watch-inbox-ope]
 1676    R    541954495 [watch-inbox-ope]
 1676   RM    540456719 [watch-inbox-ope]
  239    W    260622168  [flush-8:0]
  239    W    260625528  [flush-8:0]
  239    W    260625608  [flush-8:0]
  239    W    260614368  [flush-8:0]
  239    W    260614336  [flush-8:0]
  239    W    260614304  [flush-8:0]
  239    W    260614280  [flush-8:0]
 1676    R    541954511 [watch-inbox-ope]
 1676    R    541954527 [watch-inbox-ope]
 1676    R    541954543 [watch-inbox-ope]
 1676    R    541954567 [watch-inbox-ope]
 1676    R    541954599 [watch-inbox-ope]
 1676    R    541954607 [watch-inbox-ope]
 1676    R    541954623 [watch-inbox-ope]
 1676    R    541954655 [watch-inbox-ope]
 1676    R    541954671 [watch-inbox-ope]
 1676    R    541954687 [watch-inbox-ope]
 1676    R    542158351 [watch-inbox-ope]
 1676    R    542158367 [watch-inbox-ope]
 1676    R    542158383 [watch-inbox-ope]
 1676    R    542158399 [watch-inbox-ope]
 1676    R    542158415 [watch-inbox-ope]
 1676    R    541954703 [watch-inbox-ope]
 1676    R    541954727 [watch-inbox-ope]
 1676    R    541954751 [watch-inbox-ope]
 1676    R    542158431 [watch-inbox-ope]
 1676    R    542158447 [watch-inbox-ope]
 1676    R    542158463 [watch-inbox-ope]
 1676    R    541954767 [watch-inbox-ope]
 1676   RM    540456727 [watch-inbox-ope]
 1676    R    541954831 [watch-inbox-ope]
 1676    R    541954863 [watch-inbox-ope]
 1676    R    541954783 [watch-inbox-ope]
 1676    R    541954799 [watch-inbox-ope]
 1676    R    541954895 [watch-inbox-ope]
 1676    R    541954911 [watch-inbox-ope]
 1676    R    541954927 [watch-inbox-ope]
 1676    R    541954943 [watch-inbox-ope]
 1676    R    542158479 [watch-inbox-ope]
 1676    R    541954959 [watch-inbox-ope]
 1676    R    542158495 [watch-inbox-ope]
 1676    R    541954983 [watch-inbox-ope]
 1676    R    541955007 [watch-inbox-ope]
 1676    R    541955023 [watch-inbox-ope]
 1676    R    541955047 [watch-inbox-ope]
 1676    R    541955071 [watch-inbox-ope]
 1676    R    541955087 [watch-inbox-ope]
 1676    R    541955119 [watch-inbox-ope]
 1676    R    541955183 [watch-inbox-ope]
  239    W    260607112  [flush-8:0]
  239    W    260607152  [flush-8:0]
  239    W    260607168  [flush-8:0]
  239    W    260607192  [flush-8:0]
  239    W    260607208  [flush-8:0]
  239    W    260607272  [flush-8:0]
  239    W    260607320  [flush-8:0]
  239    W    260607384  [flush-8:0]
  239    W    260607424  [flush-8:0]
  239    W    260607440  [flush-8:0]
  239    W    260607456  [flush-8:0]
  239    W    260607480  [flush-8:0]
  239    W    260607512  [flush-8:0]
  239    W    260607536  [flush-8:0]
  239    W    260607568  [flush-8:0]
  239    W    260607632  [flush-8:0]
  239    W    260607656  [flush-8:0]
  239    W    260607680  [flush-8:0]
  239    W    260607704  [flush-8:0]
  239    W    260607728  [flush-8:0]
  239    W    260607744  [flush-8:0]
  239    W    260607776  [flush-8:0]
  239    W    260607808  [flush-8:0]
  239    W    260607856  [flush-8:0]
  239    W    260607872  [flush-8:0]
  239    W    260607888  [flush-8:0]
  239    W    260607920  [flush-8:0]
  239    W    260607944  [flush-8:0]
  239    W    260607976  [flush-8:0]
  239    W    260607992  [flush-8:0]
  239    W    260608008  [flush-8:0]
  239    W    260608040  [flush-8:0]
  239    W    260608056  [flush-8:0]
  239    W    260608072  [flush-8:0]
  239    W    260608088  [flush-8:0]
  239    W    260608152  [flush-8:0]
  239    W    260608200  [flush-8:0]
  239    W    260608224  [flush-8:0]
  239    W    260608240  [flush-8:0]
  239    W    260608272  [flush-8:0]
  239    W    260608312  [flush-8:0]
  239    W    260608352  [flush-8:0]
  239    W    260608400  [flush-8:0]
  239    W    260608424  [flush-8:0]
  239    W    260608440  [flush-8:0]
 1676    R    541955311 [watch-inbox-ope]
 1676    R    541955327 [watch-inbox-ope]
 1676    R    542158511 [watch-inbox-ope]
 1676    R    542158543 [watch-inbox-ope]
 1676    R    542158559 [watch-inbox-ope]
 1676   RM    540456735 [watch-inbox-ope]
 1676    R    542158575 [watch-inbox-ope]
 1676    R    542158591 [watch-inbox-ope]
 1676    R    541955343 [watch-inbox-ope]
 1676    R    541955359 [watch-inbox-ope]
 1676    R    541955375 [watch-inbox-ope]
 1676    R    541955391 [watch-inbox-ope]
 1676    R    541955423 [watch-inbox-ope]
 1676    R    541955439 [watch-inbox-ope]
 1676    R    542187279 [watch-inbox-ope]
  239    W    260608456  [flush-8:0]
  239    W    260608472  [flush-8:0]
  239    W    260608488  [flush-8:0]
  239    W    260608528  [flush-8:0]
  239    W    260608544  [flush-8:0]
  239    W    260608568  [flush-8:0]
  239    W    260608584  [flush-8:0]
  239    W    260608600  [flush-8:0]
  239    W    260608632  [flush-8:0]
  239    W    260608672  [flush-8:0]
 1676    R    542187295 [watch-inbox-ope]
 1676    R    542187311 [watch-inbox-ope]
 1676    R    541955463 [watch-inbox-ope]
 1676    R    541955503 [watch-inbox-ope]
 1676    R    541955487 [watch-inbox-ope]
 1676    R    542187327 [watch-inbox-ope]
 1676    R    541955535 [watch-inbox-ope]
 1676    R    541955551 [watch-inbox-ope]
 1676    R    541955567 [watch-inbox-ope]
 1676    R    541955583 [watch-inbox-ope]
 1676    R    541955615 [watch-inbox-ope]
 1676    R    541955655 [watch-inbox-ope]
 1676   RM    540456743 [watch-inbox-ope]
 1676    R    541955679 [watch-inbox-ope]
 1676    R    541955695 [watch-inbox-ope]
 1676    R    541955711 [watch-inbox-ope]
 1676    R    542187343 [watch-inbox-ope]
 1676    R    542187359 [watch-inbox-ope]
 1676    R    542187375 [watch-inbox-ope]
 1676    R    542187391 [watch-inbox-ope]
 1676    R    542190479 [watch-inbox-ope]
 1676    R    541955727 [watch-inbox-ope]
 1676    R    541955751 [watch-inbox-ope]
 1676    R    541955775 [watch-inbox-ope]
 1676    R    542190495 [watch-inbox-ope]
 1676    R    542190511 [watch-inbox-ope]
 1676    R    542190527 [watch-inbox-ope]
 1676    R    541955791 [watch-inbox-ope]
 1676    R    541955823 [watch-inbox-ope]
 1676    R    541955839 [watch-inbox-ope]
 1676    R    541955855 [watch-inbox-ope]
 1676    R    541955879 [watch-inbox-ope]
 1676    R    541955903 [watch-inbox-ope]
 1676    R    541955919 [watch-inbox-ope]
 1676   RM    540456751 [watch-inbox-ope]
 1676    R    541955943 [watch-inbox-ope]
 1676    R    541955967 [watch-inbox-ope]
 1676    R    542190543 [watch-inbox-ope]
 1676    R    542190559 [watch-inbox-ope]
 1676    R    542190575 [watch-inbox-ope]
 1676    R    542190591 [watch-inbox-ope]
 1676    R    542193807 [watch-inbox-ope]
 1676    R    541955983 [watch-inbox-ope]
 1676    R    541956015 [watch-inbox-ope]
 1676    R    541956031 [watch-inbox-ope]
 1676    R    541956047 [watch-inbox-ope]
 1676    R    541956079 [watch-inbox-ope]
 1676    R    541956095 [watch-inbox-ope]
 1676    R    542193839 [watch-inbox-ope]
 1676    R    542193855 [watch-inbox-ope]
 1676    R    542193871 [watch-inbox-ope]
 1676    R    541956111 [watch-inbox-ope]
 1676    R    541956143 [watch-inbox-ope]
 1676    R    541956207 [watch-inbox-ope]
 1676    R    541956255 [watch-inbox-ope]
 1676    R    542193887 [watch-inbox-ope]
 1676    R    541956271 [watch-inbox-ope]
 1676    R    541956287 [watch-inbox-ope]
 1676    R    541956335 [watch-inbox-ope]
 1676   RM    540456759 [watch-inbox-ope]
 1676    R    542193903 [watch-inbox-ope]
 1676    R    541956319 [watch-inbox-ope]
 1676    R    541956367 [watch-inbox-ope]
 1676    R    541956399 [watch-inbox-ope]
 1676    R    541956415 [watch-inbox-ope]
 1676    R    541956431 [watch-inbox-ope]
 1676    R    541956447 [watch-inbox-ope]
 1676    R    541956479 [watch-inbox-ope]
 1676    R    542197775 [watch-inbox-ope]
 1676    R    542197791 [watch-inbox-ope]
 1676    R    542197807 [watch-inbox-ope]
 1676    R    542197823 [watch-inbox-ope]
 1676    R    542197839 [watch-inbox-ope]

I recognise that the output will have a WTF reaction but the key
observations to me are

a) a single write request from flusher took over a second to complete
b) at the time it was queued, it was mostly other writes that were in
   the queue at the same time
c) the write request and the parallel writes were all asynchronous write
   requests
d) at the time the request completed, there were a LARGE number of
   other requests queued and completed at the same time.

Of the requests queued and completed in the meantime the breakdown was

     22 RM
     31 RA
     82 W
    445 R

If I'm reading this correctly, it is saying that 22 reads were merged (RM),
31 reads were remapped to another device (RA) which is probably reads from
the dm-crypt partition, 82 were writes (W) which is not far off the number
of writes that were in the queue and 445 were other reads. The delay was
dominated by reads that were queued after the write request and completed
before it.
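
For what it's worth, a breakdown like the one above can be reproduced with a
few lines of Python over the blkparse text output. This is only a rough
sketch (it assumes blkparse's default output format, where the sixth field
is the action and the seventh the RWBS flags) and is not the actual script
used for these numbers:

#!/usr/bin/env python
# Rough sketch: tally completed requests by RWBS type from
# "blkparse -i <trace>" text output in the default format.
import sys
from collections import Counter

counts = Counter()
for line in sys.stdin:
    fields = line.split()
    if len(fields) < 7 or fields[5] != 'C':   # keep completion events only
        continue
    counts[fields[6]] += 1                    # RWBS string, e.g. W, R, RA, RM

for rwbs, n in sorted(counts.items(), key=lambda kv: kv[1]):
    print("%7d %s" % (n, rwbs))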

There are lots of other examples but here is part of one from much later
that starts with:

Request 27128 took 7.536721619 to complete
  239    W    188973663  [flush-8:0]

That's saying that the 27128th request in the trace took over 7 seconds
to complete and was an asynchronous write from flusher. The contents of
the queue are displayed at that time and the breakdown of requests is

     23 WS
     86 RM
    124 RA
    442 W
   1931 R

7 seconds later when it was completed the breakdown of completed
requests was

     25 WS
    114 RM
    155 RA
    408 W
   2457 R

In combination, that confirms for me that asynchronous writes from flush
are being starved by reads. When a process requires a buffer that is locked
by that asynchronous write from flusher, it stalls.

> The thing is, we do want to make ext4 work well with cfq, and
> prioritizing non-readahead read requests ahead of data writeback does
> make sense.  The issue is with is that metadata writes going through
> the block device could in some cases effectively cause a priority
> inversion when what had previously been an asynchronous writeback
> starts blocking a foreground, user-visible process.
> 
> At least, that's the theory;

I *think* the data more or less confirms the theory but it'd be nice if
someone else double checked in case I'm seeing what I want to see
instead of what is actually there.

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-22 14:38                     ` Mel Gorman
@ 2013-04-22 22:42                       ` Jeff Moyer
  -1 siblings, 0 replies; 105+ messages in thread
From: Jeff Moyer @ 2013-04-22 22:42 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Theodore Ts'o, Dave Chinner, Jan Kara, linux-ext4, LKML,
	Linux-MM, Jiri Slaby

Mel Gorman <mgorman@suse.de> writes:

> (Adding Jeff Moyer to the cc as I'm told he is interested in the blktrace)

Thanks.  I've got a few comments and corrections for you below.

> TLDR: Flusher writes pages very quickly after processes dirty a buffer. Reads
> starve flusher writes.
[snip]

> 3. The blktrace indicates that reads can starve writes from flusher
>
>    While there are people that can look at a blktrace and find problems
>    like they are rain man, I'm more like an ADHD squirrel when looking at
>    a trace.  I wrote a script to look for what unrelated requests completed
>    while an request got stalled for over a second. It seemed like something
>    that a tool shoudl already exist for but I didn't find one unless btt
>    can give the information somehow.

Care to share that script?

[snip]

> I recognise that the output will have a WTF reaction but the key
> observations to me are
>
> a) a single write request from flusher took over a second to complete
> b) at the time it was queued, it was mostly other writes that were in
>    the queue at the same time
> c) the write request and the parallel writes were all asynchronous write
>    requests
> d) at the time the request completed, there were a LARGE number of
>    other requests queued and completed at the same time.
>
> Of the requests queued and completed in the meantime the breakdown was
>
>      22 RM
>      31 RA
>      82 W
>     445 R
>
> If I'm reading this correctly, it is saying that 22 reads were merged (RM),
> 31 reads were remapped to another device (RA) which is probably reads from
> the dm-crypt partition, 82 were writes (W) which is not far off the number
> of writes that were in the queue and 445 were other reads. The delay was
> dominated by reads that were queued after the write request and completed
> before it.

RM == Read Meta
RA == Read Ahead  (remapping, by the way, does not happen across
                   devices, just into partitions)
W and R you understood correctly.
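
For reference, the RWBS combinations that actually show up in this trace
decode as below. A hypothetical helper, covering only the strings seen
here, would be:

# Hypothetical helper: decode the RWBS strings seen in this trace.
# Base op (R/W) plus modifiers: A = readahead, S = sync, M = metadata.
RWBS_LEGEND = {
    'R':  'read',
    'RA': 'readahead read',
    'RM': 'metadata read',
    'W':  'async write',
    'WS': 'sync write',
}

def describe(rwbs):
    return RWBS_LEGEND.get(rwbs, rwbs)

# describe('RM') -> 'metadata read', describe('WS') -> 'sync write'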

> That's saying that the 27128th request in the trace took over 7 seconds
> to complete and was an asynchronous write from flusher. The contents of
> the queue are displayed at that time and the breakdown of requests is
>
>      23 WS  [JEM: write sync]
>      86 RM  [JEM: Read Meta]
>     124 RA  [JEM: Read Ahead]
>     442 W
>    1931 R
>
> 7 seconds later when it was completed the breakdown of completed
> requests was
>
>      25 WS
>     114 RM
>     155 RA
>     408 W
>    2457 R
>
> In combination, that confirms for me that asynchronous writes from flush
> are being starved by reads. When a process requires a buffer that is locked
> by that asynchronous write from flusher, it stalls.
>
>> The thing is, we do want to make ext4 work well with cfq, and
>> prioritizing non-readahead read requests ahead of data writeback does
>> make sense.  The issue is with is that metadata writes going through
>> the block device could in some cases effectively cause a priority
>> inversion when what had previously been an asynchronous writeback
>> starts blocking a foreground, user-visible process.
>> 
>> At least, that's the theory;
>
> I *think* the data more or less confirms the theory but it'd be nice if
> someone else double checked in case I'm seeing what I want to see
> instead of what is actually there.

Looks sane.  You can also see a lot of "preempt"s in the blkparse
output, which indicates exactly what you're saying.  Any sync request
gets priority over the async requests.

I'll also note that even though your I/O is going all over the place
(D2C is pretty bad, 14ms), most of the time is spent waiting for a
struct request allocation or between Queue and Merge:

==================== All Devices ====================

            ALL           MIN           AVG           MAX           N
--------------- ------------- ------------- ------------- -----------

Q2Q               0.000000001   0.000992259   8.898375882     2300861
Q2G               0.000000843  10.193261239 2064.079501935     1016463 <====
G2I               0.000000461   0.000044702   3.237065090     1015803
Q2M               0.000000101   8.203147238 2064.079367557     1311662
I2D               0.000002012   1.476824812 2064.089774419     1014890
M2D               0.000003283   6.994306138 283.573348664     1284872
D2C               0.000061889   0.014438316   0.857811758     2291996
Q2C               0.000072284  13.363007244 2064.092228625     2292191

==================== Device Overhead ====================

       DEV |       Q2G       G2I       Q2M       I2D       D2C
---------- | --------- --------- --------- --------- ---------
 (  8,  0) |  33.8259%   0.0001%  35.1275%   4.8932%   0.1080%
---------- | --------- --------- --------- --------- ---------
   Overall |  33.8259%   0.0001%  35.1275%   4.8932%   0.1080%

I'm not sure I believe that max value.  2064 seconds seems a bit high.
Also, Q2M should not be anywhere near that big, so more investigation is
required there.  A quick look over the data doesn't show any such delays
(making me question the tools), but I'll write some code tomorrow to
verify the btt output.
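
Something along these lines is the kind of check I mean. It is only a
sketch: it works from the blkparse text output rather than the binary
trace, assumes the default output format, and attributes merged requests
to the completing sector, so it will only approximate btt's Q2C/Q2G
figures:

#!/usr/bin/env python
# Sketch: measure Q->C latency per (device, sector) straight from
# blkparse text output and print the worst cases, as a rough
# cross-check of btt's numbers.
import sys

queued = {}    # (dev, sector) -> timestamp of first Q event
delays = []

for line in sys.stdin:
    f = line.split()
    if len(f) < 8 or f[5] not in ('Q', 'C'):
        continue
    try:
        t = float(f[3])                # timestamp field
    except ValueError:
        continue
    key = (f[0], f[7])                 # (maj,min device, start sector)
    if f[5] == 'Q':
        queued.setdefault(key, t)
    elif key in queued:
        delays.append((t - queued.pop(key), f[6], f[7]))

for delay, rwbs, sector in sorted(delays, reverse=True)[:20]:
    print("%14.9f s  %-4s  sector %s" % (delay, rwbs, sector))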

Jan, if I were to come up with a way of promoting a particular async
queue to the front of the line, where would I put such a call in the
ext4/jbd2 code to be effective?

Mel, can you reproduce this at will?  Do you have a reproducer that I
could run so I'm not constantly bugging you?

Cheers,
Jeff

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
@ 2013-04-22 22:42                       ` Jeff Moyer
  0 siblings, 0 replies; 105+ messages in thread
From: Jeff Moyer @ 2013-04-22 22:42 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Theodore Ts'o, Dave Chinner, Jan Kara, linux-ext4, LKML,
	Linux-MM, Jiri Slaby

Mel Gorman <mgorman@suse.de> writes:

> (Adding Jeff Moyer to the cc as I'm told he is interested in the blktrace)

Thanks.  I've got a few comments and corrections for you below.

> TLDR: Flusher writes pages very quickly after processes dirty a buffer. Reads
> starve flusher writes.
[snip]

> 3. The blktrace indicates that reads can starve writes from flusher
>
>    While there are people that can look at a blktrace and find problems
>    like they are rain man, I'm more like an ADHD squirrel when looking at
>    a trace.  I wrote a script to look for what unrelated requests completed
>    while an request got stalled for over a second. It seemed like something
>    that a tool shoudl already exist for but I didn't find one unless btt
>    can give the information somehow.

Care to share that script?

[snip]

> I recognise that the output will have a WTF reaction but the key
> observations to me are
>
> a) a single write request from flusher took over a second to complete
> b) at the time it was queued, it was mostly other writes that were in
>    the queue at the same time
> c) The write request and the parallel writes were all asynchronous write
>    requests
> D) at the time the request completed, there were a LARGE number of
>    other requested queued and completed at the same time.
>
> Of the requests queued and completed in the meantime the breakdown was
>
>      22 RM
>      31 RA
>      82 W
>     445 R
>
> If I'm reading this correctly, it is saying that 22 reads were merged (RM),
> 31 reads were remapped to another device (RA) which is probably reads from
> the dm-crypt partition, 82 were writes (W) which is not far off the number
> of writes that were in the queue and 445 were other reads. The delay was
> dominated by reads that were queued after the write request and completed
> before it.

RM == Read Meta
RA == Read Ahead  (remapping, by the way, does not happen across
                   devices, just into partitions)
W and R you understood correctly.

> That's saying that the 27128th request in the trace took over 7 seconds
> to complete and was an asynchronous write from flusher. The contents of
> the queue are displayed at that time and the breakdown of requests is
>
>      23 WS  [JEM: write sync]
>      86 RM  [JEM: Read Meta]
>     124 RA  [JEM: Read Ahead]
>     442 W
>    1931 R
>
> 7 seconds later when it was completed the breakdown of completed
> requests was
>
>      25 WS
>     114 RM
>     155 RA
>     408 W
>    2457 R
>
> In combination, that confirms for me that asynchronous writes from flush
> are being starved by reads. When a process requires a buffer that is locked
> by that asynchronous write from flusher, it stalls.
>
>> The thing is, we do want to make ext4 work well with cfq, and
>> prioritizing non-readahead read requests ahead of data writeback does
>> make sense.  The issue is with is that metadata writes going through
>> the block device could in some cases effectively cause a priority
>> inversion when what had previously been an asynchronous writeback
>> starts blocking a foreground, user-visible process.
>> 
>> At least, that's the theory;
>
> I *think* the data more or less confirms the theory but it'd be nice if
> someone else double checked in case I'm seeing what I want to see
> instead of what is actually there.

Looks sane.  You can also see a lot of "preempt"s in the blkparse
output, which indicates exactly what you're saying.  Any sync request
gets priority over the async requests.

I'll also note that even though your I/O is going all over the place
(D2C is pretty bad, 14ms), most of the time is spent waiting for a
struct request allocation or between Queue and Merge:

==================== All Devices ====================

            ALL           MIN           AVG           MAX           N
--------------- ------------- ------------- ------------- -----------

Q2Q               0.000000001   0.000992259   8.898375882     2300861
Q2G               0.000000843  10.193261239 2064.079501935     1016463 <====
G2I               0.000000461   0.000044702   3.237065090     1015803
Q2M               0.000000101   8.203147238 2064.079367557     1311662
I2D               0.000002012   1.476824812 2064.089774419     1014890
M2D               0.000003283   6.994306138 283.573348664     1284872
D2C               0.000061889   0.014438316   0.857811758     2291996
Q2C               0.000072284  13.363007244 2064.092228625     2292191

==================== Device Overhead ====================

       DEV |       Q2G       G2I       Q2M       I2D       D2C
---------- | --------- --------- --------- --------- ---------
 (  8,  0) |  33.8259%   0.0001%  35.1275%   4.8932%   0.1080%
---------- | --------- --------- --------- --------- ---------
   Overall |  33.8259%   0.0001%  35.1275%   4.8932%   0.1080%

I'm not sure I believe that max value.  2064 seconds seems a bit high.
Also, Q2M should not be anywhere near that big, so more investigation is
required there.  A quick look over the data doesn't show any such delays
(making me question the tools), but I'll write some code tomorrow to
verify the btt output.
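
Roughly what I have in mind is to recompute Q2G straight from the blkparse
text output by pairing Q and G events on the same device and sector, and see
whether multi-thousand-second gaps really show up. An untested sketch,
assuming the default blkparse text format:

#!/usr/bin/env python
# Cross-check btt: recompute Q2G (queue to get-request) per request from
# blkparse text output and print the worst offenders.  Assumes the default
# columns: dev cpu seq time pid action rwbs sector + len [process]
import sys

queued = {}          # (dev, sector) -> timestamp of the Q event
deltas = []

for line in sys.stdin:
    f = line.split()
    if len(f) < 8:
        continue
    dev, action, rwbs = f[0], f[5], f[6]
    try:
        ts, sector = float(f[3]), int(f[7])
    except ValueError:
        continue                      # plug/unplug, trace messages, etc.
    key = (dev, sector)
    if action == 'Q':
        queued[key] = ts
    elif action == 'G' and key in queued:
        deltas.append((ts - queued.pop(key), ts, key, rwbs))

deltas.sort(reverse=True)
for q2g, ts, key, rwbs in deltas[:20]:
    print("Q2G %12.6fs ending at t=%.6f dev/sector=%s rwbs=%s"
          % (q2g, ts, key, rwbs))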

Jan, if I were to come up with a way of promoting a particular async
queue to the front of the line, where would I put such a call in the
ext4/jbd2 code to be effective?

Mel, can you reproduce this at will?  Do you have a reproducer that I
could run so I'm not constantly bugging you?

Cheers,
Jeff


^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-22 22:42                       ` Jeff Moyer
@ 2013-04-23  0:02                         ` Theodore Ts'o
  -1 siblings, 0 replies; 105+ messages in thread
From: Theodore Ts'o @ 2013-04-23  0:02 UTC (permalink / raw)
  To: Jeff Moyer
  Cc: Mel Gorman, Dave Chinner, Jan Kara, linux-ext4, LKML, Linux-MM,
	Jiri Slaby

On Mon, Apr 22, 2013 at 06:42:23PM -0400, Jeff Moyer wrote:
> 
> Jan, if I were to come up with a way of promoting a particular async
> queue to the front of the line, where would I put such a call in the
> ext4/jbd2 code to be effective?

Well, I thought we had discussed trying to bump a pending I/O
automatically when there was an attempt to call lock_buffer() on the
bh?  That would be ideal, because we could keep the async writeback
low priority until someone is trying to wait upon it, at which point
obviously it should no longer be considered an async write call.

Failing that, this is something I've been toying with.... what do you
think?

http://patchwork.ozlabs.org/patch/238192/
http://patchwork.ozlabs.org/patch/238257/

(The first patch in the series just makes sure that allocation bitmap
reads are marked with the META/PRIO flags.  It's not strictly speaking
related to the problem discussed here, but for completeness:
http://patchwork.ozlabs.org/patch/238193/)

						- Ted

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-22 22:42                       ` Jeff Moyer
@ 2013-04-23  9:31                         ` Jan Kara
  -1 siblings, 0 replies; 105+ messages in thread
From: Jan Kara @ 2013-04-23  9:31 UTC (permalink / raw)
  To: Jeff Moyer
  Cc: Mel Gorman, Theodore Ts'o, Dave Chinner, Jan Kara,
	linux-ext4, LKML, Linux-MM, Jiri Slaby

On Mon 22-04-13 18:42:23, Jeff Moyer wrote:
> Jan, if I were to come up with a way of promoting a particular async
> queue to the front of the line, where would I put such a call in the
> ext4/jbd2 code to be effective?
  As Ted wrote, the simplest might be to put this directly in
__lock_buffer(). Something like:

diff --git a/fs/buffer.c b/fs/buffer.c
index b4dcb34..e026a3e 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -69,6 +69,12 @@ static int sleep_on_buffer(void *word)
 
 void __lock_buffer(struct buffer_head *bh)
 {
+       /*
+        * Likely under async writeback? Tell io scheduler we are
+        * now waiting for the IO...
+        */
+       if (PageWriteback(bh->b_page))
+               io_now_sync(bh->b_bdev, bh->b_blocknr);
        wait_on_bit_lock(&bh->b_state, BH_Lock, sleep_on_buffer,
                                                        TASK_UNINTERRUPTIBLE);
}

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR

^ permalink raw reply related	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-22 22:42                       ` Jeff Moyer
@ 2013-04-23 14:01                         ` Mel Gorman
  -1 siblings, 0 replies; 105+ messages in thread
From: Mel Gorman @ 2013-04-23 14:01 UTC (permalink / raw)
  To: Jeff Moyer
  Cc: Theodore Ts'o, Dave Chinner, Jan Kara, linux-ext4, LKML,
	Linux-MM, Jiri Slaby

On Mon, Apr 22, 2013 at 06:42:23PM -0400, Jeff Moyer wrote:
> > 3. The blktrace indicates that reads can starve writes from flusher
> >
> >    While there are people that can look at a blktrace and find problems
> >    like they are rain man, I'm more like an ADHD squirrel when looking at
> >    a trace.  I wrote a script to look for what unrelated requests completed
> >    while a request got stalled for over a second. It seemed like something
> >    that a tool should already exist for, but I didn't find one unless btt
> >    can give the information somehow.
> 
> Care to share that script?
> 

I would have preferred not to because it is an ugly hatchet job churned
out in a few minutes. It's written in perl and uses the text output from
blkparse, which makes it slow. It uses an excessive amount of memory because
I was taking shortcuts, so it is resource heavy. It ignores most of the
information from blkparse and so there are gaps in what it reports. Even
though it's dogshit, it was useful in this particular case so I added it to
mmtests anyway. Be aware that it takes ages to run and you might want to
break the blkparse output into pieces.

It's used something like

blkparse -i blktrace-log > blkparse-log
cat blkparse-log | $PATH_TO_MMTESTS/subreport/blktrace-queue-watch.pl
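
If it helps, the core of it is no more than the following sketch -- this is
not the actual perl script, just the shape of the idea in Python, untested,
and it assumes blkparse's default text output:

#!/usr/bin/env python
# Find requests that took more than a second from Queue (Q) to Complete (C)
# and summarise what else completed in that window.  Keeps every completion
# in memory, so it is as resource-hungry as the description above suggests.
import sys
from collections import Counter

STALL = 1.0                  # seconds
queued = {}                  # (dev, sector) -> (queue time, rwbs, process)
completions = []             # (time, rwbs) for every completion seen

for line in sys.stdin:
    f = line.split()
    if len(f) < 8 or f[5] not in ('Q', 'C'):
        continue
    try:
        ts, sector = float(f[3]), int(f[7])
    except ValueError:
        continue
    key = (f[0], sector)
    if f[5] == 'Q':
        proc = f[-1].strip('[]') if f[-1].startswith('[') else '?'
        queued[key] = (ts, f[6], proc)
    else:
        completions.append((ts, f[6]))
        if key in queued:
            qts, rwbs, proc = queued.pop(key)
            if ts - qts > STALL:
                window = Counter(r for t, r in completions if qts <= t <= ts)
                print("%-16s %-4s stalled %6.0f ms, completed meanwhile: %s"
                      % (proc, rwbs, (ts - qts) * 1000, dict(window)))

The real script is messier and reports more context, but that's the general
idea.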

> [snip]
> 
> > I recognise that the output will have a WTF reaction but the key
> > observations to me are
> >
> > a) a single write request from flusher took over a second to complete
> > b) at the time it was queued, it was mostly other writes that were in
> >    the queue at the same time
> > c) The write request and the parallel writes were all asynchronous write
> >    requests
> > d) at the time the request completed, there were a LARGE number of
> >    other requests queued and completed at the same time.
> >
> > Of the requests queued and completed in the meantime the breakdown was
> >
> >      22 RM
> >      31 RA
> >      82 W
> >     445 R
> >
> > If I'm reading this correctly, it is saying that 22 reads were merged (RM),
> > 31 reads were remapped to another device (RA) which is probably reads from
> > the dm-crypt partition, 82 were writes (W) which is not far off the number
> > of writes that were in the queue and 445 were other reads. The delay was
> > dominated by reads that were queued after the write request and completed
> > before it.
> 
> RM == Read Meta
> RA == Read Ahead  (remapping, by the way, does not happen across
>                    devices, just into partitions)
> W and R you understood correctly.
> 

Thanks for those corrections. I misread the meaning of the action
identifiers section of the blkparse manual. I should have double checked
the source.

> >> <SNIP>
> >> The thing is, we do want to make ext4 work well with cfq, and
> >> prioritizing non-readahead read requests ahead of data writeback does
> >> make sense.  The issue is that metadata writes going through
> >> the block device could in some cases effectively cause a priority
> >> inversion when what had previously been an asynchronous writeback
> >> starts blocking a foreground, user-visible process.
> >> 
> >> At least, that's the theory;
> >
> > I *think* the data more or less confirms the theory but it'd be nice if
> > someone else double checked in case I'm seeing what I want to see
> > instead of what is actually there.
> 
> Looks sane.  You can also see a lot of "preempt"s in the blkparse
> output, which indicates exactly what you're saying.  Any sync request
> gets priority over the async requests.
> 

Good to know.

> I'll also note that even though your I/O is going all over the place
> (D2C is pretty bad, 14ms), most of the time is spent waiting for a
> struct request allocation or between Queue and Merge:
> 
> ==================== All Devices ====================
> 
>             ALL           MIN           AVG           MAX           N
> --------------- ------------- ------------- ------------- -----------
> 
> Q2Q               0.000000001   0.000992259   8.898375882     2300861
> Q2G               0.000000843  10.193261239 2064.079501935     1016463 <====

This is not normally my sandbox so do you mind spelling this out?

IIUC, the time to allocate the struct request from the slab cache is just a
small portion of this time. The bulk of the time is spent in get_request()
waiting for congestion to clear on the request list for either the sync or
async queue. Once a process goes to sleep on that waitqueue, it has to wait
until enough requests on that queue have been serviced before it gets woken
again at which point it gets priority access to prevent further starvation.
This is the Queue To Get Request (Q2G) delay. What we may be seeing here
is that the async queue was congested and on average, we are waiting for
10 seconds for it to clear. The maximum value may be bogus for reasons
explained later.

Is that accurate?

> G2I               0.000000461   0.000044702   3.237065090     1015803
> Q2M               0.000000101   8.203147238 2064.079367557     1311662
> I2D               0.000002012   1.476824812 2064.089774419     1014890
> M2D               0.000003283   6.994306138 283.573348664     1284872
> D2C               0.000061889   0.014438316   0.857811758     2291996
> Q2C               0.000072284  13.363007244 2064.092228625     2292191
> 
> ==================== Device Overhead ====================
> 
>        DEV |       Q2G       G2I       Q2M       I2D       D2C
> ---------- | --------- --------- --------- --------- ---------
>  (  8,  0) |  33.8259%   0.0001%  35.1275%   4.8932%   0.1080%
> ---------- | --------- --------- --------- --------- ---------
>    Overall |  33.8259%   0.0001%  35.1275%   4.8932%   0.1080%
> 
> I'm not sure I believe that max value.  2064 seconds seems a bit high.

It is, so I looked closer at the timestamps and there is a one-hour
correction about 4400 seconds into the test.  Daylight savings time kicked
in on March 31st and the machine had rarely been rebooted until this test case
came along. It looks like there is a timezone or time misconfiguration
on the laptop that starts the machine with the wrong time. NTP must have
corrected the time which skewed the readings in that window severely :(

Normally on my test machines these services are disabled to avoid
exactly this sort of problem.

> Also, Q2M should not be anywhere near that big, so more investigation is
> required there.  A quick look over the data doesn't show any such delays
> (making me question the tools), but I'll write some code tomorrow to
> verify the btt output.
> 

It might be a single set of readings during a time correction that
screwed it.

> Jan, if I were to come up with a way of promoting a particular async
> queue to the front of the line, where would I put such a call in the
> ext4/jbd2 code to be effective?
> 
> Mel, can you reproduce this at will?  Do you have a reproducer that I
> could run so I'm not constantly bugging you?
> 

I can reproduce it at will. Due to the nature of the test, the test
results are variable and unfortunately it is one of the trickier mmtests
configurations to set up.

1. Get access to a webserver
2. Clone mmtests onto your test machine
   git clone https://github.com/gormanm/mmtests.git
3. Edit shellpacks/common-config.sh and set WEBROOT to a webserver path
4. Create a tar.gz of a large git tree and place it at $WEBROOT/linux-2.6.tar.gz
   Alternatively place a compressed git tree anywhere and edit
   configs/config-global-dhp__io-multiple-source-latency
   and update GITCHECKOUT_SOURCETAR
5. Create a tar.gz of a large maildir directory and place it at
   $WEBROOT/maildir.tar.gz
   Alternatively, use an existing maildir folder and set
   MONITOR_INBOX_OPEN_MAILDIR in
   configs/config-global-dhp__io-multiple-source-latency

It's awkward but it's not like there are standard benchmarks lying around
and it seemed the best way to reproduce the problems I typically see early
in the lifetime of a system or when running a git checkout when the tree
has not been used in a few hours. Run the actual test with

./run-mmtests.sh --config configs/config-global-dhp__io-multiple-source-latency --run-monitor test-name-of-your-choice

Results will be in work/log. You'll need to run this as root so it
can run blktrace and so it can drop_caches between git checkouts
(to force disk IO). If systemtap craps out on you, then edit
configs/config-global-dhp__io-multiple-source-latency and remove dstate
from MONITORS_GZIP

If you have trouble getting this running, ping me on IRC.

Thanks.

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-21  0:05                   ` Theodore Ts'o
@ 2013-04-23 15:33                     ` Mel Gorman
  -1 siblings, 0 replies; 105+ messages in thread
From: Mel Gorman @ 2013-04-23 15:33 UTC (permalink / raw)
  To: Theodore Ts'o, Jan Kara, linux-ext4, LKML, Linux-MM, Jiri Slaby

On Sat, Apr 20, 2013 at 08:05:22PM -0400, Theodore Ts'o wrote:
> An alternate solution which I've been playing around adds buffer_head
> flags so we can indicate that a buffer contains metadata and/or should
> have I/O submitted with the REQ_PRIO flag set.
> 

I beefed up the reporting slightly and tested the patches comparing
3.9-rc6 vanilla with your patches. The full report with graphs are at

http://www.csn.ul.ie/~mel/postings/ext4tag-20130423/report.html

                           3.9.0-rc6             3.9.0-rc6
                             vanilla               ext4tag
User    min           0.00 (  0.00%)        0.00 (  0.00%)
User    mean           nan (   nan%)         nan (   nan%)
User    stddev         nan (   nan%)         nan (   nan%)
User    max           0.00 (  0.00%)        0.00 (  0.00%)
User    range         0.00 (  0.00%)        0.00 (  0.00%)
System  min           9.14 (  0.00%)        9.13 (  0.11%)
System  mean          9.60 (  0.00%)        9.73 ( -1.33%)
System  stddev        0.39 (  0.00%)        0.94 (-142.69%)
System  max          10.31 (  0.00%)       11.58 (-12.32%)
System  range         1.17 (  0.00%)        2.45 (-109.40%)
Elapsed min         665.54 (  0.00%)      612.25 (  8.01%)
Elapsed mean        775.35 (  0.00%)      688.01 ( 11.26%)
Elapsed stddev       69.11 (  0.00%)       58.22 ( 15.75%)
Elapsed max         858.40 (  0.00%)      773.06 (  9.94%)
Elapsed range       192.86 (  0.00%)      160.81 ( 16.62%)
CPU     min           3.00 (  0.00%)        3.00 (  0.00%)
CPU     mean          3.60 (  0.00%)        4.20 (-16.67%)
CPU     stddev        0.49 (  0.00%)        0.75 (-52.75%)
CPU     max           4.00 (  0.00%)        5.00 (-25.00%)
CPU     range         1.00 (  0.00%)        2.00 (-100.00%)

The patches appear to improve the git checkout times slightly but this
test is quite variable.

The vmstat figures report some reclaim activity but if you look at the graphs
further down you will see that the bulk of the kswapd reclaim scan and
steal activity is at the start of the test when it's downloading and
untarring a git tree to work with. (I also note that the mouse-over
graph for direct reclaim efficiency is broken but it's not important
right now).

From iostat

                    3.9.0-rc6   3.9.0-rc6
                      vanilla     ext4tag
Mean dm-0-avgqz          1.18        1.19
Mean dm-0-await         17.30       16.50
Mean dm-0-r_await       17.30       16.50
Mean dm-0-w_await        0.94        0.48
Mean sda-avgqz         650.29      719.81
Mean sda-await        2501.33     2597.23
Mean sda-r_await        30.01       24.91
Mean sda-w_await     11228.80    11120.64
Max  dm-0-avgqz         12.30       10.14
Max  dm-0-await         42.65       52.23
Max  dm-0-r_await       42.65       52.23
Max  dm-0-w_await      541.00      263.83
Max  sda-avgqz        3811.93     3375.11
Max  sda-await        7178.61     7170.44
Max  sda-r_await       384.37      297.85
Max  sda-w_await     51353.93    50338.25

There are no really obvious massive advantages to me there and if you look
at the graphs for avgqz, await etc. over time, the patched kernel is
not obviously better. The Wait CPU usage looks roughly the same too.

On the more positive side, the dstate systemtap monitor script tells me
that all processes were stalled for less time -- 9575 seconds versus
10910. The most severe event to stall on is sleep_on_buffer() as a
result of ext4_bread.

Vanilla kernel	3325677 ms stalled with 57 events
Patched kernel  2411471 ms stalled with 42 events

That's a pretty big drop but it gets bad again for the second worst stall --
wait_on_page_bit as a result of generic_file_buffered_write.

Vanilla kernel  1336064 ms stalled with 109 events
Patched kernel  2338781 ms stalled with 164 events

So conceptually the patches make sense but the first set of tests do
not indicate that they'll fix the problem and the stall times do not
indicate that interactivity will be any better. I'll still apply them
and boot them on my main work machine and see how they "feel" this
evening.

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-23 15:33                     ` Mel Gorman
@ 2013-04-23 15:50                       ` Theodore Ts'o
  -1 siblings, 0 replies; 105+ messages in thread
From: Theodore Ts'o @ 2013-04-23 15:50 UTC (permalink / raw)
  To: Mel Gorman; +Cc: Jan Kara, linux-ext4, LKML, Linux-MM, Jiri Slaby

On Tue, Apr 23, 2013 at 04:33:05PM +0100, Mel Gorman wrote:
> That's a pretty big drop but it gets bad again for the second worst stall --
> wait_on_page_bit as a result of generic_file_buffered_write.
> 
> Vanilla kernel  1336064 ms stalled with 109 events
> Patched kernel  2338781 ms stalled with 164 events

Do you have the stack trace for this stall?  I'm wondering if this is
caused by waiting for stable pages in write_begin(), or something
else.

If it is blocking caused by stable page writeback that's interesting,
since it would imply that something in your workload is trying to
write to a page that has already been modified (i.e., appending to a
log file, or updating a database file).  Does that make sense given
what your workload might be running?

					- Ted

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-23 15:50                       ` Theodore Ts'o
@ 2013-04-23 16:13                         ` Mel Gorman
  -1 siblings, 0 replies; 105+ messages in thread
From: Mel Gorman @ 2013-04-23 16:13 UTC (permalink / raw)
  To: Theodore Ts'o, Jan Kara, linux-ext4, LKML, Linux-MM, Jiri Slaby

On Tue, Apr 23, 2013 at 11:50:19AM -0400, Theodore Ts'o wrote:
> On Tue, Apr 23, 2013 at 04:33:05PM +0100, Mel Gorman wrote:
> > That's a pretty big drop but it gets bad again for the second worst stall --
> > wait_on_page_bit as a result of generic_file_buffered_write.
> > 
> > Vanilla kernel  1336064 ms stalled with 109 events
> > Patched kernel  2338781 ms stalled with 164 events
> 
> Do you have the stack trace for this stall?  I'm wondering if this is
> caused by the waiting for stable pages in write_begin() , or something
> else.
> 

[<ffffffff81110238>] wait_on_page_bit+0x78/0x80
[<ffffffff815af294>] kretprobe_trampoline+0x0/0x4c
[<ffffffff81110e84>] generic_file_buffered_write+0x114/0x2a0
[<ffffffff81111ccd>] __generic_file_aio_write+0x1bd/0x3c0
[<ffffffff81111f4a>] generic_file_aio_write+0x7a/0xf0
[<ffffffff811ee639>] ext4_file_write+0x99/0x420
[<ffffffff81174d87>] do_sync_write+0xa7/0xe0
[<ffffffff81175447>] vfs_write+0xa7/0x180
[<ffffffff811758cd>] sys_write+0x4d/0x90
[<ffffffff815b3eed>] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff>] 0xffffffffffffffff

The processes that stalled in this particular trace are wget, latency-output,
tar and tclsh. Most of these are sequential writers except for tar which
is both a sequential reader and sequential writers.

> If it is blocking caused by stable page writeback that's interesting,
> since it would imply that something in your workload is trying to
> write to a page that has already been modified (i.e., appending to a
> log file, or updating a database file).  Does that make sense given
> what your workload might be running?
> 

I doubt it is stable-page writes considering the type of processes that are
running. I would expect the bulk of the activity to be sequential readers or
writers of multiple files. The summarised report from the raw data is now at

http://www.csn.ul.ie/~mel/postings/ext4tag-20130423/dstate-summary-vanilla
http://www.csn.ul.ie/~mel/postings/ext4tag-20130423/dstate-summary-ext4tag

It's an aside, but the worst of the stalls are incurred by systemd-tmpfiles,
which was not a deliberate part of the test and is yet another thing that
I would not have caught unless I was running tests on my laptop. Looking
closer at that thing, the default configuration is to run the service 15
minutes after boot and after that it runs once a day. It looks like the
bulk of the scanning would be in /var/tmp/ looking at systemd's own files
(over 3000 of them), which I'm a little amused by.

My normal test machines would not hit this because they are not systemd
based, but the existence of this thing is worth noting. Any IO-based tests
run on systemd-based distributions may give different results depending
on whether this service triggered during the test or not.

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-23 14:01                         ` Mel Gorman
@ 2013-04-24 19:09                           ` Jeff Moyer
  -1 siblings, 0 replies; 105+ messages in thread
From: Jeff Moyer @ 2013-04-24 19:09 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Theodore Ts'o, Dave Chinner, Jan Kara, linux-ext4, LKML,
	Linux-MM, Jiri Slaby

Mel Gorman <mgorman@suse.de> writes:

>> I'll also note that even though your I/O is going all over the place
>> (D2C is pretty bad, 14ms), most of the time is spent waiting for a
>> struct request allocation or between Queue and Merge:
>> 
>> ==================== All Devices ====================
>> 
>>             ALL           MIN           AVG           MAX           N
>> --------------- ------------- ------------- ------------- -----------
>> 
>> Q2Q               0.000000001   0.000992259   8.898375882     2300861
>> Q2G               0.000000843  10.193261239 2064.079501935     1016463 <====
>
> This is not normally my sandbox so do you mind spelling this out?
>
> IIUC, the time to allocate the struct request from the slab cache is just a
> small portion of this time. The bulk of the time is spent in get_request()
> waiting for congestion to clear on the request list for either the sync or
> async queue. Once a process goes to sleep on that waitqueue, it has to wait
> until enough requests on that queue have been serviced before it gets woken
> again at which point it gets priority access to prevent further starvation.
> This is the Queue To Get Request (Q2G) delay. What we may be seeing here
> is that the async queue was congested and on average, we are waiting for
> 10 seconds for it to clear. The maximum value may be bogus for reasons
> explained later.
>
> Is that accurate?

Yes, without getting into excruciating detail.

>> G2I               0.000000461   0.000044702   3.237065090     1015803
>> Q2M               0.000000101   8.203147238 2064.079367557     1311662
>> I2D               0.000002012   1.476824812 2064.089774419     1014890
>> M2D               0.000003283   6.994306138 283.573348664     1284872
>> D2C               0.000061889   0.014438316   0.857811758     2291996
>> Q2C               0.000072284  13.363007244 2064.092228625     2292191
>> 
>> ==================== Device Overhead ====================
>> 
>>        DEV |       Q2G       G2I       Q2M       I2D       D2C
>> ---------- | --------- --------- --------- --------- ---------
>>  (  8,  0) |  33.8259%   0.0001%  35.1275%   4.8932%   0.1080%
>> ---------- | --------- --------- --------- --------- ---------
>>    Overall |  33.8259%   0.0001%  35.1275%   4.8932%   0.1080%
>> 
>> I'm not sure I believe that max value.  2064 seconds seems a bit high.
>
> It is, so I looked closer at the timestamps and there is a one-hour
> correction about 4400 seconds into the test.  Daylight savings time kicked
> in on March 31st and the machine had rarely been rebooted until this test case
> came along. It looks like there is a timezone or time misconfiguration
> on the laptop that starts the machine with the wrong time. NTP must have
> corrected the time which skewed the readings in that window severely :(

Not sure I'm buying that argument, as there are no gaps in the blkparse
output.  The logging is not done using wallclock time.  I still haven't
had sufficient time to dig into these numbers.

>> Also, Q2M should not be anywhere near that big, so more investigation is
>> required there.  A quick look over the data doesn't show any such delays
>> (making me question the tools), but I'll write some code tomorrow to
>> verify the btt output.
>> 
>
> It might be a single set of readings during a time correction that
> screwed it.

Again, I don't think so.

> I can reproduce it at will. Due to the nature of the test, the test
> results are variable and unfortunately it is one of the trickier mmtests
> configurations to set up.
>
> 1. Get access to a webserver
> 2. Clone mmtests onto your test machine
>    git clone https://github.com/gormanm/mmtests.git
> 3. Edit shellpacks/common-config.sh and set WEBROOT to a webserver path
> 4. Create a tar.gz of a large git tree and place it at $WEBROOT/linux-2.6.tar.gz
>    Alternatively place a compressed git tree anywhere and edit
>    configs/config-global-dhp__io-multiple-source-latency
>    and update GITCHECKOUT_SOURCETAR
> 5. Create a tar.gz of a large maildir directory and place it at
>    $WEBROOT/maildir.tar.gz
>    Alternatively, use an existing maildir folder and set
>    MONITOR_INBOX_OPEN_MAILDIR in
>    configs/config-global-dhp__io-multiple-source-latency
>
> It's awkward but it's not like there are standard benchmarks lying around
> and it seemed the best way to reproduce the problems I typically see early
> in the lifetime of a system or when running a git checkout when the tree
> has not been used in a few hours. Run the actual test with
>
> ./run-mmtests.sh --config configs/config-global-dhp__io-multiple-source-latency --run-monitor test-name-of-your-choice
>
> Results will be in work/log. You'll need to run this as root so it
> can run blktrace and so it can drop_caches between git checkouts
> (to force disk IO). If systemtap craps out on you, then edit
> configs/config-global-dhp__io-multiple-source-latency and remove dstate
> from MONITORS_GZIP

And how do I determine whether I've hit the problem?

> If you have trouble getting this running, ping me on IRC.

Yes, I'm having issues getting things to go, but you didn't provide me a
time zone, an irc server or a nick to help me find you.  Was that
intentional?  ;-)

Cheers,
Jeff

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Excessive stall times on ext4 in 3.9-rc2
  2013-04-24 19:09                           ` Jeff Moyer
@ 2013-04-25 12:21                             ` Mel Gorman
  -1 siblings, 0 replies; 105+ messages in thread
From: Mel Gorman @ 2013-04-25 12:21 UTC (permalink / raw)
  To: Jeff Moyer
  Cc: Theodore Ts'o, Dave Chinner, Jan Kara, linux-ext4, LKML,
	Linux-MM, Jiri Slaby

On Wed, Apr 24, 2013 at 03:09:13PM -0400, Jeff Moyer wrote:
> Mel Gorman <mgorman@suse.de> writes:
> 
> >> I'll also note that even though your I/O is going all over the place
> >> (D2C is pretty bad, 14ms), most of the time is spent waiting for a
> >> struct request allocation or between Queue and Merge:
> >> 
> >> ==================== All Devices ====================
> >> 
> >>             ALL           MIN           AVG           MAX           N
> >> --------------- ------------- ------------- ------------- -----------
> >> 
> >> Q2Q               0.000000001   0.000992259   8.898375882     2300861
> >> Q2G               0.000000843  10.193261239 2064.079501935     1016463 <====
> >
> > This is not normally my sandbox so do you mind spelling this out?
> >
> > IIUC, the time to allocate the struct request from the slab cache is just a
> > small portion of this time. The bulk of the time is spent in get_request()
> > waiting for congestion to clear on the request list for either the sync or
> > async queue. Once a process goes to sleep on that waitqueue, it has to wait
> > until enough requests on that queue have been serviced before it gets woken
> > again at which point it gets priority access to prevent further starvation.
> > This is the Queue To Get Request (Q2G) delay. What we may be seeing here
> > is that the async queue was congested and on average, we are waiting for
> > 10 seconds for it to clear. The maximum value may be bogus for reasons
> > explained later.
> >
> > Is that accurate?
> 
> Yes, without getting into excruciating detail.


Good enough, thanks.
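
(Noting for anyone else following along: that breakdown looks like btt
output, so assuming the blktrace data was captured for sda and the
per-CPU sda.blktrace.* files are in the current directory, something
along these lines should reproduce it; this is an untested sketch, so
adjust the device name to match your setup.)

# merge the per-CPU blktrace files into a single binary stream for btt
blkparse -q -i sda -d sda.blktrace.bin > /dev/null
# summarise per-request latencies (Q2Q, Q2G, D2C, ...) as in the table above
btt -i sda.blktrace.bin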

> >> I'm not sure I believe that max value.  2064 seconds seems a bit high.
> >
> > It is, so I looked closer at the timestamps and there is a one-hour
> > correction about 4400 seconds into the test.  Daylight savings time kicked
> > in on March 31st and the machine was rarely rebooted until this test case
> > came along. It looks like there is a timezone or time misconfiguration
> > on the laptop that starts the machine with the wrong time. NTP must have
> > corrected the time, which skewed the readings in that window severely :(
> 
> Not sure I'm buying that argument, as there are no gaps in the blkparse
> output.  The logging is not done using wallclock time.  I still haven't
> had sufficient time to dig into these numbers.
> 

Ok.

> > It's awkward but it's not like there are standard benchmarks lying around
> > and it seemed the best way to reproduce the problems I typically see early
> > in the lifetime of a system or when running a git checkout when the tree
> > has not been used in a few hours. Run the actual test with
> >
> > ./run-mmtests.sh --config configs/config-global-dhp__io-multiple-source-latency --run-monitor test-name-of-your-choice
> >
> > Results will be in work/log. You'll need to run this as root so it
> > can run blktrace and so it can drop_caches between git checkouts
> > (to force disk IO). If systemtap craps out on you, then edit
> > configs/config-global-dhp__io-multiple-source-latency and remove dstate
> > from MONITORS_GZIP
> 
> And how do I determine whether I've hit the problem?
> 

If systemtap is available then

cat work/log/dstate-TESTNAME-gitcheckout | subreport/stap-dstate-frequency

will give you a report on the worst stalls and the stack traces when those
stalls occurred. If the stalls are 10+ seconds then you've certainly hit
the problem.
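
If systemtap is not available at all, a crude stand-in for the dstate
monitor is to poll for tasks stuck in uninterruptible sleep while the
test runs. A rough sketch:

# print any task in D state every few seconds; the same task showing up
# repeatedly on the same wchan is roughly what the dstate monitor records
while true; do
    date +%s
    ps -eo state,pid,wchan:32,comm | awk '$1 == "D"'
    sleep 5
done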

Alternatively

cd work/log
../../compare-kernels.sh

Look at the average time it takes to run the git checkout. Is it
abnormally high compared to a run with no parallel IO? If you do not
know what the normal time is, run with

./run-mmtests.sh --config configs/config-global-dhp__io-multiple-source-latency --no-monitor test-name-no-monitor

The monitors are what open the maildir and generate the parallel IO.
If there is an excessive difference between the git checkout times,
then you've hit the problem.
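
In other words, the comparison boils down to roughly this:

# test run: monitors enabled, generating the parallel IO
./run-mmtests.sh --config configs/config-global-dhp__io-multiple-source-latency --run-monitor test-name-of-your-choice
# baseline: no monitors, so no parallel maildir IO during the checkouts
./run-mmtests.sh --config configs/config-global-dhp__io-multiple-source-latency --no-monitor test-name-no-monitor
# then compare the reported git checkout times of the two runs
cd work/log
../../compare-kernels.sh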

Furthermore, look at the await times. If they do not appear in the
compare-kernels.sh output, then run

../../bin/compare-mmtests.pl -d . -b gitcheckout -n test-name-of-your-choice --print-monitor iostat

and check the await times it reports. Are they very high? If so, you've
hit the problem. For a better look at the await figures over time, either
read the iostat file directly or graph it with

../../bin/graph-mmtests.sh -d . -b gitcheckout -n test-name-of-your-choice --print-monitor iostat --sub-heading sda-await

where sda-await should be substituted with the equivalent heading for
whatever disk you're running the test on.
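
If you just want a quick threshold check rather than a graph, something
like this over a plain iostat -x capture works as a rough filter. The
await field number varies between sysstat versions, so treat the column
below as an assumption and adjust it for your system:

# flag one-second intervals where sda's average wait exceeds 100ms
# (assumes await is field 10 of iostat -x output)
iostat -x 1 | awk '$1 == "sda" && $10+0 > 100 { print "high await:", $10 }'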


> > If you have trouble getting this running, ping me on IRC.
> 
> Yes, I'm having issues getting things to go, but you didn't provide me a
> time zone, an irc server or a nick to help me find you.  Was that
> intentional?  ;-)
> 

Not consciously :) . I'm in the IST timezone (GMT+1), on the OFTC IRC
network in the #mm channel. If it's my evening I'm not always responsive,
so send me the error output by email and I'll try fixing any problems
that way.
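
For what it's worth, the webserver side of the setup boils down to
roughly the following. The webroot path and source directory names are
only examples, so substitute whatever WEBROOT points at in
shellpacks/common-config.sh and whatever trees you have to hand:

# example webroot; must match WEBROOT in shellpacks/common-config.sh
WEBROOT=/srv/www/htdocs
# package a large git tree and a large maildir for the test to fetch
tar -czf $WEBROOT/linux-2.6.tar.gz linux-2.6/
tar -czf $WEBROOT/maildir.tar.gz maildir/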

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 105+ messages in thread

end of thread, other threads:[~2013-04-25 12:21 UTC | newest]

Thread overview: 105+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-04-02 14:27 Excessive stall times on ext4 in 3.9-rc2 Mel Gorman
2013-04-02 14:27 ` Mel Gorman
2013-04-02 15:00 ` Jiri Slaby
2013-04-02 15:00   ` Jiri Slaby
2013-04-02 15:03 ` Zheng Liu
2013-04-02 15:03   ` Zheng Liu
2013-04-02 15:15   ` Mel Gorman
2013-04-02 15:15     ` Mel Gorman
2013-04-02 15:06 ` Theodore Ts'o
2013-04-02 15:06   ` Theodore Ts'o
2013-04-02 15:14   ` Theodore Ts'o
2013-04-02 15:14     ` Theodore Ts'o
2013-04-02 18:19     ` Theodore Ts'o
2013-04-02 18:19       ` Theodore Ts'o
2013-04-07 21:59       ` Frank Ch. Eigler
2013-04-07 21:59         ` Frank Ch. Eigler
2013-04-08  8:36         ` Mel Gorman
2013-04-08  8:36           ` Mel Gorman
2013-04-08 10:52           ` Frank Ch. Eigler
2013-04-08 10:52             ` Frank Ch. Eigler
2013-04-08 11:01         ` Theodore Ts'o
2013-04-08 11:01           ` Theodore Ts'o
2013-04-03 10:19     ` Mel Gorman
2013-04-03 10:19       ` Mel Gorman
2013-04-03 12:05       ` Theodore Ts'o
2013-04-03 12:05         ` Theodore Ts'o
2013-04-03 15:15         ` Mel Gorman
2013-04-05 22:18       ` Jiri Slaby
2013-04-05 22:18         ` Jiri Slaby
2013-04-05 23:16         ` Theodore Ts'o
2013-04-05 23:16           ` Theodore Ts'o
2013-04-06  7:29           ` Jiri Slaby
2013-04-06  7:29             ` Jiri Slaby
2013-04-06  7:37             ` Jiri Slaby
2013-04-06  7:37               ` Jiri Slaby
2013-04-06  8:19               ` Jiri Slaby
2013-04-06 13:15             ` Theodore Ts'o
2013-04-06 13:15               ` Theodore Ts'o
2013-04-10 10:56   ` Mel Gorman
2013-04-10 10:56     ` Mel Gorman
2013-04-10 13:12     ` Theodore Ts'o
2013-04-10 13:12       ` Theodore Ts'o
2013-04-11 17:04       ` Mel Gorman
2013-04-11 17:04         ` Mel Gorman
2013-04-11 18:35         ` Theodore Ts'o
2013-04-11 18:35           ` Theodore Ts'o
2013-04-11 21:33           ` Jan Kara
2013-04-11 21:33             ` Jan Kara
2013-04-12  2:57             ` Theodore Ts'o
2013-04-12  2:57               ` Theodore Ts'o
2013-04-12  4:50               ` Dave Chinner
2013-04-12  4:50                 ` Dave Chinner
2013-04-12 15:19                 ` Theodore Ts'o
2013-04-12 15:19                   ` Theodore Ts'o
2013-04-13  1:23                   ` Dave Chinner
2013-04-13  1:23                     ` Dave Chinner
2013-04-22 14:38                   ` Mel Gorman
2013-04-22 14:38                     ` Mel Gorman
2013-04-22 22:42                     ` Jeff Moyer
2013-04-22 22:42                       ` Jeff Moyer
2013-04-23  0:02                       ` Theodore Ts'o
2013-04-23  0:02                         ` Theodore Ts'o
2013-04-23  9:31                       ` Jan Kara
2013-04-23  9:31                         ` Jan Kara
2013-04-23 14:01                       ` Mel Gorman
2013-04-23 14:01                         ` Mel Gorman
2013-04-24 19:09                         ` Jeff Moyer
2013-04-24 19:09                           ` Jeff Moyer
2013-04-25 12:21                           ` Mel Gorman
2013-04-25 12:21                             ` Mel Gorman
2013-04-12  9:47               ` Mel Gorman
2013-04-12  9:47                 ` Mel Gorman
2013-04-21  0:05                 ` Theodore Ts'o
2013-04-21  0:05                   ` Theodore Ts'o
2013-04-21  0:07                   ` [PATCH 1/3] ext4: mark all metadata I/O with REQ_META Theodore Ts'o
2013-04-21  0:07                     ` Theodore Ts'o
2013-04-21  0:07                     ` [PATCH 2/3] buffer: add BH_Prio and BH_Meta flags Theodore Ts'o
2013-04-21  0:07                       ` Theodore Ts'o
2013-04-21  0:07                     ` [PATCH 3/3] ext4: mark metadata blocks using bh flags Theodore Ts'o
2013-04-21  0:07                       ` Theodore Ts'o
2013-04-21  6:09                       ` Jiri Slaby
2013-04-21  6:09                         ` Jiri Slaby
2013-04-21  6:09                         ` Jiri Slaby
2013-04-21 19:55                         ` Theodore Ts'o
2013-04-21 19:55                           ` Theodore Ts'o
2013-04-21 19:55                           ` Theodore Ts'o
2013-04-21 20:48                           ` [PATCH 3/3 -v2] " Theodore Ts'o
2013-04-21 20:48                             ` Theodore Ts'o
2013-04-21 20:48                             ` Theodore Ts'o
2013-04-22 12:06                     ` [PATCH 1/3] ext4: mark all metadata I/O with REQ_META Zheng Liu
2013-04-22 12:06                       ` Zheng Liu
2013-04-23 15:33                   ` Excessive stall times on ext4 in 3.9-rc2 Mel Gorman
2013-04-23 15:33                     ` Mel Gorman
2013-04-23 15:50                     ` Theodore Ts'o
2013-04-23 15:50                       ` Theodore Ts'o
2013-04-23 16:13                       ` Mel Gorman
2013-04-23 16:13                         ` Mel Gorman
2013-04-12 10:18               ` Tvrtko Ursulin
2013-04-12 10:18                 ` Tvrtko Ursulin
2013-04-12  9:45           ` Mel Gorman
2013-04-12  9:45             ` Mel Gorman
2013-04-02 23:16 ` Theodore Ts'o
2013-04-02 23:16   ` Theodore Ts'o
2013-04-03 15:22   ` Mel Gorman
2013-04-03 15:22     ` Mel Gorman
