* Btrfs slowdown
@ 2011-07-25 7:54 Christian Brunner
2011-07-25 8:51 ` Andrej Podzimek
` (2 more replies)
0 siblings, 3 replies; 14+ messages in thread
From: Christian Brunner @ 2011-07-25 7:54 UTC (permalink / raw)
To: linux-btrfs, ceph-devel
Hi,
we are running a ceph cluster with btrfs as its base filesystem
(kernel 3.0). At the beginning everything worked very well, but after
a few days (2-3) things get very slow.
When I look at the object store servers I see heavy disk I/O on the
btrfs filesystems (disk utilization is between 60% and 100%). I also
did some tracing on the Ceph object store daemon, but I'm quite
certain that the majority of the disk I/O is not caused by ceph or
any other userland process.
When I reboot the system(s) the problems go away for another 2-3 days,
but after that, the slowdown starts again. I'm not sure if the problem is
related to the kernel warning I reported last week. At least there
is no temporal relationship between the warning and the slowdown.
Any hints on how to trace this would be welcome.
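For anyone wanting to reproduce the utilization numbers above without iostat, here is a minimal sketch, assuming the standard Linux /proc/diskstats layout (the sample lines below are hypothetical, not from our servers):

```python
# Derive a disk-utilization percentage from two /proc/diskstats samples:
# the 10th stats field after "major minor name" is io_ticks, the number
# of milliseconds the device spent doing I/O.

def parse_io_ticks(diskstats_line):
    """Return io_ticks (ms spent doing I/O) from one /proc/diskstats line."""
    return int(diskstats_line.split()[12])

def utilization(sample_before, sample_after, interval_ms):
    """Percentage of the sampling interval during which the disk was busy."""
    busy_ms = parse_io_ticks(sample_after) - parse_io_ticks(sample_before)
    return 100.0 * busy_ms / interval_ms

# Hypothetical samples taken 1000 ms apart:
before = "8 0 sda 100 0 800 120 50 0 400 300 0 5000 420"
after  = "8 0 sda 180 0 1400 200 90 0 720 520 0 5800 700"
print(utilization(before, after, 1000))  # 80.0 -> the disk was 80% busy
```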
Thanks,
Christian
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: Btrfs slowdown
2011-07-25 7:54 Btrfs slowdown Christian Brunner
@ 2011-07-25 8:51 ` Andrej Podzimek
2011-07-25 9:45 ` Andrej Podzimek
2011-07-25 14:37 ` Jeremy Sanders
2011-07-25 19:52 ` Chris Mason
2 siblings, 1 reply; 14+ messages in thread
From: Andrej Podzimek @ 2011-07-25 8:51 UTC (permalink / raw)
To: chb; +Cc: linux-btrfs, ceph-devel
[-- Attachment #1: Type: text/plain, Size: 2752 bytes --]
Hello,
I can see something similar on the machines I maintain, mostly single-disk setups with a 2.6.39 kernel:
1) Heavy and frequent disk thrashing, although less than 20% of RAM is used and no swap usage is reported.
2) During the disk thrashing, some processors (usually 2 or 3) spend 100% of their time busy waiting, according to htop.
3) Some userspace applications freeze for tens of seconds during the thrashing and busy waiting, sometimes even htop itself...
The problem has only been observed on 64-bit multiprocessors (Core i7 laptop and Nehalem class server Xeons). A 32-bit multiprocessor (Intel Core Duo) and a 64-bit uniprocessor (Intel Core 2 Duo class Celeron) do not seem to have any issues.
Furthermore, none of the machines had this problem with 2.6.38 and earlier kernels. Btrfs "just worked" before 2.6.39. I'll test 3.0 today to see whether some of these issues disappear.
Neither ceph nor any other remote/distributed filesystem (not even NFS) runs on the machines.
The second problem listed above looks like illegal blocking of a vital spinlock during a long disk operation, which freezes some kernel subsystems for an inordinate amount of time and causes a number of processors to wait actively for tens of seconds. (Needless to say, this is not acceptable on a laptop...)
Web browsers (Firefox and Chromium) seem to trigger this issue slightly more often than other applications, but I have no detailed statistics to prove this. ;-)
Two Core i7 class multiprocessors work 100% flawlessly with ext4, although their kernel configuration is otherwise identical to the machines that use Btrfs.
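To put a number on the busy waiting instead of eyeballing htop, one can sample /proc/stat; a minimal sketch, assuming the standard per-CPU field order (the sample lines are hypothetical):

```python
# Estimate how much of an interval each CPU spent in kernel (system) time,
# which is where spinlock busy-waiting shows up; iowait, by contrast, is
# time spent blocked and idle. Field order per the standard /proc/stat
# layout: cpuN user nice system idle iowait irq softirq ...

def cpu_times(stat_line):
    """Parse one per-CPU line of /proc/stat into a dict of jiffy counters."""
    names = ["user", "nice", "system", "idle", "iowait", "irq", "softirq"]
    return dict(zip(names, map(int, stat_line.split()[1:8])))

def system_share(before_line, after_line):
    """Fraction of the sampled interval this CPU spent in system time."""
    b, a = cpu_times(before_line), cpu_times(after_line)
    delta = {k: a[k] - b[k] for k in a}
    total = sum(delta.values())
    return delta["system"] / total if total else 0.0

# Hypothetical samples for cpu2:
before = "cpu2 1000 0 2000 5000 300 10 20"
after  = "cpu2 1010 0 2950 5020 310 10 20"
print(round(system_share(before, after), 2))  # 0.96: almost all kernel time
```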
Andrej
> Hi,
>
> we are running a ceph cluster with btrfs as its base filesystem
> (kernel 3.0). At the beginning everything worked very well, but after
> a few days (2-3) things get very slow.
>
> When I look at the object store servers I see heavy disk I/O on the
> btrfs filesystems (disk utilization is between 60% and 100%). I also
> did some tracing on the Ceph object store daemon, but I'm quite
> certain that the majority of the disk I/O is not caused by ceph or
> any other userland process.
>
> When I reboot the system(s) the problems go away for another 2-3 days,
> but after that, the slowdown starts again. I'm not sure if the problem is
> related to the kernel warning I reported last week. At least there
> is no temporal relationship between the warning and the slowdown.
>
> Any hints on how to trace this would be welcome.
>
> Thanks,
> Christian
> --
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
[-- Attachment #2: S/MIME electronic signature --]
[-- Type: application/pkcs7-signature, Size: 5804 bytes --]
* Re: Btrfs slowdown
2011-07-25 8:51 ` Andrej Podzimek
@ 2011-07-25 9:45 ` Andrej Podzimek
2011-08-03 15:56 ` mck
0 siblings, 1 reply; 14+ messages in thread
From: Andrej Podzimek @ 2011-07-25 9:45 UTC (permalink / raw)
To: chb; +Cc: linux-btrfs, ceph-devel
[-- Attachment #1: Type: text/plain, Size: 2946 bytes --]
Just a quick note: the issue seems to be gone in 3.0.0. But that's just a wild guess based on half an hour without thrashing. :-)
Andrej
> Hello,
>
> I can see something similar on the machines I maintain, mostly single-disk setups with a 2.6.39 kernel:
>
> 1) Heavy and frequent disk thrashing, although less than 20% of RAM is used and no swap usage is reported.
> 2) During the disk thrashing, some processors (usually 2 or 3) spend 100% of their time busy waiting, according to htop.
> 3) Some userspace applications freeze for tens of seconds during the thrashing and busy waiting, sometimes even htop itself...
>
> The problem has only been observed on 64-bit multiprocessors (Core i7 laptop and Nehalem class server Xeons). A 32-bit multiprocessor (Intel Core Duo) and a 64-bit uniprocessor (Intel Core 2 Duo class Celeron) do not seem to have any issues.
>
> Furthermore, none of the machines had this problem with 2.6.38 and earlier kernels. Btrfs "just worked" before 2.6.39. I'll test 3.0 today to see whether some of these issues disappear.
>
> Neither ceph nor any other remote/distributed filesystem (not even NFS) runs on the machines.
>
> The second problem listed above looks like illegal blocking of a vital spinlock during a long disk operation, which freezes some kernel subsystems for an inordinate amount of time and causes a number of processors to wait actively for tens of seconds. (Needless to say, this is not acceptable on a laptop...)
>
> Web browsers (Firefox and Chromium) seem to trigger this issue slightly more often than other applications, but I have no detailed statistics to prove this. ;-)
>
> Two Core i7 class multiprocessors work 100% flawlessly with ext4, although their kernel configuration is otherwise identical to the machines that use Btrfs.
>
> Andrej
>
>> Hi,
>>
>> we are running a ceph cluster with btrfs as its base filesystem
>> (kernel 3.0). At the beginning everything worked very well, but after
>> a few days (2-3) things get very slow.
>>
>> When I look at the object store servers I see heavy disk I/O on the
>> btrfs filesystems (disk utilization is between 60% and 100%). I also
>> did some tracing on the Ceph object store daemon, but I'm quite
>> certain that the majority of the disk I/O is not caused by ceph or
>> any other userland process.
>>
>> When I reboot the system(s) the problems go away for another 2-3 days,
>> but after that, the slowdown starts again. I'm not sure if the problem is
>> related to the kernel warning I reported last week. At least there
>> is no temporal relationship between the warning and the slowdown.
>>
>> Any hints on how to trace this would be welcome.
>>
>> Thanks,
>> Christian
[-- Attachment #2: S/MIME electronic signature --]
[-- Type: application/pkcs7-signature, Size: 5804 bytes --]
* Re: Btrfs slowdown
2011-07-25 7:54 Btrfs slowdown Christian Brunner
2011-07-25 8:51 ` Andrej Podzimek
@ 2011-07-25 14:37 ` Jeremy Sanders
2011-07-25 19:52 ` Chris Mason
2 siblings, 0 replies; 14+ messages in thread
From: Jeremy Sanders @ 2011-07-25 14:37 UTC (permalink / raw)
To: linux-btrfs; +Cc: ceph-devel
Christian Brunner wrote:
> we are running a ceph cluster with btrfs as its base filesystem
> (kernel 3.0). At the beginning everything worked very well, but after
> a few days (2-3) things get very slow.
We see quite a slowdown over time when doing rsyncs to different snapshots.
Btrfs seems to go from using several threads in parallel (btrfs-endio-0,1,2,
as shown in top) to just a single thread (btrfs-delalloc).
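A quick way to watch for that collapse onto a single thread is to group the btrfs kernel-thread names by worker pool; a small sketch (the thread list below is hypothetical, with names as they appear in top or /proc/<pid>/comm):

```python
# Count btrfs kernel worker threads per pool, so a collapse from several
# btrfs-endio workers to a lone btrfs-delalloc thread is visible at a glance.
from collections import Counter

def btrfs_pools(thread_names):
    """Count btrfs worker threads per pool ('btrfs-endio-0' -> 'btrfs-endio')."""
    pools = Counter()
    for name in thread_names:
        if name.startswith("btrfs-"):
            pools[name.rstrip("-0123456789")] += 1
    return pools

# Hypothetical snapshot of kernel threads:
threads = ["btrfs-endio-0", "btrfs-endio-1", "btrfs-endio-2",
           "btrfs-delalloc-", "kworker/0:1", "btrfs-submit-0"]
print(dict(btrfs_pools(threads)))
# {'btrfs-endio': 3, 'btrfs-delalloc': 1, 'btrfs-submit': 1}
```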
Jeremy
* Re: Btrfs slowdown
2011-07-25 7:54 Btrfs slowdown Christian Brunner
2011-07-25 8:51 ` Andrej Podzimek
2011-07-25 14:37 ` Jeremy Sanders
@ 2011-07-25 19:52 ` Chris Mason
2011-07-27 8:41 ` Christian Brunner
2 siblings, 1 reply; 14+ messages in thread
From: Chris Mason @ 2011-07-25 19:52 UTC (permalink / raw)
To: Christian Brunner; +Cc: linux-btrfs, ceph-devel
Excerpts from Christian Brunner's message of 2011-07-25 03:54:47 -0400:
> Hi,
>
> we are running a ceph cluster with btrfs as its base filesystem
> (kernel 3.0). At the beginning everything worked very well, but after
> a few days (2-3) things get very slow.
>
> When I look at the object store servers I see heavy disk I/O on the
> btrfs filesystems (disk utilization is between 60% and 100%). I also
> did some tracing on the Ceph object store daemon, but I'm quite
> certain that the majority of the disk I/O is not caused by ceph or
> any other userland process.
>
> When I reboot the system(s) the problems go away for another 2-3 days,
> but after that, the slowdown starts again. I'm not sure if the problem is
> related to the kernel warning I reported last week. At least there
> is no temporal relationship between the warning and the slowdown.
>
> Any hints on how to trace this would be welcome.
The easiest way to trace this is with latencytop.
Apply this patch:
http://oss.oracle.com/~mason/latencytop.patch
And then use latencytop -c for a few minutes while the system is slow.
Send the output here and hopefully we'll be able to figure it out.
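For sifting through long `latencytop -c` logs, a small sketch of a parser for the Globals section (the sample text below is abridged to match the output format, with hypothetical values):

```python
# Pull (cause, max msec, percent) tuples out of the Globals section of
# latencytop -c output, so the dominant latency sources can be sorted
# or diffed between runs.
import re

LINE = re.compile(r"^\s*(.+?)\s+([\d.]+) msec\s+([\d.]+) %\s*$")

def global_causes(latencytop_output):
    """Return [(cause, max_msec, percent)] from the Globals section."""
    causes, in_globals = [], False
    for line in latencytop_output.splitlines():
        if line.startswith("Globals:"):
            in_globals = True
            continue
        if line.startswith("Process details:"):
            break
        m = LINE.match(line) if in_globals else None
        if m:
            causes.append((m.group(1), float(m.group(2)), float(m.group(3))))
    return causes

sample = """\
=============== Tue Jul 26 16:25:14 2011
Globals: Cause Maximum Percentage
[wait_for_commit] 335.9 msec 42.8 %
[btrfs_commit_transaction_async] 276.5 msec 24.9 %
Userspace lock contention 4.8 msec 7.0 %
Process details:
"""
print(global_causes(sample)[0])  # ('[wait_for_commit]', 335.9, 42.8)
```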
-chris
* Re: Btrfs slowdown
2011-07-25 19:52 ` Chris Mason
@ 2011-07-27 8:41 ` Christian Brunner
2011-07-28 4:05 ` Marcus Sorensen
2011-08-08 21:58 ` Sage Weil
0 siblings, 2 replies; 14+ messages in thread
From: Christian Brunner @ 2011-07-27 8:41 UTC (permalink / raw)
To: Chris Mason; +Cc: linux-btrfs, ceph-devel
[-- Attachment #1: Type: text/plain, Size: 1557 bytes --]
2011/7/25 Chris Mason <chris.mason@oracle.com>:
> Excerpts from Christian Brunner's message of 2011-07-25 03:54:47 -0400:
>> Hi,
>>
>> we are running a ceph cluster with btrfs as its base filesystem
>> (kernel 3.0). At the beginning everything worked very well, but after
>> a few days (2-3) things get very slow.
>>
>> When I look at the object store servers I see heavy disk I/O on the
>> btrfs filesystems (disk utilization is between 60% and 100%). I also
>> did some tracing on the Ceph object store daemon, but I'm quite
>> certain that the majority of the disk I/O is not caused by ceph or
>> any other userland process.
>>
>> When I reboot the system(s) the problems go away for another 2-3 days,
>> but after that, the slowdown starts again. I'm not sure if the problem is
>> related to the kernel warning I reported last week. At least there
>> is no temporal relationship between the warning and the slowdown.
>>
>> Any hints on how to trace this would be welcome.
>
> The easiest way to trace this is with latencytop.
>
> Apply this patch:
>
> http://oss.oracle.com/~mason/latencytop.patch
>
> And then use latencytop -c for a few minutes while the system is slow.
> Send the output here and hopefully we'll be able to figure it out.
I've now installed latencytop. Attached are two output files: the
first is from yesterday and was created approximately half an hour
after the boot. The second one is from today; uptime is 19h. The load
on the system is already rising. Disk utilization is approximately 50%.
Thanks for your help.
Christian
[-- Attachment #2: ltop-20110726.txt --]
[-- Type: text/plain, Size: 29903 bytes --]
=============== Tue Jul 26 16:25:14 2011
Globals: Cause Maximum Percentage
[wait_for_commit] 335.9 msec 42.8 %
[btrfs_commit_transaction_async] 276.5 msec 24.9 %
[sleep_on_page] 91.5 msec 17.8 %
[synchronize_sched] 15.7 msec 5.8 %
Userspace lock contention 4.8 msec 7.0 %
Waiting for event (poll) 4.1 msec 1.0 %
Waiting for event (select) 3.1 msec 0.3 %
[blkdev_issue_flush] 0.9 msec 0.1 %
synchronous write 0.9 msec 0.0 %
Process details:
Process ksoftirqd/0 (3) Total: 3.5 msec
[run_ksoftirqd] 2.0 msec 100.0 %
run_ksoftirqd kthread kernel_thread_helper
Process ksoftirqd/1 (10) Total: 10.9 msec
[run_ksoftirqd] 3.2 msec 100.0 %
run_ksoftirqd kthread kernel_thread_helper
Process kworker/3:0 (18) Total: 605.2 msec
. 89.5 msec 55.4 %
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work worker_thread kthread
kernel_thread_helper
[sleep_on_page] 27.7 msec 44.6 %
sleep_on_page __lock_page extent_write_cache_pages.clone.0
extent_writepages btrfs_writepages do_writepages
__filemap_fdatawrite_range filemap_flush
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work
Process ksoftirqd/4 (23) Total: 2.3 msec
[run_ksoftirqd] 2.1 msec 100.0 %
run_ksoftirqd kthread kernel_thread_helper
Process ksoftirqd/5 (27) Total: 1.5 msec
[run_ksoftirqd] 0.5 msec 100.0 %
run_ksoftirqd kthread kernel_thread_helper
Process kworker/7:0 (34) Total: 402.1 msec
. 68.0 msec 70.5 %
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work worker_thread kthread
kernel_thread_helper
[sleep_on_page] 29.1 msec 29.5 %
sleep_on_page __lock_page extent_write_cache_pages.clone.0
extent_writepages btrfs_writepages do_writepages
__filemap_fdatawrite_range filemap_flush
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work
Process kworker/5:1 (51) Total: 368.4 msec
. 73.5 msec 60.5 %
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work worker_thread kthread
kernel_thread_helper
[sleep_on_page] 25.2 msec 39.5 %
sleep_on_page __lock_page extent_write_cache_pages.clone.0
extent_writepages btrfs_writepages do_writepages
__filemap_fdatawrite_range filemap_flush
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work
Process fct0-worker (826) Total: 65.5 msec
[fusion_condvar_timedwait_noload] 4.0 msec 100.0 %
fusion_condvar_timedwait_noload ifio_5fb65.a9c484151da25a9eb60ef9a6e7309d1a95f.2.3.1.123
Process fioa-data-groom (858) Total: 46.5 msec
[fusion_condvar_wait] 2.4 msec 9.8 %
fusion_condvar_wait ifio_7495f.eadce65a4e06c525e92a3321007983e12d8.2.3.1.123
[fusion_condvar_timedwait_noload] 1.0 msec 90.2 %
fusion_condvar_timedwait_noload ifio_0d5cb.2e5d4455283b04989f2f9f0a242787cf79f.2.3.1.123
Process fio-submit-rq (859) Total: 28.5 msec
[fusion_condvar_timedwait_noload] 4.2 msec 100.0 %
fusion_condvar_timedwait_noload ifio_5de02.3ebd86695a57771148b5f50a3f899efeb97.2.3.1.123
Process rsyslogd (1402) Total: 36.3 msec
Userspace lock contention 3.2 msec 67.1 %
futex_wait_queue_me futex_wait do_futex sys_futex system_call_fastpath
Waiting for event (select) 3.1 msec 32.9 %
poll_schedule_timeout do_select core_sys_select sys_select
system_call_fastpath
Process btrfs-submit-0 (1623) Total: 3.0 msec
[worker_loop] 0.2 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-cleaner (1634) Total: 35.3 msec
[synchronize_sched] 15.6 msec 100.0 %
synchronize_sched __synchronize_srcu synchronize_srcu
btrfs_free_fs_root btrfs_drop_snapshot
btrfs_clean_old_snapshots cleaner_kthread kthread kernel_thread_helper
Process cosd (1647) Total: 577.5 msec
[wait_for_commit] 308.1 msec 53.3 %
wait_for_commit btrfs_wait_for_commit btrfs_ioctl_wait_sync
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[btrfs_commit_transaction_async] 113.4 msec 19.6 %
btrfs_commit_transaction_async btrfs_mksubvol
btrfs_ioctl_snap_create_transid btrfs_ioctl_snap_create_v2
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[sleep_on_page] 30.8 msec 15.0 %
sleep_on_page wait_on_page_bit prepare_pages
__btrfs_buffered_write btrfs_file_aio_write
do_sync_readv_writev do_readv_writev vfs_writev sys_writev
system_call_fastpath
[synchronize_sched] 15.6 msec 6.9 %
synchronize_sched __synchronize_srcu synchronize_srcu
btrfs_destroy_inode destroy_inode evict iput d_delete
btrfs_ioctl_snap_destroy btrfs_ioctl do_vfs_ioctl sys_ioctl
Userspace lock contention 2.7 msec 4.2 %
futex_wait_queue_me futex_wait do_futex sys_futex system_call_fastpath
Waiting for event (poll) 2.7 msec 0.8 %
poll_schedule_timeout do_sys_poll sys_poll system_call_fastpath
Direct block device IO 0.2 msec 0.1 %
__blockdev_direct_IO blkdev_direct_IO
generic_file_direct_write __generic_file_aio_write
blkdev_aio_write do_sync_write vfs_write sys_pwrite64 system_call_fastpath
synchronous write 0.1 msec 0.0 %
blkdev_issue_flush blkdev_fsync vfs_fsync_range
generic_write_sync blkdev_aio_write do_sync_write vfs_write
sys_pwrite64 system_call_fastpath
Process btrfs-submit-0 (1730) Total: 2.6 msec
[worker_loop] 0.2 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-met (1735) Total: 0.3 msec
[worker_loop] 0.3 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process cosd (1755) Total: 778.7 msec
[wait_for_commit] 223.2 msec 28.7 %
wait_for_commit btrfs_wait_for_commit btrfs_ioctl_wait_sync
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[btrfs_commit_transaction_async] 150.3 msec 19.3 %
btrfs_commit_transaction_async btrfs_mksubvol
btrfs_ioctl_snap_create_transid btrfs_ioctl_snap_create_v2
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[sleep_on_page] 91.5 msec 38.6 %
sleep_on_page wait_on_page_bit prepare_pages
__btrfs_buffered_write btrfs_file_aio_write
do_sync_readv_writev do_readv_writev vfs_writev sys_writev
system_call_fastpath
[synchronize_sched] 15.6 msec 5.6 %
synchronize_sched __synchronize_srcu synchronize_srcu
btrfs_destroy_inode destroy_inode evict iput d_delete
btrfs_ioctl_snap_destroy btrfs_ioctl do_vfs_ioctl sys_ioctl
Userspace lock contention 4.8 msec 6.5 %
futex_wait_queue_me futex_wait do_futex sys_futex system_call_fastpath
Waiting for event (poll) 3.3 msec 1.2 %
poll_schedule_timeout do_sys_poll sys_poll system_call_fastpath
Direct block device IO 0.2 msec 0.1 %
__blockdev_direct_IO blkdev_direct_IO
generic_file_direct_write __generic_file_aio_write
blkdev_aio_write do_sync_write vfs_write sys_pwrite64 system_call_fastpath
[btrfs_start_ordered_extent] 0.1 msec 0.0 %
btrfs_start_ordered_extent btrfs_wait_ordered_range
prepare_pages __btrfs_buffered_write btrfs_file_aio_write
do_sync_readv_writev do_readv_writev vfs_writev sys_writev
system_call_fastpath
[wait_extent_bit] 0.1 msec 0.0 %
wait_extent_bit lock_extent_bits prepare_pages
__btrfs_buffered_write btrfs_file_aio_write
do_sync_readv_writev do_readv_writev vfs_writev sys_writev
system_call_fastpath
Process btrfs-submit-0 (1839) Total: 9.3 msec
[worker_loop] 2.7 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process cosd (1875) Total: 667.7 msec
[wait_for_commit] 335.9 msec 50.3 %
wait_for_commit btrfs_wait_for_commit btrfs_ioctl_wait_sync
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[btrfs_commit_transaction_async] 200.2 msec 30.0 %
btrfs_commit_transaction_async btrfs_mksubvol
btrfs_ioctl_snap_create_transid btrfs_ioctl_snap_create_v2
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[synchronize_sched] 15.7 msec 6.1 %
synchronize_sched __synchronize_srcu synchronize_srcu
btrfs_destroy_inode destroy_inode evict iput d_delete
btrfs_ioctl_snap_destroy btrfs_ioctl do_vfs_ioctl sys_ioctl
[sleep_on_page] 7.9 msec 2.3 %
sleep_on_page wait_on_page_bit prepare_pages
__btrfs_buffered_write btrfs_file_aio_write
do_sync_readv_writev do_readv_writev vfs_writev sys_writev
system_call_fastpath
Userspace lock contention 4.4 msec 9.7 %
futex_wait_queue_me futex_wait do_futex sys_futex system_call_fastpath
Waiting for event (poll) 4.1 msec 1.2 %
poll_schedule_timeout do_sys_poll sys_poll system_call_fastpath
[blkdev_issue_flush] 0.9 msec 0.2 %
blkdev_issue_flush blkdev_fsync vfs_fsync_range
generic_write_sync blkdev_aio_write do_sync_readv_writev
do_readv_writev vfs_writev sys_writev system_call_fastpath
synchronous write 0.9 msec 0.1 %
blkdev_issue_flush blkdev_fsync vfs_fsync_range
generic_write_sync blkdev_aio_write do_sync_write vfs_write
sys_pwrite64 system_call_fastpath
Direct block device IO 0.3 msec 0.1 %
__blockdev_direct_IO blkdev_direct_IO
generic_file_direct_write __generic_file_aio_write
blkdev_aio_write do_sync_write vfs_write sys_pwrite64 system_call_fastpath
Process btrfs-submit-0 (1965) Total: 9.3 msec
[worker_loop] 2.9 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-delalloc- (1966) Total: 6.2 msec
[worker_loop] 2.8 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-met (1970) Total: 20.3 msec
[worker_loop] 4.1 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process kworker/0:3 (1977) Total: 3.9 msec
. 3.7 msec 100.0 %
worker_thread kthread kernel_thread_helper
Process btrfs-worker-2 (2003) Total: 0.1 msec
[worker_loop] 0.1 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process cosd (2017) Total: 1410.6 msec
[wait_for_commit] 335.7 msec 43.9 %
wait_for_commit btrfs_wait_for_commit btrfs_ioctl_wait_sync
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[btrfs_commit_transaction_async] 276.5 msec 28.4 %
btrfs_commit_transaction_async btrfs_mksubvol
btrfs_ioctl_snap_create_transid btrfs_ioctl_snap_create_v2
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[sleep_on_page] 24.5 msec 15.2 %
sleep_on_page __lock_page prepare_uptodate_page prepare_pages
__btrfs_buffered_write btrfs_file_aio_write
do_sync_readv_writev do_readv_writev vfs_writev sys_writev
system_call_fastpath
[synchronize_sched] 15.6 msec 5.6 %
synchronize_sched __synchronize_srcu synchronize_srcu
btrfs_destroy_inode destroy_inode evict iput d_delete
btrfs_ioctl_snap_destroy btrfs_ioctl do_vfs_ioctl sys_ioctl
Userspace lock contention 4.6 msec 5.7 %
futex_wait_queue_me futex_wait do_futex sys_futex system_call_fastpath
Waiting for event (poll) 2.6 msec 0.9 %
poll_schedule_timeout do_sys_poll sys_poll system_call_fastpath
Direct block device IO 0.3 msec 0.2 %
__blockdev_direct_IO blkdev_direct_IO
generic_file_direct_write __generic_file_aio_write
blkdev_aio_write do_sync_write vfs_write sys_pwrite64 system_call_fastpath
[blkdev_issue_flush] 0.1 msec 0.0 %
blkdev_issue_flush blkdev_fsync vfs_fsync_range
generic_write_sync blkdev_aio_write do_sync_readv_writev
do_readv_writev vfs_writev sys_writev system_call_fastpath
Process btrfs-endio-met (5802) Total: 13.5 msec
[worker_loop] 4.2 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-worker-3 (5823) Total: 4.0 msec
[worker_loop] 2.8 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-worker-2 (5825) Total: 7.3 msec
[worker_loop] 2.7 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-met (5829) Total: 0.3 msec
[worker_loop] 0.3 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-met (5830) Total: 20.2 msec
[worker_loop] 4.2 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-met (5853) Total: 18.9 msec
[worker_loop] 4.7 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-met (5920) Total: 2.6 msec
[worker_loop] 2.6 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-met (5926) Total: 21.6 msec
[worker_loop] 4.8 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-wri (5927) Total: 10.4 msec
[worker_loop] 3.9 msec 99.8 %
worker_loop kthread kernel_thread_helper
Process btrfs-delalloc- (6023) Total: 8.9 msec
[worker_loop] 2.7 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-delalloc- (6316) Total: 0.2 msec
[worker_loop] 0.1 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process kworker/2:0 (6417) Total: 947.5 msec
. 81.2 msec 67.8 %
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work worker_thread kthread
kernel_thread_helper
[sleep_on_page] 31.8 msec 32.2 %
sleep_on_page __lock_page extent_write_cache_pages.clone.0
extent_writepages btrfs_writepages do_writepages
__filemap_fdatawrite_range filemap_flush
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work
Process kworker/5:2 (6697) Total: 0.2 msec
. 0.2 msec 100.0 %
kthread_create_on_node create_worker manage_workers
worker_thread kthread kernel_thread_helper
=============== Tue Jul 26 16:25:24 2011
Globals: Cause Maximum Percentage
[wait_for_commit] 352.3 msec 40.4 %
[btrfs_commit_transaction_async] 235.3 msec 22.3 %
[sleep_on_page] 95.5 msec 19.5 %
Marking inode dirty 63.4 msec 1.4 %
[synchronize_sched] 19.5 msec 6.1 %
Userspace lock contention 5.0 msec 8.3 %
Waiting for event (select) 5.0 msec 0.6 %
Waiting for event (poll) 4.9 msec 1.3 %
[blkdev_issue_flush] 0.9 msec 0.1 %
Process details:
Process ksoftirqd/0 (3) Total: 0.9 msec
[run_ksoftirqd] 0.9 msec 100.0 %
run_ksoftirqd kthread kernel_thread_helper
Process ksoftirqd/1 (10) Total: 46.0 msec
[run_ksoftirqd] 4.6 msec 100.0 %
run_ksoftirqd kthread kernel_thread_helper
Process ksoftirqd/2 (15) Total: 8.8 msec
[run_ksoftirqd] 4.8 msec 100.0 %
run_ksoftirqd kthread kernel_thread_helper
Process kworker/3:0 (18) Total: 500.2 msec
. 204.9 msec 70.3 %
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work worker_thread kthread
kernel_thread_helper
[sleep_on_page] 28.2 msec 29.7 %
sleep_on_page __lock_page extent_write_cache_pages.clone.0
extent_writepages btrfs_writepages do_writepages
__filemap_fdatawrite_range filemap_flush
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work
Process ksoftirqd/5 (27) Total: 3.2 msec
[run_ksoftirqd] 1.8 msec 100.0 %
run_ksoftirqd kthread kernel_thread_helper
Process kworker/7:0 (34) Total: 377.1 msec
. 82.1 msec 76.0 %
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work worker_thread kthread
kernel_thread_helper
[sleep_on_page] 31.1 msec 24.0 %
sleep_on_page __lock_page extent_write_cache_pages.clone.0
extent_writepages btrfs_writepages do_writepages
__filemap_fdatawrite_range filemap_flush
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work
Process kworker/5:1 (51) Total: 411.2 msec
. 116.0 msec 68.6 %
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work worker_thread kthread
kernel_thread_helper
[sleep_on_page] 26.0 msec 31.4 %
sleep_on_page __lock_page extent_write_cache_pages.clone.0
extent_writepages btrfs_writepages do_writepages
__filemap_fdatawrite_range filemap_flush
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work
Process fct0-worker (826) Total: 146.4 msec
[fusion_condvar_timedwait_noload] 4.5 msec 100.0 %
fusion_condvar_timedwait_noload ifio_5fb65.a9c484151da25a9eb60ef9a6e7309d1a95f.2.3.1.123
Process fioa-data-groom (858) Total: 105.3 msec
[fusion_condvar_wait] 2.1 msec 7.8 %
fusion_condvar_wait ifio_7495f.eadce65a4e06c525e92a3321007983e12d8.2.3.1.123
[fusion_condvar_timedwait_noload] 0.9 msec 92.2 %
fusion_condvar_timedwait_noload ifio_0d5cb.2e5d4455283b04989f2f9f0a242787cf79f.2.3.1.123
Process fio-submit-rq (859) Total: 47.3 msec
[fusion_condvar_timedwait_noload] 4.5 msec 100.0 %
fusion_condvar_timedwait_noload ifio_5de02.3ebd86695a57771148b5f50a3f899efeb97.2.3.1.123
Process rsyslogd (1402) Total: 83.1 msec
Userspace lock contention 5.0 msec 66.7 %
futex_wait_queue_me futex_wait do_futex sys_futex system_call_fastpath
Waiting for event (select) 5.0 msec 33.3 %
poll_schedule_timeout do_select core_sys_select sys_select
system_call_fastpath
Process btrfs-submit-0 (1623) Total: 4.3 msec
[worker_loop] 0.4 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process cosd (1647) Total: 786.8 msec
[wait_for_commit] 234.2 msec 29.8 %
wait_for_commit btrfs_wait_for_commit btrfs_ioctl_wait_sync
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[btrfs_commit_transaction_async] 129.8 msec 16.5 %
btrfs_commit_transaction_async btrfs_mksubvol
btrfs_ioctl_snap_create_transid btrfs_ioctl_snap_create_v2
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[sleep_on_page] 29.7 msec 23.2 %
sleep_on_page wait_on_page_bit prepare_pages
__btrfs_buffered_write btrfs_file_aio_write
do_sync_readv_writev do_readv_writev vfs_writev sys_writev
system_call_fastpath
[synchronize_sched] 15.6 msec 5.1 %
synchronize_sched __synchronize_srcu synchronize_srcu
btrfs_destroy_inode destroy_inode evict iput d_delete
btrfs_ioctl_snap_destroy btrfs_ioctl do_vfs_ioctl sys_ioctl
Userspace lock contention 4.9 msec 20.3 %
futex_wait_queue_me futex_wait do_futex sys_futex system_call_fastpath
Waiting for event (poll) 4.9 msec 4.8 %
poll_schedule_timeout do_sys_poll sys_poll system_call_fastpath
[blkdev_issue_flush] 0.9 msec 0.2 %
blkdev_issue_flush blkdev_fsync vfs_fsync_range
generic_write_sync blkdev_aio_write do_sync_readv_writev
do_readv_writev vfs_writev sys_writev system_call_fastpath
Direct block device IO 0.2 msec 0.2 %
__blockdev_direct_IO blkdev_direct_IO
generic_file_direct_write __generic_file_aio_write
blkdev_aio_write do_sync_write vfs_write sys_pwrite64 system_call_fastpath
Process btrfs-submit-0 (1730) Total: 8.5 msec
[worker_loop] 2.8 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process cosd (1755) Total: 1139.2 msec
[wait_for_commit] 282.4 msec 45.3 %
wait_for_commit btrfs_wait_for_commit btrfs_ioctl_wait_sync
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[btrfs_commit_transaction_async] 133.7 msec 19.6 %
btrfs_commit_transaction_async btrfs_mksubvol
btrfs_ioctl_snap_create_transid btrfs_ioctl_snap_create_v2
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[sleep_on_page] 51.8 msec 22.1 %
sleep_on_page wait_on_page_bit prepare_pages
__btrfs_buffered_write btrfs_file_aio_write
do_sync_readv_writev do_readv_writev vfs_writev sys_writev
system_call_fastpath
[synchronize_sched] 19.5 msec 7.4 %
synchronize_sched __synchronize_srcu synchronize_srcu
btrfs_destroy_inode destroy_inode evict iput d_delete
btrfs_ioctl_snap_destroy btrfs_ioctl do_vfs_ioctl sys_ioctl
Userspace lock contention 4.4 msec 4.6 %
futex_wait_queue_me futex_wait do_futex sys_futex system_call_fastpath
Waiting for event (poll) 4.4 msec 0.8 %
poll_schedule_timeout do_sys_poll sys_poll system_call_fastpath
Direct block device IO 0.2 msec 0.1 %
__blockdev_direct_IO blkdev_direct_IO
generic_file_direct_write __generic_file_aio_write
blkdev_aio_write do_sync_write vfs_write sys_pwrite64 system_call_fastpath
synchronous write 0.2 msec 0.0 %
blkdev_issue_flush blkdev_fsync vfs_fsync_range
generic_write_sync blkdev_aio_write do_sync_write vfs_write
sys_pwrite64 system_call_fastpath
[blkdev_issue_flush] 0.1 msec 0.0 %
blkdev_issue_flush blkdev_fsync vfs_fsync_range
generic_write_sync blkdev_aio_write do_sync_readv_writev
do_readv_writev vfs_writev sys_writev system_call_fastpath
[wait_extent_bit] 0.1 msec 0.0 %
wait_extent_bit lock_extent_bits prepare_pages
__btrfs_buffered_write btrfs_file_aio_write
do_sync_readv_writev do_readv_writev vfs_writev sys_writev
system_call_fastpath
[btrfs_start_ordered_extent] 0.1 msec 0.0 %
btrfs_start_ordered_extent btrfs_wait_ordered_range
prepare_pages __btrfs_buffered_write btrfs_file_aio_write
do_sync_readv_writev do_readv_writev vfs_writev sys_writev
system_call_fastpath
Process btrfs-submit-0 (1839) Total: 10.2 msec
[worker_loop] 3.3 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-cleaner (1862) Total: 84.0 msec
[synchronize_sched] 15.6 msec 100.0 %
synchronize_sched __synchronize_srcu synchronize_srcu
btrfs_free_fs_root btrfs_drop_snapshot
btrfs_clean_old_snapshots cleaner_kthread kthread kernel_thread_helper
Process cosd (1875) Total: 1029.0 msec
[wait_for_commit] 283.1 msec 45.8 %
wait_for_commit btrfs_wait_for_commit btrfs_ioctl_wait_sync
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[btrfs_commit_transaction_async] 235.3 msec 41.3 %
btrfs_commit_transaction_async btrfs_mksubvol
btrfs_ioctl_snap_create_transid btrfs_ioctl_snap_create_v2
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[sleep_on_page] 22.0 msec 4.0 %
sleep_on_page wait_on_page_bit prepare_pages
__btrfs_buffered_write btrfs_file_aio_write
do_sync_readv_writev do_readv_writev vfs_writev sys_writev
system_call_fastpath
[synchronize_sched] 15.6 msec 7.7 %
synchronize_sched __synchronize_srcu synchronize_srcu
btrfs_destroy_inode destroy_inode evict iput d_delete
btrfs_ioctl_snap_destroy btrfs_ioctl do_vfs_ioctl sys_ioctl
Userspace lock contention 3.2 msec 1.2 %
futex_wait_queue_me futex_wait do_futex sys_futex system_call_fastpath
Direct block device IO 0.1 msec 0.1 %
__blockdev_direct_IO blkdev_direct_IO
generic_file_direct_write __generic_file_aio_write
blkdev_aio_write do_sync_write vfs_write sys_pwrite64 system_call_fastpath
Process btrfs-endio-met (1959) Total: 0.5 msec
[worker_loop] 0.5 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-submit-0 (1965) Total: 12.9 msec
[worker_loop] 3.3 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-delalloc- (1966) Total: 22.2 msec
[worker_loop] 3.7 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-met (1970) Total: 0.4 msec
[worker_loop] 0.4 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process kworker/0:3 (1977) Total: 14.1 msec
. 4.4 msec 100.0 %
worker_thread kthread kernel_thread_helper
Process btrfs-worker-1 (2002) Total: 1.6 msec
[worker_loop] 0.4 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process cosd (2017) Total: 1569.4 msec
[wait_for_commit] 352.3 msec 40.7 %
wait_for_commit btrfs_wait_for_commit btrfs_ioctl_wait_sync
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[btrfs_commit_transaction_async] 153.7 msec 15.8 %
btrfs_commit_transaction_async btrfs_mksubvol
btrfs_ioctl_snap_create_transid btrfs_ioctl_snap_create_v2
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[sleep_on_page] 95.5 msec 27.0 %
sleep_on_page __lock_page prepare_uptodate_page prepare_pages
__btrfs_buffered_write btrfs_file_aio_write
do_sync_readv_writev do_readv_writev vfs_writev sys_writev
system_call_fastpath
Marking inode dirty 63.4 msec 4.0 %
wait_current_trans start_transaction btrfs_join_transaction
btrfs_dirty_inode __mark_inode_dirty file_update_time
btrfs_file_aio_write do_sync_readv_writev do_readv_writev
vfs_writev sys_writev system_call_fastpath
[synchronize_sched] 15.6 msec 5.0 %
synchronize_sched __synchronize_srcu synchronize_srcu
btrfs_destroy_inode destroy_inode evict iput d_delete
btrfs_ioctl_snap_destroy btrfs_ioctl do_vfs_ioctl sys_ioctl
Userspace lock contention 4.2 msec 6.5 %
futex_wait_queue_me futex_wait do_futex sys_futex system_call_fastpath
Waiting for event (poll) 2.2 msec 0.7 %
poll_schedule_timeout do_sys_poll sys_poll system_call_fastpath
Direct block device IO 0.2 msec 0.1 %
__blockdev_direct_IO blkdev_direct_IO
generic_file_direct_write __generic_file_aio_write
blkdev_aio_write do_sync_write vfs_write sys_pwrite64 system_call_fastpath
[blkdev_issue_flush] 0.2 msec 0.1 %
blkdev_issue_flush blkdev_fsync vfs_fsync_range
generic_write_sync blkdev_aio_write do_sync_readv_writev
do_readv_writev vfs_writev sys_writev system_call_fastpath
Process btrfs-endio-met (5802) Total: 7.4 msec
[worker_loop] 4.3 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-worker-2 (5822) Total: 7.0 msec
[worker_loop] 3.3 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-worker-2 (5825) Total: 5.5 msec
[worker_loop] 3.3 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-met (5829) Total: 0.3 msec
[worker_loop] 0.3 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-met (5830) Total: 43.8 msec
[worker_loop] 4.9 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-delalloc- (5831) Total: 7.1 msec
[worker_loop] 2.7 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-met (5835) Total: 0.3 msec
[worker_loop] 0.3 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-met (5849) Total: 0.3 msec
[worker_loop] 0.3 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-met (5853) Total: 28.4 msec
[worker_loop] 4.7 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-met (5920) Total: 27.9 msec
[worker_loop] 4.6 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-wri (5924) Total: 5.5 msec
[worker_loop] 2.4 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-met (5926) Total: 10.2 msec
[worker_loop] 4.4 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-wri (5927) Total: 15.2 msec
[worker_loop] 4.5 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-delalloc- (6023) Total: 5.5 msec
[worker_loop] 3.3 msec 99.4 %
worker_loop kthread kernel_thread_helper
Process btrfs-worker-3 (6081) Total: 3.3 msec
[worker_loop] 2.7 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-delalloc- (6316) Total: 1.6 msec
[worker_loop] 0.4 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-wri (6378) Total: 8.8 msec
[worker_loop] 4.5 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process kworker/6:2 (6393) Total: 872.8 msec
. 139.8 msec 59.1 %
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work worker_thread kthread
kernel_thread_helper
[sleep_on_page] 31.9 msec 40.9 %
sleep_on_page __lock_page extent_write_cache_pages.clone.0
extent_writepages btrfs_writepages do_writepages
__filemap_fdatawrite_range filemap_flush
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work
Process kworker/2:0 (6417) Total: 372.0 msec
. 80.1 msec 50.5 %
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work worker_thread kthread
kernel_thread_helper
[sleep_on_page] 30.7 msec 49.5 %
sleep_on_page __lock_page extent_write_cache_pages.clone.0
extent_writepages btrfs_writepages do_writepages
__filemap_fdatawrite_range filemap_flush
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work
Process kworker/1:0 (6584) Total: 317.6 msec
. 65.4 msec 73.4 %
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work worker_thread kthread
kernel_thread_helper
[sleep_on_page] 21.9 msec 26.6 %
sleep_on_page __lock_page extent_write_cache_pages.clone.0
extent_writepages btrfs_writepages do_writepages
__filemap_fdatawrite_range filemap_flush
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work
Process kworker/7:2 (6700) Total: 0.1 msec
. 0.1 msec 100.0 %
kthread_create_on_node create_worker manage_workers
worker_thread kthread kernel_thread_helper
[-- Attachment #3: ltop-20110727.txt --]
[-- Type: text/plain, Size: 27711 bytes --]
=============== Wed Jul 27 10:26:20 2011
Globals: Cause Maximum Percentage
[wait_for_commit] 4340.2 msec 53.8 %
[sleep_on_page] 2764.9 msec 36.6 %
[btrfs_commit_transaction_async] 368.1 msec 3.1 %
[synchronize_sched] 19.5 msec 0.6 %
Reading from file 15.0 msec 0.2 %
Userspace lock contention 5.0 msec 4.5 %
Waiting for event (poll) 3.9 msec 0.8 %
Waiting for event (select) 2.9 msec 0.3 %
Direct block device IO 1.1 msec 0.0 %
Process details:
Process ksoftirqd/0 (3) Total: 1.0 msec
[run_ksoftirqd] 1.0 msec 100.0 %
run_ksoftirqd kthread kernel_thread_helper
Process ksoftirqd/1 (10) Total: 102.9 msec
[run_ksoftirqd] 3.9 msec 100.0 %
run_ksoftirqd kthread kernel_thread_helper
Process ksoftirqd/2 (15) Total: 9.8 msec
[run_ksoftirqd] 4.9 msec 100.0 %
run_ksoftirqd kthread kernel_thread_helper
Process ksoftirqd/5 (27) Total: 65.5 msec
[run_ksoftirqd] 2.4 msec 100.0 %
run_ksoftirqd kthread kernel_thread_helper
Process kjournald (390) Total: 0.2 msec
EXT3: committing transaction 0.1 msec 100.0 %
sleep_on_buffer __wait_on_buffer journal_commit_transaction
kjournald kthread kernel_thread_helper
Process fct0-worker (826) Total: 147.5 msec
[fusion_condvar_timedwait_noload] 3.9 msec 100.0 %
fusion_condvar_timedwait_noload ifio_5fb65.a9c484151da25a9eb60ef9a6e7309d1a95f.2.3.1.123
Process fioa-data-groom (858) Total: 44.5 msec
[fusion_condvar_wait] 2.6 msec 11.9 %
fusion_condvar_wait ifio_7495f.eadce65a4e06c525e92a3321007983e12d8.2.3.1.123
[fusion_condvar_timedwait_noload] 0.7 msec 88.1 %
fusion_condvar_timedwait_noload ifio_0d5cb.2e5d4455283b04989f2f9f0a242787cf79f.2.3.1.123
Process fio-submit-rq (859) Total: 109.9 msec
[fusion_condvar_timedwait_noload] 3.9 msec 100.0 %
fusion_condvar_timedwait_noload ifio_5de02.3ebd86695a57771148b5f50a3f899efeb97.2.3.1.123
Process rsyslogd (1402) Total: 231.7 msec
Userspace lock contention 2.9 msec 66.8 %
futex_wait_queue_me futex_wait do_futex sys_futex system_call_fastpath
Waiting for event (select) 2.9 msec 33.2 %
poll_schedule_timeout do_select core_sys_select sys_select
system_call_fastpath
Process ntpd (1479) Total: 0.2 msec
Waiting for event (select) 0.2 msec 100.0 %
poll_schedule_timeout do_select core_sys_select sys_select
system_call_fastpath
Process btrfs-submit-0 (1623) Total: 3452.8 msec
Creating block layer request 128.0 msec 99.9 %
get_request_wait __make_request generic_make_request
submit_bio run_scheduled_bios pending_bios_fn worker_loop
kthread kernel_thread_helper
[worker_loop] 2.8 msec 0.1 %
worker_loop kthread kernel_thread_helper
Process btrfs-cleaner (1634) Total: 43.1 msec
[synchronize_sched] 15.6 msec 100.0 %
synchronize_sched __synchronize_srcu synchronize_srcu
btrfs_free_fs_root btrfs_drop_snapshot
btrfs_clean_old_snapshots cleaner_kthread kthread kernel_thread_helper
Process cosd (1647) Total: 8852.0 msec
[wait_for_commit] 4151.3 msec 46.9 %
wait_for_commit btrfs_wait_for_commit btrfs_ioctl_wait_sync
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[sleep_on_page] 2764.9 msec 45.3 %
sleep_on_page wait_on_page_bit prepare_pages
__btrfs_buffered_write btrfs_file_aio_write
do_sync_readv_writev do_readv_writev vfs_writev sys_writev
system_call_fastpath
[btrfs_commit_transaction_async] 368.1 msec 4.2 %
btrfs_commit_transaction_async btrfs_mksubvol
btrfs_ioctl_snap_create_transid btrfs_ioctl_snap_create_v2
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[synchronize_sched] 15.6 msec 0.5 %
synchronize_sched __synchronize_srcu synchronize_srcu
btrfs_destroy_inode destroy_inode evict iput d_delete
btrfs_ioctl_snap_destroy btrfs_ioctl do_vfs_ioctl sys_ioctl
Userspace lock contention 5.0 msec 2.6 %
futex_wait_queue_me futex_wait do_futex sys_futex system_call_fastpath
Waiting for event (poll) 3.9 msec 0.5 %
poll_schedule_timeout do_sys_poll sys_poll system_call_fastpath
Direct block device IO 1.1 msec 0.0 %
__blockdev_direct_IO blkdev_direct_IO
generic_file_direct_write __generic_file_aio_write
blkdev_aio_write do_sync_readv_writev do_readv_writev
vfs_writev sys_writev system_call_fastpath
[blkdev_issue_flush] 0.1 msec 0.0 %
blkdev_issue_flush blkdev_fsync vfs_fsync_range
generic_write_sync blkdev_aio_write do_sync_readv_writev
do_readv_writev vfs_writev sys_writev system_call_fastpath
Process btrfs-submit-0 (1730) Total: 3595.8 msec
Creating block layer request 168.2 msec 99.9 %
get_request_wait __make_request generic_make_request
submit_bio run_scheduled_bios pending_bios_fn worker_loop
kthread kernel_thread_helper
[worker_loop] 1.6 msec 0.1 %
worker_loop kthread kernel_thread_helper
Process cosd (1755) Total: 6232.1 msec
[wait_for_commit] 4065.7 msec 65.2 %
wait_for_commit btrfs_wait_for_commit btrfs_ioctl_wait_sync
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[btrfs_commit_transaction_async] 187.6 msec 3.0 %
btrfs_commit_transaction_async btrfs_mksubvol
btrfs_ioctl_snap_create_transid btrfs_ioctl_snap_create_v2
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[sleep_on_page] 34.8 msec 21.1 %
sleep_on_page __lock_page find_lock_page find_or_create_page
prepare_pages __btrfs_buffered_write btrfs_file_aio_write
do_sync_readv_writev do_readv_writev vfs_writev sys_writev
system_call_fastpath
[synchronize_sched] 19.5 msec 0.7 %
synchronize_sched __synchronize_srcu synchronize_srcu
btrfs_destroy_inode destroy_inode evict iput d_delete
btrfs_ioctl_snap_destroy btrfs_ioctl do_vfs_ioctl sys_ioctl
Userspace lock contention 4.4 msec 7.9 %
futex_wait_queue_me futex_wait do_futex sys_futex system_call_fastpath
Waiting for event (poll) 3.2 msec 1.9 %
poll_schedule_timeout do_sys_poll sys_poll system_call_fastpath
[blkdev_issue_flush] 0.2 msec 0.0 %
blkdev_issue_flush blkdev_fsync vfs_fsync_range
generic_write_sync blkdev_aio_write do_sync_readv_writev
do_readv_writev vfs_writev sys_writev system_call_fastpath
Direct block device IO 0.1 msec 0.0 %
__blockdev_direct_IO blkdev_direct_IO
generic_file_direct_write __generic_file_aio_write
blkdev_aio_write do_sync_readv_writev do_readv_writev
vfs_writev sys_writev system_call_fastpath
Process btrfs-submit-0 (1839) Total: 833.7 msec
Creating block layer request 137.6 msec 100.0 %
get_request_wait __make_request generic_make_request
submit_bio run_scheduled_bios pending_bios_fn worker_loop
kthread kernel_thread_helper
Process cosd (1875) Total: 120.7 msec
[btrfs_commit_transaction_async] 53.8 msec 44.6 %
btrfs_commit_transaction_async btrfs_mksubvol
btrfs_ioctl_snap_create_transid btrfs_ioctl_snap_create_v2
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[sleep_on_page] 36.2 msec 30.1 %
sleep_on_page __lock_page extent_write_cache_pages.clone.0
extent_writepages btrfs_writepages do_writepages
__filemap_fdatawrite_range filemap_fdatawrite_range
sys_sync_file_range system_call_fastpath
Userspace lock contention 2.9 msec 20.3 %
futex_wait_queue_me futex_wait do_futex sys_futex system_call_fastpath
Waiting for event (poll) 1.7 msec 4.8 %
poll_schedule_timeout do_sys_poll sys_poll system_call_fastpath
Direct block device IO 0.2 msec 0.2 %
__blockdev_direct_IO blkdev_direct_IO
generic_file_direct_write __generic_file_aio_write
blkdev_aio_write do_sync_write vfs_write sys_pwrite64 system_call_fastpath
Process btrfs-worker-2 (1957) Total: 4.0 msec
[worker_loop] 1.6 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-met (1960) Total: 943.5 msec
[worker_loop] 5.0 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-submit-0 (1965) Total: 3684.4 msec
Creating block layer request 138.5 msec 99.9 %
get_request_wait __make_request generic_make_request
submit_bio run_scheduled_bios pending_bios_fn worker_loop
kthread kernel_thread_helper
[worker_loop] 2.0 msec 0.1 %
worker_loop kthread kernel_thread_helper
Process btrfs-delalloc- (1966) Total: 8.3 msec
[worker_loop] 2.0 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-0 (1968) Total: 6.6 msec
[worker_loop] 4.5 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process cosd (2017) Total: 7915.1 msec
[wait_for_commit] 4340.2 msec 54.8 %
wait_for_commit btrfs_wait_for_commit btrfs_ioctl_wait_sync
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[sleep_on_page] 2748.2 msec 40.3 %
sleep_on_page wait_on_page_bit prepare_pages
__btrfs_buffered_write btrfs_file_aio_write
do_sync_readv_writev do_readv_writev vfs_writev sys_writev
system_call_fastpath
[btrfs_commit_transaction_async] 120.2 msec 1.5 %
btrfs_commit_transaction_async btrfs_mksubvol
btrfs_ioctl_snap_create_transid btrfs_ioctl_snap_create_v2
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[synchronize_sched] 16.5 msec 0.5 %
synchronize_sched __synchronize_srcu synchronize_srcu
btrfs_destroy_inode destroy_inode evict iput d_delete
btrfs_ioctl_snap_destroy btrfs_ioctl do_vfs_ioctl sys_ioctl
Reading from file 15.0 msec 0.5 %
sleep_on_page_killable __lock_page_killable
generic_file_aio_read do_sync_read vfs_read sys_pread64
system_call_fastpath
Userspace lock contention 4.3 msec 1.9 %
futex_wait_queue_me futex_wait do_futex sys_futex system_call_fastpath
Waiting for event (poll) 2.5 msec 0.3 %
poll_schedule_timeout do_sys_poll sys_poll system_call_fastpath
Direct block device IO 1.1 msec 0.1 %
__blockdev_direct_IO blkdev_direct_IO
generic_file_direct_write __generic_file_aio_write
blkdev_aio_write do_sync_readv_writev do_readv_writev
vfs_writev sys_writev system_call_fastpath
[blkdev_issue_flush] 0.1 msec 0.0 %
blkdev_issue_flush blkdev_fsync vfs_fsync_range
generic_write_sync blkdev_aio_write do_sync_readv_writev
do_readv_writev vfs_writev sys_writev system_call_fastpath
Process btrfs-worker-3 (5659) Total: 7.0 msec
[worker_loop] 2.7 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-worker-2 (5822) Total: 8.3 msec
[worker_loop] 2.0 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-worker-3 (5823) Total: 0.1 msec
[worker_loop] 0.1 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-wri (5924) Total: 21.8 msec
[worker_loop] 5.0 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-wri (5927) Total: 24.2 msec
[worker_loop] 4.7 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-delalloc- (6316) Total: 11.1 msec
[worker_loop] 3.5 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-worker-3 (6330) Total: 0.2 msec
[worker_loop] 0.2 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-delalloc- (6364) Total: 4.6 msec
[worker_loop] 1.6 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-wri (6378) Total: 13.1 msec
[worker_loop] 3.5 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-met (9082) Total: 1066.6 msec
[worker_loop] 5.0 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-met (13693) Total: 202.9 msec
[worker_loop] 5.0 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-met (16513) Total: 870.7 msec
[worker_loop] 5.0 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process kworker/2:3 (16611) Total: 4448.5 msec
. 438.7 msec 97.4 %
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work worker_thread kthread
kernel_thread_helper
[sleep_on_page] 23.6 msec 2.6 %
sleep_on_page __lock_page extent_write_cache_pages.clone.0
extent_writepages btrfs_writepages do_writepages
__filemap_fdatawrite_range filemap_flush
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work
Process kworker/0:3 (16848) Total: 21.0 msec
. 4.7 msec 100.0 %
worker_thread kthread kernel_thread_helper
Process kworker/2:0 (16860) Total: 4506.7 msec
. 487.8 msec 92.0 %
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work worker_thread kthread
kernel_thread_helper
[sleep_on_page] 33.0 msec 8.0 %
sleep_on_page __lock_page extent_write_cache_pages.clone.0
extent_writepages btrfs_writepages do_writepages
__filemap_fdatawrite_range filemap_flush
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work
Process kworker/1:1 (16888) Total: 5145.6 msec
. 954.5 msec 95.5 %
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work worker_thread kthread
kernel_thread_helper
[sleep_on_page] 31.7 msec 4.5 %
sleep_on_page __lock_page extent_write_cache_pages.clone.0
extent_writepages btrfs_writepages do_writepages
__filemap_fdatawrite_range filemap_flush
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work
=============== Wed Jul 27 10:26:30 2011
Globals: Cause Maximum Percentage
[wait_for_commit] 4123.5 msec 63.9 %
[sleep_on_page] 2281.3 msec 25.4 %
[btrfs_commit_transaction_async] 385.4 msec 5.6 %
Marking inode dirty 49.7 msec 0.3 %
[synchronize_sched] 19.4 msec 1.0 %
Userspace lock contention 4.7 msec 3.1 %
Waiting for event (poll) 4.4 msec 0.7 %
Waiting for event (select) 2.4 msec 0.1 %
Direct block device IO 1.1 msec 0.1 %
Process details:
Process ksoftirqd/1 (10) Total: 32.7 msec
[run_ksoftirqd] 4.4 msec 100.0 %
run_ksoftirqd kthread kernel_thread_helper
Process ksoftirqd/2 (15) Total: 4.9 msec
[run_ksoftirqd] 4.9 msec 100.0 %
run_ksoftirqd kthread kernel_thread_helper
Process ksoftirqd/5 (27) Total: 11.3 msec
[run_ksoftirqd] 3.9 msec 100.0 %
run_ksoftirqd kthread kernel_thread_helper
Process ksoftirqd/6 (31) Total: 4.9 msec
[run_ksoftirqd] 4.9 msec 100.0 %
run_ksoftirqd kthread kernel_thread_helper
Process fct0-worker (826) Total: 124.8 msec
[fusion_condvar_timedwait_noload] 4.4 msec 100.0 %
fusion_condvar_timedwait_noload ifio_5fb65.a9c484151da25a9eb60ef9a6e7309d1a95f.2.3.1.123
Process fioa-data-groom (858) Total: 98.5 msec
[fusion_condvar_wait] 2.2 msec 8.3 %
fusion_condvar_wait ifio_7495f.eadce65a4e06c525e92a3321007983e12d8.2.3.1.123
[fusion_condvar_timedwait_noload] 1.2 msec 91.7 %
fusion_condvar_timedwait_noload ifio_0d5cb.2e5d4455283b04989f2f9f0a242787cf79f.2.3.1.123
Process fio-submit-rq (859) Total: 43.3 msec
[fusion_condvar_timedwait_noload] 4.6 msec 100.0 %
fusion_condvar_timedwait_noload ifio_5de02.3ebd86695a57771148b5f50a3f899efeb97.2.3.1.123
Process rsyslogd (1402) Total: 52.1 msec
Userspace lock contention 2.5 msec 66.8 %
futex_wait_queue_me futex_wait do_futex sys_futex system_call_fastpath
Waiting for event (select) 2.4 msec 33.2 %
poll_schedule_timeout do_select core_sys_select sys_select
system_call_fastpath
Process btrfs-submit-0 (1623) Total: 3659.5 msec
Creating block layer request 167.5 msec 99.9 %
get_request_wait __make_request generic_make_request
submit_bio run_scheduled_bios pending_bios_fn worker_loop
kthread kernel_thread_helper
[worker_loop] 2.0 msec 0.1 %
worker_loop kthread kernel_thread_helper
Process cosd (1647) Total: 7271.5 msec
[wait_for_commit] 4123.5 msec 56.7 %
wait_for_commit btrfs_wait_for_commit btrfs_ioctl_wait_sync
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[sleep_on_page] 2281.3 msec 34.3 %
sleep_on_page __lock_page find_lock_page find_or_create_page
prepare_pages __btrfs_buffered_write btrfs_file_aio_write
do_sync_readv_writev do_readv_writev vfs_writev sys_writev
system_call_fastpath
[btrfs_commit_transaction_async] 385.4 msec 5.3 %
btrfs_commit_transaction_async btrfs_mksubvol
btrfs_ioctl_snap_create_transid btrfs_ioctl_snap_create_v2
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[synchronize_sched] 19.4 msec 0.6 %
synchronize_sched __synchronize_srcu synchronize_srcu
btrfs_destroy_inode destroy_inode evict iput d_delete
btrfs_ioctl_snap_destroy btrfs_ioctl do_vfs_ioctl sys_ioctl
Userspace lock contention 4.3 msec 2.5 %
futex_wait_queue_me futex_wait do_futex sys_futex system_call_fastpath
Waiting for event (poll) 4.2 msec 0.6 %
poll_schedule_timeout do_sys_poll sys_poll system_call_fastpath
Direct block device IO 1.0 msec 0.1 %
__blockdev_direct_IO blkdev_direct_IO
generic_file_direct_write __generic_file_aio_write
blkdev_aio_write do_sync_readv_writev do_readv_writev
vfs_writev sys_writev system_call_fastpath
Process btrfs-submit-0 (1730) Total: 2389.7 msec
Creating block layer request 147.9 msec 99.7 %
get_request_wait __make_request generic_make_request
submit_bio run_scheduled_bios pending_bios_fn worker_loop
kthread kernel_thread_helper
[worker_loop] 4.8 msec 0.3 %
worker_loop kthread kernel_thread_helper
Process cosd (1755) Total: 3460.0 msec
[wait_for_commit] 2982.4 msec 86.2 %
wait_for_commit btrfs_wait_for_commit btrfs_ioctl_wait_sync
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[btrfs_commit_transaction_async] 174.8 msec 5.1 %
btrfs_commit_transaction_async btrfs_mksubvol
btrfs_ioctl_snap_create_transid btrfs_ioctl_snap_create_v2
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[synchronize_sched] 15.7 msec 1.4 %
synchronize_sched __synchronize_srcu synchronize_srcu
btrfs_destroy_inode destroy_inode evict iput d_delete
btrfs_ioctl_snap_destroy btrfs_ioctl do_vfs_ioctl sys_ioctl
Userspace lock contention 4.7 msec 5.9 %
futex_wait_queue_me futex_wait do_futex sys_futex system_call_fastpath
Waiting for event (poll) 4.4 msec 1.5 %
poll_schedule_timeout do_sys_poll sys_poll system_call_fastpath
Direct block device IO 1.0 msec 0.1 %
__blockdev_direct_IO blkdev_direct_IO
generic_file_direct_write __generic_file_aio_write
blkdev_aio_write do_sync_write vfs_write sys_pwrite64 system_call_fastpath
Process btrfs-submit-0 (1839) Total: 807.7 msec
Creating block layer request 99.6 msec 100.0 %
get_request_wait __make_request generic_make_request
submit_bio run_scheduled_bios pending_bios_fn worker_loop
kthread kernel_thread_helper
Process btrfs-worker-1 (1849) Total: 11.9 msec
[worker_loop] 4.8 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process cosd (1875) Total: 2336.1 msec
[wait_for_commit] 2251.8 msec 96.4 %
wait_for_commit btrfs_wait_for_commit btrfs_ioctl_wait_sync
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[synchronize_sched] 15.6 msec 1.7 %
synchronize_sched __synchronize_srcu synchronize_srcu
btrfs_destroy_inode destroy_inode evict iput d_delete
btrfs_ioctl_snap_destroy btrfs_ioctl do_vfs_ioctl sys_ioctl
Userspace lock contention 4.0 msec 1.5 %
futex_wait_queue_me futex_wait do_futex sys_futex system_call_fastpath
Waiting for event (poll) 1.6 msec 0.3 %
poll_schedule_timeout do_sys_poll sys_poll system_call_fastpath
Direct block device IO 1.1 msec 0.1 %
__blockdev_direct_IO blkdev_direct_IO
generic_file_direct_write __generic_file_aio_write
blkdev_aio_write do_sync_write vfs_write sys_pwrite64 system_call_fastpath
Process btrfs-endio-met (1960) Total: 629.3 msec
[worker_loop] 5.0 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-submit-0 (1965) Total: 3362.5 msec
Creating block layer request 127.8 msec 99.9 %
get_request_wait __make_request generic_make_request
submit_bio run_scheduled_bios pending_bios_fn worker_loop
kthread kernel_thread_helper
[worker_loop] 0.7 msec 0.1 %
worker_loop kthread kernel_thread_helper
Process btrfs-delalloc- (1966) Total: 14.8 msec
[worker_loop] 4.6 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-cleaner (2004) Total: 179.2 msec
[synchronize_sched] 19.5 msec 100.0 %
synchronize_sched __synchronize_srcu synchronize_srcu
btrfs_free_fs_root btrfs_drop_snapshot
btrfs_clean_old_snapshots cleaner_kthread kthread kernel_thread_helper
Process cosd (2017) Total: 4490.1 msec
[wait_for_commit] 1895.6 msec 42.2 %
wait_for_commit btrfs_wait_for_commit btrfs_ioctl_wait_sync
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
[sleep_on_page] 1515.0 msec 44.1 %
sleep_on_page wait_on_page_bit prepare_pages
__btrfs_buffered_write btrfs_file_aio_write
do_sync_readv_writev do_readv_writev vfs_writev sys_writev
system_call_fastpath
[btrfs_commit_transaction_async] 263.6 msec 9.3 %
btrfs_commit_transaction_async btrfs_mksubvol
btrfs_ioctl_snap_create_transid btrfs_ioctl_snap_create_v2
btrfs_ioctl do_vfs_ioctl sys_ioctl system_call_fastpath
Marking inode dirty 49.7 msec 1.1 %
wait_current_trans start_transaction btrfs_join_transaction
btrfs_dirty_inode __mark_inode_dirty file_update_time
btrfs_file_aio_write do_sync_readv_writev do_readv_writev
vfs_writev sys_writev system_call_fastpath
[synchronize_sched] 15.6 msec 0.9 %
synchronize_sched __synchronize_srcu synchronize_srcu
btrfs_destroy_inode destroy_inode evict iput d_delete
btrfs_ioctl_snap_destroy btrfs_ioctl do_vfs_ioctl sys_ioctl
Userspace lock contention 4.7 msec 1.9 %
futex_wait_queue_me futex_wait do_futex sys_futex system_call_fastpath
Waiting for event (poll) 2.8 msec 0.3 %
poll_schedule_timeout do_sys_poll sys_poll system_call_fastpath
Direct block device IO 1.0 msec 0.0 %
__blockdev_direct_IO blkdev_direct_IO
generic_file_direct_write __generic_file_aio_write
blkdev_aio_write do_sync_write vfs_write sys_pwrite64 system_call_fastpath
Process btrfs-worker-3 (5659) Total: 9.4 msec
[worker_loop] 4.2 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-worker-2 (5822) Total: 10.8 msec
[worker_loop] 4.1 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-worker-3 (5823) Total: 0.3 msec
[worker_loop] 0.2 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-worker-2 (5825) Total: 1.6 msec
[worker_loop] 1.2 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-wri (5924) Total: 10.0 msec
[worker_loop] 4.5 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-wri (5927) Total: 11.7 msec
[worker_loop] 4.4 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-delalloc- (6023) Total: 5.5 msec
[worker_loop] 4.0 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-delalloc- (6316) Total: 9.6 msec
[worker_loop] 4.2 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-worker-3 (6330) Total: 0.3 msec
[worker_loop] 0.2 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-delalloc- (6364) Total: 21.8 msec
[worker_loop] 4.7 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-wri (6378) Total: 6.1 msec
[worker_loop] 4.7 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process sshd (6612) Total: 0.5 msec
Waiting for event (select) 0.5 msec 89.7 %
poll_schedule_timeout do_select core_sys_select sys_select
system_call_fastpath
Process btrfs-endio-met (9082) Total: 1039.6 msec
[worker_loop] 5.0 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-met (13693) Total: 428.1 msec
[worker_loop] 5.0 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process btrfs-endio-met (16513) Total: 1115.7 msec
[worker_loop] 5.0 msec 100.0 %
worker_loop kthread kernel_thread_helper
Process kworker/6:3 (16810) Total: 2043.9 msec
. 265.6 msec 92.6 %
worker_thread kthread kernel_thread_helper
[sleep_on_page] 29.1 msec 7.4 %
sleep_on_page __lock_page extent_write_cache_pages.clone.0
extent_writepages btrfs_writepages do_writepages
__filemap_fdatawrite_range filemap_flush
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work
Process kworker/u:0 (16837) Total: 7.8 msec
. 3.9 msec 100.0 %
worker_thread kthread kernel_thread_helper
Process kworker/0:3 (16848) Total: 4.4 msec
. 2.8 msec 100.0 %
worker_thread kthread kernel_thread_helper
Process kworker/2:0 (16860) Total: 2248.5 msec
. 290.0 msec 88.5 %
worker_thread kthread kernel_thread_helper
[sleep_on_page] 25.6 msec 11.5 %
sleep_on_page __lock_page extent_write_cache_pages.clone.0
extent_writepages btrfs_writepages do_writepages
__filemap_fdatawrite_range filemap_flush
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work
Process kworker/6:0 (16868) Total: 4494.8 msec
. 588.6 msec 91.6 %
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work worker_thread kthread
kernel_thread_helper
[sleep_on_page] 27.4 msec 8.4 %
sleep_on_page __lock_page extent_write_cache_pages.clone.0
extent_writepages btrfs_writepages do_writepages
__filemap_fdatawrite_range filemap_flush
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work
Process kworker/1:1 (16888) Total: 1354.1 msec
. 167.7 msec 100.0 %
sleep_on_page wait_on_page_bit btrfs_wait_marked_extents
btrfs_write_and_wait_marked_extents
btrfs_write_and_wait_transaction btrfs_commit_transaction
do_async_commit process_one_work worker_thread kthread
kernel_thread_helper
Process kworker/5:0 (16891) Total: 3148.9 msec
. 796.0 msec 94.6 %
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work worker_thread kthread
kernel_thread_helper
[sleep_on_page] 29.3 msec 5.4 %
sleep_on_page __lock_page extent_write_cache_pages.clone.0
extent_writepages btrfs_writepages do_writepages
__filemap_fdatawrite_range filemap_flush
btrfs_start_delalloc_inodes btrfs_commit_transaction
do_async_commit process_one_work
Process sshd (16911) Total: 0.2 msec
Waiting for TTY data 0.1 msec 100.0 %
flush_work tty_flush_to_ldisc n_tty_poll tty_poll do_select
core_sys_select sys_select system_call_fastpath
Process tee (16929) Total: 0.3 msec
Writing data to TTY 0.1 msec 100.0 %
n_tty_write tty_write vfs_write sys_write system_call_fastpath
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: Btrfs slowdown
2011-07-27 8:41 ` Christian Brunner
@ 2011-07-28 4:05 ` Marcus Sorensen
2011-07-28 15:10 ` Christian Brunner
2011-08-08 21:58 ` Sage Weil
1 sibling, 1 reply; 14+ messages in thread
From: Marcus Sorensen @ 2011-07-28 4:05 UTC (permalink / raw)
To: chb; +Cc: Chris Mason, linux-btrfs, ceph-devel
Christian,
Have you checked up on the disks themselves and hardware? High
utilization can mean that the i/o load has increased, but it can also
mean that the i/o capacity has decreased. Your traces seem to
indicate that a good portion of the time is being spent on commits,
which could be waiting on disk. That "wait_for_commit" looks to
basically just spin waiting for the commit to complete, and at least
one of its callers raises a BUG_ON; I'm not sure if it's one you've
seen even on 2.6.38.
There could be all sorts of performance-related reasons that aren't
specific to btrfs or ceph; on our various systems we've seen things
like the raid card module being upgraded in newer kernels and suddenly
our disks starting to go into sleep mode after a bit, dirty_ratio causing
multiple gigs of memory to sync because it's not optimized for the
workload, external SAS enclosures that stop communicating a few days after
reboot (but the disks keep working, with sporadic issues), things like
patrol read hitting a bad sector on a disk, causing it to go into
enhanced error recovery and stop responding, etc.
Maybe you have already tried these things; it's where I would start
anyway. Look at /proc/meminfo (dirty, writeback, swap, etc.) both
while the system is functioning well and while it's misbehaving.
Look at anything else that might be in D state. Look at not just
disk util, but the workload causing it (e.g. was I previously doing 300
iops at an average size of 64k, and am I now only managing 50 iops
at 64k before disk util reports 100%?). Test the system in
a filesystem-agnostic manner: for example, when performance is bad
through btrfs, is raw performance on /dev/sdb (or whatever) the same
as you got on a fresh boot? You're not by chance swapping, after a bit
of uptime, on a volume that shares the underlying disks that make up
your OSD, obscured by a hardware RAID? I didn't see the kernel warning
you're referring to, just the ixgbe malloc failure you mentioned the
other day.
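The iops-per-%util comparison suggested above can be done with plain /proc/diskstats; the following is only a rough sketch (the device name and sampling interval are placeholders, not anything from the thread):

```shell
#!/bin/sh
# Sketch: sample /proc/diskstats twice and derive IOPS and %util for one
# device, to distinguish "100% busy serving 300 iops" from "100% busy
# serving 50". DEV and INTERVAL are placeholders - adjust to your setup.
DEV=${1:-sda}
INTERVAL=${2:-2}

# Fields per Documentation/iostats.txt: $3 = device name,
# $4 = reads completed, $8 = writes completed, $13 = ms spent doing I/O.
snap() { awk -v d="$DEV" '$3 == d { print $4, $8, $13 }' /proc/diskstats; }

if [ -n "$(snap 2>/dev/null)" ]; then
    set -- $(snap); r1=$1; w1=$2; t1=$3
    sleep "$INTERVAL"
    set -- $(snap); r2=$1; w2=$2; t2=$3
    echo "reads/s:  $(( (r2 - r1) / INTERVAL ))"
    echo "writes/s: $(( (w2 - w1) / INTERVAL ))"
    # util% = (ms spent doing I/O) / (INTERVAL * 1000 ms) * 100
    echo "util%:    $(( (t2 - t1) / (INTERVAL * 10) ))"
else
    echo "device $DEV not found in /proc/diskstats"
fi
```

Run it once right after boot and again when the slowdown has set in; if util% pegs near 100 while reads/s plus writes/s drops sharply, the capacity side (disk, controller, firmware) is the suspect rather than the request load.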
I do not mean to presume that you have not looked at these things
already. I am not very knowledgeable in btrfs specifically, but I
would expect any degradation in performance over time to be due to
what's on disk (lots of small files, fragmented, etc). This is
obviously not the case in this situation since a reboot recovers the
performance. I suppose it could also be a memory leak or something
similar, but you should be able to detect something like that by
monitoring your memory situation, /proc/slabinfo etc.
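A minimal sketch of that monitoring, assuming nothing beyond /proc (note that /proc/slabinfo is frequently root-only, hence the /proc/meminfo fallback); diffing two snapshots taken hours apart shows whether a particular cache, e.g. a btrfs inode or extent cache, grows without bound:

```shell
#!/bin/sh
# Sketch: print the ten largest slab caches by approximate footprint
# (num_objs * objsize, in bytes). Falls back to aggregate meminfo
# counters when /proc/slabinfo is not readable by the current user.
if [ -r /proc/slabinfo ]; then
    # slabinfo data lines: name active_objs num_objs objsize ...
    awk 'NR > 2 { printf "%12d %s\n", $3 * $4, $1 }' /proc/slabinfo |
        sort -rn | head -10
else
    grep -E '^(Slab|SReclaimable|SUnreclaim|Dirty|Writeback):' /proc/meminfo
fi
```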
Just my thoughts, good luck on this. I am currently running 2.6.39.3
(btrfs) on the 7 node cluster I put together, but I just built it and
am comparing between various configs. It will be a while before it is
under load for several days straight.
On Wed, Jul 27, 2011 at 2:41 AM, Christian Brunner <chb@muc.de> wrote:
> 2011/7/25 Chris Mason <chris.mason@oracle.com>:
>> Excerpts from Christian Brunner's message of 2011-07-25 03:54:47 -0400:
>>> Hi,
>>>
>>> we are running a ceph cluster with btrfs as its base filesystem
>>> (kernel 3.0). At the beginning everything worked very well, but after
>>> a few days (2-3) things are getting very slow.
>>>
>>> When I look at the object store servers I see heavy disk I/O on the
>>> btrfs filesystems (disk utilization is between 60% and 100%). I also
>>> did some tracing on the Ceph-Object-Store-Daemon, but I'm quite
>>> certain that the majority of the disk I/O is not caused by ceph or
>>> any other userland process.
>>>
>>> When I reboot the system(s) the problems go away for another 2-3 days,
>>> but after that, it starts again. I'm not sure if the problem is
>>> related to the kernel warning I've reported last week. At least there
>>> is no temporal relationship between the warning and the slowdown.
>>>
>>> Any hints on how to trace this would be welcome.
>>
>> The easiest way to trace this is with latencytop.
>>
>> Apply this patch:
>>
>> http://oss.oracle.com/~mason/latencytop.patch
>>
>> And then use latencytop -c for a few minutes while the system is slow.
>> Send the output here and hopefully we'll be able to figure it out.
>
> I've now installed latencytop. Attached are two output files: The
> first is from yesterday and was created approximately half an hour after
> the boot. The second one is from today, uptime is 19h. The load on the
> system is already rising. Disk utilization is approximately at 50%.
>
> Thanks for your help.
>
> Christian
>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: Btrfs slowdown
2011-07-28 4:05 ` Marcus Sorensen
@ 2011-07-28 15:10 ` Christian Brunner
0 siblings, 0 replies; 14+ messages in thread
From: Christian Brunner @ 2011-07-28 15:10 UTC (permalink / raw)
To: Marcus Sorensen; +Cc: Chris Mason, linux-btrfs, ceph-devel
2011/7/28 Marcus Sorensen <shadowsor@gmail.com>:
> Christian,
>
> Have you checked up on the disks themselves and hardware? High
> utilization can mean that the i/o load has increased, but it can also
> mean that the i/o capacity has decreased. Your traces seem to
> indicate that a good portion of the time is being spent on commits,
> that could be waiting on disk. That "wait_for_commit" looks to
> basically just spin waiting for the commit to complete, and at least
> one thing that calls it raises a BUG_ON, not sure if it's one you've
> seen even on 2.6.38.
>
> There could be all sorts of performance related reasons that aren't
> specific to btrfs or ceph, on our various systems we've seen things
> like the raid card module being upgraded in newer kernels and suddenly
> our disks start to go into sleep mode after a bit, dirty_ratio causing
> multiple gigs of memory to sync because its not optimized for the
> workload, external SAS enclosures stop communicating a few days after
> reboot (but the disks keep working with sporadic issues), things like
> patrol read hitting a bad sector on a disk, causing it to go into
> enhanced error recovery and stop responding, etc.
I'm fairly confident that the hardware is OK. We see the problem on
four machines. It could be a problem with the hpsa driver/firmware,
but we haven't seen the behavior with 2.6.38 and the changes in the
hpsa driver are not that big.
> Maybe you have already tried these things. It's where I would start
> anyway. Looking at /proc/meminfo, dirty, writeback, swap, etc both
> while the system is functioning desirably and when it's misbehaving.
> Looking at anything else that might be in D state. Looking at not just
> disk util, but the workload causing it (e.g. Was I doing 300 iops
> previously with an average size of 64k, and now I'm only managing 50
> iops at 64k before the disk util reports 100%?) Testing the system in
> a filesystem-agnostic manner, for example when performance is bad
> through btrfs, is performance the same as you got on fresh boot when
> testing iops on /dev/sdb or whatever? You're not by chance swapping
> after a bit of uptime on any volume that's shared with the underlying
> disks that make up your osd, obfuscated by a hardware raid? I didn't
> see the kernel warning you're referring to, just the ixgbe malloc
> failure you mentioned the other day.
I've looked at most of this. What makes me point to btrfs is that the
problem goes away when I reboot one server in our cluster, but persists
on the other systems. So it can't be related to the number of requests
that come in.
> I do not mean to presume that you have not looked at these things
> already. I am not very knowledgeable in btrfs specifically, but I
> would expect any degradation in performance over time to be due to
> what's on disk (lots of small files, fragmented, etc). This is
> obviously not the case in this situation since a reboot recovers the
> performance. I suppose it could also be a memory leak or something
> similar, but you should be able to detect something like that by
> monitoring your memory situation, /proc/slabinfo etc.
It could be related to a memory leak. The machine has a lot of RAM (24
GB), but we have seen page allocation failures in the ixgbe driver,
when we are using jumbo frames.
> Just my thoughts, good luck on this. I am currently running 2.6.39.3
> (btrfs) on the 7 node cluster I put together, but I just built it and
> am comparing between various configs. It will be awhile before it is
> under load for several days straight.
Thanks!
When I look at the latencytop results, there is a high latency when
calling "btrfs_commit_transaction_async". Isn't "async" supposed to
return immediately?
Regards,
Christian
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: Btrfs slowdown
2011-07-28 15:10 ` Christian Brunner
(?)
@ 2011-07-28 16:01 ` Sage Weil
-1 siblings, 0 replies; 14+ messages in thread
From: Sage Weil @ 2011-07-28 16:01 UTC (permalink / raw)
To: Christian Brunner; +Cc: Marcus Sorensen, Chris Mason, linux-btrfs, ceph-devel
On Thu, 28 Jul 2011, Christian Brunner wrote:
> When I look at the latencytop results, there is a high latency when
> calling "btrfs_commit_transaction_async". Isn't "async" supposed to
> return immediately?
It depends. That function has to block until the commit has started
before returning in the case where it creates a new btrfs root (i.e.,
snapshot creation). Otherwise a subsequent operation (after the ioctl
returns) can sneak in before the snapshot is taken. (IIRC there was also
another problem with keeping internal structures consistent, though I'm
forgetting the details.) And there are a bunch of things
btrfs_commit_transaction() does before setting blocked = 1 that can be
slow. There is a fair bit of transaction commit optimization work that
should eventually be done here, which we sadly haven't had the resources
to look at yet.
sage
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: Btrfs slowdown
2011-07-25 9:45 ` Andrej Podzimek
@ 2011-08-03 15:56 ` mck
0 siblings, 0 replies; 14+ messages in thread
From: mck @ 2011-08-03 15:56 UTC (permalink / raw)
To: linux-btrfs
[-- Attachment #1: Type: text/plain, Size: 691 bytes --]
I can confirm this as well (64-bit, Core i7, single-disk).
> The issue seems to be gone in 3.0.0.
After a few hours of work, 3.0.0 slows down for me too. The performance
becomes unusable and a reboot is a must. Certain applications
(particularly evolution and firefox) are next to permanently greyed out.
I have had a couple of corrupted tree logs recently and had to use
btrfs-zero-log (mentioned in an earlier thread). Otherwise returning to
2.6.38 is the workaround.
~mck
--
"A mind that has been stretched will never return to it's original
dimension." Albert Einstein
| www.semb.wever.org | www.sesat.no
| http://tech.finn.no | http://xss-http-filter.sf.net
[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 198 bytes --]
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: Btrfs slowdown
2011-07-27 8:41 ` Christian Brunner
2011-07-28 4:05 ` Marcus Sorensen
@ 2011-08-08 21:58 ` Sage Weil
2011-08-09 13:33 ` Christian Brunner
1 sibling, 1 reply; 14+ messages in thread
From: Sage Weil @ 2011-08-08 21:58 UTC (permalink / raw)
To: Christian Brunner; +Cc: Chris Mason, linux-btrfs, ceph-devel
Hi Christian,
Are you still seeing this slowness?
sage
On Wed, 27 Jul 2011, Christian Brunner wrote:
> 2011/7/25 Chris Mason <chris.mason@oracle.com>:
> > Excerpts from Christian Brunner's message of 2011-07-25 03:54:47 -0400:
> >> Hi,
> >>
> >> we are running a ceph cluster with btrfs as its base filesystem
> >> (kernel 3.0). At the beginning everything worked very well, but after
> >> a few days (2-3) things are getting very slow.
> >>
> >> When I look at the object store servers I see heavy disk I/O on the
> >> btrfs filesystems (disk utilization is between 60% and 100%). I also
> >> did some tracing on the Ceph-Object-Store-Daemon, but I'm quite
> >> certain that the majority of the disk I/O is not caused by ceph or
> >> any other userland process.
> >>
> >> When I reboot the system(s) the problems go away for another 2-3 days,
> >> but after that, it starts again. I'm not sure if the problem is
> >> related to the kernel warning I've reported last week. At least there
> >> is no temporal relationship between the warning and the slowdown.
> >>
> >> Any hints on how to trace this would be welcome.
> >
> > The easiest way to trace this is with latencytop.
> >
> > Apply this patch:
> >
> > http://oss.oracle.com/~mason/latencytop.patch
> >
> > And then use latencytop -c for a few minutes while the system is slow.
> > Send the output here and hopefully we'll be able to figure it out.
>
> I've now installed latencytop. Attached are two output files: The
> first is from yesterday and was created approximately half an hour after
> the boot. The second one is from today, uptime is 19h. The load on the
> system is already rising. Disk utilization is approximately at 50%.
>
> Thanks for your help.
>
> Christian
>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: Btrfs slowdown
2011-08-08 21:58 ` Sage Weil
@ 2011-08-09 13:33 ` Christian Brunner
0 siblings, 0 replies; 14+ messages in thread
From: Christian Brunner @ 2011-08-09 13:33 UTC (permalink / raw)
To: Sage Weil; +Cc: Chris Mason, linux-btrfs, ceph-devel
Hi Sage,
I did some testing with btrfs-unstable yesterday. With the recent
commit from Chris it looks quite good:
"Btrfs: force unplugs when switching from high to regular priority bios"
However I can't test it extensively, because our main environment is
on ext4 at the moment.
Regards
Christian
2011/8/8 Sage Weil <sage@newdream.net>:
> Hi Christian,
>
> Are you still seeing this slowness?
>
> sage
>
>
> On Wed, 27 Jul 2011, Christian Brunner wrote:
>> 2011/7/25 Chris Mason <chris.mason@oracle.com>:
>> > Excerpts from Christian Brunner's message of 2011-07-25 03:54:47 -0400:
>> >> Hi,
>> >>
>> >> we are running a ceph cluster with btrfs as its base filesystem
>> >> (kernel 3.0). At the beginning everything worked very well, but after
>> >> a few days (2-3) things are getting very slow.
>> >>
>> >> When I look at the object store servers I see heavy disk I/O on the
>> >> btrfs filesystems (disk utilization is between 60% and 100%). I also
>> >> did some tracing on the Ceph-Object-Store-Daemon, but I'm quite
>> >> certain that the majority of the disk I/O is not caused by ceph or
>> >> any other userland process.
>> >>
>> >> When I reboot the system(s) the problems go away for another 2-3 days,
>> >> but after that, it starts again. I'm not sure if the problem is
>> >> related to the kernel warning I've reported last week. At least there
>> >> is no temporal relationship between the warning and the slowdown.
>> >>
>> >> Any hints on how to trace this would be welcome.
>> >
>> > The easiest way to trace this is with latencytop.
>> >
>> > Apply this patch:
>> >
>> > http://oss.oracle.com/~mason/latencytop.patch
>> >
>> > And then use latencytop -c for a few minutes while the system is slow.
>> > Send the output here and hopefully we'll be able to figure it out.
>>
>> I've now installed latencytop. Attached are two output files: The
>> first is from yesterday and was created approximately half an hour after
>> the boot. The second one is from today, uptime is 19h. The load on the
>> system is already rising. Disk utilization is approximately at 50%.
>>
>> Thanks for your help.
>>
>> Christian
>>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
^ permalink raw reply [flat|nested] 14+ messages in thread
end of thread, other threads:[~2011-08-09 13:33 UTC | newest]
Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-07-25 7:54 Btrfs slowdown Christian Brunner
2011-07-25 8:51 ` Andrej Podzimek
2011-07-25 9:45 ` Andrej Podzimek
2011-08-03 15:56 ` mck
2011-07-25 14:37 ` Jeremy Sanders
2011-07-25 19:52 ` Chris Mason
2011-07-27 8:41 ` Christian Brunner
2011-07-28 4:05 ` Marcus Sorensen
2011-07-28 15:10 ` Christian Brunner
2011-07-28 16:01 ` Sage Weil
2011-08-08 21:58 ` Sage Weil
2011-08-09 13:33 ` Christian Brunner