* 2.6.0 performance problems
@ 2003-12-29 22:07 Thomas Molina
  2003-12-29 22:21 ` Linus Torvalds
  ` (2 more replies)
  0 siblings, 3 replies; 50+ messages in thread

From: Thomas Molina @ 2003-12-29 22:07 UTC (permalink / raw)
To: Kernel Mailing List

I spend almost all of my time using, testing, and hacking on development
kernels.  On my laptop I have noticed that things seemed to take an
inordinate amount of time to complete.  I've ascribed most of this to the
fact that most of the systems I work on have decent specifications while
my laptop has a 650 MHz PIII.

I just finished a couple of comparisons between 2.4 and 2.6 which seem to
confirm my impressions.  I understand that the comparison may not be
apples to apples and my methods of testing may not be rigorous, but here
it is.  In contrast to some recent discussions on this list, this is a
"real world" test at which 2.6 comes off much worse than 2.4.

The 2.4 kernel I used for this test is the standard RedHat kernel in
Fedora Core 1, 2.4.22-1.2129.nptl.  The 2.6 kernel is the latest bk pull
from today.  The test was doing a bk export from a freshly updated bk
repository.  The specific command was:

bk export linux-2.5 linux-2.6-tm

Under 2.4 top shows:

user  nice  system  irq  softirq  iowait  idle
 1.3     0     2.1    0        0       0  96.6

Execution time for the test was:

real    13m33.482s
user    0m33.540s
sys     0m16.210s

Under 2.6 top shows:

user  nice  system  irq  softirq  iowait  idle
 0.9     0     5.3  0.9      0.3    92.6     0

Execution time for the test was:

real    22m42.397s
user    0m37.753s
sys     0m54.043s

I've done no performance tweaking in either case.  Both tests were done
immediately after boot up with only the top program running in each case.
I'm not sure what other data would be relevant here.  Any thoughts from
the group would be appreciated.

^ permalink raw reply	[flat|nested] 50+ messages in thread
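The time(1) figures in the message work out to roughly a 1.7x wall-clock and
3.3x system-time regression; a quick sketch of the arithmetic (the numbers
are copied from the two runs above, the script itself is only illustrative):

```shell
# Convert the reported time(1) figures to seconds and compare the runs.
# All numbers are copied from the 2.4 and 2.6 measurements in the message.
awk 'BEGIN {
    real24 = 13*60 + 33.482; sys24 = 16.210   # 2.4.22-1.2129.nptl run
    real26 = 22*60 + 42.397; sys26 = 54.043   # 2.6.0-bk run
    printf "real: %.1fx slower, sys: %.1fx more CPU\n", real26/real24, sys26/sys24
}'
# -> real: 1.7x slower, sys: 3.3x more CPU
```

Note that system time grows roughly twice as fast as wall-clock time, which
is what points the discussion below at kernel-side overhead rather than the
workload itself.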
* Re: 2.6.0 performance problems
  2003-12-29 22:07 2.6.0 performance problems Thomas Molina
@ 2003-12-29 22:21 ` Linus Torvalds
  2003-12-29 22:58   ` Thomas Molina
  2003-12-29 23:05   ` Thomas Molina
  2003-12-30  1:25   ` Roger Luethi
  2003-12-30  1:27 ` Thomas Molina
  2 siblings, 2 replies; 50+ messages in thread

From: Linus Torvalds @ 2003-12-29 22:21 UTC (permalink / raw)
To: Thomas Molina; +Cc: Kernel Mailing List

On Mon, 29 Dec 2003, Thomas Molina wrote:
>
> I just finished a couple of comparisons between 2.4 and 2.6 which seem to
> confirm my impressions.  I understand that the comparison may not be
> apples to apples and my methods of testing may not be rigorous, but here
> it is.  In contrast to some recent discussions on this list, this test is
> a "real world" test at which 2.6 comes off much worse than 2.4.

Are you sure you have DMA enabled on your laptop disk?  Your 2.6.x system
times are very high - much bigger than the user times.  That sounds like
PIO to me.

		Linus
* Re: 2.6.0 performance problems
  2003-12-29 22:21 ` Linus Torvalds
@ 2003-12-29 22:58   ` Thomas Molina
  2003-12-29 23:04     ` Linus Torvalds
  ` (2 more replies)
  2003-12-29 23:05   ` Thomas Molina
  1 sibling, 3 replies; 50+ messages in thread

From: Thomas Molina @ 2003-12-29 22:58 UTC (permalink / raw)
To: Linus Torvalds; +Cc: Kernel Mailing List

On Mon, 29 Dec 2003, Linus Torvalds wrote:
>
> On Mon, 29 Dec 2003, Thomas Molina wrote:
> >
> > I just finished a couple of comparisons between 2.4 and 2.6 which seem to
> > confirm my impressions.  I understand that the comparison may not be
> > apples to apples and my methods of testing may not be rigorous, but here
> > it is.  In contrast to some recent discussions on this list, this test is
> > a "real world" test at which 2.6 comes off much worse than 2.4.
>
> Are you sure you have DMA enabled on your laptop disk?  Your 2.6.x system
> times are very high - much bigger than the user times.  That sounds like
> PIO to me.

It certainly looks like DMA is enabled.  Under 2.4 I get:

[root@lap root]# hdparm /dev/hda

/dev/hda:
 multcount    = 16 (on)
 IO_support   =  1 (32-bit)
 unmaskirq    =  1 (on)
 using_dma    =  1 (on)
 keepsettings =  0 (off)
 readonly     =  0 (off)
 readahead    =  8 (on)
 geometry     = 2584/240/63, sectors = 39070080, start = 0

Under 2.6 I get:

[root@lap root]# hdparm /dev/hda

/dev/hda:
 multcount    = 16 (on)
 IO_support   =  1 (32-bit)
 unmaskirq    =  1 (on)
 using_dma    =  1 (on)
 keepsettings =  0 (off)
 readonly     =  0 (off)
 readahead    = 256 (on)
 geometry     = 38760/16/63, sectors = 39070080, start = 0

Relevant items from my 2.6 configuration file:

CONFIG_GENERIC_ISA_DMA=y
CONFIG_BLK_DEV_IDEDMA_PCI=y
# CONFIG_BLK_DEV_IDEDMA_FORCED is not set
CONFIG_IDEDMA_PCI_AUTO=y
# CONFIG_IDEDMA_ONLYDISK is not set
# CONFIG_IDEDMA_PCI_WIP is not set
CONFIG_BLK_DEV_ADMA=y
CONFIG_BLK_DEV_IDEDMA=y
# CONFIG_IDEDMA_IVB is not set
CONFIG_IDEDMA_AUTO=y
# CONFIG_DMA_NONPCI is not set
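The using_dma flag can also be checked non-interactively; a minimal sketch,
assuming hdparm's usual "name = value" output layout (the sample text below
is copied from the 2.6 run in the message — on a live system you would pipe
`hdparm /dev/hda` straight into the awk instead):

```shell
# Flag whether DMA is on, given hdparm-style output.
# Sample copied from the thread; live usage (needs root):
#   hdparm /dev/hda | awk '/using_dma/ { ... }'
sample=' multcount    = 16 (on)
 IO_support   =  1 (32-bit)
 unmaskirq    =  1 (on)
 using_dma    =  1 (on)
 readahead    = 256 (on)'

echo "$sample" | awk '/using_dma/ { print ($3 == 1 ? "DMA enabled" : "DMA off - PIO!") }'
# -> DMA enabled
```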
* Re: 2.6.0 performance problems
  2003-12-29 22:58 ` Thomas Molina
@ 2003-12-29 23:04   ` Linus Torvalds
  2003-12-30 14:14     ` Thomas Molina
  2003-12-29 23:14   ` Martin Schlemmer
  2003-12-29 23:25   ` David B. Stevens
  2 siblings, 1 reply; 50+ messages in thread

From: Linus Torvalds @ 2003-12-29 23:04 UTC (permalink / raw)
To: Thomas Molina; +Cc: Kernel Mailing List

On Mon, 29 Dec 2003, Thomas Molina wrote:
>
> It certainly looks like DMA is enabled.

Indeed.  Can you do a simple kernel profile?  Either using oprofile or
just even the old profiler.  It should show something (hopefully obvious),
since your load seems to have a _huge_ system load.

		Linus
* Re: 2.6.0 performance problems
  2003-12-29 23:04 ` Linus Torvalds
@ 2003-12-30 14:14   ` Thomas Molina
  2003-12-30 14:39     ` William Lee Irwin III
  2003-12-30 18:20     ` Linus Torvalds
  0 siblings, 2 replies; 50+ messages in thread

From: Thomas Molina @ 2003-12-30 14:14 UTC (permalink / raw)
To: Linus Torvalds; +Cc: Kernel Mailing List

[-- Attachment #1: Type: TEXT/PLAIN, Size: 905 bytes --]

On Mon, 29 Dec 2003, Linus Torvalds wrote:
>
> On Mon, 29 Dec 2003, Thomas Molina wrote:
> >
> > It certainly looks like DMA is enabled.
>
> Indeed.  Can you do a simple kernel profile?  Either using oprofile or
> just even the old profiler.  It should show something (hopefully obvious),
> since your load seems to have a _huge_ system load.

OK, I recompiled with oprofile support and have read the documentation.  I
must confess I don't understand which data out of oprofile you want, nor
how to extract it.  I apologize for my simplemindedness.  Maybe there is a
reference on using oprofile for kernel development.

In any case, attachment one is the result of:

opreport `bk export linux-2.5 linux-2.6-testa`

attachment two is the result of:

opreport -l vmlinux > vmlinux.txt

where the second command was done after stopping profiling.  Is any of
this close to what you wanted?
[-- Attachment #2: Type: TEXT/plain, Size: 1348 bytes --]

635490 78.4683 vmlinux
106995 13.2114 bk
 39834  4.9186 libc-2.3.2.so
 19080  2.3559 libperl.so
  2720  0.3359 perl
  1368  0.1689 oprofiled
  1155  0.1426 top
   603  0.0745 ld-2.3.2.so
   591  0.0730 libpthread-0.60.so
   423  0.0522 bash
   323  0.0399 troff
   314  0.0388 uhci_hcd
   207  0.0256 sendmail.sendmail
   188  0.0232 libproc.so.2.0.17
   122  0.0151 usbcore
    94  0.0116 cupsd
    66  0.0081 less
    64  0.0079 grotty
    43  0.0053 rm
    35  0.0043 init
    26  0.0032 libncurses.so.5.3
    22  0.0027 libstdc++.so.5.0.5
    18  0.0022 opreport
    14  0.0017 hermes
    11  0.0014 man
     7 8.6e-04 ls
     7 8.6e-04 orinoco
     7 8.6e-04 killall5
     6 7.4e-04 gunzip
     5 6.2e-04 syslogd
     4 4.9e-04 sunrpc
     4 4.9e-04 crond
     3 3.7e-04 libtermcap.so.2.0.8
     3 3.7e-04 tbl
     2 2.5e-04 grep
     2 2.5e-04 libgcc_s-3.3.2-20031023.so.1
     2 2.5e-04 Socket.so
     1 1.2e-04 af_packet
     1 1.2e-04 gawk
     1 1.2e-04 libcrypt-2.3.2.so
     1 1.2e-04 libdl-2.3.2.so
     1 1.2e-04 unix
     1 1.2e-04 groff
     1 1.2e-04 which
     1 1.2e-04 libncursesw.so.5.3
     1 1.2e-04 libz.so.1.2.0.7
     1 1.2e-04 gpm

[-- Attachment #3: Type: TEXT/plain, Size: 49298 bytes --]

CPU: PIII, speed 648.076 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a
unit mask of 0x00 (No unit mask) count 324038

vma samples % symbol name
c014a1a0 195865 29.5231 module_text_address
c0118510 90031 13.5705 mark_offset_tsc
c0111920 49842 7.5128 mask_and_ack_8259A
c014a2e0 23263 3.5065 kallsyms_lookup
c0141bb0 15389 2.3196 kernel_text_address
c01d1600 15017 2.2635 ext3_find_entry
c0111550 14170 2.1359 enable_8259A_irq
c0272560 13515 2.0371 cfb_imageblit
c0120330 8149 1.2283 kernel_map_pages
c0205f00 7819 1.1786 __io_virt_debug
c0156560 7685 1.1584 poison_obj
c01564d0 7497 1.1300 store_stackinfo
c0158fa0 7192 1.0841 kmem_cache_free
c019d580 7153 1.0782 __d_lookup
c018e380 5132 0.7736 link_path_walk
c010a700 4851 0.7312 irq_entries_start
c0158b20 4754 0.7166 kmem_cache_alloc
c0251e80 4429 0.6676 ide_outb
c01192b0 3676 0.5541 apm_bios_call_simple
c01565b0 3582 0.5399
scan_poisoned_obj c0158560 3400 0.5125 free_block c011fda0 3229 0.4867 __change_page_attr c011ff70 2619 0.3948 change_page_attr c026a7c0 2597 0.3915 sys_outbuf c010afa0 2526 0.3807 apic_timer_interrupt c0157fe0 2203 0.3321 cache_alloc_refill c0133d10 2160 0.3256 run_timer_softirq c019b250 2087 0.3146 prune_dcache c01c9310 2045 0.3082 find_next_usable_block c0251e10 2042 0.3078 ide_inb c010cf80 1877 0.2829 do_IRQ c0205ca0 1821 0.2745 atomic_dec_and_lock c0203920 1703 0.2567 radix_tree_lookup c0121360 1627 0.2452 schedule c012e400 1533 0.2311 do_softirq c0206350 1528 0.2303 __copy_from_user_ll c026fff0 1479 0.2229 bitfill32 c01da480 1472 0.2219 ext3_permission c0199f00 1358 0.2047 dput c017b4b0 1307 0.1970 __find_get_block_slow c0120d70 1298 0.1956 scheduler_tick c017d920 1245 0.1877 __find_get_block c01243b0 1204 0.1815 __might_sleep c01637e0 1190 0.1794 do_anonymous_page c01dc6c0 1163 0.1753 do_get_write_access c0113220 1151 0.1735 timer_interrupt c011bca0 1125 0.1696 smp_apic_timer_interrupt c018e2e0 1108 0.1670 do_lookup c01cbb40 1069 0.1611 ext3_new_inode c010a587 1033 0.1557 sysenter_past_esp c02062e0 1015 0.1530 __copy_to_user_ll c0242ac0 916 0.1381 __make_request c013b350 914 0.1378 do_sigaction c0206040 911 0.1373 strncpy_from_user c0129cc0 907 0.1367 profile_hook c0152690 906 0.1366 buffered_rmqueue c012e540 904 0.1363 raise_softirq c0151fa0 889 0.1340 __rmqueue c0155fd0 888 0.1338 dbg_redzone1 c017e370 885 0.1334 __block_prepare_write c014d5e0 864 0.1302 find_get_page c01daeb0 849 0.1280 start_this_handle c0132ac0 839 0.1265 __mod_timer c01de700 838 0.1263 journal_stop c01e09a0 817 0.1231 journal_commit_transaction c026a800 812 0.1224 move_buf_aligned c01cfe40 798 0.1203 ext3_do_update_inode c01ea540 782 0.1179 journal_add_journal_head c010ca40 774 0.1167 handle_IRQ_event c024e650 774 0.1167 ide_do_request c017f7a0 750 0.1130 block_write_full_page c02404b0 748 0.1127 blk_rq_map_sg c01e00e0 746 0.1124 __journal_file_buffer c0157ac0 738 0.1112 cache_grow 
c015d030 736 0.1109 shrink_list c027a420 721 0.1087 i8042_interrupt c0152550 709 0.1069 free_hot_cold_page c0133b80 705 0.1063 update_one_process c0156000 700 0.1055 dbg_redzone2 c0157a80 694 0.1046 kmem_flagcheck c01a0610 691 0.1042 find_inode_fast c0111390 674 0.1016 disable_8259A_irq c0163d70 669 0.1008 do_no_page c01cfae0 656 0.0989 ext3_read_inode c010af80 643 0.0969 common_interrupt c015db10 642 0.0968 shrink_cache c019c880 640 0.0965 d_alloc c010a63a 638 0.0962 restore_all c015e160 635 0.0957 refill_inactive_zone c015bf10 627 0.0945 __pagevec_lru_add c01660d0 624 0.0941 do_mmap_pgoff c014f420 610 0.0919 generic_file_aio_write_nolock c01cf6d0 599 0.0903 ext3_get_inode_block c0134180 590 0.0889 do_timer c019f7b0 590 0.0889 inode_init_once c018f2d0 579 0.0873 path_lookup c0107d80 570 0.0859 __switch_to c0152910 568 0.0856 __alloc_pages c01ddd00 564 0.0850 journal_dirty_metadata c01567a0 564 0.0850 slab_destroy c01cd1f0 563 0.0849 ext3_get_block_handle c014d430 560 0.0844 unlock_page c01ab6f0 554 0.0835 __mark_inode_dirty c017ad20 551 0.0831 wake_up_buffer c0157890 544 0.0820 cache_init_objs c0151f40 544 0.0820 prep_new_page c0119560 533 0.0803 apm_cpu_idle c0204e70 527 0.0794 number c0121d40 523 0.0788 __wake_up c010cb60 516 0.0778 note_interrupt c0150880 499 0.0752 mempool_alloc c024f230 496 0.0748 ide_intr c0248110 486 0.0733 as_completed_request c01559b0 485 0.0731 do_page_cache_readahead c017d810 484 0.0730 bh_lru_install c0241ac0 480 0.0724 get_request c01194b0 477 0.0719 apm_do_idle c0133ca0 472 0.0711 update_process_times c0203b20 471 0.0710 radix_tree_delete c013c4e0 467 0.0704 notifier_call_chain c018d850 461 0.0695 permission c011ecf0 460 0.0693 do_page_fault c02603d0 455 0.0686 ide_build_dmatable c0133580 453 0.0683 del_timer c01ad730 443 0.0668 do_mpage_readpage c0243210 442 0.0666 generic_make_request c01a00d0 441 0.0665 prune_icache c0177a90 439 0.0662 get_unused_fd c0247d30 437 0.0659 as_update_iohist c0248d00 428 0.0645 as_add_request c01e4e30 
426 0.0642 __journal_clean_checkpoint_list c024d160 422 0.0636 ide_end_request c01205b0 422 0.0636 try_to_wake_up c01203e0 420 0.0633 recalc_task_prio c0121c60 413 0.0623 preempt_schedule c01668c0 405 0.0610 find_vma c0133a90 404 0.0609 update_wall_time_one_tick c01cd660 403 0.0607 ext3_getblk c018d660 399 0.0601 getname c018e1c0 396 0.0597 follow_mount c0180270 395 0.0595 bio_alloc c0151a40 388 0.0585 bad_range c01d4f10 387 0.0583 init_once c0156040 386 0.0582 dbg_userword c01881a0 384 0.0579 generic_fillattr c0243660 377 0.0568 __end_that_request_first c0203f80 374 0.0564 rb_erase c01cf800 372 0.0561 ext3_get_inode_loc c01c9bb0 371 0.0559 ext3_new_block c01ca430 369 0.0556 ext3_check_dir_entry c01ccb60 368 0.0555 ext3_get_branch c0251e90 368 0.0555 ide_outbsync c0151b50 362 0.0546 free_pages_bulk c016a3d0 359 0.0541 page_add_rmap c01184a0 358 0.0540 sched_clock c01dee60 356 0.0537 __journal_unfile_buffer c02412b0 354 0.0534 blk_run_queues c0133b40 353 0.0532 update_wall_time c027e980 350 0.0528 increment_tail c0253260 344 0.0519 ide_execute_command c0124a10 339 0.0511 prepare_to_wait c01a1aa0 338 0.0509 iget_locked c0150b60 331 0.0499 mempool_free c0132a00 329 0.0496 internal_add_timer c0111340 322 0.0485 end_8259A_irq c013e000 322 0.0485 supplemental_group_member c013f530 321 0.0484 worker_thread c01a0c20 319 0.0481 get_new_inode_fast c0251ed0 319 0.0481 ide_outl c013bf80 309 0.0466 sys_rt_sigaction c0156600 308 0.0464 check_poison_obj c01cca00 308 0.0464 ext3_block_to_path c02402b0 307 0.0463 blk_recount_segments c014d0e0 305 0.0460 add_to_page_cache c0177ee0 305 0.0460 fd_install c016a320 301 0.0454 page_referenced c01a4320 298 0.0449 dnotify_parent c01546c0 297 0.0448 __set_page_dirty_nobuffers c0179c90 297 0.0448 filp_ctor c015b780 297 0.0448 mark_page_accessed c018f8a0 296 0.0446 may_open c0188950 294 0.0443 cp_new_stat64 c01409c0 294 0.0443 rcu_do_batch c010b1d8 290 0.0437 page_fault c022f570 289 0.0436 do_con_write c0161410 288 0.0434 zap_pte_range 
c015a070 286 0.0431 reap_timer_fnc c017d560 284 0.0428 __brelse c01c9830 284 0.0428 ext3_try_to_allocate c02540a0 279 0.0421 do_rw_taskfile c0249090 275 0.0415 as_queue_empty c01781b0 274 0.0413 sys_close c0205f50 272 0.0410 memcpy c010a511 271 0.0408 ret_from_intr c0188250 271 0.0408 vfs_getattr c015baa0 268 0.0404 release_pages c024e390 267 0.0402 start_request c019f5f0 266 0.0401 alloc_inode c01ea760 265 0.0399 __journal_remove_journal_head c015b5c0 265 0.0399 activate_page c0248990 265 0.0399 as_dispatch_request c018f580 262 0.0395 __user_walk c017f930 262 0.0395 submit_bh c018de00 261 0.0393 real_lookup c0203dc0 258 0.0389 __rb_erase_color c010e480 254 0.0383 disable_irq_nosync c01ada40 254 0.0383 mpage_readpages c019cc10 253 0.0381 d_instantiate c019e0b0 252 0.0380 d_rehash c017dfa0 250 0.0377 __block_write_full_page c0203840 248 0.0374 radix_tree_insert c02050f0 248 0.0374 vsnprintf c0243c30 246 0.0371 get_io_context c0165b40 246 0.0371 vma_merge c01d2180 245 0.0369 add_dirent_to_buf c017da00 243 0.0366 __getblk c01e5ef0 243 0.0366 journal_cancel_revoke c01db830 242 0.0365 journal_start c0158880 239 0.0360 cache_flusharray c0118a60 238 0.0359 delay_tsc c0164520 237 0.0357 handle_mm_fault c0180550 235 0.0354 bio_add_page c027e9b0 234 0.0353 sync_buffer c017a190 231 0.0348 __fput c0177870 231 0.0348 dentry_open c01ea980 231 0.0348 journal_put_journal_head c017faa0 231 0.0348 ll_rw_block c018fa70 231 0.0348 open_namei c01bae10 226 0.0341 task_statm c0165840 223 0.0336 vma_link c01ac050 222 0.0335 sync_sb_inodes c0162860 221 0.0333 do_wp_page c010cd90 221 0.0333 enable_irq c014d8d0 220 0.0332 find_lock_page c0179e30 219 0.0330 filp_dtor c01dd7a0 219 0.0330 journal_dirty_data c0248790 217 0.0327 as_move_to_dispatch c017d110 217 0.0327 grow_dev_page c0159dc0 216 0.0326 drain_array c01dd5f0 213 0.0321 journal_get_undo_access c014ea20 212 0.0320 filemap_nopage c01d4650 210 0.0317 ext3_journal_start c01d1bc0 210 0.0317 ext3_lookup c014d330 210 0.0317 page_waitqueue 
c013e040 209 0.0315 in_group_p c01e7040 209 0.0315 journal_write_metadata_buffer c01d04b0 208 0.0314 ext3_reserve_inode_write c015c1b0 207 0.0312 __pagevec_lru_add_active c0247490 206 0.0311 as_find_arq_hash c0121cd0 204 0.0307 __wake_up_common c017a630 203 0.0306 file_move c02177e0 202 0.0304 add_timer_randomness c0247440 202 0.0304 as_add_arq_hash c017dd10 202 0.0304 create_empty_buffers c017ac10 201 0.0303 __constant_c_and_count_memset c0240aa0 201 0.0303 blk_plug_device c01601f0 201 0.0303 blk_queue_bounce c0247620 200 0.0301 as_choose_req c0248400 200 0.0301 as_remove_queued_request c01552d0 199 0.0300 file_ra_state_init c01672f0 198 0.0298 do_munmap c01520c0 198 0.0298 rmqueue_bulk c0267fc0 197 0.0297 fbcon_redraw c0240ed0 197 0.0297 generic_unplug_device c0203cc0 195 0.0294 rb_insert_color c017d410 190 0.0286 __getblk_slow c027eb80 190 0.0286 add_event_entry c02601c0 187 0.0282 ide_build_sglist c0178060 186 0.0280 sys_open c01cd990 185 0.0279 walk_page_buffers c0180210 184 0.0277 bio_destructor c017ad60 184 0.0277 unlock_buffer c01c8b30 181 0.0273 ext3_get_group_desc c0248fa0 180 0.0271 as_insert_request c01d03f0 180 0.0271 ext3_writepage_trans_blocks c0241a00 179 0.0270 freed_request c0165520 179 0.0270 remove_shared_vm_struct c0135d70 178 0.0268 rm_from_queue c0217990 177 0.0267 add_disk_randomness c02433a0 176 0.0265 submit_bio c01e7410 175 0.0264 __log_space_left c0180f00 174 0.0262 bio_endio c01cda80 172 0.0259 ext3_prepare_write c01ab980 171 0.0258 __sync_single_inode c017a2b0 171 0.0258 fget c01dd1d0 171 0.0258 journal_get_write_access c0203730 169 0.0255 radix_tree_preload c016a4f0 167 0.0252 page_remove_rmap c01d05e0 166 0.0250 ext3_dirty_inode c0260110 166 0.0250 ide_dma_intr c018dd40 165 0.0249 path_release c0166930 163 0.0246 find_vma_prev c017bf60 162 0.0244 inode_has_buffers c0242400 161 0.0243 drive_stat_acct c02438a0 161 0.0243 end_that_request_last c0248cb0 160 0.0241 as_next_request c0134170 160 0.0241 run_local_timers c0247bc0 159 0.0240 
as_can_break_anticipation c0155450 158 0.0238 read_pages c012e380 156 0.0235 current_kernel_time c0179fc0 156 0.0235 get_empty_filp c0166780 156 0.0235 get_unmapped_area c01a2700 156 0.0235 iput c023f600 155 0.0234 clear_queue_congested c01c2160 154 0.0232 collect_sigign_sigcatch c0266600 154 0.0232 putcs_aligned c0161690 152 0.0229 unmap_vmas c025cdf0 151 0.0228 ide_do_rw_disk c01d46c0 149 0.0225 __ext3_journal_stop c0248530 149 0.0225 as_remove_dispatched_request c017a7e0 148 0.0223 file_kill c01a1d60 147 0.0222 __insert_inode_hash c0124d10 147 0.0222 finish_wait c013f150 147 0.0222 queue_work c0242490 146 0.0220 disk_round_stats c01a3100 144 0.0217 notify_change c02040b0 144 0.0217 rb_next c0240cd0 141 0.0213 blk_remove_plug c01a2db0 141 0.0213 inode_change_ok c0260910 139 0.0210 __ide_dma_read c02478b0 139 0.0210 as_antic_stop c01db7e0 139 0.0210 new_handle c0188350 139 0.0210 vfs_lstat c0247550 138 0.0208 as_add_arq_rb c01a3ff0 138 0.0208 dnotify_flush c01139d0 138 0.0208 sys_mmap2 c01d50a0 136 0.0205 ext3_clear_inode c02555a0 136 0.0205 ide_cmd_type_parser c0252760 135 0.0203 ide_wait_stat c01d0580 134 0.0202 ext3_mark_inode_dirty c023f250 133 0.0200 elv_queue_empty c01c2af0 133 0.0200 proc_pid_stat c026fbb0 132 0.0199 soft_cursor c0253df0 131 0.0197 SELECT_DRIVE c0205e80 131 0.0197 __const_udelay c01cd510 130 0.0196 ext3_get_block c014d510 129 0.0194 __lock_page c017cff0 129 0.0194 create_buffers c017afa0 129 0.0194 end_buffer_write_sync c017a430 129 0.0194 fget_light c01e5630 129 0.0194 find_revoke_record c025cc40 128 0.0193 lba_28_rw_disk c02499c0 127 0.0191 as_set_request c025ca80 124 0.0187 get_command c01a2cd0 124 0.0187 wake_up_inode c027e440 123 0.0185 get_exec_dcookie c0198bb0 123 0.0185 locks_remove_posix c017ad00 122 0.0184 bh_waitq_head c01d4e90 122 0.0184 ext3_alloc_inode c0248640 121 0.0182 as_remove_request c01ccd70 121 0.0182 ext3_alloc_branch c0243b40 121 0.0182 put_io_context c0180050 120 0.0181 alloc_buffer_head c014dc10 120 0.0181 
find_or_create_page c01656f0 119 0.0179 find_vma_prepare c0166ea0 119 0.0179 unmap_region c01caf70 117 0.0176 ext3_release_file c0255500 117 0.0176 ide_handler_parser c0150d80 117 0.0176 mempool_alloc_slab c01d0470 116 0.0175 ext3_mark_iloc_dirty c0255460 116 0.0175 ide_pre_handler_parser c0152e30 116 0.0175 nr_free_pages c0177810 115 0.0173 filp_open c0123f30 115 0.0173 io_schedule c0160790 115 0.0173 pte_alloc_map c0150070 114 0.0172 generic_file_aio_write c0198e30 114 0.0172 locks_remove_flock c01be9d0 114 0.0172 pid_revalidate c01248a0 114 0.0172 remove_wait_queue c017e740 112 0.0169 __block_commit_write c0247760 112 0.0169 as_find_next_arq c01d2730 112 0.0169 ext3_add_entry c0178120 112 0.0169 filp_close c016aad0 111 0.0167 __pte_chain_free c0178f70 111 0.0167 do_sync_write c017fc50 111 0.0167 drop_buffers c0159340 110 0.0166 kfree c0133510 110 0.0166 mod_timer c01a0680 110 0.0166 new_inode c015b8e0 109 0.0164 __page_cache_release c0165a90 109 0.0164 can_vma_merge_before c017dab0 107 0.0161 __bread c0180110 107 0.0161 init_buffer_head c01a27d0 107 0.0161 inode_times_differ c0152cb0 106 0.0160 __get_free_pages c01245c0 106 0.0160 add_wait_queue c019d240 106 0.0160 d_splice_alias c023f100 106 0.0160 elv_next_request c017bac0 106 0.0160 end_buffer_async_write c01cb020 106 0.0160 ext3_file_write c0188b40 106 0.0160 inode_add_bytes c0179030 106 0.0160 vfs_write c01b34d0 104 0.0157 eventpoll_init_file c01675c0 104 0.0157 sys_munmap c010b008 103 0.0155 error_code c01cdc60 103 0.0155 ext3_ordered_commit_write c02299e0 102 0.0154 conv_uni_to_pc c023f2f0 101 0.0152 elv_set_request c0150da0 100 0.0151 mempool_free_slab c023f570 99 0.0149 elv_try_last_merge c01d0260 99 0.0149 ext3_setattr c01a26e0 99 0.0149 generic_drop_inode c015b880 99 0.0149 lru_add_drain c016ab40 99 0.0149 pte_chain_alloc c0158d40 98 0.0148 __kmalloc c0249120 98 0.0148 as_merge c01ff940 98 0.0148 cap_vm_enough_memory c01cd010 98 0.0148 ext3_splice_branch c01a2f30 98 0.0148 inode_setattr c0220260 98 
0.0148 opost_block c017db70 98 0.0148 set_bh_page c01a2820 97 0.0146 update_atime c017fbf0 96 0.0145 check_ttfb_buffer c0190c30 95 0.0143 sys_unlink c027e780 95 0.0143 take_tasks_mm c019fa60 94 0.0142 clear_inode c018f7d0 94 0.0142 vfs_create c017f8f0 93 0.0140 end_bio_bh_io_sync c01cfa70 93 0.0140 ext3_set_inode_flags c02475e0 92 0.0139 as_find_arq_rb c026a990 92 0.0139 fb_get_buffer_offset c0149b20 92 0.0139 get_ksymbol c0252680 91 0.0137 drive_is_ready c01ccce0 91 0.0137 ext3_find_goal c01cafd0 90 0.0136 ext3_open_file c01e08f0 90 0.0136 journal_end_buffer_io_sync c0153f80 88 0.0133 balance_dirty_pages_ratelimited c0181280 88 0.0133 bio_phys_segments c010a544 88 0.0133 resume_kernel c0180430 87 0.0131 bio_put c023ef10 86 0.0130 __elv_add_request c01ad4e0 86 0.0130 mpage_end_io_read c017adc0 85 0.0128 __wait_on_buffer c01294b0 85 0.0128 console_conditional_schedule c01ea490 84 0.0127 journal_alloc_journal_head c01da650 83 0.0125 ext3_init_acl c01ce740 83 0.0125 ext3_set_aops c019fb20 82 0.0124 dispose_list c01ae160 82 0.0124 mpage_writepages c01ac470 82 0.0124 writeback_inodes c0199ec0 81 0.0122 d_free c014c880 80 0.0121 __remove_from_page_cache c0248710 80 0.0121 as_fifo_expired c0159fc0 80 0.0121 drain_array_locked c0188ac0 80 0.0121 sys_lstat64 c0260ab0 78 0.0118 __ide_dma_begin c0260ba0 78 0.0118 __ide_dma_test_irq c0247b00 78 0.0118 as_close_req c025af00 78 0.0118 default_end_request c0203c20 78 0.0118 radix_tree_node_ctor c0204110 78 0.0118 rb_prev c025c970 77 0.0116 __ide_do_rw_disk c017af70 77 0.0116 end_buffer_read_sync c01770f0 76 0.0115 sys_access c0171f40 75 0.0113 free_page_and_swap_cache c01e0380 75 0.0113 journal_file_buffer c0194980 75 0.0113 sys_select c017fd40 75 0.0113 try_to_free_buffers c01ea6d0 74 0.0112 journal_grab_journal_head c014d490 73 0.0110 end_page_writeback c0205e10 72 0.0109 __delay c0124ea0 72 0.0109 autoremove_wake_function c019d520 72 0.0109 d_lookup c01cba40 72 0.0109 find_group_other c0177510 72 0.0109 sys_chmod c01d3950 71 
0.0107 ext3_orphan_del c017dbb0 71 0.0107 try_to_release_page c017c380 70 0.0106 __set_page_dirty_buffers c023f390 70 0.0106 elv_completed_request c01ced70 70 0.0106 ext3_free_data c01ce010 70 0.0106 ext3_ordered_writepage c017ef80 70 0.0106 generic_commit_write c015b820 70 0.0106 lru_cache_add_active c0194520 69 0.0104 do_select c0240600 69 0.0104 ll_back_merge_fn c0249950 63 0.0095 as_put_request c024eb20 63 0.0095 do_ide_request c0260c90 62 0.0093 __ide_dma_count c012c7b0 62 0.0093 next_thread c01615a0 62 0.0093 zap_pmd_range c01657a0 61 0.0092 __vma_link c02424e0 60 0.0090 __blk_put_request c0243cb0 60 0.0090 copy_io_context c0260860 60 0.0090 ide_start_dma c017d510 60 0.0090 mark_buffer_dirty c0178e40 60 0.0090 vfs_read c02497b0 58 0.0087 as_work_handler c0166f80 58 0.0087 detach_vmas_to_be_unmapped c023f210 58 0.0087 elv_remove_request c0243880 58 0.0087 end_that_request_first c01ccc50 58 0.0087 ext3_find_near c018d8a0 58 0.0087 get_write_access c027e4b0 58 0.0087 lookup_dcookie c0206180 57 0.0086 __copy_user_intel c01c03a0 57 0.0086 get_tgid_list c017ef00 56 0.0084 block_prepare_write c017a170 56 0.0084 fput c0161610 56 0.0084 unmap_page_range c0154350 55 0.0083 do_writepages c02492c0 54 0.0081 as_merged_request c0270e90 54 0.0081 bitcpy c01ea320 54 0.0081 journal_blocks_per_page c018f410 53 0.0080 __lookup_hash c01a2a30 53 0.0080 i_waitq_head c017ae90 52 0.0078 __set_page_buffers c027e5d0 52 0.0078 add_user_ctx_switch c0120390 52 0.0078 effective_prio c01a23c0 52 0.0078 generic_forget_inode c0190a70 52 0.0078 vfs_unlink c022e7f0 51 0.0077 do_con_trol c02477e0 50 0.0075 as_antic_expired c0199e80 50 0.0075 d_callback c023ee30 50 0.0075 elv_merge c0152df0 50 0.0075 free_pages c017d0a0 50 0.0075 init_page_buffers c017df30 50 0.0075 unmap_underlying_metadata c0203c80 49 0.0074 __rb_rotate_right c0121cb0 49 0.0074 default_wake_function c0166d30 49 0.0074 free_pgtables c0229bd0 49 0.0074 set_selection c01791c0 49 0.0074 sys_write c0247390 48 0.0072 
as_get_io_context c0131f70 47 0.0071 access_process_vm c017ff80 47 0.0071 block_sync_page c0140990 47 0.0071 call_rcu c01d3010 47 0.0071 ext3_create c014f040 47 0.0071 generic_file_mmap c01a2900 47 0.0071 inode_update_time c0166dd0 47 0.0071 unmap_vma c014d2e0 46 0.0069 add_to_page_cache_lru c01bdd30 46 0.0069 pid_alive c01c8bf0 46 0.0069 read_block_bitmap c023f330 45 0.0068 elv_put_request c01d2f90 45 0.0068 ext3_add_nondir c0180010 45 0.0068 recalc_bh_state c0247520 44 0.0066 as_find_first_arq c01cb160 44 0.0066 read_inode_bitmap c0154910 44 0.0066 test_clear_page_dirty c01337b0 43 0.0065 cascade c01ad600 43 0.0065 mpage_alloc c02037d0 43 0.0065 radix_tree_extend c02473e0 42 0.0063 as_remove_merge_hints c01c8c80 42 0.0063 ext3_free_blocks c0108060 42 0.0063 get_wchan c01be740 42 0.0063 task_dumpable c01812b0 41 0.0062 bio_hw_segments c01ce440 41 0.0062 ext3_readpages c02036d0 41 0.0062 radix_tree_node_alloc c014f2f0 41 0.0062 remove_suid c01a3090 41 0.0062 setattr_mask c0260b00 40 0.0060 __ide_dma_end c027e680 39 0.0059 add_sample_entry c0152da0 38 0.0057 __free_pages c0134390 38 0.0057 schedule_timeout c0203c40 37 0.0056 __rb_rotate_left c01da800 37 0.0056 ext3_acl_chmod c01291f0 37 0.0056 release_console_sem c02afdf0 37 0.0056 tcp_poll c01e38d0 36 0.0054 __try_to_free_cp_buf c027e720 36 0.0054 add_sample c0247cf0 36 0.0054 as_can_anticipate c014df50 36 0.0054 do_generic_mapping_read c015b3c0 36 0.0054 rotate_reclaimable_page c021b060 36 0.0054 tty_write c0248000 35 0.0053 as_update_arq c02678d0 35 0.0053 fbcon_cursor c0228e00 35 0.0053 inverse_translate c0160b60 33 0.0050 copy_page_range c0152670 33 0.0050 free_hot_page c027a860 33 0.0050 i8042_timer_func c01bd450 33 0.0050 proc_info_read c02609e0 32 0.0048 __ide_dma_write c015be70 32 0.0048 __pagevec_release_nonlru c01a4c90 32 0.0048 lookup_mnt c027e760 32 0.0048 release_mm c0140090 32 0.0048 schedule_work c0243b10 31 0.0047 kblockd_schedule_work c0161be0 30 0.0045 follow_page c01df4a0 30 0.0045 
journal_try_to_free_buffers c01e61c0 30 0.0045 journal_write_revoke_records c015ef80 29 0.0044 balance_pgdat c01cdb90 29 0.0044 ext3_journal_dirty_data c0194420 29 0.0044 max_select_fd c029b1b0 27 0.0041 dev_watchdog c01cc9b0 27 0.0041 ext3_discard_prealloc c01800b0 27 0.0041 free_buffer_head c0140c20 27 0.0041 rcu_process_callbacks c01a20d0 26 0.0039 generic_delete_inode c0125370 26 0.0039 mmgrab c019c400 26 0.0039 select_parent c0179160 26 0.0039 sys_read c0166e70 26 0.0039 unmap_vma_list c01e4ee0 25 0.0038 __journal_remove_checkpoint c0247930 25 0.0038 as_antic_timeout c01cc9c0 25 0.0038 ext3_alloc_block c01cd8e0 25 0.0038 ext3_bread c01ce4b0 25 0.0038 ext3_releasepage c027e940 25 0.0038 get_slots c0161c80 25 0.0038 get_user_pages c015b7c0 25 0.0038 lru_cache_add c01b7ef0 25 0.0038 mb_cache_shrink_fn c018cc40 25 0.0038 pipe_poll c01c1510 25 0.0038 proc_readdir c010a52c 25 0.0038 resume_userspace c015ce00 25 0.0038 shrink_slab c0282650 25 0.0038 sock_poll c01d15d0 24 0.0036 ext3_update_dx_flag c01ad5c0 24 0.0036 mpage_bio_submit c0231300 24 0.0036 screen_glyph c0229a90 24 0.0036 sel_pos c02669e0 23 0.0035 accel_putcs c0188a80 23 0.0035 sys_stat64 c018dd80 22 0.0033 cached_lookup c0289270 22 0.0033 datagram_poll c027ea90 22 0.0033 sync_cpu_buffers c019fa00 21 0.0032 __iget c01e7b00 21 0.0032 journal_next_log_block c0224030 21 0.0032 write_chan c022d800 20 0.0030 csi_K c01cebf0 20 0.0030 ext3_clear_blocks c012e650 20 0.0030 tasklet_action c017aed0 19 0.0029 __clear_page_buffers c01cf130 19 0.0029 ext3_truncate c0188b00 19 0.0029 sys_fstat64 c01d4ee0 18 0.0027 ext3_destroy_inode c01cb1f0 18 0.0027 ext3_free_inode c01ca510 18 0.0027 ext3_readdir c01dd220 18 0.0027 journal_get_create_access c011e5d0 18 0.0027 pte_alloc_one c0182880 18 0.0027 sync_supers c014e400 17 0.0026 __generic_file_aio_read c0165760 17 0.0026 __vma_link_rb c019daf0 17 0.0026 d_delete c019f750 17 0.0026 destroy_inode c014e300 17 0.0026 file_read_actor c0188ce0 17 0.0026 inode_sub_bytes c01a3340 17 
0.0026 is_bad_inode c0155da0 17 0.0026 page_cache_readahead c0194320 17 0.0026 poll_freewait c0266300 16 0.0024 fb_flashcursor c017cc20 16 0.0024 invalidate_inode_buffers c01df040 16 0.0024 journal_unfile_buffer c01883b0 16 0.0024 vfs_fstat c0194110 15 0.0023 filldir64 c0260560 15 0.0023 ide_destroy_dmatable c0123510 15 0.0023 idle_cpu c01e7cf0 15 0.0023 journal_bmap c01ea520 15 0.0023 journal_free_journal_head c01ea900 15 0.0023 journal_remove_journal_head c01e6150 15 0.0023 journal_switch_revoke_table c0140a40 15 0.0023 rcu_check_quiescent_state c01a0860 15 0.0023 unlock_new_inode c01882f0 15 0.0023 vfs_stat c0247860 14 0.0021 as_antic_waitreq c0178d80 14 0.0021 do_sync_read c026a170 14 0.0021 fbcon_screen_pos c01c12a0 14 0.0021 proc_lookup c02427e0 13 0.0020 attempt_merge c01a2790 13 0.0020 bmap c0270bb0 13 0.0020 cfb_fillrect c01cc870 13 0.0020 ext3_put_inode c0272c90 13 0.0020 input_event c01bf340 13 0.0020 pid_delete_dentry c01bc940 13 0.0020 proc_pid_cmdline c0140e80 13 0.0020 rcu_check_callbacks c0133810 13 0.0020 second_overflow c0139b40 13 0.0020 sigprocmask c0129190 12 0.0018 acquire_console_sem c027e590 12 0.0018 add_kernel_ctx_switch c027e6b0 12 0.0018 add_us_sample c011ab30 12 0.0018 apm_event_handler c02490f0 12 0.0018 as_latter_request c01de0d0 12 0.0018 journal_release_buffer c015be30 11 0.0017 __pagevec_release c0194370 11 0.0017 __pollwait c01969b0 11 0.0017 __posix_lock_file c01abe30 11 0.0017 __writeback_single_inode c0184bc0 11 0.0017 blkdev_writepage c01d3700 11 0.0017 ext3_orphan_add c01c3e00 11 0.0017 get_vmalloc_info c01cdff0 11 0.0017 journal_dirty_data_fn c018f4e0 11 0.0017 lookup_hash c01c04d0 11 0.0017 proc_pid_readdir c01c32b0 11 0.0017 proc_pid_statm c01bf870 11 0.0017 proc_pident_lookup c022c970 11 0.0017 set_cursor c0190310 11 0.0017 sys_mkdir c0152d70 10 0.0015 __pagevec_free c023f280 10 0.0015 elv_latter_request c01d3d20 10 0.0015 ext3_unlink c017bdd0 10 0.0015 mark_buffer_async_write c0152e80 10 0.0015 nr_used_zone_pages 
c02200b0 10 0.0015 opost c01bb510 10 0.0015 proc_delete_inode c01c42b0 10 0.0015 show_stat c0139de0 10 0.0015 sys_rt_sigprocmask c012d720 10 0.0015 sys_time c018d730 10 0.0015 vfs_permission c01ab930 10 0.0015 write_inode c01255d0 9 0.0014 copy_mm c0266350 9 0.0014 cursor_timer_handler c023f480 9 0.0014 elv_rq_merge_ok c01cc8a0 9 0.0014 ext3_delete_inode c01d3190 9 0.0014 ext3_mkdir c010c590 9 0.0014 math_state_restore c0220e50 9 0.0014 n_tty_receive_buf c01c0990 9 0.0014 proc_file_read c0206120 9 0.0014 strnlen_user c0253e90 8 0.0012 SELECT_MASK c017d760 8 0.0012 __bread_slow c01e4f70 8 0.0012 __journal_insert_checkpoint c027e550 8 0.0012 add_cpu_switch c01191d0 8 0.0012 apm_bios_call c01cdfd0 8 0.0012 bget_one c0242750 8 0.0012 blk_congestion_wait c0230440 8 0.0012 con_flush_chars c0230270 8 0.0012 con_write c012d3f0 8 0.0012 do_setitimer c01cc670 8 0.0012 ext3_forget c01cb770 8 0.0012 find_group_orlov c022c8c0 8 0.0012 hide_cursor c01bba00 8 0.0012 proc_root_readdir c0134f50 8 0.0012 recalc_sigpending c01a92b0 8 0.0012 seq_printf c01badf0 8 0.0012 task_vsize c01e05b0 7 0.0011 __journal_refile_buffer c022c7d0 7 0.0011 add_softcursor c01cdfe0 7 0.0011 bput_one c0230310 7 0.0011 con_write_room c0167640 7 0.0011 do_brk c0192e70 7 0.0011 do_fcntl c017f8b0 7 0.0011 generic_block_bmap c0123f50 7 0.0011 io_schedule_timeout c0194300 7 0.0011 poll_initwait c01bcde0 7 0.0011 proc_pid_wchan c0276790 7 0.0011 psmouse_interrupt c019c840 7 0.0011 shrink_dcache_memory c019ba00 7 0.0011 shrink_dcache_sb c0120850 7 0.0011 wake_up_process c0154950 6 9.0e-04 __pdflush c01193d0 6 9.0e-04 apm_get_event c01874d0 6 9.0e-04 chrdev_open c01606e0 6 9.0e-04 clear_page_tables c010b058 6 9.0e-04 device_not_available c012d1d0 6 9.0e-04 do_getitimer c01cdf50 6 9.0e-04 ext3_bmap c014e620 6 9.0e-04 generic_file_aio_read c0255590 6 9.0e-04 ide_post_handler_parser c01e6540 6 9.0e-04 kjournald c015cf70 6 9.0e-04 may_write_to_queue c02230c0 6 9.0e-04 read_chan c0114410 6 9.0e-04 restore_fpu c01a0570 
6 9.0e-04 shrink_icache_memory c027eb20 6 9.0e-04 timer_ping c0190250 6 9.0e-04 vfs_mkdir c0193d60 6 9.0e-04 vfs_readdir c01ad470 6 9.0e-04 writeback_acquire c0152f80 5 7.5e-04 __get_page_state c012e5b0 5 7.5e-04 __tasklet_schedule c01bb2e0 5 7.5e-04 de_put c022beb0 5 7.5e-04 do_update_region c023ee60 5 7.5e-04 elv_merged_request c01d2e70 5 7.5e-04 ext3_delete_entry c01d8e50 5 7.5e-04 ext3_xattr_delete_inode c015f190 5 7.5e-04 kswapd c01c3d00 5 7.5e-04 loadavg_read_proc c018fef0 5 7.5e-04 lookup_create c0125220 5 7.5e-04 mmput c0224270 5 7.5e-04 normal_poll c016aab0 5 7.5e-04 pte_chain_ctor c0194970 5 7.5e-04 select_bits_free c01cc7a0 5 7.5e-04 start_transaction c01655c0 5 7.5e-04 sys_brk c0193940 5 7.5e-04 sys_ioctl c021de90 5 7.5e-04 tty_ioctl c021cfe0 5 7.5e-04 tty_poll c027eb10 5 7.5e-04 wq_sync_buffers c01ea340 4 6.0e-04 __jbd_kmalloc c0275390 4 6.0e-04 atkbd_interrupt c011a680 4 6.0e-04 check_events c0219770 4 6.0e-04 check_tty_count c0125e60 4 6.0e-04 copy_files c01266a0 4 6.0e-04 copy_process c0189360 4 6.0e-04 copy_strings c0267620 4 6.0e-04 fbcon_clear c0267830 4 6.0e-04 fbcon_putcs c022d530 4 6.0e-04 lf c0240890 4 6.0e-04 ll_merge_requests_fn c01b5d60 4 6.0e-04 load_elf_binary c0195620 4 6.0e-04 locks_init_lock c0195e80 4 6.0e-04 locks_wake_up_blocks c0124fd0 4 6.0e-04 mm_init c010a54e 4 6.0e-04 need_resched c0185340 4 6.0e-04 nr_blockdev_pages c01c0c70 4 6.0e-04 proc_file_lseek c01be8e0 4 6.0e-04 proc_pid_make_inode c029fd20 4 6.0e-04 rt_check_expire c0194940 4 6.0e-04 select_bits_alloc c02797b0 4 6.0e-04 serio_interrupt c015ec80 4 6.0e-04 shrink_zone c0172630 4 6.0e-04 swap_info_get c0278840 4 6.0e-04 synaptics_parse_hw_state c0278a60 4 6.0e-04 synaptics_process_packet c0194240 4 6.0e-04 sys_getdents64 c022cc00 4 6.0e-04 vc_cons_allocated c0225d20 4 6.0e-04 vt_ioctl c015f290 4 6.0e-04 wakeup_kswapd c0154110 4 6.0e-04 wb_kupdate c01ad4a0 4 6.0e-04 writeback_release c0108bd0 3 4.5e-04 __down_trylock c0129db0 3 4.5e-04 __unhash_process c0266820 3 4.5e-04 
accel_clear c01404d0 3 4.5e-04 attach_pid c0172c50 3 4.5e-04 can_share_swap_page c0165ae0 3 4.5e-04 can_vma_merge_after c01ff4d0 3 4.5e-04 cap_bprm_compute_creds c026a2f0 3 4.5e-04 fbcon_invert_region c01785c0 3 4.5e-04 generic_file_llseek c011a620 3 4.5e-04 get_event c0153000 3 4.5e-04 get_page_state c01dada0 3 4.5e-04 get_transaction c022d3d0 3 4.5e-04 gotoxy c0155f20 3 4.5e-04 handle_ra_miss c021b820 3 4.5e-04 init_dev c0155000 3 4.5e-04 pdflush_operation c011e640 3 4.5e-04 pgd_ctor c018c5d0 3 4.5e-04 pipe_read c01bb780 3 4.5e-04 proc_get_inode c0134380 3 4.5e-04 process_timeout c0177d70 3 4.5e-04 put_unused_fd c021c190 3 4.5e-04 release_dev c0204630 3 4.5e-04 rwsem_down_failed_common c02055b0 3 4.5e-04 sprintf c0278eb0 3 4.5e-04 synaptics_validate_byte c017fb30 3 4.5e-04 sync_dirty_buffer c012cef0 3 4.5e-04 sys_wait4 c016a670 3 4.5e-04 try_to_unmap_one c021a930 3 4.5e-04 tty_hung_up_p c0248070 3 4.5e-04 update_write_batch c02179d0 2 3.0e-04 SHATransform c017d5a0 2 3.0e-04 __bforget c019eb80 2 3.0e-04 __d_path c0135b30 2 3.0e-04 __dequeue_signal c01084f0 2 3.0e-04 __down c027e640 2 3.0e-04 add_cookie_switch c02472e0 2 3.0e-04 alloc_as_io_context c011abc0 2 3.0e-04 apm_mainloop c01957f0 2 3.0e-04 assign_type c0275300 2 3.0e-04 atkbd_report_key c017e7d0 2 3.0e-04 block_read_full_page c01cc760 2 3.0e-04 blocks_for_truncate c022c0b0 2 3.0e-04 build_attr c0187f80 2 3.0e-04 cdev_get c018b490 2 3.0e-04 compute_creds c0230330 2 3.0e-04 con_chars_in_buffer c02302c0 2 3.0e-04 con_put_char c01b5540 2 3.0e-04 create_elf_tables c022d630 2 3.0e-04 csi_J c018daf0 2 3.0e-04 deny_write_access c018ba60 2 3.0e-04 do_execve c010a360 2 3.0e-04 do_signal c0163220 2 3.0e-04 do_swap_page c01281c0 2 3.0e-04 do_syslog c0124ef0 2 3.0e-04 dup_task_struct c0166990 2 3.0e-04 expand_stack c01ce420 2 3.0e-04 ext3_readpage c01d0210 2 3.0e-04 ext3_write_inode c0193590 2 3.0e-04 fasync_helper c02681d0 2 3.0e-04 fbcon_scroll c01983e0 2 3.0e-04 fcntl_setlk c0166cb0 2 3.0e-04 find_extend_vma 
c014dcd0 2 3.0e-04 find_get_pages c01356c0 2 3.0e-04 flush_signal_handlers c01183d0 2 3.0e-04 get_offset_tsc c018d2c0 2 3.0e-04 get_pipe_inode c0136820 2 3.0e-04 group_send_sig_info c010a050 2 3.0e-04 handle_signal c0112390 2 3.0e-04 init_new_context c01e5420 2 3.0e-04 insert_revoke_hash c01e2fb0 2 3.0e-04 journal_brelse_array c022b7e0 2 3.0e-04 kbd_keycode c0195600 2 3.0e-04 locks_alloc_lock c01956f0 2 3.0e-04 locks_copy_lock c0195f60 2 3.0e-04 locks_delete_lock c0149c80 2 3.0e-04 module_address_lookup c02746b0 2 3.0e-04 mousedev_abs_event c0274800 2 3.0e-04 mousedev_event c021fec0 2 3.0e-04 n_tty_chars_in_buffer c0224bf0 2 3.0e-04 n_tty_ioctl c028be40 2 3.0e-04 netif_receive_skb c01c3cb0 2 3.0e-04 proc_calc_metrics c01bc0e0 2 3.0e-04 proc_exe_link c01bdac0 2 3.0e-04 proc_pid_readlink c01bb980 2 3.0e-04 proc_root_lookup c01bfad0 2 3.0e-04 proc_tgid_base_lookup c0140a00 2 3.0e-04 rcu_start_batch c0204240 2 3.0e-04 rwsem_wake c0114940 2 3.0e-04 save_i387 c01a8ba0 2 3.0e-04 seq_read c01a7cf0 2 3.0e-04 set_fs_pwd c0189930 2 3.0e-04 setup_arg_pages c0175c50 2 3.0e-04 si_swapinfo c0204e30 2 3.0e-04 skip_atoi c01931b0 2 3.0e-04 sys_fcntl64 c012d7e0 2 3.0e-04 sys_gettimeofday c0178c80 2 3.0e-04 sys_llseek c0190950 2 3.0e-04 sys_rmdir c01234f0 2 3.0e-04 task_prio c01a8ed0 2 3.0e-04 traverse c01cc800 2 3.0e-04 try_to_extend_transaction c021ce10 2 3.0e-04 tty_release c022c1c0 2 3.0e-04 update_attr c0205580 2 3.0e-04 vsprintf c012cb10 2 3.0e-04 wait_task_zombie c01d7203 1 1.5e-04 .text.lock.super c01352e0 1 1.5e-04 __exit_sighand c014ca60 1 1.5e-04 __filemap_fdatawrite c0165a10 1 1.5e-04 __insert_vm_struct c01e5050 1 1.5e-04 __journal_drop_transaction c01df250 1 1.5e-04 __journal_try_to_free_buffer c0193690 1 1.5e-04 __kill_fasync c0203980 1 1.5e-04 __lookup c0217930 1 1.5e-04 add_mouse_randomness c0247810 1 1.5e-04 as_antic_waitnext c02494a0 1 1.5e-04 as_merged_requests c02410c0 1 1.5e-04 blk_unplug_timeout c0167880 1 1.5e-04 build_mmap_rb c0128bf0 1 1.5e-04 
call_console_drivers c0188030 1 1.5e-04 cdev_put c0135e00 1 1.5e-04 check_kill_permission c0229ad0 1 1.5e-04 clear_selection c01e6520 1 1.5e-04 commit_timeout c0230490 1 1.5e-04 con_open c01fd8b0 1 1.5e-04 copy_semundo c01895d0 1 1.5e-04 copy_strings_kernel c0125e20 1 1.5e-04 count_open_files c027e1b0 1 1.5e-04 cpu_buffer_reset c0135ce0 1 1.5e-04 dequeue_signal c0140670 1 1.5e-04 detach_pid c0127720 1 1.5e-04 do_fork c0112ba0 1 1.5e-04 do_gettimeofday c01bd9f0 1 1.5e-04 do_proc_readlink c0179360 1 1.5e-04 do_readv_writev c01826c0 1 1.5e-04 drop_super c012ca30 1 1.5e-04 eligible_child c0128d30 1 1.5e-04 emit_log_char c027ef10 1 1.5e-04 event_buffer_read c0189c40 1 1.5e-04 exec_mmap c01a33f0 1 1.5e-04 expand_fd_array c01ca330 1 1.5e-04 ext3_bg_has_super c01cc5d0 1 1.5e-04 ext3_count_free_inodes c01d7260 1 1.5e-04 ext3_follow_link c01d3b90 1 1.5e-04 ext3_rmdir c0217b20 1 1.5e-04 extract_entropy c026a1f0 1 1.5e-04 fbcon_getxy c0267790 1 1.5e-04 fbcon_putc c0193760 1 1.5e-04 file_ioctl c0169ce0 1 1.5e-04 filemap_sync_pte c0140780 1 1.5e-04 find_task_by_pid c018a040 1 1.5e-04 flush_old_exec c0135080 1 1.5e-04 flush_sigqueue c0107750 1 1.5e-04 flush_thread c021ea10 1 1.5e-04 flush_to_ldisc c02e1bb0 1 1.5e-04 fn_hash_lookup c017b060 1 1.5e-04 fsync_super c0186e50 1 1.5e-04 generic_writepages c01ca400 1 1.5e-04 get_dtype c01a49d0 1 1.5e-04 get_filesystem_list c01390f0 1 1.5e-04 get_signal_to_deliver c0172150 1 1.5e-04 get_swap_page c022d490 1 1.5e-04 gotoxay c01801a0 1 1.5e-04 init_buffer c01956d0 1 1.5e-04 init_once c015c5b0 1 1.5e-04 invalidate_complete_page c015cab0 1 1.5e-04 invalidate_mapping_pages c02a5ae0 1 1.5e-04 ip_local_deliver c02a5d40 1 1.5e-04 ip_rcv c01e9070 1 1.5e-04 journal_check_used_features c01de0f0 1 1.5e-04 journal_forget c01e7d90 1 1.5e-04 journal_get_descriptor_buffer c01e5d90 1 1.5e-04 journal_revoke c01e9140 1 1.5e-04 journal_set_features c022b0f0 1 1.5e-04 k_shift c022bb30 1 1.5e-04 kbd_event c01b59f0 1 1.5e-04 load_elf_interp c0195f10 1 1.5e-04 
locks_insert_lock c01e7790 1 1.5e-04 log_wait_commit c01c3f70 1 1.5e-04 meminfo_read_proc c0274cc0 1 1.5e-04 mousedev_packet c0168b10 1 1.5e-04 move_one_page c01adb80 1 1.5e-04 mpage_readpage c01bf370 1 1.5e-04 name_to_int c028ba60 1 1.5e-04 netif_rx c0189b00 1 1.5e-04 open_exec c01b5510 1 1.5e-04 padzero c02a5560 1 1.5e-04 peer_check_expire c011e820 1 1.5e-04 pgd_dtor c018cdb0 1 1.5e-04 pipe_read_fasync c018cfc0 1 1.5e-04 pipe_write_release c0230c90 1 1.5e-04 poke_blanked_console c0128da0 1 1.5e-04 printk c01bb5e0 1 1.5e-04 proc_alloc_inode c01bb660 1 1.5e-04 proc_destroy_inode c01bfd80 1 1.5e-04 proc_pid_flush c01bc540 1 1.5e-04 proc_root_link c028bff0 1 1.5e-04 process_backlog c0189620 1 1.5e-04 put_dirty_page c012af70 1 1.5e-04 put_fs_struct c022a710 1 1.5e-04 puts_queue c0129e50 1 1.5e-04 release_task c0110010 1 1.5e-04 release_x86_irqs c018b720 1 1.5e-04 remove_arg_zero c02a09c0 1 1.5e-04 rt_intern_hash c02041c0 1 1.5e-04 rwsem_down_read_failed c0120bd0 1 1.5e-04 schedule_tail c018b7b0 1 1.5e-04 search_binary_handler c01a90a0 1 1.5e-04 seq_lseek c0192210 1 1.5e-04 set_close_on_exec c0109bb0 1 1.5e-04 setup_frame c02887a0 1 1.5e-04 skb_recv_datagram c0285950 1 1.5e-04 sock_alloc_send_pskb c0282c60 1 1.5e-04 sock_create c0166fe0 1 1.5e-04 split_vma c0175e30 1 1.5e-04 swap_duplicate c01729c0 1 1.5e-04 swap_entry_free c0171200 1 1.5e-04 swap_readpage c0278f00 1 1.5e-04 synaptics_process_byte c0134270 1 1.5e-04 sys_alarm c0177240 1 1.5e-04 sys_chdir c0107fe0 1 1.5e-04 sys_execve c013e480 1 1.5e-04 sys_getrlimit c0178bc0 1 1.5e-04 sys_lseek c0134560 1 1.5e-04 sys_nanosleep c013e0a0 1 1.5e-04 sys_newuname c0194ff0 1 1.5e-04 sys_poll c013dda0 1 1.5e-04 sys_setsid c0109520 1 1.5e-04 sys_sigreturn c0283f30 1 1.5e-04 sys_socketcall c0179740 1 1.5e-04 sys_writev c010a62f 1 1.5e-04 syscall_exit c02c1810 1 1.5e-04 tcp_connect c02300c0 1 1.5e-04 tioclinux c016a950 1 1.5e-04 try_to_unmap c021d060 1 1.5e-04 tty_fasync c021c900 1 1.5e-04 tty_open c02243f0 1 1.5e-04 
tty_wait_until_sent c02d2a00 1 1.5e-04 udp_queue_rcv_skb c02d3290 1 1.5e-04 udp_rcv c01c3eb0 1 1.5e-04 uptime_read_proc c0190760 1 1.5e-04 vfs_rmdir c0163030 1 1.5e-04 vmtruncate c0120880 1 1.5e-04 wake_up_state c01ad490 1 1.5e-04 writeback_in_progress ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-30 14:14 ` Thomas Molina @ 2003-12-30 14:39 ` William Lee Irwin III 2003-12-30 21:14 ` Thomas Molina 2003-12-30 18:20 ` Linus Torvalds 1 sibling, 1 reply; 50+ messages in thread From: William Lee Irwin III @ 2003-12-30 14:39 UTC (permalink / raw) To: Thomas Molina; +Cc: Linus Torvalds, Kernel Mailing List On Tue, Dec 30, 2003 at 09:14:31AM -0500, Thomas Molina wrote: > CPU: PIII, speed 648.076 MHz (estimated) > Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 324038 > vma samples % symbol name > c014a1a0 195865 29.5231 module_text_address > c0118510 90031 13.5705 mark_offset_tsc > c0111920 49842 7.5128 mask_and_ack_8259A > c014a2e0 23263 3.5065 kallsyms_lookup > c0141bb0 15389 2.3196 kernel_text_address > c01d1600 15017 2.2635 ext3_find_entry > c0111550 14170 2.1359 enable_8259A_irq > c0272560 13515 2.0371 cfb_imageblit > c0120330 8149 1.2283 kernel_map_pages > c0205f00 7819 1.1786 __io_virt_debug > c0156560 7685 1.1584 poison_obj > c01564d0 7497 1.1300 store_stackinfo Okay, thus far we have some seriously performance-affecting debug options. Could you turn those off and build non-modular? -- wli ^ permalink raw reply [flat|nested] 50+ messages in thread
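The profile above points at debug instrumentation: `kernel_map_pages` suggests page-alloc debugging, and `poison_obj`/`store_stackinfo` suggest slab debugging. A minimal sketch of how one might spot such options in a kernel `.config` — the sample config contents here are made up for illustration, and the exact option names on a given tree may differ:

```shell
# Sketch: list performance-affecting debug options enabled in a .config.
# The sample contents below are illustrative, not from the poster's config.
config_sample='CONFIG_DEBUG_SLAB=y
CONFIG_DEBUG_PAGEALLOC=y
CONFIG_MODULES=y
CONFIG_EXT3_FS=y'

# Count and show every CONFIG_DEBUG_* option that is switched on.
enabled_debug=$(printf '%s\n' "$config_sample" | grep -c '^CONFIG_DEBUG_.*=y')
printf '%s\n' "$config_sample" | grep '^CONFIG_DEBUG_.*=y'
```

Against a real tree the equivalent would be `grep '^CONFIG_DEBUG' .config`, then setting the offenders to `n` and rebuilding (and, per wli's request, building non-modular so `module_text_address` lookups drop out of the profile).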
* Re: 2.6.0 performance problems 2003-12-30 14:39 ` William Lee Irwin III @ 2003-12-30 21:14 ` Thomas Molina 2003-12-30 21:23 ` Linus Torvalds ` (2 more replies) 0 siblings, 3 replies; 50+ messages in thread From: Thomas Molina @ 2003-12-30 21:14 UTC (permalink / raw) To: William Lee Irwin III; +Cc: Linus Torvalds, Kernel Mailing List [-- Attachment #1: Type: TEXT/PLAIN, Size: 1066 bytes --] On Tue, 30 Dec 2003, William Lee Irwin III wrote: > Okay, thus far we have some seriously performance-affecting debug > options. Could you turn those off and build non-modular? Done. report1.txt is the result of: opreport `bk export linux-2.5 linux-2.6-testb` report2.txt is the result of: opreport -l vmlinux The times for this operation are: real 15m20s user 0m35s sys 0m20s On my main system (1.3GHz Athlon, 512MB memory, fast hard drive; in other words, it has plenty of resources) I get similar results, scaled down of course. On 2.4 the times are real 3m47s user 14s sys 7s On 2.6 the times are real 3m27s user 14s sys 7s I also get 90+ percent iowait under 2.6 and 0 iowait in 2.4. I'm not sure how the alleged suckiness of 2.6 paging fits into this. On this system the execution times are almost the same. On this machine, in addition to the iowait differences, there are differences in the CPU usage statistics reported by top. On 2.4 idle time is 70 percent, while on 2.6 idle time is near zero. I'm not sure what the significance of this is.
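The iowait figures being compared here come from the `cpu` line of `/proc/stat`; 2.4 kernels have no iowait field at all (which is why 2.4 reports 0), while 2.6 splits iowait out of idle. A minimal sketch of the arithmetic top performs, using a made-up sample line chosen to mirror the percentages above:

```shell
# Sketch: deriving an iowait percentage from a 2.6 /proc/stat "cpu" line.
# Fields after "cpu" on 2.6 are: user nice system idle iowait irq softirq,
# all in jiffies.  The sample line below is illustrative, not measured data.
stat_line="cpu 4203 0 5404 0 92600 900 300"

# Sum all jiffy counters, then express the iowait field ($6) as a share.
iowait_pct=$(printf '%s\n' "$stat_line" |
    awk '{ total = 0; for (i = 2; i <= NF; i++) total += $i;
           printf "%.1f", $6 * 100 / total }')
echo "iowait: ${iowait_pct}%"
```

In practice top samples this line twice and computes the percentage from the deltas; a single snapshot, as here, gives the share since boot.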
[-- Attachment #2: Type: TEXT/PLAIN, Size: 589 bytes --] CPU: PIII, speed 648.072 MHz (estimated) Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 324036 91780 55.8442 vmlinux 51219 31.1646 bk 15294 9.3057 libc-2.3.2.so 4674 2.8439 libperl.so 618 0.3760 perl 266 0.1618 oprofiled 249 0.1515 bash 119 0.0724 libpthread-0.60.so 57 0.0347 sendmail.sendmail 38 0.0231 ld-2.3.2.so 22 0.0134 cupsd 8 0.0049 init 4 0.0024 syslogd 1 6.1e-04 libdl-2.3.2.so 1 6.1e-04 gpm [-- Attachment #3: Type: TEXT/PLAIN, Size: 35154 bytes --] CPU: PIII, speed 648.072 MHz (estimated) Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 324036 vma samples % symbol name c0115e20 22498 22.6776 mark_offset_tsc c0110080 12707 12.8084 mask_and_ack_8259A c018eec0 7115 7.1718 ext3_find_entry c010ff60 4013 4.0450 enable_8259A_irq c0168d50 2650 2.6712 __d_lookup c015eb10 1727 1.7408 link_path_walk c010afd0 1482 1.4938 irq_entries_start c027f730 1316 1.3265 ide_outb c0116920 1043 1.0513 apm_bios_call_simple c0187110 1006 1.0140 find_next_usable_block c0225740 718 0.7237 __copy_from_user_ll c0225200 693 0.6985 atomic_dec_and_lock c010b870 662 0.6673 apic_timer_interrupt c013d3a0 661 0.6663 kmem_cache_alloc c027f6c0 640 0.6451 ide_inb c0197a40 532 0.5362 ext3_permission c016a2e0 498 0.5020 find_inode_fast c0223390 466 0.4697 radix_tree_lookup c0198cf0 460 0.4637 do_get_write_access c01894d0 439 0.4425 ext3_new_inode c0143560 432 0.4354 do_anonymous_page c0127bf0 430 0.4334 run_timer_softirq c0155df0 428 0.4314 block_write_full_page c01184b0 398 0.4012 smp_apic_timer_interrupt c0225430 396 0.3992 strncpy_from_user c01237c0 391 0.3941 do_softirq c0154000 389 0.3921 __find_get_block c0154910 380 0.3830 __block_prepare_write c011be50 379 0.3820 schedule c0167c90 373 0.3760 dput c01a0180 366 0.3689 journal_add_journal_head c015ea60 361 0.3639 do_lookup c013d460 357 0.3599 kmem_cache_free 
c0152fa0 349 0.3518 __find_get_block_slow c019ad50 344 0.3467 journal_commit_transaction c010d480 339 0.3417 do_IRQ c011b9a0 327 0.3296 scheduler_tick c0144f00 302 0.3044 do_mmap_pgoff c018d430 302 0.3044 ext3_read_inode c010ae57 297 0.2994 sysenter_past_esp c01687b0 288 0.2903 d_alloc c0260290 267 0.2691 __make_request c02256c0 264 0.2661 __copy_to_user_ll c01375f0 264 0.2661 generic_file_aio_write_nolock c025ec00 248 0.2500 blk_rq_map_sg c013d1e0 246 0.2480 free_block c0127a60 246 0.2480 update_one_process c0135c60 243 0.2449 find_get_page c0168170 235 0.2369 prune_dcache c0171790 234 0.2359 do_mpage_readpage c019a800 231 0.2328 __journal_file_buffer c01238f0 229 0.2308 raise_softirq c01396a0 220 0.2218 __rmqueue c018aaa0 215 0.2167 ext3_get_block_handle c0139af0 212 0.2137 buffered_rmqueue c018d710 210 0.2117 ext3_do_update_inode c012df00 210 0.2117 supplemental_group_member c027ce90 208 0.2097 ide_do_request c010d120 206 0.2076 handle_IRQ_event c010af0a 206 0.2076 restore_all c0111210 201 0.2026 timer_interrupt c02a3780 191 0.1925 cfb_imageblit c0152880 190 0.1915 wake_up_buffer c010d250 186 0.1875 note_interrupt c0138970 185 0.1865 mempool_alloc c0153f00 183 0.1845 bh_lru_install c012bed0 181 0.1824 do_sigaction c0169940 180 0.1814 alloc_inode c013cfd0 180 0.1814 cache_alloc_refill c010ff00 178 0.1794 disable_8259A_irq c0199d20 175 0.1764 journal_stop c0109490 174 0.1754 __switch_to c010b850 172 0.1734 common_interrupt c015e5c0 166 0.1673 permission c02d75e0 162 0.1633 i8042_interrupt c018d050 161 0.1623 ext3_get_inode_block c0198400 161 0.1623 start_this_handle c018af50 158 0.1593 ext3_getblk c0199ff0 156 0.1572 __journal_unfile_buffer c0187790 156 0.1572 ext3_new_block c012c770 156 0.1572 notifier_call_chain c015e930 154 0.1552 follow_mount c02900d0 153 0.1542 ide_build_dmatable c0127970 150 0.1512 update_wall_time_one_tick c0156710 149 0.1502 bio_alloc c02a3120 149 0.1502 bitcpy_rev c0187ec0 149 0.1502 ext3_check_dir_entry c01456f0 149 0.1502 find_vma 
c018a3e0 147 0.1482 ext3_get_branch c0116bd0 146 0.1472 apm_cpu_idle c0127dc0 146 0.1472 do_timer c018fae0 144 0.1451 add_dirent_to_buf c0127a20 142 0.1431 update_wall_time c0116b20 141 0.1421 apm_do_idle c025f970 141 0.1421 get_request c015f5f0 141 0.1421 path_lookup c013fbc0 139 0.1401 shrink_cache c0199870 138 0.1391 journal_dirty_metadata c02a2dd0 137 0.1381 bitcpy c011a3a0 137 0.1381 do_page_fault c015b1b0 136 0.1371 cp_new_stat64 c019c880 135 0.1361 __try_to_free_cp_buf c0127b80 135 0.1361 update_process_times c0260830 129 0.1300 generic_make_request c016ad60 126 0.1270 iput c0139250 124 0.1250 bad_range c0141f80 124 0.1250 zap_pte_range c0265580 122 0.1230 as_dispatch_request c0156a00 122 0.1230 bio_add_page c0170dd0 122 0.1230 sync_sb_inodes c0127370 121 0.1220 __mod_timer c0136bc0 121 0.1220 filemap_nopage c011b370 121 0.1220 recalc_task_prio c0143760 120 0.1210 do_no_page c018a290 120 0.1210 ext3_block_to_path c01874f0 120 0.1210 ext3_try_to_allocate c0139c60 116 0.1169 __alloc_pages c027f740 115 0.1159 ide_outbsync c015aa10 112 0.1129 generic_fillattr c01207f0 112 0.1129 profile_hook c010baa8 111 0.1119 page_fault c013b3e0 110 0.1109 __set_page_dirty_nobuffers c011b540 109 0.1099 try_to_wake_up c0153db0 108 0.1089 __brelse c0139640 108 0.1089 prep_new_page c0339700 107 0.1079 increment_tail c01272b0 107 0.1079 internal_add_timer c012df40 106 0.1068 in_group_p c0281250 104 0.1048 do_rw_taskfile c0147d00 104 0.1048 page_add_rmap c0135ab0 104 0.1048 unlock_page c01399c0 103 0.1038 free_hot_cold_page c027f780 102 0.1028 ide_outl c012c400 101 0.1018 sys_rt_sigaction c02658e0 99 0.0998 as_add_request c025e9e0 97 0.0978 blk_recount_segments c010feb0 97 0.0978 end_8259A_irq c018d160 96 0.0968 ext3_get_inode_loc c013fee0 96 0.0968 refill_inactive_zone c0150820 95 0.0958 get_unused_fd c013ea60 93 0.0937 __pagevec_lru_add c0225560 92 0.0927 __copy_user_intel c01a02d0 91 0.0917 __journal_remove_journal_head c0171b40 91 0.0917 mpage_readpages c0115da0 91 0.0917 
sched_clock c0264b20 88 0.0887 as_update_iohist c016a040 88 0.0887 prune_icache c0145d50 87 0.0877 do_munmap c011d9d0 87 0.0877 prepare_to_wait c0265390 86 0.0867 as_move_to_dispatch c027d630 86 0.0867 ide_intr c018b280 85 0.0857 walk_page_buffers c015e3d0 84 0.0847 getname c011c4c0 82 0.0827 __wake_up c0169ae0 82 0.0827 inode_init_once c015fb10 81 0.0816 may_open c028feb0 80 0.0806 ide_build_sglist c011c3e0 80 0.0806 preempt_schedule c01709a0 79 0.0796 __mark_inode_dirty c01a04b0 78 0.0786 journal_put_journal_head c0135860 77 0.0776 add_to_page_cache c010ade1 77 0.0776 ret_from_intr c01540f0 76 0.0766 __getblk c0186aa0 76 0.0766 ext3_get_group_desc c01987f0 76 0.0766 journal_start c0223570 76 0.0766 radix_tree_delete c02239b0 76 0.0766 rb_erase c011c450 75 0.0756 __wake_up_common c0265c80 75 0.0756 as_queue_empty c027cbd0 74 0.0746 start_request c0264540 73 0.0736 as_choose_req c0260c70 72 0.0726 __end_that_request_first c019dd50 72 0.0726 journal_cancel_revoke c013e620 72 0.0726 mark_page_accessed c02a2630 71 0.0716 bitfill32 c01560e0 71 0.0716 ll_rw_block c0287c00 70 0.0706 lba_28_rw_disk c013d960 70 0.0706 reap_timer_fnc c0143c00 69 0.0696 handle_mm_fault c01359b0 69 0.0696 page_waitqueue c0154520 68 0.0685 __block_write_full_page c019e630 68 0.0685 journal_write_metadata_buffer c025f4e0 67 0.0675 blk_run_queues c0138ad0 67 0.0675 mempool_free c01528b0 67 0.0675 unlock_buffer c018de90 66 0.0665 ext3_dirty_inode c02232d0 66 0.0665 radix_tree_insert c0339720 66 0.0665 sync_buffer c0170ae0 65 0.0655 __sync_single_inode c010d380 65 0.0655 enable_irq c027bee0 65 0.0655 ide_end_request c02807d0 65 0.0655 ide_execute_command c010dde0 64 0.0645 disable_irq_nosync c016a520 64 0.0645 get_new_inode_fast c0199680 64 0.0645 journal_dirty_data c013f660 64 0.0645 shrink_list c015aab0 64 0.0645 vfs_getattr c0151550 64 0.0645 vfs_write c0116100 63 0.0635 delay_tsc c0155f70 63 0.0635 submit_bh c02609c0 63 0.0635 submit_bio c0264e10 62 0.0625 as_completed_request c018de30 62 
0.0625 ext3_mark_inode_dirty c0261230 62 0.0625 get_io_context c02233f0 61 0.0615 __lookup c013bd20 61 0.0615 do_page_cache_readahead c0264480 60 0.0605 as_add_arq_rb c0265890 60 0.0605 as_next_request c02664a0 60 0.0605 as_set_request c0139360 60 0.0605 free_pages_bulk c02649a0 59 0.0595 as_can_break_anticipation c01275f0 59 0.0595 del_timer c02823f0 59 0.0595 ide_handler_parser c013e820 59 0.0595 release_pages c015fa30 59 0.0595 vfs_create c0144c10 59 0.0595 vma_merge c02237f0 58 0.0585 __rb_erase_color c018b380 58 0.0585 ext3_prepare_write c01722e0 58 0.0585 mpage_writepages c025fee0 57 0.0575 disk_round_stats c015f8c0 56 0.0564 __user_walk c0234410 56 0.0564 add_timer_randomness c0265300 56 0.0564 as_fifo_expired c018f4f0 56 0.0564 ext3_lookup c018a8a0 56 0.0564 ext3_splice_branch c0225340 56 0.0564 memcpy c0142c50 55 0.0554 do_wp_page c025fe50 55 0.0554 drive_stat_acct c027fe90 55 0.0554 ide_wait_stat c0147dd0 55 0.0554 page_remove_rmap c0152450 54 0.0544 __fput c0264ff0 54 0.0544 as_remove_queued_request c018dd60 54 0.0544 ext3_reserve_inode_write c025f8b0 54 0.0544 freed_request c0290590 54 0.0544 ide_start_dma c0199550 54 0.0544 journal_get_undo_access c0199300 53 0.0534 journal_get_write_access c015e780 53 0.0534 real_lookup c013e570 52 0.0524 activate_page c0264370 52 0.0524 as_add_arq_hash c025d9d0 52 0.0524 elv_queue_empty c015fcd0 52 0.0524 open_namei c0223700 52 0.0524 rb_insert_color c013b9d0 52 0.0524 read_pages c0152ae0 51 0.0514 end_buffer_write_sync c018dca0 51 0.0514 ext3_writepage_trans_blocks c0145a70 51 0.0514 unmap_region c0142200 51 0.0514 unmap_vmas c012ede0 51 0.0514 worker_thread c0154400 50 0.0504 create_empty_buffers c0168980 50 0.0504 d_instantiate c01455b0 50 0.0504 get_unmapped_area c013ae30 49 0.0494 balance_dirty_pages_ratelimited c0135b90 48 0.0484 __lock_page c0265220 48 0.0484 as_remove_request c016b2e0 48 0.0484 inode_change_ok c0141a70 48 0.0484 pte_alloc_map c02231c0 48 0.0484 radix_tree_preload c016b200 48 0.0484 
wake_up_inode c02345c0 47 0.0474 add_disk_randomness c0152860 47 0.0474 bh_waitq_head c0167c10 47 0.0474 d_callback c018b580 47 0.0474 ext3_ordered_commit_write c0145910 47 0.0474 free_pgtables c016a960 47 0.0474 iget_locked c0265cf0 46 0.0464 as_merge c01900e0 46 0.0464 ext3_add_entry c018db00 46 0.0464 ext3_setattr c0192910 45 0.0454 ext3_alloc_inode c01115c0 44 0.0444 sys_mmap2 c0167c50 43 0.0433 d_free c015abb0 43 0.0433 vfs_lstat c021fe30 42 0.0423 cap_vm_enough_memory c0282350 42 0.0423 ide_pre_handler_parser c0144770 42 0.0423 remove_shared_vm_struct c01397c0 42 0.0423 rmqueue_bulk c0142180 42 0.0423 unmap_page_range c0265b90 41 0.0413 as_insert_request c0141620 41 0.0413 blk_queue_bounce c0135d30 41 0.0413 find_lock_page c0145760 41 0.0413 find_vma_prev c0152280 41 0.0413 get_empty_filp c016b650 41 0.0413 notify_change c0264500 40 0.0403 as_find_arq_rb c0192090 40 0.0403 ext3_journal_start c0282490 40 0.0403 ide_cmd_type_parser c01987a0 40 0.0403 new_handle c016a350 40 0.0403 new_inode c0148230 39 0.0393 __pte_chain_free c01570c0 39 0.0393 bio_endio c027fdb0 39 0.0393 drive_is_ready c018d3d0 39 0.0393 ext3_set_inode_flags c0287de0 39 0.0393 ide_do_rw_disk c01459b0 39 0.0393 unmap_vma c0152770 38 0.0383 __constant_c_and_count_memset c0287910 38 0.0383 __ide_do_rw_disk c01566b0 38 0.0383 bio_destructor c025f1f0 38 0.0383 blk_plug_device c0156280 38 0.0383 drop_buffers c011dad0 38 0.0383 finish_wait c016b460 38 0.0383 inode_setattr c0145f00 38 0.0383 sys_munmap c02907e0 37 0.0373 __ide_dma_begin c013eb60 37 0.0373 __pagevec_lru_add_active c0153a00 37 0.0373 create_buffers c0123740 37 0.0373 current_kernel_time c025d880 37 0.0373 elv_next_request c01448f0 37 0.0373 find_vma_prepare c0287a30 37 0.0373 get_command c028fe00 37 0.0373 ide_dma_intr c0223af0 37 0.0373 rb_next c0150340 37 0.0373 sys_chmod c02252d0 36 0.0363 __const_udelay c03398b0 36 0.0363 add_event_entry c02643c0 36 0.0363 as_find_arq_hash c011b320 36 0.0363 effective_prio c013a120 36 0.0363 
nr_free_pages c0153420 35 0.0353 inode_has_buffers c019aca0 35 0.0353 journal_end_buffer_io_sync c018a5f0 34 0.0343 ext3_alloc_branch c015b3a0 34 0.0343 inode_add_bytes c0167220 34 0.0343 locks_remove_posix c025d7d0 33 0.0333 __elv_add_request c02908d0 33 0.0333 __ide_dma_test_irq c0264cf0 33 0.0333 as_update_arq c013cd40 33 0.0333 cache_grow c013b860 33 0.0333 file_ra_state_init c02cb2e0 33 0.0333 stall_callback c016ae80 33 0.0333 update_atime c0290640 32 0.0323 __ide_dma_read c01690a0 32 0.0323 d_rehash c0188ad0 32 0.0323 ext3_file_write c0150b10 32 0.0323 filp_close c016ae30 32 0.0323 inode_times_differ c016af60 32 0.0323 inode_update_time c01a00d0 32 0.0323 journal_alloc_journal_head c01482b0 32 0.0323 pte_chain_alloc c0192100 31 0.0312 __ext3_journal_stop c0290830 31 0.0312 __ide_dma_end c0151490 31 0.0312 do_sync_write c0153260 31 0.0312 end_buffer_async_write c0138240 31 0.0312 generic_file_aio_write c014ff40 31 0.0312 sys_access c025ff30 30 0.0302 __blk_put_request c0154d30 30 0.0302 __block_commit_write c02909b0 30 0.0302 __ide_dma_count c0145b50 30 0.0302 detach_vmas_to_be_unmapped c013d7e0 30 0.0302 drain_array c02a21e0 30 0.0302 soft_cursor c0150ba0 30 0.0302 sys_close c0156500 29 0.0292 alloc_buffer_head c025dcd0 29 0.0292 elv_try_last_merge c0150a10 29 0.0292 fd_install c0265110 28 0.0282 as_remove_dispatched_request c016c0e0 28 0.0282 dnotify_parent c027d230 28 0.0282 do_ide_request c0155f30 28 0.0282 end_bio_bh_io_sync c018a4d0 28 0.0282 ext3_find_near c0197c00 28 0.0282 ext3_init_acl c0188a20 28 0.0282 ext3_release_file c0150650 28 0.0282 filp_open c0339320 28 0.0282 get_exec_dcookie c0138b90 28 0.0282 mempool_alloc_slab c018b1d0 27 0.0272 ext3_bread c0135ed0 27 0.0272 find_get_pages c0261140 27 0.0272 put_io_context c016b5f0 27 0.0272 setattr_mask c029d860 27 0.0272 sys_outbuf c016aa20 26 0.0262 __insert_inode_hash c0153620 26 0.0262 __set_page_dirty_buffers c0264670 26 0.0262 as_find_next_arq c0260ec0 26 0.0262 end_that_request_last c010b8d8 26 
0.0262 error_code c01a0290 26 0.0262 journal_grab_journal_head c025ed60 26 0.0262 ll_back_merge_fn c0171540 26 0.0262 mpage_end_io_read c012fb50 26 0.0262 rcu_do_batch c0127db0 26 0.0262 run_local_timers c0153c60 25 0.0252 __getblk_slow c01568e0 25 0.0252 bio_put c0169c80 25 0.0252 clear_inode c018a560 25 0.0252 ext3_find_goal c018b920 25 0.0252 ext3_ordered_writepage c0128ea0 25 0.0252 rm_from_queue c0144a40 25 0.0252 vma_link c02648e0 24 0.0242 as_close_req c025f370 24 0.0242 generic_unplug_device c015e6b0 24 0.0242 path_release c011d960 24 0.0242 remove_wait_queue c0280fa0 23 0.0232 SELECT_DRIVE c013d3e0 23 0.0232 __kmalloc c011d8a0 23 0.0232 add_wait_queue c02647d0 23 0.0232 as_antic_stop c0135b10 23 0.0232 end_page_writeback c0192a80 23 0.0232 ext3_clear_inode c018ae00 23 0.0232 ext3_get_block c018dd20 23 0.0232 ext3_mark_iloc_dirty c0137210 23 0.0232 generic_file_mmap c013e720 23 0.0232 lru_add_drain c0153d60 23 0.0232 mark_buffer_dirty c0147c50 23 0.0232 page_referenced c012ec00 23 0.0232 queue_work c0145a40 23 0.0232 unmap_vma_list c019e930 22 0.0222 __log_space_left c013cc90 22 0.0222 cache_init_objs c012fb20 22 0.0222 call_rcu c014cb10 22 0.0222 free_page_and_swap_cache c02234d0 22 0.0222 radix_tree_gang_lookup c01516f0 22 0.0222 sys_write c02c7610 22 0.0222 uhci_hub_status_data c0142110 22 0.0222 zap_pmd_range c0264ae0 21 0.0212 as_can_anticipate c02612b0 21 0.0212 copy_io_context c01506c0 21 0.0212 dentry_open c0167340 21 0.0212 locks_remove_flock c029d8a0 21 0.0212 move_buf_aligned c0186b60 21 0.0212 read_block_bitmap c010ae14 21 0.0212 resume_kernel c02b6fa0 21 0.0212 rh_report_status c015b320 21 0.0212 sys_lstat64 c0163d00 21 0.0212 sys_select c0266430 20 0.0202 as_put_request c025f2c0 20 0.0202 blk_remove_plug c01893d0 20 0.0202 find_group_other c011d2f0 20 0.0202 io_schedule c0138bb0 20 0.0202 mempool_free_slab c0154260 20 0.0202 set_bh_page c0225260 19 0.0192 __delay c011db50 19 0.0192 autoremove_wake_function c0168cf0 19 0.0192 d_lookup c0197da0 
19 0.0192 ext3_acl_chmod c0190a60 19 0.0192 ext3_create c0339390 19 0.0192 lookup_dcookie c01374c0 19 0.0192 remove_suid c0264320 18 0.0181 as_remove_merge_hints c013d2c0 18 0.0181 cache_flusharray c025dd60 18 0.0181 clear_queue_congested c0152550 18 0.0181 fget_light c0152620 18 0.0181 file_move c0152430 18 0.0181 fput c0153b20 18 0.0181 grow_dev_page c0150a50 18 0.0181 sys_open c013b520 18 0.0181 test_clear_page_dirty c01541b0 17 0.0171 __bread c01354f0 17 0.0171 __remove_from_page_cache c025da50 17 0.0171 elv_set_request c018a250 17 0.0171 ext3_alloc_block c0152510 17 0.0171 fget c019ff70 17 0.0171 journal_blocks_per_page c015f810 17 0.0171 lookup_hash c0171660 17 0.0171 mpage_alloc c0160ae0 17 0.0171 sys_unlink c01529e0 16 0.0161 __set_page_buffers c0152900 16 0.0161 __wait_on_buffer c03394b0 16 0.0161 add_user_ctx_switch c0157440 16 0.0161 bio_phys_segments c016bdd0 16 0.0161 dnotify_flush c013d8b0 16 0.0161 drain_array_locked c025d6e0 16 0.0161 elv_merge c0155590 16 0.0161 generic_commit_write c02cb480 16 0.0161 init_stall_timer c013f120 16 0.0161 invalidate_mapping_pages c0261120 16 0.0161 kblockd_schedule_work c0127580 16 0.0161 mod_timer c0168bf0 15 0.0151 d_splice_alias c025daf0 15 0.0151 elv_completed_request c01909e0 15 0.0151 ext3_add_nondir c018bd90 15 0.0151 ext3_releasepage c018c060 15 0.0151 ext3_set_aops c02cbb70 15 0.0151 hc_state_transitions c016b090 15 0.0151 i_waitq_head c0223260 15 0.0151 radix_tree_extend c02236c0 14 0.0141 __rb_rotate_right c0155510 14 0.0141 block_prepare_write c01639f0 14 0.0141 do_select c01757e0 14 0.0141 eventpoll_init_file c0152680 14 0.0141 file_kill c013e6c0 14 0.0141 lru_cache_add_active c0290710 13 0.0131 __ide_dma_write c0264450 13 0.0131 as_find_first_arq c02663c0 13 0.0131 as_work_handler c0144b60 13 0.0131 can_vma_merge_before c011c430 13 0.0131 default_wake_function c0191340 13 0.0131 ext3_orphan_del c0135e00 13 0.0131 find_or_create_page c0156560 13 0.0131 free_buffer_head c0139ad0 13 0.0131 free_hot_page 
c0122080 13 0.0131 next_thread c010adfc 13 0.0131 resume_userspace c0160950 13 0.0131 vfs_unlink c019d050 12 0.0121 __journal_clean_checkpoint_list c0188a90 12 0.0121 ext3_open_file c018bd20 12 0.0121 ext3_readpages c019aab0 12 0.0121 journal_file_buffer c01806a0 12 0.0121 proc_lookup c01564c0 12 0.0121 recalc_bh_state c0169c20 11 0.0111 __iget c03395a0 11 0.0111 add_us_sample c0265e90 11 0.0111 as_merged_request c015e6f0 11 0.0111 cached_lookup c018b490 11 0.0111 ext3_journal_dirty_data c029da30 11 0.0111 fb_get_buffer_offset c016abe0 11 0.0111 generic_forget_inode c013d4b0 11 0.0111 kfree c01637d0 11 0.0111 poll_freewait c0188c10 11 0.0111 read_inode_bitmap c0156360 11 0.0111 try_to_free_buffers c0281030 10 0.0101 SELECT_MASK c0223680 10 0.0101 __rb_rotate_left c0339610 10 0.0101 add_sample c0127680 10 0.0101 cascade c0156220 10 0.0101 check_ttfb_buffer c025d990 10 0.0101 elv_remove_request c0152ab0 10 0.0101 end_buffer_read_sync c0299200 10 0.0101 fb_flashcursor c02d78a0 10 0.0101 i8042_timer_func c0223b50 10 0.0101 rb_prev c013c3b0 10 0.0101 slab_destroy c01544b0 10 0.0101 unmap_underlying_metadata c0141c10 9 0.0091 copy_page_range c01890f0 9 0.0091 find_group_orlov c019dec0 9 0.0091 journal_write_revoke_records c0223160 9 0.0091 radix_tree_node_alloc c0223670 9 0.0091 radix_tree_node_ctor c033c7b0 9 0.0091 sock_poll c0339670 9 0.0091 take_tasks_mm c01449a0 8 0.0081 __vma_link c02642d0 8 0.0081 as_get_io_context c0260ea0 8 0.0081 end_that_request_first c018a0f0 8 0.0081 ext3_put_inode c018ee90 8 0.0081 ext3_update_dx_flag c019d170 7 0.0071 __journal_insert_checkpoint c0135960 7 0.0071 add_to_page_cache_lru c0169d40 7 0.0071 dispose_list c025d710 7 0.0071 elv_merged_request c025da90 7 0.0071 elv_put_request c01a0160 7 0.0071 journal_free_journal_head c019ebb0 7 0.0071 journal_next_log_block c01542a0 7 0.0071 try_to_release_page c0171030 7 0.0071 writeback_inodes c013e9d0 6 0.0060 __pagevec_release_nonlru c0157470 6 0.0060 bio_hw_segments c0286a90 6 0.0060 
default_end_request c0136040 6 0.0060 do_generic_mapping_read c013b200 6 0.0060 do_writepages c018b850 6 0.0060 ext3_bmap c01638f0 6 0.0060 max_select_fd c0119f60 6 0.0060 pte_alloc_one c0127f80 6 0.0060 schedule_timeout c012f3d0 6 0.0060 schedule_work c015b360 6 0.0060 sys_fstat64 c0122e10 6 0.0060 sys_time c035fec0 6 0.0060 tcp_poll c015f730 5 0.0050 __lookup_hash c013e780 5 0.0050 __page_cache_release c0123960 5 0.0050 __tasklet_schedule c0339570 5 0.0050 add_sample_entry c0264700 5 0.0050 as_antic_expired c0158d10 5 0.0050 blkdev_writepage c0350460 5 0.0050 dev_watchdog c019d6e0 5 0.0050 find_revoke_record c03396b0 5 0.0050 get_slots c0290270 5 0.0050 ide_destroy_dmatable c0153ab0 5 0.0050 init_page_buffers c013e660 5 0.0050 lru_cache_add c023bee0 5 0.0050 read_chan c0153970 5 0.0050 remove_inode_buffers c0111f00 5 0.0050 restore_fpu c0339800 5 0.0050 sync_cpu_buffers c0160430 5 0.0050 vfs_mkdir c0151360 5 0.0050 vfs_read c013a060 4 0.0040 __pagevec_free c0144960 4 0.0040 __vma_link_rb c0339470 4 0.0040 add_kernel_ctx_switch c0140720 4 0.0040 balance_pgdat c016adf0 4 0.0040 bmap c01173c0 4 0.0040 check_events c0299250 4 0.0040 cursor_timer_handler c01512a0 4 0.0040 do_sync_read c0155ef0 4 0.0040 generic_block_bmap c015e610 4 0.0040 get_write_access c0156640 4 0.0040 init_buffer c01565d0 4 0.0040 init_buffer_head c013edd0 4 0.0040 invalidate_complete_page c01a0470 4 0.0040 journal_remove_journal_head c019a2e0 4 0.0040 journal_try_to_free_buffers c013bea0 4 0.0040 page_cache_readahead c0170a90 4 0.0040 write_inode c01714f0 4 0.0040 writeback_in_progress c0152a20 3 0.0030 __clear_page_buffers c0136530 3 0.0030 __generic_file_aio_read c0139fa0 3 0.0030 __get_free_pages c0339430 3 0.0030 add_cpu_switch c0264850 3 0.0030 as_antic_timeout c0265cd0 3 0.0030 as_latter_request c01419c0 3 0.0030 clear_page_tables c0169a80 3 0.0030 destroy_inode c0190bd0 3 0.0030 ext3_mkdir c018dac0 3 0.0030 ext3_write_inode c0279480 3 0.0030 hermes_bap_pread c0192990 3 0.0030 init_once 
c02d0690 3 0.0030 input_event c01538f0 3 0.0030 invalidate_inode_buffers c019de50 3 0.0030 journal_switch_revoke_table c0171620 3 0.0030 mpage_bio_submit c013b760 3 0.0030 pdflush_operation c0127f70 3 0.0030 process_timeout c013f440 3 0.0030 shrink_slab c0339870 3 0.0030 timer_ping c016a3d0 3 0.0030 unlock_new_inode c011b6f0 3 0.0030 wake_up_forked_process c013a090 2 0.0020 __free_pages c019ff90 2 0.0020 __jbd_kmalloc c019ab10 2 0.0020 __journal_refile_buffer c019d0e0 2 0.0020 __journal_remove_checkpoint c019a210 2 0.0020 __journal_try_to_free_buffer c0273bc0 2 0.0020 __orinoco_ev_info c0163820 2 0.0020 __pollwait c0339530 2 0.0020 add_cookie_switch c0116830 2 0.0020 apm_bios_call c01175d0 2 0.0020 apm_event_handler c0116a40 2 0.0020 apm_get_event c0264730 2 0.0020 as_antic_waitnext c0260050 2 0.0020 blk_congestion_wait c018b8f0 2 0.0020 bput_one c01794b0 2 0.0020 create_elf_tables c03421c0 2 0.0020 datagram_poll c0246930 2 0.0020 do_con_trol c0192960 2 0.0020 ext3_destroy_inode c0136430 2 0.0020 file_read_actor c013a0e0 2 0.0020 free_pages c016aad0 2 0.0020 generic_delete_inode c016ad40 2 0.0020 generic_drop_inode c0150e10 2 0.0020 generic_file_llseek c013a2f0 2 0.0020 get_page_state c0279230 2 0.0020 hermes_bap_seek c011cbc0 2 0.0020 idle_cpu c016b870 2 0.0020 is_bad_inode c019ec70 2 0.0020 journal_bmap c018b900 2 0.0020 journal_dirty_data_fn c019a1c0 2 0.0020 journal_unfile_buffer c0243900 2 0.0020 kbd_keycode c013cd00 2 0.0020 kmem_flagcheck c0140940 2 0.0020 kswapd c01600d0 2 0.0020 lookup_create c013ece0 2 0.0020 pagevec_lookup c01509d0 2 0.0020 put_unused_fd c0299520 2 0.0020 putcs_aligned c012fd60 2 0.0020 rcu_check_callbacks c012fc60 2 0.0020 rcu_process_callbacks c013e480 2 0.0020 rotate_reclaimable_page c01276f0 2 0.0020 second_overflow c012b430 2 0.0020 sys_rt_sigprocmask c0123a00 2 0.0020 tasklet_action c015ac10 2 0.0020 vfs_fstat c0224830 2 0.0020 vsnprintf c013afe0 2 0.0020 wb_kupdate c01714d0 2 0.0020 writeback_acquire c0234600 1 0.0010 SHATransform 
c0153e50 1 0.0010 __bread_slow c0169380 1 0.0010 __d_path c0137530 1 0.0010 __filemap_copy_from_user_iovec c0273e40 1 0.0010 __orinoco_ev_rx c013e990 1 0.0010 __pagevec_release c013b560 1 0.0010 __pdflush c0170d20 1 0.0010 __writeback_single_inode c0299820 1 0.0010 accel_putc c0120090 1 0.0010 acquire_console_sem c012f460 1 0.0010 alloc_pidmap c0117660 1 0.0010 apm_mainloop c0264780 1 0.0010 as_antic_waitreq c02600e0 1 0.0010 attempt_merge c013acf0 1 0.0010 balance_dirty_pages c018b8e0 1 0.0010 bget_one c0154dc0 1 0.0010 block_read_full_page c0156430 1 0.0010 block_sync_page c03b9660 1 0.0010 cache_clean c0169ff0 1 0.0010 can_unuse c0248540 1 0.0010 con_chars_in_buffer c0247ed0 1 0.0010 console_callback c0241b10 1 0.0010 conv_uni_to_pc c011e780 1 0.0010 copy_process c015b7d0 1 0.0010 copy_strings c01694c0 1 0.0010 d_path c012f820 1 0.0010 detach_pid c010b928 1 0.0010 device_not_available c0247770 1 0.0010 do_con_write c011f170 1 0.0010 do_fork c01228f0 1 0.0010 do_getitimer c010ad30 1 0.0010 do_notify_resume c0243f90 1 0.0010 do_update_region c025d9f0 1 0.0010 elv_latter_request c025dbe0 1 0.0010 elv_rq_merge_ok c016b910 1 0.0010 expand_fd_array c0187d30 1 0.0010 ext3_group_sparse c018ca40 1 0.0010 ext3_truncate c01635d0 1 0.0010 filldir64 c0239440 1 0.0010 flush_to_ldisc c0117360 1 0.0010 get_event c010aad0 1 0.0010 handle_signal c011d310 1 0.0010 io_schedule_timeout c038ca70 1 0.0010 ip_ct_find_helper c0355c40 1 0.0010 ip_route_input c0199350 1 0.0010 journal_get_create_access c0242e10 1 0.0010 k_spec c0243be0 1 0.0010 kbd_event c019e240 1 0.0010 kjournald c025efc0 1 0.0010 ll_merge_requests_fn c0179cc0 1 0.0010 load_elf_binary c010cdd0 1 0.0010 math_state_restore c017bd40 1 0.0010 mb_cache_shrink_fn c02d2030 1 0.0010 mousedev_event c010ae1e 1 0.0010 need_resched c011b960 1 0.0010 nr_running c013a170 1 0.0010 nr_used_zone_pages c0111670 1 0.0010 old_mmap c023a430 1 0.0010 opost_block c015db00 1 0.0010 pipe_poll c0108f40 1 0.0010 prepare_to_copy c017d800 1 0.0010 
proc_alloc_inode c01821f0 1 0.0010 proc_calc_metrics c0180100 1 0.0010 proc_file_read c017d990 1 0.0010 proc_get_inode c0128790 1 0.0010 recalc_sigpending c01200d0 1 0.0010 release_console_sem c0339650 1 0.0010 release_mm c0353160 1 0.0010 rt_hash_code c0223c80 1 0.0010 rwsem_wake c015bbc0 1 0.0010 setup_arg_pages c010a610 1 0.0010 setup_frame c016a240 1 0.0010 shrink_icache_memory c012b370 1 0.0010 sigprocmask c02244e0 1 0.0010 skip_atoi c0225510 1 0.0010 strnlen_user c02d6800 1 0.0010 synaptics_process_byte c0157bc0 1 0.0010 sync_supers c0169590 1 0.0010 sys_getcwd c01604f0 1 0.0010 sys_mkdir c0151680 1 0.0010 sys_read c0151cc0 1 0.0010 sys_writev c015ab50 1 0.0010 vfs_stat c023e1e0 1 0.0010 vt_ioctl c010af1d 1 0.0010 work_resched c023c760 1 0.0010 write_chan ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-30 21:14 ` Thomas Molina @ 2003-12-30 21:23 ` Linus Torvalds 2003-12-31 0:50 ` Thomas Molina 2003-12-30 21:35 ` William Lee Irwin III 2003-12-30 23:46 ` Roger Luethi 2 siblings, 1 reply; 50+ messages in thread From: Linus Torvalds @ 2003-12-30 21:23 UTC (permalink / raw) To: Thomas Molina; +Cc: William Lee Irwin III, Kernel Mailing List On Tue, 30 Dec 2003, Thomas Molina wrote: > > The times for this operation are: > real 15m20s > user 0m35s > sys 0m20s Ok. This looks much closer to the 2.4.x numbers you reported: real 13m50.198s user 0m33.780s sys 0m15.390s so I assume that we can consider this problem largely solved? There's still some difference; that could be due to just VM tuning. I suspect that what happened is: - slab debugging adds heavy CPU overhead _and_ it also makes all the slab caches much less dense. - as a result, you see much higher system times, and you also end up needing much more memory for things like the dentry cache, so your memory-starved machine ended up swapping a lot more too. > On my main system (1.3GHz Athlon, 512MB memory, fast hard drive; in other > words it has plenty of resources) I get similar results, scaled down of > course. > > On 2.4 the times are > real 3m47s > user 14s > sys 7s > > On 2.6 the times are > real 3m27s > user 14s > sys 7s So here 2.6.x actually outperforms 2.4.x. > I also get 90+ percent iowait under 2.6 and 0 iowait in 2.4. This is likely just an issue of reporting. Under 2.6.x your idle time will be reported as iowait, while your 2.4.x kernel doesn't even have iowait support, so all idle time is just "idle", and not split up into _why_ it is idle. Linus ^ permalink raw reply [flat|nested] 50+ messages in thread
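[Editor's note: Linus's reporting explanation can be checked directly against /proc/stat. The 2.5/2.6 kernels added an iowait field (the fifth value on the cpu line) that simply does not exist on 2.4, so 2.4 folds disk-wait time into idle. A minimal shell sketch, using made-up sample numbers rather than a real reading:]

```shell
# Sketch of the reporting difference: 2.6 added an "iowait" column
# (5th value) to the cpu line of /proc/stat; on 2.4 that column does
# not exist, so disk-wait time is simply counted as "idle".
# The numbers below are made up for illustration.
line="cpu  1000 0 500 80000 9000 10 5"
set -- $line
shift                                  # drop the "cpu" label
user=$1; nice=$2; system=$3; idle=$4; iowait=${5:-0}
total=$((user + nice + system + idle + iowait + ${6:-0} + ${7:-0}))
echo "iowait share: $((iowait * 100 / total))%"
```

[On a real 2.6 machine you would feed it `head -1 /proc/stat` instead of the canned line; on 2.4 the `${5:-0}` default kicks in and the iowait share is reported as zero, exactly as Thomas observed.]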
* Re: 2.6.0 performance problems 2003-12-30 21:23 ` Linus Torvalds @ 2003-12-31 0:50 ` Thomas Molina 2003-12-31 1:01 ` Linus Torvalds 2003-12-31 1:34 ` Andrew Morton 0 siblings, 2 replies; 50+ messages in thread From: Thomas Molina @ 2003-12-31 0:50 UTC (permalink / raw) To: Linus Torvalds; +Cc: William Lee Irwin III, Kernel Mailing List On Tue, 30 Dec 2003, Linus Torvalds wrote: > Ok. This looks much closer to the 2.4.x numbers you reported: > > real 13m50.198s > user 0m33.780s > sys 0m15.390s > > so I assume that we can consider this problem largely solved? There's > still some difference, that could be due to just VM tuning.. > > I suspect that what happened is: > - slab debugging adds a heavy CPU _and_ it also makes all the slab caches > much less dense. > - as a result, you see much higher system times, and you also end up > needing much more memory for things like the dentry cache, so your > memory-starved machine ended up swapping a lot more too. So you are telling me that I am paying the price for running development kernels and enabling all the debugging. I enjoy running the development stuff and testing new stuff. I enabled all the kernel hacking and debugging options with the idea it might be useful for testing purposes. Disabling all the debugging stuff brings the numbers down, but things still "feel" worse. It's subjective, but there you are. I'll continue to test with whatever provides the most useful data. ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-31 0:50 ` Thomas Molina @ 2003-12-31 1:01 ` Linus Torvalds 2003-12-31 1:34 ` Andrew Morton 1 sibling, 0 replies; 50+ messages in thread From: Linus Torvalds @ 2003-12-31 1:01 UTC (permalink / raw) To: Thomas Molina; +Cc: William Lee Irwin III, Kernel Mailing List On Tue, 30 Dec 2003, Thomas Molina wrote: > > So you are telling me that I am paying the price for running development > kernels and enabling all the debugging. I enjoy running the development > stuff and testing new stuff. I enabled all the kernel hacking and > debugging options with the idea it might be useful for testing purposes. It's very useful, but some of the debugging options are literally _very_ intrusive, and can change usage patterns a lot. > Disabling all the debugging stuff brings the numbers down, but things > still "feel" worse. It's subjective, but there you are. I'll continue to > test with whatever provides the most useful data. The VM in 2.6.0 is pretty stable, but it hasn't gotten as much "tweaking" as the 2.4.x code. Which tends to show as bad performance under some loads. The -mm code is likely to help a bit. I've been busy merging the stable parts as Andrew sends it today and yesterday. Linus ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-31 0:50 ` Thomas Molina 2003-12-31 1:01 ` Linus Torvalds @ 2003-12-31 1:34 ` Andrew Morton 2003-12-31 11:25 ` bert hubert 1 sibling, 1 reply; 50+ messages in thread From: Andrew Morton @ 2003-12-31 1:34 UTC (permalink / raw) To: Thomas Molina; +Cc: torvalds, wli, linux-kernel Thomas Molina <tmolina@cablespeed.com> wrote: > > On Tue, 30 Dec 2003, Linus Torvalds wrote: > > Ok. This looks much closer to the 2.4.x numbers you reported: > > > > real 13m50.198s > > user 0m33.780s > > sys 0m15.390s > > > > so I assume that we can consider this problem largely solved? There's > > still some difference; that could be due to just VM tuning. > > > > I suspect that what happened is: > > - slab debugging adds heavy CPU overhead _and_ it also makes all the slab caches > > much less dense. > > - as a result, you see much higher system times, and you also end up > > needing much more memory for things like the dentry cache, so your > > memory-starved machine ended up swapping a lot more too. > > So you are telling me that I am paying the price for running development > kernels and enabling all the debugging. CONFIG_DEBUG_PAGEALLOC really does hurt on small machines. Mainly because it rounds the size of all slab objects which are >= 128 bytes up to a full 4k. So things like inodes and dentries take vastly more memory. The other debug options are less costly. ^ permalink raw reply [flat|nested] 50+ messages in thread
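[Editor's note: Andrew's density point is easy to quantify. The sketch below uses hypothetical object sizes (real inode/dentry sizes vary by kernel and config); the only fact taken from the thread is that objects >= 128 bytes get rounded up to a full 4k page:]

```shell
# Back-of-the-envelope view: with CONFIG_DEBUG_PAGEALLOC, slab objects
# >= 128 bytes are rounded up to a full 4k page, so packing density
# collapses from several objects per page to one.
# The object sizes here are hypothetical, not the real inode/dentry sizes.
PAGE=4096
for size in 192 416 512; do
    per_page=$((PAGE / size))          # objects per page without debugging
    echo "${size}B objects: ${per_page} per page normally, 1 per page debugged (${per_page}x the memory)"
done
```

[So even a modest 192-byte object costs roughly twenty times its normal memory under this option, which is exactly why a memory-starved laptop ends up swapping.]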
* Re: 2.6.0 performance problems 2003-12-31 1:34 ` Andrew Morton @ 2003-12-31 11:25 ` bert hubert 0 siblings, 0 replies; 50+ messages in thread From: bert hubert @ 2003-12-31 11:25 UTC (permalink / raw) To: Andrew Morton; +Cc: Thomas Molina, torvalds, wli, linux-kernel > CONFIG_DEBUG_PAGEALLOC really does hurt on small machines. Mainly because > it rounds the size of all slab objects which are >= 128 bytes up to a full > 4k. So things like inodes and dentries take vastly more memory. > > The other debug options are less costly. The patch below rationalizes the Kconfig documentation for the debugging options a bit. * removed one occurrence of 'don't enable on production systems' as this would imply that the other options are safe to enable on such systems. * added a general warning that performance may suffer (but that you should enable nonetheless in case of debugging), and two specific warnings, one for slab poisoning, a big one for page alloc debugging. * some spelling, added notice about /proc/sysrq-trigger to magic SysRQ * Removed warning about SysRQ 'only if you know what it does' - I often ask people to press alt-sysrq to get debugging information, only to find that they have it turned off, even when I would be able to understand the output. Against 2.6.0 (path is wrong), please consider applying: --- linux-2.6.0-test11/arch/i386/Kconfig.orig Wed Dec 31 12:03:20 2003 +++ linux-2.6.0-test11/arch/i386/Kconfig Wed Dec 31 12:16:01 2003 @@ -1131,7 +1131,8 @@ bool "Kernel debugging" help Say Y here if you are developing drivers or trying to debug and - identify kernel problems. + identify kernel problems. Enabling these features often incurs + a performance hit, but will help debug problems much faster. config DEBUG_STACKOVERFLOW bool "Check for stack overflows" @@ -1143,7 +1144,7 @@ help Say Y here to have the kernel do limited verification on memory allocation as well as poisoning memory on free to catch use of freed - memory. + memory. Hurts performance. 
config DEBUG_IOVIRT bool "Memory mapped I/O debugging" @@ -1166,9 +1167,9 @@ immediately or dump some status information). This is accomplished by pressing various keys while holding SysRq (Alt+PrintScreen). It also works on a serial console (on PC hardware at least), if you - send a BREAK and then within 5 seconds a command keypress. The - keys are documented in <file:Documentation/sysrq.txt>. Don't say Y - unless you really know what this hack does. + send a BREAK and then within 5 seconds a command keypress. + Additionally, /proc/sysrq-trigger can be used. More documentation + is in <file:Documentation/sysrq.txt>. config DEBUG_SPINLOCK bool "Spinlock debugging" @@ -1180,19 +1181,18 @@ deadlocks are also debuggable. config DEBUG_PAGEALLOC - bool "Page alloc debugging" + bool "Page alloc debugging (slow/resource intensive)" depends on DEBUG_KERNEL help Unmap pages from the kernel linear mapping after free_pages(). - This results in a large slowdown, but helps to find certain types - of memory corruptions. + This results in a large slowdown and requires a lot of memory, + but helps to find certain types of memory corruptions. config DEBUG_HIGHMEM bool "Highmem debugging" depends on DEBUG_KERNEL && HIGHMEM help - This options enables addition error checking for high memory systems. - Disable for production systems. + This options enables additional error checking for high memory systems. config DEBUG_INFO bool "Compile the kernel with debug info" -- http://www.PowerDNS.com Open source, database driven DNS Software http://lartc.org Linux Advanced Routing & Traffic Control HOWTO ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-30 21:14 ` Thomas Molina 2003-12-30 21:23 ` Linus Torvalds @ 2003-12-30 21:35 ` William Lee Irwin III 2003-12-30 23:46 ` Roger Luethi 2 siblings, 0 replies; 50+ messages in thread From: William Lee Irwin III @ 2003-12-30 21:35 UTC (permalink / raw) To: Thomas Molina; +Cc: Linus Torvalds, Kernel Mailing List On Tue, Dec 30, 2003 at 04:14:13PM -0500, Thomas Molina wrote: > I also get 90+ percent iowait under 2.6 and 0 iowait in 2.4. I'm not sure > how the alleged suckiness of 2.6 paging fits into this. On this system > the execution times are almost the same. On this machine, in addition to > the iowait differences, there are differences in the cpu use statistics as reported by top. > On 2.4 idle time is 70 percent while on 2.6 the idle time is near zero > percent. I'm not sure what the significance of this is. 2.4 does not report iowait; all iowait is reported as idle time on 2.4. On Tue, Dec 30, 2003 at 04:14:13PM -0500, Thomas Molina wrote: > CPU: PIII, speed 648.072 MHz (estimated) > Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 324036 > vma samples % symbol name > c0115e20 22498 22.6776 mark_offset_tsc > c0110080 12707 12.8084 mask_and_ack_8259A > c018eec0 7115 7.1718 ext3_find_entry > c010ff60 4013 4.0450 enable_8259A_irq > c0168d50 2650 2.6712 __d_lookup > c015eb10 1727 1.7408 link_path_walk > c010afd0 1482 1.4938 irq_entries_start Well, it looks like Linus said various things along these lines in various ways before I finished writing this, but in case hearing it a second time is any reassurance: There's a slight problem here in that you're io-bound, not cpu-bound, so profiles won't actually tell us much about remaining overheads. One thing here is that since turning off all the debugging options got you down to about a 15% degradation, things aren't actually looking anywhere near as problematic as before when you had a near 90% degradation. 
One possible explanation is that the extensive padding done by CONFIG_DEBUG_PAGEALLOC created significant memory pressure. If you'd like further speedups, logging the things I suggested earlier and fiddling with swappiness might help. In fact, you are down to such a small margin of degradation that the remaining degradation vs. 2.4 may well be due to using oprofile, which has significant, though not overwhelming, overhead. -- wli ^ permalink raw reply [flat|nested] 50+ messages in thread
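[Editor's note: wli's io-bound diagnosis falls straight out of the reported times: CPU time (user+sys) is a tiny fraction of wall-clock time, so a CPU profile can only describe that small slice. A quick check, with seconds rounded from the 2.6 laptop run quoted earlier in the thread:]

```shell
# Sanity-check that the laptop run is io-bound: compare CPU time
# (user+sys) against wall-clock time. Seconds rounded from the 2.6
# laptop numbers (real 22m42s, user ~38s, sys ~54s).
real=1362                              # 22m42s
user=38; sys=54
cpu=$((user + sys))
echo "cpu busy for $((cpu * 100 / real))% of wall-clock time"
```

[With the CPU busy for only a few percent of the run, the other ~90+% of the time the machine is waiting on the disk, which is exactly what the iowait figures were saying.]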
* Re: 2.6.0 performance problems 2003-12-30 21:14 ` Thomas Molina 2003-12-30 21:23 ` Linus Torvalds 2003-12-30 21:35 ` William Lee Irwin III @ 2003-12-30 23:46 ` Roger Luethi 2 siblings, 0 replies; 50+ messages in thread From: Roger Luethi @ 2003-12-30 23:46 UTC (permalink / raw) To: Thomas Molina; +Cc: William Lee Irwin III, Linus Torvalds, Kernel Mailing List On Tue, 30 Dec 2003 16:14:13 -0500, Thomas Molina wrote: > I also get 90+ percent iowait under 2.6 and 0 iowait in 2.4. I'm not sure > how the alleged suckiness of 2.6 paging fits into this. On this system It is not alleged. It is real, but the badness is not universal. I was afraid I'd have to add another category, but fortunately it seems bk export matches qsbench: no major regressions, neither between test2 and test3 nor between 2.4 and 2.6. I'm still interested to learn whether 2.5.39 is a major regression (fixed later) for bk export, although that might have been due to qsbench-specific reference patterns; I haven't looked into it. At least for qsbench the spike is confirmed though, even with different parameters. Roger ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-30 14:14 ` Thomas Molina 2003-12-30 14:39 ` William Lee Irwin III @ 2003-12-30 18:20 ` Linus Torvalds 1 sibling, 0 replies; 50+ messages in thread From: Linus Torvalds @ 2003-12-30 18:20 UTC (permalink / raw) To: Thomas Molina; +Cc: Kernel Mailing List On Tue, 30 Dec 2003, Thomas Molina wrote: > > attachment two is the result of: > opreport -l vmlinux > vmlinux.txt Are you sure you used the right vmlinux binary? Some of this looks pretty strange (module_text_address? Whaa?). However, it also seems to point out that you have SLAB debugging with poisoning enabled. That will absolutely _kill_ your performance, and could easily explain part of the degradation. Linus ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-29 22:58 ` Thomas Molina 2003-12-29 23:04 ` Linus Torvalds @ 2003-12-29 23:14 ` Martin Schlemmer 2003-12-30 5:09 ` William Lee Irwin III 2003-12-29 23:25 ` David B. Stevens 2 siblings, 1 reply; 50+ messages in thread From: Martin Schlemmer @ 2003-12-29 23:14 UTC (permalink / raw) To: Thomas Molina; +Cc: Linus Torvalds, Kernel Mailing List [-- Attachment #1: Type: text/plain, Size: 1754 bytes --] On Tue, 2003-12-30 at 00:58, Thomas Molina wrote: > On Mon, 29 Dec 2003, Linus Torvalds wrote: > > > > > > > On Mon, 29 Dec 2003, Thomas Molina wrote: > > > > > > I just finished a couple of comparisons between 2.4 and 2.6 which seem to > > > confirm my impressions. I understand that the comparison may not be > > > apples to apples and my methods of testing may not be rigorous, but here > > > it is. In contrast to some recent discussions on this list, this test is > > > a "real world" test at which 2.6 comes off much worse than 2.4. > > > > Are you sure you have DMA enabled on your laptop disk? Your 2.6.x system > > times are very high - much bigger than the user times. That sounds like > > PIO to me. > > It certainly looks like DMA is enabled. Under 2.4 I get: > > [root@lap root]# hdparm /dev/hda > > /dev/hda: > multcount = 16 (on) > IO_support = 1 (32-bit) > unmaskirq = 1 (on) > using_dma = 1 (on) > keepsettings = 0 (off) > readonly = 0 (off) > readahead = 8 (on) > geometry = 2584/240/63, sectors = 39070080, start = 0 > > > Under 2.6 I get: > > [root@lap root]# hdparm /dev/hda > > /dev/hda: > multcount = 16 (on) > IO_support = 1 (32-bit) > unmaskirq = 1 (on) > using_dma = 1 (on) > keepsettings = 0 (off) > readonly = 0 (off) > readahead = 256 (on) > geometry = 38760/16/63, sectors = 39070080, start = 0 > Increase your readahead: # hdparm -a 8192 /dev/hda BTW: As we really do get this question a _lot_ of times, why don't the ide layer automatically set a higher readahead if there is enough cache on the drive or something? 
Thanks, -- Martin Schlemmer [-- Attachment #2: This is a digitally signed message part --] [-- Type: application/pgp-signature, Size: 189 bytes --] ^ permalink raw reply [flat|nested] 50+ messages in thread
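[Editor's note: for scale, hdparm's readahead setting is, as far as I know, expressed in 512-byte sectors, so the values being traded back and forth differ more than the raw numbers suggest:]

```shell
# Convert the readahead values in this thread from 512-byte sectors
# (hdparm's unit, to my knowledge) into kilobytes:
# 2.4's default, 2.6's default, and Martin's suggested value.
SECTOR=512
for ra in 8 256 8192; do
    kb=$((ra * SECTOR / 1024))
    echo "readahead ${ra} sectors = ${kb} KB"
done
```

[So 2.4 was reading ahead 4 KB, 2.6 reads ahead 128 KB by default, and the suggested 8192 sectors is a full 4 MB per readahead window.]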
* Re: 2.6.0 performance problems 2003-12-29 23:14 ` Martin Schlemmer @ 2003-12-30 5:09 ` William Lee Irwin III 2003-12-30 10:27 ` Thomas Molina 0 siblings, 1 reply; 50+ messages in thread From: William Lee Irwin III @ 2003-12-30 5:09 UTC (permalink / raw) To: Martin Schlemmer; +Cc: Thomas Molina, Linus Torvalds, Kernel Mailing List On Tue, 2003-12-30 at 00:58, Thomas Molina wrote: >> It certainly looks like DMA is enabled. Under 2.4 I get: >> [root@lap root]# hdparm /dev/hda [...] >> readahead = 8 (on) [...] >> Under 2.6 I get: >> [root@lap root]# hdparm /dev/hda [...] >> readahead = 256 (on) On Tue, Dec 30, 2003 at 01:14:45AM +0200, Martin Schlemmer wrote: > Increase your readahead: > # hdparm -a 8192 /dev/hda > BTW: As we really do get this question a _lot_ of times, why > don't the ide layer automatically set a higher readahead > if there is enough cache on the drive or something? Could you try lowering 2.6's readahead to 2.4's levels in order to rule out readahead-induced thrashing? -- wli ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-30 5:09 ` William Lee Irwin III @ 2003-12-30 10:27 ` Thomas Molina 0 siblings, 0 replies; 50+ messages in thread From: Thomas Molina @ 2003-12-30 10:27 UTC (permalink / raw) To: William Lee Irwin III Cc: Martin Schlemmer, Linus Torvalds, Kernel Mailing List On Mon, 29 Dec 2003, William Lee Irwin III wrote: > On Tue, 2003-12-30 at 00:58, Thomas Molina wrote: > >> It certainly looks like DMA is enabled. Under 2.4 I get: > >> [root@lap root]# hdparm /dev/hda > [...] > >> readahead = 8 (on) > [...] > >> Under 2.6 I get: > >> [root@lap root]# hdparm /dev/hda > [...] > >> readahead = 256 (on) > > On Tue, Dec 30, 2003 at 01:14:45AM +0200, Martin Schlemmer wrote: > > Increase your readahead: > > # hdparm -a 8192 /dev/hda > > BTW: As we really do get this question a _lot_ of times, why > > don't the ide layer automatically set a higher readahead > > if there is enough cache on the drive or something? > > Could you try lowering 2.6's readahead to 2.4's levels in order to rule > out readahead-induced thrashing? I thought I had already sent that. The timings for a readahead of 8 were: real 25m39.653s user 0m37.594s sys 0m55.454s Increasing readahead in 2.6 to 8192 likewise doesn't help. ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-29 22:58 ` Thomas Molina 2003-12-29 23:04 ` Linus Torvalds 2003-12-29 23:14 ` Martin Schlemmer @ 2003-12-29 23:25 ` David B. Stevens 2 siblings, 0 replies; 50+ messages in thread From: David B. Stevens @ 2003-12-29 23:25 UTC (permalink / raw) To: Thomas Molina; +Cc: Kernel Mailing List Thomas, Have you tried lowering the readahead in 2.6 to 8? I don't know what you are doing with your workload but it is possible that you are transferring useless data and clogging the plumbing a bit. Cheers, Dave Thomas Molina wrote: >It certainly looks like DMA is enabled. Under 2.4 I get: > >[root@lap root]# hdparm /dev/hda > >/dev/hda: > multcount = 16 (on) > IO_support = 1 (32-bit) > unmaskirq = 1 (on) > using_dma = 1 (on) > keepsettings = 0 (off) > readonly = 0 (off) > readahead = 8 (on) > geometry = 2584/240/63, sectors = 39070080, start = 0 > > >Under 2.6 I get: > >[root@lap root]# hdparm /dev/hda > >/dev/hda: > multcount = 16 (on) > IO_support = 1 (32-bit) > unmaskirq = 1 (on) > using_dma = 1 (on) > keepsettings = 0 (off) > readonly = 0 (off) > readahead = 256 (on) > geometry = 38760/16/63, sectors = 39070080, start = 0 > > ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-29 22:21 ` Linus Torvalds 2003-12-29 22:58 ` Thomas Molina @ 2003-12-29 23:05 ` Thomas Molina 2003-12-29 23:43 ` Martin Schlemmer 2004-01-03 19:37 ` Bill Davidsen 1 sibling, 2 replies; 50+ messages in thread From: Thomas Molina @ 2003-12-29 23:05 UTC (permalink / raw) To: Linus Torvalds; +Cc: Kernel Mailing List On Mon, 29 Dec 2003, Linus Torvalds wrote: > > > On Mon, 29 Dec 2003, Thomas Molina wrote: > > > > I just finished a couple of comparisons between 2.4 and 2.6 which seem to > > confirm my impressions. I understand that the comparison may not be > > apples to apples and my methods of testing may not be rigorous, but here > > it is. In contrast to some recent discussions on this list, this test is > > a "real world" test at which 2.6 comes off much worse than 2.4. > > Are you sure you have DMA enabled on your laptop disk? Your 2.6.x system > times are very high - much bigger than the user times. That sounds like > PIO to me. Sorry. One other bit of data from 2.6: [root@lap bitkeeper]# hdparm -i /dev/hda /dev/hda: Model=IBM-DJSA-220, FwRev=JS4OAC3A, SerialNo=44V44FT3300 Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs } RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=4 BuffType=DualPortCache, BuffSize=1874kB, MaxMultSect=16, MultSect=16 CurCHS=17475/15/63, CurSects=16513875, LBA=yes, LBAsects=39070080 IORDY=on/off, tPIO={min:240,w/IORDY:120}, tDMA={min:120,rec:120} PIO modes: pio0 pio1 pio2 pio3 pio4 DMA modes: mdma0 mdma1 mdma2 UDMA modes: udma0 udma1 *udma2 udma3 udma4 AdvancedPM=yes: mode=0x80 (128) WriteCache=enabled Drive conforms to: ATA/ATAPI-5 T13 1321D revision 1: * signifies the current active mode ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-29 23:05 ` Thomas Molina @ 2003-12-29 23:43 ` Martin Schlemmer 2003-12-30 0:17 ` Thomas Molina 2004-01-03 19:37 ` Bill Davidsen 1 sibling, 1 reply; 50+ messages in thread From: Martin Schlemmer @ 2003-12-29 23:43 UTC (permalink / raw) To: Thomas Molina; +Cc: Linus Torvalds, Kernel Mailing List [-- Attachment #1: Type: text/plain, Size: 928 bytes --] On Tue, 2003-12-30 at 01:05, Thomas Molina wrote: > Sorry. One other bit of data from 2.6: > > [root@lap bitkeeper]# hdparm -i /dev/hda > > /dev/hda: > > Model=IBM-DJSA-220, FwRev=JS4OAC3A, SerialNo=44V44FT3300 > Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs } > RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=4 > BuffType=DualPortCache, BuffSize=1874kB, MaxMultSect=16, MultSect=16 > CurCHS=17475/15/63, CurSects=16513875, LBA=yes, LBAsects=39070080 > IORDY=on/off, tPIO={min:240,w/IORDY:120}, tDMA={min:120,rec:120} > PIO modes: pio0 pio1 pio2 pio3 pio4 > DMA modes: mdma0 mdma1 mdma2 > UDMA modes: udma0 udma1 *udma2 udma3 udma4 > AdvancedPM=yes: mode=0x80 (128) WriteCache=enabled > Drive conforms to: ATA/ATAPI-5 T13 1321D revision 1: > > * signifies the current active mode Any reason it is currently set to udma2 when it supports udma4? -- Martin Schlemmer [-- Attachment #2: This is a digitally signed message part --] [-- Type: application/pgp-signature, Size: 189 bytes --] ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-29 23:43 ` Martin Schlemmer @ 2003-12-30 0:17 ` Thomas Molina 2003-12-30 1:23 ` Martin Schlemmer 2003-12-30 1:27 ` Dave Jones 0 siblings, 2 replies; 50+ messages in thread From: Thomas Molina @ 2003-12-30 0:17 UTC (permalink / raw) To: Martin Schlemmer; +Cc: Linus Torvalds, Kernel Mailing List On Tue, 30 Dec 2003, Martin Schlemmer wrote: > > UDMA modes: udma0 udma1 *udma2 udma3 udma4 > > AdvancedPM=yes: mode=0x80 (128) WriteCache=enabled > > Drive conforms to: ATA/ATAPI-5 T13 1321D revision 1: > > > > * signifies the current active mode > > Any reason it is currently set to udma2 where it support udma4 ? Not really. The question was what mode the disk was running in. This is what it defaults to. This is a laptop drive that only runs at 5400RPM. Would changing the mode to udma4 make a dramatic difference? ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-30 0:17 ` Thomas Molina @ 2003-12-30 1:23 ` Martin Schlemmer 2003-12-30 1:27 ` Dave Jones 1 sibling, 0 replies; 50+ messages in thread From: Martin Schlemmer @ 2003-12-30 1:23 UTC (permalink / raw) To: Thomas Molina; +Cc: Linus Torvalds, Kernel Mailing List [-- Attachment #1: Type: text/plain, Size: 679 bytes --] On Tue, 2003-12-30 at 02:17, Thomas Molina wrote: > On Tue, 30 Dec 2003, Martin Schlemmer wrote: > > > > UDMA modes: udma0 udma1 *udma2 udma3 udma4 > > > AdvancedPM=yes: mode=0x80 (128) WriteCache=enabled > > > Drive conforms to: ATA/ATAPI-5 T13 1321D revision 1: > > > > > > * signifies the current active mode > > > > Any reason it is currently set to udma2 where it support udma4 ? > > Not really. The question was what mode the disk was running in. This is > what it defaults to. This is a laptop drive that only runs at 5400RPM. > Would changing the mode to udma4 make a dramatic difference? Well, should make some. -- Martin Schlemmer [-- Attachment #2: This is a digitally signed message part --] [-- Type: application/pgp-signature, Size: 189 bytes --] ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-30 0:17 ` Thomas Molina 2003-12-30 1:23 ` Martin Schlemmer @ 2003-12-30 1:27 ` Dave Jones 2003-12-30 1:37 ` Martin Schlemmer 1 sibling, 1 reply; 50+ messages in thread From: Dave Jones @ 2003-12-30 1:27 UTC (permalink / raw) To: Thomas Molina; +Cc: Martin Schlemmer, Linus Torvalds, Kernel Mailing List On Mon, Dec 29, 2003 at 07:17:23PM -0500, Thomas Molina wrote: > > > UDMA modes: udma0 udma1 *udma2 udma3 udma4 > > > AdvancedPM=yes: mode=0x80 (128) WriteCache=enabled > > > Drive conforms to: ATA/ATAPI-5 T13 1321D revision 1: > > Any reason it is currently set to udma2 where it support udma4 ? > > Not really. The question was what mode the disk was running in. This is > what it defaults to. This is a laptop drive that only runs at 5400RPM. > Would changing the mode to udma4 make a dramatic difference? It's not uncommon for a laptop to have a hard disk which supports higher DMA modes than what the IDE chipset supports. My aging Intel 440BX based VAIO has a disk in the same configuration as yours, supports udma4, but chipset only goes up to udma2. Dave -- Dave Jones http://www.codemonkey.org.uk ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-30 1:27 ` Dave Jones @ 2003-12-30 1:37 ` Martin Schlemmer 2003-12-30 1:40 ` Dave Jones 2003-12-30 1:49 ` Thomas Molina 0 siblings, 2 replies; 50+ messages in thread From: Martin Schlemmer @ 2003-12-30 1:37 UTC (permalink / raw) To: Dave Jones; +Cc: Thomas Molina, Linus Torvalds, Kernel Mailing List [-- Attachment #1: Type: text/plain, Size: 1203 bytes --] On Tue, 2003-12-30 at 03:27, Dave Jones wrote: > On Mon, Dec 29, 2003 at 07:17:23PM -0500, Thomas Molina wrote: > > > > > UDMA modes: udma0 udma1 *udma2 udma3 udma4 > > > > AdvancedPM=yes: mode=0x80 (128) WriteCache=enabled > > > > Drive conforms to: ATA/ATAPI-5 T13 1321D revision 1: > > > Any reason it is currently set to udma2 where it support udma4 ? > > > > Not really. The question was what mode the disk was running in. This is > > what it defaults to. This is a laptop drive that only runs at 5400RPM. > > Would changing the mode to udma4 make a dramatic difference? > > It's not uncommon for a laptop to have a hard disk which supports > higher DMA modes than what the IDE chipset supports. > My aging Intel 440BX based VAIO has a disk in the same configuration > as yours, supports udma4, but chipset only goes up to udma2. > Right, or as somebody else pointed out, it might not be an 80-pin cable. Let's rephrase - does it also run in udma2 mode with 2.4? And did you check readahead? In 2.6 it seems that a bigger value is better - I for instance have to set it to 8192 to have the same performance as in 2.4 ... -- Martin Schlemmer [-- Attachment #2: This is a digitally signed message part --] [-- Type: application/pgp-signature, Size: 189 bytes --] ^ permalink raw reply [flat|nested] 50+ messages in thread
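The readahead value Martin mentions can be inspected and changed from userspace. A minimal sketch of the usual knobs; the device path is an assumption, the units are 512-byte sectors (so 8192 sectors is 4 MB), and the commands are echoed so the sketch is harmless to run as-is.

```shell
#!/bin/sh
# Sketch: the two common ways to query and set block-device readahead.
# Values are in 512-byte sectors (8192 sectors = 4 MB). DEV is an assumed
# path; the commands are echoed rather than executed.
DEV=/dev/hda
RA=8192
echo "blockdev --getra $DEV        # print current readahead"
echo "blockdev --setra $RA $DEV    # set readahead to $RA sectors"
echo "hdparm -a $RA $DEV           # hdparm equivalent"
```

As Thomas notes later in the thread, the 2.4 hdparm interface caps the value at 255, so the larger settings are a 2.6 experiment.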
* Re: 2.6.0 performance problems 2003-12-30 1:37 ` Martin Schlemmer @ 2003-12-30 1:40 ` Dave Jones 2003-12-30 1:49 ` Thomas Molina 1 sibling, 0 replies; 50+ messages in thread From: Dave Jones @ 2003-12-30 1:40 UTC (permalink / raw) To: Martin Schlemmer; +Cc: Thomas Molina, Linus Torvalds, Kernel Mailing List On Tue, Dec 30, 2003 at 03:37:44AM +0200, Martin Schlemmer wrote: > > It's not uncommon for a laptop to have a hard disk which supports > > higher DMA modes than what the IDE chipset supports. > > My aging Intel 440BX based VAIO has a disk in the same configuration > > as yours, supports udma4, but chipset only goes up to udma2. > Right, or as somebody else pointed out, it might not be a 80-pin cable. > > Lets rephrase - does it also run in udma2 mode with 2.4 ? Yes, because the chipset is not capable of >udma2. Dave -- Dave Jones http://www.codemonkey.org.uk ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-30 1:37 ` Martin Schlemmer 2003-12-30 1:40 ` Dave Jones @ 2003-12-30 1:49 ` Thomas Molina 2003-12-30 2:03 ` Mike Fedyk 1 sibling, 1 reply; 50+ messages in thread From: Thomas Molina @ 2003-12-30 1:49 UTC (permalink / raw) To: Martin Schlemmer; +Cc: Dave Jones, Linus Torvalds, Kernel Mailing List On Tue, 30 Dec 2003, Martin Schlemmer wrote: > > It's not uncommon for a laptop to have a hard disk which supports > > higher DMA modes than what the IDE chipset supports. > > My aging Intel 440BX based VAIO has a disk in the same configuration > > as yours, supports udma4, but chipset only goes up to udma2. > > > > Right, or as somebody else pointed out, it might not be a 80-pin cable. > > Lets rephrase - does it also run in udma2 mode with 2.4 ? And did > you check readahead? In 2.6 it seems that a bigger value is better - > I for instance have to set it to 8192 to have the same performance as > in 2.4 ... 8192 will be my next test. I'm doing a compile at the moment. It runs in udma2 under both 2.4 and 2.6. If I need an 80-pin cable then udma4 is not possible for this system. If I read the following, it is only capable of 66MHz anyway: 00:07.1 IDE interface: VIA Technologies, Inc. VT82C586A/B/VT82C686/A/B/VT8233/A/C/VT8235 PIPC Bus Master IDE (rev 10) (prog-if 8a [Master SecP PriP]) Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- Status: Cap+ 66Mhz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- Latency: 64 Region 4: I/O ports at 1420 [size=16] Capabilities: [c0] Power Management version 2 Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-) Status: D0 PME-Enable- DSel=0 DScale=0 PME- ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-30 1:49 ` Thomas Molina @ 2003-12-30 2:03 ` Mike Fedyk 0 siblings, 0 replies; 50+ messages in thread From: Mike Fedyk @ 2003-12-30 2:03 UTC (permalink / raw) To: Thomas Molina Cc: Martin Schlemmer, Dave Jones, Linus Torvalds, Kernel Mailing List On Mon, Dec 29, 2003 at 08:49:07PM -0500, Thomas Molina wrote: > On Tue, 30 Dec 2003, Martin Schlemmer wrote: > > > > It's not uncommon for a laptop to have a hard disk which supports > > > higher DMA modes than what the IDE chipset supports. > > > My aging Intel 440BX based VAIO has a disk in the same configuration > > > as yours, supports udma4, but chipset only goes up to udma2. > > > > > > > Right, or as somebody else pointed out, it might not be a 80-pin cable. > > > > Lets rephrase - does it also run in udma2 mode with 2.4 ? And did > > you check readahead? In 2.6 it seems that a bigger value is better - > > I for instance have to set it to 8192 to have the same performance as > > in 2.4 ... > > 8192 will be my next test. I'm doing a compile at the moment. It runs in > udma2 under both 2.4 and 2.6. If I need an 80-pin cable then udma4 is not > possible for this system. If I read the following, it is only capable of > 66MHz anyway: > > 00:07.1 IDE interface: VIA Technologies, Inc. > VT82C586A/B/VT82C686/A/B/VT8233/A/C/VT8235 PIPC Bus Master IDE (rev 10) > (prog-if 8a [Master SecP PriP]) > Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- > ParErr- Stepping- SERR- FastB2B- > Status: Cap+ 66Mhz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- 66Mhz has nothing to do with the DMA factor (33, 66, 100, 133, etc.). That's talking about the PCI bus, and I doubt you have a 66Mhz bus in a laptop. ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-29 23:05 ` Thomas Molina 2003-12-29 23:43 ` Martin Schlemmer @ 2004-01-03 19:37 ` Bill Davidsen 1 sibling, 0 replies; 50+ messages in thread From: Bill Davidsen @ 2004-01-03 19:37 UTC (permalink / raw) To: linux-kernel Thomas Molina wrote: > > On Mon, 29 Dec 2003, Linus Torvalds wrote: > > >> >>On Mon, 29 Dec 2003, Thomas Molina wrote: >> >>>I just finished a couple of comparisons between 2.4 and 2.6 which seem to >>>confirm my impressions. I understand that the comparison may not be >>>apples to apples and my methods of testing may not be rigorous, but here >>>it is. In contrast to some recent discussions on this list, this test is >>>a "real world" test at which 2.6 comes off much worse than 2.4. >> >>Are you sure you have DMA enabled on your laptop disk? Your 2.6.x system >>times are very high - much bigger than the user times. That sounds like >>PIO to me. > > > > Sorry. One other bit of data from 2.6: > > [root@lap bitkeeper]# hdparm -i /dev/hda > > /dev/hda: > > Model=IBM-DJSA-220, FwRev=JS4OAC3A, SerialNo=44V44FT3300 > Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs } > RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=4 > BuffType=DualPortCache, BuffSize=1874kB, MaxMultSect=16, MultSect=16 > CurCHS=17475/15/63, CurSects=16513875, LBA=yes, LBAsects=39070080 > IORDY=on/off, tPIO={min:240,w/IORDY:120}, tDMA={min:120,rec:120} > PIO modes: pio0 pio1 pio2 pio3 pio4 > DMA modes: mdma0 mdma1 mdma2 > UDMA modes: udma0 udma1 *udma2 udma3 udma4 > AdvancedPM=yes: mode=0x80 (128) WriteCache=enabled > Drive conforms to: ATA/ATAPI-5 T13 1321D revision 1: > > * signifies the current active mode What mode does 2.4 use? -- bill davidsen <davidsen@tmr.com> CTO TMR Associates, Inc Doing interesting things with small computers since 1979 ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-29 22:07 2.6.0 performance problems Thomas Molina 2003-12-29 22:21 ` Linus Torvalds @ 2003-12-30 1:25 ` Roger Luethi 2003-12-30 1:37 ` Thomas Molina 2003-12-30 1:27 ` Thomas Molina 2 siblings, 1 reply; 50+ messages in thread From: Roger Luethi @ 2003-12-30 1:25 UTC (permalink / raw) To: Thomas Molina; +Cc: Kernel Mailing List On Mon, 29 Dec 2003 17:07:46 -0500, Thomas Molina wrote: > Execution time for the test was: > real 13m33.482s > user 0m33.540s > sys 0m16.210s > > > Under 2.6 top shows: > user nice system irq softirq iowait idle > 0.9 0 5.3 0.9 0.3 92.6 0 > > Execution time for the test was: > real 22m42.397s > user 0m37.753s > sys 0m54.043s > > I've done no performance tweaking in either case. Both tests were done > immediately after boot up with only the top program running in each case. > I'm not sure what other data would be relevant here. Any thoughts from > the group would be appreciated. I bet this is just yet another instance of a problem we've been discussing on lkml and linux-mm for several months now (although Linus asking for DMA presumably means it's not as well known as I thought it was). Basically, when you need to resort to paging for getting work done on 2.6 you're screwed. Your bk export takes a lot more memory than you have RAM in your machine, right? Check the archives for this thread: 2.6.0-test9 - poor swap performance on low end machines Roger ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-30 1:25 ` Roger Luethi @ 2003-12-30 1:37 ` Thomas Molina 2003-12-30 19:21 ` Andy Isaacson 0 siblings, 1 reply; 50+ messages in thread From: Thomas Molina @ 2003-12-30 1:37 UTC (permalink / raw) To: Roger Luethi; +Cc: Kernel Mailing List On Tue, 30 Dec 2003, Roger Luethi wrote: > I bet this is just yet another instance of a problem we've been > discussing on lkml and linux-mm for several months now (although Linus > asking for DMA presumably means it's not as well known as I thought > it was). > > Basically, when you need to resort to paging for getting work done on > 2.6 you're screwed. Your bk export takes a lot more memory than you > have RAM in your machine, right? Right. I have 120MB RAM and 256MB swap partition. That corresponds to the 85 to 90 percent top says I am spending in iowait. ^ permalink raw reply [flat|nested] 50+ messages in thread
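One quick way to confirm the paging hypothesis is to watch the si/so (swap-in/swap-out) columns that vmstat reports alongside iowait. A sketch of an awk filter that flags nonzero swap traffic; the sample line below is made-up data standing in for live `vmstat 5` output, where si and so are fields 7 and 8.

```shell
#!/bin/sh
# Sketch: flag swap activity in vmstat output. In classic vmstat the si/so
# (swap-in/swap-out, KB/s) columns are fields 7 and 8. The sample line is
# fabricated; in practice you would pipe `vmstat 5` into the awk filter.
printf '%s\n' \
  ' 1  0  81920   2048   1024  16384  512  768   900  1100  300  400  5 10  0 85' |
awk '$7 > 0 || $8 > 0 { print "paging: si=" $7 " so=" $8 }'
# prints: paging: si=512 so=768
```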
* Re: 2.6.0 performance problems 2003-12-30 1:37 ` Thomas Molina @ 2003-12-30 19:21 ` Andy Isaacson 2003-12-30 19:40 ` William Lee Irwin III 0 siblings, 1 reply; 50+ messages in thread From: Andy Isaacson @ 2003-12-30 19:21 UTC (permalink / raw) To: Thomas Molina; +Cc: Roger Luethi, Kernel Mailing List On Mon, Dec 29, 2003 at 08:37:53PM -0500, Thomas Molina wrote: > On Tue, 30 Dec 2003, Roger Luethi wrote: > > I bet this is just yet another instance of a problem we've been > > discussing on lkml and linux-mm for several months now (although Linus > > asking for DMA presumably means it's not as well known as I thought > > it was). > > > > Basically, when you need to resort to paging for getting work done on > > 2.6 you're screwed. Your bk export takes a lot more memory than you > > have RAM in your machine, right? > > Right. I have 120MB RAM and 256MB swap partition. That corresponds to > the 85 to 90 percent top says I am spending in iowait. Yeah, right now BK needs about 140-160MB of working set to do a consistency check on the 2.5 tree. That means you're paging, and it sounds like paging sucks on 2.6? (Actually, BK is even happier if the kernel can keep all the sfiles in cache, so a half-gig is a comfortable amount for working with the current 2.5 tree, although 256MB should be enough to avoid paging hell. With a full gig, you can keep two full trees in "checkout:get" mode in cache, which is nice.) -andy ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-30 19:21 ` Andy Isaacson @ 2003-12-30 19:40 ` William Lee Irwin III 2003-12-30 22:24 ` Roger Luethi 0 siblings, 1 reply; 50+ messages in thread From: William Lee Irwin III @ 2003-12-30 19:40 UTC (permalink / raw) To: Andy Isaacson; +Cc: Thomas Molina, Roger Luethi, Kernel Mailing List On Mon, Dec 29, 2003 at 08:37:53PM -0500, Thomas Molina wrote: >> Right. I have 120MB RAM and 256MB swap partition. That corresponds to >> the 85 to 90 percent top says I am spending in iowait. On Tue, Dec 30, 2003 at 01:21:45PM -0600, Andy Isaacson wrote: > Yeah, right now BK needs about 140-160MB of working set to do a > consistency check on the 2.5 tree. That means you're paging, and it > sounds like paging sucks on 2.6? > (Actually, BK is even happier if the kernel can keep all the sfiles in > cache, so a half-gig is a comfortable amount for working with the > current 2.5 tree, although 256MB should be enough to avoid paging hell. > With a full gig, you can keep two full trees in "checkout:get" mode in > cache, which is nice.) Well, it's not supposed to suck. Something to try that affects paging directly would be adjusting /proc/sys/vm/swappiness to, say, 0 and 100 and trying it at both levels. More intelligent solutions require more instrumentation to address. I generally recommend: (1) logging top(1) running in batch mode (2) logging vmstat(1) (3) snapshotting /proc/meminfo (4) snapshotting /proc/vmstat I recommend an interval of 5s and logging with things like top b d 5 > /tmp/top.log 2>&1 & vmstat 5 > /tmp/vmstat.log 2>&1 & (while true; do cat /proc/meminfo; sleep 5; done) > /tmp/meminfo.log 2>&1 & (while true; do cat /proc/vmstat; sleep 5; done) > /tmp/proc_vmstat.log 2>&1 & Thus far interpretations of information collected this way have been somewhat lacking. Roger Luethi has identified various points at which regressions happened over the course of 2.5, but it appears that information hasn't yet been and still needs to be acted on.
If you could also try to identify points in time when the system has become less responsive I'd be much obliged. -- wli ^ permalink raw reply [flat|nested] 50+ messages in thread
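The snapshot loops above can be collected into one small script. A sketch covering the /proc portion of the recipe; the interval, round count, and log locations are assumptions (the recommendation above is a 5-second interval for the length of the test, while the defaults here are shortened only to keep the sketch quick to try).

```shell
#!/bin/sh
# Sketch: periodic /proc/meminfo and /proc/vmstat snapshots, as in the
# recipe above. INTERVAL, COUNT, and LOGDIR defaults are assumptions; the
# post suggests a 5-second interval for the duration of the workload.
INTERVAL=${INTERVAL:-1}
COUNT=${COUNT:-2}
LOGDIR=${LOGDIR:-/tmp}
i=0
while [ "$i" -lt "$COUNT" ]; do
    cat /proc/meminfo >> "$LOGDIR/meminfo.log"
    cat /proc/vmstat >> "$LOGDIR/proc_vmstat.log"
    i=$((i + 1))
    if [ "$i" -lt "$COUNT" ]; then sleep "$INTERVAL"; fi
done
echo "wrote $COUNT snapshots to $LOGDIR"
```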
* Re: 2.6.0 performance problems 2003-12-30 19:40 ` William Lee Irwin III @ 2003-12-30 22:24 ` Roger Luethi 2003-12-31 0:33 ` Thomas Molina 0 siblings, 1 reply; 50+ messages in thread From: Roger Luethi @ 2003-12-30 22:24 UTC (permalink / raw) To: William Lee Irwin III, Andy Isaacson, Thomas Molina, Kernel Mailing List On Tue, 30 Dec 2003 11:40:51 -0800, William Lee Irwin III wrote: > Thus far interpretations of information collected this way have been > somewhat lacking. Roger Luethi has identified various points at which > regressions happened over the course of 2.5, but it appears that > information hasn't yet been and still needs to be acted on. My data is interesting for kbuild/efax type work loads and it looks like bk export might be different. Thomas Molina tested with the patch I have occasionally posted to revert some VM changes in 2.6.0-test3: No apparent change in run time (hard to tell for sure since 2.6 increased variance considerably for some work loads). I'm not sure how to classify the bk export. It may be the qsbench type or something new. If it is the former, then 2.5.39 performs a lot worse than 2.5.38 (and 2.6.0, for that matter). It would also be interesting to see the numbers for 2.5.27: That's when physical scanning was introduced -- IMO that performance should be the minimal goal for 2.6. Roger ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-30 22:24 ` Roger Luethi @ 2003-12-31 0:33 ` Thomas Molina 2003-12-31 10:17 ` Roger Luethi 0 siblings, 1 reply; 50+ messages in thread From: Thomas Molina @ 2003-12-31 0:33 UTC (permalink / raw) To: Roger Luethi; +Cc: William Lee Irwin III, Andy Isaacson, Kernel Mailing List On Tue, 30 Dec 2003, Roger Luethi wrote: > I'm not sure how to classify the bk export. It may be the qsbench type > or something new. If it is the former, then 2.5.39 performs a lot worse > than 2.5.38 (and 2.6.0, for that matter). > > It would also be interesting to see the numbers for 2.5.27: That's when > physical scanning was introduced -- IMO that performance should be the > minimal goal for 2.6. It seems to me that the bk export test is a measure of memory pressure and io performance. On my good system with plenty of resources I see very little difference between 2.4 and 2.6. On my laptop with a slower processor, less memory, and a slower hard drive I get dramatic differences, depending on workload. I'm not sure what to think of the bk export test to tell you the truth. I've noticed for some time that 2.6 seemed to perform worse than 2.4. It was a simple "real world" test that I could use to gather real performance data. If I am understanding you, you would like data on 2.5.27, 2.5.38, and 2.5.39. I'll do it if it will help something. I'll look at it in the next couple of days. ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-31 0:33 ` Thomas Molina @ 2003-12-31 10:17 ` Roger Luethi 2003-12-31 11:21 ` Jens Axboe 0 siblings, 1 reply; 50+ messages in thread From: Roger Luethi @ 2003-12-31 10:17 UTC (permalink / raw) To: Thomas Molina; +Cc: William Lee Irwin III, Andy Isaacson, Kernel Mailing List On Tue, 30 Dec 2003 19:33:06 -0500, Thomas Molina wrote: > If I am understanding you, you would like data on 2.5.27, 2.5.38, and > 2.5.39. I'll do it if it will help something. I'll look at it in the Thanks. 2.5.39 alone will do, actually. I'm just curious how far the similarity between qsbench and bk export goes. Roger ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-31 10:17 ` Roger Luethi @ 2003-12-31 11:21 ` Jens Axboe 2003-12-31 21:03 ` Roger Luethi 2004-01-01 23:09 ` Roger Luethi 0 siblings, 2 replies; 50+ messages in thread From: Jens Axboe @ 2003-12-31 11:21 UTC (permalink / raw) To: Thomas Molina, William Lee Irwin III, Andy Isaacson, Kernel Mailing List On Wed, Dec 31 2003, Roger Luethi wrote: > On Tue, 30 Dec 2003 19:33:06 -0500, Thomas Molina wrote: > > If I am understanding you, you would like data on 2.5.27, 2.5.38, and > > 2.5.39. I'll do it if it will help something. I'll look at it in the > > Thanks. 2.5.39 alone will do, actually. I'm just curious how far the > similarity between qsbench and bk export goes. 2.5.39 is when the deadline io scheduler was merged. How do you define the qsbench suckiness? Do you have numbers for 2.4.x and 2.6.1-rc with the various io schedulers (it would be interesting to see stock, elevator=deadline, and elevator=noop). -- Jens Axboe ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-31 11:21 ` Jens Axboe @ 2003-12-31 21:03 ` Roger Luethi 2004-01-01 1:27 ` Thomas Molina 2004-01-01 23:09 ` Roger Luethi 1 sibling, 1 reply; 50+ messages in thread From: Roger Luethi @ 2003-12-31 21:03 UTC (permalink / raw) To: Jens Axboe Cc: Thomas Molina, William Lee Irwin III, Andy Isaacson, Kernel Mailing List On Wed, 31 Dec 2003 12:21:19 +0100, Jens Axboe wrote: > > Thanks. 2.5.39 alone will do, actually. I'm just curious how far the > > similarity between qsbench and bk export goes. > > 2.5.39 is when the deadline io scheduler was merged. How do you define > the qsbench suckiness? 2.5.39 was the biggest regression for qsbench (fixed later, most notably in 2.5.41). 2.5.39 was a significant improvement for efax ("fixed" in 2.5.43). All I'm doing here is reading the graph I posted at: http://hellgate.ch/bench/thrash.tar.gz For the systematic testing, I used "qsbench -p 4 -m 96" on a 256 MB machine. This allowed the kernel to achieve high performance with unfairness -- that's what 2.4 does: One process dominates all others and finishes very early, taking away most of the memory pressure. The spike for qsbench in 2.5.39 remains if only one process is forked (-p1 -m 384), though. I asked for the bk export numbers with 2.5.39 because I'm curious how close to qsbench the behavior really is. > Do you have numbers for 2.4.x and 2.6.1-rc with > the various io schedulers (it would be interesting to see stock, > elevator=deadline, and elevator=noop). I planned to compare the io schedulers in 2.6.0 anyway. Do you expect different numbers for a recent bk snapshot? Roger ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-31 21:03 ` Roger Luethi @ 2004-01-01 1:27 ` Thomas Molina 2004-01-01 10:23 ` Roger Luethi 0 siblings, 1 reply; 50+ messages in thread From: Thomas Molina @ 2004-01-01 1:27 UTC (permalink / raw) To: Roger Luethi; +Cc: Kernel Mailing List On Wed, 31 Dec 2003, Roger Luethi wrote: > For the systematic testing, I used "qsbench -p 4 -m 96" on a 256 MB > machine. This allowed the kernel to achieve high performance with > unfairness -- that's what 2.4 does: One process dominates all others > and finishes very early, taking away most of the memory pressure. > The spike for qsbench in 2.5.39 remains if only one process is forked > (-p1 -m 384), though. > > I asked for the bk export numbers with 2.5.39 because I'm curious how > close to qsbench the behavior really is. 2.5.39 won't compile for me "out of the box". I thought it might have been the toolset, but I was running RH8 and it has gcc 3.2. Was there a big change between 3.2 and 3.3.2 in Fedora Core 1? The reason I ask is that I also can't get NISTNet to compile on Fedora Core 1 or RHEL WS 3. It looks like incompatible libraries. ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2004-01-01 1:27 ` Thomas Molina @ 2004-01-01 10:23 ` Roger Luethi 0 siblings, 0 replies; 50+ messages in thread From: Roger Luethi @ 2004-01-01 10:23 UTC (permalink / raw) To: Thomas Molina; +Cc: Kernel Mailing List On Wed, 31 Dec 2003 20:27:34 -0500, Thomas Molina wrote: > 2.5.39 won't compile for me "out of the box". I thought it might have > been the toolset, but I was running RH8 and it has gcc 3.2. Was there a I used gcc 2.95. 3.2 won't work with older kernels, not sure when exactly problems were fixed, though. Roger ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-31 11:21 ` Jens Axboe 2003-12-31 21:03 ` Roger Luethi @ 2004-01-01 23:09 ` Roger Luethi 2004-01-02 10:11 ` Jens Axboe 1 sibling, 1 reply; 50+ messages in thread From: Roger Luethi @ 2004-01-01 23:09 UTC (permalink / raw) To: Jens Axboe; +Cc: Kernel Mailing List On Wed, 31 Dec 2003 12:21:19 +0100, Jens Axboe wrote: > the qsbench suckiness? Do you have numbers for 2.4.x and 2.6.1-rc with > the various io schedulers (it would be interesting to see stock, > elevator=deadline, and elevator=noop). For 2.6 AS comes out on top. It seems though that AS may be at least partially responsible for the exploding variance of run times for qsbench. I don't think we can compare 2.4 and 2.6 I/O schedulers for these loads. The io scheduler can do only so much if the VM evicts the wrong pages. Average, times for ten runs (in seconds, ordered). efax avg 2.4.23 228.8 227 227 228 229 229 229 229 230 230 230 2.6.0 noop 861.8 833 855 860 865 866 866 867 867 869 870 2.6.0 deadline 846.1 813 827 830 845 850 854 856 859 861 866 2.6.0 as 850.8 827 834 839 840 840 841 864 864 874 885 kbuild avg 2.4.23 140.4 116 118 124 125 132 150 153 157 161 168 2.6.0 noop 638.2 552 569 596 600 608 631 634 658 712 822 2.6.0 deadline 570.0 494 495 517 529 532 545 596 619 670 703 2.6.0 as 486.1 406 429 453 468 473 477 510 536 542 567 qsbench avg 2.4.23 223.8 219 220 221 223 223 223 223 225 230 231 2.6.0 noop 380.0 333 343 374 377 382 389 391 391 403 417 2.6.0 deadline 368.8 339 361 361 372 372 373 375 377 377 381 2.6.0 as 329.3 253 279 281 286 300 355 371 374 388 406 Roger ^ permalink raw reply [flat|nested] 50+ messages in thread
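The avg column above can be recomputed from the raw runs with a short awk program; as a check, the sketch below reproduces the 329.3 figure from the 2.6.0 as qsbench row.

```shell
#!/bin/sh
# Sketch: average a list of run times, as in the "avg" column above. The
# numbers are the ten qsbench runs reported for 2.6.0 with the AS io
# scheduler; awk sums the fields and divides by the field count.
echo "253 279 281 286 300 355 371 374 388 406" |
awk '{ s = 0; for (i = 1; i <= NF; i++) s += $i; printf "%.1f\n", s / NF }'
# prints 329.3
```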
* Re: 2.6.0 performance problems 2004-01-01 23:09 ` Roger Luethi @ 2004-01-02 10:11 ` Jens Axboe 0 siblings, 0 replies; 50+ messages in thread From: Jens Axboe @ 2004-01-02 10:11 UTC (permalink / raw) To: Kernel Mailing List On Fri, Jan 02 2004, Roger Luethi wrote: > On Wed, 31 Dec 2003 12:21:19 +0100, Jens Axboe wrote: > > the qsbench suckiness? Do you have numbers for 2.4.x and 2.6.1-rc with > > the various io schedulers (it would be interesting to see stock, > > elevator=deadline, and elevator=noop). > > For 2.6 AS comes out on top. It seems though that AS may be at least > partially responsible for the exploding variance of run times for > qsbench. > > I don't think we can compare 2.4 and 2.6 I/O schedulers for these > loads. The io scheduler can do only so much if the VM evicts the > wrong pages. Agree, in case of a thrashing vm it's an impossible job. > Average, times for ten runs (in seconds, ordered). > > efax avg > 2.4.23 228.8 227 227 228 229 229 229 229 230 230 230 > 2.6.0 noop 861.8 833 855 860 865 866 866 867 867 869 870 > 2.6.0 deadline 846.1 813 827 830 845 850 854 856 859 861 866 > 2.6.0 as 850.8 827 834 839 840 840 841 864 864 874 885 > > kbuild avg > 2.4.23 140.4 116 118 124 125 132 150 153 157 161 168 > 2.6.0 noop 638.2 552 569 596 600 608 631 634 658 712 822 > 2.6.0 deadline 570.0 494 495 517 529 532 545 596 619 670 703 > 2.6.0 as 486.1 406 429 453 468 473 477 510 536 542 567 > > qsbench avg > 2.4.23 223.8 219 220 221 223 223 223 223 225 230 231 > 2.6.0 noop 380.0 333 343 374 377 382 389 391 391 403 417 > 2.6.0 deadline 368.8 339 361 361 372 372 373 375 377 377 381 > 2.6.0 as 329.3 253 279 281 286 300 355 371 374 388 406 -- Jens Axboe ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-29 22:07 2.6.0 performance problems Thomas Molina 2003-12-29 22:21 ` Linus Torvalds 2003-12-30 1:25 ` Roger Luethi @ 2003-12-30 1:27 ` Thomas Molina 2003-12-30 2:53 ` Thomas Molina 2 siblings, 1 reply; 50+ messages in thread From: Thomas Molina @ 2003-12-30 1:27 UTC (permalink / raw) To: Kernel Mailing List On Mon, 29 Dec 2003, Thomas Molina wrote: > Under 2.4 top shows: > > user nice system irq softirq iowait idle > 1.3 0 2.1 0 0 0 96.6 > > Execution time for the test was: > real 13m33.482s > user 0m33.540s > sys 0m16.210s A suggestion was made that readahead might make a difference. Changing readahead under 2.4 from the default of 8 to 255, the times went to: real 13m50.198s user 0m33.780s sys 0m15.390s > Under 2.6 top shows: > user nice system irq softirq iowait idle > 0.9 0 5.3 0.9 0.3 92.6 0 > > Execution time for the test was: > real 22m42.397s > user 0m37.753s > sys 0m54.043s Changing readahead under 2.6 from the default of 255 to 8 changed the times to: real 25m39.653s user 0m37.594s sys 0m55.454s I'll try the suggestion of 8192 for 2.6 later. hdparm won't let me set readahead more than 255 for 2.4. I'm currently recompiling for profiling support. I'm ashamed to say that wasn't configured in. Linus, once I get that finished and do some testing I'll post results. ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-30 1:27 ` Thomas Molina @ 2003-12-30 2:53 ` Thomas Molina 0 siblings, 0 replies; 50+ messages in thread From: Thomas Molina @ 2003-12-30 2:53 UTC (permalink / raw) To: Kernel Mailing List > > Under 2.6 top shows: > > user nice system irq softirq iowait idle > > 0.9 0 5.3 0.9 0.3 92.6 0 > > > > Execution time for the test was: > > real 22m42.397s > > user 0m37.753s > > sys 0m54.043s > > > Changing 2.6 to 8 from 255 changed times: > > real 25m39.653s > user 0m37.594s > sys 0m55.454s Changing readahead to 8192 increased the real time to 28 minutes and left the user and sys times essentially unchanged. I've recompiled with profiling support. I'll give an update on that when I can. ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems @ 2003-12-30 11:41 Samium Gromoff 2004-01-03 19:54 ` Bill Davidsen [not found] ` <200312300855.00741.edt@aei.ca> 0 siblings, 2 replies; 50+ messages in thread From: Samium Gromoff @ 2003-12-30 11:41 UTC (permalink / raw) To: linux-kernel Reality sucks. People are ignorant enough to turn a blind eye to obvious vm regressions. No developers run 64M boxens anymore... regards, Samium "sad" Gromoff ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2003-12-30 11:41 Samium Gromoff @ 2004-01-03 19:54 ` Bill Davidsen [not found] ` <200312300855.00741.edt@aei.ca> 1 sibling, 0 replies; 50+ messages in thread From: Bill Davidsen @ 2004-01-03 19:54 UTC (permalink / raw) To: linux-kernel Samium Gromoff wrote: > Reality sucks. > > People are ignorant enough to turn blind eye to obvious vm regressions. > > No developers run 64M boxens anymore... Developers should NOT be running slow machines, but they should be testing slow machines. I do my builds on a four way Xeon machine, and install on a slow machine for test. If you look at some of the response testing I'm doing, it's on a 96MB p3-350, just for that reason. And I have a P5-133 I built but haven't really benchmarked yet, it has only 64MB. I think the place such slow machines are relevant is embedded, which is why I occasionally rant about locking in code to hide Athlon CPU bugs which just wastes space on unbroken machines. I have a pile of 486 machines I want to run as firewalls, don't plan to do kernel builds on those, either :-( -- bill davidsen <davidsen@tmr.com> CTO TMR Associates, Inc Doing interesting things with small computers since 1979 ^ permalink raw reply [flat|nested] 50+ messages in thread
[parent not found: <200312300855.00741.edt@aei.ca>]
* Re: 2.6.0 performance problems [not found] ` <200312300855.00741.edt@aei.ca> @ 2004-01-05 12:33 ` Samium Gromoff 2004-01-05 15:09 ` Ed Tomlinson 0 siblings, 1 reply; 50+ messages in thread From: Samium Gromoff @ 2004-01-05 12:33 UTC (permalink / raw) To: Ed Tomlinson; +Cc: Samium Gromoff, linux-kernel At Tue, 30 Dec 2003 08:55:00 -0500, Ed Tomlinson wrote: > > On December 30, 2003 06:41 am, Samium Gromoff wrote: > > Reality sucks. > > > > People are ignorant enough to turn blind eye to obvious vm regressions. > > > > No developers run 64M boxens anymore... > > No one is turning a blind eye. Notice Linus has reponded to and is interested in this > thread. The vm is not perfect in all cases - in most cases it is faster though... "in most cases it is faster" is a big lie. The reality is: on all usual one-way boxes 2.6 goes slower than 2.4 once you start paging. Ask Roger Luethi. (And yes, i have done tests myself) One of the worst things i see about it is that people are so terribly misinformed. > Ed Tomlinson regards, Samium Gromoff ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2004-01-05 12:33 ` Samium Gromoff @ 2004-01-05 15:09 ` Ed Tomlinson 2004-01-06 2:23 ` David Lang 0 siblings, 1 reply; 50+ messages in thread From: Ed Tomlinson @ 2004-01-05 15:09 UTC (permalink / raw) To: Samium Gromoff; +Cc: linux-kernel On January 05, 2004 07:33 am, Samium Gromoff wrote: > At Tue, 30 Dec 2003 08:55:00 -0500, > > Ed Tomlinson wrote: > > On December 30, 2003 06:41 am, Samium Gromoff wrote: > > > Reality sucks. > > > > > > People are ignorant enough to turn blind eye to obvious vm regressions. > > > > > > No developers run 64M boxens anymore... > > > > No one is turning a blind eye. Notice Linus has reponded to and is > > interested in this thread. The vm is not perfect in all cases - in most > > cases it is faster though... > > "in most cases it is faster" is a big lie. > > The reality is: on all usual one-way boxes 2.6 goes slower than 2.4 once > you start paging. I would argue that in most cases you do not page, or page very little - I know that is the case here. In any case it does point out what part of the system needs to be improved. Ed ^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: 2.6.0 performance problems 2004-01-05 15:09 ` Ed Tomlinson @ 2004-01-06 2:23 ` David Lang 2004-01-06 14:44 ` Samium Gromoff 0 siblings, 1 reply; 50+ messages in thread From: David Lang @ 2004-01-06 2:23 UTC (permalink / raw) To: Ed Tomlinson; +Cc: Samium Gromoff, linux-kernel On Mon, 5 Jan 2004, Ed Tomlinson wrote: > > On January 05, 2004 07:33 am, Samium Gromoff wrote: > > At Tue, 30 Dec 2003 08:55:00 -0500, > > > > Ed Tomlinson wrote: > > > On December 30, 2003 06:41 am, Samium Gromoff wrote: > > > > Reality sucks. > > > > > > > > People are ignorant enough to turn blind eye to obvious vm regressions. > > > > > > > > No developers run 64M boxens anymore... > > > > > > No one is turning a blind eye. Notice Linus has reponded to and is > > > interested in this thread. The vm is not perfect in all cases - in most > > > cases it is faster though... > > > > "in most cases it is faster" is a big lie. > > > > The reality is: on all usual one-way boxes 2.6 goes slower than 2.4 once > > you start paging. > > I would argue that in most case you do not page or page very little - know that is > the case here. > This may be true if you have lots of memory, but with memory hogs like mozilla and openoffice out there, anyone who is working on an older machine will be paging, if only for the time it takes for the huge bloated desktop app to start and get its working set into memory. Things get even worse if you make the mistake of using Gnome or KDE for your desktop. David Lang -- "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." - Brian W. Kernighan ^ permalink raw reply [flat|nested] 50+ messages in thread
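The launch-time paging David describes is easy to observe directly. A minimal sketch (an illustration, not part of the thread; the `pswpin`/`pswpout` counter names are those exposed by `/proc/vmstat` on later kernels, and the application launch is left as a placeholder - on the kernels under discussion one would have watched the `si`/`so` columns of `vmstat 1` instead):

```shell
# Snapshot the cumulative swap-in/swap-out page counters around an
# application launch to see how much paging the launch caused.
before=$(awk '/^pswpin |^pswpout /{s += $2} END {print s+0}' /proc/vmstat)
# ... launch the desktop app here and wait for it to settle ...
after=$(awk '/^pswpin |^pswpout /{s += $2} END {print s+0}' /proc/vmstat)
echo "pages swapped during launch: $((after - before))"
```

On a 64M box with a swap-heavy app the printed delta would be large; on a machine with plenty of RAM it stays near zero.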
* Re: 2.6.0 performance problems 2004-01-06 2:23 ` David Lang @ 2004-01-06 14:44 ` Samium Gromoff 0 siblings, 0 replies; 50+ messages in thread From: Samium Gromoff @ 2004-01-06 14:44 UTC (permalink / raw) To: David Lang; +Cc: Ed Tomlinson, Samium Gromoff, linux-kernel At Mon, 5 Jan 2004 18:23:54 -0800 (PST), David Lang wrote: > > On Mon, 5 Jan 2004, Ed Tomlinson wrote: > > > > > On January 05, 2004 07:33 am, Samium Gromoff wrote: > > > At Tue, 30 Dec 2003 08:55:00 -0500, > > > > > > Ed Tomlinson wrote: > > > > On December 30, 2003 06:41 am, Samium Gromoff wrote: > > > > > Reality sucks. > > > > > > > > > > People are ignorant enough to turn blind eye to obvious vm regressions. > > > > > > > > > > No developers run 64M boxens anymore... > > > > > > > > No one is turning a blind eye. Notice Linus has reponded to and is > > > > interested in this thread. The vm is not perfect in all cases - in most > > > > cases it is faster though... > > > > > > "in most cases it is faster" is a big lie. > > > > > > The reality is: on all usual one-way boxes 2.6 goes slower than 2.4 once > > > you start paging. > > > > I would argue that in most case you do not page or page very little - know that is > > the case here. > > > > This may be true of you have lots of memory, but with memory hogs like > mozilla and openoffice out there anyone who is working on an older machine > will be pageing, if only for the time it takes for the huge bloated > desktop app to start and get it's working set into memory. > > things get even worse if you make the mistake of useing Gnome or KDE for > your desktop. I've timed delta("exec startx", `last io') with 64M RAM on my box. The desktop consisted of wmaker, several xterms, devhelp (a gnome2 app) and several (3-4) wmaker applets, with devhelp being the hoggiest hog. I also had several services in the background, but they're mostly irrelevant, due to inactivity. 
The discovery was that 2.6.0-test9 was about 1.5x slower to reach the `noio' state than 2.4.20-pre9. And no, I don't use ide on my desktop, so no dma issues there ;-) > David Lang > > -- > "Debugging is twice as hard as writing the code in the first place. > Therefore, if you write the code as cleverly as possible, you are, > by definition, not smart enough to debug it." - Brian W. Kernighan regards, Samium Gromoff ^ permalink raw reply [flat|nested] 50+ messages in thread
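The delta("exec startx", `last io') measurement described above can be approximated by polling the disk counters until they stop changing. A rough sketch, assuming a `/proc/diskstats` interface; the 1-second poll interval and 30-iteration safety cap are arbitrary choices, not anything from the thread:

```shell
# Time from "now" until I/O quiescence: record the start time, then poll
# the cumulative reads-completed ($4) and writes-completed ($8) counters
# once per second; when two consecutive samples match, the disks went quiet.
start=$(date +%s)
prev=""
for i in $(seq 1 30); do               # safety cap so the loop always ends
    cur=$(awk '{print $4, $8}' /proc/diskstats | md5sum)
    [ "$cur" = "$prev" ] && break
    prev=$cur
    sleep 1
done
echo "reached I/O quiescence after $(( $(date +%s) - start ))s"
```

Run right after starting the desktop, this reports roughly how long the login's paging and startup I/O keep the disk busy, which is the figure being compared between 2.4.20-pre9 and 2.6.0-test9.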
end of thread, other threads:[~2004-01-06 14:46 UTC | newest] Thread overview: 50+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- 2003-12-29 22:07 2.6.0 performance problems Thomas Molina 2003-12-29 22:21 ` Linus Torvalds 2003-12-29 22:58 ` Thomas Molina 2003-12-29 23:04 ` Linus Torvalds 2003-12-30 14:14 ` Thomas Molina 2003-12-30 14:39 ` William Lee Irwin III 2003-12-30 21:14 ` Thomas Molina 2003-12-30 21:23 ` Linus Torvalds 2003-12-31 0:50 ` Thomas Molina 2003-12-31 1:01 ` Linus Torvalds 2003-12-31 1:34 ` Andrew Morton 2003-12-31 11:25 ` bert hubert 2003-12-30 21:35 ` William Lee Irwin III 2003-12-30 23:46 ` Roger Luethi 2003-12-30 18:20 ` Linus Torvalds 2003-12-29 23:14 ` Martin Schlemmer 2003-12-30 5:09 ` William Lee Irwin III 2003-12-30 10:27 ` Thomas Molina 2003-12-29 23:25 ` David B. Stevens 2003-12-29 23:05 ` Thomas Molina 2003-12-29 23:43 ` Martin Schlemmer 2003-12-30 0:17 ` Thomas Molina 2003-12-30 1:23 ` Martin Schlemmer 2003-12-30 1:27 ` Dave Jones 2003-12-30 1:37 ` Martin Schlemmer 2003-12-30 1:40 ` Dave Jones 2003-12-30 1:49 ` Thomas Molina 2003-12-30 2:03 ` Mike Fedyk 2004-01-03 19:37 ` Bill Davidsen 2003-12-30 1:25 ` Roger Luethi 2003-12-30 1:37 ` Thomas Molina 2003-12-30 19:21 ` Andy Isaacson 2003-12-30 19:40 ` William Lee Irwin III 2003-12-30 22:24 ` Roger Luethi 2003-12-31 0:33 ` Thomas Molina 2003-12-31 10:17 ` Roger Luethi 2003-12-31 11:21 ` Jens Axboe 2003-12-31 21:03 ` Roger Luethi 2004-01-01 1:27 ` Thomas Molina 2004-01-01 10:23 ` Roger Luethi 2004-01-01 23:09 ` Roger Luethi 2004-01-02 10:11 ` Jens Axboe 2003-12-30 1:27 ` Thomas Molina 2003-12-30 2:53 ` Thomas Molina 2003-12-30 11:41 Samium Gromoff 2004-01-03 19:54 ` Bill Davidsen [not found] ` <200312300855.00741.edt@aei.ca> 2004-01-05 12:33 ` Samium Gromoff 2004-01-05 15:09 ` Ed Tomlinson 2004-01-06 2:23 ` David Lang 2004-01-06 14:44 ` Samium Gromoff