* CFQ scheduler leaves task in D state
@ 2003-09-01 19:52 Bongani Hlope
2003-09-02 5:48 ` Jens Axboe
0 siblings, 1 reply; 4+ messages in thread
From: Bongani Hlope @ 2003-09-01 19:52 UTC (permalink / raw)
To: linux-kernel; +Cc: Jens Axboe
[-- Attachment #1: Type: text/plain, Size: 359 bytes --]
Hi
I tried the CFQ scheduler patch you posted on top of 2.6.0-test4-mm3-1,
including the O19int patch from Con Kolivas and I got the attached stacktrace.
---------------------------------------------
[-- Attachment #2: oops-cfq --]
[-- Type: application/octet-stream, Size: 13700 bytes --]
Unable to handle kernel paging request at virtual address 6b6b6b77
printing eip:
c0211c42
*pde = 00000000
Oops: 0000 [#1]
PREEMPT
CPU: 0
EIP: 0060:[<c0211c42>] Not tainted VLI
EFLAGS: 00210002
EIP is at rb_next+0x22/0x60
eax: 6b6b6b6b ebx: c0279ef0 ecx: cfdffac0 edx: 6b6b6b6b
esi: c136cbf0 edi: cfdffac0 ebp: 00000000 esp: ca7dbab4
ds: 007b es: 007b ss: 0068
Process urpmi (pid: 3573, threadinfo=ca7da000 task=cf244080)
Stack: c0279f02 c136ae84 c026ead9 cfdffac0 c136cbf0 00000000 c0271a09 cfdffac0
c136cbf0 00000000 019c8989 00000000 00000000 00000008 00000008 00000000
c136cbf0 cfdffac0 00000008 cf80b1dc 00000008 c0271b35 cfdffac0 cf80b1dc
Call Trace:
[<c0279f02>] cfq_latter_request+0x12/0x20
[<c026ead9>] elv_latter_request+0x39/0x40
[<c0271a09>] __make_request+0x539/0x550
[<c0271b35>] generic_make_request+0x115/0x1f0
[<c0120140>] autoremove_wake_function+0x0/0x50
[<c019dcd4>] ext3_get_block+0x64/0xb0
[<c0271c65>] submit_bio+0x55/0x80
[<c01811ab>] mpage_bio_submit+0x2b/0x40
[<c018176e>] do_mpage_readpage+0x41e/0x500
[<c03a08ca>] apic_timer_interrupt+0x1a/0x20
[<c013dcd0>] add_to_page_cache+0x70/0x100
[<c0181957>] mpage_readpages+0x107/0x1e0
[<c019dc70>] ext3_get_block+0x0/0xb0
[<c0144f53>] read_pages+0x1a3/0x1c0
[<c019dc70>] ext3_get_block+0x0/0xb0
[<c014297f>] __alloc_pages+0xbf/0x370
[<c0145307>] do_page_cache_readahead+0x177/0x1a0
[<c0145402>] page_cache_readahead+0xd2/0x190
[<c013e787>] do_generic_mapping_read+0x227/0x520
[<c010e729>] handle_IRQ_event+0x49/0x80
[<c013ea80>] file_read_actor+0x0/0x120
[<c013edb9>] __generic_file_aio_read+0x219/0x240
[<c013ea80>] file_read_actor+0x0/0x120
[<c013007b>] sys_reboot+0x14b/0x370
[<c013ee16>] generic_file_aio_read+0x36/0x40
[<c015d1f6>] do_sync_read+0xb6/0x100
[<c014ef30>] handle_mm_fault+0x190/0x1f0
[<c0120140>] autoremove_wake_function+0x0/0x50
[<c011c6d8>] do_page_fault+0x1e8/0x4c3
[<c02ea9f0>] i8042_timer_func+0x0/0x40
[<c02eaa27>] i8042_timer_func+0x37/0x40
[<c012adfe>] run_timer_softirq+0xce/0x1d0
[<c012af75>] do_timer+0x65/0xf0
[<c015d33c>] vfs_read+0xfc/0x140
[<c015d6f6>] sys_pread64+0x56/0x80
[<c039ff3b>] syscall_call+0x7/0xb
[<c039007b>] xprt_transmit+0x6b/0x530
Code: 75 f7 89 d0 c3 8d 74 26 00 8b 54 24 04 8b 42 08 85 c0 74 25 89 c2 8b 40 0c 85 c0 74 15 8d b6 00 00 00 00 8d bf 00 00 00 00 89 c2 <8b> 40 0c 85 c0 75 f7 89 d0 c3 8d 74 26 00 8b 02 85 c0 89 c1 74
<6>note: urpmi[3573] exited with preempt_count 1
Debug: sleeping function called from invalid context at include/linux/rwsem.h:43
Call Trace:
[<c011fd61>] __might_sleep+0x61/0x80
[<c0122f5d>] profile_exit_task+0x1d/0x60
[<c012484b>] do_exit+0x7b/0x410
[<c011c4f0>] do_page_fault+0x0/0x4c3
[<c010d508>] die+0xf8/0x100
[<c011c94d>] do_page_fault+0x45d/0x4c3
[<c01457bd>] check_poison_obj+0x4d/0x1c0
[<c0147440>] kmem_cache_alloc+0x170/0x1a0
[<c0147440>] kmem_cache_alloc+0x170/0x1a0
[<c014160f>] mempool_free+0x4f/0xb0
[<c0279ef0>] cfq_latter_request+0x0/0x20
[<c011c4f0>] do_page_fault+0x0/0x4c3
[<c03a0947>] error_code+0x2f/0x38
[<c0279ef0>] cfq_latter_request+0x0/0x20
[<c0211c42>] rb_next+0x22/0x60
[<c0279f02>] cfq_latter_request+0x12/0x20
[<c026ead9>] elv_latter_request+0x39/0x40
[<c0271a09>] __make_request+0x539/0x550
[<c0271b35>] generic_make_request+0x115/0x1f0
[<c0120140>] autoremove_wake_function+0x0/0x50
[<c019dcd4>] ext3_get_block+0x64/0xb0
[<c0271c65>] submit_bio+0x55/0x80
[<c01811ab>] mpage_bio_submit+0x2b/0x40
[<c018176e>] do_mpage_readpage+0x41e/0x500
[<c03a08ca>] apic_timer_interrupt+0x1a/0x20
[<c013dcd0>] add_to_page_cache+0x70/0x100
[<c0181957>] mpage_readpages+0x107/0x1e0
[<c019dc70>] ext3_get_block+0x0/0xb0
[<c0144f53>] read_pages+0x1a3/0x1c0
[<c019dc70>] ext3_get_block+0x0/0xb0
[<c014297f>] __alloc_pages+0xbf/0x370
[<c0145307>] do_page_cache_readahead+0x177/0x1a0
[<c0145402>] page_cache_readahead+0xd2/0x190
[<c013e787>] do_generic_mapping_read+0x227/0x520
[<c010e729>] handle_IRQ_event+0x49/0x80
[<c013ea80>] file_read_actor+0x0/0x120
[<c013edb9>] __generic_file_aio_read+0x219/0x240
[<c013ea80>] file_read_actor+0x0/0x120
[<c013007b>] sys_reboot+0x14b/0x370
[<c013ee16>] generic_file_aio_read+0x36/0x40
[<c015d1f6>] do_sync_read+0xb6/0x100
[<c014ef30>] handle_mm_fault+0x190/0x1f0
[<c0120140>] autoremove_wake_function+0x0/0x50
[<c011c6d8>] do_page_fault+0x1e8/0x4c3
[<c02ea9f0>] i8042_timer_func+0x0/0x40
[<c02eaa27>] i8042_timer_func+0x37/0x40
[<c012adfe>] run_timer_softirq+0xce/0x1d0
[<c012af75>] do_timer+0x65/0xf0
[<c015d33c>] vfs_read+0xfc/0x140
[<c015d6f6>] sys_pread64+0x56/0x80
[<c039ff3b>] syscall_call+0x7/0xb
[<c039007b>] xprt_transmit+0x6b/0x530
bad: scheduling while atomic!
Call Trace:
[<c011e6ed>] schedule+0x60d/0x640
[<c014d1cb>] unmap_page_range+0x4b/0x80
[<c014d420>] unmap_vmas+0x220/0x230
[<c0151630>] exit_mmap+0x80/0x1d0
[<c01203e6>] mmput+0x76/0xe0
[<c01248b7>] do_exit+0xe7/0x410
[<c011c4f0>] do_page_fault+0x0/0x4c3
[<c010d508>] die+0xf8/0x100
[<c011c94d>] do_page_fault+0x45d/0x4c3
[<c01457bd>] check_poison_obj+0x4d/0x1c0
[<c0147440>] kmem_cache_alloc+0x170/0x1a0
[<c0147440>] kmem_cache_alloc+0x170/0x1a0
[<c014160f>] mempool_free+0x4f/0xb0
[<c0279ef0>] cfq_latter_request+0x0/0x20
[<c011c4f0>] do_page_fault+0x0/0x4c3
[<c03a0947>] error_code+0x2f/0x38
[<c0279ef0>] cfq_latter_request+0x0/0x20
[<c0211c42>] rb_next+0x22/0x60
[<c0279f02>] cfq_latter_request+0x12/0x20
[<c026ead9>] elv_latter_request+0x39/0x40
[<c0271a09>] __make_request+0x539/0x550
[<c0271b35>] generic_make_request+0x115/0x1f0
[<c0120140>] autoremove_wake_function+0x0/0x50
[<c019dcd4>] ext3_get_block+0x64/0xb0
[<c0271c65>] submit_bio+0x55/0x80
[<c01811ab>] mpage_bio_submit+0x2b/0x40
[<c018176e>] do_mpage_readpage+0x41e/0x500
[<c03a08ca>] apic_timer_interrupt+0x1a/0x20
[<c013dcd0>] add_to_page_cache+0x70/0x100
[<c0181957>] mpage_readpages+0x107/0x1e0
[<c019dc70>] ext3_get_block+0x0/0xb0
[<c0144f53>] read_pages+0x1a3/0x1c0
[<c019dc70>] ext3_get_block+0x0/0xb0
[<c014297f>] __alloc_pages+0xbf/0x370
[<c0145307>] do_page_cache_readahead+0x177/0x1a0
[<c0145402>] page_cache_readahead+0xd2/0x190
[<c013e787>] do_generic_mapping_read+0x227/0x520
[<c010e729>] handle_IRQ_event+0x49/0x80
[<c013ea80>] file_read_actor+0x0/0x120
[<c013edb9>] __generic_file_aio_read+0x219/0x240
[<c013ea80>] file_read_actor+0x0/0x120
[<c013007b>] sys_reboot+0x14b/0x370
[<c013ee16>] generic_file_aio_read+0x36/0x40
[<c015d1f6>] do_sync_read+0xb6/0x100
[<c014ef30>] handle_mm_fault+0x190/0x1f0
[<c0120140>] autoremove_wake_function+0x0/0x50
[<c011c6d8>] do_page_fault+0x1e8/0x4c3
[<c02ea9f0>] i8042_timer_func+0x0/0x40
[<c02eaa27>] i8042_timer_func+0x37/0x40
[<c012adfe>] run_timer_softirq+0xce/0x1d0
[<c012af75>] do_timer+0x65/0xf0
[<c015d33c>] vfs_read+0xfc/0x140
[<c015d6f6>] sys_pread64+0x56/0x80
[<c039ff3b>] syscall_call+0x7/0xb
[<c039007b>] xprt_transmit+0x6b/0x530
bad: scheduling while atomic!
Call Trace:
[<c011e6ed>] schedule+0x60d/0x640
[<c014d1cb>] unmap_page_range+0x4b/0x80
[<c014d420>] unmap_vmas+0x220/0x230
[<c0151630>] exit_mmap+0x80/0x1d0
[<c01203e6>] mmput+0x76/0xe0
[<c01248b7>] do_exit+0xe7/0x410
[<c011c4f0>] do_page_fault+0x0/0x4c3
[<c010d508>] die+0xf8/0x100
[<c011c94d>] do_page_fault+0x45d/0x4c3
[<c01457bd>] check_poison_obj+0x4d/0x1c0
[<c0147440>] kmem_cache_alloc+0x170/0x1a0
[<c0147440>] kmem_cache_alloc+0x170/0x1a0
[<c014160f>] mempool_free+0x4f/0xb0
[<c0279ef0>] cfq_latter_request+0x0/0x20
[<c011c4f0>] do_page_fault+0x0/0x4c3
[<c03a0947>] error_code+0x2f/0x38
[<c0279ef0>] cfq_latter_request+0x0/0x20
[<c0211c42>] rb_next+0x22/0x60
[<c0279f02>] cfq_latter_request+0x12/0x20
[<c026ead9>] elv_latter_request+0x39/0x40
[<c0271a09>] __make_request+0x539/0x550
[<c0271b35>] generic_make_request+0x115/0x1f0
[<c0120140>] autoremove_wake_function+0x0/0x50
[<c019dcd4>] ext3_get_block+0x64/0xb0
[<c0271c65>] submit_bio+0x55/0x80
[<c01811ab>] mpage_bio_submit+0x2b/0x40
[<c018176e>] do_mpage_readpage+0x41e/0x500
[<c03a08ca>] apic_timer_interrupt+0x1a/0x20
[<c013dcd0>] add_to_page_cache+0x70/0x100
[<c0181957>] mpage_readpages+0x107/0x1e0
[<c019dc70>] ext3_get_block+0x0/0xb0
[<c0144f53>] read_pages+0x1a3/0x1c0
[<c019dc70>] ext3_get_block+0x0/0xb0
[<c014297f>] __alloc_pages+0xbf/0x370
[<c0145307>] do_page_cache_readahead+0x177/0x1a0
[<c0145402>] page_cache_readahead+0xd2/0x190
[<c013e787>] do_generic_mapping_read+0x227/0x520
[<c010e729>] handle_IRQ_event+0x49/0x80
[<c013ea80>] file_read_actor+0x0/0x120
[<c013edb9>] __generic_file_aio_read+0x219/0x240
[<c013ea80>] file_read_actor+0x0/0x120
[<c013007b>] sys_reboot+0x14b/0x370
[<c013ee16>] generic_file_aio_read+0x36/0x40
[<c015d1f6>] do_sync_read+0xb6/0x100
[<c014ef30>] handle_mm_fault+0x190/0x1f0
[<c0120140>] autoremove_wake_function+0x0/0x50
[<c011c6d8>] do_page_fault+0x1e8/0x4c3
[<c02ea9f0>] i8042_timer_func+0x0/0x40
[<c02eaa27>] i8042_timer_func+0x37/0x40
[<c012adfe>] run_timer_softirq+0xce/0x1d0
[<c012af75>] do_timer+0x65/0xf0
[<c015d33c>] vfs_read+0xfc/0x140
[<c015d6f6>] sys_pread64+0x56/0x80
[<c039ff3b>] syscall_call+0x7/0xb
[<c039007b>] xprt_transmit+0x6b/0x530
bad: scheduling while atomic!
Call Trace:
[<c011e6ed>] schedule+0x60d/0x640
[<c014d1cb>] unmap_page_range+0x4b/0x80
[<c014d420>] unmap_vmas+0x220/0x230
[<c0151630>] exit_mmap+0x80/0x1d0
[<c01203e6>] mmput+0x76/0xe0
[<c01248b7>] do_exit+0xe7/0x410
[<c011c4f0>] do_page_fault+0x0/0x4c3
[<c010d508>] die+0xf8/0x100
[<c011c94d>] do_page_fault+0x45d/0x4c3
[<c01457bd>] check_poison_obj+0x4d/0x1c0
[<c0147440>] kmem_cache_alloc+0x170/0x1a0
[<c0147440>] kmem_cache_alloc+0x170/0x1a0
[<c014160f>] mempool_free+0x4f/0xb0
[<c0279ef0>] cfq_latter_request+0x0/0x20
[<c011c4f0>] do_page_fault+0x0/0x4c3
[<c03a0947>] error_code+0x2f/0x38
[<c0279ef0>] cfq_latter_request+0x0/0x20
[<c0211c42>] rb_next+0x22/0x60
[<c0279f02>] cfq_latter_request+0x12/0x20
[<c026ead9>] elv_latter_request+0x39/0x40
[<c0271a09>] __make_request+0x539/0x550
[<c0271b35>] generic_make_request+0x115/0x1f0
[<c0120140>] autoremove_wake_function+0x0/0x50
[<c019dcd4>] ext3_get_block+0x64/0xb0
[<c0271c65>] submit_bio+0x55/0x80
[<c01811ab>] mpage_bio_submit+0x2b/0x40
[<c018176e>] do_mpage_readpage+0x41e/0x500
[<c03a08ca>] apic_timer_interrupt+0x1a/0x20
[<c013dcd0>] add_to_page_cache+0x70/0x100
[<c0181957>] mpage_readpages+0x107/0x1e0
[<c019dc70>] ext3_get_block+0x0/0xb0
[<c0144f53>] read_pages+0x1a3/0x1c0
[<c019dc70>] ext3_get_block+0x0/0xb0
[<c014297f>] __alloc_pages+0xbf/0x370
[<c0145307>] do_page_cache_readahead+0x177/0x1a0
[<c0145402>] page_cache_readahead+0xd2/0x190
[<c013e787>] do_generic_mapping_read+0x227/0x520
[<c010e729>] handle_IRQ_event+0x49/0x80
[<c013ea80>] file_read_actor+0x0/0x120
[<c013edb9>] __generic_file_aio_read+0x219/0x240
[<c013ea80>] file_read_actor+0x0/0x120
[<c013007b>] sys_reboot+0x14b/0x370
[<c013ee16>] generic_file_aio_read+0x36/0x40
[<c015d1f6>] do_sync_read+0xb6/0x100
[<c014ef30>] handle_mm_fault+0x190/0x1f0
[<c0120140>] autoremove_wake_function+0x0/0x50
[<c011c6d8>] do_page_fault+0x1e8/0x4c3
[<c02ea9f0>] i8042_timer_func+0x0/0x40
[<c02eaa27>] i8042_timer_func+0x37/0x40
[<c012adfe>] run_timer_softirq+0xce/0x1d0
[<c012af75>] do_timer+0x65/0xf0
[<c015d33c>] vfs_read+0xfc/0x140
[<c015d6f6>] sys_pread64+0x56/0x80
[<c039ff3b>] syscall_call+0x7/0xb
[<c039007b>] xprt_transmit+0x6b/0x530
Debug: sleeping function called from invalid context at include/asm/semaphore.h:119
Call Trace:
[<c011fd61>] __might_sleep+0x61/0x80
[<c014fb01>] remove_shared_vm_struct+0x41/0xa0
[<c0151702>] exit_mmap+0x152/0x1d0
[<c01203e6>] mmput+0x76/0xe0
[<c01248b7>] do_exit+0xe7/0x410
[<c011c4f0>] do_page_fault+0x0/0x4c3
[<c010d508>] die+0xf8/0x100
[<c011c94d>] do_page_fault+0x45d/0x4c3
[<c01457bd>] check_poison_obj+0x4d/0x1c0
[<c0147440>] kmem_cache_alloc+0x170/0x1a0
[<c0147440>] kmem_cache_alloc+0x170/0x1a0
[<c014160f>] mempool_free+0x4f/0xb0
[<c0279ef0>] cfq_latter_request+0x0/0x20
[<c011c4f0>] do_page_fault+0x0/0x4c3
[<c03a0947>] error_code+0x2f/0x38
[<c0279ef0>] cfq_latter_request+0x0/0x20
[<c0211c42>] rb_next+0x22/0x60
[<c0279f02>] cfq_latter_request+0x12/0x20
[<c026ead9>] elv_latter_request+0x39/0x40
[<c0271a09>] __make_request+0x539/0x550
[<c0271b35>] generic_make_request+0x115/0x1f0
[<c0120140>] autoremove_wake_function+0x0/0x50
[<c019dcd4>] ext3_get_block+0x64/0xb0
[<c0271c65>] submit_bio+0x55/0x80
[<c01811ab>] mpage_bio_submit+0x2b/0x40
[<c018176e>] do_mpage_readpage+0x41e/0x500
[<c03a08ca>] apic_timer_interrupt+0x1a/0x20
[<c013dcd0>] add_to_page_cache+0x70/0x100
[<c0181957>] mpage_readpages+0x107/0x1e0
[<c019dc70>] ext3_get_block+0x0/0xb0
[<c0144f53>] read_pages+0x1a3/0x1c0
[<c019dc70>] ext3_get_block+0x0/0xb0
[<c014297f>] __alloc_pages+0xbf/0x370
[<c0145307>] do_page_cache_readahead+0x177/0x1a0
[<c0145402>] page_cache_readahead+0xd2/0x190
[<c013e787>] do_generic_mapping_read+0x227/0x520
[<c010e729>] handle_IRQ_event+0x49/0x80
[<c013ea80>] file_read_actor+0x0/0x120
[<c013edb9>] __generic_file_aio_read+0x219/0x240
[<c013ea80>] file_read_actor+0x0/0x120
[<c013007b>] sys_reboot+0x14b/0x370
[<c013ee16>] generic_file_aio_read+0x36/0x40
[<c015d1f6>] do_sync_read+0xb6/0x100
[<c014ef30>] handle_mm_fault+0x190/0x1f0
[<c0120140>] autoremove_wake_function+0x0/0x50
[<c011c6d8>] do_page_fault+0x1e8/0x4c3
[<c02ea9f0>] i8042_timer_func+0x0/0x40
[<c02eaa27>] i8042_timer_func+0x37/0x40
[<c012adfe>] run_timer_softirq+0xce/0x1d0
[<c012af75>] do_timer+0x65/0xf0
[<c015d33c>] vfs_read+0xfc/0x140
[<c015d6f6>] sys_pread64+0x56/0x80
[<c039ff3b>] syscall_call+0x7/0xb
[<c039007b>] xprt_transmit+0x6b/0x530
* Re: CFQ scheduler leaves task in D state
2003-09-01 19:52 CFQ scheduler leaves task in D state Bongani Hlope
@ 2003-09-02 5:48 ` Jens Axboe
2003-09-02 12:35 ` Jens Axboe
2003-09-02 13:21 ` Jens Axboe
0 siblings, 2 replies; 4+ messages in thread
From: Jens Axboe @ 2003-09-02 5:48 UTC (permalink / raw)
To: Bongani Hlope; +Cc: linux-kernel
On Mon, Sep 01 2003, Bongani Hlope wrote:
> Hi
>
> I tried the CFQ scheduler patch you posted on top of 2.6.0-test4-mm3-1,
> including the O19int patch from Con Kolivas and I got the attached stacktrace.
Thanks Bongani, I reproduced this problem yesterday. I'll post an update
later today. There appears to be a missing last_merge clear somewhere,
odd.
--
Jens Axboe
* Re: CFQ scheduler leaves task in D state
2003-09-02 5:48 ` Jens Axboe
@ 2003-09-02 12:35 ` Jens Axboe
2003-09-02 13:21 ` Jens Axboe
1 sibling, 0 replies; 4+ messages in thread
From: Jens Axboe @ 2003-09-02 12:35 UTC (permalink / raw)
To: Bongani Hlope; +Cc: linux-kernel
On Tue, Sep 02 2003, Jens Axboe wrote:
> On Mon, Sep 01 2003, Bongani Hlope wrote:
> > Hi
> >
> > I tried the CFQ scheduler patch you posted on top of 2.6.0-test4-mm3-1,
> > including the O19int patch from Con Kolivas and I got the attached stacktrace.
>
> Thanks Bongani, I reproduced this problem yesterday. I'll post an update
> later today. There appears to be a missing last_merge clear somewhere,
> odd.
Can you try this as an incremental patch? It cleans a few things up.
===== drivers/block/cfq-iosched.c 1.1 vs edited =====
--- 1.1/drivers/block/cfq-iosched.c Fri Aug 29 10:12:50 2003
+++ edited/drivers/block/cfq-iosched.c Tue Sep 2 14:29:58 2003
@@ -149,15 +149,17 @@
/*
* rb tree support functions
*/
-#define RB_EMPTY(root) ((root)->rb_node == NULL)
-#define RB_CLEAR(root) ((root)->rb_node = NULL)
-#define ON_RB(crq) ((crq)->cfq_queue != NULL)
+#define RB_NONE (2)
+#define RB_EMPTY(node) ((node)->rb_node == NULL)
+#define RB_CLEAR(node) ((node)->rb_color = RB_NONE)
+#define RB_CLEAR_ROOT(root) ((root)->rb_node = NULL)
+#define ON_RB(node) ((node)->rb_color != RB_NONE)
#define rb_entry_crq(node) rb_entry((node), struct cfq_rq, rb_node)
#define rq_rb_key(rq) (rq)->sector
static inline void cfq_del_crq_rb(struct cfq_queue *cfqq, struct cfq_rq *crq)
{
- if (ON_RB(crq)) {
+ if (ON_RB(&crq->rb_node)) {
cfqq->queued[rq_data_dir(crq->request)]--;
rb_erase(&crq->rb_node, &cfqq->sort_list);
crq->cfq_queue = NULL;
@@ -301,7 +303,7 @@
cfq_del_crq_hash(crq);
cfq_add_crq_hash(cfqd, crq);
- if (ON_RB(crq) && rq_rb_key(req) != crq->rb_key) {
+ if (ON_RB(&crq->rb_node) && (rq_rb_key(req) != crq->rb_key)) {
struct cfq_queue *cfqq = crq->cfq_queue;
cfq_del_crq_rb(cfqq, crq);
@@ -345,17 +347,17 @@
}
static inline void
-__cfq_dispatch_requests(struct cfq_data *cfqd, struct cfq_queue *cfqq)
+__cfq_dispatch_requests(request_queue_t *q, struct cfq_data *cfqd,
+ struct cfq_queue *cfqq)
{
- struct cfq_rq *crq;
-
- crq = rb_entry_crq(rb_first(&cfqq->sort_list));
+ struct cfq_rq *crq = rb_entry_crq(rb_first(&cfqq->sort_list));
cfq_del_crq_rb(cfqq, crq);
+ cfq_remove_merge_hints(q, crq);
cfq_dispatch_sort(cfqd->dispatch, crq);
}
-static int cfq_dispatch_requests(struct cfq_data *cfqd)
+static int cfq_dispatch_requests(request_queue_t *q, struct cfq_data *cfqd)
{
struct cfq_queue *cfqq;
struct list_head *entry, *tmp;
@@ -372,7 +374,7 @@
BUG_ON(RB_EMPTY(&cfqq->sort_list));
- __cfq_dispatch_requests(cfqd, cfqq);
+ __cfq_dispatch_requests(q, cfqd, cfqq);
if (RB_EMPTY(&cfqq->sort_list))
cfq_put_queue(cfqd, cfqq);
@@ -399,17 +401,15 @@
dispatch:
rq = list_entry_rq(cfqd->dispatch->next);
- if (q->last_merge == &rq->queuelist)
- q->last_merge = NULL;
-
+ BUG_ON(q->last_merge == &rq->queuelist);
crq = RQ_DATA(rq);
if (crq)
- cfq_del_crq_hash(crq);
+ BUG_ON(ON_MHASH(crq));
return rq;
}
- if (cfq_dispatch_requests(cfqd))
+ if (cfq_dispatch_requests(q, cfqd))
goto dispatch;
return NULL;
@@ -456,7 +456,7 @@
INIT_LIST_HEAD(&cfqq->cfq_hash);
INIT_LIST_HEAD(&cfqq->cfq_list);
- RB_CLEAR(&cfqq->sort_list);
+ RB_CLEAR_ROOT(&cfqq->sort_list);
cfqq->pid = pid;
cfqq->queued[0] = cfqq->queued[1] = 0;
@@ -583,6 +583,7 @@
struct cfq_rq *crq = mempool_alloc(cfqd->crq_pool, gfp_mask);
if (crq) {
+ RB_CLEAR(&crq->rb_node);
crq->request = rq;
crq->cfq_queue = NULL;
INIT_LIST_HEAD(&crq->hash);
--
Jens Axboe
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
* Re: CFQ scheduler leaves task in D state
2003-09-02 5:48 ` Jens Axboe
2003-09-02 12:35 ` Jens Axboe
@ 2003-09-02 13:21 ` Jens Axboe
1 sibling, 0 replies; 4+ messages in thread
From: Jens Axboe @ 2003-09-02 13:21 UTC (permalink / raw)
To: Bongani Hlope; +Cc: linux-kernel
On Tue, Sep 02 2003, Jens Axboe wrote:
> On Mon, Sep 01 2003, Bongani Hlope wrote:
> > Hi
> >
> > I tried the CFQ scheduler patch you posted on top of 2.6.0-test4-mm3-1,
> > including the O19int patch from Con Kolivas and I got the attached stacktrace.
>
> Thanks Bongani, I reproduced this problem yesterday. I'll post an update
> later today. There appears to be a missing last_merge clear somewhere,
> odd.
This should be better, delete the one I sent an hour ago.
===== drivers/block/cfq-iosched.c 1.1 vs edited =====
--- 1.1/drivers/block/cfq-iosched.c Fri Aug 29 10:12:50 2003
+++ edited/drivers/block/cfq-iosched.c Tue Sep 2 15:20:05 2003
@@ -149,15 +149,17 @@
/*
* rb tree support functions
*/
-#define RB_EMPTY(root) ((root)->rb_node == NULL)
-#define RB_CLEAR(root) ((root)->rb_node = NULL)
-#define ON_RB(crq) ((crq)->cfq_queue != NULL)
+#define RB_NONE (2)
+#define RB_EMPTY(node) ((node)->rb_node == NULL)
+#define RB_CLEAR(node) ((node)->rb_color = RB_NONE)
+#define RB_CLEAR_ROOT(root) ((root)->rb_node = NULL)
+#define ON_RB(node) ((node)->rb_color != RB_NONE)
#define rb_entry_crq(node) rb_entry((node), struct cfq_rq, rb_node)
#define rq_rb_key(rq) (rq)->sector
static inline void cfq_del_crq_rb(struct cfq_queue *cfqq, struct cfq_rq *crq)
{
- if (ON_RB(crq)) {
+ if (ON_RB(&crq->rb_node)) {
cfqq->queued[rq_data_dir(crq->request)]--;
rb_erase(&crq->rb_node, &cfqq->sort_list);
crq->cfq_queue = NULL;
@@ -194,13 +196,12 @@
struct cfq_rq *__alias;
crq->rb_key = rq_rb_key(rq);
-
+ cfqq->queued[rq_data_dir(rq)]++;
retry:
__alias = __cfq_add_crq_rb(cfqq, crq);
if (!__alias) {
rb_insert_color(&crq->rb_node, &cfqq->sort_list);
crq->cfq_queue = cfqq;
- cfqq->queued[rq_data_dir(rq)]++;
return;
}
@@ -301,7 +302,7 @@
cfq_del_crq_hash(crq);
cfq_add_crq_hash(cfqd, crq);
- if (ON_RB(crq) && rq_rb_key(req) != crq->rb_key) {
+ if (ON_RB(&crq->rb_node) && (rq_rb_key(req) != crq->rb_key)) {
struct cfq_queue *cfqq = crq->cfq_queue;
cfq_del_crq_rb(cfqq, crq);
@@ -345,17 +346,17 @@
}
static inline void
-__cfq_dispatch_requests(struct cfq_data *cfqd, struct cfq_queue *cfqq)
+__cfq_dispatch_requests(request_queue_t *q, struct cfq_data *cfqd,
+ struct cfq_queue *cfqq)
{
- struct cfq_rq *crq;
-
- crq = rb_entry_crq(rb_first(&cfqq->sort_list));
+ struct cfq_rq *crq = rb_entry_crq(rb_first(&cfqq->sort_list));
cfq_del_crq_rb(cfqq, crq);
+ cfq_remove_merge_hints(q, crq);
cfq_dispatch_sort(cfqd->dispatch, crq);
}
-static int cfq_dispatch_requests(struct cfq_data *cfqd)
+static int cfq_dispatch_requests(request_queue_t *q, struct cfq_data *cfqd)
{
struct cfq_queue *cfqq;
struct list_head *entry, *tmp;
@@ -372,7 +373,7 @@
BUG_ON(RB_EMPTY(&cfqq->sort_list));
- __cfq_dispatch_requests(cfqd, cfqq);
+ __cfq_dispatch_requests(q, cfqd, cfqq);
if (RB_EMPTY(&cfqq->sort_list))
cfq_put_queue(cfqd, cfqq);
@@ -399,17 +400,15 @@
dispatch:
rq = list_entry_rq(cfqd->dispatch->next);
- if (q->last_merge == &rq->queuelist)
- q->last_merge = NULL;
-
+ BUG_ON(q->last_merge == &rq->queuelist);
crq = RQ_DATA(rq);
if (crq)
- cfq_del_crq_hash(crq);
+ BUG_ON(ON_MHASH(crq));
return rq;
}
- if (cfq_dispatch_requests(cfqd))
+ if (cfq_dispatch_requests(q, cfqd))
goto dispatch;
return NULL;
@@ -456,7 +455,7 @@
INIT_LIST_HEAD(&cfqq->cfq_hash);
INIT_LIST_HEAD(&cfqq->cfq_list);
- RB_CLEAR(&cfqq->sort_list);
+ RB_CLEAR_ROOT(&cfqq->sort_list);
cfqq->pid = pid;
cfqq->queued[0] = cfqq->queued[1] = 0;
@@ -552,10 +551,9 @@
if (!cfqq)
goto out;
- if (cfqq->queued[rw] < cfq_queued)
- goto out;
-
limit = (q->nr_requests - cfq_queued) / cfqd->busy_queues;
+ if (limit < 3)
+ limit = 3;
if (cfqq->queued[rw] > limit)
ret = 0;
@@ -583,6 +581,7 @@
struct cfq_rq *crq = mempool_alloc(cfqd->crq_pool, gfp_mask);
if (crq) {
+ RB_CLEAR(&crq->rb_node);
crq->request = rq;
crq->cfq_queue = NULL;
INIT_LIST_HEAD(&crq->hash);
--
Jens Axboe
end of thread, other threads:[~2003-09-02 13:23 UTC | newest]
Thread overview: 4+ messages
2003-09-01 19:52 CFQ scheduler leaves task in D state Bongani Hlope
2003-09-02 5:48 ` Jens Axboe
2003-09-02 12:35 ` Jens Axboe
2003-09-02 13:21 ` Jens Axboe