* Re: [patch] voluntary-preempt-2.6.9-rc3-mm3-T3
@ 2004-10-08 17:07 Mark_H_Johnson
From: Mark_H_Johnson @ 2004-10-08 17:07 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Lee Revell, K.R. Foley, Rui Nuno Capela,
	Florian Schmidt, Fernando Pablo Lopez-Lezcano

>i've released the -T3 VP patch:

I ran another series of tests, this time without using threaded IRQs.
Both /proc/sys/kernel/softirq_preemption and
/proc/sys/kernel/hardirq_preemption were zero.

The results were somewhat similar to what I saw yesterday when both
types of IRQs were threaded. The number of latencies > 200 usec was
higher:
  threaded IRQs -  47
unthreaded IRQs - 128

However, the application-level overhead (as measured by latencytest)
appears to be lower without threaded IRQs than with them. For example,
during the X11 stress test:

                            CPU task duration   samples within 0.1 msec
nominal duration:                1.16 msec          n/a
max with threaded IRQs:          1.38 msec        99.97%
max with unthreaded IRQs:        1.25 msec       100.00%

The green line in the chart is MUCH thinner in the unthreaded test as
well, indicating much less overhead with unthreaded IRQs. Please note
that in both tests the audio IRQ was not threaded (all others were...).

This trend continued until the disk tests were performed. In those
cases the threaded overhead was lower, but I believe that is because
the audio IRQ was set to be non-threaded.

The types of latency traces I saw are summarized as follows. Some
details are at the end; if anyone wants full traces, please let me
know. The numbers under each symptom are trace numbers.

[1] rt_check_expire - hundreds of trace entries of _spin_lock and
_spin_unlock with the preempt count > 1
00 02 78 82 87 92 101 110 126

[2] do_wait - the preempt count bounces up and down for many cycles
01

[3] do_IRQ - appears to be chaining of hard and soft IRQs without any
opportunity for preemption; a timer tick may be mixed in as well. This
symptom starts in a few different ways. Most traces are like one of these.
03... 37 39... 73 76 79 81 83... 86 88 89 94 96 97 99 100 102 103 106 107
109 111 114 119 122 123

[4] rt_run_flush - a VERY long trace (> 4000 samples)
38 106

[5] rcu / cache actions
74 75 77 90 95 113 124

[6] prune_icache - an interrupt causes some delays
80 91 98

[7] clear_page_tables - also seen without threaded IRQs
92 108 112 115 ... 118 120 121 125 127

[8] avc_insert - a long delay at one step...
105

  --Mark

[1] rt_check_expire

The following is typical of this kind of trace.

preemption latency trace v1.0.7 on 2.6.9-rc3-mm3-VP-T3
-------------------------------------------------------
 latency: 295 us, entries: 882 (882)   |   [VP:1 KP:1 SP:0 HP:0 #CPUS:2]
    -----------------
    | task: X/2815, uid:0 nice:0 policy:0 rt_prio:0
    -----------------
 => started at: smp_apic_timer_interrupt+0x43/0xf0
 => ended at:   irq_exit+0x33/0x50
=======>
00010000 0.000ms (+0.000ms): smp_apic_timer_interrupt (apic_timer_interrupt)
00010000 0.000ms (+0.000ms): profile_tick (smp_apic_timer_interrupt)
00010000 0.000ms (+0.000ms): profile_hook (profile_tick)
00010000 0.000ms (+0.000ms): _read_lock (profile_hook)
00010001 0.001ms (+0.000ms): notifier_call_chain (profile_hook)
00010001 0.001ms (+0.000ms): _read_unlock (profile_tick)
00010000 0.001ms (+0.000ms): update_process_times (smp_apic_timer_interrupt)
00010000 0.001ms (+0.000ms): update_one_process (update_process_times)
00010000 0.002ms (+0.000ms): run_local_timers (update_process_times)
00010000 0.002ms (+0.000ms): raise_softirq (update_process_times)
00010000 0.002ms (+0.000ms): scheduler_tick (update_process_times)
00010000 0.002ms (+0.000ms): sched_clock (scheduler_tick)
00010000 0.003ms (+0.000ms): _spin_lock (scheduler_tick)
00010001 0.003ms (+0.000ms): _spin_unlock (scheduler_tick)
00010000 0.003ms (+0.000ms): rebalance_tick (scheduler_tick)
00010000 0.004ms (+0.000ms): irq_exit (smp_apic_timer_interrupt)
00000001 0.004ms (+0.000ms): do_softirq (irq_exit)
00000001 0.004ms (+0.000ms): __do_softirq (do_softirq)
00000101 0.004ms (+0.000ms): ___do_softirq (__do_softirq)
00000101 0.005ms (+0.000ms): run_timer_softirq (___do_softirq)
00000101 0.005ms (+0.000ms): _spin_lock_irq (run_timer_softirq)
00000101 0.005ms (+0.000ms): _spin_lock_irqsave (run_timer_softirq)
00000102 0.006ms (+0.000ms): _spin_unlock_irq (run_timer_softirq)
00000101 0.006ms (+0.000ms): peer_check_expire (run_timer_softirq)
00000101 0.007ms (+0.000ms): cleanup_once (peer_check_expire)
00000101 0.007ms (+0.000ms): _spin_lock_bh (cleanup_once)
00000101 0.008ms (+0.000ms): _spin_lock_irqsave (_spin_lock_bh)
00000202 0.008ms (+0.000ms): _spin_unlock_bh (cleanup_once)
00000201 0.008ms (+0.000ms): local_bh_enable (cleanup_once)
00000101 0.009ms (+0.000ms): __mod_timer (peer_check_expire)
00000101 0.009ms (+0.000ms): _spin_lock_irqsave (__mod_timer)
00000102 0.010ms (+0.000ms): _spin_lock (__mod_timer)
00000103 0.010ms (+0.000ms): internal_add_timer (__mod_timer)
00000103 0.011ms (+0.000ms): _spin_unlock (__mod_timer)
00000102 0.011ms (+0.000ms): _spin_unlock_irqrestore (__mod_timer)
00000101 0.012ms (+0.000ms): cond_resched_all (run_timer_softirq)
00000101 0.012ms (+0.000ms): cond_resched_softirq (run_timer_softirq)
00000101 0.012ms (+0.000ms): _spin_lock_irq (run_timer_softirq)
00000101 0.012ms (+0.000ms): _spin_lock_irqsave (run_timer_softirq)
00000102 0.013ms (+0.000ms): _spin_unlock_irq (run_timer_softirq)
00000101 0.013ms (+0.000ms): rt_check_expire (run_timer_softirq)
00000101 0.014ms (+0.000ms): _spin_lock (rt_check_expire)
00000102 0.014ms (+0.000ms): _spin_unlock (rt_check_expire)
00000101 0.015ms (+0.000ms): _spin_lock (rt_check_expire)
00000102 0.015ms (+0.000ms): _spin_unlock (rt_check_expire)
... MANY repetitions ...
00000101 0.151ms (+0.000ms): _spin_lock (rt_check_expire)
00000102 0.152ms (+0.000ms): _spin_unlock (rt_check_expire)
00000101 0.152ms (+0.001ms): _spin_lock (rt_check_expire)
00000102 0.153ms (+0.000ms): rt_may_expire (rt_check_expire)
00000102 0.153ms (+0.000ms): _spin_unlock (rt_check_expire)
00000101 0.154ms (+0.000ms): _spin_lock (rt_check_expire)
00000102 0.154ms (+0.000ms): _spin_unlock (rt_check_expire)
... MANY more repetitions ...
00000101 0.289ms (+0.000ms): _spin_lock (rt_check_expire)
00000102 0.289ms (+0.000ms): _spin_unlock (rt_check_expire)
00000101 0.290ms (+0.000ms): mod_timer (rt_check_expire)
00000101 0.290ms (+0.000ms): __mod_timer (rt_check_expire)
00000101 0.290ms (+0.000ms): _spin_lock_irqsave (__mod_timer)
00000102 0.291ms (+0.000ms): _spin_lock (__mod_timer)
00000103 0.291ms (+0.000ms): internal_add_timer (__mod_timer)
00000103 0.291ms (+0.000ms): _spin_unlock (__mod_timer)
00000102 0.292ms (+0.000ms): _spin_unlock_irqrestore (__mod_timer)
00000101 0.292ms (+0.000ms): cond_resched_all (run_timer_softirq)
00000101 0.292ms (+0.000ms): cond_resched_softirq (run_timer_softirq)
00000101 0.292ms (+0.000ms): _spin_lock_irq (run_timer_softirq)
00000101 0.292ms (+0.000ms): _spin_lock_irqsave (run_timer_softirq)
00000102 0.293ms (+0.000ms): _spin_unlock_irq (run_timer_softirq)
00000101 0.293ms (+0.000ms): __wake_up (run_timer_softirq)
00000101 0.293ms (+0.000ms): _spin_lock_irqsave (__wake_up)
00000102 0.293ms (+0.000ms): __wake_up_common (__wake_up)
00000102 0.294ms (+0.000ms): _spin_unlock_irqrestore (run_timer_softirq)
00000101 0.294ms (+0.000ms): cond_resched_all (___do_softirq)
00000101 0.294ms (+0.000ms): cond_resched_softirq (___do_softirq)
00000001 0.295ms (+0.000ms): sub_preempt_count (irq_exit)
00000001 0.295ms (+0.000ms): update_max_trace (check_preempt_timing)
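
For reference, the loop shape that produces this pattern: rt_check_expire()
runs from the timer softirq and scans the route cache hash table, taking
and dropping each bucket's spin lock in turn. A minimal sketch of that
shape (my reconstruction from the trace, not the literal 2.6.9
net/ipv4/route.c source):

/* sketch: the per-bucket lock/unlock cycle seen hundreds of times above */
static void rt_check_expire_sketch(void)
{
	unsigned long tmo = ip_rt_gc_timeout;
	struct rtable *rth, *next;
	int i;

	for (i = 0; i <= rt_hash_mask; i++) {
		spin_lock(&rt_hash_table[i].lock);	/* preempt count 1 -> 2 */
		for (rth = rt_hash_table[i].chain; rth; rth = next) {
			next = rth->u.rt_next;
			if (rt_may_expire(rth, tmo, tmo))
				rt_free(rth);		/* unlink and free the entry */
		}
		spin_unlock(&rt_hash_table[i].lock);	/* preempt count 2 -> 1 */
		/* no cond_resched_softirq() between buckets, so the whole
		 * scan runs without a preemption opportunity */
	}
	mod_timer(&rt_periodic_timer, jiffies + ip_rt_gc_interval);
}

The count never drops below 1 because we are still inside
run_timer_softirq(), which is why the 295 usec accumulate despite all
the unlocks.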

[2] do_wait

Another example, with the preempt count cycling up and down from 1 to 2
and back many times ...

preemption latency trace v1.0.7 on 2.6.9-rc3-mm3-VP-T3
-------------------------------------------------------
 latency: 228 us, entries: 572 (572)   |   [VP:1 KP:1 SP:0 HP:0 #CPUS:2]
    -----------------
    | task: init/1, uid:0 nice:0 policy:0 rt_prio:0
    -----------------
 => started at: _read_lock+0x1b/0x80
 => ended at:   _read_unlock+0x1a/0x40
=======>
00000001 0.000ms (+0.000ms): _read_lock (do_wait)
00000001 0.000ms (+0.001ms): eligible_child (do_wait)
00000001 0.002ms (+0.000ms): selinux_task_wait (eligible_child)
00000001 0.003ms (+0.000ms): task_has_perm (selinux_task_wait)
00000001 0.004ms (+0.000ms): avc_has_perm (task_has_perm)
00000001 0.004ms (+0.000ms): avc_has_perm_noaudit (avc_has_perm)
00000001 0.004ms (+0.000ms): _spin_lock_irqsave (avc_has_perm_noaudit)
00000002 0.005ms (+0.000ms): avc_lookup (avc_has_perm_noaudit)
00000002 0.006ms (+0.000ms): _spin_unlock_irqrestore (avc_has_perm_noaudit)
00000001 0.007ms (+0.000ms): security_compute_av (avc_has_perm_noaudit)
00000001 0.007ms (+0.000ms): _spin_lock_irqsave (avc_has_perm_noaudit)
00000002 0.007ms (+0.000ms): avc_insert (avc_has_perm_noaudit)
00000002 0.008ms (+0.000ms): memcpy (avc_has_perm_noaudit)
00000002 0.008ms (+0.000ms): _spin_unlock_irqrestore (avc_has_perm_noaudit)
00000001 0.009ms (+0.000ms): avc_audit (avc_has_perm)
00000001 0.009ms (+0.000ms): eligible_child (do_wait)
00000001 0.010ms (+0.000ms): selinux_task_wait (eligible_child)
00000001 0.010ms (+0.000ms): task_has_perm (selinux_task_wait)
00000001 0.011ms (+0.000ms): avc_has_perm (task_has_perm)
00000001 0.011ms (+0.000ms): avc_has_perm_noaudit (avc_has_perm)
00000001 0.012ms (+0.000ms): _spin_lock_irqsave (avc_has_perm_noaudit)
...
00000001 0.221ms (+0.000ms): _spin_lock_irqsave (avc_has_perm_noaudit)
00000002 0.221ms (+0.000ms): memcpy (avc_has_perm_noaudit)
00000002 0.221ms (+0.000ms): _spin_unlock_irqrestore (avc_has_perm_noaudit)
00000001 0.222ms (+0.000ms): avc_audit (avc_has_perm)
00000001 0.222ms (+0.000ms): eligible_child (do_wait)
00000001 0.222ms (+0.000ms): selinux_task_wait (eligible_child)
00000001 0.223ms (+0.000ms): task_has_perm (selinux_task_wait)
00000001 0.223ms (+0.000ms): avc_has_perm (task_has_perm)
00000001 0.223ms (+0.000ms): avc_has_perm_noaudit (avc_has_perm)
00000001 0.224ms (+0.000ms): _spin_lock_irqsave (avc_has_perm_noaudit)
00000002 0.224ms (+0.000ms): memcpy (avc_has_perm_noaudit)
00000002 0.224ms (+0.000ms): _spin_unlock_irqrestore (avc_has_perm_noaudit)
00000001 0.224ms (+0.000ms): avc_audit (avc_has_perm)
00000001 0.225ms (+0.000ms): wait_task_zombie (do_wait)
00000001 0.226ms (+0.000ms): _spin_lock_irq (wait_task_zombie)
00000001 0.226ms (+0.000ms): _spin_lock_irqsave (wait_task_zombie)
00000002 0.227ms (+0.000ms): _spin_unlock_irq (wait_task_zombie)
00000001 0.227ms (+0.000ms): _read_unlock (wait_task_zombie)
00000001 0.228ms (+0.000ms): sub_preempt_count (_read_unlock)
00000001 0.228ms (+0.000ms): update_max_trace (check_preempt_timing)
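
The shape behind this one, as I read it: do_wait() holds
read_lock(&tasklist_lock) while it walks the children list, and every
child goes through the SELinux hook, which takes and drops the AVC spin
lock once per child. A rough sketch (reconstruction of the scan, not
the literal source; the return-value handling is simplified):

/* sketch: why the preempt count bounces 1 -> 2 -> 1 once per child */
struct task_struct *p;
struct list_head *_p;
int ret;

read_lock(&tasklist_lock);			/* preempt count -> 1 */
list_for_each(_p, &current->children) {
	p = list_entry(_p, struct task_struct, sibling);
	ret = eligible_child(pid, options, p);
	/* eligible_child() -> selinux_task_wait() -> avc_has_perm():
	 * _spin_lock_irqsave(&avc_lock) raises the count to 2 and
	 * _spin_unlock_irqrestore() drops it back -- once per child */
	if (ret)
		break;		/* reap it, e.g. wait_task_zombie() */
}
read_unlock(&tasklist_lock);

So the read_lock section grows linearly with the number of children;
the 228 usec here is almost entirely repeated AVC checks.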

[3] do_IRQ

Several traces look similar to those I reported previously, just with
some additional latency at the start and end. For example:


preemption latency trace v1.0.7 on 2.6.9-rc3-mm3-VP-T3
-------------------------------------------------------
 latency: 450 us, entries: 898 (898)   |   [VP:1 KP:1 SP:0 HP:0 #CPUS:2]
    -----------------
    | task: cpu_burn/13771, uid:0 nice:10 policy:0 rt_prio:0
    -----------------
 => started at: do_IRQ+0x19/0x90
 => ended at:   irq_exit+0x33/0x50
=======>
00010000 0.000ms (+0.000ms): do_IRQ (common_interrupt)
00010000 0.000ms (+0.000ms): do_IRQ (<08048340>)
00010000 0.000ms (+0.000ms): do_IRQ (<0000000b>)
00010000 0.000ms (+0.000ms): _spin_lock (__do_IRQ)
00010001 0.000ms (+0.000ms): mask_and_ack_level_ioapic_irq (__do_IRQ)
...
00020000 0.041ms (+0.000ms): irq_exit (do_IRQ)
00010000 0.041ms (+0.000ms): rtl8139_interrupt (handle_IRQ_event)
00010000 0.041ms (+0.001ms): _spin_lock (rtl8139_interrupt)
00010001 0.042ms (+0.000ms): _spin_unlock (rtl8139_interrupt)
00010000 0.043ms (+0.000ms): _spin_lock (__do_IRQ)
00010001 0.043ms (+0.000ms): note_interrupt (__do_IRQ)
00010001 0.043ms (+0.000ms): end_level_ioapic_irq (__do_IRQ)
00010001 0.043ms (+0.000ms): unmask_IO_APIC_irq (__do_IRQ)
00010001 0.044ms (+0.000ms): _spin_lock_irqsave (unmask_IO_APIC_irq)
00010002 0.044ms (+0.000ms): __unmask_IO_APIC_irq (unmask_IO_APIC_irq)
00010002 0.044ms (+0.013ms): __modify_IO_APIC_irq (__unmask_IO_APIC_irq)
00010002 0.058ms (+0.000ms): _spin_unlock_irqrestore (__do_IRQ)
00010001 0.058ms (+0.000ms): _spin_unlock (__do_IRQ)
00010000 0.058ms (+0.000ms): irq_exit (do_IRQ)
00000001 0.059ms (+0.000ms): do_softirq (irq_exit)
00000001 0.059ms (+0.000ms): __do_softirq (do_softirq)
00000101 0.059ms (+0.000ms): ___do_softirq (__do_softirq)
00000101 0.059ms (+0.000ms): net_rx_action (___do_softirq)
... if I read this right, 60 usec was taken prior to getting to
the soft IRQ w/o any preemption opportunities ...
... many more traces ...
00010101 0.427ms (+0.000ms): rtl8139_interrupt (handle_IRQ_event)
00010101 0.427ms (+0.001ms): _spin_lock (rtl8139_interrupt)
00010102 0.429ms (+0.000ms): rtl8139_tx_interrupt (rtl8139_interrupt)
00010102 0.430ms (+0.000ms): _spin_unlock (rtl8139_interrupt)
00010101 0.430ms (+0.000ms): preempt_schedule (rtl8139_interrupt)
00010101 0.431ms (+0.000ms): _spin_lock (__do_IRQ)
00010102 0.431ms (+0.000ms): note_interrupt (__do_IRQ)
00010102 0.431ms (+0.000ms): end_level_ioapic_irq (__do_IRQ)
00010102 0.431ms (+0.000ms): unmask_IO_APIC_irq (__do_IRQ)
00010102 0.432ms (+0.000ms): _spin_lock_irqsave (unmask_IO_APIC_irq)
00010103 0.432ms (+0.000ms): __unmask_IO_APIC_irq (unmask_IO_APIC_irq)
00010103 0.432ms (+0.013ms): __modify_IO_APIC_irq (__unmask_IO_APIC_irq)
00010103 0.446ms (+0.000ms): _spin_unlock_irqrestore (__do_IRQ)
00010102 0.446ms (+0.000ms): preempt_schedule (__do_IRQ)
00010102 0.447ms (+0.000ms): _spin_unlock (__do_IRQ)
00010101 0.447ms (+0.000ms): preempt_schedule (__do_IRQ)
00010101 0.447ms (+0.000ms): irq_exit (do_IRQ)
00000101 0.448ms (+0.000ms): __wake_up (run_timer_softirq)
00000101 0.448ms (+0.000ms): _spin_lock_irqsave (__wake_up)
00000102 0.448ms (+0.000ms): __wake_up_common (__wake_up)
00000102 0.448ms (+0.000ms): _spin_unlock_irqrestore (run_timer_softirq)
00000101 0.449ms (+0.000ms): preempt_schedule (run_timer_softirq)
00000101 0.449ms (+0.000ms): cond_resched_all (___do_softirq)
00000101 0.449ms (+0.001ms): cond_resched_softirq (___do_softirq)
00000001 0.450ms (+0.000ms): sub_preempt_count (irq_exit)
00000001 0.450ms (+0.000ms): update_max_trace (check_preempt_timing)
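
The common structure in these traces, as far as I can tell: the hard
IRQ returns through irq_exit(), which immediately runs any pending
softirqs, and further hard IRQs (the NIC, the timer tick) land on top
of the softirq processing. A sketch of that exit path (my
simplification; next_pending_handler() is a made-up stand-in for the
pending-bit walk):

/* sketch: hardirq tail runs softirqs at once -- no preemption point */
void irq_exit_sketch(void)
{
	sub_preempt_count(IRQ_EXIT_OFFSET);
	if (!in_interrupt() && local_softirq_pending())
		do_softirq();		/* raises the softirq count:
					 * still not preemptible */
}

/* sketch of the ___do_softirq() loop running handlers back to back */
do {
	struct softirq_action *h = next_pending_handler(); /* made up */
	h->action(h);		/* run_timer_softirq, net_rx_action, ... */
	/* a new hard IRQ (rtl8139_interrupt above) can arrive here and
	 * nest on top, extending the non-preemptible region */
	cond_resched_softirq();	/* the VP preemption point; evidently
				 * not enough to break up these chains */
} while (local_softirq_pending());

That matches the ~60 usec of hard-IRQ work before the softirq even
starts, and the nesting visible in the preempt-count column.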

[not quite sure if this is the same symptom, but it has many of the
same features as above...]

preemption latency trace v1.0.7 on 2.6.9-rc3-mm3-VP-T3
-------------------------------------------------------
 latency: 507 us, entries: 609 (609)   |   [VP:1 KP:1 SP:0 HP:0 #CPUS:2]
    -----------------
    | task: rcp/13989, uid:2711 nice:0 policy:0 rt_prio:0
    -----------------
 => started at: _spin_lock_irqsave+0x1f/0x80
 => ended at:   local_bh_enable+0x3f/0xb0

[16 - ditto, providing some detail]

preemption latency trace v1.0.7 on 2.6.9-rc3-mm3-VP-T3
-------------------------------------------------------
 latency: 474 us, entries: 919 (919)   |   [VP:1 KP:1 SP:0 HP:0 #CPUS:2]
    -----------------
    | task: klogd/1953, uid:0 nice:0 policy:0 rt_prio:0
    -----------------
 => started at: schedule+0x43/0x810
 => ended at:   schedule+0x412/0x810
=======>
00000001 0.000ms (+0.000ms): schedule (work_resched)
00000001 0.000ms (+0.000ms): sched_clock (schedule)
00000001 0.001ms (+0.000ms): _spin_lock_irq (schedule)
00000001 0.001ms (+0.001ms): _spin_lock_irqsave (schedule)
00000002 0.002ms (+0.000ms): dequeue_task (schedule)
00000002 0.003ms (+0.000ms): recalc_task_prio (schedule)
00000002 0.003ms (+0.000ms): effective_prio (recalc_task_prio)
00000002 0.003ms (+0.005ms): enqueue_task (schedule)
00000002 0.008ms (+0.001ms): __switch_to (schedule)
00000002 0.010ms (+0.000ms): finish_task_switch (schedule)
00000002 0.010ms (+0.001ms): _spin_unlock_irq (finish_task_switch)
00000002 0.011ms (+0.000ms): smp_apic_timer_interrupt (_spin_unlock_irq)
00010002 0.012ms (+0.000ms): profile_tick (smp_apic_timer_interrupt)
00010002 0.013ms (+0.000ms): profile_hook (profile_tick)
00010002 0.013ms (+0.001ms): _read_lock (profile_hook)
00010003 0.014ms (+0.000ms): notifier_call_chain (profile_hook)
00010003 0.015ms (+0.000ms): _read_unlock (profile_tick)
00010002 0.015ms (+0.000ms): profile_hit (smp_apic_timer_interrupt)
00010002 0.016ms (+0.000ms): update_process_times (smp_apic_timer_interrupt)
00010002 0.017ms (+0.001ms): update_one_process (update_process_times)
00010002 0.018ms (+0.000ms): run_local_timers (update_process_times)
00010002 0.018ms (+0.000ms): raise_softirq (update_process_times)
00010002 0.019ms (+0.000ms): scheduler_tick (update_process_times)
00010002 0.019ms (+0.002ms): sched_clock (scheduler_tick)
00010002 0.021ms (+0.000ms): _spin_lock (scheduler_tick)
00010003 0.022ms (+0.000ms): task_timeslice (scheduler_tick)
00010003 0.022ms (+0.001ms): __bitmap_weight (scheduler_tick)
00010003 0.024ms (+0.000ms): __bitmap_weight (scheduler_tick)
00010003 0.024ms (+0.000ms): dequeue_task (scheduler_tick)
00010003 0.024ms (+0.000ms): effective_prio (scheduler_tick)
00010003 0.025ms (+0.000ms): enqueue_task (scheduler_tick)
00010003 0.025ms (+0.000ms): _spin_unlock (scheduler_tick)
00010002 0.025ms (+0.000ms): preempt_schedule (scheduler_tick)
00010002 0.025ms (+0.000ms): rebalance_tick (scheduler_tick)
00010002 0.026ms (+0.000ms): irq_exit (smp_apic_timer_interrupt)
00000003 0.026ms (+0.000ms): do_softirq (irq_exit)
00000003 0.027ms (+0.000ms): __do_softirq (do_softirq)
00000103 0.027ms (+0.000ms): ___do_softirq (__do_softirq)
00010103 0.028ms (+0.000ms): do_IRQ (___do_softirq)
00010103 0.028ms (+0.000ms): do_IRQ (<0000000b>)
00010103 0.029ms (+0.000ms): _spin_lock (__do_IRQ)
00010104 0.029ms (+0.000ms): mask_and_ack_level_ioapic_irq (__do_IRQ)
00010104 0.029ms (+0.000ms): mask_IO_APIC_irq (mask_and_ack_level_ioapic_irq)
00010104 0.030ms (+0.000ms): _spin_lock_irqsave (mask_IO_APIC_irq)
00010105 0.030ms (+0.000ms): __mask_IO_APIC_irq (mask_IO_APIC_irq)
00010105 0.030ms (+0.014ms): __modify_IO_APIC_irq (__mask_IO_APIC_irq)
00010105 0.045ms (+0.000ms): _spin_unlock_irqrestore (mask_and_ack_level_ioapic_irq)
00010104 0.045ms (+0.000ms): preempt_schedule (mask_and_ack_level_ioapic_irq)
00010104 0.045ms (+0.000ms): redirect_hardirq (__do_IRQ)
00010104 0.045ms (+0.000ms): _spin_unlock (__do_IRQ)
00010103 0.046ms (+0.000ms): preempt_schedule (__do_IRQ)
00010103 0.046ms (+0.000ms): handle_IRQ_event (__do_IRQ)
00010103 0.047ms (+0.000ms): rtl8139_interrupt (handle_IRQ_event)
... pretty deep nesting ...
00000109 0.464ms (+0.000ms): activate_task (try_to_wake_up)
00000109 0.465ms (+0.000ms): sched_clock (activate_task)
00000109 0.465ms (+0.000ms): recalc_task_prio (activate_task)
00000109 0.465ms (+0.000ms): effective_prio (recalc_task_prio)
00000109 0.466ms (+0.000ms): enqueue_task (activate_task)
00000109 0.466ms (+0.000ms): _spin_unlock_irqrestore (try_to_wake_up)
00000108 0.466ms (+0.000ms): preempt_schedule (try_to_wake_up)
00000108 0.466ms (+0.000ms): _spin_unlock_irqrestore (sk_stream_write_space)
00000107 0.467ms (+0.000ms): preempt_schedule (sk_stream_write_space)
00000107 0.467ms (+0.000ms): _spin_unlock (tcp_v4_rcv)
00000106 0.467ms (+0.000ms): preempt_schedule (tcp_v4_rcv)
00000105 0.468ms (+0.000ms): preempt_schedule (ip_local_deliver)
00000104 0.468ms (+0.000ms): preempt_schedule (netif_receive_skb)
00000104 0.469ms (+0.001ms): rtl8139_isr_ack (rtl8139_rx)
00000104 0.470ms (+0.000ms): _spin_unlock (rtl8139_poll)
00000103 0.471ms (+0.000ms): preempt_schedule (rtl8139_poll)
00000103 0.471ms (+0.000ms): cond_resched_all (___do_softirq)
00000103 0.471ms (+0.001ms): cond_resched_softirq (___do_softirq)
00000001 0.473ms (+0.001ms): preempt_schedule (finish_task_switch)
00000001 0.474ms (+0.000ms): sub_preempt_count (schedule)
00000001 0.474ms (+0.000ms): update_max_trace (check_preempt_timing)

[79 - ditto, different way to get started]

preemption latency trace v1.0.7 on 2.6.9-rc3-mm3-VP-T3
-------------------------------------------------------
 latency: 209 us, entries: 275 (275)   |   [VP:1 KP:1 SP:0 HP:0 #CPUS:2]
    -----------------
    | task: sleep/15758, uid:0 nice:0 policy:0 rt_prio:0
    -----------------
 => started at: unmap_vmas+0x12e/0x280
 => ended at:   _spin_unlock+0x2d/0x60


[4] rt_run_flush

The trace buffer filled up before this completed, so I don't have the
sequence that stopped the trace.

preemption latency trace v1.0.7 on 2.6.9-rc3-mm3-VP-T3
-------------------------------------------------------
 latency: 2586 us, entries: 4000 (12487)   |   [VP:1 KP:1 SP:0 HP:0 #CPUS:2]
    -----------------
    | task: cpu_burn/13771, uid:0 nice:10 policy:0 rt_prio:0
    -----------------
 => started at: smp_apic_timer_interrupt+0x43/0xf0
 => ended at:   irq_exit+0x33/0x50
=======>
00010000 0.000ms (+0.000ms): smp_apic_timer_interrupt (apic_timer_interrupt)
00010000 0.000ms (+0.000ms): profile_tick (smp_apic_timer_interrupt)
00010000 0.000ms (+0.000ms): profile_hook (profile_tick)
00010000 0.000ms (+0.000ms): _read_lock (profile_hook)
00010001 0.001ms (+0.000ms): notifier_call_chain (profile_hook)
00010001 0.001ms (+0.000ms): _read_unlock (profile_tick)
00010000 0.001ms (+0.000ms): update_process_times (smp_apic_timer_interrupt)
00010000 0.001ms (+0.000ms): update_one_process (update_process_times)
00010000 0.002ms (+0.000ms): run_local_timers (update_process_times)
00010000 0.002ms (+0.000ms): raise_softirq (update_process_times)
00010000 0.002ms (+0.000ms): scheduler_tick (update_process_times)
00010000 0.002ms (+0.000ms): sched_clock (scheduler_tick)
00010000 0.003ms (+0.000ms): _spin_lock (scheduler_tick)
00010001 0.003ms (+0.000ms): _spin_unlock (scheduler_tick)
00010000 0.003ms (+0.000ms): rebalance_tick (scheduler_tick)
00010000 0.004ms (+0.000ms): irq_exit (smp_apic_timer_interrupt)
00000001 0.004ms (+0.000ms): do_softirq (irq_exit)
00000001 0.004ms (+0.000ms): __do_softirq (do_softirq)
00000101 0.004ms (+0.000ms): ___do_softirq (__do_softirq)
00000101 0.005ms (+0.000ms): run_timer_softirq (___do_softirq)
00000101 0.005ms (+0.000ms): _spin_lock_irq (run_timer_softirq)
00000101 0.005ms (+0.000ms): _spin_lock_irqsave (run_timer_softirq)
00000102 0.005ms (+0.000ms): _spin_unlock_irq (run_timer_softirq)
00000101 0.006ms (+0.000ms): rt_secret_rebuild (run_timer_softirq)
00000101 0.006ms (+0.000ms): rt_cache_flush (rt_secret_rebuild)
00000101 0.007ms (+0.000ms): _spin_lock_bh (rt_cache_flush)
00000101 0.007ms (+0.000ms): _spin_lock_irqsave (_spin_lock_bh)
00000202 0.007ms (+0.000ms): del_timer (rt_cache_flush)
00000202 0.008ms (+0.000ms): _spin_unlock_bh (rt_cache_flush)
00000201 0.008ms (+0.000ms): local_bh_enable (rt_cache_flush)
00000101 0.009ms (+0.000ms): rt_run_flush (rt_secret_rebuild)
00000101 0.009ms (+0.000ms): get_random_bytes (rt_run_flush)
00000101 0.009ms (+0.000ms): extract_entropy (get_random_bytes)
00000101 0.009ms (+0.000ms): _spin_lock_irqsave (extract_entropy)
00000102 0.010ms (+0.000ms): __wake_up (extract_entropy)
00000102 0.010ms (+0.000ms): _spin_lock_irqsave (__wake_up)
00000103 0.010ms (+0.000ms): __wake_up_common (__wake_up)
00000103 0.010ms (+0.000ms): _spin_unlock_irqrestore (extract_entropy)
00000102 0.011ms (+0.000ms): _spin_unlock_irqrestore (extract_entropy)
00000101 0.011ms (+0.000ms): SHATransform (extract_entropy)
00000101 0.011ms (+0.002ms): memcpy (SHATransform)
00000101 0.014ms (+0.000ms): add_entropy_words (extract_entropy)
00000101 0.014ms (+0.000ms): _spin_lock_irqsave (add_entropy_words)
00000102 0.014ms (+0.000ms): _spin_unlock_irqrestore (extract_entropy)
00000101 0.015ms (+0.000ms): SHATransform (extract_entropy)
00000101 0.015ms (+0.002ms): memcpy (SHATransform)
00000101 0.018ms (+0.000ms): add_entropy_words (extract_entropy)
00000101 0.018ms (+0.000ms): _spin_lock_irqsave (add_entropy_words)
00000102 0.018ms (+0.000ms): _spin_unlock_irqrestore (extract_entropy)
00000101 0.019ms (+0.000ms): _spin_lock_bh (rt_run_flush)
00000101 0.019ms (+0.000ms): _spin_lock_irqsave (_spin_lock_bh)
00000202 0.020ms (+0.000ms): _spin_unlock_bh (rt_run_flush)
00000201 0.020ms (+0.000ms): local_bh_enable (rt_run_flush)
00000101 0.020ms (+0.000ms): cond_resched_all (rt_run_flush)
00000101 0.020ms (+0.000ms): cond_resched_softirq (rt_run_flush)
00000101 0.021ms (+0.000ms): _spin_lock_bh (rt_run_flush)
00000101 0.021ms (+0.000ms): _spin_lock_irqsave (_spin_lock_bh)
00000202 0.021ms (+0.000ms): _spin_unlock_bh (rt_run_flush)
00000201 0.021ms (+0.000ms): local_bh_enable (rt_run_flush)
... this kind of cycle repeats for a VERY long time, and the watchdog
provides this piece of information ...
00000101 0.769ms (+0.000ms): _spin_lock_irqsave (_spin_lock_bh)
00000202 0.770ms (+0.000ms): _spin_unlock_bh (rt_run_flush)
00000201 0.770ms (+0.000ms): local_bh_enable (rt_run_flush)
00000101 0.770ms (+0.000ms): cond_resched_all (rt_run_flush)
00000101 0.771ms (+0.000ms): do_nmi (___trace)
00010101 0.771ms (+0.002ms): do_nmi (<08049b20>)
00010101 0.773ms (+0.000ms): notifier_call_chain (default_do_nmi)
00010101 0.773ms (+464353.452ms): nmi_watchdog_tick (default_do_nmi)
00000002 464354.226ms (+12313.439ms): _spin_unlock (release_console_sem)
00010101 0.774ms (+0.000ms): do_IRQ (___trace)
00010101 0.774ms (+0.000ms): do_IRQ (<00000000>)
00010101 0.774ms (+0.000ms): _spin_lock (__do_IRQ)
00010102 0.775ms (+0.000ms): ack_edge_ioapic_irq (__do_IRQ)
00010102 0.775ms (+0.000ms): redirect_hardirq (__do_IRQ)
00010102 0.775ms (+0.000ms): _spin_unlock (__do_IRQ)
00010101 0.776ms (+0.000ms): handle_IRQ_event (__do_IRQ)
00010101 0.776ms (+0.000ms): timer_interrupt (handle_IRQ_event)
00010101 0.776ms (+0.000ms): _spin_lock (timer_interrupt)
00010102 0.776ms (+0.000ms): mark_offset_tsc (timer_interrupt)
00010102 0.777ms (+0.000ms): _spin_lock (mark_offset_tsc)
00010103 0.777ms (+0.010ms): _spin_lock (mark_offset_tsc)
00010104 0.787ms (+0.000ms): _spin_unlock (mark_offset_tsc)
00010103 0.788ms (+0.000ms): _spin_unlock (mark_offset_tsc)
00010102 0.788ms (+0.003ms): _spin_lock (timer_interrupt)
00010103 0.792ms (+0.000ms): _spin_unlock (timer_interrupt)
00010102 0.792ms (+0.000ms): do_timer (timer_interrupt)
00010102 0.792ms (+0.000ms): update_wall_time (do_timer)
... more traces ...
00000101 1.023ms (+0.000ms): cond_resched_all (rt_run_flush)
00000101 1.023ms (+0.000ms): cond_resched_softirq (rt_run_flush)
00000101 1.023ms (+0.000ms): _spin_lock_bh (rt_run_flush)
00000101 1.024ms (+0.000ms): _spin_lock_irqsave (_spin_lock_bh)
00000202 1.024ms (+0.000ms): _spin_unlock_bh (rt_run_flush)
00000201 1.024ms (+0.000ms): local_bh_enable (rt_run_flush)
00000101 1.025ms (+0.000ms): cond_resched_all (rt_run_flush)
00000101 1.025ms (+0.000ms): cond_resched_softirq (rt_run_flush)
00000101 1.025ms (+0.000ms): _spin_lock_bh (rt_run_flush)
00000101 1.025ms (+0.000ms): _spin_lock_irqsave (_spin_lock_bh)
00000202 1.026ms (+0.000ms): _spin_unlock_bh (rt_run_flush)
00000201 1.026ms (+0.000ms): local_bh_enable (rt_run_flush)
00000101 1.026ms (+0.000ms): cond_resched_all (rt_run_flush)
00000101 1.026ms (+2915178.859ms): cond_resched_softirq (rt_run_flush)
[I'm not sure why some of the traces show these odd time values, either.]
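
For reference, the cycle that repeats above is rt_run_flush() walking
every hash bucket: detach the bucket's chain under the lock, free the
entries, then hit the VP preemption point. A sketch (reconstruction,
not the literal source):

/* sketch: the per-bucket cycle repeated for thousands of entries */
static void rt_run_flush_sketch(void)
{
	struct rtable *rth, *next;
	int i;

	get_random_bytes(&rt_hash_rnd, sizeof(rt_hash_rnd));
	for (i = rt_hash_mask; i >= 0; i--) {
		spin_lock_bh(&rt_hash_table[i].lock);
		rth = rt_hash_table[i].chain;		/* detach whole chain */
		rt_hash_table[i].chain = NULL;
		spin_unlock_bh(&rt_hash_table[i].lock);

		for (; rth; rth = next) {
			next = rth->u.rt_next;
			rt_free(rth);
		}
		cond_resched_softirq();	/* the preemption point seen after
					 * every unlock in the trace */
	}
}

So each bucket is cheap, but the flush visits every bucket of a large
cache, which is where the multi-millisecond total comes from.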

[5] rcu / cache actions

This may be a false positive due to latency-tracing overhead.

preemption latency trace v1.0.7 on 2.6.9-rc3-mm3-VP-T3
-------------------------------------------------------
 latency: 207 us, entries: 431 (431)   |   [VP:1 KP:1 SP:0 HP:0 #CPUS:2]
    -----------------
    | task: kswapd0/72, uid:0 nice:0 policy:0 rt_prio:0
    -----------------
 => started at: cond_resched_lock+0x24/0xf0
 => ended at:   cond_resched_lock+0x60/0xf0
=======>
00000001 0.000ms (+0.000ms): touch_preempt_timing (cond_resched_lock)
00000001 0.000ms (+0.000ms): smp_apic_timer_interrupt (touch_preempt_timing)
00010001 0.000ms (+0.000ms): profile_tick (smp_apic_timer_interrupt)
00010001 0.000ms (+0.000ms): profile_hook (profile_tick)
00010001 0.001ms (+0.000ms): _read_lock (profile_hook)
00010002 0.001ms (+0.000ms): notifier_call_chain (profile_hook)
00010002 0.001ms (+0.000ms): _read_unlock (profile_tick)
00010001 0.001ms (+0.000ms): profile_hit (smp_apic_timer_interrupt)
00010001 0.002ms (+0.000ms): update_process_times (smp_apic_timer_interrupt)
00010001 0.002ms (+0.000ms): update_one_process (update_process_times)
00010001 0.002ms (+0.000ms): run_local_timers (update_process_times)
00010001 0.002ms (+0.000ms): raise_softirq (update_process_times)
00010001 0.003ms (+0.000ms): scheduler_tick (update_process_times)
00010001 0.003ms (+0.000ms): sched_clock (scheduler_tick)
00010001 0.003ms (+0.000ms): rcu_check_callbacks (scheduler_tick)
00010001 0.003ms (+0.000ms): idle_cpu (rcu_check_callbacks)
00010001 0.004ms (+0.000ms): __tasklet_schedule (scheduler_tick)
00010001 0.004ms (+0.000ms): _spin_lock (scheduler_tick)
00010002 0.004ms (+0.000ms): task_timeslice (scheduler_tick)
00010002 0.004ms (+0.000ms): __bitmap_weight (scheduler_tick)
00010002 0.005ms (+0.000ms): _spin_unlock (scheduler_tick)
00010001 0.005ms (+0.000ms): rebalance_tick (scheduler_tick)
00010001 0.005ms (+0.000ms): irq_exit (smp_apic_timer_interrupt)
00000002 0.006ms (+0.000ms): do_softirq (irq_exit)
00000002 0.006ms (+0.000ms): __do_softirq (do_softirq)
00000102 0.006ms (+0.000ms): ___do_softirq (__do_softirq)
00000102 0.006ms (+0.000ms): run_timer_softirq (___do_softirq)
00000102 0.006ms (+0.000ms): _spin_lock_irq (run_timer_softirq)
00000102 0.007ms (+0.000ms): _spin_lock_irqsave (run_timer_softirq)
00000103 0.007ms (+0.000ms): _spin_unlock_irq (run_timer_softirq)
00000102 0.007ms (+0.000ms): __wake_up (run_timer_softirq)
00000102 0.007ms (+0.000ms): _spin_lock_irqsave (__wake_up)
00000103 0.008ms (+0.000ms): __wake_up_common (__wake_up)
00000103 0.008ms (+0.000ms): _spin_unlock_irqrestore (run_timer_softirq)
00000102 0.008ms (+0.000ms): cond_resched_all (___do_softirq)
00000102 0.008ms (+0.000ms): cond_resched_softirq (___do_softirq)
00000102 0.009ms (+0.000ms): tasklet_action (___do_softirq)
00000102 0.009ms (+0.000ms): rcu_process_callbacks (tasklet_action)
00000102 0.009ms (+0.000ms): __rcu_process_callbacks (rcu_process_callbacks)
00000102 0.010ms (+0.000ms): _spin_lock (__rcu_process_callbacks)
00000103 0.010ms (+0.000ms): rcu_start_batch (__rcu_process_callbacks)
00000103 0.010ms (+0.000ms): _spin_unlock (__rcu_process_callbacks)
00000102 0.010ms (+0.000ms): rcu_check_quiescent_state (__rcu_process_callbacks)
00000102 0.011ms (+0.000ms): rcu_do_batch (rcu_process_callbacks)
00000102 0.012ms (+0.000ms): d_callback (rcu_do_batch)
00000102 0.012ms (+0.001ms): kmem_cache_free (d_callback)
00000102 0.013ms (+0.000ms): d_callback (rcu_do_batch)
00000102 0.014ms (+0.000ms): kmem_cache_free (d_callback)
00000102 0.014ms (+0.000ms): d_callback (rcu_do_batch)
00000102 0.015ms (+0.000ms): kmem_cache_free (d_callback)
00000102 0.016ms (+0.000ms): cache_flusharray (kmem_cache_free)
00000102 0.016ms (+0.001ms): _spin_lock (cache_flusharray)
00000103 0.017ms (+0.003ms): free_block (cache_flusharray)
00000103 0.021ms (+0.000ms): _spin_unlock (cache_flusharray)
00000102 0.021ms (+0.000ms): memmove (cache_flusharray)
00000102 0.021ms (+0.000ms): memcpy (memmove)
00000102 0.022ms (+0.000ms): d_callback (rcu_do_batch)
00000102 0.022ms (+0.000ms): kmem_cache_free (d_callback)
00000102 0.022ms (+0.000ms): d_callback (rcu_do_batch)
00000102 0.023ms (+0.000ms): kmem_cache_free (d_callback)
00000102 0.023ms (+0.000ms): d_callback (rcu_do_batch)
00000102 0.024ms (+0.000ms): kmem_cache_free (d_callback)
00000102 0.024ms (+0.000ms): d_callback (rcu_do_batch)
00000102 0.025ms (+0.000ms): kmem_cache_free (d_callback)
...
00000102 0.190ms (+0.000ms): cache_flusharray (kmem_cache_free)
00000102 0.190ms (+0.000ms): _spin_lock (cache_flusharray)
00000103 0.191ms (+0.003ms): free_block (cache_flusharray)
00000103 0.194ms (+0.000ms): _spin_unlock (cache_flusharray)
00000102 0.194ms (+0.000ms): memmove (cache_flusharray)
00000102 0.195ms (+0.000ms): memcpy (memmove)
00000102 0.195ms (+0.000ms): d_callback (rcu_do_batch)
00000102 0.195ms (+0.000ms): kmem_cache_free (d_callback)
00000102 0.196ms (+0.000ms): __tasklet_schedule (rcu_process_callbacks)
00000102 0.196ms (+0.000ms): __rcu_process_callbacks (rcu_process_callbacks)
00000102 0.197ms (+0.000ms): rcu_check_quiescent_state (__rcu_process_callbacks)
00000102 0.197ms (+0.000ms): cond_resched_all (___do_softirq)
00000102 0.197ms (+0.000ms): cond_resched_softirq (___do_softirq)
00000102 0.198ms (+0.000ms): wake_up_process (__do_softirq)
00000102 0.198ms (+0.000ms): try_to_wake_up (wake_up_process)
00000102 0.199ms (+0.000ms): task_rq_lock (try_to_wake_up)
00000102 0.199ms (+0.001ms): _spin_lock (task_rq_lock)
00000103 0.201ms (+0.000ms): activate_task (try_to_wake_up)
00000103 0.201ms (+0.000ms): sched_clock (activate_task)
00000103 0.202ms (+0.001ms): recalc_task_prio (activate_task)
00000103 0.203ms (+0.000ms): effective_prio (recalc_task_prio)
00000103 0.203ms (+0.001ms): enqueue_task (activate_task)
00000103 0.204ms (+0.000ms): resched_task (try_to_wake_up)
00000103 0.205ms (+0.001ms): _spin_unlock_irqrestore (try_to_wake_up)
00000102 0.206ms (+0.001ms): preempt_schedule (try_to_wake_up)
00000001 0.207ms (+0.000ms): sub_preempt_count (cond_resched_lock)
00000001 0.208ms (+0.000ms): update_max_trace (check_preempt_timing)
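
My reading of this one: rcu_do_batch() invokes the callbacks whose
grace period has ended, each d_callback() frees a dentry via
kmem_cache_free(), and whenever the per-CPU slab array fills up,
cache_flusharray() takes the cache spin lock and pushes a block back
(the periodic _spin_lock/free_block pairs above). A sketch (shape
only; next_done_callback() is a made-up stand-in for the donelist
walk, and the callback signature may differ in 2.6.9):

/* sketch: the callback batch that dominates this trace */
static void rcu_do_batch_sketch(struct rcu_data *rdp)
{
	struct rcu_head *head;

	while ((head = next_done_callback(rdp)) != NULL) {
		head->func(head->arg);	/* d_callback():
					 * kmem_cache_free(dentry_cache, ...) */
		/* kmem_cache_free() batches frees into a per-CPU array;
		 * on overflow cache_flusharray() takes the slab lock and
		 * calls free_block() -- the ~3-4 usec spikes above */
	}
}

Since the whole batch runs from a tasklet, only the cond_resched
points between softirq handlers offer any preemption opportunity.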

[6] prune_icache

If I read this right, an operation that should have taken 100 usec or
so was preempted by an even longer series of operations.

preemption latency trace v1.0.7 on 2.6.9-rc3-mm3-VP-T3
-------------------------------------------------------
 latency: 263 us, entries: 597 (597)   |   [VP:1 KP:1 SP:0 HP:0 #CPUS:2]
    -----------------
    | task: kswapd0/72, uid:0 nice:0 policy:0 rt_prio:0
    -----------------
 => started at: _spin_lock+0x1f/0x70
 => ended at:   _spin_unlock+0x2d/0x60
=======>
00000001 0.000ms (+0.000ms): _spin_lock (prune_icache)
00000001 0.000ms (+0.000ms): inode_has_buffers (prune_icache)
00000001 0.001ms (+0.000ms): inode_has_buffers (prune_icache)
00000001 0.001ms (+0.000ms): inode_has_buffers (prune_icache)
00000001 0.002ms (+0.000ms): inode_has_buffers (prune_icache)
...
00000001 0.075ms (+0.000ms): smp_apic_timer_interrupt (prune_icache)
00010001 0.075ms (+0.000ms): profile_tick (smp_apic_timer_interrupt)
00010001 0.075ms (+0.000ms): profile_hook (profile_tick)
00010001 0.075ms (+0.000ms): _read_lock (profile_hook)
00010002 0.076ms (+0.000ms): notifier_call_chain (profile_hook)
00010002 0.076ms (+0.000ms): _read_unlock (profile_tick)
00010001 0.076ms (+0.000ms): profile_hit (smp_apic_timer_interrupt)
00010001 0.077ms (+0.000ms): update_process_times (smp_apic_timer_interrupt)
00010001 0.077ms (+0.000ms): update_one_process (update_process_times)
00010001 0.078ms (+0.000ms): run_local_timers (update_process_times)
00010001 0.078ms (+0.000ms): raise_softirq (update_process_times)
00010001 0.078ms (+0.000ms): scheduler_tick (update_process_times)
00010001 0.078ms (+0.000ms): sched_clock (scheduler_tick)
00010001 0.079ms (+0.000ms): rcu_check_callbacks (scheduler_tick)
00010001 0.080ms (+0.000ms): idle_cpu (rcu_check_callbacks)
... it then runs later ...
00000102 0.097ms (+0.001ms): tasklet_action (___do_softirq)
00000102 0.098ms (+0.000ms): rcu_process_callbacks (tasklet_action)
00000102 0.098ms (+0.000ms): __rcu_process_callbacks (rcu_process_callbacks)
00000102 0.099ms (+0.000ms): rcu_check_quiescent_state (__rcu_process_callbacks)
00000102 0.100ms (+0.000ms): rcu_do_batch (rcu_process_callbacks)
00000102 0.100ms (+0.000ms): d_callback (rcu_do_batch)
00000102 0.101ms (+0.000ms): kmem_cache_free (d_callback)
00000102 0.101ms (+0.000ms): d_callback (rcu_do_batch)
00000102 0.102ms (+0.000ms): kmem_cache_free (d_callback)
... finally getting back to the original work ...
00000103 0.237ms (+0.000ms): resched_task (try_to_wake_up)
00000103 0.238ms (+0.000ms): _spin_unlock_irqrestore (try_to_wake_up)
00000102 0.238ms (+0.001ms): preempt_schedule (try_to_wake_up)
00000001 0.239ms (+0.001ms): inode_has_buffers (prune_icache)
00000001 0.240ms (+0.000ms): inode_has_buffers (prune_icache)
00000001 0.241ms (+0.000ms): inode_has_buffers (prune_icache)
...
00000001 0.262ms (+0.000ms): inode_has_buffers (prune_icache)
00000001 0.263ms (+0.001ms): _spin_unlock (prune_icache)
00000001 0.264ms (+0.000ms): sub_preempt_count (_spin_unlock)
00000001 0.264ms (+0.000ms): update_max_trace (check_preempt_timing)
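
The structure here: prune_icache() takes inode_lock once and then
scans the unused-inode list, calling inode_has_buffers() per inode, so
the lock hold time grows with the scan length -- and the timer
interrupt plus the RCU batch land in the middle of it. Rough sketch
(reconstruction; the skip/free logic is simplified):

/* sketch: one spin lock held across the whole inode scan */
static void prune_icache_sketch(int nr_to_scan)
{
	spin_lock(&inode_lock);			/* preempt count -> 1 */
	for (; nr_to_scan; nr_to_scan--) {
		struct inode *inode = list_entry(inode_unused.prev,
						 struct inode, i_list);

		if (inode->i_state || atomic_read(&inode->i_count))
			continue;		/* busy: skip it */
		if (inode_has_buffers(inode) || inode->i_data.nrpages)
			continue;		/* (real code invalidates) */
		/* move the inode to the freeable list */
	}
	spin_unlock(&inode_lock);
}

The ~75 usec of inode_has_buffers() calls before the interrupt and the
tail after it are all inside that one critical section.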

[7] clear_page_tables

preemption latency trace v1.0.7 on 2.6.9-rc3-mm3-VP-T3
-------------------------------------------------------
 latency: 208 us, entries: 67 (67)   |   [VP:1 KP:1 SP:0 HP:0 #CPUS:2]
    -----------------
    | task: get_ltrace.sh/16132, uid:0 nice:0 policy:0 rt_prio:0
    -----------------
 => started at: unmap_vmas+0x12e/0x280
 => ended at:   _spin_unlock+0x2d/0x60
=======>
00000001 0.000ms (+0.000ms): touch_preempt_timing (unmap_vmas)
00000001 0.000ms (+0.000ms): __bitmap_weight (unmap_vmas)
00000001 0.000ms (+0.000ms): unmap_page_range (unmap_vmas)
00000001 0.000ms (+0.000ms): zap_pmd_range (unmap_page_range)
00000001 0.000ms (+0.000ms): zap_pte_range (zap_pmd_range)
00000001 0.001ms (+0.000ms): kmap_atomic (zap_pte_range)
00000002 0.001ms (+0.001ms): page_address (zap_pte_range)
00000002 0.002ms (+0.000ms): set_page_dirty (zap_pte_range)
00000002 0.002ms (+0.000ms): page_remove_rmap (zap_pte_range)
00000002 0.002ms (+0.000ms): set_page_dirty (zap_pte_range)
00000002 0.003ms (+0.000ms): page_remove_rmap (zap_pte_range)
00000002 0.003ms (+0.000ms): set_page_dirty (zap_pte_range)
00000002 0.003ms (+0.000ms): page_remove_rmap (zap_pte_range)
00000002 0.004ms (+0.000ms): kunmap_atomic (zap_pte_range)
00000001 0.004ms (+0.002ms): vm_acct_memory (exit_mmap)
00000001 0.007ms (+0.181ms): clear_page_tables (exit_mmap)
00000001 0.189ms (+0.000ms): flush_tlb_mm (exit_mmap)
...
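
Unlike the others, this is one big step rather than many small ones:
clear_page_tables() frees all of the exiting process's page-table
pages in a single call, with no voluntary preemption point inside.
Sketch of the shape (reconstruction, not the literal mm code):

/* sketch: the single +0.181ms step in the trace above */
static void clear_page_tables_sketch(struct mmu_gather *tlb,
				     unsigned long first, int nr)
{
	pgd_t *pgd = tlb->mm->pgd + first;

	do {
		free_one_pgd(tlb, pgd);	/* walks the pmds and frees every
					 * pte page under this pgd entry */
		pgd++;
	} while (--nr);
	/* a large address space means one long, unbroken walk -- hence
	 * a single large step instead of many small trace entries */
}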

[8] avc_insert

The watchdog timer woke this up, but avc_insert appears to be the long
step.

preemption latency trace v1.0.7 on 2.6.9-rc3-mm3-VP-T3
-------------------------------------------------------
 latency: 248 us, entries: 35 (35)   |   [VP:1 KP:1 SP:0 HP:0 #CPUS:2]
    -----------------
    | task: fam/2929, uid:0 nice:0 policy:0 rt_prio:0
    -----------------
 => started at: _spin_lock_irqsave+0x1f/0x80
 => ended at:   _spin_unlock_irqrestore+0x32/0x70
=======>
00000001 0.000ms (+0.000ms): _spin_lock_irqsave (avc_has_perm_noaudit)
00000001 0.000ms (+0.020ms): avc_insert (avc_has_perm_noaudit)
00010001 0.020ms (+0.001ms): do_nmi (avc_insert)
00010001 0.022ms (+0.005ms): do_nmi (<08049b20>)
00010001 0.027ms (+0.000ms): notifier_call_chain (default_do_nmi)
00010001 0.027ms (+0.185ms): nmi_watchdog_tick (default_do_nmi)
00000001 0.213ms (+0.000ms): memcpy (avc_has_perm_noaudit)
00000001 0.213ms (+0.000ms): _spin_unlock_irqrestore (avc_has_perm_noaudit)
00010001 0.214ms (+0.000ms): do_IRQ (_spin_unlock_irqrestore)
00010001 0.214ms (+0.000ms): do_IRQ (<00000000>)
00010001 0.215ms (+0.000ms): _spin_lock (__do_IRQ)
00010002 0.215ms (+0.000ms): ack_edge_ioapic_irq (__do_IRQ)
00010002 0.216ms (+0.000ms): redirect_hardirq (__do_IRQ)
00010002 0.216ms (+0.000ms): _spin_unlock (__do_IRQ)
00010001 0.216ms (+0.000ms): handle_IRQ_event (__do_IRQ)
00010001 0.217ms (+0.000ms): timer_interrupt (handle_IRQ_event)
00010001 0.217ms (+0.000ms): _spin_lock (timer_interrupt)
00010002 0.218ms (+0.000ms): mark_offset_tsc (timer_interrupt)
00010002 0.218ms (+0.000ms): _spin_lock (mark_offset_tsc)
00010003 0.219ms (+0.013ms): _spin_lock (mark_offset_tsc)
00010004 0.232ms (+0.000ms): _spin_unlock (mark_offset_tsc)
00010003 0.233ms (+0.000ms): _spin_unlock (mark_offset_tsc)
00010002 0.234ms (+0.007ms): _spin_lock (timer_interrupt)
00010003 0.242ms (+0.000ms): _spin_unlock (timer_interrupt)
00010002 0.242ms (+0.000ms): do_timer (timer_interrupt)
00010002 0.243ms (+0.000ms): update_wall_time (do_timer)
00010002 0.243ms (+0.003ms): update_wall_time_one_tick (update_wall_time)
00010002 0.246ms (+0.000ms): _spin_unlock (timer_interrupt)
00010001 0.247ms (+0.000ms): _spin_lock (__do_IRQ)
00010002 0.247ms (+0.000ms): note_interrupt (__do_IRQ)
00010002 0.247ms (+0.000ms): end_edge_ioapic_irq (__do_IRQ)
00010002 0.247ms (+0.000ms): _spin_unlock (__do_IRQ)
00010001 0.248ms (+0.001ms): irq_exit (do_IRQ)
00000001 0.249ms (+0.001ms): sub_preempt_count (_spin_unlock_irqrestore)
00000001 0.250ms (+0.000ms): update_max_trace (check_preempt_timing)
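
As I understand it, avc_insert() runs with the AVC lock held and
interrupts disabled (the _spin_lock_irqsave at the top), and when the
cache is at its limit it first has to reclaim old nodes -- which would
explain one long step instead of many short ones. A sketch of the
insert path (my reconstruction of the 2.6-era SELinux AVC; the helper
names and threshold are from memory and may not match exactly):

/* sketch: hash insert with reclaim, under avc_lock with irqs off */
static struct avc_node *avc_insert_sketch(u32 ssid, u32 tsid, u16 tclass)
{
	int hvalue = avc_hash(ssid, tsid, tclass);
	struct avc_node *node;

	if (avc_cache.active_nodes > AVC_CACHE_THRESHOLD)
		avc_reclaim_node();	/* scan the slots for victims:
					 * the plausibly long part */
	node = avc_claim_node();	/* pop a node off the free list */
	node->ssid = ssid;
	node->tsid = tsid;
	node->tclass = tclass;
	node->next = avc_cache.slots[hvalue];	/* chain into the bucket */
	avc_cache.slots[hvalue] = node;
	return node;
}

The NMI watchdog firing in the middle also inflates the numbers, so
part of the 248 usec is likely measurement overhead.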


* Re: [patch] voluntary-preempt-2.6.9-rc3-mm3-T3
@ 2004-10-08 23:48 Sid Boyce
From: Sid Boyce @ 2004-10-08 23:48 UTC (permalink / raw)
  To: linux-kernel

The HPET config is not set correctly in arch/x86_64/Kconfig.
CONFIG_HPET_TIMER=y and CONFIG_HPET_EMULATE_RTC=y are the only options
that appear, and hpet.o does not get built; the build also fails with
# CONFIG_HPET_TIMER is not set.
   CHK     include/linux/version.h
make[1]: `arch/x86_64/kernel/asm-offsets.s' is up to date.
   CHK     include/linux/compile.h
   GEN_INITRAMFS_LIST usr/initramfs_list
Using shipped usr/initramfs_list
   GEN     .version
   CHK     include/linux/compile.h
   UPD     include/linux/compile.h
   CC      init/version.o
   LD      init/built-in.o
   LD      .tmp_vmlinux1
arch/x86_64/kernel/built-in.o(.init.text+0x1dc1): In function `late_hpet_init':
: undefined reference to `hpet_alloc'
--------------------------------------
x86 builds OK with:

CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
CONFIG_HPET=y
# CONFIG_HPET_RTC_IRQ is not set
CONFIG_HPET_MMAP=y
Regards
Sid.
-- 
Sid Boyce .... Hamradio G3VBV and keen Flyer
=====LINUX ONLY USED HERE=====

* Re: [patch] voluntary-preempt-2.6.9-rc3-mm3-T3
@ 2004-10-07 16:55 Mark_H_Johnson
From: Mark_H_Johnson @ 2004-10-07 16:55 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Lee Revell, K.R. Foley, Rui Nuno Capela,
	Florian Schmidt, Fernando Pablo Lopez-Lezcano

>i've released the -T3 VP patch: [snip]

Thanks. This fixes the USB boot-time problem (USB mouse not working).
I do not have the problem others have reported with audio - audio works
just fine for me.

I've made two runs using latencytest as the real-time task generating
audio. To recap, I have a dual 866 MHz Pentium III with 512 MB memory,
Ensoniq audio, an 8139too network interface, and an IDE disk set to use
udma2. Both tests ran with both hard-IRQ and soft-IRQ threading enabled
(except for the audio IRQ).

For the first run I set a limit of 500 usec and had no latencies longer
than that. Very good.

For the second run I set the limit to 200 usec and got 47 traces > 200
usec in a 20-30 minute period. Two were VERY long: one was about 1.7
msec and the other was over 50 msec. The material at the end summarizes
the types of traces I had. I will send the detailed traces separately.

  --Mark

To summarize, the symptoms, with trace numbers following:
[1] VERY long latencies
04 27
[2] Long traces, USB related
00 32 42 43 44
[3] Long traces, network related
03 05... 24 28 46
[4] Pruning icache
25 26
[5] clear_page_tables
01 02 29 30 31 33... 41 45

[1] VERY long latencies

This appears to be disk related; IRQ 14 is ide0.

preemption latency trace v1.0.7 on 2.6.9-rc3-mm3-VP-T3
-------------------------------------------------------
 latency: 53418 us, entries: 226 (226)   |   [VP:1 KP:1 SP:1 HP:1 #CPUS:2]
    -----------------
    | task: IRQ 14/203, uid:0 nice:-10 policy:0 rt_prio:0
    -----------------
 => started at: _spin_lock_irqsave+0x1f/0x80
 => ended at:   _spin_unlock+0x2d/0x60
=======>
00000001 0.000ms (+0.000ms): _spin_lock_irqsave (ide_intr)
00000001 0.000ms (+0.000ms): drive_is_ready (ide_intr)
00000001 0.001ms (+0.000ms): __ide_dma_test_irq (drive_is_ready)
00000001 0.002ms (+0.000ms): ide_inb (__ide_dma_test_irq)
00000001 0.002ms (+0.000ms): del_timer (ide_intr)
00000001 0.003ms (+0.422ms): _spin_lock_irqsave (del_timer)
00010002 0.425ms (+0.000ms): do_nmi (sub_preempt_count)
00010002 0.426ms (+0.003ms): do_nmi (___trace)
00010002 0.429ms (+0.000ms): notifier_call_chain (default_do_nmi)
00010002 0.430ms (+0.993ms): nmi_watchdog_tick (default_do_nmi)
00010002 1.423ms (+0.000ms): do_nmi (_spin_lock_irqsave)
00010002 1.424ms (+0.001ms): do_nmi (___trace)
00010002 1.425ms (+0.000ms): notifier_call_chain (default_do_nmi)
00010002 1.425ms (+0.996ms): nmi_watchdog_tick (default_do_nmi)
00010001 2.422ms (+0.000ms): do_nmi (_spin_lock_irqsave)
00010001 2.422ms (+0.001ms): do_nmi (check_preempt_timing)
00010001 2.424ms (+0.000ms): notifier_call_chain (default_do_nmi)
00010001 2.424ms (+0.996ms): nmi_watchdog_tick (default_do_nmi)
00010002 3.420ms (+0.000ms): do_nmi (_spin_lock_irqsave)
00010002 3.420ms (+0.001ms): do_nmi (check_preempt_timing)
00010002 3.422ms (+0.000ms): notifier_call_chain (default_do_nmi)
00010002 3.422ms (+0.996ms): nmi_watchdog_tick (default_do_nmi)
00010002 4.418ms (+0.000ms): do_nmi (_spin_lock_irqsave)
00010002 4.419ms (+0.003ms): do_nmi (___trace)
00010002 4.422ms (+0.000ms): notifier_call_chain (default_do_nmi)
00010002 4.422ms (+0.994ms): nmi_watchdog_tick (default_do_nmi)
...
00010001 49.345ms (+0.996ms): nmi_watchdog_tick (default_do_nmi)
00010002 50.341ms (+0.000ms): do_nmi (_spin_lock_irqsave)
00010002 50.341ms (+0.001ms): do_nmi (___trace)
00010002 50.343ms (+0.000ms): notifier_call_chain (default_do_nmi)
00010002 50.343ms (+0.996ms): nmi_watchdog_tick (default_do_nmi)
00010002 51.339ms (+0.000ms): do_nmi (add_preempt_count)
00010002 51.340ms (+0.003ms): do_nmi (cycles_to_usecs)
00010002 51.343ms (+0.000ms): notifier_call_chain (default_do_nmi)
00010002 51.343ms (+0.994ms): nmi_watchdog_tick (default_do_nmi)
00010002 52.338ms (+0.000ms): do_nmi (_spin_lock_irqsave)
00010002 52.338ms (+0.003ms): do_nmi (touch_preempt_timing)
00010002 52.342ms (+0.000ms): notifier_call_chain (default_do_nmi)
00010002 52.342ms (+0.994ms): nmi_watchdog_tick (default_do_nmi)
00010002 53.336ms (+0.000ms): do_nmi (add_preempt_count)
00010002 53.336ms (+0.003ms): do_nmi (check_preempt_timing)
00010002 53.340ms (+0.000ms): notifier_call_chain (default_do_nmi)
00010002 53.340ms (+0.076ms): nmi_watchdog_tick (default_do_nmi)
00000002 53.417ms (+0.000ms): _spin_unlock_irqrestore (del_timer)
00000001 53.417ms (+0.000ms): _spin_unlock (ide_intr)
00000001 53.418ms (+0.000ms): sub_preempt_count (_spin_unlock)
00000001 53.419ms (+0.000ms): update_max_trace (check_preempt_timing)

Here is the other one: the same basic symptom, just not as long in
duration.

preemption latency trace v1.0.7 on 2.6.9-rc3-mm3-VP-T3
-------------------------------------------------------
 latency: 1715 us, entries: 18 (18)   |   [VP:1 KP:1 SP:1 HP:1 #CPUS:2]
    -----------------
    | task: IRQ 14/203, uid:0 nice:-10 policy:0 rt_prio:0
    -----------------
 => started at: _spin_lock_irqsave+0x1f/0x80
 => ended at:   _spin_unlock+0x2d/0x60
=======>
00000001 0.000ms (+0.000ms): _spin_lock_irqsave (ide_intr)
00000001 0.000ms (+0.000ms): drive_is_ready (ide_intr)
00000001 0.001ms (+0.000ms): __ide_dma_test_irq (drive_is_ready)
00000001 0.001ms (+0.000ms): ide_inb (__ide_dma_test_irq)
00000001 0.002ms (+0.000ms): del_timer (ide_intr)
00000001 0.002ms (+0.022ms): _spin_lock_irqsave (del_timer)
00010002 0.024ms (+0.000ms): do_nmi (sub_preempt_count)
00010002 0.025ms (+0.001ms): do_nmi (check_preempt_timing)
00010002 0.026ms (+0.000ms): notifier_call_chain (default_do_nmi)
00010002 0.027ms (+0.996ms): nmi_watchdog_tick (default_do_nmi)
00010002 1.023ms (+0.000ms): do_nmi (sub_preempt_count)
00010002 1.023ms (+0.003ms): do_nmi (check_preempt_timing)
00010002 1.027ms (+0.000ms): notifier_call_chain (default_do_nmi)
00010002 1.027ms (+0.688ms): nmi_watchdog_tick (default_do_nmi)
00000002 1.715ms (+0.000ms): _spin_unlock_irqrestore (del_timer)
00000001 1.715ms (+0.000ms): _spin_unlock (ide_intr)
00000001 1.716ms (+0.000ms): sub_preempt_count (_spin_unlock)
00000001 1.716ms (+0.000ms): update_max_trace (check_preempt_timing)
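
Both traces have the same shape: ide_intr() takes the IDE lock with
interrupts off, calls del_timer(), and then the trace sits inside that
region for 1.7-53 ms with nothing but NMI watchdog ticks getting
through -- consistent with the CPU spinning on (or stalled inside) a
lock with IRQs disabled. Sketch of the critical section
(reconstruction, not the literal drivers/ide code):

/* sketch: where the time disappears, with only NMIs recorded */
irqreturn_t ide_intr_sketch(int irq, void *dev_id, struct pt_regs *regs)
{
	ide_hwgroup_t *hwgroup = dev_id;
	ide_drive_t *drive = hwgroup->drive;
	unsigned long flags;

	spin_lock_irqsave(&ide_lock, flags);	/* interrupts off from here */
	if (!drive_is_ready(drive))		/* __ide_dma_test_irq etc. */
		goto out;
	del_timer(&hwgroup->timer);	/* its _spin_lock_irqsave is where
					 * the +0.4ms / multi-ms gaps in
					 * the traces sit */
	/* ... run the end-of-request handler ... */
out:
	spin_unlock_irqrestore(&ide_lock, flags);
	return IRQ_HANDLED;
}

With interrupts disabled only the NMI watchdog can run, which is
exactly what both traces show for the whole stall.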

[2] Long traces, USB related

I had several traces with hundreds of lines of tracing data. This set
appears to be related to USB processing (moving the mouse).

This one is typical.

preemption latency trace v1.0.7 on 2.6.9-rc3-mm3-VP-T3
-------------------------------------------------------
 latency: 365 us, entries: 320 (320)   |   [VP:1 KP:1 SP:1 HP:1 #CPUS:2]
    -----------------
    | task: X/2815, uid:0 nice:0 policy:0 rt_prio:0
    -----------------
 => started at: _spin_lock_irqsave+0x1f/0x80
 => ended at:   _spin_unlock_irqrestore+0x32/0x70

[3] Long traces, network related

Similar to the previous one, and a symptom I have reported before.
I don't see any obvious fixes for this type of problem, and the
overhead of tracing may introduce enough delay to make the problem
look much worse than it really is.

preemption latency trace v1.0.7 on 2.6.9-rc3-mm3-VP-T3
-------------------------------------------------------
 latency: 203 us, entries: 401 (401)   |   [VP:1 KP:1 SP:1 HP:1 #CPUS:2]
    -----------------
    | task: ksoftirqd/0/3, uid:0 nice:-10 policy:0 rt_prio:0
    -----------------
 => started at: _spin_lock+0x1f/0x70
 => ended at:   rtl8139_rx+0x217/0x340

OR

preemption latency trace v1.0.7 on 2.6.9-rc3-mm3-VP-T3
-------------------------------------------------------
 latency: 222 us, entries: 562 (562)   |   [VP:1 KP:1 SP:1 HP:1 #CPUS:2]
    -----------------
    | task: rcp/7416, uid:2711 nice:0 policy:0 rt_prio:0
    -----------------
 => started at: tcp_prequeue_process+0x49/0xb0
 => ended at:   local_bh_enable+0x3f/0xb0


[4] Pruning icache

I don't recall seeing this one before. It was long enough for a clock
tick to come in, but at only 25 usec the clock overhead was not
significant.

preemption latency trace v1.0.7 on 2.6.9-rc3-mm3-VP-T3
-------------------------------------------------------
 latency: 274 us, entries: 165 (165)   |   [VP:1 KP:1 SP:1 HP:1 #CPUS:2]
    -----------------
    | task: kswapd0/72, uid:0 nice:0 policy:0 rt_prio:0
    -----------------
 => started at: _spin_lock+0x1f/0x70
 => ended at:   _spin_unlock+0x2d/0x60
=======>
00000001 0.000ms (+0.002ms): _spin_lock (prune_icache)
00000001 0.002ms (+0.005ms): inode_has_buffers (prune_icache)
00000001 0.007ms (+0.001ms): inode_has_buffers (prune_icache)
00000001 0.009ms (+0.002ms): inode_has_buffers (prune_icache)
00000001 0.011ms (+0.001ms): inode_has_buffers (prune_icache)
00000001 0.013ms (+0.002ms): inode_has_buffers (prune_icache)
...
00000001 0.263ms (+0.002ms): inode_has_buffers (prune_icache)
00000001 0.265ms (+0.001ms): inode_has_buffers (prune_icache)
00000001 0.266ms (+0.001ms): inode_has_buffers (prune_icache)
00000001 0.268ms (+0.001ms): inode_has_buffers (prune_icache)
00000001 0.269ms (+0.001ms): inode_has_buffers (prune_icache)
00000001 0.271ms (+0.001ms): inode_has_buffers (prune_icache)
00000001 0.272ms (+0.000ms): inode_has_buffers (prune_icache)
00000001 0.273ms (+0.002ms): _spin_unlock (prune_icache)
00000001 0.275ms (+0.000ms): sub_preempt_count (_spin_unlock)
00000001 0.276ms (+0.000ms): update_max_trace (check_preempt_timing)

[5] clear_page_tables

Usually a short trace (under 50-100 lines) but taking significant time
to perform. You can see the big hit at clear_page_tables in this one.

preemption latency trace v1.0.7 on 2.6.9-rc3-mm3-VP-T3
-------------------------------------------------------
 latency: 234 us, entries: 38 (38)   |   [VP:1 KP:1 SP:1 HP:1 #CPUS:2]
    -----------------
    | task: get_ltrace.sh/7082, uid:0 nice:0 policy:0 rt_prio:0
    -----------------
 => started at: unmap_vmas+0x12e/0x280
 => ended at:   _spin_unlock+0x2d/0x60
=======>
00000001 0.000ms (+0.000ms): touch_preempt_timing (unmap_vmas)
00000001 0.000ms (+0.000ms): __bitmap_weight (unmap_vmas)
00000001 0.000ms (+0.000ms): unmap_page_range (unmap_vmas)
00000001 0.000ms (+0.000ms): zap_pmd_range (unmap_page_range)
00000001 0.000ms (+0.000ms): zap_pte_range (zap_pmd_range)
00000001 0.001ms (+0.000ms): kmap_atomic (zap_pte_range)
00000002 0.001ms (+0.000ms): page_address (zap_pte_range)
00000002 0.001ms (+0.000ms): set_page_dirty (zap_pte_range)
00000002 0.001ms (+0.000ms): page_remove_rmap (zap_pte_range)
00000002 0.002ms (+0.000ms): set_page_dirty (zap_pte_range)
00000002 0.002ms (+0.000ms): page_remove_rmap (zap_pte_range)
00000002 0.002ms (+0.000ms): kunmap_atomic (zap_pte_range)
00000001 0.003ms (+0.001ms): vm_acct_memory (exit_mmap)
00000001 0.004ms (+0.217ms): clear_page_tables (exit_mmap)
00000001 0.221ms (+0.003ms): flush_tlb_mm (exit_mmap)
00000001 0.225ms (+0.000ms): free_pages_and_swap_cache (exit_mmap)
00000001 0.225ms (+0.000ms): lru_add_drain (free_pages_and_swap_cache)
00000001 0.226ms (+0.000ms): release_pages (free_pages_and_swap_cache)
00000001 0.227ms (+0.000ms): _spin_lock_irq (release_pages)
00000001 0.227ms (+0.001ms): _spin_lock_irqsave (release_pages)
00000002 0.228ms (+0.000ms): _spin_unlock_irq (release_pages)
00000001 0.228ms (+0.000ms): __pagevec_free (release_pages)
00000001 0.228ms (+0.000ms): free_hot_cold_page (__pagevec_free)
00000001 0.229ms (+0.000ms): free_hot_cold_page (__pagevec_free)
00000001 0.229ms (+0.000ms): free_hot_cold_page (__pagevec_free)
00000001 0.230ms (+0.000ms): free_hot_cold_page (__pagevec_free)
00000001 0.230ms (+0.000ms): free_hot_cold_page (__pagevec_free)
00000001 0.230ms (+0.000ms): free_hot_cold_page (__pagevec_free)
00000001 0.231ms (+0.000ms): free_hot_cold_page (__pagevec_free)
00000001 0.231ms (+0.000ms): free_hot_cold_page (__pagevec_free)
00000001 0.232ms (+0.000ms): _spin_lock_irq (release_pages)
00000001 0.232ms (+0.000ms): _spin_lock_irqsave (release_pages)
00000002 0.232ms (+0.000ms): _spin_unlock_irq (release_pages)
00000001 0.233ms (+0.000ms): __pagevec_free (release_pages)
00000001 0.233ms (+0.000ms): free_hot_cold_page (__pagevec_free)
00000001 0.233ms (+0.001ms): _spin_unlock (exit_mmap)
00000001 0.235ms (+0.001ms): sub_preempt_count (_spin_unlock)
00000001 0.236ms (+0.000ms): update_max_trace (check_preempt_timing)


* Re: BKL backtraces - was: Re: [patch] voluntary-preempt-2.6.9-rc2-mm1-S1
@ 2004-09-21  7:18 Ingo Molnar
From: Ingo Molnar @ 2004-09-21  7:18 UTC (permalink / raw)
  To: K.R. Foley; +Cc: linux-kernel, Lee Revell, Mark_H_Johnson



* K.R. Foley <kr@cybsft.com> wrote:

> All of these were generated while booting:
> 
> Sep 20 19:45:10 porky kernel: using smp_processor_id() in preemptible 
> code: modprobe/1019
> Sep 20 19:45:10 porky kernel:  [<c011c58e>] smp_processor_id+0x8e/0xa0
> Sep 20 19:45:10 porky kernel:  [<c013ace6>] module_unload_init+0x46/0x70
> Sep 20 19:45:10 porky kernel:  [<c013ce58>] load_module+0x598/0xb10
> Sep 20 19:45:10 porky kernel:  [<c013d438>] sys_init_module+0x68/0x280
> Sep 20 19:45:10 porky kernel:  [<c01066b9>] sysenter_past_esp+0x52/0x71
> 
> The above one of course repeats on each module load.

ok, this is a harmless false positive - the attached patch fixes it.

> Sep 20 19:45:10 porky kernel: using smp_processor_id() in preemptible 
> code: X/1017
> Sep 20 19:45:10 porky kernel:  [<c011c58e>] smp_processor_id+0x8e/0xa0
> Sep 20 19:45:10 porky kernel:  [<c01d6c15>] add_timer_randomness+0x125/0x150
> Sep 20 19:45:10 porky kernel:  [<c01d6c9e>] add_mouse_randomness+0x1e/0x30
> Sep 20 19:45:10 porky kernel:  [<c022b835>] input_event+0x55/0x3f0

> Sep 20 19:45:10 porky kernel:  [<c01e1390>] vt_ioctl+0x0/0x1ad0
> Sep 20 19:45:10 porky kernel:  [<c01dc08b>] tty_ioctl+0x37b/0x4d0
> Sep 20 19:45:10 porky kernel:  [<c0175034>] sys_ioctl+0xe4/0x240
> Sep 20 19:45:10 porky kernel:  [<c01066b9>] sysenter_past_esp+0x52/0x71
> 
> The X one above repeats once also.

aha! This is a real one, fixed by the second patch. This piece of code
relied on add_timer_randomness() always being called with preemption
disabled.
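
(the failure mode, in miniature - annotated pre-patch code from below;
with preemption enabled the task can be migrated between computing the
per-CPU address and using it:)

	/* runs preemptible via input_event() -> add_mouse_randomness() */
	if ( random_state->entropy_count > trickle_thresh &&
	     (__get_cpu_var(trickle_count)++ & 0xfff))
		/* __get_cpu_var() indexes per-CPU data through
		 * smp_processor_id(); migrate right here and two tasks
		 * can do a non-atomic ++ on the same CPU's counter */
		return;

hence the preempt_disable()/preempt_enable() bracket in the second
patch.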

these fixes will show up in -S2.

	Ingo


--- linux/kernel/module.c.orig	
+++ linux/kernel/module.c	
@@ -394,7 +394,7 @@ static void module_unload_init(struct mo
 	for (i = 0; i < NR_CPUS; i++)
 		local_set(&mod->ref[i].count, 0);
 	/* Hold reference count during initialization. */
-	local_set(&mod->ref[smp_processor_id()].count, 1);
+	local_set(&mod->ref[_smp_processor_id()].count, 1);
 	/* Backwards compatibility macros put refcount during init. */
 	mod->waiter = current;
 }


--- linux/drivers/char/random.c.orig	
+++ linux/drivers/char/random.c	
@@ -807,10 +807,11 @@ static void add_timer_randomness(struct 
 	long		delta, delta2, delta3;
 	int		entropy = 0;
 
+	preempt_disable();
 	/* if over the trickle threshold, use only 1 in 4096 samples */
 	if ( random_state->entropy_count > trickle_thresh &&
 	     (__get_cpu_var(trickle_count)++ & 0xfff))
-		return;
+		goto out;
 
 	/*
 	 * Use get_cycles() if implemented, otherwise fall back to
@@ -861,6 +862,8 @@ static void add_timer_randomness(struct 
 		entropy = int_ln_12bits(delta);
 	}
 	batch_entropy_store(num, time, entropy);
+out:
+	preempt_enable();
 }
 
 void add_keyboard_randomness(unsigned char scancode)

