* Re: Unusually high system CPU usage with recent kernels
@ 2013-08-30 14:46 Tibor Billes
From: Tibor Billes @ 2013-08-30 14:46 UTC (permalink / raw)
To: paulmck; +Cc: linux-kernel
> From: Paul E. McKenney Sent: 08/30/13 03:24 AM
> To: Tibor Billes
> Subject: Re: Unusually high system CPU usage with recent kernels
>
> On Tue, Aug 27, 2013 at 10:05:42PM +0200, Tibor Billes wrote:
> > > From: Paul E. McKenney Sent: 08/26/13 06:28 AM
> > > On Sun, Aug 25, 2013 at 09:50:21PM +0200, Tibor Billes wrote:
> > > > From: Paul E. McKenney Sent: 08/24/13 11:03 PM
> > > > > On Sat, Aug 24, 2013 at 09:59:45PM +0200, Tibor Billes wrote:
> > > > > > From: Paul E. McKenney Sent: 08/22/13 12:09 AM
> > > > > > > On Wed, Aug 21, 2013 at 11:05:51PM +0200, Tibor Billes wrote:
> > > > > > > > > From: Paul E. McKenney Sent: 08/21/13 09:12 PM
> > > > > > > > > On Wed, Aug 21, 2013 at 08:14:46PM +0200, Tibor Billes wrote:
> > > > > > > > > > > From: Paul E. McKenney Sent: 08/20/13 11:43 PM
> > > > > > > > > > > On Tue, Aug 20, 2013 at 10:52:26PM +0200, Tibor Billes wrote:
> > > > > > > > > > > > > From: Paul E. McKenney Sent: 08/20/13 04:53 PM
> > > > > > > > > > > > > On Tue, Aug 20, 2013 at 08:01:28AM +0200, Tibor Billes wrote:
> > > > > > > > > > > > > > Hi,
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > I was using the 3.9.7 stable release and tried to upgrade to the 3.10.x series.
> > > > > > > > > > > > > > The 3.10.x series was showing unusually high (>75%) system CPU usage in some
> > > > > > > > > > > > > > situations, making things really slow. The latest stable I tried is 3.10.7.
> > > > > > > > > > > > > > I also tried 3.11-rc5; both show this behaviour. This behaviour doesn't
> > > > > > > > > > > > > > show up when the system is idling, only when doing some CPU intensive work,
> > > > > > > > > > > > > > like compiling with multiple threads. Compiling with only one thread seems not
> > > > > > > > > > > > > > to trigger this behaviour.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > To be more precise I did a `perf record -a` while compiling a large C++ program
> > > > > > > > > > > > > > with scons using 4 threads, the result is appended at the end of this email.
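A system-wide profile like the one described above can be captured roughly as follows; the exact perf invocation and report options are assumptions, not taken from the thread:

  perf record -a -- scons -j4           # sample all CPUs for the duration of the build
  perf report --stdio --sort comm,dso   # summarize which tasks/objects burned the CPU time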
> > > > > > > > > > > > >
> > > > > > > > > > > > > New one on me! You are running a mainstream system (x86_64), so I am
> > > > > > > > > > > > > surprised no one else noticed.
> > > > > > > > > > > > >
> > > > > > > > > > > > > Could you please send along your .config file?
> > > > > > > > > > > >
> > > > > > > > > > > > Here it is
> > > > > > > > > > >
> > > > > > > > > > > Interesting. I don't see RCU stuff all that high on the list, but
> > > > > > > > > > > the items I do see lead me to suspect RCU_FAST_NO_HZ, which has some
> > > > > > > > > > > relevance to the otherwise inexplicable group of commits you located
> > > > > > > > > > > with your bisection. Could you please rerun with CONFIG_RCU_FAST_NO_HZ=n?
> > > > > > > > > > >
> > > > > > > > > > > If that helps, there are some things I could try.
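One way to rebuild with that option turned off, assuming the kernel tree's scripts/config helper and an existing .config (a sketch, not the exact commands used in the thread):

  scripts/config --disable RCU_FAST_NO_HZ         # run from the kernel source tree
  make olddefconfig                               # resolve any dependent options
  make -j4 && sudo make modules_install install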
> > > > > > > > > >
> > > > > > > > > > It did help. I didn't notice anything unusual when running with CONFIG_RCU_FAST_NO_HZ=n.
> > > > > > > > >
> > > > > > > > > Interesting. Thank you for trying this -- and we at least have a
> > > > > > > > > short-term workaround for this problem. I will put a patch together
> > > > > > > > > for further investigation.
> > > > > > > >
> > > > > > > > I don't specifically need this config option so I'm fine without it in
> > > > > > > > the long term, but I guess it's not supposed to behave like that.
> > > > > > >
> > > > > > > OK, good, we have a long-term workaround for your specific case,
> > > > > > > even better. ;-)
> > > > > > >
> > > > > > > But yes, there are situations where RCU_FAST_NO_HZ needs to work
> > > > > > > a bit better. I hope you will bear with me with a bit more
> > > > > > > testing...
> > > > > > >
> > > > > > > > > In the meantime, could you please tell me how you were measuring
> > > > > > > > > performance for your kernel builds? Wall-clock time required to complete
> > > > > > > > > one build? Number of builds completed per unit time? Something else?
> > > > > > > >
> > > > > > > > Actually, I wasn't all this sophisticated. I have a system monitor
> > > > > > > > applet on my top panel (using MATE, Linux Mint), four little graphs,
> > > > > > > > one of which shows CPU usage. Different colors indicate different kinds
> > > > > > > > of CPU usage. Blue shows user space usage, red shows system usage, and
> > > > > > > > two more for nice and iowait. During a normal compile it's almost
> > > > > > > > completely filled with blue user space CPU usage, only the top few
> > > > > > > > pixels show some iowait and system usage. With CONFIG_RCU_FAST_NO_HZ
> > > > > > > > set, about 3/4 of the graph was red system CPU usage, the rest was
> > > > > > > > blue. So I just looked for a pile of red on my graphs when I tested
> > > > > > > > different kernel builds. But the compile speed was also so horrible that
> > > > > > > > I couldn't wait for the build to finish. Even the UI became unresponsive.
> > > > > > >
> > > > > > > We have been having problems with CPU accounting, but this one looks
> > > > > > > quite real.
> > > > > > >
> > > > > > > > Now I did some measuring. In the normal case a compile finished in 36
> > > > > > > > seconds, compiled 315 object files. Here are some output lines from
> > > > > > > > dstat -tasm --vm during compile:
> > > > > > > > ----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
> > > > > > > > time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
> > > > > > > > 21-08 21:48:05| 91 8 2 0 0 0| 0 5852k| 0 0 | 0 0 |1413 1772 | 0 7934M| 581M 58.0M 602M 6553M| 0 71k 46k 54k
> > > > > > > > 21-08 21:48:06| 93 6 1 0 0 0| 0 2064k| 137B 131B| 0 0 |1356 1650 | 0 7934M| 649M 58.0M 604M 6483M| 0 72k 47k 28k
> > > > > > > > 21-08 21:48:07| 86 11 4 0 0 0| 0 5872k| 0 0 | 0 0 |2000 2991 | 0 7934M| 577M 58.0M 627M 6531M| 0 99k 67k 79k
> > > > > > > > 21-08 21:48:08| 87 9 3 0 0 0| 0 2840k| 0 0 | 0 0 |2558 4164 | 0 7934M| 597M 58.0M 632M 6507M| 0 96k 57k 51k
> > > > > > > > 21-08 21:48:09| 93 7 1 0 0 0| 0 3032k| 0 0 | 0 0 |1329 1512 | 0 7934M| 641M 58.0M 626M 6469M| 0 61k 48k 39k
> > > > > > > > 21-08 21:48:10| 93 6 0 0 0 0| 0 4984k| 0 0 | 0 0 |1160 1146 | 0 7934M| 572M 58.0M 628M 6536M| 0 50k 40k 57k
> > > > > > > > 21-08 21:48:11| 86 9 6 0 0 0| 0 2520k| 0 0 | 0 0 |2947 4760 | 0 7934M| 605M 58.0M 631M 6500M| 0 103k 55k 45k
> > > > > > > > 21-08 21:48:12| 90 8 2 0 0 0| 0 2840k| 0 0 | 0 0 |2674 4179 | 0 7934M| 671M 58.0M 635M 6431M| 0 84k 59k 42k
> > > > > > > > 21-08 21:48:13| 90 9 1 0 0 0| 0 4656k| 0 0 | 0 0 |1223 1410 | 0 7934M| 643M 58.0M 638M 6455M| 0 90k 62k 68k
> > > > > > > > 21-08 21:48:14| 91 8 1 0 0 0| 0 3572k| 0 0 | 0 0 |1432 1828 | 0 7934M| 647M 58.0M 641M 6447M| 0 81k 59k 57k
> > > > > > > > 21-08 21:48:15| 91 8 1 0 0 0| 0 5116k| 116B 0 | 0 0 |1194 1295 | 0 7934M| 605M 58.0M 644M 6487M| 0 69k 54k 64k
> > > > > > > > 21-08 21:48:16| 87 10 3 0 0 0| 0 5140k| 0 0 | 0 0 |1761 2586 | 0 7934M| 584M 58.0M 650M 6502M| 0 105k 64k 68k
> > > > > > > >
> > > > > > > > The abnormal case compiled only 182 object files in 6 and a half minutes,
> > > > > > > > then I stopped it. The same dstat output for this case:
> > > > > > > > ----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
> > > > > > > > time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
> > > > > > > > 21-08 22:10:49| 27 62 0 0 11 0| 0 0 | 210B 0 | 0 0 |1414 3137k| 0 7934M| 531M 57.6M 595M 6611M| 0 1628 1250 322
> > > > > > > > 21-08 22:10:50| 25 60 4 0 11 0| 0 88k| 126B 0 | 0 0 |1337 3110k| 0 7934M| 531M 57.6M 595M 6611M| 0 91 128 115
> > > > > > > > 21-08 22:10:51| 26 63 0 0 11 0| 0 184k| 294B 0 | 0 0 |1411 3147k| 0 7934M| 531M 57.6M 595M 6611M| 0 1485 814 815
> > > > > > > > 21-08 22:10:52| 26 63 0 0 11 0| 0 0 | 437B 239B| 0 0 |1355 3160k| 0 7934M| 531M 57.6M 595M 6611M| 0 24 94 97
> > > > > > > > 21-08 22:10:53| 26 63 0 0 11 0| 0 0 | 168B 0 | 0 0 |1397 3155k| 0 7934M| 531M 57.6M 595M 6611M| 0 479 285 273
> > > > > > > > 21-08 22:10:54| 26 63 0 0 11 0| 0 4096B| 396B 324B| 0 0 |1346 3154k| 0 7934M| 531M 57.6M 595M 6611M| 0 27 145 145
> > > > > > > > 21-08 22:10:55| 26 63 0 0 11 0| 0 60k| 0 0 | 0 0 |1353 3148k| 0 7934M| 531M 57.6M 595M 6610M| 0 93 117 36
> > > > > > > > 21-08 22:10:56| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1341 3172k| 0 7934M| 531M 57.6M 595M 6610M| 0 158 87 74
> > > > > > > > 21-08 22:10:57| 26 62 1 0 11 0| 0 0 | 42B 60B| 0 0 |1332 3162k| 0 7934M| 531M 57.6M 595M 6610M| 0 56 82 78
> > > > > > > > 21-08 22:10:58| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1334 3178k| 0 7934M| 531M 57.6M 595M 6610M| 0 26 56 56
> > > > > > > > 21-08 22:10:59| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1336 3179k| 0 7934M| 531M 57.6M 595M 6610M| 0 3 33 32
> > > > > > > > 21-08 22:11:00| 26 63 0 0 11 0| 0 24k| 90B 108B| 0 0 |1347 3172k| 0 7934M| 531M 57.6M 595M 6610M| 0 41 73 71
> > > > > > > >
> > > > > > > > I have four logical cores, so 25% makes up one core. I don't know if the
> > > > > > > > ~26% user CPU usage has anything to do with this fact or is just a
> > > > > > > > coincidence. The rest is ~63% system and ~11% hardware interrupt. Do these
> > > > > > > > support what you suspect?
> > > > > > >
> > > > > > > The massive increase in context switches does come as a bit of a surprise!
> > > > > > > It does rule out my initial suspicion of lock contention, but then again
> > > > > > > the fact that you have only four CPUs made that pretty unlikely to begin
> > > > > > > with.
> > > > > > >
> > > > > > > 2.4k average context switches in the good case for the full run vs. 3,156k
> > > > > > > for about half of a run in the bad case. That is an increase of more
> > > > > > > than three orders of magnitude!
> > > > > > >
> > > > > > > Yow!!!
> > > > > > >
> > > > > > > Page faults are actually -higher- in the good case. You have about 6.5GB
> > > > > > > free in both cases, so you are not running out of memory. Lots more disk
> > > > > > > writes in the good case, perhaps consistent with its getting more done.
> > > > > > > Networking is negligible in both cases.
> > > > > > >
> > > > > > > Lots of hardware interrupts in the bad case as well. Would you be willing
> > > > > > > to take a look at /proc/interrupts before and after to see which one you
> > > > > > > are getting hit with? (Or whatever interrupt tracking tool you prefer.)
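Capturing and comparing the two snapshots can be done with something as simple as the following sketch (file names are illustrative):

  cat /proc/interrupts > /tmp/irq.before
  scons -j4                                # or stop the build after ~40 seconds
  cat /proc/interrupts > /tmp/irq.after
  diff -u /tmp/irq.before /tmp/irq.after   # the per-IRQ deltas show which line fired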
> > > > > >
> > > > > > Here are the results.
> > > > > >
> > > > > > Good case before:
> > > > > > CPU0 CPU1 CPU2 CPU3
> > > > > > 0: 17 0 0 0 IO-APIC-edge timer
> > > > > > 1: 356 1 68 4 IO-APIC-edge i8042
> > > > > > 8: 0 0 1 0 IO-APIC-edge rtc0
> > > > > > 9: 330 14 449 71 IO-APIC-fasteoi acpi
> > > > > > 12: 10 108 269 2696 IO-APIC-edge i8042
> > > > > > 16: 36 10 111 2 IO-APIC-fasteoi ehci_hcd:usb1
> > > > > > 17: 20 3 25 4 IO-APIC-fasteoi mmc0
> > > > > > 21: 3 0 34 0 IO-APIC-fasteoi ehci_hcd:usb2
> > > > > > 40: 0 1 12 11 PCI-MSI-edge mei_me
> > > > > > 41: 10617 173 9959 292 PCI-MSI-edge ahci
> > > > > > 42: 862 11 186 26 PCI-MSI-edge xhci_hcd
> > > > > > 43: 107 77 27 102 PCI-MSI-edge i915
> > > > > > 44: 5322 20 434 22 PCI-MSI-edge iwlwifi
> > > > > > 45: 180 0 183 86 PCI-MSI-edge snd_hda_intel
> > > > > > 46: 0 3 0 0 PCI-MSI-edge eth0
> > > > > > NMI: 1 0 0 0 Non-maskable interrupts
> > > > > > LOC: 16312 15177 10840 8995 Local timer interrupts
> > > > > > SPU: 0 0 0 0 Spurious interrupts
> > > > > > PMI: 1 0 0 0 Performance monitoring interrupts
> > > > > > IWI: 1160 523 1031 481 IRQ work interrupts
> > > > > > RTR: 3 0 0 0 APIC ICR read retries
> > > > > > RES: 14976 16135 9973 10784 Rescheduling interrupts
> > > > > > CAL: 482 457 151 370 Function call interrupts
> > > > > > TLB: 70 106 352 230 TLB shootdowns
> > > > > > TRM: 0 0 0 0 Thermal event interrupts
> > > > > > THR: 0 0 0 0 Threshold APIC interrupts
> > > > > > MCE: 0 0 0 0 Machine check exceptions
> > > > > > MCP: 2 2 2 2 Machine check polls
> > > > > > ERR: 0
> > > > > > MIS: 0
> > > > > >
> > > > > > Good case after:
> > > > > > CPU0 CPU1 CPU2 CPU3
> > > > > > 0: 17 0 0 0 IO-APIC-edge timer
> > > > > > 1: 367 1 81 4 IO-APIC-edge i8042
> > > > > > 8: 0 0 1 0 IO-APIC-edge rtc0
> > > > > > 9: 478 14 460 71 IO-APIC-fasteoi acpi
> > > > > > 12: 10 108 269 2696 IO-APIC-edge i8042
> > > > > > 16: 36 10 111 2 IO-APIC-fasteoi ehci_hcd:usb1
> > > > > > 17: 20 3 25 4 IO-APIC-fasteoi mmc0
> > > > > > 21: 3 0 34 0 IO-APIC-fasteoi ehci_hcd:usb2
> > > > > > 40: 0 1 12 11 PCI-MSI-edge mei_me
> > > > > > 41: 16888 173 9959 292 PCI-MSI-edge ahci
> > > > > > 42: 1102 11 186 26 PCI-MSI-edge xhci_hcd
> > > > > > 43: 107 132 27 136 PCI-MSI-edge i915
> > > > > > 44: 6943 20 434 22 PCI-MSI-edge iwlwifi
> > > > > > 45: 180 0 183 86 PCI-MSI-edge snd_hda_intel
> > > > > > 46: 0 3 0 0 PCI-MSI-edge eth0
> > > > > > NMI: 4 3 3 3 Non-maskable interrupts
> > > > > > LOC: 26845 24780 19025 17746 Local timer interrupts
> > > > > > SPU: 0 0 0 0 Spurious interrupts
> > > > > > PMI: 4 3 3 3 Performance monitoring interrupts
> > > > > > IWI: 1637 751 1287 695 IRQ work interrupts
> > > > > > RTR: 3 0 0 0 APIC ICR read retries
> > > > > > RES: 26511 26673 18791 20194 Rescheduling interrupts
> > > > > > CAL: 510 480 151 370 Function call interrupts
> > > > > > TLB: 361 292 575 461 TLB shootdowns
> > > > > > TRM: 0 0 0 0 Thermal event interrupts
> > > > > > THR: 0 0 0 0 Threshold APIC interrupts
> > > > > > MCE: 0 0 0 0 Machine check exceptions
> > > > > > MCP: 2 2 2 2 Machine check polls
> > > > > > ERR: 0
> > > > > > MIS: 0
> > > > > >
> > > > > > Bad case before:
> > > > > > CPU0 CPU1 CPU2 CPU3
> > > > > > 0: 17 0 0 0 IO-APIC-edge timer
> > > > > > 1: 172 3 78 3 IO-APIC-edge i8042
> > > > > > 8: 0 1 0 0 IO-APIC-edge rtc0
> > > > > > 9: 1200 148 395 81 IO-APIC-fasteoi acpi
> > > > > > 12: 1625 2 348 10 IO-APIC-edge i8042
> > > > > > 16: 26 23 115 0 IO-APIC-fasteoi ehci_hcd:usb1
> > > > > > 17: 16 3 12 21 IO-APIC-fasteoi mmc0
> > > > > > 21: 2 2 33 0 IO-APIC-fasteoi ehci_hcd:usb2
> > > > > > 40: 0 0 14 10 PCI-MSI-edge mei_me
> > > > > > 41: 15776 374 8497 687 PCI-MSI-edge ahci
> > > > > > 42: 1297 829 115 24 PCI-MSI-edge xhci_hcd
> > > > > > 43: 103 149 9 212 PCI-MSI-edge i915
> > > > > > 44: 13151 101 511 91 PCI-MSI-edge iwlwifi
> > > > > > 45: 153 159 0 122 PCI-MSI-edge snd_hda_intel
> > > > > > 46: 0 1 1 0 PCI-MSI-edge eth0
> > > > > > NMI: 32 31 31 31 Non-maskable interrupts
> > > > > > LOC: 82504 82732 74172 75985 Local timer interrupts
> > > > > > SPU: 0 0 0 0 Spurious interrupts
> > > > > > PMI: 32 31 31 31 Performance monitoring interrupts
> > > > > > IWI: 17816 16278 13833 13282 IRQ work interrupts
> > > > > > RTR: 3 0 0 0 APIC ICR read retries
> > > > > > RES: 18784 21084 13313 12946 Rescheduling interrupts
> > > > > > CAL: 393 422 306 356 Function call interrupts
> > > > > > TLB: 231 176 235 191 TLB shootdowns
> > > > > > TRM: 0 0 0 0 Thermal event interrupts
> > > > > > THR: 0 0 0 0 Threshold APIC interrupts
> > > > > > MCE: 0 0 0 0 Machine check exceptions
> > > > > > MCP: 3 3 3 3 Machine check polls
> > > > > > ERR: 0
> > > > > > MIS: 0
> > > > > >
> > > > > > Bad case after:
> > > > > > CPU0 CPU1 CPU2 CPU3
> > > > > > 0: 17 0 0 0 IO-APIC-edge timer
> > > > > > 1: 415 3 85 3 IO-APIC-edge i8042
> > > > > > 8: 0 1 0 0 IO-APIC-edge rtc0
> > > > > > 9: 1277 148 428 81 IO-APIC-fasteoi acpi
> > > > > > 12: 1625 2 348 10 IO-APIC-edge i8042
> > > > > > 16: 26 23 115 0 IO-APIC-fasteoi ehci_hcd:usb1
> > > > > > 17: 16 3 12 21 IO-APIC-fasteoi mmc0
> > > > > > 21: 2 2 33 0 IO-APIC-fasteoi ehci_hcd:usb2
> > > > > > 40: 0 0 14 10 PCI-MSI-edge mei_me
> > > > > > 41: 17814 374 8497 687 PCI-MSI-edge ahci
> > > > > > 42: 1567 829 115 24 PCI-MSI-edge xhci_hcd
> > > > > > 43: 103 177 9 242 PCI-MSI-edge i915
> > > > > > 44: 14956 101 511 91 PCI-MSI-edge iwlwifi
> > > > > > 45: 153 159 0 122 PCI-MSI-edge snd_hda_intel
> > > > > > 46: 0 1 1 0 PCI-MSI-edge eth0
> > > > > > NMI: 36 35 34 34 Non-maskable interrupts
> > > > > > LOC: 92429 92708 81714 84071 Local timer interrupts
> > > > > > SPU: 0 0 0 0 Spurious interrupts
> > > > > > PMI: 36 35 34 34 Performance monitoring interrupts
> > > > > > IWI: 22594 19658 17439 14257 IRQ work interrupts
> > > > > > RTR: 3 0 0 0 APIC ICR read retries
> > > > > > RES: 21491 24670 14618 14569 Rescheduling interrupts
> > > > > > CAL: 441 439 306 356 Function call interrupts
> > > > > > TLB: 232 181 274 465 TLB shootdowns
> > > > > > TRM: 0 0 0 0 Thermal event interrupts
> > > > > > THR: 0 0 0 0 Threshold APIC interrupts
> > > > > > MCE: 0 0 0 0 Machine check exceptions
> > > > > > MCP: 3 3 3 3 Machine check polls
> > > > > > ERR: 0
> > > > > > MIS: 0
> > > > >
> > > > > Lots more local timer interrupts, which is consistent with the higher
> > > > > time in interrupt handlers for the bad case.
> > > > >
> > > > > > > One hypothesis is that your workload and configuration are interacting
> > > > > > > with RCU_FAST_NO_HZ to force very large numbers of RCU grace periods.
> > > > > > > Could you please check for this by building with CONFIG_RCU_TRACE=y,
> > > > > > > mounting debugfs somewhere and dumping rcu/rcu_sched/rcugp before and
> > > > > > > after each run?
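With CONFIG_RCU_TRACE=y, the counters in question can be read as follows (a sketch; the mount step is only needed if debugfs is not already mounted):

  mount -t debugfs none /sys/kernel/debug    # skip if debugfs is already mounted
  cat /sys/kernel/debug/rcu/rcu_sched/rcugp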
> > > > > >
> > > > > > Good case before:
> > > > > > completed=8756 gpnum=8757 age=0 max=21
> > > > > > after:
> > > > > > completed=14686 gpnum=14687 age=0 max=21
> > > > > >
> > > > > > Bad case before:
> > > > > > completed=22970 gpnum=22971 age=0 max=21
> > > > > > after:
> > > > > > completed=26110 gpnum=26111 age=0 max=21
> > > > >
> > > > > In the good case, (14686-8756)/40=148.25 grace periods per second, which
> > > > > is a fast but reasonable rate given your HZ=250. Not a large difference
> > > > > in the number of grace periods, but extrapolating for the longer runtime,
> > > > > maybe ten times as much. But not much change in grace-period rate per
> > > > > unit time.
> > > > >
> > > > > > The test scenario was the following in both cases (mixed English and pseudo-bash):
> > > > > > reboot, login, start terminal
> > > > > > cd project
> > > > > > rm -r build
> > > > > > cat /proc/interrupts >> somefile ; cat /sys/kernel/debug/rcu/rcu_sched/rcugp >> somefile
> > > > > > scons -j4
> > > > > > wait ~40 sec (good case finished, Ctrl-C in bad case)
> > > > > > cat /proc/interrupts >> somefile ; cat /sys/kernel/debug/rcu/rcu_sched/rcugp >> somefile
> > > > > >
> > > > > > I stopped the build in the bad case after about the same time the good
> > > > > > case finished, so the extra interrupts and RCU grace periods caused by a
> > > > > > longer runtime don't skew the results.
> > > > >
> > > > > That procedure works for me, thank you for laying it out carefully.
> > > > >
> > > > > I believe I see what is going on and how to fix it, though it may take
> > > > > me a bit to work things through and get a good patch.
> > > > >
> > > > > Thank you very much for your testing efforts!
> > > >
> > > > I'm glad I can help. I've been using Linux for many years, and now I have
> > > > a chance to help the community, to do something in return. I'm quite
> > > > enjoying this :)
> > >
> > > ;-)
> > >
> > > Here is a patch that is more likely to help. I am testing it in parallel,
> > > but figured I should send you a sneak preview.
> >
> > I tried it, but I don't see any difference in overall performance. The dstat
> > also shows the same as before.
> >
> > But I did notice something. Occasionally there is an increase in userspace
> > CPU usage, interrupts and context switches are dropping, and it really gets
> > more work done (scons printed commands more frequently). I checked that
> > this behaviour is present without your patch; I just didn't notice it
> > before. Maybe you can make some sense out of it.
> >
> > ----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
> > time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
> > 27-08 20:51:53| 23 62 5 0 11 0| 0 0 | 0 0 | 0 0 |1274 3102k| 0 7934M| 549M 56.0M 491M 6698M| 0 28 156 159
> > 27-08 20:51:54| 24 64 1 0 11 0| 0 0 | 0 0 | 0 0 |1317 3165k| 0 7934M| 549M 56.0M 491M 6698M| 0 53 189 182
> > 27-08 20:51:55| 33 50 6 2 9 0| 192k 1832k| 0 0 | 0 0 |1371 2442k| 0 7934M| 544M 56.0M 492M 6702M| 0 30k 17k 17k
> > 27-08 20:51:56| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1313 3220k| 0 7934M| 544M 56.0M 492M 6701M| 0 21 272 232
> > 27-08 20:51:57| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1319 3226k| 0 7934M| 544M 56.0M 492M 6701M| 0 8 96 112
> > 27-08 20:51:58| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1317 3224k| 0 7934M| 544M 56.0M 492M 6701M| 0 12 145 141
> > 27-08 20:51:59| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1317 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 54 193 191
> > 27-08 20:52:00| 25 63 0 0 12 0| 0 24k| 0 0 | 0 0 |1336 3216k| 0 7934M| 544M 56.0M 492M 6701M| 0 36 161 172
> > 27-08 20:52:01| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1313 3225k| 0 7934M| 544M 56.0M 492M 6701M| 0 9 107 107
> > 27-08 20:52:02| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1327 3224k| 0 7934M| 545M 56.0M 492M 6701M| 0 13 193 200
> > 27-08 20:52:03| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1311 3226k| 0 7934M| 545M 56.0M 492M 6701M| 0 13 114 114
> > 27-08 20:52:04| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1331 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 41 190 178
> > 27-08 20:52:05| 24 64 0 0 12 0| 0 8192B| 0 0 | 0 0 |1315 3222k| 0 7934M| 544M 56.0M 492M 6701M| 0 30 123 122
> > 27-08 20:52:06| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1314 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 16 187 195
> > 27-08 20:52:07| 25 63 1 0 12 0|2212k 192k| 0 0 | 0 0 |1637 3194k| 0 7934M| 544M 56.2M 494M 6699M| 0 1363 2590 1947
> > 27-08 20:52:08| 17 33 18 26 6 0|3208k 0 | 0 0 | 0 0 |1351 1766k| 0 7934M| 561M 56.3M 499M 6678M| 4 10k 7620 2055
> > 27-08 20:52:09| 47 21 16 13 4 0|4332k 628k| 0 0 | 0 0 |1400 1081k| 0 7934M| 647M 56.3M 504M 6587M| 10 24k 25k 1151
> > 27-08 20:52:10| 36 34 13 11 6 0|2636k 2820k| 0 0 | 0 0 |1451 1737k| 0 7934M| 598M 56.3M 507M 6632M| 5 19k 16k 28k
> > 27-08 20:52:11| 46 17 10 25 3 0|4288k 536k| 0 0 | 0 0 |1386 868k| 0 7934M| 613M 56.3M 513M 6611M| 24 13k 8908 3616
> > 27-08 20:52:12| 53 33 5 4 5 0|4740k 3992k| 0 0 | 0 0 |1773 1464k| 0 7934M| 562M 56.6M 521M 6654M| 0 36k 29k 40k
> > 27-08 20:52:13| 60 34 0 1 6 0|4228k 1416k| 0 0 | 0 0 |1642 1670k| 0 7934M| 593M 56.6M 526M 6618M| 0 36k 26k 17k
> > 27-08 20:52:14| 53 37 1 3 7 0|3008k 1976k| 0 0 | 0 0 |1513 1972k| 0 7934M| 541M 56.6M 529M 6668M| 0 10k 9986 23k
> > 27-08 20:52:15| 55 34 1 4 6 0|3636k 1284k| 0 0 | 0 0 |1645 1688k| 0 7934M| 581M 56.6M 535M 6622M| 0 43k 30k 18k
> > 27-08 20:52:16| 57 30 5 2 5 0|4404k 2320k| 0 0 | 0 0 |1715 1489k| 0 7934M| 570M 56.6M 541M 6627M| 0 39k 24k 26k
> > 27-08 20:52:17| 50 35 3 7 6 0|2520k 1972k| 0 0 | 0 0 |1699 1598k| 0 7934M| 587M 56.6M 554M 6596M| 0 65k 40k 32k
> > 27-08 20:52:18| 52 40 2 1 7 0|1556k 1732k| 0 0 | 0 0 |1582 1865k| 0 7934M| 551M 56.6M 567M 6619M| 0 35k 26k 33k
> > 27-08 20:52:19| 26 62 0 0 12 0| 0 0 | 0 0 | 0 0 |1351 3240k| 0 7934M| 551M 56.6M 568M 6619M| 0 86 440 214
> > 27-08 20:52:20| 26 63 0 0 11 0| 0 108k| 0 0 | 0 0 |1392 3162k| 0 7934M| 555M 56.6M 560M 6623M| 0 1801 1490 2672
> > 27-08 20:52:21| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1332 3198k| 0 7934M| 555M 56.6M 560M 6623M| 0 50 245 255
> > 27-08 20:52:22| 25 63 1 0 12 0| 0 0 | 0 0 | 0 0 |1350 3220k| 0 7934M| 556M 56.6M 560M 6622M| 0 755 544 286
> > 27-08 20:52:23| 27 62 1 0 11 0| 0 272k| 0 0 | 0 0 |1323 3092k| 0 7934M| 551M 56.6M 558M 6628M| 0 341 1464 3085
> > 27-08 20:52:24| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1334 3197k| 0 7934M| 551M 56.6M 559M 6627M| 0 63 514 273
> > 27-08 20:52:25| 25 63 0 0 12 0| 0 40k| 0 0 | 0 0 |1329 3243k| 0 7934M| 546M 56.6M 558M 6633M| 0 321 160 1679
> > 27-08 20:52:26| 39 50 1 1 9 0| 48k 644k| 0 0 | 0 0 |1500 2556k| 0 7934M| 552M 56.6M 560M 6625M| 0 30k 14k 12k
> > 27-08 20:52:27| 26 62 1 0 11 0| 0 192k| 0 0 | 0 0 |1380 3152k| 0 7934M| 553M 56.6M 560M 6624M| 0 2370 808 718
> > 27-08 20:52:28| 23 55 12 0 10 0| 0 0 | 0 0 | 0 0 |1247 2993k| 0 7934M| 553M 56.6M 561M 6624M| 0 1060 428 241
> > 27-08 20:52:29| 25 63 1 0 11 0| 0 0 | 0 0 | 0 0 |1318 3142k| 0 7934M| 554M 56.6M 561M 6623M| 0 663 442 198
> > 27-08 20:52:30| 25 64 0 0 12 0| 0 100k| 0 0 | 0 0 |1316 3212k| 0 7934M| 554M 56.6M 561M 6623M| 0 42 187 186
> > 27-08 20:52:31| 24 64 0 0 12 0| 0 4096B| 0 0 | 0 0 |1309 3222k| 0 7934M| 554M 56.6M 561M 6623M| 0 9 85 85
> > 27-08 20:52:32| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1297 3219k| 0 7934M| 554M 56.6M 561M 6623M| 0 23 95 84
> > 27-08 20:52:33| 25 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1339 3101k| 0 7934M| 555M 56.6M 557M 6625M| 0 923 1566 2176
> > 27-08 20:52:34| 25 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1333 3095k| 0 7934M| 555M 56.6M 559M 6623M| 0 114 701 195
> > 27-08 20:52:35| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1342 3036k| 0 7934M| 557M 56.6M 568M 6613M| 0 957 3225 516
> > 27-08 20:52:36| 26 60 2 0 11 0| 0 28k| 0 0 | 0 0 |1340 3091k| 0 7934M| 555M 56.6M 568M 6614M| 0 5304 5422 5609
> > 27-08 20:52:37| 25 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1348 3260k| 0 7934M| 556M 56.6M 565M 6617M| 0 30 156 1073
> > 27-08 20:52:38| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1314 3211k| 0 7934M| 556M 56.6M 549M 6633M| 0 11 105 4285
> > 27-08 20:52:39| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1353 3031k| 0 7934M| 558M 56.6M 559M 6620M| 0 847 3866 357
> > 27-08 20:52:40| 26 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1309 3135k| 0 7934M| 569M 56.6M 566M 6602M| 0 3940 5727 1288
>
> Interesting. The number of context switches drops during the time
> that throughput improves. It would be good to find out what task(s)
> are doing all of the context switches. One way to find the pids of the
> top 20 context-switching tasks should be something like this:
>
> grep ctxt /proc/*/status | sort -k2nr | head -20
>
> You could use any number of methods to map back to the command.
> When generating my last patch, I was assuming that ksoftirqd would be
> the big offender. Of course, if it is something else, I should be
> taking a different approach.
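One way to map those pids back to their commands, as a sketch built only on standard /proc files (not taken verbatim from the thread):

  grep ctxt /proc/*/status | sort -k2nr | head -20 |
  while IFS=: read -r file _; do
      pid=${file#/proc/}; pid=${pid%/status}
      # kernel threads have an empty cmdline; /proc/$pid/comm still names them
      printf '%6s  %s\n' "$pid" "$(tr '\0' ' ' < /proc/$pid/cmdline)"
  done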
I'll measure it after the weekend.
Tibor
* Re: Unusually high system CPU usage with recent kernels
@ 2013-10-06 8:42 Tibor Billes
From: Tibor Billes @ 2013-10-06 8:42 UTC (permalink / raw)
To: paulmck; +Cc: linux-kernel
> From: Paul E. McKenney Sent: 10/01/13 10:27 PM
> To: Tibor Billes
> Subject: Re: Unusually high system CPU usage with recent kernels
>
> On Sat, Sep 14, 2013 at 03:59:51PM +0200, Tibor Billes wrote:
> > > From: Paul E. McKenney Sent: 09/13/13 02:19 AM
> > > On Wed, Sep 11, 2013 at 08:46:04AM +0200, Tibor Billes wrote:
> > > > > From: Paul E. McKenney Sent: 09/09/13 10:44 PM
>
> [ . . . ]
>
> > > > Sure. The attached tar file contains traces of good kernels. The first is with
> > > > version 3.9.7 (no patch applied) which was the last stable kernel I tried and
> > > > didn't have this issue. The second is version 3.11 with your fix applied.
> > > > Judging by the size of the traces, 3.11.0+ is still doing more work than
> > > > 3.9.7.
> > >
> > > Indeed, though quite a bit less than the problematic traces.
> > >
> > > Did you have all three patches applied to 3.11.0, or just the last one?
> > > If the latter, could you please try it with all three?
> >
> > Only the last one was applied to 3.11.0. The attachment now contains the
> > RCU trace with all three applied. It seems to be smaller in size, but still
> > not close to 3.9.7.
> >
> > > > > > I'm not sure about LKML policies about attaching not-so-small files to
> > > > > > emails, so I've dropped LKML from the CC list. Please CC the mailing
> > > > > > list in your reply.
> > > > >
> > > > > Done!
> > > > >
> > > > > Another approach is to post the traces on the web and send the URL to
> > > > > LKML. But whatever works for you is fine by me.
> > > >
> > > > Sending directly to you won again :) Could you please CC the list in your
> > > > reply?
> > >
> > > Done! ;-)
> >
> > Could you please CC the list in your reply again? :)
>
> This trace is quite strange -- there are a number of processes that sleep
> and then wake up almost immediately, as in within a microsecond or so.
> I don't know why they would be doing that, and I also cannot think of
> what RCU might be doing to make them do that.
>
> I am unfortunately out of ideas on this one.
Okay. Thank you for all your help! :)
Tibor
* Re: Unusually high system CPU usage with recent kernels
[not found] <20130914135951.288070@gmx.com>
@ 2013-10-01 20:27 ` Paul E. McKenney
From: Paul E. McKenney @ 2013-10-01 20:27 UTC (permalink / raw)
To: Tibor Billes; +Cc: linux-kernel
On Sat, Sep 14, 2013 at 03:59:51PM +0200, Tibor Billes wrote:
> > From: Paul E. McKenney Sent: 09/13/13 02:19 AM
> > On Wed, Sep 11, 2013 at 08:46:04AM +0200, Tibor Billes wrote:
> > > > From: Paul E. McKenney Sent: 09/09/13 10:44 PM
[ . . . ]
> > > Sure. The attached tar file contains traces of good kernels. The first is with
> > > version 3.9.7 (no patch applied) which was the last stable kernel I tried and
> > > didn't have this issue. The second is version 3.11 with your fix applied.
> > > Judging by the size of the traces, 3.11.0+ is still doing more work than
> > > 3.9.7.
> >
> > Indeed, though quite a bit less than the problematic traces.
> >
> > Did you have all three patches applied to 3.11.0, or just the last one?
> > If the latter, could you please try it with all three?
>
> Only the last one was applied to 3.11.0. The attachment now contains the
> RCU trace with all three applied. It seems to be smaller in size, but still
> not close to 3.9.7.
>
> > > > > I'm not sure about LKML policies about attaching not-so-small files to
> > > > > emails, so I've dropped LKML from the CC list. Please CC the mailing
> > > > > list in your reply.
> > > >
> > > > Done!
> > > >
> > > > Another approach is to post the traces on the web and send the URL to
> > > > LKML. But whatever works for you is fine by me.
> > >
> > > Sending directly to you won again :) Could you please CC the list in your
> > > reply?
> >
> > Done! ;-)
>
> Could you please CC the list in your reply again? :)
This trace is quite strange -- there are a number of processes that sleep
and then wake up almost immediately, as in within a microsecond or so.
I don't know why they would be doing that, and I also cannot think of
what RCU might be doing to make them do that.
I am unfortunately out of ideas on this one.
Thanx, Paul
* Re: Unusually high system CPU usage with recent kernels
@ 2013-09-28 14:13 Tibor Billes
From: Tibor Billes @ 2013-09-28 14:13 UTC (permalink / raw)
To: paulmck; +Cc: linux-kernel
Hi Paul,
I was just wondering if you received my last mail, because I haven't heard from
you for a while now.
Tibor
> ----- Original Message -----
> From: Tibor Billes
> Sent: 09/14/13 03:59 PM
> To: paulmck@linux.vnet.ibm.com
> Subject: Re: Unusually high system CPU usage with recent kernels
>
> > From: Paul E. McKenney Sent: 09/13/13 02:19 AM
> > On Wed, Sep 11, 2013 at 08:46:04AM +0200, Tibor Billes wrote:
> > > > From: Paul E. McKenney Sent: 09/09/13 10:44 PM
> > > > On Mon, Sep 09, 2013 at 09:47:37PM +0200, Tibor Billes wrote:
> > > > > > From: Paul E. McKenney Sent: 09/08/13 08:43 PM
> > > > > > On Sun, Sep 08, 2013 at 07:22:45PM +0200, Tibor Billes wrote:
> > > > > > > Good news Paul, the above patch did solve this issue :) I see no extra
> > > > > > > context switches, no extra CPU usage and no extra compile time.
> > > > > >
> > > > > > Woo-hoo!!! ;-)
> > > > > >
> > > > > > May I add your Tested-by to the fix?
> > > > >
> > > > > Yes, please :)
> > > >
> > > > Done! ;-)
> > > >
> > > > > > > Any idea why couldn't you reproduce this? Why did it only hit my system?
> > > > > >
> > > > > > Timing, maybe? Another question is "Why didn't lots of people complain
> > > > > > about this?" It would be good to find out because it is quite possible
> > > > > > that there is some other bug that this patch is masking -- or even just
> > > > > > making less probable.
> > > > >
> > > > > Good point!
> > > > >
> > > > > > If you are interested, please build one of the old kernels but with
> > > > > > CONFIG_RCU_TRACE=y. Then run something like the following as root
> > > > > > concurrently with your workload:
> > > > > >
> > > > > > sleep 10
> > > > > > echo 1 > /sys/kernel/debug/tracing/events/rcu/enable
> > > > > > sleep 0.01
> > > > > > echo 0 > /sys/kernel/debug/tracing/events/rcu/enable
> > > > > > cat /sys/kernel/debug/tracing/trace > /tmp/trace
> > > > > >
> > > > > > Send me the /tmp/trace file, which will probably be a few megabytes in
> > > > > > size, so please compress it before sending. ;-) A good compression
> > > > > > tool should be able to shrink it by a factor of 20 or thereabouts.
> > > > >
> > > > > Ok, I did that. Twice! The first is with commit
> > > > > 910ee45db2f4837c8440e770474758493ab94bf7, which was the first bad commit
> > > > > according to the bisection I did initially. Second with the current
> > > > > mainline 3.11. I have little idea of what the fields and lines mean in
> > > > > the RCU trace files, so I'm not going to guess if they are essentially
> > > > > the same or not, but it may provide more information to you. Both files
> > > > > were created by using a shell script containing the commands you
> > > > > suggested.
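That capture script plausibly looked something like the following (a sketch; the output path and the use of xz for compression are assumptions):

  #!/bin/sh
  sleep 10
  echo 1 > /sys/kernel/debug/tracing/events/rcu/enable
  sleep 0.01
  echo 0 > /sys/kernel/debug/tracing/events/rcu/enable
  cat /sys/kernel/debug/tracing/trace > /tmp/trace
  xz -9 /tmp/trace   # the raw trace is several megabytes; compress it before mailing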
> > > >
> > > > So traces both correspond to bad cases, correct? They are both quite
> > > > impressive -- looks like you have quite the interrupt rate going on there!
> > > > Almost looks like interrupts are somehow getting enabled on the path
> > > > to/from idle.
> > > >
> > > > Could you please also send along a trace with the fix applied?
> > >
> > > Sure. The attached tar file contains traces of good kernels. The first is with
> > > version 3.9.7 (no patch applied) which was the last stable kernel I tried and
> > > didn't have this issue. The second is version 3.11 with your fix applied.
> > > Judging by the size of the traces, 3.11.0+ is still doing more work than
> > > 3.9.7.
> >
> > Indeed, though quite a bit less than the problematic traces.
> >
> > Did you have all three patches applied to 3.11.0, or just the last one?
> > If the latter, could you please try it with all three?
>
> Only the last one was applied to 3.11.0. The attachment now contains the
> RCU trace with all three applied. It seems to be smaller in size, but still
> not close to 3.9.7.
>
> > > > > I'm not sure about LKML policies about attaching not-so-small files to
> > > > > emails, so I've dropped LKML from the CC list. Please CC the mailing
> > > > > list in your reply.
> > > >
> > > > Done!
> > > >
> > > > Another approach is to post the traces on the web and send the URL to
> > > > LKML. But whatever works for you is fine by me.
> > >
> > > Sending directly to you won again :) Could you please CC the list in your
> > > reply?
> >
> > Done! ;-)
>
> Could you please CC the list in your reply again? :)
>
> Tibor
* Re: Unusually high system CPU usage with recent kernels
[not found] <20130911064605.288090@gmx.com>
@ 2013-09-13 0:19 ` Paul E. McKenney
From: Paul E. McKenney @ 2013-09-13 0:19 UTC (permalink / raw)
To: Tibor Billes; +Cc: linux-kernel
On Wed, Sep 11, 2013 at 08:46:04AM +0200, Tibor Billes wrote:
> > From: Paul E. McKenney Sent: 09/09/13 10:44 PM
> > On Mon, Sep 09, 2013 at 09:47:37PM +0200, Tibor Billes wrote:
> > > > From: Paul E. McKenney Sent: 09/08/13 08:43 PM
> > > > On Sun, Sep 08, 2013 at 07:22:45PM +0200, Tibor Billes wrote:
> > > > > > From: Paul E. McKenney Sent: 09/07/13 02:23 AM
> > > > > > On Tue, Sep 03, 2013 at 03:16:07PM -0700, Paul E. McKenney wrote:
> > > > > > > On Tue, Sep 03, 2013 at 11:11:01PM +0200, Tibor Billes wrote:
> > > > > > > > > From: Paul E. McKenney Sent: 08/30/13 03:24 AM
> > > > > > > > > On Tue, Aug 27, 2013 at 10:05:42PM +0200, Tibor Billes wrote:
> > > > > > > > > > From: Paul E. McKenney Sent: 08/26/13 06:28 AM
> > > > > > > > > > > Here is a patch that is more likely to help. I am testing it in parallel,
> > > > > > > > > > > but figured I should send you a sneak preview.
> > > > > > > > > >
> > > > > > > > > > I tried it, but I don't see any difference in overall performance. The dstat
> > > > > > > > > > also shows the same as before.
> > > > > > > > > >
> > > > > > > > > > But I did notice something. Occasionally there is an increase in userspace
> > > > > > > > > > CPU usage, interrupts and context switches are dropping, and it really gets
> > > > > > > > > > more work done (scons printed commands more frequently). I checked that
> > > > > > > > > > this behaviour is present without your patch, I just didn't notice this
> > > > > > > > > > before. Maybe you can make some sense out of it.
> > > > > > > > > >
> > > > > > > > > > ----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
> > > > > > > > > > time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
> > > > > > > > > > 27-08 20:51:53| 23 62 5 0 11 0| 0 0 | 0 0 | 0 0 |1274 3102k| 0 7934M| 549M 56.0M 491M 6698M| 0 28 156 159
> > > > > > > > > > 27-08 20:51:54| 24 64 1 0 11 0| 0 0 | 0 0 | 0 0 |1317 3165k| 0 7934M| 549M 56.0M 491M 6698M| 0 53 189 182
> > > > > > > > > > 27-08 20:51:55| 33 50 6 2 9 0| 192k 1832k| 0 0 | 0 0 |1371 2442k| 0 7934M| 544M 56.0M 492M 6702M| 0 30k 17k 17k
> > > > > > > > > > 27-08 20:51:56| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1313 3220k| 0 7934M| 544M 56.0M 492M 6701M| 0 21 272 232
> > > > > > > > > > 27-08 20:51:57| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1319 3226k| 0 7934M| 544M 56.0M 492M 6701M| 0 8 96 112
> > > > > > > > > > 27-08 20:51:58| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1317 3224k| 0 7934M| 544M 56.0M 492M 6701M| 0 12 145 141
> > > > > > > > > > 27-08 20:51:59| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1317 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 54 193 191
> > > > > > > > > > 27-08 20:52:00| 25 63 0 0 12 0| 0 24k| 0 0 | 0 0 |1336 3216k| 0 7934M| 544M 56.0M 492M 6701M| 0 36 161 172
> > > > > > > > > > 27-08 20:52:01| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1313 3225k| 0 7934M| 544M 56.0M 492M 6701M| 0 9 107 107
> > > > > > > > > > 27-08 20:52:02| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1327 3224k| 0 7934M| 545M 56.0M 492M 6701M| 0 13 193 200
> > > > > > > > > > 27-08 20:52:03| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1311 3226k| 0 7934M| 545M 56.0M 492M 6701M| 0 13 114 114
> > > > > > > > > > 27-08 20:52:04| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1331 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 41 190 178
> > > > > > > > > > 27-08 20:52:05| 24 64 0 0 12 0| 0 8192B| 0 0 | 0 0 |1315 3222k| 0 7934M| 544M 56.0M 492M 6701M| 0 30 123 122
> > > > > > > > > > 27-08 20:52:06| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1314 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 16 187 195
> > > > > > > > > > 27-08 20:52:07| 25 63 1 0 12 0|2212k 192k| 0 0 | 0 0 |1637 3194k| 0 7934M| 544M 56.2M 494M 6699M| 0 1363 2590 1947
> > > > > > > > > > 27-08 20:52:08| 17 33 18 26 6 0|3208k 0 | 0 0 | 0 0 |1351 1766k| 0 7934M| 561M 56.3M 499M 6678M| 4 10k 7620 2055
> > > > > > > > > > 27-08 20:52:09| 47 21 16 13 4 0|4332k 628k| 0 0 | 0 0 |1400 1081k| 0 7934M| 647M 56.3M 504M 6587M| 10 24k 25k 1151
> > > > > > > > > > 27-08 20:52:10| 36 34 13 11 6 0|2636k 2820k| 0 0 | 0 0 |1451 1737k| 0 7934M| 598M 56.3M 507M 6632M| 5 19k 16k 28k
> > > > > > > > > > 27-08 20:52:11| 46 17 10 25 3 0|4288k 536k| 0 0 | 0 0 |1386 868k| 0 7934M| 613M 56.3M 513M 6611M| 24 13k 8908 3616
> > > > > > > > > > 27-08 20:52:12| 53 33 5 4 5 0|4740k 3992k| 0 0 | 0 0 |1773 1464k| 0 7934M| 562M 56.6M 521M 6654M| 0 36k 29k 40k
> > > > > > > > > > 27-08 20:52:13| 60 34 0 1 6 0|4228k 1416k| 0 0 | 0 0 |1642 1670k| 0 7934M| 593M 56.6M 526M 6618M| 0 36k 26k 17k
> > > > > > > > > > 27-08 20:52:14| 53 37 1 3 7 0|3008k 1976k| 0 0 | 0 0 |1513 1972k| 0 7934M| 541M 56.6M 529M 6668M| 0 10k 9986 23k
> > > > > > > > > > 27-08 20:52:15| 55 34 1 4 6 0|3636k 1284k| 0 0 | 0 0 |1645 1688k| 0 7934M| 581M 56.6M 535M 6622M| 0 43k 30k 18k
> > > > > > > > > > 27-08 20:52:16| 57 30 5 2 5 0|4404k 2320k| 0 0 | 0 0 |1715 1489k| 0 7934M| 570M 56.6M 541M 6627M| 0 39k 24k 26k
> > > > > > > > > > 27-08 20:52:17| 50 35 3 7 6 0|2520k 1972k| 0 0 | 0 0 |1699 1598k| 0 7934M| 587M 56.6M 554M 6596M| 0 65k 40k 32k
> > > > > > > > > > 27-08 20:52:18| 52 40 2 1 7 0|1556k 1732k| 0 0 | 0 0 |1582 1865k| 0 7934M| 551M 56.6M 567M 6619M| 0 35k 26k 33k
> > > > > > > > > > 27-08 20:52:19| 26 62 0 0 12 0| 0 0 | 0 0 | 0 0 |1351 3240k| 0 7934M| 551M 56.6M 568M 6619M| 0 86 440 214
> > > > > > > > > > 27-08 20:52:20| 26 63 0 0 11 0| 0 108k| 0 0 | 0 0 |1392 3162k| 0 7934M| 555M 56.6M 560M 6623M| 0 1801 1490 2672
> > > > > > > > > > 27-08 20:52:21| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1332 3198k| 0 7934M| 555M 56.6M 560M 6623M| 0 50 245 255
> > > > > > > > > > 27-08 20:52:22| 25 63 1 0 12 0| 0 0 | 0 0 | 0 0 |1350 3220k| 0 7934M| 556M 56.6M 560M 6622M| 0 755 544 286
> > > > > > > > > > 27-08 20:52:23| 27 62 1 0 11 0| 0 272k| 0 0 | 0 0 |1323 3092k| 0 7934M| 551M 56.6M 558M 6628M| 0 341 1464 3085
> > > > > > > > > > 27-08 20:52:24| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1334 3197k| 0 7934M| 551M 56.6M 559M 6627M| 0 63 514 273
> > > > > > > > > > 27-08 20:52:25| 25 63 0 0 12 0| 0 40k| 0 0 | 0 0 |1329 3243k| 0 7934M| 546M 56.6M 558M 6633M| 0 321 160 1679
> > > > > > > > > > 27-08 20:52:26| 39 50 1 1 9 0| 48k 644k| 0 0 | 0 0 |1500 2556k| 0 7934M| 552M 56.6M 560M 6625M| 0 30k 14k 12k
> > > > > > > > > > 27-08 20:52:27| 26 62 1 0 11 0| 0 192k| 0 0 | 0 0 |1380 3152k| 0 7934M| 553M 56.6M 560M 6624M| 0 2370 808 718
> > > > > > > > > > 27-08 20:52:28| 23 55 12 0 10 0| 0 0 | 0 0 | 0 0 |1247 2993k| 0 7934M| 553M 56.6M 561M 6624M| 0 1060 428 241
> > > > > > > > > > 27-08 20:52:29| 25 63 1 0 11 0| 0 0 | 0 0 | 0 0 |1318 3142k| 0 7934M| 554M 56.6M 561M 6623M| 0 663 442 198
> > > > > > > > > > 27-08 20:52:30| 25 64 0 0 12 0| 0 100k| 0 0 | 0 0 |1316 3212k| 0 7934M| 554M 56.6M 561M 6623M| 0 42 187 186
> > > > > > > > > > 27-08 20:52:31| 24 64 0 0 12 0| 0 4096B| 0 0 | 0 0 |1309 3222k| 0 7934M| 554M 56.6M 561M 6623M| 0 9 85 85
> > > > > > > > > > 27-08 20:52:32| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1297 3219k| 0 7934M| 554M 56.6M 561M 6623M| 0 23 95 84
> > > > > > > > > > 27-08 20:52:33| 25 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1339 3101k| 0 7934M| 555M 56.6M 557M 6625M| 0 923 1566 2176
> > > > > > > > > > 27-08 20:52:34| 25 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1333 3095k| 0 7934M| 555M 56.6M 559M 6623M| 0 114 701 195
> > > > > > > > > > 27-08 20:52:35| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1342 3036k| 0 7934M| 557M 56.6M 568M 6613M| 0 957 3225 516
> > > > > > > > > > 27-08 20:52:36| 26 60 2 0 11 0| 0 28k| 0 0 | 0 0 |1340 3091k| 0 7934M| 555M 56.6M 568M 6614M| 0 5304 5422 5609
> > > > > > > > > > 27-08 20:52:37| 25 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1348 3260k| 0 7934M| 556M 56.6M 565M 6617M| 0 30 156 1073
> > > > > > > > > > 27-08 20:52:38| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1314 3211k| 0 7934M| 556M 56.6M 549M 6633M| 0 11 105 4285
> > > > > > > > > > 27-08 20:52:39| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1353 3031k| 0 7934M| 558M 56.6M 559M 6620M| 0 847 3866 357
> > > > > > > > > > 27-08 20:52:40| 26 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1309 3135k| 0 7934M| 569M 56.6M 566M 6602M| 0 3940 5727 1288
> > > > > > > > >
> > > > > > > > > Interesting. The number of context switches drops during the time
> > > > > > > > > that throughput improves. It would be good to find out what task(s)
> > > > > > > > > are doing all of the context switches. One way to find the pids of the
> > > > > > > > > top 20 context-switching tasks should be something like this:
> > > > > > > > >
> > > > > > > > > grep ctxt /proc/*/status | sort -k2nr | head -20
> > > > > > > > >
> > > > > > > > > You could use any number of methods to map back to the command.
> > > > > > > > > When generating my last patch, I was assuming that ksoftirqd would be
> > > > > > > > > the big offender. Of course, if it is something else, I should be
> > > > > > > > > taking a different approach.
> > > > > > > >
> > > > > > > > I did some measuring of these context switches. I used the above command
> > > > > > > > line to get the number of context switches before compiling and about 40
> > > > > > > > seconds later.
> > > > > > > >
> > > > > > > > The good case is:
> > > > > > > > file before after process name
> > > > > > > > /proc/10/status:voluntary_ctxt_switches: 38108 54327 [rcu_sched]
> > > > > > > > /proc/19/status:voluntary_ctxt_switches: 31022 38682 [ksoftirqd/2]
> > > > > > > > /proc/14/status:voluntary_ctxt_switches: 12947 16914 [ksoftirqd/1]
> > > > > > > > /proc/3/status:voluntary_ctxt_switches: 11061 14885 [ksoftirqd/0]
> > > > > > > > /proc/194/status:voluntary_ctxt_switches: 10044 11660 [kworker/2:1H]
> > > > > > > > /proc/1603/status:voluntary_ctxt_switches: 7626 9593 /usr/bin/X
> > > > > > > > /proc/24/status:voluntary_ctxt_switches: 6415 9571 [ksoftirqd/3]
> > > > > > > > /proc/20/status:voluntary_ctxt_switches: 5317 8879 [kworker/2:0]
> > > > > > > >
> > > > > > > > The bad case is:
> > > > > > > > file before after process name
> > > > > > > > /proc/3/status:voluntary_ctxt_switches: 82411972 98542227 [ksoftirqd/0]
> > > > > > > > /proc/23/status:voluntary_ctxt_switches: 79592040 94206823 [ksoftirqd/3]
> > > > > > > > /proc/13/status:voluntary_ctxt_switches: 79053870 93654161 [ksoftirqd/1]
> > > > > > > > /proc/18/status:voluntary_ctxt_switches: 78136449 92288688 [ksoftirqd/2]
> > > > > > > > /proc/2293/status:nonvoluntary_ctxt_switches: 29038881 29038881 mate-panel
> > > > > > > > /proc/2308/status:nonvoluntary_ctxt_switches: 26607661 26607661 nm-applet
> > > > > > > > /proc/2317/status:nonvoluntary_ctxt_switches: 15494474 15494474 /usr/lib/polkit-mate/polkit-mate-authentication-agent-1
> > > > > > > > /proc/2148/status:nonvoluntary_ctxt_switches: 13763674 13763674 x-session-manager
> > > > > > > > /proc/2985/status:nonvoluntary_ctxt_switches: 12062706 python /usr/bin/scons -j4
> > > > > > > > /proc/2323/status:nonvoluntary_ctxt_switches: 11581510 11581510 mate-volume-control-applet
> > > > > > > > /proc/2353/status:nonvoluntary_ctxt_switches: 9213436 9213436 mate-power-manager
> > > > > > > > /proc/2305/status:nonvoluntary_ctxt_switches: 8328471 8328471 /usr/lib/matecomponent/matecomponent-activation-server
> > > > > > > > /proc/3041/status:nonvoluntary_ctxt_switches: 7638312 sh -c "/opt/arm-2011.03/bin/arm-none-eabi-gcc" [...]
> > > > > > > > /proc/3055/status:nonvoluntary_ctxt_switches: 4843977 sh -c "/opt/arm-2011.03/bin/arm-none-eabi-gcc" [...]
> > > > > > > > /proc/2253/status:nonvoluntary_ctxt_switches: 4449918 4449918 mate-keyring-daemon
> > > > > > >
> > > > > > > Thank you for the info!
> > > > > > >
> > > > > > > > The processes for the tables were partly collected by hand so they may
> > > > > > > > not contain every relevant process. But what I see is that there are a
> > > > > > > > lot more context switches in the bad case, and quite a few of them are
> > > > > > > > nonvoluntary. The context switch increase is pretty much everywhere, not
> > > > > > > > just ksoftirqd. I suspect the Mate related processes did their context
> > > > > > > > switches during bootup/login, which is consistent with the bootup/login
> > > > > > > > procedure being significantly slower (just like the compilation).
> > > > > > >
> > > > > > > Also, the non-ksoftirq context switches are nonvoluntary, which likely
> > > > > > > means that some of their context switches were preemptions by ksoftirqd.
> > > > > > >
> > > > > > > OK, time to look at whatever else might be causing this...
> > > > > >
> > > > > > I am still unable to reproduce this, so am reduced to shooting in the
> > > > > > dark. The attached patch could in theory help. If it doesn't, would
> > > > > > you be willing to do a bit of RCU event tracing?
> > > > > >
> > > > > > Thanx, Paul
> > > > > >
> > > > > > ------------------------------------------------------------------------
> > > > > >
> > > > > > rcu: Throttle invoke_rcu_core() invocations due to non-lazy callbacks
> > > > > >
> > > > > > If a non-lazy callback arrives on a CPU that has previously gone idle
> > > > > > with no non-lazy callbacks, invoke_rcu_core() forces the RCU core to
> > > > > > run. However, it does not update the conditions, which could result
> > > > > > in several closely spaced invocations of the RCU core, which in turn
> > > > > > could result in an excessively high context-switch rate and resulting
> > > > > > high overhead.
> > > > > >
> > > > > > This commit therefore updates the ->all_lazy and ->nonlazy_posted_snap
> > > > > > fields to prevent closely spaced invocations.
> > > > > >
> > > > > > Reported-by: Tibor Billes <tbilles@gmx.com>
> > > > > > Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> > > > > >
> > > > > > diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
> > > > > > index dc09fb0..9d445ab 100644
> > > > > > --- a/kernel/rcutree_plugin.h
> > > > > > +++ b/kernel/rcutree_plugin.h
> > > > > > @@ -1745,6 +1745,8 @@ static void rcu_prepare_for_idle(int cpu)
> > > > > > */
> > > > > > if (rdtp->all_lazy &&
> > > > > > rdtp->nonlazy_posted != rdtp->nonlazy_posted_snap) {
> > > > > > + rdtp->all_lazy = false;
> > > > > > + rdtp->nonlazy_posted_snap = rdtp->nonlazy_posted;
> > > > > > invoke_rcu_core();
> > > > > > return;
> > > > > > }
> > > > >
> > > > > Good news Paul, the above patch did solve this issue :) I see no extra
> > > > > context switches, no extra CPU usage and no extra compile time.
> > > >
> > > > Woo-hoo!!! ;-)
> > > >
> > > > May I add your Tested-by to the fix?
> > >
> > > Yes, please :)
> >
> > Done! ;-)
> >
> > > > > Any idea why couldn't you reproduce this? Why did it only hit my system?
> > > >
> > > > Timing, maybe? Another question is "Why didn't lots of people complain
> > > > about this?" It would be good to find out because it is quite possible
> > > > that there is some other bug that this patch is masking -- or even just
> > > > making less probable.
> > >
> > > Good point!
> > >
> > > > If you are interested, please build one of the old kernels but with
> > > > CONFIG_RCU_TRACE=y. Then run something like the following as root
> > > > concurrently with your workload:
> > > >
> > > > sleep 10
> > > > echo 1 > /sys/kernel/debug/tracing/events/rcu/enable
> > > > sleep 0.01
> > > > echo 0 > /sys/kernel/debug/tracing/events/rcu/enable
> > > > cat /sys/kernel/debug/tracing/trace > /tmp/trace
> > > >
> > > > Send me the /tmp/trace file, which will probably be a few megabytes in
> > > > size, so please compress it before sending. ;-) A good compression
> > > > tool should be able to shrink it by a factor of 20 or thereabouts.
> > >
> > > Ok, I did that. Twice! The first is with commit
> > > 910ee45db2f4837c8440e770474758493ab94bf7, which was the first bad commit
> > > according to the bisection I did initially. Second with the current
> > > mainline 3.11. I have little idea of what the fields and lines mean in
> > > the RCU trace files, so I'm not going to guess if they are essentially
> > > the same or not, but it may provide more information to you. Both files
> > > were created by using a shell script containing the commands you
> > > suggested.
> >
> > So traces both correspond to bad cases, correct? They are both quite
> > impressive -- looks like you have quite the interrupt rate going on there!
> > Almost looks like interrupts are somehow getting enabled on the path
> > to/from idle.
> >
> > Could you please also send along a trace with the fix applied?
>
> Sure. The attached tar file contains traces of good kernels. The first is with
> version 3.9.7 (no patch applied) which was the last stable kernel I tried and
> didn't have this issue. The second is version 3.11 with your fix applied.
> Judging by the size of the traces, 3.11.0+ is still doing more work than
> 3.9.7.
Indeed, though quite a bit less than the problematic traces.
Did you have all three patches applied to 3.11.0, or just the last one?
If the latter, could you please try it with all three?
> > > I'm not sure about LKML policies about attaching not-so-small files to
> > > emails, so I've dropped LKML from the CC list. Please CC the mailing
> > > list in your reply.
> >
> > Done!
> >
> > Another approach is to post the traces on the web and send the URL to
> > LKML. But whatever works for you is fine by me.
>
> Sending directly to you won again :) Could you please CC the list in your
> reply?
Done! ;-)
Thanx, Paul
* Re: Unusually high system CPU usage with recent kernels
[not found] <20130909194737.288090@gmx.com>
@ 2013-09-09 20:44 ` Paul E. McKenney
From: Paul E. McKenney @ 2013-09-09 20:44 UTC (permalink / raw)
To: Tibor Billes; +Cc: linux-kernel
On Mon, Sep 09, 2013 at 09:47:37PM +0200, Tibor Billes wrote:
> > From: Paul E. McKenney Sent: 09/08/13 08:43 PM
> > On Sun, Sep 08, 2013 at 07:22:45PM +0200, Tibor Billes wrote:
> > > > From: Paul E. McKenney Sent: 09/07/13 02:23 AM
> > > > On Tue, Sep 03, 2013 at 03:16:07PM -0700, Paul E. McKenney wrote:
> > > > > On Tue, Sep 03, 2013 at 11:11:01PM +0200, Tibor Billes wrote:
> > > > > > > From: Paul E. McKenney Sent: 08/30/13 03:24 AM
> > > > > > > On Tue, Aug 27, 2013 at 10:05:42PM +0200, Tibor Billes wrote:
> > > > > > > > From: Paul E. McKenney Sent: 08/26/13 06:28 AM
> > > > > > > > > Here is a patch that is more likely to help. I am testing it in parallel,
> > > > > > > > > but figured I should send you a sneak preview.
> > > > > > > >
> > > > > > > > I tried it, but I don't see any difference in overall performance. The dstat
> > > > > > > > also shows the same as before.
> > > > > > > >
> > > > > > > > But I did notice something. Occasionally there is an increase in userspace
> > > > > > > > CPU usage, interrupts and context switches are dropping, and it really gets
> > > > > > > > more work done (scons printed commands more frequently). I checked that
> > > > > > > > this behaviour is present without your patch, I just didn't notice this
> > > > > > > > before. Maybe you can make some sense out of it.
> > > > > > > >
> > > > > > > > ----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
> > > > > > > > time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
> > > > > > > > 27-08 20:51:53| 23 62 5 0 11 0| 0 0 | 0 0 | 0 0 |1274 3102k| 0 7934M| 549M 56.0M 491M 6698M| 0 28 156 159
> > > > > > > > 27-08 20:51:54| 24 64 1 0 11 0| 0 0 | 0 0 | 0 0 |1317 3165k| 0 7934M| 549M 56.0M 491M 6698M| 0 53 189 182
> > > > > > > > 27-08 20:51:55| 33 50 6 2 9 0| 192k 1832k| 0 0 | 0 0 |1371 2442k| 0 7934M| 544M 56.0M 492M 6702M| 0 30k 17k 17k
> > > > > > > > 27-08 20:51:56| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1313 3220k| 0 7934M| 544M 56.0M 492M 6701M| 0 21 272 232
> > > > > > > > 27-08 20:51:57| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1319 3226k| 0 7934M| 544M 56.0M 492M 6701M| 0 8 96 112
> > > > > > > > 27-08 20:51:58| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1317 3224k| 0 7934M| 544M 56.0M 492M 6701M| 0 12 145 141
> > > > > > > > 27-08 20:51:59| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1317 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 54 193 191
> > > > > > > > 27-08 20:52:00| 25 63 0 0 12 0| 0 24k| 0 0 | 0 0 |1336 3216k| 0 7934M| 544M 56.0M 492M 6701M| 0 36 161 172
> > > > > > > > 27-08 20:52:01| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1313 3225k| 0 7934M| 544M 56.0M 492M 6701M| 0 9 107 107
> > > > > > > > 27-08 20:52:02| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1327 3224k| 0 7934M| 545M 56.0M 492M 6701M| 0 13 193 200
> > > > > > > > 27-08 20:52:03| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1311 3226k| 0 7934M| 545M 56.0M 492M 6701M| 0 13 114 114
> > > > > > > > 27-08 20:52:04| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1331 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 41 190 178
> > > > > > > > 27-08 20:52:05| 24 64 0 0 12 0| 0 8192B| 0 0 | 0 0 |1315 3222k| 0 7934M| 544M 56.0M 492M 6701M| 0 30 123 122
> > > > > > > > 27-08 20:52:06| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1314 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 16 187 195
> > > > > > > > 27-08 20:52:07| 25 63 1 0 12 0|2212k 192k| 0 0 | 0 0 |1637 3194k| 0 7934M| 544M 56.2M 494M 6699M| 0 1363 2590 1947
> > > > > > > > 27-08 20:52:08| 17 33 18 26 6 0|3208k 0 | 0 0 | 0 0 |1351 1766k| 0 7934M| 561M 56.3M 499M 6678M| 4 10k 7620 2055
> > > > > > > > 27-08 20:52:09| 47 21 16 13 4 0|4332k 628k| 0 0 | 0 0 |1400 1081k| 0 7934M| 647M 56.3M 504M 6587M| 10 24k 25k 1151
> > > > > > > > 27-08 20:52:10| 36 34 13 11 6 0|2636k 2820k| 0 0 | 0 0 |1451 1737k| 0 7934M| 598M 56.3M 507M 6632M| 5 19k 16k 28k
> > > > > > > > 27-08 20:52:11| 46 17 10 25 3 0|4288k 536k| 0 0 | 0 0 |1386 868k| 0 7934M| 613M 56.3M 513M 6611M| 24 13k 8908 3616
> > > > > > > > 27-08 20:52:12| 53 33 5 4 5 0|4740k 3992k| 0 0 | 0 0 |1773 1464k| 0 7934M| 562M 56.6M 521M 6654M| 0 36k 29k 40k
> > > > > > > > 27-08 20:52:13| 60 34 0 1 6 0|4228k 1416k| 0 0 | 0 0 |1642 1670k| 0 7934M| 593M 56.6M 526M 6618M| 0 36k 26k 17k
> > > > > > > > 27-08 20:52:14| 53 37 1 3 7 0|3008k 1976k| 0 0 | 0 0 |1513 1972k| 0 7934M| 541M 56.6M 529M 6668M| 0 10k 9986 23k
> > > > > > > > 27-08 20:52:15| 55 34 1 4 6 0|3636k 1284k| 0 0 | 0 0 |1645 1688k| 0 7934M| 581M 56.6M 535M 6622M| 0 43k 30k 18k
> > > > > > > > 27-08 20:52:16| 57 30 5 2 5 0|4404k 2320k| 0 0 | 0 0 |1715 1489k| 0 7934M| 570M 56.6M 541M 6627M| 0 39k 24k 26k
> > > > > > > > 27-08 20:52:17| 50 35 3 7 6 0|2520k 1972k| 0 0 | 0 0 |1699 1598k| 0 7934M| 587M 56.6M 554M 6596M| 0 65k 40k 32k
> > > > > > > > 27-08 20:52:18| 52 40 2 1 7 0|1556k 1732k| 0 0 | 0 0 |1582 1865k| 0 7934M| 551M 56.6M 567M 6619M| 0 35k 26k 33k
> > > > > > > > 27-08 20:52:19| 26 62 0 0 12 0| 0 0 | 0 0 | 0 0 |1351 3240k| 0 7934M| 551M 56.6M 568M 6619M| 0 86 440 214
> > > > > > > > 27-08 20:52:20| 26 63 0 0 11 0| 0 108k| 0 0 | 0 0 |1392 3162k| 0 7934M| 555M 56.6M 560M 6623M| 0 1801 1490 2672
> > > > > > > > 27-08 20:52:21| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1332 3198k| 0 7934M| 555M 56.6M 560M 6623M| 0 50 245 255
> > > > > > > > 27-08 20:52:22| 25 63 1 0 12 0| 0 0 | 0 0 | 0 0 |1350 3220k| 0 7934M| 556M 56.6M 560M 6622M| 0 755 544 286
> > > > > > > > 27-08 20:52:23| 27 62 1 0 11 0| 0 272k| 0 0 | 0 0 |1323 3092k| 0 7934M| 551M 56.6M 558M 6628M| 0 341 1464 3085
> > > > > > > > 27-08 20:52:24| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1334 3197k| 0 7934M| 551M 56.6M 559M 6627M| 0 63 514 273
> > > > > > > > 27-08 20:52:25| 25 63 0 0 12 0| 0 40k| 0 0 | 0 0 |1329 3243k| 0 7934M| 546M 56.6M 558M 6633M| 0 321 160 1679
> > > > > > > > 27-08 20:52:26| 39 50 1 1 9 0| 48k 644k| 0 0 | 0 0 |1500 2556k| 0 7934M| 552M 56.6M 560M 6625M| 0 30k 14k 12k
> > > > > > > > 27-08 20:52:27| 26 62 1 0 11 0| 0 192k| 0 0 | 0 0 |1380 3152k| 0 7934M| 553M 56.6M 560M 6624M| 0 2370 808 718
> > > > > > > > 27-08 20:52:28| 23 55 12 0 10 0| 0 0 | 0 0 | 0 0 |1247 2993k| 0 7934M| 553M 56.6M 561M 6624M| 0 1060 428 241
> > > > > > > > 27-08 20:52:29| 25 63 1 0 11 0| 0 0 | 0 0 | 0 0 |1318 3142k| 0 7934M| 554M 56.6M 561M 6623M| 0 663 442 198
> > > > > > > > 27-08 20:52:30| 25 64 0 0 12 0| 0 100k| 0 0 | 0 0 |1316 3212k| 0 7934M| 554M 56.6M 561M 6623M| 0 42 187 186
> > > > > > > > 27-08 20:52:31| 24 64 0 0 12 0| 0 4096B| 0 0 | 0 0 |1309 3222k| 0 7934M| 554M 56.6M 561M 6623M| 0 9 85 85
> > > > > > > > 27-08 20:52:32| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1297 3219k| 0 7934M| 554M 56.6M 561M 6623M| 0 23 95 84
> > > > > > > > 27-08 20:52:33| 25 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1339 3101k| 0 7934M| 555M 56.6M 557M 6625M| 0 923 1566 2176
> > > > > > > > 27-08 20:52:34| 25 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1333 3095k| 0 7934M| 555M 56.6M 559M 6623M| 0 114 701 195
> > > > > > > > 27-08 20:52:35| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1342 3036k| 0 7934M| 557M 56.6M 568M 6613M| 0 957 3225 516
> > > > > > > > 27-08 20:52:36| 26 60 2 0 11 0| 0 28k| 0 0 | 0 0 |1340 3091k| 0 7934M| 555M 56.6M 568M 6614M| 0 5304 5422 5609
> > > > > > > > 27-08 20:52:37| 25 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1348 3260k| 0 7934M| 556M 56.6M 565M 6617M| 0 30 156 1073
> > > > > > > > 27-08 20:52:38| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1314 3211k| 0 7934M| 556M 56.6M 549M 6633M| 0 11 105 4285
> > > > > > > > 27-08 20:52:39| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1353 3031k| 0 7934M| 558M 56.6M 559M 6620M| 0 847 3866 357
> > > > > > > > 27-08 20:52:40| 26 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1309 3135k| 0 7934M| 569M 56.6M 566M 6602M| 0 3940 5727 1288
> > > > > > >
> > > > > > > Interesting. The number of context switches drops during the time
> > > > > > > that throughput improves. It would be good to find out what task(s)
> > > > > > > are doing all of the context switches. One way to find the pids of the
> > > > > > > top 20 context-switching tasks should be something like this:
> > > > > > >
> > > > > > > grep ctxt /proc/*/status | sort -k2nr | head -20
> > > > > > >
> > > > > > > You could use any number of methods to map back to the command.
> > > > > > > When generating my last patch, I was assuming that ksoftirqd would be
> > > > > > > the big offender. Of course, if it is something else, I should be
> > > > > > > taking a different approach.
> > > > > >
> > > > > > I did some measuring of these context switches. I used the above command
> > > > > > line to get the number of context switches before compiling and about 40
> > > > > > seconds later.
> > > > > >
> > > > > > The good case is:
> > > > > > file before after process name
> > > > > > /proc/10/status:voluntary_ctxt_switches: 38108 54327 [rcu_sched]
> > > > > > /proc/19/status:voluntary_ctxt_switches: 31022 38682 [ksoftirqd/2]
> > > > > > /proc/14/status:voluntary_ctxt_switches: 12947 16914 [ksoftirqd/1]
> > > > > > /proc/3/status:voluntary_ctxt_switches: 11061 14885 [ksoftirqd/0]
> > > > > > /proc/194/status:voluntary_ctxt_switches: 10044 11660 [kworker/2:1H]
> > > > > > /proc/1603/status:voluntary_ctxt_switches: 7626 9593 /usr/bin/X
> > > > > > /proc/24/status:voluntary_ctxt_switches: 6415 9571 [ksoftirqd/3]
> > > > > > /proc/20/status:voluntary_ctxt_switches: 5317 8879 [kworker/2:0]
> > > > > >
> > > > > > The bad case is:
> > > > > > file before after process name
> > > > > > /proc/3/status:voluntary_ctxt_switches: 82411972 98542227 [ksoftirqd/0]
> > > > > > /proc/23/status:voluntary_ctxt_switches: 79592040 94206823 [ksoftirqd/3]
> > > > > > /proc/13/status:voluntary_ctxt_switches: 79053870 93654161 [ksoftirqd/1]
> > > > > > /proc/18/status:voluntary_ctxt_switches: 78136449 92288688 [ksoftirqd/2]
> > > > > > /proc/2293/status:nonvoluntary_ctxt_switches: 29038881 29038881 mate-panel
> > > > > > /proc/2308/status:nonvoluntary_ctxt_switches: 26607661 26607661 nm-applet
> > > > > > /proc/2317/status:nonvoluntary_ctxt_switches: 15494474 15494474 /usr/lib/polkit-mate/polkit-mate-authentication-agent-1
> > > > > > /proc/2148/status:nonvoluntary_ctxt_switches: 13763674 13763674 x-session-manager
> > > > > > /proc/2985/status:nonvoluntary_ctxt_switches: 12062706 python /usr/bin/scons -j4
> > > > > > /proc/2323/status:nonvoluntary_ctxt_switches: 11581510 11581510 mate-volume-control-applet
> > > > > > /proc/2353/status:nonvoluntary_ctxt_switches: 9213436 9213436 mate-power-manager
> > > > > > /proc/2305/status:nonvoluntary_ctxt_switches: 8328471 8328471 /usr/lib/matecomponent/matecomponent-activation-server
> > > > > > /proc/3041/status:nonvoluntary_ctxt_switches: 7638312 sh -c "/opt/arm-2011.03/bin/arm-none-eabi-gcc" [...]
> > > > > > /proc/3055/status:nonvoluntary_ctxt_switches: 4843977 sh -c "/opt/arm-2011.03/bin/arm-none-eabi-gcc" [...]
> > > > > > /proc/2253/status:nonvoluntary_ctxt_switches: 4449918 4449918 mate-keyring-daemon
> > > > >
> > > > > Thank you for the info!
> > > > >
> > > > > > The processes for the tables were partly collected by hand, so they may
> > > > > > not contain every relevant process. But what I see is that there are a
> > > > > > lot more context switches in the bad case, and quite a few of them are
> > > > > > nonvoluntary. The context switch increase is pretty much everywhere, not
> > > > > > just ksoftirqd. I suspect the Mate-related processes did their context
> > > > > > switches during bootup/login, which is consistent with the bootup/login
> > > > > > procedure being significantly slower (just like the compilation).
> > > > >
> > > > > Also, the non-ksoftirq context switches are nonvoluntary, which likely
> > > > > means that some of their context switches were preemptions by ksoftirqd.
> > > > >
> > > > > OK, time to look at whatever else might be causing this...
> > > >
> > > > I am still unable to reproduce this, so am reduced to shooting in the
> > > > dark. The attached patch could in theory help. If it doesn't, would
> > > > you be willing to do a bit of RCU event tracing?
> > > >
> > > > Thanx, Paul
> > > >
> > > > ------------------------------------------------------------------------
> > > >
> > > > rcu: Throttle invoke_rcu_core() invocations due to non-lazy callbacks
> > > >
> > > > If a non-lazy callback arrives on a CPU that has previously gone idle
> > > > with no non-lazy callbacks, invoke_rcu_core() forces the RCU core to
> > > > run. However, it does not update the conditions, which could result
> > > > in several closely spaced invocations of the RCU core, which in turn
> > > > could result in an excessively high context-switch rate and resulting
> > > > high overhead.
> > > >
> > > > This commit therefore updates the ->all_lazy and ->nonlazy_posted_snap
> > > > fields to prevent closely spaced invocations.
> > > >
> > > > Reported-by: Tibor Billes <tbilles@gmx.com>
> > > > Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> > > >
> > > > diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
> > > > index dc09fb0..9d445ab 100644
> > > > --- a/kernel/rcutree_plugin.h
> > > > +++ b/kernel/rcutree_plugin.h
> > > > @@ -1745,6 +1745,8 @@ static void rcu_prepare_for_idle(int cpu)
> > > > */
> > > > if (rdtp->all_lazy &&
> > > > rdtp->nonlazy_posted != rdtp->nonlazy_posted_snap) {
> > > > + rdtp->all_lazy = false;
> > > > + rdtp->nonlazy_posted_snap = rdtp->nonlazy_posted;
> > > > invoke_rcu_core();
> > > > return;
> > > > }
> > >
> > > Good news Paul, the above patch did solve this issue :) I see no extra
> > > context switches, no extra CPU usage and no extra compile time.
> >
> > Woo-hoo!!! ;-)
> >
> > May I add your Tested-by to the fix?
>
> Yes, please :)
Done! ;-)
> > > Any idea why you couldn't reproduce this? Why did it only hit my system?
> >
> > Timing, maybe? Another question is "Why didn't lots of people complain
> > about this?" It would be good to find out because it is quite possible
> > that there is some other bug that this patch is masking -- or even just
> > making less probable.
>
> Good point!
>
> > If you are interested, please build one of the old kernels but with
> > CONFIG_RCU_TRACE=y. Then run something like the following as root
> > concurrently with your workload:
> >
> > sleep 10
> > echo 1 > /sys/kernel/debug/tracing/events/rcu/enable
> > sleep 0.01
> > echo 0 > /sys/kernel/debug/tracing/events/rcu/enable
> > cat /sys/kernel/debug/tracing/trace > /tmp/trace
> >
> > Send me the /tmp/trace file, which will probably be a few megabytes in
> > size, so please compress it before sending. ;-) A good compression
> > tool should be able to shrink it by a factor of 20 or thereabouts.
>
> Ok, I did that. Twice! The first is with commit
> 910ee45db2f4837c8440e770474758493ab94bf7, which was the first bad commit
> according to the bisection I did initially. Second with the current
> mainline 3.11. I have little idea of what the fields and lines mean in
> the RCU trace files, so I'm not going to guess if they are essentially
> the same or not, but it may provide more information to you. Both files
> were created by using a shell script containing the commands you
> suggested.
So both traces correspond to bad cases, correct? They are both quite
impressive -- looks like you have quite the interrupt rate going on there!
Almost looks like interrupts are somehow getting enabled on the path
to/from idle.
Could you please also send along a trace with the fix applied?
> I'm not sure about LKML policy on attaching not-so-small files to
> emails, so I've dropped LKML from the CC list. Please CC the mailing
> list in your reply.
Done!
Another approach is to post the traces on the web and send the URL to
LKML. But whatever works for you is fine by me.
Thanx, Paul
* Re: Unusually high system CPU usage with recent kernels
2013-09-08 17:22 Tibor Billes
@ 2013-09-08 18:43 ` Paul E. McKenney
0 siblings, 0 replies; 28+ messages in thread
From: Paul E. McKenney @ 2013-09-08 18:43 UTC (permalink / raw)
To: Tibor Billes; +Cc: linux-kernel
On Sun, Sep 08, 2013 at 07:22:45PM +0200, Tibor Billes wrote:
> > From: Paul E. McKenney Sent: 09/07/13 02:23 AM
> > On Tue, Sep 03, 2013 at 03:16:07PM -0700, Paul E. McKenney wrote:
> > > On Tue, Sep 03, 2013 at 11:11:01PM +0200, Tibor Billes wrote:
> > > > > From: Paul E. McKenney Sent: 08/30/13 03:24 AM
> > > > > On Tue, Aug 27, 2013 at 10:05:42PM +0200, Tibor Billes wrote:
> > > > > > From: Paul E. McKenney Sent: 08/26/13 06:28 AM
> > > > > > > Here is a patch that is more likely to help. I am testing it in parallel,
> > > > > > > but figured I should send you a sneak preview.
> > > > > >
> > > > > > I tried it, but I don't see any difference in overall performance. The dstat
> > > > > > also shows the same as before.
> > > > > >
> > > > > > But I did notice something. Occasionally there is an increase in userspace
> > > > > > CPU usage, interrupts and context switches are dropping, and it really gets
> > > > > > more work done (scons printed commands more frequently). I checked that
> > > > > > this behaviour is present without your patch, I just didn't notice this
> > > > > > before. Maybe you can make some sense out of it.
> > > > > >
> > > > > > ----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
> > > > > > time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
> > > > > > 27-08 20:51:53| 23 62 5 0 11 0| 0 0 | 0 0 | 0 0 |1274 3102k| 0 7934M| 549M 56.0M 491M 6698M| 0 28 156 159
> > > > > > 27-08 20:51:54| 24 64 1 0 11 0| 0 0 | 0 0 | 0 0 |1317 3165k| 0 7934M| 549M 56.0M 491M 6698M| 0 53 189 182
> > > > > > 27-08 20:51:55| 33 50 6 2 9 0| 192k 1832k| 0 0 | 0 0 |1371 2442k| 0 7934M| 544M 56.0M 492M 6702M| 0 30k 17k 17k
> > > > > > 27-08 20:51:56| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1313 3220k| 0 7934M| 544M 56.0M 492M 6701M| 0 21 272 232
> > > > > > 27-08 20:51:57| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1319 3226k| 0 7934M| 544M 56.0M 492M 6701M| 0 8 96 112
> > > > > > 27-08 20:51:58| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1317 3224k| 0 7934M| 544M 56.0M 492M 6701M| 0 12 145 141
> > > > > > 27-08 20:51:59| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1317 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 54 193 191
> > > > > > 27-08 20:52:00| 25 63 0 0 12 0| 0 24k| 0 0 | 0 0 |1336 3216k| 0 7934M| 544M 56.0M 492M 6701M| 0 36 161 172
> > > > > > 27-08 20:52:01| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1313 3225k| 0 7934M| 544M 56.0M 492M 6701M| 0 9 107 107
> > > > > > 27-08 20:52:02| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1327 3224k| 0 7934M| 545M 56.0M 492M 6701M| 0 13 193 200
> > > > > > 27-08 20:52:03| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1311 3226k| 0 7934M| 545M 56.0M 492M 6701M| 0 13 114 114
> > > > > > 27-08 20:52:04| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1331 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 41 190 178
> > > > > > 27-08 20:52:05| 24 64 0 0 12 0| 0 8192B| 0 0 | 0 0 |1315 3222k| 0 7934M| 544M 56.0M 492M 6701M| 0 30 123 122
> > > > > > 27-08 20:52:06| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1314 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 16 187 195
> > > > > > 27-08 20:52:07| 25 63 1 0 12 0|2212k 192k| 0 0 | 0 0 |1637 3194k| 0 7934M| 544M 56.2M 494M 6699M| 0 1363 2590 1947
> > > > > > 27-08 20:52:08| 17 33 18 26 6 0|3208k 0 | 0 0 | 0 0 |1351 1766k| 0 7934M| 561M 56.3M 499M 6678M| 4 10k 7620 2055
> > > > > > 27-08 20:52:09| 47 21 16 13 4 0|4332k 628k| 0 0 | 0 0 |1400 1081k| 0 7934M| 647M 56.3M 504M 6587M| 10 24k 25k 1151
> > > > > > 27-08 20:52:10| 36 34 13 11 6 0|2636k 2820k| 0 0 | 0 0 |1451 1737k| 0 7934M| 598M 56.3M 507M 6632M| 5 19k 16k 28k
> > > > > > 27-08 20:52:11| 46 17 10 25 3 0|4288k 536k| 0 0 | 0 0 |1386 868k| 0 7934M| 613M 56.3M 513M 6611M| 24 13k 8908 3616
> > > > > > 27-08 20:52:12| 53 33 5 4 5 0|4740k 3992k| 0 0 | 0 0 |1773 1464k| 0 7934M| 562M 56.6M 521M 6654M| 0 36k 29k 40k
> > > > > > 27-08 20:52:13| 60 34 0 1 6 0|4228k 1416k| 0 0 | 0 0 |1642 1670k| 0 7934M| 593M 56.6M 526M 6618M| 0 36k 26k 17k
> > > > > > 27-08 20:52:14| 53 37 1 3 7 0|3008k 1976k| 0 0 | 0 0 |1513 1972k| 0 7934M| 541M 56.6M 529M 6668M| 0 10k 9986 23k
> > > > > > 27-08 20:52:15| 55 34 1 4 6 0|3636k 1284k| 0 0 | 0 0 |1645 1688k| 0 7934M| 581M 56.6M 535M 6622M| 0 43k 30k 18k
> > > > > > 27-08 20:52:16| 57 30 5 2 5 0|4404k 2320k| 0 0 | 0 0 |1715 1489k| 0 7934M| 570M 56.6M 541M 6627M| 0 39k 24k 26k
> > > > > > 27-08 20:52:17| 50 35 3 7 6 0|2520k 1972k| 0 0 | 0 0 |1699 1598k| 0 7934M| 587M 56.6M 554M 6596M| 0 65k 40k 32k
> > > > > > 27-08 20:52:18| 52 40 2 1 7 0|1556k 1732k| 0 0 | 0 0 |1582 1865k| 0 7934M| 551M 56.6M 567M 6619M| 0 35k 26k 33k
> > > > > > 27-08 20:52:19| 26 62 0 0 12 0| 0 0 | 0 0 | 0 0 |1351 3240k| 0 7934M| 551M 56.6M 568M 6619M| 0 86 440 214
> > > > > > 27-08 20:52:20| 26 63 0 0 11 0| 0 108k| 0 0 | 0 0 |1392 3162k| 0 7934M| 555M 56.6M 560M 6623M| 0 1801 1490 2672
> > > > > > 27-08 20:52:21| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1332 3198k| 0 7934M| 555M 56.6M 560M 6623M| 0 50 245 255
> > > > > > 27-08 20:52:22| 25 63 1 0 12 0| 0 0 | 0 0 | 0 0 |1350 3220k| 0 7934M| 556M 56.6M 560M 6622M| 0 755 544 286
> > > > > > 27-08 20:52:23| 27 62 1 0 11 0| 0 272k| 0 0 | 0 0 |1323 3092k| 0 7934M| 551M 56.6M 558M 6628M| 0 341 1464 3085
> > > > > > 27-08 20:52:24| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1334 3197k| 0 7934M| 551M 56.6M 559M 6627M| 0 63 514 273
> > > > > > 27-08 20:52:25| 25 63 0 0 12 0| 0 40k| 0 0 | 0 0 |1329 3243k| 0 7934M| 546M 56.6M 558M 6633M| 0 321 160 1679
> > > > > > 27-08 20:52:26| 39 50 1 1 9 0| 48k 644k| 0 0 | 0 0 |1500 2556k| 0 7934M| 552M 56.6M 560M 6625M| 0 30k 14k 12k
> > > > > > 27-08 20:52:27| 26 62 1 0 11 0| 0 192k| 0 0 | 0 0 |1380 3152k| 0 7934M| 553M 56.6M 560M 6624M| 0 2370 808 718
> > > > > > 27-08 20:52:28| 23 55 12 0 10 0| 0 0 | 0 0 | 0 0 |1247 2993k| 0 7934M| 553M 56.6M 561M 6624M| 0 1060 428 241
> > > > > > 27-08 20:52:29| 25 63 1 0 11 0| 0 0 | 0 0 | 0 0 |1318 3142k| 0 7934M| 554M 56.6M 561M 6623M| 0 663 442 198
> > > > > > 27-08 20:52:30| 25 64 0 0 12 0| 0 100k| 0 0 | 0 0 |1316 3212k| 0 7934M| 554M 56.6M 561M 6623M| 0 42 187 186
> > > > > > 27-08 20:52:31| 24 64 0 0 12 0| 0 4096B| 0 0 | 0 0 |1309 3222k| 0 7934M| 554M 56.6M 561M 6623M| 0 9 85 85
> > > > > > 27-08 20:52:32| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1297 3219k| 0 7934M| 554M 56.6M 561M 6623M| 0 23 95 84
> > > > > > 27-08 20:52:33| 25 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1339 3101k| 0 7934M| 555M 56.6M 557M 6625M| 0 923 1566 2176
> > > > > > 27-08 20:52:34| 25 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1333 3095k| 0 7934M| 555M 56.6M 559M 6623M| 0 114 701 195
> > > > > > 27-08 20:52:35| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1342 3036k| 0 7934M| 557M 56.6M 568M 6613M| 0 957 3225 516
> > > > > > 27-08 20:52:36| 26 60 2 0 11 0| 0 28k| 0 0 | 0 0 |1340 3091k| 0 7934M| 555M 56.6M 568M 6614M| 0 5304 5422 5609
> > > > > > 27-08 20:52:37| 25 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1348 3260k| 0 7934M| 556M 56.6M 565M 6617M| 0 30 156 1073
> > > > > > 27-08 20:52:38| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1314 3211k| 0 7934M| 556M 56.6M 549M 6633M| 0 11 105 4285
> > > > > > 27-08 20:52:39| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1353 3031k| 0 7934M| 558M 56.6M 559M 6620M| 0 847 3866 357
> > > > > > 27-08 20:52:40| 26 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1309 3135k| 0 7934M| 569M 56.6M 566M 6602M| 0 3940 5727 1288
> > > > >
> > > > > Interesting. The number of context switches drops during the time
> > > > > that throughput improves. It would be good to find out what task(s)
> > > > > are doing all of the context switches. One way to find the pids of the
> > > > > top 20 context-switching tasks should be something like this:
> > > > >
> > > > > grep ctxt /proc/*/status | sort -k2nr | head -20
> > > > >
> > > > > You could use any number of methods to map back to the command.
> > > > > When generating my last patch, I was assuming that ksoftirqd would be
> > > > > the big offender. Of course, if it is something else, I should be
> > > > > taking a different approach.
> > > >
> > > > I did some measuring of these context switches. I used the above command
> > > > line to get the number of context switches before compiling and about 40
> > > > seconds later.
> > > >
> > > > The good case is:
> > > > file before after process name
> > > > /proc/10/status:voluntary_ctxt_switches: 38108 54327 [rcu_sched]
> > > > /proc/19/status:voluntary_ctxt_switches: 31022 38682 [ksoftirqd/2]
> > > > /proc/14/status:voluntary_ctxt_switches: 12947 16914 [ksoftirqd/1]
> > > > /proc/3/status:voluntary_ctxt_switches: 11061 14885 [ksoftirqd/0]
> > > > /proc/194/status:voluntary_ctxt_switches: 10044 11660 [kworker/2:1H]
> > > > /proc/1603/status:voluntary_ctxt_switches: 7626 9593 /usr/bin/X
> > > > /proc/24/status:voluntary_ctxt_switches: 6415 9571 [ksoftirqd/3]
> > > > /proc/20/status:voluntary_ctxt_switches: 5317 8879 [kworker/2:0]
> > > >
> > > > The bad case is:
> > > > file before after process name
> > > > /proc/3/status:voluntary_ctxt_switches: 82411972 98542227 [ksoftirqd/0]
> > > > /proc/23/status:voluntary_ctxt_switches: 79592040 94206823 [ksoftirqd/3]
> > > > /proc/13/status:voluntary_ctxt_switches: 79053870 93654161 [ksoftirqd/1]
> > > > /proc/18/status:voluntary_ctxt_switches: 78136449 92288688 [ksoftirqd/2]
> > > > /proc/2293/status:nonvoluntary_ctxt_switches: 29038881 29038881 mate-panel
> > > > /proc/2308/status:nonvoluntary_ctxt_switches: 26607661 26607661 nm-applet
> > > > /proc/2317/status:nonvoluntary_ctxt_switches: 15494474 15494474 /usr/lib/polkit-mate/polkit-mate-authentication-agent-1
> > > > /proc/2148/status:nonvoluntary_ctxt_switches: 13763674 13763674 x-session-manager
> > > > /proc/2985/status:nonvoluntary_ctxt_switches: 12062706 python /usr/bin/scons -j4
> > > > /proc/2323/status:nonvoluntary_ctxt_switches: 11581510 11581510 mate-volume-control-applet
> > > > /proc/2353/status:nonvoluntary_ctxt_switches: 9213436 9213436 mate-power-manager
> > > > /proc/2305/status:nonvoluntary_ctxt_switches: 8328471 8328471 /usr/lib/matecomponent/matecomponent-activation-server
> > > > /proc/3041/status:nonvoluntary_ctxt_switches: 7638312 sh -c "/opt/arm-2011.03/bin/arm-none-eabi-gcc" [...]
> > > > /proc/3055/status:nonvoluntary_ctxt_switches: 4843977 sh -c "/opt/arm-2011.03/bin/arm-none-eabi-gcc" [...]
> > > > /proc/2253/status:nonvoluntary_ctxt_switches: 4449918 4449918 mate-keyring-daemon
> > >
> > > Thank you for the info!
> > >
> > > > The processes for the tables were partly collected by hand, so they may
> > > > not contain every relevant process. But what I see is that there are a
> > > > lot more context switches in the bad case, and quite a few of them are
> > > > nonvoluntary. The context switch increase is pretty much everywhere, not
> > > > just ksoftirqd. I suspect the Mate-related processes did their context
> > > > switches during bootup/login, which is consistent with the bootup/login
> > > > procedure being significantly slower (just like the compilation).
> > >
> > > Also, the non-ksoftirq context switches are nonvoluntary, which likely
> > > means that some of their context switches were preemptions by ksoftirqd.
> > >
> > > OK, time to look at whatever else might be causing this...
> >
> > I am still unable to reproduce this, so am reduced to shooting in the
> > dark. The attached patch could in theory help. If it doesn't, would
> > you be willing to do a bit of RCU event tracing?
> >
> > Thanx, Paul
> >
> > ------------------------------------------------------------------------
> >
> > rcu: Throttle invoke_rcu_core() invocations due to non-lazy callbacks
> >
> > If a non-lazy callback arrives on a CPU that has previously gone idle
> > with no non-lazy callbacks, invoke_rcu_core() forces the RCU core to
> > run. However, it does not update the conditions, which could result
> > in several closely spaced invocations of the RCU core, which in turn
> > could result in an excessively high context-switch rate and resulting
> > high overhead.
> >
> > This commit therefore updates the ->all_lazy and ->nonlazy_posted_snap
> > fields to prevent closely spaced invocations.
> >
> > Reported-by: Tibor Billes <tbilles@gmx.com>
> > Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> >
> > diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
> > index dc09fb0..9d445ab 100644
> > --- a/kernel/rcutree_plugin.h
> > +++ b/kernel/rcutree_plugin.h
> > @@ -1745,6 +1745,8 @@ static void rcu_prepare_for_idle(int cpu)
> > */
> > if (rdtp->all_lazy &&
> > rdtp->nonlazy_posted != rdtp->nonlazy_posted_snap) {
> > + rdtp->all_lazy = false;
> > + rdtp->nonlazy_posted_snap = rdtp->nonlazy_posted;
> > invoke_rcu_core();
> > return;
> > }
>
> Good news Paul, the above patch did solve this issue :) I see no extra
> context switches, no extra CPU usage and no extra compile time.
Woo-hoo!!! ;-)
May I add your Tested-by to the fix?
> Any idea why you couldn't reproduce this? Why did it only hit my system?
Timing, maybe? Another question is "Why didn't lots of people complain
about this?" It would be good to find out because it is quite possible
that there is some other bug that this patch is masking -- or even just
making less probable.
If you are interested, please build one of the old kernels but with
CONFIG_RCU_TRACE=y. Then run something like the following as root
concurrently with your workload:
sleep 10
echo 1 > /sys/kernel/debug/tracing/events/rcu/enable
sleep 0.01
echo 0 > /sys/kernel/debug/tracing/events/rcu/enable
cat /sys/kernel/debug/tracing/trace > /tmp/trace
Send me the /tmp/trace file, which will probably be a few megabytes in
size, so please compress it before sending. ;-) A good compression
tool should be able to shrink it by a factor of 20 or thereabouts.
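For convenience, those steps can go into a small script run as root alongside
the workload.  This is just a sketch -- gzip here stands in for whatever
compression tool is handy:

    #!/bin/sh
    # Let the workload warm up, then grab ~10ms of RCU event tracing.
    sleep 10
    echo 1 > /sys/kernel/debug/tracing/events/rcu/enable
    sleep 0.01
    echo 0 > /sys/kernel/debug/tracing/events/rcu/enable
    cat /sys/kernel/debug/tracing/trace > /tmp/trace
    # Compress before mailing; the trace should shrink considerably.
    gzip -9 /tmp/trace    # produces /tmp/trace.gz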
Thanx, Paul
* Re: Unusually high system CPU usage with recent kernels
@ 2013-09-08 17:22 Tibor Billes
2013-09-08 18:43 ` Paul E. McKenney
0 siblings, 1 reply; 28+ messages in thread
From: Tibor Billes @ 2013-09-08 17:22 UTC (permalink / raw)
To: paulmck; +Cc: linux-kernel
> From: Paul E. McKenney Sent: 09/07/13 02:23 AM
> On Tue, Sep 03, 2013 at 03:16:07PM -0700, Paul E. McKenney wrote:
> > On Tue, Sep 03, 2013 at 11:11:01PM +0200, Tibor Billes wrote:
> > > > From: Paul E. McKenney Sent: 08/30/13 03:24 AM
> > > > On Tue, Aug 27, 2013 at 10:05:42PM +0200, Tibor Billes wrote:
> > > > > From: Paul E. McKenney Sent: 08/26/13 06:28 AM
> > > > > > Here is a patch that is more likely to help. I am testing it in parallel,
> > > > > > but figured I should send you a sneak preview.
> > > > >
> > > > > I tried it, but I don't see any difference in overall performance. The dstat
> > > > > also shows the same as before.
> > > > >
> > > > > But I did notice something. Occasionally there is an increase in userspace
> > > > > CPU usage, interrupts and context switches are dropping, and it really gets
> > > > > more work done (scons printed commands more frequently). I checked that
> > > > > this behaviour is present without your patch, I just didn't notice this
> > > > > before. Maybe you can make some sense out of it.
> > > > >
> > > > > ----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
> > > > > time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
> > > > > 27-08 20:51:53| 23 62 5 0 11 0| 0 0 | 0 0 | 0 0 |1274 3102k| 0 7934M| 549M 56.0M 491M 6698M| 0 28 156 159
> > > > > 27-08 20:51:54| 24 64 1 0 11 0| 0 0 | 0 0 | 0 0 |1317 3165k| 0 7934M| 549M 56.0M 491M 6698M| 0 53 189 182
> > > > > 27-08 20:51:55| 33 50 6 2 9 0| 192k 1832k| 0 0 | 0 0 |1371 2442k| 0 7934M| 544M 56.0M 492M 6702M| 0 30k 17k 17k
> > > > > 27-08 20:51:56| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1313 3220k| 0 7934M| 544M 56.0M 492M 6701M| 0 21 272 232
> > > > > 27-08 20:51:57| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1319 3226k| 0 7934M| 544M 56.0M 492M 6701M| 0 8 96 112
> > > > > 27-08 20:51:58| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1317 3224k| 0 7934M| 544M 56.0M 492M 6701M| 0 12 145 141
> > > > > 27-08 20:51:59| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1317 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 54 193 191
> > > > > 27-08 20:52:00| 25 63 0 0 12 0| 0 24k| 0 0 | 0 0 |1336 3216k| 0 7934M| 544M 56.0M 492M 6701M| 0 36 161 172
> > > > > 27-08 20:52:01| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1313 3225k| 0 7934M| 544M 56.0M 492M 6701M| 0 9 107 107
> > > > > 27-08 20:52:02| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1327 3224k| 0 7934M| 545M 56.0M 492M 6701M| 0 13 193 200
> > > > > 27-08 20:52:03| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1311 3226k| 0 7934M| 545M 56.0M 492M 6701M| 0 13 114 114
> > > > > 27-08 20:52:04| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1331 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 41 190 178
> > > > > 27-08 20:52:05| 24 64 0 0 12 0| 0 8192B| 0 0 | 0 0 |1315 3222k| 0 7934M| 544M 56.0M 492M 6701M| 0 30 123 122
> > > > > 27-08 20:52:06| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1314 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 16 187 195
> > > > > 27-08 20:52:07| 25 63 1 0 12 0|2212k 192k| 0 0 | 0 0 |1637 3194k| 0 7934M| 544M 56.2M 494M 6699M| 0 1363 2590 1947
> > > > > 27-08 20:52:08| 17 33 18 26 6 0|3208k 0 | 0 0 | 0 0 |1351 1766k| 0 7934M| 561M 56.3M 499M 6678M| 4 10k 7620 2055
> > > > > 27-08 20:52:09| 47 21 16 13 4 0|4332k 628k| 0 0 | 0 0 |1400 1081k| 0 7934M| 647M 56.3M 504M 6587M| 10 24k 25k 1151
> > > > > 27-08 20:52:10| 36 34 13 11 6 0|2636k 2820k| 0 0 | 0 0 |1451 1737k| 0 7934M| 598M 56.3M 507M 6632M| 5 19k 16k 28k
> > > > > 27-08 20:52:11| 46 17 10 25 3 0|4288k 536k| 0 0 | 0 0 |1386 868k| 0 7934M| 613M 56.3M 513M 6611M| 24 13k 8908 3616
> > > > > 27-08 20:52:12| 53 33 5 4 5 0|4740k 3992k| 0 0 | 0 0 |1773 1464k| 0 7934M| 562M 56.6M 521M 6654M| 0 36k 29k 40k
> > > > > 27-08 20:52:13| 60 34 0 1 6 0|4228k 1416k| 0 0 | 0 0 |1642 1670k| 0 7934M| 593M 56.6M 526M 6618M| 0 36k 26k 17k
> > > > > 27-08 20:52:14| 53 37 1 3 7 0|3008k 1976k| 0 0 | 0 0 |1513 1972k| 0 7934M| 541M 56.6M 529M 6668M| 0 10k 9986 23k
> > > > > 27-08 20:52:15| 55 34 1 4 6 0|3636k 1284k| 0 0 | 0 0 |1645 1688k| 0 7934M| 581M 56.6M 535M 6622M| 0 43k 30k 18k
> > > > > 27-08 20:52:16| 57 30 5 2 5 0|4404k 2320k| 0 0 | 0 0 |1715 1489k| 0 7934M| 570M 56.6M 541M 6627M| 0 39k 24k 26k
> > > > > 27-08 20:52:17| 50 35 3 7 6 0|2520k 1972k| 0 0 | 0 0 |1699 1598k| 0 7934M| 587M 56.6M 554M 6596M| 0 65k 40k 32k
> > > > > 27-08 20:52:18| 52 40 2 1 7 0|1556k 1732k| 0 0 | 0 0 |1582 1865k| 0 7934M| 551M 56.6M 567M 6619M| 0 35k 26k 33k
> > > > > 27-08 20:52:19| 26 62 0 0 12 0| 0 0 | 0 0 | 0 0 |1351 3240k| 0 7934M| 551M 56.6M 568M 6619M| 0 86 440 214
> > > > > 27-08 20:52:20| 26 63 0 0 11 0| 0 108k| 0 0 | 0 0 |1392 3162k| 0 7934M| 555M 56.6M 560M 6623M| 0 1801 1490 2672
> > > > > 27-08 20:52:21| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1332 3198k| 0 7934M| 555M 56.6M 560M 6623M| 0 50 245 255
> > > > > 27-08 20:52:22| 25 63 1 0 12 0| 0 0 | 0 0 | 0 0 |1350 3220k| 0 7934M| 556M 56.6M 560M 6622M| 0 755 544 286
> > > > > 27-08 20:52:23| 27 62 1 0 11 0| 0 272k| 0 0 | 0 0 |1323 3092k| 0 7934M| 551M 56.6M 558M 6628M| 0 341 1464 3085
> > > > > 27-08 20:52:24| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1334 3197k| 0 7934M| 551M 56.6M 559M 6627M| 0 63 514 273
> > > > > 27-08 20:52:25| 25 63 0 0 12 0| 0 40k| 0 0 | 0 0 |1329 3243k| 0 7934M| 546M 56.6M 558M 6633M| 0 321 160 1679
> > > > > 27-08 20:52:26| 39 50 1 1 9 0| 48k 644k| 0 0 | 0 0 |1500 2556k| 0 7934M| 552M 56.6M 560M 6625M| 0 30k 14k 12k
> > > > > 27-08 20:52:27| 26 62 1 0 11 0| 0 192k| 0 0 | 0 0 |1380 3152k| 0 7934M| 553M 56.6M 560M 6624M| 0 2370 808 718
> > > > > 27-08 20:52:28| 23 55 12 0 10 0| 0 0 | 0 0 | 0 0 |1247 2993k| 0 7934M| 553M 56.6M 561M 6624M| 0 1060 428 241
> > > > > 27-08 20:52:29| 25 63 1 0 11 0| 0 0 | 0 0 | 0 0 |1318 3142k| 0 7934M| 554M 56.6M 561M 6623M| 0 663 442 198
> > > > > 27-08 20:52:30| 25 64 0 0 12 0| 0 100k| 0 0 | 0 0 |1316 3212k| 0 7934M| 554M 56.6M 561M 6623M| 0 42 187 186
> > > > > 27-08 20:52:31| 24 64 0 0 12 0| 0 4096B| 0 0 | 0 0 |1309 3222k| 0 7934M| 554M 56.6M 561M 6623M| 0 9 85 85
> > > > > 27-08 20:52:32| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1297 3219k| 0 7934M| 554M 56.6M 561M 6623M| 0 23 95 84
> > > > > 27-08 20:52:33| 25 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1339 3101k| 0 7934M| 555M 56.6M 557M 6625M| 0 923 1566 2176
> > > > > 27-08 20:52:34| 25 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1333 3095k| 0 7934M| 555M 56.6M 559M 6623M| 0 114 701 195
> > > > > 27-08 20:52:35| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1342 3036k| 0 7934M| 557M 56.6M 568M 6613M| 0 957 3225 516
> > > > > 27-08 20:52:36| 26 60 2 0 11 0| 0 28k| 0 0 | 0 0 |1340 3091k| 0 7934M| 555M 56.6M 568M 6614M| 0 5304 5422 5609
> > > > > 27-08 20:52:37| 25 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1348 3260k| 0 7934M| 556M 56.6M 565M 6617M| 0 30 156 1073
> > > > > 27-08 20:52:38| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1314 3211k| 0 7934M| 556M 56.6M 549M 6633M| 0 11 105 4285
> > > > > 27-08 20:52:39| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1353 3031k| 0 7934M| 558M 56.6M 559M 6620M| 0 847 3866 357
> > > > > 27-08 20:52:40| 26 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1309 3135k| 0 7934M| 569M 56.6M 566M 6602M| 0 3940 5727 1288
> > > >
> > > > Interesting. The number of context switches drops during the time
> > > > that throughput improves. It would be good to find out what task(s)
> > > > are doing all of the context switches. One way to find the pids of the
> > > > top 20 context-switching tasks should be something like this:
> > > >
> > > > grep ctxt /proc/*/status | sort -k2nr | head -20
> > > >
> > > > You could use any number of methods to map back to the command.
> > > > When generating my last patch, I was assuming that ksoftirqd would be
> > > > the big offender. Of course, if it is something else, I should be
> > > > taking a different approach.
> > >
> > > I did some measuring of these context switches. I used the above command
> > > line to get the number of context switches before compiling and about 40
> > > seconds later.
> > >
> > > The good case is:
> > > file before after process name
> > > /proc/10/status:voluntary_ctxt_switches: 38108 54327 [rcu_sched]
> > > /proc/19/status:voluntary_ctxt_switches: 31022 38682 [ksoftirqd/2]
> > > /proc/14/status:voluntary_ctxt_switches: 12947 16914 [ksoftirqd/1]
> > > /proc/3/status:voluntary_ctxt_switches: 11061 14885 [ksoftirqd/0]
> > > /proc/194/status:voluntary_ctxt_switches: 10044 11660 [kworker/2:1H]
> > > /proc/1603/status:voluntary_ctxt_switches: 7626 9593 /usr/bin/X
> > > /proc/24/status:voluntary_ctxt_switches: 6415 9571 [ksoftirqd/3]
> > > /proc/20/status:voluntary_ctxt_switches: 5317 8879 [kworker/2:0]
> > >
> > > The bad case is:
> > > file before after process name
> > > /proc/3/status:voluntary_ctxt_switches: 82411972 98542227 [ksoftirqd/0]
> > > /proc/23/status:voluntary_ctxt_switches: 79592040 94206823 [ksoftirqd/3]
> > > /proc/13/status:voluntary_ctxt_switches: 79053870 93654161 [ksoftirqd/1]
> > > /proc/18/status:voluntary_ctxt_switches: 78136449 92288688 [ksoftirqd/2]
> > > /proc/2293/status:nonvoluntary_ctxt_switches: 29038881 29038881 mate-panel
> > > /proc/2308/status:nonvoluntary_ctxt_switches: 26607661 26607661 nm-applet
> > > /proc/2317/status:nonvoluntary_ctxt_switches: 15494474 15494474 /usr/lib/polkit-mate/polkit-mate-authentication-agent-1
> > > /proc/2148/status:nonvoluntary_ctxt_switches: 13763674 13763674 x-session-manager
> > > /proc/2985/status:nonvoluntary_ctxt_switches: 12062706 python /usr/bin/scons -j4
> > > /proc/2323/status:nonvoluntary_ctxt_switches: 11581510 11581510 mate-volume-control-applet
> > > /proc/2353/status:nonvoluntary_ctxt_switches: 9213436 9213436 mate-power-manager
> > > /proc/2305/status:nonvoluntary_ctxt_switches: 8328471 8328471 /usr/lib/matecomponent/matecomponent-activation-server
> > > /proc/3041/status:nonvoluntary_ctxt_switches: 7638312 sh -c "/opt/arm-2011.03/bin/arm-none-eabi-gcc" [...]
> > > /proc/3055/status:nonvoluntary_ctxt_switches: 4843977 sh -c "/opt/arm-2011.03/bin/arm-none-eabi-gcc" [...]
> > > /proc/2253/status:nonvoluntary_ctxt_switches: 4449918 4449918 mate-keyring-daemon
> >
> > Thank you for the info!
> >
> > > The processes for the tables were partly collected by hand, so they may
> > > not contain every relevant process. But what I see is that there are a
> > > lot more context switches in the bad case, and quite a few of them are
> > > nonvoluntary. The context switch increase is pretty much everywhere, not
> > > just ksoftirqd. I suspect the Mate-related processes did their context
> > > switches during bootup/login, which is consistent with the bootup/login
> > > procedure being significantly slower (just like the compilation).
> >
> > Also, the non-ksoftirq context switches are nonvoluntary, which likely
> > means that some of their context switches were preemptions by ksoftirqd.
> >
> > OK, time to look at whatever else might be causing this...
>
> I am still unable to reproduce this, so am reduced to shooting in the
> dark. The attached patch could in theory help. If it doesn't, would
> you be willing to do a bit of RCU event tracing?
>
> Thanx, Paul
>
> ------------------------------------------------------------------------
>
> rcu: Throttle invoke_rcu_core() invocations due to non-lazy callbacks
>
> If a non-lazy callback arrives on a CPU that has previously gone idle
> with no non-lazy callbacks, invoke_rcu_core() forces the RCU core to
> run. However, it does not update the conditions, which could result
> in several closely spaced invocations of the RCU core, which in turn
> could result in an excessively high context-switch rate and resulting
> high overhead.
>
> This commit therefore updates the ->all_lazy and ->nonlazy_posted_snap
> fields to prevent closely spaced invocations.
>
> Reported-by: Tibor Billes <tbilles@gmx.com>
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
>
> diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
> index dc09fb0..9d445ab 100644
> --- a/kernel/rcutree_plugin.h
> +++ b/kernel/rcutree_plugin.h
> @@ -1745,6 +1745,8 @@ static void rcu_prepare_for_idle(int cpu)
> */
> if (rdtp->all_lazy &&
> rdtp->nonlazy_posted != rdtp->nonlazy_posted_snap) {
> + rdtp->all_lazy = false;
> + rdtp->nonlazy_posted_snap = rdtp->nonlazy_posted;
> invoke_rcu_core();
> return;
> }
Good news Paul, the above patch did solve this issue :) I see no extra
context switches, no extra CPU usage and no extra compile time.
Any idea why you couldn't reproduce this? Why did it only hit my system?
Tibor
* Re: Unusually high system CPU usage with recent kernels
2013-09-03 22:16 ` Paul E. McKenney
@ 2013-09-07 0:23 ` Paul E. McKenney
0 siblings, 0 replies; 28+ messages in thread
From: Paul E. McKenney @ 2013-09-07 0:23 UTC (permalink / raw)
To: Tibor Billes; +Cc: linux-kernel
On Tue, Sep 03, 2013 at 03:16:07PM -0700, Paul E. McKenney wrote:
> On Tue, Sep 03, 2013 at 11:11:01PM +0200, Tibor Billes wrote:
> > > From: Paul E. McKenney Sent: 08/30/13 03:24 AM
> > > On Tue, Aug 27, 2013 at 10:05:42PM +0200, Tibor Billes wrote:
> > > > From: Paul E. McKenney Sent: 08/26/13 06:28 AM
> > > > > Here is a patch that is more likely to help. I am testing it in parallel,
> > > > > but figured I should send you a sneak preview.
> > > >
> > > > I tried it, but I don't see any difference in overall performance. The dstat
> > > > also shows the same as before.
> > > >
> > > > But I did notice something. Occasionally there is an increase in userspace
> > > > CPU usage, interrupts and context switches are dropping, and it really gets
> > > > more work done (scons printed commands more frequently). I checked that
> > > > this behaviour is present without your patch, I just didn't notice this
> > > > before. Maybe you can make some sense out of it.
> > > >
> > > > ----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
> > > > time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
> > > > 27-08 20:51:53| 23 62 5 0 11 0| 0 0 | 0 0 | 0 0 |1274 3102k| 0 7934M| 549M 56.0M 491M 6698M| 0 28 156 159
> > > > 27-08 20:51:54| 24 64 1 0 11 0| 0 0 | 0 0 | 0 0 |1317 3165k| 0 7934M| 549M 56.0M 491M 6698M| 0 53 189 182
> > > > 27-08 20:51:55| 33 50 6 2 9 0| 192k 1832k| 0 0 | 0 0 |1371 2442k| 0 7934M| 544M 56.0M 492M 6702M| 0 30k 17k 17k
> > > > 27-08 20:51:56| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1313 3220k| 0 7934M| 544M 56.0M 492M 6701M| 0 21 272 232
> > > > 27-08 20:51:57| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1319 3226k| 0 7934M| 544M 56.0M 492M 6701M| 0 8 96 112
> > > > 27-08 20:51:58| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1317 3224k| 0 7934M| 544M 56.0M 492M 6701M| 0 12 145 141
> > > > 27-08 20:51:59| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1317 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 54 193 191
> > > > 27-08 20:52:00| 25 63 0 0 12 0| 0 24k| 0 0 | 0 0 |1336 3216k| 0 7934M| 544M 56.0M 492M 6701M| 0 36 161 172
> > > > 27-08 20:52:01| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1313 3225k| 0 7934M| 544M 56.0M 492M 6701M| 0 9 107 107
> > > > 27-08 20:52:02| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1327 3224k| 0 7934M| 545M 56.0M 492M 6701M| 0 13 193 200
> > > > 27-08 20:52:03| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1311 3226k| 0 7934M| 545M 56.0M 492M 6701M| 0 13 114 114
> > > > 27-08 20:52:04| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1331 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 41 190 178
> > > > 27-08 20:52:05| 24 64 0 0 12 0| 0 8192B| 0 0 | 0 0 |1315 3222k| 0 7934M| 544M 56.0M 492M 6701M| 0 30 123 122
> > > > 27-08 20:52:06| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1314 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 16 187 195
> > > > 27-08 20:52:07| 25 63 1 0 12 0|2212k 192k| 0 0 | 0 0 |1637 3194k| 0 7934M| 544M 56.2M 494M 6699M| 0 1363 2590 1947
> > > > 27-08 20:52:08| 17 33 18 26 6 0|3208k 0 | 0 0 | 0 0 |1351 1766k| 0 7934M| 561M 56.3M 499M 6678M| 4 10k 7620 2055
> > > > 27-08 20:52:09| 47 21 16 13 4 0|4332k 628k| 0 0 | 0 0 |1400 1081k| 0 7934M| 647M 56.3M 504M 6587M| 10 24k 25k 1151
> > > > 27-08 20:52:10| 36 34 13 11 6 0|2636k 2820k| 0 0 | 0 0 |1451 1737k| 0 7934M| 598M 56.3M 507M 6632M| 5 19k 16k 28k
> > > > 27-08 20:52:11| 46 17 10 25 3 0|4288k 536k| 0 0 | 0 0 |1386 868k| 0 7934M| 613M 56.3M 513M 6611M| 24 13k 8908 3616
> > > > 27-08 20:52:12| 53 33 5 4 5 0|4740k 3992k| 0 0 | 0 0 |1773 1464k| 0 7934M| 562M 56.6M 521M 6654M| 0 36k 29k 40k
> > > > 27-08 20:52:13| 60 34 0 1 6 0|4228k 1416k| 0 0 | 0 0 |1642 1670k| 0 7934M| 593M 56.6M 526M 6618M| 0 36k 26k 17k
> > > > 27-08 20:52:14| 53 37 1 3 7 0|3008k 1976k| 0 0 | 0 0 |1513 1972k| 0 7934M| 541M 56.6M 529M 6668M| 0 10k 9986 23k
> > > > 27-08 20:52:15| 55 34 1 4 6 0|3636k 1284k| 0 0 | 0 0 |1645 1688k| 0 7934M| 581M 56.6M 535M 6622M| 0 43k 30k 18k
> > > > 27-08 20:52:16| 57 30 5 2 5 0|4404k 2320k| 0 0 | 0 0 |1715 1489k| 0 7934M| 570M 56.6M 541M 6627M| 0 39k 24k 26k
> > > > 27-08 20:52:17| 50 35 3 7 6 0|2520k 1972k| 0 0 | 0 0 |1699 1598k| 0 7934M| 587M 56.6M 554M 6596M| 0 65k 40k 32k
> > > > 27-08 20:52:18| 52 40 2 1 7 0|1556k 1732k| 0 0 | 0 0 |1582 1865k| 0 7934M| 551M 56.6M 567M 6619M| 0 35k 26k 33k
> > > > 27-08 20:52:19| 26 62 0 0 12 0| 0 0 | 0 0 | 0 0 |1351 3240k| 0 7934M| 551M 56.6M 568M 6619M| 0 86 440 214
> > > > 27-08 20:52:20| 26 63 0 0 11 0| 0 108k| 0 0 | 0 0 |1392 3162k| 0 7934M| 555M 56.6M 560M 6623M| 0 1801 1490 2672
> > > > 27-08 20:52:21| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1332 3198k| 0 7934M| 555M 56.6M 560M 6623M| 0 50 245 255
> > > > 27-08 20:52:22| 25 63 1 0 12 0| 0 0 | 0 0 | 0 0 |1350 3220k| 0 7934M| 556M 56.6M 560M 6622M| 0 755 544 286
> > > > 27-08 20:52:23| 27 62 1 0 11 0| 0 272k| 0 0 | 0 0 |1323 3092k| 0 7934M| 551M 56.6M 558M 6628M| 0 341 1464 3085
> > > > 27-08 20:52:24| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1334 3197k| 0 7934M| 551M 56.6M 559M 6627M| 0 63 514 273
> > > > 27-08 20:52:25| 25 63 0 0 12 0| 0 40k| 0 0 | 0 0 |1329 3243k| 0 7934M| 546M 56.6M 558M 6633M| 0 321 160 1679
> > > > 27-08 20:52:26| 39 50 1 1 9 0| 48k 644k| 0 0 | 0 0 |1500 2556k| 0 7934M| 552M 56.6M 560M 6625M| 0 30k 14k 12k
> > > > 27-08 20:52:27| 26 62 1 0 11 0| 0 192k| 0 0 | 0 0 |1380 3152k| 0 7934M| 553M 56.6M 560M 6624M| 0 2370 808 718
> > > > 27-08 20:52:28| 23 55 12 0 10 0| 0 0 | 0 0 | 0 0 |1247 2993k| 0 7934M| 553M 56.6M 561M 6624M| 0 1060 428 241
> > > > 27-08 20:52:29| 25 63 1 0 11 0| 0 0 | 0 0 | 0 0 |1318 3142k| 0 7934M| 554M 56.6M 561M 6623M| 0 663 442 198
> > > > 27-08 20:52:30| 25 64 0 0 12 0| 0 100k| 0 0 | 0 0 |1316 3212k| 0 7934M| 554M 56.6M 561M 6623M| 0 42 187 186
> > > > 27-08 20:52:31| 24 64 0 0 12 0| 0 4096B| 0 0 | 0 0 |1309 3222k| 0 7934M| 554M 56.6M 561M 6623M| 0 9 85 85
> > > > 27-08 20:52:32| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1297 3219k| 0 7934M| 554M 56.6M 561M 6623M| 0 23 95 84
> > > > 27-08 20:52:33| 25 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1339 3101k| 0 7934M| 555M 56.6M 557M 6625M| 0 923 1566 2176
> > > > 27-08 20:52:34| 25 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1333 3095k| 0 7934M| 555M 56.6M 559M 6623M| 0 114 701 195
> > > > 27-08 20:52:35| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1342 3036k| 0 7934M| 557M 56.6M 568M 6613M| 0 957 3225 516
> > > > 27-08 20:52:36| 26 60 2 0 11 0| 0 28k| 0 0 | 0 0 |1340 3091k| 0 7934M| 555M 56.6M 568M 6614M| 0 5304 5422 5609
> > > > 27-08 20:52:37| 25 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1348 3260k| 0 7934M| 556M 56.6M 565M 6617M| 0 30 156 1073
> > > > 27-08 20:52:38| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1314 3211k| 0 7934M| 556M 56.6M 549M 6633M| 0 11 105 4285
> > > > 27-08 20:52:39| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1353 3031k| 0 7934M| 558M 56.6M 559M 6620M| 0 847 3866 357
> > > > 27-08 20:52:40| 26 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1309 3135k| 0 7934M| 569M 56.6M 566M 6602M| 0 3940 5727 1288
> > >
> > > Interesting. The number of context switches drops during the time
> > > that throughput improves. It would be good to find out what task(s)
> > > are doing all of the context switches. One way to find the pids of the
> > > top 20 context-switching tasks should be something like this:
> > >
> > > grep ctxt /proc/*/status | sort -k2nr | head -20
> > >
> > > You could use any number of methods to map back to the command.
> > > When generating my last patch, I was assuming that ksoftirqd would be
> > > the big offender. Of course, if it is something else, I should be
> > > taking a different approach.
> >
> > I did some measuring of these context switches. I used the above command
> > line to get the number of context switches before compiling and about 40
> > seconds later.
> >
> > The good case is:
> > file before after process name
> > /proc/10/status:voluntary_ctxt_switches: 38108 54327 [rcu_sched]
> > /proc/19/status:voluntary_ctxt_switches: 31022 38682 [ksoftirqd/2]
> > /proc/14/status:voluntary_ctxt_switches: 12947 16914 [ksoftirqd/1]
> > /proc/3/status:voluntary_ctxt_switches: 11061 14885 [ksoftirqd/0]
> > /proc/194/status:voluntary_ctxt_switches: 10044 11660 [kworker/2:1H]
> > /proc/1603/status:voluntary_ctxt_switches: 7626 9593 /usr/bin/X
> > /proc/24/status:voluntary_ctxt_switches: 6415 9571 [ksoftirqd/3]
> > /proc/20/status:voluntary_ctxt_switches: 5317 8879 [kworker/2:0]
> >
> > The bad case is:
> > file before after process name
> > /proc/3/status:voluntary_ctxt_switches: 82411972 98542227 [ksoftirqd/0]
> > /proc/23/status:voluntary_ctxt_switches: 79592040 94206823 [ksoftirqd/3]
> > /proc/13/status:voluntary_ctxt_switches: 79053870 93654161 [ksoftirqd/1]
> > /proc/18/status:voluntary_ctxt_switches: 78136449 92288688 [ksoftirqd/2]
> > /proc/2293/status:nonvoluntary_ctxt_switches: 29038881 29038881 mate-panel
> > /proc/2308/status:nonvoluntary_ctxt_switches: 26607661 26607661 nm-applet
> > /proc/2317/status:nonvoluntary_ctxt_switches: 15494474 15494474 /usr/lib/polkit-mate/polkit-mate-authentication-agent-1
> > /proc/2148/status:nonvoluntary_ctxt_switches: 13763674 13763674 x-session-manager
> > /proc/2985/status:nonvoluntary_ctxt_switches: 12062706 python /usr/bin/scons -j4
> > /proc/2323/status:nonvoluntary_ctxt_switches: 11581510 11581510 mate-volume-control-applet
> > /proc/2353/status:nonvoluntary_ctxt_switches: 9213436 9213436 mate-power-manager
> > /proc/2305/status:nonvoluntary_ctxt_switches: 8328471 8328471 /usr/lib/matecomponent/matecomponent-activation-server
> > /proc/3041/status:nonvoluntary_ctxt_switches: 7638312 sh -c "/opt/arm-2011.03/bin/arm-none-eabi-gcc" [...]
> > /proc/3055/status:nonvoluntary_ctxt_switches: 4843977 sh -c "/opt/arm-2011.03/bin/arm-none-eabi-gcc" [...]
> > /proc/2253/status:nonvoluntary_ctxt_switches: 4449918 4449918 mate-keyring-daemon
>
> Thank you for the info!
>
> > The processes for the tables were partly collected by hand, so they may
> > not contain every relevant process. But what I see is that there are a
> > lot more context switches in the bad case, and quite a few of them are
> > nonvoluntary. The context switch increase is pretty much everywhere, not
> > just ksoftirqd. I suspect the Mate-related processes did their context
> > switches during bootup/login, which is consistent with the bootup/login
> > procedure being significantly slower (just like the compilation).
>
> Also, the non-ksoftirq context switches are nonvoluntary, which likely
> means that some of their context switches were preemptions by ksoftirqd.
>
> OK, time to look at whatever else might be causing this...
I am still unable to reproduce this, so am reduced to shooting in the
dark. The attached patch could in theory help. If it doesn't, would
you be willing to do a bit of RCU event tracing?
Thanx, Paul
------------------------------------------------------------------------
rcu: Throttle invoke_rcu_core() invocations due to non-lazy callbacks
If a non-lazy callback arrives on a CPU that has previously gone idle
with no non-lazy callbacks, invoke_rcu_core() forces the RCU core to
run. However, it does not update the conditions, which could result
in several closely spaced invocations of the RCU core, which in turn
could result in an excessively high context-switch rate and resulting
high overhead.
This commit therefore updates the ->all_lazy and ->nonlazy_posted_snap
fields to prevent closely spaced invocations.
Reported-by: Tibor Billes <tbilles@gmx.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
index dc09fb0..9d445ab 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcutree_plugin.h
@@ -1745,6 +1745,8 @@ static void rcu_prepare_for_idle(int cpu)
*/
if (rdtp->all_lazy &&
rdtp->nonlazy_posted != rdtp->nonlazy_posted_snap) {
+ rdtp->all_lazy = false;
+ rdtp->nonlazy_posted_snap = rdtp->nonlazy_posted;
invoke_rcu_core();
return;
}
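If the patch doesn't help, one quick sanity check is to re-run the earlier
/proc scan and see whether ksoftirqd is still doing the bulk of the context
switches.  A sketch of one way to map the pids back to their commands (using
/proc/<pid>/comm):

    grep ctxt /proc/*/status | sort -k2nr | head -20 | while read line; do
        pid=$(echo "$line" | cut -d/ -f3)   # /proc/<pid>/status:... -> <pid>
        printf '%s\t%s\n' "$line" "$(cat /proc/$pid/comm 2>/dev/null)"
    done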
* Re: Unusually high system CPU usage with recent kernels
2013-09-03 21:11 Tibor Billes
@ 2013-09-03 22:16 ` Paul E. McKenney
2013-09-07 0:23 ` Paul E. McKenney
0 siblings, 1 reply; 28+ messages in thread
From: Paul E. McKenney @ 2013-09-03 22:16 UTC (permalink / raw)
To: Tibor Billes; +Cc: linux-kernel
On Tue, Sep 03, 2013 at 11:11:01PM +0200, Tibor Billes wrote:
> > From: Paul E. McKenney Sent: 08/30/13 03:24 AM
> > On Tue, Aug 27, 2013 at 10:05:42PM +0200, Tibor Billes wrote:
> > > From: Paul E. McKenney Sent: 08/26/13 06:28 AM
> > > > Here is a patch that is more likely to help. I am testing it in parallel,
> > > > but figured I should send you a sneak preview.
> > >
> > > I tried it, but I don't see any difference in overall performance. The dstat
> > > also shows the same as before.
> > >
> > > But I did notice something. Occasionally there is an increase in userspace
> > > CPU usage, interrupts and context switches are dropping, and it really gets
> > > more work done (scons printed commands more frequently). I checked that
> > > this behaviour is present without your patch, I just didn't notice this
> > > before. Maybe you can make some sense out of it.
> > >
> > > ----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
> > > time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
> > > 27-08 20:51:53| 23 62 5 0 11 0| 0 0 | 0 0 | 0 0 |1274 3102k| 0 7934M| 549M 56.0M 491M 6698M| 0 28 156 159
> > > 27-08 20:51:54| 24 64 1 0 11 0| 0 0 | 0 0 | 0 0 |1317 3165k| 0 7934M| 549M 56.0M 491M 6698M| 0 53 189 182
> > > 27-08 20:51:55| 33 50 6 2 9 0| 192k 1832k| 0 0 | 0 0 |1371 2442k| 0 7934M| 544M 56.0M 492M 6702M| 0 30k 17k 17k
> > > 27-08 20:51:56| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1313 3220k| 0 7934M| 544M 56.0M 492M 6701M| 0 21 272 232
> > > 27-08 20:51:57| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1319 3226k| 0 7934M| 544M 56.0M 492M 6701M| 0 8 96 112
> > > 27-08 20:51:58| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1317 3224k| 0 7934M| 544M 56.0M 492M 6701M| 0 12 145 141
> > > 27-08 20:51:59| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1317 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 54 193 191
> > > 27-08 20:52:00| 25 63 0 0 12 0| 0 24k| 0 0 | 0 0 |1336 3216k| 0 7934M| 544M 56.0M 492M 6701M| 0 36 161 172
> > > 27-08 20:52:01| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1313 3225k| 0 7934M| 544M 56.0M 492M 6701M| 0 9 107 107
> > > 27-08 20:52:02| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1327 3224k| 0 7934M| 545M 56.0M 492M 6701M| 0 13 193 200
> > > 27-08 20:52:03| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1311 3226k| 0 7934M| 545M 56.0M 492M 6701M| 0 13 114 114
> > > 27-08 20:52:04| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1331 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 41 190 178
> > > 27-08 20:52:05| 24 64 0 0 12 0| 0 8192B| 0 0 | 0 0 |1315 3222k| 0 7934M| 544M 56.0M 492M 6701M| 0 30 123 122
> > > 27-08 20:52:06| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1314 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 16 187 195
> > > 27-08 20:52:07| 25 63 1 0 12 0|2212k 192k| 0 0 | 0 0 |1637 3194k| 0 7934M| 544M 56.2M 494M 6699M| 0 1363 2590 1947
> > > 27-08 20:52:08| 17 33 18 26 6 0|3208k 0 | 0 0 | 0 0 |1351 1766k| 0 7934M| 561M 56.3M 499M 6678M| 4 10k 7620 2055
> > > 27-08 20:52:09| 47 21 16 13 4 0|4332k 628k| 0 0 | 0 0 |1400 1081k| 0 7934M| 647M 56.3M 504M 6587M| 10 24k 25k 1151
> > > 27-08 20:52:10| 36 34 13 11 6 0|2636k 2820k| 0 0 | 0 0 |1451 1737k| 0 7934M| 598M 56.3M 507M 6632M| 5 19k 16k 28k
> > > 27-08 20:52:11| 46 17 10 25 3 0|4288k 536k| 0 0 | 0 0 |1386 868k| 0 7934M| 613M 56.3M 513M 6611M| 24 13k 8908 3616
> > > 27-08 20:52:12| 53 33 5 4 5 0|4740k 3992k| 0 0 | 0 0 |1773 1464k| 0 7934M| 562M 56.6M 521M 6654M| 0 36k 29k 40k
> > > 27-08 20:52:13| 60 34 0 1 6 0|4228k 1416k| 0 0 | 0 0 |1642 1670k| 0 7934M| 593M 56.6M 526M 6618M| 0 36k 26k 17k
> > > 27-08 20:52:14| 53 37 1 3 7 0|3008k 1976k| 0 0 | 0 0 |1513 1972k| 0 7934M| 541M 56.6M 529M 6668M| 0 10k 9986 23k
> > > 27-08 20:52:15| 55 34 1 4 6 0|3636k 1284k| 0 0 | 0 0 |1645 1688k| 0 7934M| 581M 56.6M 535M 6622M| 0 43k 30k 18k
> > > 27-08 20:52:16| 57 30 5 2 5 0|4404k 2320k| 0 0 | 0 0 |1715 1489k| 0 7934M| 570M 56.6M 541M 6627M| 0 39k 24k 26k
> > > 27-08 20:52:17| 50 35 3 7 6 0|2520k 1972k| 0 0 | 0 0 |1699 1598k| 0 7934M| 587M 56.6M 554M 6596M| 0 65k 40k 32k
> > > 27-08 20:52:18| 52 40 2 1 7 0|1556k 1732k| 0 0 | 0 0 |1582 1865k| 0 7934M| 551M 56.6M 567M 6619M| 0 35k 26k 33k
> > > 27-08 20:52:19| 26 62 0 0 12 0| 0 0 | 0 0 | 0 0 |1351 3240k| 0 7934M| 551M 56.6M 568M 6619M| 0 86 440 214
> > > 27-08 20:52:20| 26 63 0 0 11 0| 0 108k| 0 0 | 0 0 |1392 3162k| 0 7934M| 555M 56.6M 560M 6623M| 0 1801 1490 2672
> > > 27-08 20:52:21| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1332 3198k| 0 7934M| 555M 56.6M 560M 6623M| 0 50 245 255
> > > 27-08 20:52:22| 25 63 1 0 12 0| 0 0 | 0 0 | 0 0 |1350 3220k| 0 7934M| 556M 56.6M 560M 6622M| 0 755 544 286
> > > 27-08 20:52:23| 27 62 1 0 11 0| 0 272k| 0 0 | 0 0 |1323 3092k| 0 7934M| 551M 56.6M 558M 6628M| 0 341 1464 3085
> > > 27-08 20:52:24| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1334 3197k| 0 7934M| 551M 56.6M 559M 6627M| 0 63 514 273
> > > 27-08 20:52:25| 25 63 0 0 12 0| 0 40k| 0 0 | 0 0 |1329 3243k| 0 7934M| 546M 56.6M 558M 6633M| 0 321 160 1679
> > > 27-08 20:52:26| 39 50 1 1 9 0| 48k 644k| 0 0 | 0 0 |1500 2556k| 0 7934M| 552M 56.6M 560M 6625M| 0 30k 14k 12k
> > > 27-08 20:52:27| 26 62 1 0 11 0| 0 192k| 0 0 | 0 0 |1380 3152k| 0 7934M| 553M 56.6M 560M 6624M| 0 2370 808 718
> > > 27-08 20:52:28| 23 55 12 0 10 0| 0 0 | 0 0 | 0 0 |1247 2993k| 0 7934M| 553M 56.6M 561M 6624M| 0 1060 428 241
> > > 27-08 20:52:29| 25 63 1 0 11 0| 0 0 | 0 0 | 0 0 |1318 3142k| 0 7934M| 554M 56.6M 561M 6623M| 0 663 442 198
> > > 27-08 20:52:30| 25 64 0 0 12 0| 0 100k| 0 0 | 0 0 |1316 3212k| 0 7934M| 554M 56.6M 561M 6623M| 0 42 187 186
> > > 27-08 20:52:31| 24 64 0 0 12 0| 0 4096B| 0 0 | 0 0 |1309 3222k| 0 7934M| 554M 56.6M 561M 6623M| 0 9 85 85
> > > 27-08 20:52:32| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1297 3219k| 0 7934M| 554M 56.6M 561M 6623M| 0 23 95 84
> > > 27-08 20:52:33| 25 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1339 3101k| 0 7934M| 555M 56.6M 557M 6625M| 0 923 1566 2176
> > > 27-08 20:52:34| 25 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1333 3095k| 0 7934M| 555M 56.6M 559M 6623M| 0 114 701 195
> > > 27-08 20:52:35| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1342 3036k| 0 7934M| 557M 56.6M 568M 6613M| 0 957 3225 516
> > > 27-08 20:52:36| 26 60 2 0 11 0| 0 28k| 0 0 | 0 0 |1340 3091k| 0 7934M| 555M 56.6M 568M 6614M| 0 5304 5422 5609
> > > 27-08 20:52:37| 25 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1348 3260k| 0 7934M| 556M 56.6M 565M 6617M| 0 30 156 1073
> > > 27-08 20:52:38| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1314 3211k| 0 7934M| 556M 56.6M 549M 6633M| 0 11 105 4285
> > > 27-08 20:52:39| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1353 3031k| 0 7934M| 558M 56.6M 559M 6620M| 0 847 3866 357
> > > 27-08 20:52:40| 26 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1309 3135k| 0 7934M| 569M 56.6M 566M 6602M| 0 3940 5727 1288
> >
> > Interesting. The number of context switches drops during the time
> > that throughput improves. It would be good to find out what task(s)
> > are doing all of the context switches. One way to find the pids of the
> > top 20 context-switching tasks should be something like this:
> >
> > grep ctxt /proc/*/status | sort -k2nr | head -20
> >
> > You could use any number of methods to map back to the command.
> > When generating my last patch, I was assuming that ksoftirqd would be
> > the big offender. Of course, if it is something else, I should be
> > taking a different approach.
>
> I did some measuring of these context switches. I used the above command
> line to get the number of context switches before compiling and about 40
> seconds later.
>
> The good case is:
> file before after process name
> /proc/10/status:voluntary_ctxt_switches: 38108 54327 [rcu_sched]
> /proc/19/status:voluntary_ctxt_switches: 31022 38682 [ksoftirqd/2]
> /proc/14/status:voluntary_ctxt_switches: 12947 16914 [ksoftirqd/1]
> /proc/3/status:voluntary_ctxt_switches: 11061 14885 [ksoftirqd/0]
> /proc/194/status:voluntary_ctxt_switches: 10044 11660 [kworker/2:1H]
> /proc/1603/status:voluntary_ctxt_switches: 7626 9593 /usr/bin/X
> /proc/24/status:voluntary_ctxt_switches: 6415 9571 [ksoftirqd/3]
> /proc/20/status:voluntary_ctxt_switches: 5317 8879 [kworker/2:0]
>
> The bad case is:
> file before after process name
> /proc/3/status:voluntary_ctxt_switches: 82411972 98542227 [ksoftirqd/0]
> /proc/23/status:voluntary_ctxt_switches: 79592040 94206823 [ksoftirqd/3]
> /proc/13/status:voluntary_ctxt_switches: 79053870 93654161 [ksoftirqd/1]
> /proc/18/status:voluntary_ctxt_switches: 78136449 92288688 [ksoftirqd/2]
> /proc/2293/status:nonvoluntary_ctxt_switches: 29038881 29038881 mate-panel
> /proc/2308/status:nonvoluntary_ctxt_switches: 26607661 26607661 nm-applet
> /proc/2317/status:nonvoluntary_ctxt_switches: 15494474 15494474 /usr/lib/polkit-mate/polkit-mate-authentication-agent-1
> /proc/2148/status:nonvoluntary_ctxt_switches: 13763674 13763674 x-session-manager
> /proc/2985/status:nonvoluntary_ctxt_switches: 12062706 python /usr/bin/scons -j4
> /proc/2323/status:nonvoluntary_ctxt_switches: 11581510 11581510 mate-volume-control-applet
> /proc/2353/status:nonvoluntary_ctxt_switches: 9213436 9213436 mate-power-manager
> /proc/2305/status:nonvoluntary_ctxt_switches: 8328471 8328471 /usr/lib/matecomponent/matecomponent-activation-server
> /proc/3041/status:nonvoluntary_ctxt_switches: 7638312 sh -c "/opt/arm-2011.03/bin/arm-none-eabi-gcc" [...]
> /proc/3055/status:nonvoluntary_ctxt_switches: 4843977 sh -c "/opt/arm-2011.03/bin/arm-none-eabi-gcc" [...]
> /proc/2253/status:nonvoluntary_ctxt_switches: 4449918 4449918 mate-keyring-daemon
Thank you for the info!
> The processes for the tables were partly collected by hand so they may
> not contain every relevant process. But what I see is that there are a
> lot more context switches in the bad case, and many of these are
> nonvoluntary. The context switch increase is pretty much everywhere, not
> just ksoftirqd. I suspect the Mate related processes did their context
> switches during bootup/login, which is consistent with the bootup/login
> procedure being significantly slower (just like the compilation).
Also, the non-ksoftirq context switches are nonvoluntary, which likely
means that some of their context switches were preemptions by ksoftirqd.
OK, time to look at whatever else might be causing this...
Thanx, Paul
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: Unusually high system CPU usage with recent kernels
@ 2013-09-03 21:11 Tibor Billes
2013-09-03 22:16 ` Paul E. McKenney
0 siblings, 1 reply; 28+ messages in thread
From: Tibor Billes @ 2013-09-03 21:11 UTC (permalink / raw)
To: paulmck; +Cc: linux-kernel
> From: Paul E. McKenney Sent: 08/30/13 03:24 AM
> On Tue, Aug 27, 2013 at 10:05:42PM +0200, Tibor Billes wrote:
> > From: Paul E. McKenney Sent: 08/26/13 06:28 AM
> > > Here is a patch that is more likely to help. I am testing it in parallel,
> > > but figured I should send you a sneak preview.
> >
> > I tried it, but I don't see any difference in overall performance. The dstat
> > also shows the same as before.
> >
> > But I did notice something. Occasionally userspace CPU usage increases,
> > interrupts and context switches drop, and it really gets more work done
> > (scons printed commands more frequently). I checked that this behaviour is
> > present without your patch; I just didn't notice it before. Maybe you can
> > make some sense out of it.
> >
> > ----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
> > time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
> > 27-08 20:51:53| 23 62 5 0 11 0| 0 0 | 0 0 | 0 0 |1274 3102k| 0 7934M| 549M 56.0M 491M 6698M| 0 28 156 159
> > 27-08 20:51:54| 24 64 1 0 11 0| 0 0 | 0 0 | 0 0 |1317 3165k| 0 7934M| 549M 56.0M 491M 6698M| 0 53 189 182
> > 27-08 20:51:55| 33 50 6 2 9 0| 192k 1832k| 0 0 | 0 0 |1371 2442k| 0 7934M| 544M 56.0M 492M 6702M| 0 30k 17k 17k
> > 27-08 20:51:56| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1313 3220k| 0 7934M| 544M 56.0M 492M 6701M| 0 21 272 232
> > 27-08 20:51:57| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1319 3226k| 0 7934M| 544M 56.0M 492M 6701M| 0 8 96 112
> > 27-08 20:51:58| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1317 3224k| 0 7934M| 544M 56.0M 492M 6701M| 0 12 145 141
> > 27-08 20:51:59| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1317 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 54 193 191
> > 27-08 20:52:00| 25 63 0 0 12 0| 0 24k| 0 0 | 0 0 |1336 3216k| 0 7934M| 544M 56.0M 492M 6701M| 0 36 161 172
> > 27-08 20:52:01| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1313 3225k| 0 7934M| 544M 56.0M 492M 6701M| 0 9 107 107
> > 27-08 20:52:02| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1327 3224k| 0 7934M| 545M 56.0M 492M 6701M| 0 13 193 200
> > 27-08 20:52:03| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1311 3226k| 0 7934M| 545M 56.0M 492M 6701M| 0 13 114 114
> > 27-08 20:52:04| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1331 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 41 190 178
> > 27-08 20:52:05| 24 64 0 0 12 0| 0 8192B| 0 0 | 0 0 |1315 3222k| 0 7934M| 544M 56.0M 492M 6701M| 0 30 123 122
> > 27-08 20:52:06| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1314 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 16 187 195
> > 27-08 20:52:07| 25 63 1 0 12 0|2212k 192k| 0 0 | 0 0 |1637 3194k| 0 7934M| 544M 56.2M 494M 6699M| 0 1363 2590 1947
> > 27-08 20:52:08| 17 33 18 26 6 0|3208k 0 | 0 0 | 0 0 |1351 1766k| 0 7934M| 561M 56.3M 499M 6678M| 4 10k 7620 2055
> > 27-08 20:52:09| 47 21 16 13 4 0|4332k 628k| 0 0 | 0 0 |1400 1081k| 0 7934M| 647M 56.3M 504M 6587M| 10 24k 25k 1151
> > 27-08 20:52:10| 36 34 13 11 6 0|2636k 2820k| 0 0 | 0 0 |1451 1737k| 0 7934M| 598M 56.3M 507M 6632M| 5 19k 16k 28k
> > 27-08 20:52:11| 46 17 10 25 3 0|4288k 536k| 0 0 | 0 0 |1386 868k| 0 7934M| 613M 56.3M 513M 6611M| 24 13k 8908 3616
> > 27-08 20:52:12| 53 33 5 4 5 0|4740k 3992k| 0 0 | 0 0 |1773 1464k| 0 7934M| 562M 56.6M 521M 6654M| 0 36k 29k 40k
> > 27-08 20:52:13| 60 34 0 1 6 0|4228k 1416k| 0 0 | 0 0 |1642 1670k| 0 7934M| 593M 56.6M 526M 6618M| 0 36k 26k 17k
> > 27-08 20:52:14| 53 37 1 3 7 0|3008k 1976k| 0 0 | 0 0 |1513 1972k| 0 7934M| 541M 56.6M 529M 6668M| 0 10k 9986 23k
> > 27-08 20:52:15| 55 34 1 4 6 0|3636k 1284k| 0 0 | 0 0 |1645 1688k| 0 7934M| 581M 56.6M 535M 6622M| 0 43k 30k 18k
> > 27-08 20:52:16| 57 30 5 2 5 0|4404k 2320k| 0 0 | 0 0 |1715 1489k| 0 7934M| 570M 56.6M 541M 6627M| 0 39k 24k 26k
> > 27-08 20:52:17| 50 35 3 7 6 0|2520k 1972k| 0 0 | 0 0 |1699 1598k| 0 7934M| 587M 56.6M 554M 6596M| 0 65k 40k 32k
> > 27-08 20:52:18| 52 40 2 1 7 0|1556k 1732k| 0 0 | 0 0 |1582 1865k| 0 7934M| 551M 56.6M 567M 6619M| 0 35k 26k 33k
> > 27-08 20:52:19| 26 62 0 0 12 0| 0 0 | 0 0 | 0 0 |1351 3240k| 0 7934M| 551M 56.6M 568M 6619M| 0 86 440 214
> > 27-08 20:52:20| 26 63 0 0 11 0| 0 108k| 0 0 | 0 0 |1392 3162k| 0 7934M| 555M 56.6M 560M 6623M| 0 1801 1490 2672
> > 27-08 20:52:21| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1332 3198k| 0 7934M| 555M 56.6M 560M 6623M| 0 50 245 255
> > 27-08 20:52:22| 25 63 1 0 12 0| 0 0 | 0 0 | 0 0 |1350 3220k| 0 7934M| 556M 56.6M 560M 6622M| 0 755 544 286
> > 27-08 20:52:23| 27 62 1 0 11 0| 0 272k| 0 0 | 0 0 |1323 3092k| 0 7934M| 551M 56.6M 558M 6628M| 0 341 1464 3085
> > 27-08 20:52:24| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1334 3197k| 0 7934M| 551M 56.6M 559M 6627M| 0 63 514 273
> > 27-08 20:52:25| 25 63 0 0 12 0| 0 40k| 0 0 | 0 0 |1329 3243k| 0 7934M| 546M 56.6M 558M 6633M| 0 321 160 1679
> > 27-08 20:52:26| 39 50 1 1 9 0| 48k 644k| 0 0 | 0 0 |1500 2556k| 0 7934M| 552M 56.6M 560M 6625M| 0 30k 14k 12k
> > 27-08 20:52:27| 26 62 1 0 11 0| 0 192k| 0 0 | 0 0 |1380 3152k| 0 7934M| 553M 56.6M 560M 6624M| 0 2370 808 718
> > 27-08 20:52:28| 23 55 12 0 10 0| 0 0 | 0 0 | 0 0 |1247 2993k| 0 7934M| 553M 56.6M 561M 6624M| 0 1060 428 241
> > 27-08 20:52:29| 25 63 1 0 11 0| 0 0 | 0 0 | 0 0 |1318 3142k| 0 7934M| 554M 56.6M 561M 6623M| 0 663 442 198
> > 27-08 20:52:30| 25 64 0 0 12 0| 0 100k| 0 0 | 0 0 |1316 3212k| 0 7934M| 554M 56.6M 561M 6623M| 0 42 187 186
> > 27-08 20:52:31| 24 64 0 0 12 0| 0 4096B| 0 0 | 0 0 |1309 3222k| 0 7934M| 554M 56.6M 561M 6623M| 0 9 85 85
> > 27-08 20:52:32| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1297 3219k| 0 7934M| 554M 56.6M 561M 6623M| 0 23 95 84
> > 27-08 20:52:33| 25 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1339 3101k| 0 7934M| 555M 56.6M 557M 6625M| 0 923 1566 2176
> > 27-08 20:52:34| 25 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1333 3095k| 0 7934M| 555M 56.6M 559M 6623M| 0 114 701 195
> > 27-08 20:52:35| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1342 3036k| 0 7934M| 557M 56.6M 568M 6613M| 0 957 3225 516
> > 27-08 20:52:36| 26 60 2 0 11 0| 0 28k| 0 0 | 0 0 |1340 3091k| 0 7934M| 555M 56.6M 568M 6614M| 0 5304 5422 5609
> > 27-08 20:52:37| 25 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1348 3260k| 0 7934M| 556M 56.6M 565M 6617M| 0 30 156 1073
> > 27-08 20:52:38| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1314 3211k| 0 7934M| 556M 56.6M 549M 6633M| 0 11 105 4285
> > 27-08 20:52:39| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1353 3031k| 0 7934M| 558M 56.6M 559M 6620M| 0 847 3866 357
> > 27-08 20:52:40| 26 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1309 3135k| 0 7934M| 569M 56.6M 566M 6602M| 0 3940 5727 1288
>
> Interesting. The number of context switches drops during the time
> that throughput improves. It would be good to find out what task(s)
> are doing all of the context switches. One way to find the pids of the
> top 20 context-switching tasks should be something like this:
>
> grep ctxt /proc/*/status | sort -k2nr | head -20
>
> You could use any number of methods to map back to the command.
> When generating my last patch, I was assuming that ksoftirqd would be
> the big offender. Of course, if it is something else, I should be
> taking a different approach.
I did some measuring of these context switches. I used the above command
line to get the number of context switches before compiling and about 40
seconds later.
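In shell terms, that amounts to roughly the following sketch (the snapshot file names and the exact 40-second wait are assumptions, not what was literally typed):

  grep ctxt /proc/*/status | sort -k2nr | head -20 > ctxt.before   # snapshot before the build
  scons -j4 &                                                      # start the parallel build
  sleep 40                                                         # roughly the good-case build time
  grep ctxt /proc/*/status | sort -k2nr | head -20 > ctxt.after    # snapshot ~40 seconds in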
The good case is:
file before after process name
/proc/10/status:voluntary_ctxt_switches: 38108 54327 [rcu_sched]
/proc/19/status:voluntary_ctxt_switches: 31022 38682 [ksoftirqd/2]
/proc/14/status:voluntary_ctxt_switches: 12947 16914 [ksoftirqd/1]
/proc/3/status:voluntary_ctxt_switches: 11061 14885 [ksoftirqd/0]
/proc/194/status:voluntary_ctxt_switches: 10044 11660 [kworker/2:1H]
/proc/1603/status:voluntary_ctxt_switches: 7626 9593 /usr/bin/X
/proc/24/status:voluntary_ctxt_switches: 6415 9571 [ksoftirqd/3]
/proc/20/status:voluntary_ctxt_switches: 5317 8879 [kworker/2:0]
The bad case is:
file before after process name
/proc/3/status:voluntary_ctxt_switches: 82411972 98542227 [ksoftirqd/0]
/proc/23/status:voluntary_ctxt_switches: 79592040 94206823 [ksoftirqd/3]
/proc/13/status:voluntary_ctxt_switches: 79053870 93654161 [ksoftirqd/1]
/proc/18/status:voluntary_ctxt_switches: 78136449 92288688 [ksoftirqd/2]
/proc/2293/status:nonvoluntary_ctxt_switches: 29038881 29038881 mate-panel
/proc/2308/status:nonvoluntary_ctxt_switches: 26607661 26607661 nm-applet
/proc/2317/status:nonvoluntary_ctxt_switches: 15494474 15494474 /usr/lib/polkit-mate/polkit-mate-authentication-agent-1
/proc/2148/status:nonvoluntary_ctxt_switches: 13763674 13763674 x-session-manager
/proc/2985/status:nonvoluntary_ctxt_switches: 12062706 python /usr/bin/scons -j4
/proc/2323/status:nonvoluntary_ctxt_switches: 11581510 11581510 mate-volume-control-applet
/proc/2353/status:nonvoluntary_ctxt_switches: 9213436 9213436 mate-power-manager
/proc/2305/status:nonvoluntary_ctxt_switches: 8328471 8328471 /usr/lib/matecomponent/matecomponent-activation-server
/proc/3041/status:nonvoluntary_ctxt_switches: 7638312 sh -c "/opt/arm-2011.03/bin/arm-none-eabi-gcc" [...]
/proc/3055/status:nonvoluntary_ctxt_switches: 4843977 sh -c "/opt/arm-2011.03/bin/arm-none-eabi-gcc" [...]
/proc/2253/status:nonvoluntary_ctxt_switches: 4449918 4449918 mate-keyring-daemon
The processes for the tables were partly collected by hand so they may
not contain every relevant process. But what I see is that there are a
lot more context switches in the bad case, and many of these are
nonvoluntary. The context switch increase is pretty much everywhere, not
just ksoftirqd. I suspect the Mate related processes did their context
switches during bootup/login, which is consistent with the bootup/login
procedure being significantly slower (just like the compilation).
Tibor
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: Unusually high system CPU usage with recent kernels
2013-08-27 20:05 Tibor Billes
@ 2013-08-30 1:24 ` Paul E. McKenney
0 siblings, 0 replies; 28+ messages in thread
From: Paul E. McKenney @ 2013-08-30 1:24 UTC (permalink / raw)
To: Tibor Billes; +Cc: linux-kernel
On Tue, Aug 27, 2013 at 10:05:42PM +0200, Tibor Billes wrote:
> > From: Paul E. McKenney Sent: 08/26/13 06:28 AM
> > On Sun, Aug 25, 2013 at 09:50:21PM +0200, Tibor Billes wrote:
> > > From: Paul E. McKenney Sent: 08/24/13 11:03 PM
> > > > On Sat, Aug 24, 2013 at 09:59:45PM +0200, Tibor Billes wrote:
> > > > > From: Paul E. McKenney Sent: 08/22/13 12:09 AM
> > > > > > On Wed, Aug 21, 2013 at 11:05:51PM +0200, Tibor Billes wrote:
> > > > > > > > From: Paul E. McKenney Sent: 08/21/13 09:12 PM
> > > > > > > > On Wed, Aug 21, 2013 at 08:14:46PM +0200, Tibor Billes wrote:
> > > > > > > > > > From: Paul E. McKenney Sent: 08/20/13 11:43 PM
> > > > > > > > > > On Tue, Aug 20, 2013 at 10:52:26PM +0200, Tibor Billes wrote:
> > > > > > > > > > > > From: Paul E. McKenney Sent: 08/20/13 04:53 PM
> > > > > > > > > > > > On Tue, Aug 20, 2013 at 08:01:28AM +0200, Tibor Billes wrote:
> > > > > > > > > > > > > Hi,
> > > > > > > > > > > > >
> > > > > > > > > > > > > I was using the 3.9.7 stable release and tried to upgrade to the 3.10.x series.
> > > > > > > > > > > > > The 3.10.x series was showing unusually high (>75%) system CPU usage in some
> > > > > > > > > > > > > situations, making things really slow. The latest stable I tried is 3.10.7.
> > > > > > > > > > > > > I also tried 3.11-rc5, they both show this behaviour. This behaviour doesn't
> > > > > > > > > > > > > show up when the system is idling, only when doing some CPU intensive work,
> > > > > > > > > > > > > like compiling with multiple threads. Compiling with only one thread seems not
> > > > > > > > > > > > > to trigger this behaviour.
> > > > > > > > > > > > >
> > > > > > > > > > > > > To be more precise I did a `perf record -a` while compiling a large C++ program
> > > > > > > > > > > > > with scons using 4 threads, the result is appended at the end of this email.
> > > > > > > > > > > >
> > > > > > > > > > > > New one on me! You are running a mainstream system (x86_64), so I am
> > > > > > > > > > > > surprised no one else noticed.
> > > > > > > > > > > >
> > > > > > > > > > > > Could you please send along your .config file?
> > > > > > > > > > >
> > > > > > > > > > > Here it is
> > > > > > > > > >
> > > > > > > > > > Interesting. I don't see RCU stuff all that high on the list, but
> > > > > > > > > > the items I do see lead me to suspect RCU_FAST_NO_HZ, which has some
> > > > > > > > > > relevance to the otherwise inexplicable group of commits you located
> > > > > > > > > > with your bisection. Could you please rerun with CONFIG_RCU_FAST_NO_HZ=n?
> > > > > > > > > >
> > > > > > > > > > If that helps, there are some things I could try.
> > > > > > > > >
> > > > > > > > > It did help. I didn't notice anything unusual when running with CONFIG_RCU_FAST_NO_HZ=n.
> > > > > > > >
> > > > > > > > Interesting. Thank you for trying this -- and we at least have a
> > > > > > > > short-term workaround for this problem. I will put a patch together
> > > > > > > > for further investigation.
> > > > > > >
> > > > > > > I don't specifically need this config option so I'm fine without it in
> > > > > > > the long term, but I guess it's not supposed to behave like that.
> > > > > >
> > > > > > OK, good, we have a long-term workaround for your specific case,
> > > > > > even better. ;-)
> > > > > >
> > > > > > But yes, there are situations where RCU_FAST_NO_HZ needs to work
> > > > > > a bit better. I hope you will bear with me with a bit more
> > > > > > testing...
> > > > > >
> > > > > > > > In the meantime, could you please tell me how you were measuring
> > > > > > > > performance for your kernel builds? Wall-clock time required to complete
> > > > > > > > one build? Number of builds completed per unit time? Something else?
> > > > > > >
> > > > > > > Actually, I wasn't all this sophisticated. I have a system monitor
> > > > > > > applet on my top panel (using MATE, Linux Mint), four little graphs,
> > > > > > > one of which shows CPU usage. Different colors indicate different kinds
> > > > > > > of CPU usage. Blue shows user space usage, red shows system usage, and
> > > > > > > two more for nice and iowait. During a normal compile it's almost
> > > > > > > completely filled with blue user space CPU usage, only the top few
> > > > > > > pixels show some iowait and system usage. With CONFIG_RCU_FAST_NO_HZ
> > > > > > > set, about 3/4 of the graph was red system CPU usage, the rest was
> > > > > > > blue. So I just looked for a pile of red on my graphs when I tested
> > > > > > > different kernel builds. But also compile speed was horrible I couldn't
> > > > > > > wait for the build to finish. Even the UI got unresponsive.
> > > > > >
> > > > > > We have been having problems with CPU accounting, but this one looks
> > > > > > quite real.
> > > > > >
> > > > > > > Now I did some measuring. In the normal case a compile finished in 36
> > > > > > > seconds, compiled 315 object files. Here are some output lines from
> > > > > > > dstat -tasm --vm during compile:
> > > > > > > ----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
> > > > > > > time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
> > > > > > > 21-08 21:48:05| 91 8 2 0 0 0| 0 5852k| 0 0 | 0 0 |1413 1772 | 0 7934M| 581M 58.0M 602M 6553M| 0 71k 46k 54k
> > > > > > > 21-08 21:48:06| 93 6 1 0 0 0| 0 2064k| 137B 131B| 0 0 |1356 1650 | 0 7934M| 649M 58.0M 604M 6483M| 0 72k 47k 28k
> > > > > > > 21-08 21:48:07| 86 11 4 0 0 0| 0 5872k| 0 0 | 0 0 |2000 2991 | 0 7934M| 577M 58.0M 627M 6531M| 0 99k 67k 79k
> > > > > > > 21-08 21:48:08| 87 9 3 0 0 0| 0 2840k| 0 0 | 0 0 |2558 4164 | 0 7934M| 597M 58.0M 632M 6507M| 0 96k 57k 51k
> > > > > > > 21-08 21:48:09| 93 7 1 0 0 0| 0 3032k| 0 0 | 0 0 |1329 1512 | 0 7934M| 641M 58.0M 626M 6469M| 0 61k 48k 39k
> > > > > > > 21-08 21:48:10| 93 6 0 0 0 0| 0 4984k| 0 0 | 0 0 |1160 1146 | 0 7934M| 572M 58.0M 628M 6536M| 0 50k 40k 57k
> > > > > > > 21-08 21:48:11| 86 9 6 0 0 0| 0 2520k| 0 0 | 0 0 |2947 4760 | 0 7934M| 605M 58.0M 631M 6500M| 0 103k 55k 45k
> > > > > > > 21-08 21:48:12| 90 8 2 0 0 0| 0 2840k| 0 0 | 0 0 |2674 4179 | 0 7934M| 671M 58.0M 635M 6431M| 0 84k 59k 42k
> > > > > > > 21-08 21:48:13| 90 9 1 0 0 0| 0 4656k| 0 0 | 0 0 |1223 1410 | 0 7934M| 643M 58.0M 638M 6455M| 0 90k 62k 68k
> > > > > > > 21-08 21:48:14| 91 8 1 0 0 0| 0 3572k| 0 0 | 0 0 |1432 1828 | 0 7934M| 647M 58.0M 641M 6447M| 0 81k 59k 57k
> > > > > > > 21-08 21:48:15| 91 8 1 0 0 0| 0 5116k| 116B 0 | 0 0 |1194 1295 | 0 7934M| 605M 58.0M 644M 6487M| 0 69k 54k 64k
> > > > > > > 21-08 21:48:16| 87 10 3 0 0 0| 0 5140k| 0 0 | 0 0 |1761 2586 | 0 7934M| 584M 58.0M 650M 6502M| 0 105k 64k 68k
> > > > > > >
> > > > > > > The abnormal case compiled only 182 object files in 6 and a half minutes,
> > > > > > > then I stopped it. The same dstat output for this case:
> > > > > > > ----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
> > > > > > > time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
> > > > > > > 21-08 22:10:49| 27 62 0 0 11 0| 0 0 | 210B 0 | 0 0 |1414 3137k| 0 7934M| 531M 57.6M 595M 6611M| 0 1628 1250 322
> > > > > > > 21-08 22:10:50| 25 60 4 0 11 0| 0 88k| 126B 0 | 0 0 |1337 3110k| 0 7934M| 531M 57.6M 595M 6611M| 0 91 128 115
> > > > > > > 21-08 22:10:51| 26 63 0 0 11 0| 0 184k| 294B 0 | 0 0 |1411 3147k| 0 7934M| 531M 57.6M 595M 6611M| 0 1485 814 815
> > > > > > > 21-08 22:10:52| 26 63 0 0 11 0| 0 0 | 437B 239B| 0 0 |1355 3160k| 0 7934M| 531M 57.6M 595M 6611M| 0 24 94 97
> > > > > > > 21-08 22:10:53| 26 63 0 0 11 0| 0 0 | 168B 0 | 0 0 |1397 3155k| 0 7934M| 531M 57.6M 595M 6611M| 0 479 285 273
> > > > > > > 21-08 22:10:54| 26 63 0 0 11 0| 0 4096B| 396B 324B| 0 0 |1346 3154k| 0 7934M| 531M 57.6M 595M 6611M| 0 27 145 145
> > > > > > > 21-08 22:10:55| 26 63 0 0 11 0| 0 60k| 0 0 | 0 0 |1353 3148k| 0 7934M| 531M 57.6M 595M 6610M| 0 93 117 36
> > > > > > > 21-08 22:10:56| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1341 3172k| 0 7934M| 531M 57.6M 595M 6610M| 0 158 87 74
> > > > > > > 21-08 22:10:57| 26 62 1 0 11 0| 0 0 | 42B 60B| 0 0 |1332 3162k| 0 7934M| 531M 57.6M 595M 6610M| 0 56 82 78
> > > > > > > 21-08 22:10:58| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1334 3178k| 0 7934M| 531M 57.6M 595M 6610M| 0 26 56 56
> > > > > > > 21-08 22:10:59| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1336 3179k| 0 7934M| 531M 57.6M 595M 6610M| 0 3 33 32
> > > > > > > 21-08 22:11:00| 26 63 0 0 11 0| 0 24k| 90B 108B| 0 0 |1347 3172k| 0 7934M| 531M 57.6M 595M 6610M| 0 41 73 71
> > > > > > >
> > > > > > > I have four logical cores, so 25% makes up 1 core. I don't know if the ~26% user CPU usage has anything to do with this fact or is just a coincidence. The rest is ~63% system and ~11% hardware interrupt. Do these support what you suspect?
> > > > > >
> > > > > > The massive increase in context switches does come as a bit of a surprise!
> > > > > > It does rule out my initial suspicion of lock contention, but then again
> > > > > > the fact that you have only four CPUs made that pretty unlikely to begin
> > > > > > with.
> > > > > >
> > > > > > 2.4k average context switches in the good case for the full run vs. 3,156k
> > > > > > for about half of a run in the bad case. That is an increase of more
> > > > > > than three orders of magnitude!
> > > > > >
> > > > > > Yow!!!
> > > > > >
> > > > > > Page faults are actually -higher- in the good case. You have about 6.5GB
> > > > > > free in both cases, so you are not running out of memory. Lots more disk
> > > > > > writes in the good case, perhaps consistent with its getting more done.
> > > > > > Networking is negligible in both cases.
> > > > > >
> > > > > > Lots of hardware interrupts in the bad case as well. Would you be willing
> > > > > > to take a look at /proc/interrupts before and after to see which one you
> > > > > > are getting hit with? (Or whatever interrupt tracking tool you prefer.)
> > > > >
> > > > > Here are the results.
> > > > >
> > > > > Good case before:
> > > > > CPU0 CPU1 CPU2 CPU3
> > > > > 0: 17 0 0 0 IO-APIC-edge timer
> > > > > 1: 356 1 68 4 IO-APIC-edge i8042
> > > > > 8: 0 0 1 0 IO-APIC-edge rtc0
> > > > > 9: 330 14 449 71 IO-APIC-fasteoi acpi
> > > > > 12: 10 108 269 2696 IO-APIC-edge i8042
> > > > > 16: 36 10 111 2 IO-APIC-fasteoi ehci_hcd:usb1
> > > > > 17: 20 3 25 4 IO-APIC-fasteoi mmc0
> > > > > 21: 3 0 34 0 IO-APIC-fasteoi ehci_hcd:usb2
> > > > > 40: 0 1 12 11 PCI-MSI-edge mei_me
> > > > > 41: 10617 173 9959 292 PCI-MSI-edge ahci
> > > > > 42: 862 11 186 26 PCI-MSI-edge xhci_hcd
> > > > > 43: 107 77 27 102 PCI-MSI-edge i915
> > > > > 44: 5322 20 434 22 PCI-MSI-edge iwlwifi
> > > > > 45: 180 0 183 86 PCI-MSI-edge snd_hda_intel
> > > > > 46: 0 3 0 0 PCI-MSI-edge eth0
> > > > > NMI: 1 0 0 0 Non-maskable interrupts
> > > > > LOC: 16312 15177 10840 8995 Local timer interrupts
> > > > > SPU: 0 0 0 0 Spurious interrupts
> > > > > PMI: 1 0 0 0 Performance monitoring interrupts
> > > > > IWI: 1160 523 1031 481 IRQ work interrupts
> > > > > RTR: 3 0 0 0 APIC ICR read retries
> > > > > RES: 14976 16135 9973 10784 Rescheduling interrupts
> > > > > CAL: 482 457 151 370 Function call interrupts
> > > > > TLB: 70 106 352 230 TLB shootdowns
> > > > > TRM: 0 0 0 0 Thermal event interrupts
> > > > > THR: 0 0 0 0 Threshold APIC interrupts
> > > > > MCE: 0 0 0 0 Machine check exceptions
> > > > > MCP: 2 2 2 2 Machine check polls
> > > > > ERR: 0
> > > > > MIS: 0
> > > > >
> > > > > Good case after:
> > > > > CPU0 CPU1 CPU2 CPU3
> > > > > 0: 17 0 0 0 IO-APIC-edge timer
> > > > > 1: 367 1 81 4 IO-APIC-edge i8042
> > > > > 8: 0 0 1 0 IO-APIC-edge rtc0
> > > > > 9: 478 14 460 71 IO-APIC-fasteoi acpi
> > > > > 12: 10 108 269 2696 IO-APIC-edge i8042
> > > > > 16: 36 10 111 2 IO-APIC-fasteoi ehci_hcd:usb1
> > > > > 17: 20 3 25 4 IO-APIC-fasteoi mmc0
> > > > > 21: 3 0 34 0 IO-APIC-fasteoi ehci_hcd:usb2
> > > > > 40: 0 1 12 11 PCI-MSI-edge mei_me
> > > > > 41: 16888 173 9959 292 PCI-MSI-edge ahci
> > > > > 42: 1102 11 186 26 PCI-MSI-edge xhci_hcd
> > > > > 43: 107 132 27 136 PCI-MSI-edge i915
> > > > > 44: 6943 20 434 22 PCI-MSI-edge iwlwifi
> > > > > 45: 180 0 183 86 PCI-MSI-edge snd_hda_intel
> > > > > 46: 0 3 0 0 PCI-MSI-edge eth0
> > > > > NMI: 4 3 3 3 Non-maskable interrupts
> > > > > LOC: 26845 24780 19025 17746 Local timer interrupts
> > > > > SPU: 0 0 0 0 Spurious interrupts
> > > > > PMI: 4 3 3 3 Performance monitoring interrupts
> > > > > IWI: 1637 751 1287 695 IRQ work interrupts
> > > > > RTR: 3 0 0 0 APIC ICR read retries
> > > > > RES: 26511 26673 18791 20194 Rescheduling interrupts
> > > > > CAL: 510 480 151 370 Function call interrupts
> > > > > TLB: 361 292 575 461 TLB shootdowns
> > > > > TRM: 0 0 0 0 Thermal event interrupts
> > > > > THR: 0 0 0 0 Threshold APIC interrupts
> > > > > MCE: 0 0 0 0 Machine check exceptions
> > > > > MCP: 2 2 2 2 Machine check polls
> > > > > ERR: 0
> > > > > MIS: 0
> > > > >
> > > > > Bad case before:
> > > > > CPU0 CPU1 CPU2 CPU3
> > > > > 0: 17 0 0 0 IO-APIC-edge timer
> > > > > 1: 172 3 78 3 IO-APIC-edge i8042
> > > > > 8: 0 1 0 0 IO-APIC-edge rtc0
> > > > > 9: 1200 148 395 81 IO-APIC-fasteoi acpi
> > > > > 12: 1625 2 348 10 IO-APIC-edge i8042
> > > > > 16: 26 23 115 0 IO-APIC-fasteoi ehci_hcd:usb1
> > > > > 17: 16 3 12 21 IO-APIC-fasteoi mmc0
> > > > > 21: 2 2 33 0 IO-APIC-fasteoi ehci_hcd:usb2
> > > > > 40: 0 0 14 10 PCI-MSI-edge mei_me
> > > > > 41: 15776 374 8497 687 PCI-MSI-edge ahci
> > > > > 42: 1297 829 115 24 PCI-MSI-edge xhci_hcd
> > > > > 43: 103 149 9 212 PCI-MSI-edge i915
> > > > > 44: 13151 101 511 91 PCI-MSI-edge iwlwifi
> > > > > 45: 153 159 0 122 PCI-MSI-edge snd_hda_intel
> > > > > 46: 0 1 1 0 PCI-MSI-edge eth0
> > > > > NMI: 32 31 31 31 Non-maskable interrupts
> > > > > LOC: 82504 82732 74172 75985 Local timer interrupts
> > > > > SPU: 0 0 0 0 Spurious interrupts
> > > > > PMI: 32 31 31 31 Performance monitoring interrupts
> > > > > IWI: 17816 16278 13833 13282 IRQ work interrupts
> > > > > RTR: 3 0 0 0 APIC ICR read retries
> > > > > RES: 18784 21084 13313 12946 Rescheduling interrupts
> > > > > CAL: 393 422 306 356 Function call interrupts
> > > > > TLB: 231 176 235 191 TLB shootdowns
> > > > > TRM: 0 0 0 0 Thermal event interrupts
> > > > > THR: 0 0 0 0 Threshold APIC interrupts
> > > > > MCE: 0 0 0 0 Machine check exceptions
> > > > > MCP: 3 3 3 3 Machine check polls
> > > > > ERR: 0
> > > > > MIS: 0
> > > > >
> > > > > Bad case after:
> > > > > CPU0 CPU1 CPU2 CPU3
> > > > > 0: 17 0 0 0 IO-APIC-edge timer
> > > > > 1: 415 3 85 3 IO-APIC-edge i8042
> > > > > 8: 0 1 0 0 IO-APIC-edge rtc0
> > > > > 9: 1277 148 428 81 IO-APIC-fasteoi acpi
> > > > > 12: 1625 2 348 10 IO-APIC-edge i8042
> > > > > 16: 26 23 115 0 IO-APIC-fasteoi ehci_hcd:usb1
> > > > > 17: 16 3 12 21 IO-APIC-fasteoi mmc0
> > > > > 21: 2 2 33 0 IO-APIC-fasteoi ehci_hcd:usb2
> > > > > 40: 0 0 14 10 PCI-MSI-edge mei_me
> > > > > 41: 17814 374 8497 687 PCI-MSI-edge ahci
> > > > > 42: 1567 829 115 24 PCI-MSI-edge xhci_hcd
> > > > > 43: 103 177 9 242 PCI-MSI-edge i915
> > > > > 44: 14956 101 511 91 PCI-MSI-edge iwlwifi
> > > > > 45: 153 159 0 122 PCI-MSI-edge snd_hda_intel
> > > > > 46: 0 1 1 0 PCI-MSI-edge eth0
> > > > > NMI: 36 35 34 34 Non-maskable interrupts
> > > > > LOC: 92429 92708 81714 84071 Local timer interrupts
> > > > > SPU: 0 0 0 0 Spurious interrupts
> > > > > PMI: 36 35 34 34 Performance monitoring interrupts
> > > > > IWI: 22594 19658 17439 14257 IRQ work interrupts
> > > > > RTR: 3 0 0 0 APIC ICR read retries
> > > > > RES: 21491 24670 14618 14569 Rescheduling interrupts
> > > > > CAL: 441 439 306 356 Function call interrupts
> > > > > TLB: 232 181 274 465 TLB shootdowns
> > > > > TRM: 0 0 0 0 Thermal event interrupts
> > > > > THR: 0 0 0 0 Threshold APIC interrupts
> > > > > MCE: 0 0 0 0 Machine check exceptions
> > > > > MCP: 3 3 3 3 Machine check polls
> > > > > ERR: 0
> > > > > MIS: 0
> > > >
> > > > Lots more local timer interrupts, which is consistent with the higher
> > > > time in interrupt handlers for the bad case.
> > > >
> > > > > > One hypothesis is that your workload and configuration are interacting
> > > > > > with RCU_FAST_NO_HZ to force very large numbers of RCU grace periods.
> > > > > > Could you please check for this by building with CONFIG_RCU_TRACE=y,
> > > > > > mounting debugfs somewhere and dumping rcu/rcu_sched/rcugp before and
> > > > > > after each run?
> > > > >
> > > > > Good case before:
> > > > > completed=8756 gpnum=8757 age=0 max=21
> > > > > after:
> > > > > completed=14686 gpnum=14687 age=0 max=21
> > > > >
> > > > > Bad case before:
> > > > > completed=22970 gpnum=22971 age=0 max=21
> > > > > after:
> > > > > completed=26110 gpnum=26111 age=0 max=21
> > > >
> > > > In the good case, (14686-8756)/40=148.25 grace periods per second, which
> > > > is a fast but reasonable rate given your HZ=250. Not a large difference
> > > > in the number of grace periods, but extrapolating for the longer runtime,
> > > > maybe ten times as much. But not much change in grace-period rate per
> > > > unit time.
> > > >
> > > > > The test scenario was the following in both cases (mixed English and pseudo-bash):
> > > > > reboot, login, start terminal
> > > > > cd project
> > > > > rm -r build
> > > > > cat /proc/interrupts >> somefile ; cat /sys/kernel/debug/rcu/rcu_sched/rcugp >> somefile
> > > > > scons -j4
> > > > > wait ~40 sec (good case finished, Ctrl-C in bad case)
> > > > > cat /proc/interrupts >> somefile ; cat /sys/kernel/debug/rcu/rcu_sched/rcugp >> somefile
> > > > >
> > > > > I stopped the build in the bad case after about the same time the good
> > > > > case finished, so the extra interrupts and RCU grace periods because of the
> > > > > longer runtime don't skew the results.
> > > >
> > > > That procedure works for me, thank you for laying it out carefully.
> > > >
> > > > I believe I see what is going on and how to fix it, though it may take
> > > > me a bit to work things through and get a good patch.
> > > >
> > > > Thank you very much for your testing efforts!
> > >
> > > I'm glad I can help. I've been using Linux for many years, and now I have a
> > > chance to help the community, to do something in return. I'm quite
> > > enjoying this :)
> >
> > ;-)
> >
> > Here is a patch that is more likely to help. I am testing it in parallel,
> > but figured I should send you a sneak preview.
>
> I tried it, but I don't see any difference in overall performance. The dstat
> also shows the same as before.
>
> But I did notice something. Occasionally userspace CPU usage increases,
> interrupts and context switches drop, and it really gets more work done
> (scons printed commands more frequently). I checked that this behaviour is
> present without your patch; I just didn't notice it before. Maybe you can
> make some sense out of it.
>
> ----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
> time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
> 27-08 20:51:53| 23 62 5 0 11 0| 0 0 | 0 0 | 0 0 |1274 3102k| 0 7934M| 549M 56.0M 491M 6698M| 0 28 156 159
> 27-08 20:51:54| 24 64 1 0 11 0| 0 0 | 0 0 | 0 0 |1317 3165k| 0 7934M| 549M 56.0M 491M 6698M| 0 53 189 182
> 27-08 20:51:55| 33 50 6 2 9 0| 192k 1832k| 0 0 | 0 0 |1371 2442k| 0 7934M| 544M 56.0M 492M 6702M| 0 30k 17k 17k
> 27-08 20:51:56| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1313 3220k| 0 7934M| 544M 56.0M 492M 6701M| 0 21 272 232
> 27-08 20:51:57| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1319 3226k| 0 7934M| 544M 56.0M 492M 6701M| 0 8 96 112
> 27-08 20:51:58| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1317 3224k| 0 7934M| 544M 56.0M 492M 6701M| 0 12 145 141
> 27-08 20:51:59| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1317 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 54 193 191
> 27-08 20:52:00| 25 63 0 0 12 0| 0 24k| 0 0 | 0 0 |1336 3216k| 0 7934M| 544M 56.0M 492M 6701M| 0 36 161 172
> 27-08 20:52:01| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1313 3225k| 0 7934M| 544M 56.0M 492M 6701M| 0 9 107 107
> 27-08 20:52:02| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1327 3224k| 0 7934M| 545M 56.0M 492M 6701M| 0 13 193 200
> 27-08 20:52:03| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1311 3226k| 0 7934M| 545M 56.0M 492M 6701M| 0 13 114 114
> 27-08 20:52:04| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1331 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 41 190 178
> 27-08 20:52:05| 24 64 0 0 12 0| 0 8192B| 0 0 | 0 0 |1315 3222k| 0 7934M| 544M 56.0M 492M 6701M| 0 30 123 122
> 27-08 20:52:06| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1314 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 16 187 195
> 27-08 20:52:07| 25 63 1 0 12 0|2212k 192k| 0 0 | 0 0 |1637 3194k| 0 7934M| 544M 56.2M 494M 6699M| 0 1363 2590 1947
> 27-08 20:52:08| 17 33 18 26 6 0|3208k 0 | 0 0 | 0 0 |1351 1766k| 0 7934M| 561M 56.3M 499M 6678M| 4 10k 7620 2055
> 27-08 20:52:09| 47 21 16 13 4 0|4332k 628k| 0 0 | 0 0 |1400 1081k| 0 7934M| 647M 56.3M 504M 6587M| 10 24k 25k 1151
> 27-08 20:52:10| 36 34 13 11 6 0|2636k 2820k| 0 0 | 0 0 |1451 1737k| 0 7934M| 598M 56.3M 507M 6632M| 5 19k 16k 28k
> 27-08 20:52:11| 46 17 10 25 3 0|4288k 536k| 0 0 | 0 0 |1386 868k| 0 7934M| 613M 56.3M 513M 6611M| 24 13k 8908 3616
> 27-08 20:52:12| 53 33 5 4 5 0|4740k 3992k| 0 0 | 0 0 |1773 1464k| 0 7934M| 562M 56.6M 521M 6654M| 0 36k 29k 40k
> 27-08 20:52:13| 60 34 0 1 6 0|4228k 1416k| 0 0 | 0 0 |1642 1670k| 0 7934M| 593M 56.6M 526M 6618M| 0 36k 26k 17k
> 27-08 20:52:14| 53 37 1 3 7 0|3008k 1976k| 0 0 | 0 0 |1513 1972k| 0 7934M| 541M 56.6M 529M 6668M| 0 10k 9986 23k
> 27-08 20:52:15| 55 34 1 4 6 0|3636k 1284k| 0 0 | 0 0 |1645 1688k| 0 7934M| 581M 56.6M 535M 6622M| 0 43k 30k 18k
> 27-08 20:52:16| 57 30 5 2 5 0|4404k 2320k| 0 0 | 0 0 |1715 1489k| 0 7934M| 570M 56.6M 541M 6627M| 0 39k 24k 26k
> 27-08 20:52:17| 50 35 3 7 6 0|2520k 1972k| 0 0 | 0 0 |1699 1598k| 0 7934M| 587M 56.6M 554M 6596M| 0 65k 40k 32k
> 27-08 20:52:18| 52 40 2 1 7 0|1556k 1732k| 0 0 | 0 0 |1582 1865k| 0 7934M| 551M 56.6M 567M 6619M| 0 35k 26k 33k
> 27-08 20:52:19| 26 62 0 0 12 0| 0 0 | 0 0 | 0 0 |1351 3240k| 0 7934M| 551M 56.6M 568M 6619M| 0 86 440 214
> 27-08 20:52:20| 26 63 0 0 11 0| 0 108k| 0 0 | 0 0 |1392 3162k| 0 7934M| 555M 56.6M 560M 6623M| 0 1801 1490 2672
> 27-08 20:52:21| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1332 3198k| 0 7934M| 555M 56.6M 560M 6623M| 0 50 245 255
> 27-08 20:52:22| 25 63 1 0 12 0| 0 0 | 0 0 | 0 0 |1350 3220k| 0 7934M| 556M 56.6M 560M 6622M| 0 755 544 286
> 27-08 20:52:23| 27 62 1 0 11 0| 0 272k| 0 0 | 0 0 |1323 3092k| 0 7934M| 551M 56.6M 558M 6628M| 0 341 1464 3085
> 27-08 20:52:24| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1334 3197k| 0 7934M| 551M 56.6M 559M 6627M| 0 63 514 273
> 27-08 20:52:25| 25 63 0 0 12 0| 0 40k| 0 0 | 0 0 |1329 3243k| 0 7934M| 546M 56.6M 558M 6633M| 0 321 160 1679
> 27-08 20:52:26| 39 50 1 1 9 0| 48k 644k| 0 0 | 0 0 |1500 2556k| 0 7934M| 552M 56.6M 560M 6625M| 0 30k 14k 12k
> 27-08 20:52:27| 26 62 1 0 11 0| 0 192k| 0 0 | 0 0 |1380 3152k| 0 7934M| 553M 56.6M 560M 6624M| 0 2370 808 718
> 27-08 20:52:28| 23 55 12 0 10 0| 0 0 | 0 0 | 0 0 |1247 2993k| 0 7934M| 553M 56.6M 561M 6624M| 0 1060 428 241
> 27-08 20:52:29| 25 63 1 0 11 0| 0 0 | 0 0 | 0 0 |1318 3142k| 0 7934M| 554M 56.6M 561M 6623M| 0 663 442 198
> 27-08 20:52:30| 25 64 0 0 12 0| 0 100k| 0 0 | 0 0 |1316 3212k| 0 7934M| 554M 56.6M 561M 6623M| 0 42 187 186
> 27-08 20:52:31| 24 64 0 0 12 0| 0 4096B| 0 0 | 0 0 |1309 3222k| 0 7934M| 554M 56.6M 561M 6623M| 0 9 85 85
> 27-08 20:52:32| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1297 3219k| 0 7934M| 554M 56.6M 561M 6623M| 0 23 95 84
> 27-08 20:52:33| 25 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1339 3101k| 0 7934M| 555M 56.6M 557M 6625M| 0 923 1566 2176
> 27-08 20:52:34| 25 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1333 3095k| 0 7934M| 555M 56.6M 559M 6623M| 0 114 701 195
> 27-08 20:52:35| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1342 3036k| 0 7934M| 557M 56.6M 568M 6613M| 0 957 3225 516
> 27-08 20:52:36| 26 60 2 0 11 0| 0 28k| 0 0 | 0 0 |1340 3091k| 0 7934M| 555M 56.6M 568M 6614M| 0 5304 5422 5609
> 27-08 20:52:37| 25 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1348 3260k| 0 7934M| 556M 56.6M 565M 6617M| 0 30 156 1073
> 27-08 20:52:38| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1314 3211k| 0 7934M| 556M 56.6M 549M 6633M| 0 11 105 4285
> 27-08 20:52:39| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1353 3031k| 0 7934M| 558M 56.6M 559M 6620M| 0 847 3866 357
> 27-08 20:52:40| 26 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1309 3135k| 0 7934M| 569M 56.6M 566M 6602M| 0 3940 5727 1288
Interesting. The number of context switches drops during the time
that throughput improves. It would be good to find out what task(s)
are doing all of the context switches. One way to find the pids of the
top 20 context-switching tasks should be something like this:
grep ctxt /proc/*/status | sort -k2nr | head -20
You could use any number of methods to map back to the command.
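For example, one possible way (just a sketch, reading each pid's /proc/<pid>/comm alongside the counts):

  grep ctxt /proc/[0-9]*/status 2>/dev/null | sort -k2nr | head -20 |
  while IFS=: read -r path rest; do
      pid=${path#/proc/}; pid=${pid%/status}        # recover the pid from the file name
      printf '%s:%s  [%s]\n' "$path" "$rest" "$(cat /proc/$pid/comm 2>/dev/null)"
  done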
When generating my last patch, I was assuming that ksoftirqd would be
the big offender. Of course, if it is something else, I should be
taking a different approach.
Thanx, Paul
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: Unusually high system CPU usage with recent kernels
@ 2013-08-27 20:05 Tibor Billes
2013-08-30 1:24 ` Paul E. McKenney
0 siblings, 1 reply; 28+ messages in thread
From: Tibor Billes @ 2013-08-27 20:05 UTC (permalink / raw)
To: paulmck; +Cc: linux-kernel
> From: Paul E. McKenney Sent: 08/26/13 06:28 AM
> On Sun, Aug 25, 2013 at 09:50:21PM +0200, Tibor Billes wrote:
> > From: Paul E. McKenney Sent: 08/24/13 11:03 PM
> > > On Sat, Aug 24, 2013 at 09:59:45PM +0200, Tibor Billes wrote:
> > > > From: Paul E. McKenney Sent: 08/22/13 12:09 AM
> > > > > On Wed, Aug 21, 2013 at 11:05:51PM +0200, Tibor Billes wrote:
> > > > > > > From: Paul E. McKenney Sent: 08/21/13 09:12 PM
> > > > > > > On Wed, Aug 21, 2013 at 08:14:46PM +0200, Tibor Billes wrote:
> > > > > > > > > From: Paul E. McKenney Sent: 08/20/13 11:43 PM
> > > > > > > > > On Tue, Aug 20, 2013 at 10:52:26PM +0200, Tibor Billes wrote:
> > > > > > > > > > > From: Paul E. McKenney Sent: 08/20/13 04:53 PM
> > > > > > > > > > > On Tue, Aug 20, 2013 at 08:01:28AM +0200, Tibor Billes wrote:
> > > > > > > > > > > > Hi,
> > > > > > > > > > > >
> > > > > > > > > > > > I was using the 3.9.7 stable release and tried to upgrade to the 3.10.x series.
> > > > > > > > > > > > The 3.10.x series was showing unusually high (>75%) system CPU usage in some
> > > > > > > > > > > > situations, making things really slow. The latest stable I tried is 3.10.7.
> > > > > > > > > > > > I also tried 3.11-rc5, they both show this behaviour. This behaviour doesn't
> > > > > > > > > > > > show up when the system is idling, only when doing some CPU intensive work,
> > > > > > > > > > > > like compiling with multiple threads. Compiling with only one thread seems not
> > > > > > > > > > > > to trigger this behaviour.
> > > > > > > > > > > >
> > > > > > > > > > > > To be more precise I did a `perf record -a` while compiling a large C++ program
> > > > > > > > > > > > with scons using 4 threads, the result is appended at the end of this email.
> > > > > > > > > > >
> > > > > > > > > > > New one on me! You are running a mainstream system (x86_64), so I am
> > > > > > > > > > > surprised no one else noticed.
> > > > > > > > > > >
> > > > > > > > > > > Could you please send along your .config file?
> > > > > > > > > >
> > > > > > > > > > Here it is
> > > > > > > > >
> > > > > > > > > Interesting. I don't see RCU stuff all that high on the list, but
> > > > > > > > > the items I do see lead me to suspect RCU_FAST_NO_HZ, which has some
> > > > > > > > > relevance to the otherwise inexplicable group of commits you located
> > > > > > > > > with your bisection. Could you please rerun with CONFIG_RCU_FAST_NO_HZ=n?
> > > > > > > > >
> > > > > > > > > If that helps, there are some things I could try.
> > > > > > > >
> > > > > > > > It did help. I didn't notice anything unusual when running with CONFIG_RCU_FAST_NO_HZ=n.
> > > > > > >
> > > > > > > Interesting. Thank you for trying this -- and we at least have a
> > > > > > > short-term workaround for this problem. I will put a patch together
> > > > > > > for further investigation.
> > > > > >
> > > > > > I don't specifically need this config option so I'm fine without it in
> > > > > > the long term, but I guess it's not supposed to behave like that.
> > > > >
> > > > > OK, good, we have a long-term workaround for your specific case,
> > > > > even better. ;-)
> > > > >
> > > > > But yes, there are situations where RCU_FAST_NO_HZ needs to work
> > > > > a bit better. I hope you will bear with me with a bit more
> > > > > testing...
> > > > >
> > > > > > > In the meantime, could you please tell me how you were measuring
> > > > > > > performance for your kernel builds? Wall-clock time required to complete
> > > > > > > one build? Number of builds completed per unit time? Something else?
> > > > > >
> > > > > > Actually, I wasn't all this sophisticated. I have a system monitor
> > > > > > applet on my top panel (using MATE, Linux Mint), four little graphs,
> > > > > > one of which shows CPU usage. Different colors indicate different kinds
> > > > > > of CPU usage. Blue shows user space usage, red shows system usage, and
> > > > > > two more for nice and iowait. During a normal compile it's almost
> > > > > > completely filled with blue user space CPU usage, only the top few
> > > > > > pixels show some iowait and system usage. With CONFIG_RCU_FAST_NO_HZ
> > > > > > set, about 3/4 of the graph was red system CPU usage, the rest was
> > > > > > blue. So I just looked for a pile of red on my graphs when I tested
> > > > > > different kernel builds. But compile speed was also horrible; I couldn't
> > > > > > wait for the build to finish. Even the UI got unresponsive.
> > > > >
> > > > > We have been having problems with CPU accounting, but this one looks
> > > > > quite real.
> > > > >
> > > > > > Now I did some measuring. In the normal case a compile finished in 36
> > > > > > seconds, compiled 315 object files. Here are some output lines from
> > > > > > dstat -tasm --vm during compile:
> > > > > > ----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
> > > > > > time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
> > > > > > 21-08 21:48:05| 91 8 2 0 0 0| 0 5852k| 0 0 | 0 0 |1413 1772 | 0 7934M| 581M 58.0M 602M 6553M| 0 71k 46k 54k
> > > > > > 21-08 21:48:06| 93 6 1 0 0 0| 0 2064k| 137B 131B| 0 0 |1356 1650 | 0 7934M| 649M 58.0M 604M 6483M| 0 72k 47k 28k
> > > > > > 21-08 21:48:07| 86 11 4 0 0 0| 0 5872k| 0 0 | 0 0 |2000 2991 | 0 7934M| 577M 58.0M 627M 6531M| 0 99k 67k 79k
> > > > > > 21-08 21:48:08| 87 9 3 0 0 0| 0 2840k| 0 0 | 0 0 |2558 4164 | 0 7934M| 597M 58.0M 632M 6507M| 0 96k 57k 51k
> > > > > > 21-08 21:48:09| 93 7 1 0 0 0| 0 3032k| 0 0 | 0 0 |1329 1512 | 0 7934M| 641M 58.0M 626M 6469M| 0 61k 48k 39k
> > > > > > 21-08 21:48:10| 93 6 0 0 0 0| 0 4984k| 0 0 | 0 0 |1160 1146 | 0 7934M| 572M 58.0M 628M 6536M| 0 50k 40k 57k
> > > > > > 21-08 21:48:11| 86 9 6 0 0 0| 0 2520k| 0 0 | 0 0 |2947 4760 | 0 7934M| 605M 58.0M 631M 6500M| 0 103k 55k 45k
> > > > > > 21-08 21:48:12| 90 8 2 0 0 0| 0 2840k| 0 0 | 0 0 |2674 4179 | 0 7934M| 671M 58.0M 635M 6431M| 0 84k 59k 42k
> > > > > > 21-08 21:48:13| 90 9 1 0 0 0| 0 4656k| 0 0 | 0 0 |1223 1410 | 0 7934M| 643M 58.0M 638M 6455M| 0 90k 62k 68k
> > > > > > 21-08 21:48:14| 91 8 1 0 0 0| 0 3572k| 0 0 | 0 0 |1432 1828 | 0 7934M| 647M 58.0M 641M 6447M| 0 81k 59k 57k
> > > > > > 21-08 21:48:15| 91 8 1 0 0 0| 0 5116k| 116B 0 | 0 0 |1194 1295 | 0 7934M| 605M 58.0M 644M 6487M| 0 69k 54k 64k
> > > > > > 21-08 21:48:16| 87 10 3 0 0 0| 0 5140k| 0 0 | 0 0 |1761 2586 | 0 7934M| 584M 58.0M 650M 6502M| 0 105k 64k 68k
> > > > > >
> > > > > > The abnormal case compiled only 182 object files in 6 and a half minutes,
> > > > > > then I stopped it. The same dstat output for this case:
> > > > > > ----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
> > > > > > time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
> > > > > > 21-08 22:10:49| 27 62 0 0 11 0| 0 0 | 210B 0 | 0 0 |1414 3137k| 0 7934M| 531M 57.6M 595M 6611M| 0 1628 1250 322
> > > > > > 21-08 22:10:50| 25 60 4 0 11 0| 0 88k| 126B 0 | 0 0 |1337 3110k| 0 7934M| 531M 57.6M 595M 6611M| 0 91 128 115
> > > > > > 21-08 22:10:51| 26 63 0 0 11 0| 0 184k| 294B 0 | 0 0 |1411 3147k| 0 7934M| 531M 57.6M 595M 6611M| 0 1485 814 815
> > > > > > 21-08 22:10:52| 26 63 0 0 11 0| 0 0 | 437B 239B| 0 0 |1355 3160k| 0 7934M| 531M 57.6M 595M 6611M| 0 24 94 97
> > > > > > 21-08 22:10:53| 26 63 0 0 11 0| 0 0 | 168B 0 | 0 0 |1397 3155k| 0 7934M| 531M 57.6M 595M 6611M| 0 479 285 273
> > > > > > 21-08 22:10:54| 26 63 0 0 11 0| 0 4096B| 396B 324B| 0 0 |1346 3154k| 0 7934M| 531M 57.6M 595M 6611M| 0 27 145 145
> > > > > > 21-08 22:10:55| 26 63 0 0 11 0| 0 60k| 0 0 | 0 0 |1353 3148k| 0 7934M| 531M 57.6M 595M 6610M| 0 93 117 36
> > > > > > 21-08 22:10:56| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1341 3172k| 0 7934M| 531M 57.6M 595M 6610M| 0 158 87 74
> > > > > > 21-08 22:10:57| 26 62 1 0 11 0| 0 0 | 42B 60B| 0 0 |1332 3162k| 0 7934M| 531M 57.6M 595M 6610M| 0 56 82 78
> > > > > > 21-08 22:10:58| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1334 3178k| 0 7934M| 531M 57.6M 595M 6610M| 0 26 56 56
> > > > > > 21-08 22:10:59| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1336 3179k| 0 7934M| 531M 57.6M 595M 6610M| 0 3 33 32
> > > > > > 21-08 22:11:00| 26 63 0 0 11 0| 0 24k| 90B 108B| 0 0 |1347 3172k| 0 7934M| 531M 57.6M 595M 6610M| 0 41 73 71
> > > > > >
> > > > > > I have four logical cores, so 25% makes up 1 core. I don't know if the ~26% user CPU usage has anything to do with this fact or is just a coincidence. The rest is ~63% system and ~11% hardware interrupt. Do these support what you suspect?
> > > > >
> > > > > The massive increase in context switches does come as a bit of a surprise!
> > > > > It does rule out my initial suspicion of lock contention, but then again
> > > > > the fact that you have only four CPUs made that pretty unlikely to begin
> > > > > with.
> > > > >
> > > > > 2.4k average context switches in the good case for the full run vs. 3,156k
> > > > > for about half of a run in the bad case. That is an increase of more
> > > > > than three orders of magnitude!
> > > > >
> > > > > Yow!!!
> > > > >
> > > > > Page faults are actually -higher- in the good case. You have about 6.5GB
> > > > > free in both cases, so you are not running out of memory. Lots more disk
> > > > > writes in the good case, perhaps consistent with its getting more done.
> > > > > Networking is negligible in both cases.
> > > > >
> > > > > Lots of hardware interrupts in the bad case as well. Would you be willing
> > > > > to take a look at /proc/interrupts before and after to see which one you
> > > > > are getting hit with? (Or whatever interrupt tracking tool you prefer.)
> > > >
> > > > Here are the results.
> > > >
> > > > Good case before:
> > > > CPU0 CPU1 CPU2 CPU3
> > > > 0: 17 0 0 0 IO-APIC-edge timer
> > > > 1: 356 1 68 4 IO-APIC-edge i8042
> > > > 8: 0 0 1 0 IO-APIC-edge rtc0
> > > > 9: 330 14 449 71 IO-APIC-fasteoi acpi
> > > > 12: 10 108 269 2696 IO-APIC-edge i8042
> > > > 16: 36 10 111 2 IO-APIC-fasteoi ehci_hcd:usb1
> > > > 17: 20 3 25 4 IO-APIC-fasteoi mmc0
> > > > 21: 3 0 34 0 IO-APIC-fasteoi ehci_hcd:usb2
> > > > 40: 0 1 12 11 PCI-MSI-edge mei_me
> > > > 41: 10617 173 9959 292 PCI-MSI-edge ahci
> > > > 42: 862 11 186 26 PCI-MSI-edge xhci_hcd
> > > > 43: 107 77 27 102 PCI-MSI-edge i915
> > > > 44: 5322 20 434 22 PCI-MSI-edge iwlwifi
> > > > 45: 180 0 183 86 PCI-MSI-edge snd_hda_intel
> > > > 46: 0 3 0 0 PCI-MSI-edge eth0
> > > > NMI: 1 0 0 0 Non-maskable interrupts
> > > > LOC: 16312 15177 10840 8995 Local timer interrupts
> > > > SPU: 0 0 0 0 Spurious interrupts
> > > > PMI: 1 0 0 0 Performance monitoring interrupts
> > > > IWI: 1160 523 1031 481 IRQ work interrupts
> > > > RTR: 3 0 0 0 APIC ICR read retries
> > > > RES: 14976 16135 9973 10784 Rescheduling interrupts
> > > > CAL: 482 457 151 370 Function call interrupts
> > > > TLB: 70 106 352 230 TLB shootdowns
> > > > TRM: 0 0 0 0 Thermal event interrupts
> > > > THR: 0 0 0 0 Threshold APIC interrupts
> > > > MCE: 0 0 0 0 Machine check exceptions
> > > > MCP: 2 2 2 2 Machine check polls
> > > > ERR: 0
> > > > MIS: 0
> > > >
> > > > Good case after:
> > > > CPU0 CPU1 CPU2 CPU3
> > > > 0: 17 0 0 0 IO-APIC-edge timer
> > > > 1: 367 1 81 4 IO-APIC-edge i8042
> > > > 8: 0 0 1 0 IO-APIC-edge rtc0
> > > > 9: 478 14 460 71 IO-APIC-fasteoi acpi
> > > > 12: 10 108 269 2696 IO-APIC-edge i8042
> > > > 16: 36 10 111 2 IO-APIC-fasteoi ehci_hcd:usb1
> > > > 17: 20 3 25 4 IO-APIC-fasteoi mmc0
> > > > 21: 3 0 34 0 IO-APIC-fasteoi ehci_hcd:usb2
> > > > 40: 0 1 12 11 PCI-MSI-edge mei_me
> > > > 41: 16888 173 9959 292 PCI-MSI-edge ahci
> > > > 42: 1102 11 186 26 PCI-MSI-edge xhci_hcd
> > > > 43: 107 132 27 136 PCI-MSI-edge i915
> > > > 44: 6943 20 434 22 PCI-MSI-edge iwlwifi
> > > > 45: 180 0 183 86 PCI-MSI-edge snd_hda_intel
> > > > 46: 0 3 0 0 PCI-MSI-edge eth0
> > > > NMI: 4 3 3 3 Non-maskable interrupts
> > > > LOC: 26845 24780 19025 17746 Local timer interrupts
> > > > SPU: 0 0 0 0 Spurious interrupts
> > > > PMI: 4 3 3 3 Performance monitoring interrupts
> > > > IWI: 1637 751 1287 695 IRQ work interrupts
> > > > RTR: 3 0 0 0 APIC ICR read retries
> > > > RES: 26511 26673 18791 20194 Rescheduling interrupts
> > > > CAL: 510 480 151 370 Function call interrupts
> > > > TLB: 361 292 575 461 TLB shootdowns
> > > > TRM: 0 0 0 0 Thermal event interrupts
> > > > THR: 0 0 0 0 Threshold APIC interrupts
> > > > MCE: 0 0 0 0 Machine check exceptions
> > > > MCP: 2 2 2 2 Machine check polls
> > > > ERR: 0
> > > > MIS: 0
> > > >
> > > > Bad case before:
> > > > CPU0 CPU1 CPU2 CPU3
> > > > 0: 17 0 0 0 IO-APIC-edge timer
> > > > 1: 172 3 78 3 IO-APIC-edge i8042
> > > > 8: 0 1 0 0 IO-APIC-edge rtc0
> > > > 9: 1200 148 395 81 IO-APIC-fasteoi acpi
> > > > 12: 1625 2 348 10 IO-APIC-edge i8042
> > > > 16: 26 23 115 0 IO-APIC-fasteoi ehci_hcd:usb1
> > > > 17: 16 3 12 21 IO-APIC-fasteoi mmc0
> > > > 21: 2 2 33 0 IO-APIC-fasteoi ehci_hcd:usb2
> > > > 40: 0 0 14 10 PCI-MSI-edge mei_me
> > > > 41: 15776 374 8497 687 PCI-MSI-edge ahci
> > > > 42: 1297 829 115 24 PCI-MSI-edge xhci_hcd
> > > > 43: 103 149 9 212 PCI-MSI-edge i915
> > > > 44: 13151 101 511 91 PCI-MSI-edge iwlwifi
> > > > 45: 153 159 0 122 PCI-MSI-edge snd_hda_intel
> > > > 46: 0 1 1 0 PCI-MSI-edge eth0
> > > > NMI: 32 31 31 31 Non-maskable interrupts
> > > > LOC: 82504 82732 74172 75985 Local timer interrupts
> > > > SPU: 0 0 0 0 Spurious interrupts
> > > > PMI: 32 31 31 31 Performance monitoring interrupts
> > > > IWI: 17816 16278 13833 13282 IRQ work interrupts
> > > > RTR: 3 0 0 0 APIC ICR read retries
> > > > RES: 18784 21084 13313 12946 Rescheduling interrupts
> > > > CAL: 393 422 306 356 Function call interrupts
> > > > TLB: 231 176 235 191 TLB shootdowns
> > > > TRM: 0 0 0 0 Thermal event interrupts
> > > > THR: 0 0 0 0 Threshold APIC interrupts
> > > > MCE: 0 0 0 0 Machine check exceptions
> > > > MCP: 3 3 3 3 Machine check polls
> > > > ERR: 0
> > > > MIS: 0
> > > >
> > > > Bad case after:
> > > > CPU0 CPU1 CPU2 CPU3
> > > > 0: 17 0 0 0 IO-APIC-edge timer
> > > > 1: 415 3 85 3 IO-APIC-edge i8042
> > > > 8: 0 1 0 0 IO-APIC-edge rtc0
> > > > 9: 1277 148 428 81 IO-APIC-fasteoi acpi
> > > > 12: 1625 2 348 10 IO-APIC-edge i8042
> > > > 16: 26 23 115 0 IO-APIC-fasteoi ehci_hcd:usb1
> > > > 17: 16 3 12 21 IO-APIC-fasteoi mmc0
> > > > 21: 2 2 33 0 IO-APIC-fasteoi ehci_hcd:usb2
> > > > 40: 0 0 14 10 PCI-MSI-edge mei_me
> > > > 41: 17814 374 8497 687 PCI-MSI-edge ahci
> > > > 42: 1567 829 115 24 PCI-MSI-edge xhci_hcd
> > > > 43: 103 177 9 242 PCI-MSI-edge i915
> > > > 44: 14956 101 511 91 PCI-MSI-edge iwlwifi
> > > > 45: 153 159 0 122 PCI-MSI-edge snd_hda_intel
> > > > 46: 0 1 1 0 PCI-MSI-edge eth0
> > > > NMI: 36 35 34 34 Non-maskable interrupts
> > > > LOC: 92429 92708 81714 84071 Local timer interrupts
> > > > SPU: 0 0 0 0 Spurious interrupts
> > > > PMI: 36 35 34 34 Performance monitoring interrupts
> > > > IWI: 22594 19658 17439 14257 IRQ work interrupts
> > > > RTR: 3 0 0 0 APIC ICR read retries
> > > > RES: 21491 24670 14618 14569 Rescheduling interrupts
> > > > CAL: 441 439 306 356 Function call interrupts
> > > > TLB: 232 181 274 465 TLB shootdowns
> > > > TRM: 0 0 0 0 Thermal event interrupts
> > > > THR: 0 0 0 0 Threshold APIC interrupts
> > > > MCE: 0 0 0 0 Machine check exceptions
> > > > MCP: 3 3 3 3 Machine check polls
> > > > ERR: 0
> > > > MIS: 0
> > >
> > > Lots more local timer interrupts, which is consistent with the higher
> > > time in interrupt handlers for the bad case.
> > >
> > > > > One hypothesis is that your workload and configuration are interacting
> > > > > with RCU_FAST_NO_HZ to force very large numbers of RCU grace periods.
> > > > > Could you please check for this by building with CONFIG_RCU_TRACE=y,
> > > > > mounting debugfs somewhere and dumping rcu/rcu_sched/rcugp before and
> > > > > after each run?
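A minimal sketch of that debugfs step, assuming debugfs is not already mounted and using an arbitrary output file name:

  mount -t debugfs none /sys/kernel/debug                     # skip if debugfs is already mounted
  cat /sys/kernel/debug/rcu/rcu_sched/rcugp >> rcugp.log      # snapshot before the run
  # ... run the workload ...
  cat /sys/kernel/debug/rcu/rcu_sched/rcugp >> rcugp.log      # snapshot after the run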
> > > >
> > > > Good case before:
> > > > completed=8756 gpnum=8757 age=0 max=21
> > > > after:
> > > > completed=14686 gpnum=14687 age=0 max=21
> > > >
> > > > Bad case before:
> > > > completed=22970 gpnum=22971 age=0 max=21
> > > > after:
> > > > completed=26110 gpnum=26111 age=0 max=21
> > >
> > > In the good case, (14686-8756)/40=148.25 grace periods per second, which
> > > is a fast but reasonable rate given your HZ=250. Not a large difference
> > > in the number of grace periods, but extrapolating for the longer runtime,
> > > maybe ten times as much. But not much change in grace-period rate per
> > > unit time.
> > >
> > > > The test scenario was the following in both cases (mixed English and pseudo-bash):
> > > > reboot, login, start terminal
> > > > cd project
> > > > rm -r build
> > > > cat /proc/interrupts >> somefile ; cat /sys/kernel/debug/rcu/rcu_sched/rcugp >> somefile
> > > > scons -j4
> > > > wait ~40 sec (good case finished, Ctrl-C in bad case)
> > > > cat /proc/interrupts >> somefile ; cat /sys/kernel/debug/rcu/rcu_sched/rcugp >> somefile
> > > >
> > > > I stopped the build in the bad case after about the same time the good
> > > > case finished, so the extra interrupts and RCU grace periods because of the
> > > > longer runtime don't fake the results.
> > >
> > > That procedure works for me, thank you for laying it out carefully.
> > >
> > > I believe I see what is going on and how to fix it, though it may take
> > > me a bit to work things through and get a good patch.
> > >
> > > Thank you very much for your testing efforts!
> >
> > I'm glad I can help. I've been using Linux for many years, now I have a
> > chance to help the community, to do something in return. I'm quite
> > enjoying this :)
>
> ;-)
>
> Here is a patch that is more likely to help. I am testing it in parallel,
> but figured I should send you a sneak preview.
I tried it, but I don't see any difference in overall performance. The dstat
output also shows the same as before.
But I did notice something. Occasionally user-space CPU usage increases while
interrupts and context switches drop, and the build really does get more work
done (scons prints commands more frequently). I checked that this behaviour
is also present without your patch; I just hadn't noticed it before. Maybe
you can make some sense out of it.
----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
27-08 20:51:53| 23 62 5 0 11 0| 0 0 | 0 0 | 0 0 |1274 3102k| 0 7934M| 549M 56.0M 491M 6698M| 0 28 156 159
27-08 20:51:54| 24 64 1 0 11 0| 0 0 | 0 0 | 0 0 |1317 3165k| 0 7934M| 549M 56.0M 491M 6698M| 0 53 189 182
27-08 20:51:55| 33 50 6 2 9 0| 192k 1832k| 0 0 | 0 0 |1371 2442k| 0 7934M| 544M 56.0M 492M 6702M| 0 30k 17k 17k
27-08 20:51:56| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1313 3220k| 0 7934M| 544M 56.0M 492M 6701M| 0 21 272 232
27-08 20:51:57| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1319 3226k| 0 7934M| 544M 56.0M 492M 6701M| 0 8 96 112
27-08 20:51:58| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1317 3224k| 0 7934M| 544M 56.0M 492M 6701M| 0 12 145 141
27-08 20:51:59| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1317 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 54 193 191
27-08 20:52:00| 25 63 0 0 12 0| 0 24k| 0 0 | 0 0 |1336 3216k| 0 7934M| 544M 56.0M 492M 6701M| 0 36 161 172
27-08 20:52:01| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1313 3225k| 0 7934M| 544M 56.0M 492M 6701M| 0 9 107 107
27-08 20:52:02| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1327 3224k| 0 7934M| 545M 56.0M 492M 6701M| 0 13 193 200
27-08 20:52:03| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1311 3226k| 0 7934M| 545M 56.0M 492M 6701M| 0 13 114 114
27-08 20:52:04| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1331 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 41 190 178
27-08 20:52:05| 24 64 0 0 12 0| 0 8192B| 0 0 | 0 0 |1315 3222k| 0 7934M| 544M 56.0M 492M 6701M| 0 30 123 122
27-08 20:52:06| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1314 3223k| 0 7934M| 544M 56.0M 492M 6701M| 0 16 187 195
27-08 20:52:07| 25 63 1 0 12 0|2212k 192k| 0 0 | 0 0 |1637 3194k| 0 7934M| 544M 56.2M 494M 6699M| 0 1363 2590 1947
27-08 20:52:08| 17 33 18 26 6 0|3208k 0 | 0 0 | 0 0 |1351 1766k| 0 7934M| 561M 56.3M 499M 6678M| 4 10k 7620 2055
27-08 20:52:09| 47 21 16 13 4 0|4332k 628k| 0 0 | 0 0 |1400 1081k| 0 7934M| 647M 56.3M 504M 6587M| 10 24k 25k 1151
27-08 20:52:10| 36 34 13 11 6 0|2636k 2820k| 0 0 | 0 0 |1451 1737k| 0 7934M| 598M 56.3M 507M 6632M| 5 19k 16k 28k
27-08 20:52:11| 46 17 10 25 3 0|4288k 536k| 0 0 | 0 0 |1386 868k| 0 7934M| 613M 56.3M 513M 6611M| 24 13k 8908 3616
27-08 20:52:12| 53 33 5 4 5 0|4740k 3992k| 0 0 | 0 0 |1773 1464k| 0 7934M| 562M 56.6M 521M 6654M| 0 36k 29k 40k
27-08 20:52:13| 60 34 0 1 6 0|4228k 1416k| 0 0 | 0 0 |1642 1670k| 0 7934M| 593M 56.6M 526M 6618M| 0 36k 26k 17k
27-08 20:52:14| 53 37 1 3 7 0|3008k 1976k| 0 0 | 0 0 |1513 1972k| 0 7934M| 541M 56.6M 529M 6668M| 0 10k 9986 23k
27-08 20:52:15| 55 34 1 4 6 0|3636k 1284k| 0 0 | 0 0 |1645 1688k| 0 7934M| 581M 56.6M 535M 6622M| 0 43k 30k 18k
27-08 20:52:16| 57 30 5 2 5 0|4404k 2320k| 0 0 | 0 0 |1715 1489k| 0 7934M| 570M 56.6M 541M 6627M| 0 39k 24k 26k
27-08 20:52:17| 50 35 3 7 6 0|2520k 1972k| 0 0 | 0 0 |1699 1598k| 0 7934M| 587M 56.6M 554M 6596M| 0 65k 40k 32k
27-08 20:52:18| 52 40 2 1 7 0|1556k 1732k| 0 0 | 0 0 |1582 1865k| 0 7934M| 551M 56.6M 567M 6619M| 0 35k 26k 33k
27-08 20:52:19| 26 62 0 0 12 0| 0 0 | 0 0 | 0 0 |1351 3240k| 0 7934M| 551M 56.6M 568M 6619M| 0 86 440 214
27-08 20:52:20| 26 63 0 0 11 0| 0 108k| 0 0 | 0 0 |1392 3162k| 0 7934M| 555M 56.6M 560M 6623M| 0 1801 1490 2672
27-08 20:52:21| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1332 3198k| 0 7934M| 555M 56.6M 560M 6623M| 0 50 245 255
27-08 20:52:22| 25 63 1 0 12 0| 0 0 | 0 0 | 0 0 |1350 3220k| 0 7934M| 556M 56.6M 560M 6622M| 0 755 544 286
27-08 20:52:23| 27 62 1 0 11 0| 0 272k| 0 0 | 0 0 |1323 3092k| 0 7934M| 551M 56.6M 558M 6628M| 0 341 1464 3085
27-08 20:52:24| 25 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1334 3197k| 0 7934M| 551M 56.6M 559M 6627M| 0 63 514 273
27-08 20:52:25| 25 63 0 0 12 0| 0 40k| 0 0 | 0 0 |1329 3243k| 0 7934M| 546M 56.6M 558M 6633M| 0 321 160 1679
27-08 20:52:26| 39 50 1 1 9 0| 48k 644k| 0 0 | 0 0 |1500 2556k| 0 7934M| 552M 56.6M 560M 6625M| 0 30k 14k 12k
27-08 20:52:27| 26 62 1 0 11 0| 0 192k| 0 0 | 0 0 |1380 3152k| 0 7934M| 553M 56.6M 560M 6624M| 0 2370 808 718
27-08 20:52:28| 23 55 12 0 10 0| 0 0 | 0 0 | 0 0 |1247 2993k| 0 7934M| 553M 56.6M 561M 6624M| 0 1060 428 241
27-08 20:52:29| 25 63 1 0 11 0| 0 0 | 0 0 | 0 0 |1318 3142k| 0 7934M| 554M 56.6M 561M 6623M| 0 663 442 198
27-08 20:52:30| 25 64 0 0 12 0| 0 100k| 0 0 | 0 0 |1316 3212k| 0 7934M| 554M 56.6M 561M 6623M| 0 42 187 186
27-08 20:52:31| 24 64 0 0 12 0| 0 4096B| 0 0 | 0 0 |1309 3222k| 0 7934M| 554M 56.6M 561M 6623M| 0 9 85 85
27-08 20:52:32| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1297 3219k| 0 7934M| 554M 56.6M 561M 6623M| 0 23 95 84
27-08 20:52:33| 25 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1339 3101k| 0 7934M| 555M 56.6M 557M 6625M| 0 923 1566 2176
27-08 20:52:34| 25 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1333 3095k| 0 7934M| 555M 56.6M 559M 6623M| 0 114 701 195
27-08 20:52:35| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1342 3036k| 0 7934M| 557M 56.6M 568M 6613M| 0 957 3225 516
27-08 20:52:36| 26 60 2 0 11 0| 0 28k| 0 0 | 0 0 |1340 3091k| 0 7934M| 555M 56.6M 568M 6614M| 0 5304 5422 5609
27-08 20:52:37| 25 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1348 3260k| 0 7934M| 556M 56.6M 565M 6617M| 0 30 156 1073
27-08 20:52:38| 24 64 0 0 12 0| 0 0 | 0 0 | 0 0 |1314 3211k| 0 7934M| 556M 56.6M 549M 6633M| 0 11 105 4285
27-08 20:52:39| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1353 3031k| 0 7934M| 558M 56.6M 559M 6620M| 0 847 3866 357
27-08 20:52:40| 26 63 0 0 12 0| 0 0 | 0 0 | 0 0 |1309 3135k| 0 7934M| 569M 56.6M 566M 6602M| 0 3940 5727 1288
Tibor
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: Unusually high system CPU usage with recent kernels
2013-08-25 19:50 Tibor Billes
@ 2013-08-26 4:28 ` Paul E. McKenney
0 siblings, 0 replies; 28+ messages in thread
From: Paul E. McKenney @ 2013-08-26 4:28 UTC (permalink / raw)
To: Tibor Billes; +Cc: linux-kernel
On Sun, Aug 25, 2013 at 09:50:21PM +0200, Tibor Billes wrote:
> From: Paul E. McKenney Sent: 08/24/13 11:03 PM
> > On Sat, Aug 24, 2013 at 09:59:45PM +0200, Tibor Billes wrote:
> > > From: Paul E. McKenney Sent: 08/22/13 12:09 AM
> > > > On Wed, Aug 21, 2013 at 11:05:51PM +0200, Tibor Billes wrote:
> > > > > > From: Paul E. McKenney Sent: 08/21/13 09:12 PM
> > > > > > On Wed, Aug 21, 2013 at 08:14:46PM +0200, Tibor Billes wrote:
> > > > > > > > From: Paul E. McKenney Sent: 08/20/13 11:43 PM
> > > > > > > > On Tue, Aug 20, 2013 at 10:52:26PM +0200, Tibor Billes wrote:
> > > > > > > > > > From: Paul E. McKenney Sent: 08/20/13 04:53 PM
> > > > > > > > > > On Tue, Aug 20, 2013 at 08:01:28AM +0200, Tibor Billes wrote:
> > > > > > > > > > > Hi,
> > > > > > > > > > >
> > > > > > > > > > > I was using the 3.9.7 stable release and tried to upgrade to the 3.10.x series.
> > > > > > > > > > > The 3.10.x series was showing unusually high (>75%) system CPU usage in some
> > > > > > > > > > > situations, making things really slow. The latest stable I tried is 3.10.7.
> > > > > > > > > > > I also tried 3.11-rc5, they both show this behaviour. This behaviour doesn't
> > > > > > > > > > > show up when the system is idling, only when doing some CPU intensive work,
> > > > > > > > > > > like compiling with multiple threads. Compiling with only one thread seems not
> > > > > > > > > > > to trigger this behaviour.
> > > > > > > > > > >
> > > > > > > > > > > To be more precise I did a `perf record -a` while compiling a large C++ program
> > > > > > > > > > > with scons using 4 threads, the result is appended at the end of this email.
> > > > > > > > > >
> > > > > > > > > > New one on me! You are running a mainstream system (x86_64), so I am
> > > > > > > > > > surprised no one else noticed.
> > > > > > > > > >
> > > > > > > > > > Could you please send along your .config file?
> > > > > > > > >
> > > > > > > > > Here it is
> > > > > > > >
> > > > > > > > Interesting. I don't see RCU stuff all that high on the list, but
> > > > > > > > the items I do see lead me to suspect RCU_FAST_NO_HZ, which has some
> > > > > > > > relevance to the otherwise inexplicable group of commits you located
> > > > > > > > with your bisection. Could you please rerun with CONFIG_RCU_FAST_NO_HZ=n?
> > > > > > > >
> > > > > > > > If that helps, there are some things I could try.
> > > > > > >
> > > > > > > It did help. I didn't notice anything unusual when running with CONFIG_RCU_FAST_NO_HZ=n.
> > > > > >
> > > > > > Interesting. Thank you for trying this -- and we at least have a
> > > > > > short-term workaround for this problem. I will put a patch together
> > > > > > for further investigation.
> > > > >
> > > > > I don't specifically need this config option so I'm fine without it in
> > > > > the long term, but I guess it's not supposed to behave like that.
> > > >
> > > > OK, good, we have a long-term workaround for your specific case,
> > > > even better. ;-)
> > > >
> > > > But yes, there are situations where RCU_FAST_NO_HZ needs to work
> > > > a bit better. I hope you will bear with me with a bit more
> > > > testing...
> > > >
> > > > > > In the meantime, could you please tell me how you were measuring
> > > > > > performance for your kernel builds? Wall-clock time required to complete
> > > > > > one build? Number of builds completed per unit time? Something else?
> > > > >
> > > > > Actually, I wasn't all this sophisticated. I have a system monitor
> > > > > applet on my top panel (using MATE, Linux Mint), four little graphs,
> > > > > one of which shows CPU usage. Different colors indicate different kinds
> > > > > of CPU usage. Blue shows user space usage, red shows system usage, and
> > > > > two more for nice and iowait. During a normal compile it's almost
> > > > > completely filled with blue user space CPU usage, only the top few
> > > > > pixels show some iowait and system usage. With CONFIG_RCU_FAST_NO_HZ
> > > > > set, about 3/4 of the graph was red system CPU usage, the rest was
> > > > > blue. So I just looked for a pile of red on my graphs when I tested
> > > > > different kernel builds. But also compile speed was horrible; I couldn't
> > > > > wait for the build to finish. Even the UI got unresponsive.
> > > >
> > > > We have been having problems with CPU accounting, but this one looks
> > > > quite real.
> > > >
> > > > > Now I did some measuring. In the normal case a compile finished in 36
> > > > > seconds, compiled 315 object files. Here are some output lines from
> > > > > dstat -tasm --vm during compile:
> > > > > ----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
> > > > > time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
> > > > > 21-08 21:48:05| 91 8 2 0 0 0| 0 5852k| 0 0 | 0 0 |1413 1772 | 0 7934M| 581M 58.0M 602M 6553M| 0 71k 46k 54k
> > > > > 21-08 21:48:06| 93 6 1 0 0 0| 0 2064k| 137B 131B| 0 0 |1356 1650 | 0 7934M| 649M 58.0M 604M 6483M| 0 72k 47k 28k
> > > > > 21-08 21:48:07| 86 11 4 0 0 0| 0 5872k| 0 0 | 0 0 |2000 2991 | 0 7934M| 577M 58.0M 627M 6531M| 0 99k 67k 79k
> > > > > 21-08 21:48:08| 87 9 3 0 0 0| 0 2840k| 0 0 | 0 0 |2558 4164 | 0 7934M| 597M 58.0M 632M 6507M| 0 96k 57k 51k
> > > > > 21-08 21:48:09| 93 7 1 0 0 0| 0 3032k| 0 0 | 0 0 |1329 1512 | 0 7934M| 641M 58.0M 626M 6469M| 0 61k 48k 39k
> > > > > 21-08 21:48:10| 93 6 0 0 0 0| 0 4984k| 0 0 | 0 0 |1160 1146 | 0 7934M| 572M 58.0M 628M 6536M| 0 50k 40k 57k
> > > > > 21-08 21:48:11| 86 9 6 0 0 0| 0 2520k| 0 0 | 0 0 |2947 4760 | 0 7934M| 605M 58.0M 631M 6500M| 0 103k 55k 45k
> > > > > 21-08 21:48:12| 90 8 2 0 0 0| 0 2840k| 0 0 | 0 0 |2674 4179 | 0 7934M| 671M 58.0M 635M 6431M| 0 84k 59k 42k
> > > > > 21-08 21:48:13| 90 9 1 0 0 0| 0 4656k| 0 0 | 0 0 |1223 1410 | 0 7934M| 643M 58.0M 638M 6455M| 0 90k 62k 68k
> > > > > 21-08 21:48:14| 91 8 1 0 0 0| 0 3572k| 0 0 | 0 0 |1432 1828 | 0 7934M| 647M 58.0M 641M 6447M| 0 81k 59k 57k
> > > > > 21-08 21:48:15| 91 8 1 0 0 0| 0 5116k| 116B 0 | 0 0 |1194 1295 | 0 7934M| 605M 58.0M 644M 6487M| 0 69k 54k 64k
> > > > > 21-08 21:48:16| 87 10 3 0 0 0| 0 5140k| 0 0 | 0 0 |1761 2586 | 0 7934M| 584M 58.0M 650M 6502M| 0 105k 64k 68k
> > > > >
> > > > > The abnormal case compiled only 182 object files in 6 and a half minutes,
> > > > > then I stopped it. The same dstat output for this case:
> > > > > ----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
> > > > > time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
> > > > > 21-08 22:10:49| 27 62 0 0 11 0| 0 0 | 210B 0 | 0 0 |1414 3137k| 0 7934M| 531M 57.6M 595M 6611M| 0 1628 1250 322
> > > > > 21-08 22:10:50| 25 60 4 0 11 0| 0 88k| 126B 0 | 0 0 |1337 3110k| 0 7934M| 531M 57.6M 595M 6611M| 0 91 128 115
> > > > > 21-08 22:10:51| 26 63 0 0 11 0| 0 184k| 294B 0 | 0 0 |1411 3147k| 0 7934M| 531M 57.6M 595M 6611M| 0 1485 814 815
> > > > > 21-08 22:10:52| 26 63 0 0 11 0| 0 0 | 437B 239B| 0 0 |1355 3160k| 0 7934M| 531M 57.6M 595M 6611M| 0 24 94 97
> > > > > 21-08 22:10:53| 26 63 0 0 11 0| 0 0 | 168B 0 | 0 0 |1397 3155k| 0 7934M| 531M 57.6M 595M 6611M| 0 479 285 273
> > > > > 21-08 22:10:54| 26 63 0 0 11 0| 0 4096B| 396B 324B| 0 0 |1346 3154k| 0 7934M| 531M 57.6M 595M 6611M| 0 27 145 145
> > > > > 21-08 22:10:55| 26 63 0 0 11 0| 0 60k| 0 0 | 0 0 |1353 3148k| 0 7934M| 531M 57.6M 595M 6610M| 0 93 117 36
> > > > > 21-08 22:10:56| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1341 3172k| 0 7934M| 531M 57.6M 595M 6610M| 0 158 87 74
> > > > > 21-08 22:10:57| 26 62 1 0 11 0| 0 0 | 42B 60B| 0 0 |1332 3162k| 0 7934M| 531M 57.6M 595M 6610M| 0 56 82 78
> > > > > 21-08 22:10:58| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1334 3178k| 0 7934M| 531M 57.6M 595M 6610M| 0 26 56 56
> > > > > 21-08 22:10:59| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1336 3179k| 0 7934M| 531M 57.6M 595M 6610M| 0 3 33 32
> > > > > 21-08 22:11:00| 26 63 0 0 11 0| 0 24k| 90B 108B| 0 0 |1347 3172k| 0 7934M| 531M 57.6M 595M 6610M| 0 41 73 71
> > > > >
> > > > > I have four logical cores so 25% makes up 1 core. I don't know if the ~26% user CPU usage has anything to do with this fact or just coincidence. The rest is ~63% system and ~11% hardware interrupt. Do these support what you suspect?
> > > >
> > > > The massive increase in context switches does come as a bit of a surprise!
> > > > It does rule out my initial suspicion of lock contention, but then again
> > > > the fact that you have only four CPUs made that pretty unlikely to begin
> > > > with.
> > > >
> > > > 2.4k average context switches in the good case for the full run vs. 3,156k
> > > > for about half of a run in the bad case. That is an increase of more
> > > > than three orders of magnitude!
> > > >
> > > > Yow!!!
> > > >
> > > > Page faults are actually -higher- in the good case. You have about 6.5GB
> > > > free in both cases, so you are not running out of memory. Lots more disk
> > > > writes in the good case, perhaps consistent with its getting more done.
> > > > Networking is negligible in both cases.
> > > >
> > > > Lots of hardware interrupts in the bad case as well. Would you be willing
> > > > to take a look at /proc/interrupts before and after to see which one you
> > > > are getting hit with? (Or whatever interrupt tracking tool you prefer.)
> > >
> > > Here are the results.
> > >
> > > Good case before:
> > > CPU0 CPU1 CPU2 CPU3
> > > 0: 17 0 0 0 IO-APIC-edge timer
> > > 1: 356 1 68 4 IO-APIC-edge i8042
> > > 8: 0 0 1 0 IO-APIC-edge rtc0
> > > 9: 330 14 449 71 IO-APIC-fasteoi acpi
> > > 12: 10 108 269 2696 IO-APIC-edge i8042
> > > 16: 36 10 111 2 IO-APIC-fasteoi ehci_hcd:usb1
> > > 17: 20 3 25 4 IO-APIC-fasteoi mmc0
> > > 21: 3 0 34 0 IO-APIC-fasteoi ehci_hcd:usb2
> > > 40: 0 1 12 11 PCI-MSI-edge mei_me
> > > 41: 10617 173 9959 292 PCI-MSI-edge ahci
> > > 42: 862 11 186 26 PCI-MSI-edge xhci_hcd
> > > 43: 107 77 27 102 PCI-MSI-edge i915
> > > 44: 5322 20 434 22 PCI-MSI-edge iwlwifi
> > > 45: 180 0 183 86 PCI-MSI-edge snd_hda_intel
> > > 46: 0 3 0 0 PCI-MSI-edge eth0
> > > NMI: 1 0 0 0 Non-maskable interrupts
> > > LOC: 16312 15177 10840 8995 Local timer interrupts
> > > SPU: 0 0 0 0 Spurious interrupts
> > > PMI: 1 0 0 0 Performance monitoring interrupts
> > > IWI: 1160 523 1031 481 IRQ work interrupts
> > > RTR: 3 0 0 0 APIC ICR read retries
> > > RES: 14976 16135 9973 10784 Rescheduling interrupts
> > > CAL: 482 457 151 370 Function call interrupts
> > > TLB: 70 106 352 230 TLB shootdowns
> > > TRM: 0 0 0 0 Thermal event interrupts
> > > THR: 0 0 0 0 Threshold APIC interrupts
> > > MCE: 0 0 0 0 Machine check exceptions
> > > MCP: 2 2 2 2 Machine check polls
> > > ERR: 0
> > > MIS: 0
> > >
> > > Good case after:
> > > CPU0 CPU1 CPU2 CPU3
> > > 0: 17 0 0 0 IO-APIC-edge timer
> > > 1: 367 1 81 4 IO-APIC-edge i8042
> > > 8: 0 0 1 0 IO-APIC-edge rtc0
> > > 9: 478 14 460 71 IO-APIC-fasteoi acpi
> > > 12: 10 108 269 2696 IO-APIC-edge i8042
> > > 16: 36 10 111 2 IO-APIC-fasteoi ehci_hcd:usb1
> > > 17: 20 3 25 4 IO-APIC-fasteoi mmc0
> > > 21: 3 0 34 0 IO-APIC-fasteoi ehci_hcd:usb2
> > > 40: 0 1 12 11 PCI-MSI-edge mei_me
> > > 41: 16888 173 9959 292 PCI-MSI-edge ahci
> > > 42: 1102 11 186 26 PCI-MSI-edge xhci_hcd
> > > 43: 107 132 27 136 PCI-MSI-edge i915
> > > 44: 6943 20 434 22 PCI-MSI-edge iwlwifi
> > > 45: 180 0 183 86 PCI-MSI-edge snd_hda_intel
> > > 46: 0 3 0 0 PCI-MSI-edge eth0
> > > NMI: 4 3 3 3 Non-maskable interrupts
> > > LOC: 26845 24780 19025 17746 Local timer interrupts
> > > SPU: 0 0 0 0 Spurious interrupts
> > > PMI: 4 3 3 3 Performance monitoring interrupts
> > > IWI: 1637 751 1287 695 IRQ work interrupts
> > > RTR: 3 0 0 0 APIC ICR read retries
> > > RES: 26511 26673 18791 20194 Rescheduling interrupts
> > > CAL: 510 480 151 370 Function call interrupts
> > > TLB: 361 292 575 461 TLB shootdowns
> > > TRM: 0 0 0 0 Thermal event interrupts
> > > THR: 0 0 0 0 Threshold APIC interrupts
> > > MCE: 0 0 0 0 Machine check exceptions
> > > MCP: 2 2 2 2 Machine check polls
> > > ERR: 0
> > > MIS: 0
> > >
> > > Bad case before:
> > > CPU0 CPU1 CPU2 CPU3
> > > 0: 17 0 0 0 IO-APIC-edge timer
> > > 1: 172 3 78 3 IO-APIC-edge i8042
> > > 8: 0 1 0 0 IO-APIC-edge rtc0
> > > 9: 1200 148 395 81 IO-APIC-fasteoi acpi
> > > 12: 1625 2 348 10 IO-APIC-edge i8042
> > > 16: 26 23 115 0 IO-APIC-fasteoi ehci_hcd:usb1
> > > 17: 16 3 12 21 IO-APIC-fasteoi mmc0
> > > 21: 2 2 33 0 IO-APIC-fasteoi ehci_hcd:usb2
> > > 40: 0 0 14 10 PCI-MSI-edge mei_me
> > > 41: 15776 374 8497 687 PCI-MSI-edge ahci
> > > 42: 1297 829 115 24 PCI-MSI-edge xhci_hcd
> > > 43: 103 149 9 212 PCI-MSI-edge i915
> > > 44: 13151 101 511 91 PCI-MSI-edge iwlwifi
> > > 45: 153 159 0 122 PCI-MSI-edge snd_hda_intel
> > > 46: 0 1 1 0 PCI-MSI-edge eth0
> > > NMI: 32 31 31 31 Non-maskable interrupts
> > > LOC: 82504 82732 74172 75985 Local timer interrupts
> > > SPU: 0 0 0 0 Spurious interrupts
> > > PMI: 32 31 31 31 Performance monitoring interrupts
> > > IWI: 17816 16278 13833 13282 IRQ work interrupts
> > > RTR: 3 0 0 0 APIC ICR read retries
> > > RES: 18784 21084 13313 12946 Rescheduling interrupts
> > > CAL: 393 422 306 356 Function call interrupts
> > > TLB: 231 176 235 191 TLB shootdowns
> > > TRM: 0 0 0 0 Thermal event interrupts
> > > THR: 0 0 0 0 Threshold APIC interrupts
> > > MCE: 0 0 0 0 Machine check exceptions
> > > MCP: 3 3 3 3 Machine check polls
> > > ERR: 0
> > > MIS: 0
> > >
> > > Bad case after:
> > > CPU0 CPU1 CPU2 CPU3
> > > 0: 17 0 0 0 IO-APIC-edge timer
> > > 1: 415 3 85 3 IO-APIC-edge i8042
> > > 8: 0 1 0 0 IO-APIC-edge rtc0
> > > 9: 1277 148 428 81 IO-APIC-fasteoi acpi
> > > 12: 1625 2 348 10 IO-APIC-edge i8042
> > > 16: 26 23 115 0 IO-APIC-fasteoi ehci_hcd:usb1
> > > 17: 16 3 12 21 IO-APIC-fasteoi mmc0
> > > 21: 2 2 33 0 IO-APIC-fasteoi ehci_hcd:usb2
> > > 40: 0 0 14 10 PCI-MSI-edge mei_me
> > > 41: 17814 374 8497 687 PCI-MSI-edge ahci
> > > 42: 1567 829 115 24 PCI-MSI-edge xhci_hcd
> > > 43: 103 177 9 242 PCI-MSI-edge i915
> > > 44: 14956 101 511 91 PCI-MSI-edge iwlwifi
> > > 45: 153 159 0 122 PCI-MSI-edge snd_hda_intel
> > > 46: 0 1 1 0 PCI-MSI-edge eth0
> > > NMI: 36 35 34 34 Non-maskable interrupts
> > > LOC: 92429 92708 81714 84071 Local timer interrupts
> > > SPU: 0 0 0 0 Spurious interrupts
> > > PMI: 36 35 34 34 Performance monitoring interrupts
> > > IWI: 22594 19658 17439 14257 IRQ work interrupts
> > > RTR: 3 0 0 0 APIC ICR read retries
> > > RES: 21491 24670 14618 14569 Rescheduling interrupts
> > > CAL: 441 439 306 356 Function call interrupts
> > > TLB: 232 181 274 465 TLB shootdowns
> > > TRM: 0 0 0 0 Thermal event interrupts
> > > THR: 0 0 0 0 Threshold APIC interrupts
> > > MCE: 0 0 0 0 Machine check exceptions
> > > MCP: 3 3 3 3 Machine check polls
> > > ERR: 0
> > > MIS: 0
> >
> > Lots more local timer interrupts, which is consistent with the higher
> > time in interrupt handlers for the bad case.
> >
> > > > One hypothesis is that your workload and configuration are interacting
> > > > with RCU_FAST_NO_HZ to force very large numbers of RCU grace periods.
> > > > Could you please check for this by building with CONFIG_RCU_TRACE=y,
> > > > mounting debugfs somewhere and dumping rcu/rcu_sched/rcugp before and
> > > > after each run?
> > >
> > > Good case before:
> > > completed=8756 gpnum=8757 age=0 max=21
> > > after:
> > > completed=14686 gpnum=14687 age=0 max=21
> > >
> > > Bad case before:
> > > completed=22970 gpnum=22971 age=0 max=21
> > > after:
> > > completed=26110 gpnum=26111 age=0 max=21
> >
> > In the good case, (14686-8756)/40=148.25 grace periods per second, which
> > is a fast but reasonable rate given your HZ=250. Not a large difference
> > in the number of grace periods, but extrapolating for the longer runtime,
> > maybe ten times as much. But not much change in grace-period rate per
> > unit time.
> >
> > > The test scenario was the following in both cases (mixed English and pseudo-bash):
> > > reboot, login, start terminal
> > > cd project
> > > rm -r build
> > > cat /proc/interrupts >> somefile ; cat /sys/kernel/debug/rcu/rcu_sched/rcugp >> somefile
> > > scons -j4
> > > wait ~40 sec (good case finished, Ctrl-C in bad case)
> > > cat /proc/interrupts >> somefile ; cat /sys/kernel/debug/rcu/rcu_sched/rcugp >> somefile
> > >
> > > I stopped the build in the bad case after about the same time the good
> > > case finished, so the extra interrupts and RCU grace periods because of the
> > > longer runtime don't fake the results.
> >
> > That procedure works for me, thank you for laying it out carefully.
> >
> > I believe I see what is going on and how to fix it, though it may take
> > me a bit to work things through and get a good patch.
> >
> > Thank you very much for your testing efforts!
>
> I'm glad I can help. I've been using Linux for many years, now I have a
> chance to help the community, to do something in return. I'm quite
> enjoying this :)
;-)
Here is a patch that is more likely to help. I am testing it in parallel,
but figured I should send you a sneak preview.
Thanx, Paul
------------------------------------------------------------------------
rcu: Throttle rcu_try_advance_all_cbs() execution
The rcu_try_advance_all_cbs() function is invoked on each attempted
entry to and every exit from idle. If this function determines that
there are callbacks ready to invoke, the caller will invoke the RCU
core, which in turn will result in a pair of context switches. If a
CPU enters and exits idle extremely frequently, this can result in
an excessive number of context switches and high CPU overhead.
This commit therefore causes rcu_try_advance_all_cbs() to throttle
itself, refusing to do work more than once per jiffy.
Reported-by: Tibor Billes <tbilles@gmx.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
diff --git a/kernel/rcutree.h b/kernel/rcutree.h
index 5f97eab..52be957 100644
--- a/kernel/rcutree.h
+++ b/kernel/rcutree.h
@@ -104,6 +104,8 @@ struct rcu_dynticks {
 					/* idle-period nonlazy_posted snapshot. */
 	unsigned long last_accelerate;
 					/* Last jiffy CBs were accelerated. */
+	unsigned long last_advance_all;
+					/* Last jiffy CBs were all advanced. */
 	int tick_nohz_enabled_snap;	/* Previously seen value from sysfs. */
 #endif /* #ifdef CONFIG_RCU_FAST_NO_HZ */
 };
diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
index a538e73..2205751 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcutree_plugin.h
@@ -1630,17 +1630,23 @@ module_param(rcu_idle_lazy_gp_delay, int, 0644);
 extern int tick_nohz_enabled;
 
 /*
- * Try to advance callbacks for all flavors of RCU on the current CPU.
- * Afterwards, if there are any callbacks ready for immediate invocation,
- * return true.
+ * Try to advance callbacks for all flavors of RCU on the current CPU, but
+ * only if it has been awhile since the last time we did so. Afterwards,
+ * if there are any callbacks ready for immediate invocation, return true.
  */
 static bool rcu_try_advance_all_cbs(void)
 {
 	bool cbs_ready = false;
 	struct rcu_data *rdp;
+	struct rcu_dynticks *rdtp = &__get_cpu_var(rcu_dynticks);
 	struct rcu_node *rnp;
 	struct rcu_state *rsp;
 
+	/* Exit early if we advanced recently. */
+	if (jiffies == rdtp->last_advance_all)
+		return 0;
+	rdtp->last_advance_all = jiffies;
+
 	for_each_rcu_flavor(rsp) {
 		rdp = this_cpu_ptr(rsp->rda);
 		rnp = rdp->mynode;
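To see the effect of this kind of throttle in isolation, here is a rough
stand-alone userspace sketch of the same once-per-tick idiom. It is purely
illustrative: the names merely mirror the patch, and a coarse millisecond
counter stands in for the kernel's jiffies.

#include <stdio.h>
#include <time.h>

static unsigned long last_advance_all;	/* last tick the work actually ran */

/* Coarse millisecond counter standing in for the kernel's jiffies. */
static unsigned long current_tick(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (unsigned long)ts.tv_sec * 1000UL + ts.tv_nsec / 1000000;
}

/* Userspace analogue of rcu_try_advance_all_cbs() with the throttle. */
static int try_advance_all_cbs(void)
{
	unsigned long now = current_tick();

	/* Exit early if we already ran during this tick. */
	if (now == last_advance_all)
		return 0;
	last_advance_all = now;

	/* ... the expensive per-entry/exit work would go here ... */
	return 1;
}

int main(void)
{
	long ran = 0, skipped = 0, i;

	/* Simulate a CPU entering and exiting idle very frequently. */
	for (i = 0; i < 10000000; i++) {
		if (try_advance_all_cbs())
			ran++;
		else
			skipped++;
	}
	printf("ran=%ld skipped=%ld\n", ran, skipped);
	return 0;
}

In a tight loop like this almost every call returns early, which is the
point: once a CPU enters and exits idle more than once per jiffy, the
expensive scan is skipped rather than repeated.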
^ permalink raw reply related [flat|nested] 28+ messages in thread
* Re: Unusually high system CPU usage with recent kernels
@ 2013-08-25 19:50 Tibor Billes
2013-08-26 4:28 ` Paul E. McKenney
0 siblings, 1 reply; 28+ messages in thread
From: Tibor Billes @ 2013-08-25 19:50 UTC (permalink / raw)
To: paulmck; +Cc: linux-kernel
From: Paul E. McKenney Sent: 08/24/13 11:03 PM
> On Sat, Aug 24, 2013 at 09:59:45PM +0200, Tibor Billes wrote:
> > From: Paul E. McKenney Sent: 08/22/13 12:09 AM
> > > On Wed, Aug 21, 2013 at 11:05:51PM +0200, Tibor Billes wrote:
> > > > > From: Paul E. McKenney Sent: 08/21/13 09:12 PM
> > > > > On Wed, Aug 21, 2013 at 08:14:46PM +0200, Tibor Billes wrote:
> > > > > > > From: Paul E. McKenney Sent: 08/20/13 11:43 PM
> > > > > > > On Tue, Aug 20, 2013 at 10:52:26PM +0200, Tibor Billes wrote:
> > > > > > > > > From: Paul E. McKenney Sent: 08/20/13 04:53 PM
> > > > > > > > > On Tue, Aug 20, 2013 at 08:01:28AM +0200, Tibor Billes wrote:
> > > > > > > > > > Hi,
> > > > > > > > > >
> > > > > > > > > > I was using the 3.9.7 stable release and tried to upgrade to the 3.10.x series.
> > > > > > > > > > The 3.10.x series was showing unusually high (>75%) system CPU usage in some
> > > > > > > > > > situations, making things really slow. The latest stable I tried is 3.10.7.
> > > > > > > > > > I also tried 3.11-rc5, they both show this behaviour. This behaviour doesn't
> > > > > > > > > > show up when the system is idling, only when doing some CPU intensive work,
> > > > > > > > > > like compiling with multiple threads. Compiling with only one thread seems not
> > > > > > > > > > to trigger this behaviour.
> > > > > > > > > >
> > > > > > > > > > To be more precise I did a `perf record -a` while compiling a large C++ program
> > > > > > > > > > with scons using 4 threads, the result is appended at the end of this email.
> > > > > > > > >
> > > > > > > > > New one on me! You are running a mainstream system (x86_64), so I am
> > > > > > > > > surprised no one else noticed.
> > > > > > > > >
> > > > > > > > > Could you please send along your .config file?
> > > > > > > >
> > > > > > > > Here it is
> > > > > > >
> > > > > > > Interesting. I don't see RCU stuff all that high on the list, but
> > > > > > > the items I do see lead me to suspect RCU_FAST_NO_HZ, which has some
> > > > > > > relevance to the otherwise inexplicable group of commits you located
> > > > > > > with your bisection. Could you please rerun with CONFIG_RCU_FAST_NO_HZ=n?
> > > > > > >
> > > > > > > If that helps, there are some things I could try.
> > > > > >
> > > > > > It did help. I didn't notice anything unusual when running with CONFIG_RCU_FAST_NO_HZ=n.
> > > > >
> > > > > Interesting. Thank you for trying this -- and we at least have a
> > > > > short-term workaround for this problem. I will put a patch together
> > > > > for further investigation.
> > > >
> > > > I don't specifically need this config option so I'm fine without it in
> > > > the long term, but I guess it's not supposed to behave like that.
> > >
> > > OK, good, we have a long-term workaround for your specific case,
> > > even better. ;-)
> > >
> > > But yes, there are situations where RCU_FAST_NO_HZ needs to work
> > > a bit better. I hope you will bear with me with a bit more
> > > testing...
> > >
> > > > > In the meantime, could you please tell me how you were measuring
> > > > > performance for your kernel builds? Wall-clock time required to complete
> > > > > one build? Number of builds completed per unit time? Something else?
> > > >
> > > > Actually, I wasn't all this sophisticated. I have a system monitor
> > > > applet on my top panel (using MATE, Linux Mint), four little graphs,
> > > > one of which shows CPU usage. Different colors indicate different kinds
> > > > of CPU usage. Blue shows user space usage, red shows system usage, and
> > > > two more for nice and iowait. During a normal compile it's almost
> > > > completely filled with blue user space CPU usage, only the top few
> > > > pixels show some iowait and system usage. With CONFIG_RCU_FAST_NO_HZ
> > > > set, about 3/4 of the graph was red system CPU usage, the rest was
> > > > blue. So I just looked for a pile of red on my graphs when I tested
> > > > different kernel builds. But also compile speed was horrible; I couldn't
> > > > wait for the build to finish. Even the UI got unresponsive.
> > >
> > > We have been having problems with CPU accounting, but this one looks
> > > quite real.
> > >
> > > > Now I did some measuring. In the normal case a compile finished in 36
> > > > seconds, compiled 315 object files. Here are some output lines from
> > > > dstat -tasm --vm during compile:
> > > > ----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
> > > > time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
> > > > 21-08 21:48:05| 91 8 2 0 0 0| 0 5852k| 0 0 | 0 0 |1413 1772 | 0 7934M| 581M 58.0M 602M 6553M| 0 71k 46k 54k
> > > > 21-08 21:48:06| 93 6 1 0 0 0| 0 2064k| 137B 131B| 0 0 |1356 1650 | 0 7934M| 649M 58.0M 604M 6483M| 0 72k 47k 28k
> > > > 21-08 21:48:07| 86 11 4 0 0 0| 0 5872k| 0 0 | 0 0 |2000 2991 | 0 7934M| 577M 58.0M 627M 6531M| 0 99k 67k 79k
> > > > 21-08 21:48:08| 87 9 3 0 0 0| 0 2840k| 0 0 | 0 0 |2558 4164 | 0 7934M| 597M 58.0M 632M 6507M| 0 96k 57k 51k
> > > > 21-08 21:48:09| 93 7 1 0 0 0| 0 3032k| 0 0 | 0 0 |1329 1512 | 0 7934M| 641M 58.0M 626M 6469M| 0 61k 48k 39k
> > > > 21-08 21:48:10| 93 6 0 0 0 0| 0 4984k| 0 0 | 0 0 |1160 1146 | 0 7934M| 572M 58.0M 628M 6536M| 0 50k 40k 57k
> > > > 21-08 21:48:11| 86 9 6 0 0 0| 0 2520k| 0 0 | 0 0 |2947 4760 | 0 7934M| 605M 58.0M 631M 6500M| 0 103k 55k 45k
> > > > 21-08 21:48:12| 90 8 2 0 0 0| 0 2840k| 0 0 | 0 0 |2674 4179 | 0 7934M| 671M 58.0M 635M 6431M| 0 84k 59k 42k
> > > > 21-08 21:48:13| 90 9 1 0 0 0| 0 4656k| 0 0 | 0 0 |1223 1410 | 0 7934M| 643M 58.0M 638M 6455M| 0 90k 62k 68k
> > > > 21-08 21:48:14| 91 8 1 0 0 0| 0 3572k| 0 0 | 0 0 |1432 1828 | 0 7934M| 647M 58.0M 641M 6447M| 0 81k 59k 57k
> > > > 21-08 21:48:15| 91 8 1 0 0 0| 0 5116k| 116B 0 | 0 0 |1194 1295 | 0 7934M| 605M 58.0M 644M 6487M| 0 69k 54k 64k
> > > > 21-08 21:48:16| 87 10 3 0 0 0| 0 5140k| 0 0 | 0 0 |1761 2586 | 0 7934M| 584M 58.0M 650M 6502M| 0 105k 64k 68k
> > > >
> > > > The abnormal case compiled only 182 object files in 6 and a half minutes,
> > > > then I stopped it. The same dstat output for this case:
> > > > ----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
> > > > time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
> > > > 21-08 22:10:49| 27 62 0 0 11 0| 0 0 | 210B 0 | 0 0 |1414 3137k| 0 7934M| 531M 57.6M 595M 6611M| 0 1628 1250 322
> > > > 21-08 22:10:50| 25 60 4 0 11 0| 0 88k| 126B 0 | 0 0 |1337 3110k| 0 7934M| 531M 57.6M 595M 6611M| 0 91 128 115
> > > > 21-08 22:10:51| 26 63 0 0 11 0| 0 184k| 294B 0 | 0 0 |1411 3147k| 0 7934M| 531M 57.6M 595M 6611M| 0 1485 814 815
> > > > 21-08 22:10:52| 26 63 0 0 11 0| 0 0 | 437B 239B| 0 0 |1355 3160k| 0 7934M| 531M 57.6M 595M 6611M| 0 24 94 97
> > > > 21-08 22:10:53| 26 63 0 0 11 0| 0 0 | 168B 0 | 0 0 |1397 3155k| 0 7934M| 531M 57.6M 595M 6611M| 0 479 285 273
> > > > 21-08 22:10:54| 26 63 0 0 11 0| 0 4096B| 396B 324B| 0 0 |1346 3154k| 0 7934M| 531M 57.6M 595M 6611M| 0 27 145 145
> > > > 21-08 22:10:55| 26 63 0 0 11 0| 0 60k| 0 0 | 0 0 |1353 3148k| 0 7934M| 531M 57.6M 595M 6610M| 0 93 117 36
> > > > 21-08 22:10:56| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1341 3172k| 0 7934M| 531M 57.6M 595M 6610M| 0 158 87 74
> > > > 21-08 22:10:57| 26 62 1 0 11 0| 0 0 | 42B 60B| 0 0 |1332 3162k| 0 7934M| 531M 57.6M 595M 6610M| 0 56 82 78
> > > > 21-08 22:10:58| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1334 3178k| 0 7934M| 531M 57.6M 595M 6610M| 0 26 56 56
> > > > 21-08 22:10:59| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1336 3179k| 0 7934M| 531M 57.6M 595M 6610M| 0 3 33 32
> > > > 21-08 22:11:00| 26 63 0 0 11 0| 0 24k| 90B 108B| 0 0 |1347 3172k| 0 7934M| 531M 57.6M 595M 6610M| 0 41 73 71
> > > >
> > > > I have four logical cores so 25% makes up 1 core. I don't know if the ~26% user CPU usage has anything to do with this fact or just coincidence. The rest is ~63% system and ~11% hardware interrupt. Do these support what you suspect?
> > >
> > > The massive increase in context switches does come as a bit of a surprise!
> > > It does rule out my initial suspicion of lock contention, but then again
> > > the fact that you have only four CPUs made that pretty unlikely to begin
> > > with.
> > >
> > > 2.4k average context switches in the good case for the full run vs. 3,156k
> > > for about half of a run in the bad case. That is an increase of more
> > > than three orders of magnitude!
> > >
> > > Yow!!!
> > >
> > > Page faults are actually -higher- in the good case. You have about 6.5GB
> > > free in both cases, so you are not running out of memory. Lots more disk
> > > writes in the good case, perhaps consistent with its getting more done.
> > > Networking is negligible in both cases.
> > >
> > > Lots of hardware interrupts in the bad case as well. Would you be willing
> > > to take a look at /proc/interrupts before and after to see which one you
> > > are getting hit with? (Or whatever interrupt tracking tool you prefer.)
> >
> > Here are the results.
> >
> > Good case before:
> > CPU0 CPU1 CPU2 CPU3
> > 0: 17 0 0 0 IO-APIC-edge timer
> > 1: 356 1 68 4 IO-APIC-edge i8042
> > 8: 0 0 1 0 IO-APIC-edge rtc0
> > 9: 330 14 449 71 IO-APIC-fasteoi acpi
> > 12: 10 108 269 2696 IO-APIC-edge i8042
> > 16: 36 10 111 2 IO-APIC-fasteoi ehci_hcd:usb1
> > 17: 20 3 25 4 IO-APIC-fasteoi mmc0
> > 21: 3 0 34 0 IO-APIC-fasteoi ehci_hcd:usb2
> > 40: 0 1 12 11 PCI-MSI-edge mei_me
> > 41: 10617 173 9959 292 PCI-MSI-edge ahci
> > 42: 862 11 186 26 PCI-MSI-edge xhci_hcd
> > 43: 107 77 27 102 PCI-MSI-edge i915
> > 44: 5322 20 434 22 PCI-MSI-edge iwlwifi
> > 45: 180 0 183 86 PCI-MSI-edge snd_hda_intel
> > 46: 0 3 0 0 PCI-MSI-edge eth0
> > NMI: 1 0 0 0 Non-maskable interrupts
> > LOC: 16312 15177 10840 8995 Local timer interrupts
> > SPU: 0 0 0 0 Spurious interrupts
> > PMI: 1 0 0 0 Performance monitoring interrupts
> > IWI: 1160 523 1031 481 IRQ work interrupts
> > RTR: 3 0 0 0 APIC ICR read retries
> > RES: 14976 16135 9973 10784 Rescheduling interrupts
> > CAL: 482 457 151 370 Function call interrupts
> > TLB: 70 106 352 230 TLB shootdowns
> > TRM: 0 0 0 0 Thermal event interrupts
> > THR: 0 0 0 0 Threshold APIC interrupts
> > MCE: 0 0 0 0 Machine check exceptions
> > MCP: 2 2 2 2 Machine check polls
> > ERR: 0
> > MIS: 0
> >
> > Good case after:
> > CPU0 CPU1 CPU2 CPU3
> > 0: 17 0 0 0 IO-APIC-edge timer
> > 1: 367 1 81 4 IO-APIC-edge i8042
> > 8: 0 0 1 0 IO-APIC-edge rtc0
> > 9: 478 14 460 71 IO-APIC-fasteoi acpi
> > 12: 10 108 269 2696 IO-APIC-edge i8042
> > 16: 36 10 111 2 IO-APIC-fasteoi ehci_hcd:usb1
> > 17: 20 3 25 4 IO-APIC-fasteoi mmc0
> > 21: 3 0 34 0 IO-APIC-fasteoi ehci_hcd:usb2
> > 40: 0 1 12 11 PCI-MSI-edge mei_me
> > 41: 16888 173 9959 292 PCI-MSI-edge ahci
> > 42: 1102 11 186 26 PCI-MSI-edge xhci_hcd
> > 43: 107 132 27 136 PCI-MSI-edge i915
> > 44: 6943 20 434 22 PCI-MSI-edge iwlwifi
> > 45: 180 0 183 86 PCI-MSI-edge snd_hda_intel
> > 46: 0 3 0 0 PCI-MSI-edge eth0
> > NMI: 4 3 3 3 Non-maskable interrupts
> > LOC: 26845 24780 19025 17746 Local timer interrupts
> > SPU: 0 0 0 0 Spurious interrupts
> > PMI: 4 3 3 3 Performance monitoring interrupts
> > IWI: 1637 751 1287 695 IRQ work interrupts
> > RTR: 3 0 0 0 APIC ICR read retries
> > RES: 26511 26673 18791 20194 Rescheduling interrupts
> > CAL: 510 480 151 370 Function call interrupts
> > TLB: 361 292 575 461 TLB shootdowns
> > TRM: 0 0 0 0 Thermal event interrupts
> > THR: 0 0 0 0 Threshold APIC interrupts
> > MCE: 0 0 0 0 Machine check exceptions
> > MCP: 2 2 2 2 Machine check polls
> > ERR: 0
> > MIS: 0
> >
> > Bad case before:
> > CPU0 CPU1 CPU2 CPU3
> > 0: 17 0 0 0 IO-APIC-edge timer
> > 1: 172 3 78 3 IO-APIC-edge i8042
> > 8: 0 1 0 0 IO-APIC-edge rtc0
> > 9: 1200 148 395 81 IO-APIC-fasteoi acpi
> > 12: 1625 2 348 10 IO-APIC-edge i8042
> > 16: 26 23 115 0 IO-APIC-fasteoi ehci_hcd:usb1
> > 17: 16 3 12 21 IO-APIC-fasteoi mmc0
> > 21: 2 2 33 0 IO-APIC-fasteoi ehci_hcd:usb2
> > 40: 0 0 14 10 PCI-MSI-edge mei_me
> > 41: 15776 374 8497 687 PCI-MSI-edge ahci
> > 42: 1297 829 115 24 PCI-MSI-edge xhci_hcd
> > 43: 103 149 9 212 PCI-MSI-edge i915
> > 44: 13151 101 511 91 PCI-MSI-edge iwlwifi
> > 45: 153 159 0 122 PCI-MSI-edge snd_hda_intel
> > 46: 0 1 1 0 PCI-MSI-edge eth0
> > NMI: 32 31 31 31 Non-maskable interrupts
> > LOC: 82504 82732 74172 75985 Local timer interrupts
> > SPU: 0 0 0 0 Spurious interrupts
> > PMI: 32 31 31 31 Performance monitoring interrupts
> > IWI: 17816 16278 13833 13282 IRQ work interrupts
> > RTR: 3 0 0 0 APIC ICR read retries
> > RES: 18784 21084 13313 12946 Rescheduling interrupts
> > CAL: 393 422 306 356 Function call interrupts
> > TLB: 231 176 235 191 TLB shootdowns
> > TRM: 0 0 0 0 Thermal event interrupts
> > THR: 0 0 0 0 Threshold APIC interrupts
> > MCE: 0 0 0 0 Machine check exceptions
> > MCP: 3 3 3 3 Machine check polls
> > ERR: 0
> > MIS: 0
> >
> > Bad case after:
> > CPU0 CPU1 CPU2 CPU3
> > 0: 17 0 0 0 IO-APIC-edge timer
> > 1: 415 3 85 3 IO-APIC-edge i8042
> > 8: 0 1 0 0 IO-APIC-edge rtc0
> > 9: 1277 148 428 81 IO-APIC-fasteoi acpi
> > 12: 1625 2 348 10 IO-APIC-edge i8042
> > 16: 26 23 115 0 IO-APIC-fasteoi ehci_hcd:usb1
> > 17: 16 3 12 21 IO-APIC-fasteoi mmc0
> > 21: 2 2 33 0 IO-APIC-fasteoi ehci_hcd:usb2
> > 40: 0 0 14 10 PCI-MSI-edge mei_me
> > 41: 17814 374 8497 687 PCI-MSI-edge ahci
> > 42: 1567 829 115 24 PCI-MSI-edge xhci_hcd
> > 43: 103 177 9 242 PCI-MSI-edge i915
> > 44: 14956 101 511 91 PCI-MSI-edge iwlwifi
> > 45: 153 159 0 122 PCI-MSI-edge snd_hda_intel
> > 46: 0 1 1 0 PCI-MSI-edge eth0
> > NMI: 36 35 34 34 Non-maskable interrupts
> > LOC: 92429 92708 81714 84071 Local timer interrupts
> > SPU: 0 0 0 0 Spurious interrupts
> > PMI: 36 35 34 34 Performance monitoring interrupts
> > IWI: 22594 19658 17439 14257 IRQ work interrupts
> > RTR: 3 0 0 0 APIC ICR read retries
> > RES: 21491 24670 14618 14569 Rescheduling interrupts
> > CAL: 441 439 306 356 Function call interrupts
> > TLB: 232 181 274 465 TLB shootdowns
> > TRM: 0 0 0 0 Thermal event interrupts
> > THR: 0 0 0 0 Threshold APIC interrupts
> > MCE: 0 0 0 0 Machine check exceptions
> > MCP: 3 3 3 3 Machine check polls
> > ERR: 0
> > MIS: 0
>
> Lots more local timer interrupts, which is consistent with the higher
> time in interrupt handlers for the bad case.
>
> > > One hypothesis is that your workload and configuration are interacting
> > > with RCU_FAST_NO_HZ to force very large numbers of RCU grace periods.
> > > Could you please check for this by building with CONFIG_RCU_TRACE=y,
> > > mounting debugfs somewhere and dumping rcu/rcu_sched/rcugp before and
> > > after each run?
> >
> > Good case before:
> > completed=8756 gpnum=8757 age=0 max=21
> > after:
> > completed=14686 gpnum=14687 age=0 max=21
> >
> > Bad case before:
> > completed=22970 gpnum=22971 age=0 max=21
> > after:
> > completed=26110 gpnum=26111 age=0 max=21
>
> In the good case, (14686-8756)/40=148.25 grace periods per second, which
> is a fast but reasonable rate given your HZ=250. Not a large difference
> in the number of grace periods, but extrapolating for the longer runtime,
> maybe ten times as much. But not much change in grace-period rate per
> unit time.
>
> > The test scenario was the following in both cases (mixed English and pseudo-bash):
> > reboot, login, start terminal
> > cd project
> > rm -r build
> > cat /proc/interrupts >> somefile ; cat /sys/kernel/debug/rcu/rcu_sched/rcugp >> somefile
> > scons -j4
> > wait ~40 sec (good case finished, Ctrl-C in bad case)
> > cat /proc/interrupts >> somefile ; cat /sys/kernel/debug/rcu/rcu_sched/rcugp >> somefile
> >
> > I stopped the build in the bad case after about the same time the good
> > case finished, so the extra interrupts and RCU grace periods because of the
> > longer runtime don't fake the results.
>
> That procedure works for me, thank you for laying it out carefully.
>
> I believe I see what is going on and how to fix it, though it may take
> me a bit to work things through and get a good patch.
>
> Thank you very much for your testing efforts!
I'm glad I can help. I've been using Linux for many years, now I have a
chance to help the community, to do something in return. I'm quite
enjoying this :)
Tibor
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: Unusually high system CPU usage with recent kernels
2013-08-24 19:59 Tibor Billes
@ 2013-08-24 21:03 ` Paul E. McKenney
0 siblings, 0 replies; 28+ messages in thread
From: Paul E. McKenney @ 2013-08-24 21:03 UTC (permalink / raw)
To: Tibor Billes; +Cc: linux-kernel
On Sat, Aug 24, 2013 at 09:59:45PM +0200, Tibor Billes wrote:
> From: Paul E. McKenney Sent: 08/22/13 12:09 AM
> > On Wed, Aug 21, 2013 at 11:05:51PM +0200, Tibor Billes wrote:
> > > > From: Paul E. McKenney Sent: 08/21/13 09:12 PM
> > > > On Wed, Aug 21, 2013 at 08:14:46PM +0200, Tibor Billes wrote:
> > > > > > From: Paul E. McKenney Sent: 08/20/13 11:43 PM
> > > > > > On Tue, Aug 20, 2013 at 10:52:26PM +0200, Tibor Billes wrote:
> > > > > > > > From: Paul E. McKenney Sent: 08/20/13 04:53 PM
> > > > > > > > On Tue, Aug 20, 2013 at 08:01:28AM +0200, Tibor Billes wrote:
> > > > > > > > > Hi,
> > > > > > > > >
> > > > > > > > > I was using the 3.9.7 stable release and tried to upgrade to the 3.10.x series.
> > > > > > > > > The 3.10.x series was showing unusually high (>75%) system CPU usage in some
> > > > > > > > > situations, making things really slow. The latest stable I tried is 3.10.7.
> > > > > > > > > I also tried 3.11-rc5, they both show this behaviour. This behaviour doesn't
> > > > > > > > > show up when the system is idling, only when doing some CPU intensive work,
> > > > > > > > > like compiling with multiple threads. Compiling with only one thread seems not
> > > > > > > > > to trigger this behaviour.
> > > > > > > > >
> > > > > > > > > To be more precise I did a `perf record -a` while compiling a large C++ program
> > > > > > > > > with scons using 4 threads, the result is appended at the end of this email.
> > > > > > > >
> > > > > > > > New one on me! You are running a mainstream system (x86_64), so I am
> > > > > > > > surprised no one else noticed.
> > > > > > > >
> > > > > > > > Could you please send along your .config file?
> > > > > > >
> > > > > > > Here it is
> > > > > >
> > > > > > Interesting. I don't see RCU stuff all that high on the list, but
> > > > > > the items I do see lead me to suspect RCU_FAST_NO_HZ, which has some
> > > > > > relevance to the otherwise inexplicable group of commits you located
> > > > > > with your bisection. Could you please rerun with CONFIG_RCU_FAST_NO_HZ=n?
> > > > > >
> > > > > > If that helps, there are some things I could try.
> > > > >
> > > > > It did help. I didn't notice anything unusual when running with CONFIG_RCU_FAST_NO_HZ=n.
> > > >
> > > > Interesting. Thank you for trying this -- and we at least have a
> > > > short-term workaround for this problem. I will put a patch together
> > > > for further investigation.
> > >
> > > I don't specifically need this config option so I'm fine without it in
> > > the long term, but I guess it's not supposed to behave like that.
> >
> > OK, good, we have a long-term workaround for your specific case,
> > even better. ;-)
> >
> > But yes, there are situations where RCU_FAST_NO_HZ needs to work
> > a bit better. I hope you will bear with me with a bit more
> > testing...
> >
> > > > In the meantime, could you please tell me how you were measuring
> > > > performance for your kernel builds? Wall-clock time required to complete
> > > > one build? Number of builds completed per unit time? Something else?
> > >
> > > Actually, I wasn't all this sophisticated. I have a system monitor
> > > applet on my top panel (using MATE, Linux Mint), four little graphs,
> > > one of which shows CPU usage. Different colors indicate different kinds
> > > of CPU usage. Blue shows user space usage, red shows system usage, and
> > > two more for nice and iowait. During a normal compile it's almost
> > > completely filled with blue user space CPU usage, only the top few
> > > pixels show some iowait and system usage. With CONFIG_RCU_FAST_NO_HZ
> > > set, about 3/4 of the graph was red system CPU usage, the rest was
> > > blue. So I just looked for a pile of red on my graphs when I tested
> > > different kernel builds. But also compile speed was horrible; I couldn't
> > > wait for the build to finish. Even the UI got unresponsive.
> >
> > We have been having problems with CPU accounting, but this one looks
> > quite real.
> >
> > > Now I did some measuring. In the normal case a compile finished in 36
> > > seconds, compiled 315 object files. Here are some output lines from
> > > dstat -tasm --vm during compile:
> > > ----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
> > > time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
> > > 21-08 21:48:05| 91 8 2 0 0 0| 0 5852k| 0 0 | 0 0 |1413 1772 | 0 7934M| 581M 58.0M 602M 6553M| 0 71k 46k 54k
> > > 21-08 21:48:06| 93 6 1 0 0 0| 0 2064k| 137B 131B| 0 0 |1356 1650 | 0 7934M| 649M 58.0M 604M 6483M| 0 72k 47k 28k
> > > 21-08 21:48:07| 86 11 4 0 0 0| 0 5872k| 0 0 | 0 0 |2000 2991 | 0 7934M| 577M 58.0M 627M 6531M| 0 99k 67k 79k
> > > 21-08 21:48:08| 87 9 3 0 0 0| 0 2840k| 0 0 | 0 0 |2558 4164 | 0 7934M| 597M 58.0M 632M 6507M| 0 96k 57k 51k
> > > 21-08 21:48:09| 93 7 1 0 0 0| 0 3032k| 0 0 | 0 0 |1329 1512 | 0 7934M| 641M 58.0M 626M 6469M| 0 61k 48k 39k
> > > 21-08 21:48:10| 93 6 0 0 0 0| 0 4984k| 0 0 | 0 0 |1160 1146 | 0 7934M| 572M 58.0M 628M 6536M| 0 50k 40k 57k
> > > 21-08 21:48:11| 86 9 6 0 0 0| 0 2520k| 0 0 | 0 0 |2947 4760 | 0 7934M| 605M 58.0M 631M 6500M| 0 103k 55k 45k
> > > 21-08 21:48:12| 90 8 2 0 0 0| 0 2840k| 0 0 | 0 0 |2674 4179 | 0 7934M| 671M 58.0M 635M 6431M| 0 84k 59k 42k
> > > 21-08 21:48:13| 90 9 1 0 0 0| 0 4656k| 0 0 | 0 0 |1223 1410 | 0 7934M| 643M 58.0M 638M 6455M| 0 90k 62k 68k
> > > 21-08 21:48:14| 91 8 1 0 0 0| 0 3572k| 0 0 | 0 0 |1432 1828 | 0 7934M| 647M 58.0M 641M 6447M| 0 81k 59k 57k
> > > 21-08 21:48:15| 91 8 1 0 0 0| 0 5116k| 116B 0 | 0 0 |1194 1295 | 0 7934M| 605M 58.0M 644M 6487M| 0 69k 54k 64k
> > > 21-08 21:48:16| 87 10 3 0 0 0| 0 5140k| 0 0 | 0 0 |1761 2586 | 0 7934M| 584M 58.0M 650M 6502M| 0 105k 64k 68k
> > >
> > > The abnormal case compiled only 182 object files in 6 and a half minutes,
> > > then I stopped it. The same dstat output for this case:
> > > ----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
> > > time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
> > > 21-08 22:10:49| 27 62 0 0 11 0| 0 0 | 210B 0 | 0 0 |1414 3137k| 0 7934M| 531M 57.6M 595M 6611M| 0 1628 1250 322
> > > 21-08 22:10:50| 25 60 4 0 11 0| 0 88k| 126B 0 | 0 0 |1337 3110k| 0 7934M| 531M 57.6M 595M 6611M| 0 91 128 115
> > > 21-08 22:10:51| 26 63 0 0 11 0| 0 184k| 294B 0 | 0 0 |1411 3147k| 0 7934M| 531M 57.6M 595M 6611M| 0 1485 814 815
> > > 21-08 22:10:52| 26 63 0 0 11 0| 0 0 | 437B 239B| 0 0 |1355 3160k| 0 7934M| 531M 57.6M 595M 6611M| 0 24 94 97
> > > 21-08 22:10:53| 26 63 0 0 11 0| 0 0 | 168B 0 | 0 0 |1397 3155k| 0 7934M| 531M 57.6M 595M 6611M| 0 479 285 273
> > > 21-08 22:10:54| 26 63 0 0 11 0| 0 4096B| 396B 324B| 0 0 |1346 3154k| 0 7934M| 531M 57.6M 595M 6611M| 0 27 145 145
> > > 21-08 22:10:55| 26 63 0 0 11 0| 0 60k| 0 0 | 0 0 |1353 3148k| 0 7934M| 531M 57.6M 595M 6610M| 0 93 117 36
> > > 21-08 22:10:56| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1341 3172k| 0 7934M| 531M 57.6M 595M 6610M| 0 158 87 74
> > > 21-08 22:10:57| 26 62 1 0 11 0| 0 0 | 42B 60B| 0 0 |1332 3162k| 0 7934M| 531M 57.6M 595M 6610M| 0 56 82 78
> > > 21-08 22:10:58| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1334 3178k| 0 7934M| 531M 57.6M 595M 6610M| 0 26 56 56
> > > 21-08 22:10:59| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1336 3179k| 0 7934M| 531M 57.6M 595M 6610M| 0 3 33 32
> > > 21-08 22:11:00| 26 63 0 0 11 0| 0 24k| 90B 108B| 0 0 |1347 3172k| 0 7934M| 531M 57.6M 595M 6610M| 0 41 73 71
> > >
> > > I have four logical cores so 25% makes up 1 core. I don't know if the ~26% user CPU usage has anything to do with this fact or just coincidence. The rest is ~63% system and ~11% hardware interrupt. Do these support what you suspect?
> >
> > The massive increase in context switches does come as a bit of a surprise!
> > It does rule out my initial suspicion of lock contention, but then again
> > the fact that you have only four CPUs made that pretty unlikely to begin
> > with.
> >
> > 2.4k average context switches in the good case for the full run vs. 3,156k
> > for about half of a run in the bad case. That is an increase of more
> > than three orders of magnitude!
> >
> > Yow!!!
> >
> > Page faults are actually -higher- in the good case. You have about 6.5GB
> > free in both cases, so you are not running out of memory. Lots more disk
> > writes in the good case, perhaps consistent with its getting more done.
> > Networking is negligible in both cases.
> >
> > Lots of hardware interrupts in the bad case as well. Would you be willing
> > to take a look at /proc/interrupts before and after to see which one you
> > are getting hit with? (Or whatever interrupt tracking tool you prefer.)
>
> Here are the results.
>
> Good case before:
> CPU0 CPU1 CPU2 CPU3
> 0: 17 0 0 0 IO-APIC-edge timer
> 1: 356 1 68 4 IO-APIC-edge i8042
> 8: 0 0 1 0 IO-APIC-edge rtc0
> 9: 330 14 449 71 IO-APIC-fasteoi acpi
> 12: 10 108 269 2696 IO-APIC-edge i8042
> 16: 36 10 111 2 IO-APIC-fasteoi ehci_hcd:usb1
> 17: 20 3 25 4 IO-APIC-fasteoi mmc0
> 21: 3 0 34 0 IO-APIC-fasteoi ehci_hcd:usb2
> 40: 0 1 12 11 PCI-MSI-edge mei_me
> 41: 10617 173 9959 292 PCI-MSI-edge ahci
> 42: 862 11 186 26 PCI-MSI-edge xhci_hcd
> 43: 107 77 27 102 PCI-MSI-edge i915
> 44: 5322 20 434 22 PCI-MSI-edge iwlwifi
> 45: 180 0 183 86 PCI-MSI-edge snd_hda_intel
> 46: 0 3 0 0 PCI-MSI-edge eth0
> NMI: 1 0 0 0 Non-maskable interrupts
> LOC: 16312 15177 10840 8995 Local timer interrupts
> SPU: 0 0 0 0 Spurious interrupts
> PMI: 1 0 0 0 Performance monitoring interrupts
> IWI: 1160 523 1031 481 IRQ work interrupts
> RTR: 3 0 0 0 APIC ICR read retries
> RES: 14976 16135 9973 10784 Rescheduling interrupts
> CAL: 482 457 151 370 Function call interrupts
> TLB: 70 106 352 230 TLB shootdowns
> TRM: 0 0 0 0 Thermal event interrupts
> THR: 0 0 0 0 Threshold APIC interrupts
> MCE: 0 0 0 0 Machine check exceptions
> MCP: 2 2 2 2 Machine check polls
> ERR: 0
> MIS: 0
>
> Good case after:
> CPU0 CPU1 CPU2 CPU3
> 0: 17 0 0 0 IO-APIC-edge timer
> 1: 367 1 81 4 IO-APIC-edge i8042
> 8: 0 0 1 0 IO-APIC-edge rtc0
> 9: 478 14 460 71 IO-APIC-fasteoi acpi
> 12: 10 108 269 2696 IO-APIC-edge i8042
> 16: 36 10 111 2 IO-APIC-fasteoi ehci_hcd:usb1
> 17: 20 3 25 4 IO-APIC-fasteoi mmc0
> 21: 3 0 34 0 IO-APIC-fasteoi ehci_hcd:usb2
> 40: 0 1 12 11 PCI-MSI-edge mei_me
> 41: 16888 173 9959 292 PCI-MSI-edge ahci
> 42: 1102 11 186 26 PCI-MSI-edge xhci_hcd
> 43: 107 132 27 136 PCI-MSI-edge i915
> 44: 6943 20 434 22 PCI-MSI-edge iwlwifi
> 45: 180 0 183 86 PCI-MSI-edge snd_hda_intel
> 46: 0 3 0 0 PCI-MSI-edge eth0
> NMI: 4 3 3 3 Non-maskable interrupts
> LOC: 26845 24780 19025 17746 Local timer interrupts
> SPU: 0 0 0 0 Spurious interrupts
> PMI: 4 3 3 3 Performance monitoring interrupts
> IWI: 1637 751 1287 695 IRQ work interrupts
> RTR: 3 0 0 0 APIC ICR read retries
> RES: 26511 26673 18791 20194 Rescheduling interrupts
> CAL: 510 480 151 370 Function call interrupts
> TLB: 361 292 575 461 TLB shootdowns
> TRM: 0 0 0 0 Thermal event interrupts
> THR: 0 0 0 0 Threshold APIC interrupts
> MCE: 0 0 0 0 Machine check exceptions
> MCP: 2 2 2 2 Machine check polls
> ERR: 0
> MIS: 0
>
> Bad case before:
> CPU0 CPU1 CPU2 CPU3
> 0: 17 0 0 0 IO-APIC-edge timer
> 1: 172 3 78 3 IO-APIC-edge i8042
> 8: 0 1 0 0 IO-APIC-edge rtc0
> 9: 1200 148 395 81 IO-APIC-fasteoi acpi
> 12: 1625 2 348 10 IO-APIC-edge i8042
> 16: 26 23 115 0 IO-APIC-fasteoi ehci_hcd:usb1
> 17: 16 3 12 21 IO-APIC-fasteoi mmc0
> 21: 2 2 33 0 IO-APIC-fasteoi ehci_hcd:usb2
> 40: 0 0 14 10 PCI-MSI-edge mei_me
> 41: 15776 374 8497 687 PCI-MSI-edge ahci
> 42: 1297 829 115 24 PCI-MSI-edge xhci_hcd
> 43: 103 149 9 212 PCI-MSI-edge i915
> 44: 13151 101 511 91 PCI-MSI-edge iwlwifi
> 45: 153 159 0 122 PCI-MSI-edge snd_hda_intel
> 46: 0 1 1 0 PCI-MSI-edge eth0
> NMI: 32 31 31 31 Non-maskable interrupts
> LOC: 82504 82732 74172 75985 Local timer interrupts
> SPU: 0 0 0 0 Spurious interrupts
> PMI: 32 31 31 31 Performance monitoring interrupts
> IWI: 17816 16278 13833 13282 IRQ work interrupts
> RTR: 3 0 0 0 APIC ICR read retries
> RES: 18784 21084 13313 12946 Rescheduling interrupts
> CAL: 393 422 306 356 Function call interrupts
> TLB: 231 176 235 191 TLB shootdowns
> TRM: 0 0 0 0 Thermal event interrupts
> THR: 0 0 0 0 Threshold APIC interrupts
> MCE: 0 0 0 0 Machine check exceptions
> MCP: 3 3 3 3 Machine check polls
> ERR: 0
> MIS: 0
>
> Bad case after:
> CPU0 CPU1 CPU2 CPU3
> 0: 17 0 0 0 IO-APIC-edge timer
> 1: 415 3 85 3 IO-APIC-edge i8042
> 8: 0 1 0 0 IO-APIC-edge rtc0
> 9: 1277 148 428 81 IO-APIC-fasteoi acpi
> 12: 1625 2 348 10 IO-APIC-edge i8042
> 16: 26 23 115 0 IO-APIC-fasteoi ehci_hcd:usb1
> 17: 16 3 12 21 IO-APIC-fasteoi mmc0
> 21: 2 2 33 0 IO-APIC-fasteoi ehci_hcd:usb2
> 40: 0 0 14 10 PCI-MSI-edge mei_me
> 41: 17814 374 8497 687 PCI-MSI-edge ahci
> 42: 1567 829 115 24 PCI-MSI-edge xhci_hcd
> 43: 103 177 9 242 PCI-MSI-edge i915
> 44: 14956 101 511 91 PCI-MSI-edge iwlwifi
> 45: 153 159 0 122 PCI-MSI-edge snd_hda_intel
> 46: 0 1 1 0 PCI-MSI-edge eth0
> NMI: 36 35 34 34 Non-maskable interrupts
> LOC: 92429 92708 81714 84071 Local timer interrupts
> SPU: 0 0 0 0 Spurious interrupts
> PMI: 36 35 34 34 Performance monitoring interrupts
> IWI: 22594 19658 17439 14257 IRQ work interrupts
> RTR: 3 0 0 0 APIC ICR read retries
> RES: 21491 24670 14618 14569 Rescheduling interrupts
> CAL: 441 439 306 356 Function call interrupts
> TLB: 232 181 274 465 TLB shootdowns
> TRM: 0 0 0 0 Thermal event interrupts
> THR: 0 0 0 0 Threshold APIC interrupts
> MCE: 0 0 0 0 Machine check exceptions
> MCP: 3 3 3 3 Machine check polls
> ERR: 0
> MIS: 0
Lots more local timer interrupts, which is consistent with the higher
time in interrupt handlers for the bad case.
> > One hypothesis is that your workload and configuration are interacting
> > with RCU_FAST_NO_HZ to force very large numbers of RCU grace periods.
> > Could you please check for this by building with CONFIG_RCU_TRACE=y,
> > mounting debugfs somewhere and dumping rcu/rcu_sched/rcugp before and
> > after each run?
>
> Good case before:
> completed=8756 gpnum=8757 age=0 max=21
> after:
> completed=14686 gpnum=14687 age=0 max=21
>
> Bad case before:
> completed=22970 gpnum=22971 age=0 max=21
> after:
> completed=26110 gpnum=26111 age=0 max=21
In the good case, (14686-8756)/40 = 148.25 grace periods per second, which
is a fast but reasonable rate given your HZ=250. There is not a large
difference in the number of grace periods between the two runs; extrapolating
for the longer runtime the bad case would have needed to finish, maybe ten
times as many in total, but not much change in grace-period rate per
unit time.
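(For reference, the same arithmetic for both runs — a quick sketch in shell,
assuming the roughly 40-second window described in the test scenario below:

  # good case: completed went from 8756 to 14686 over ~40 s
  echo $(( (14686 - 8756) / 40 ))    # ~148 grace periods per second
  # bad case: completed went from 22970 to 26110 over ~40 s
  echo $(( (26110 - 22970) / 40 ))   # ~78 grace periods per second

so the bad case is not seeing an unusually high grace-period rate.)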
> The test scenario was the following in both cases (mixed English and pseudo-bash):
> reboot, login, start terminal
> cd project
> rm -r build
> cat /proc/interrupts >> somefile ; cat /sys/kernel/debug/rcu/rcu_sched/rcugp >> somefile
> scons -j4
> wait ~40 sec (good case finished, Ctrl-C in bad case)
> cat /proc/interrupts >> somefile ; cat /sys/kernel/debug/rcu/rcu_sched/rcugp >> somefile
>
> I stopped the build in the bad case after about the same time the good
> case finished, so extra interrupts and RCU grace periods from a longer
> runtime don't skew the results.
That procedure works for me, thank you for laying it out carefully.
I believe I see what is going on and how to fix it, though it may take
me a bit to work things through and get a good patch.
Thank you very much for your testing efforts!
Thanx, Paul
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: Unusually high system CPU usage with recent kernels
@ 2013-08-24 20:30 Tibor Billes
0 siblings, 0 replies; 28+ messages in thread
From: Tibor Billes @ 2013-08-24 20:30 UTC (permalink / raw)
To: paulmck; +Cc: linux-kernel
> From: Paul E. McKenney Sent: 08/24/13 02:18 AM
> On Fri, Aug 23, 2013 at 03:20:25PM +0200, Tibor Billes wrote:
> > > From: Paul E. McKenney Sent: 08/22/13 12:09 AM
> > > On Wed, Aug 21, 2013 at 11:05:51PM +0200, Tibor Billes wrote:
> > > > > From: Paul E. McKenney Sent: 08/21/13 09:12 PM
> > > > > On Wed, Aug 21, 2013 at 08:14:46PM +0200, Tibor Billes wrote:
> > > > > > > From: Paul E. McKenney Sent: 08/20/13 11:43 PM
> > > > > > > On Tue, Aug 20, 2013 at 10:52:26PM +0200, Tibor Billes wrote:
> > > > > > > > > From: Paul E. McKenney Sent: 08/20/13 04:53 PM
> > > > > > > > > On Tue, Aug 20, 2013 at 08:01:28AM +0200, Tibor Billes wrote:
> > > > > > > > > > Hi,
> > > > > > > > > >
> > > > > > > > > > I was using the 3.9.7 stable release and tried to upgrade to the 3.10.x series.
> > > > > > > > > > The 3.10.x series was showing unusually high (>75%) system CPU usage in some
> > > > > > > > > > situations, making things really slow. The latest stable I tried is 3.10.7.
> > > > > > > > > > I also tried 3.11-rc5, they both show this behaviour. This behaviour doesn't
> > > > > > > > > > show up when the system is idling, only when doing some CPU intensive work,
> > > > > > > > > > like compiling with multiple threads. Compiling with only one thread seems not
> > > > > > > > > > to trigger this behaviour.
> > > > > > > > > >
> > > > > > > > > > To be more precise I did a `perf record -a` while compiling a large C++ program
> > > > > > > > > > with scons using 4 threads, the result is appended at the end of this email.
> > > > > > > > >
> > > > > > > > > New one on me! You are running a mainstream system (x86_64), so I am
> > > > > > > > > surprised no one else noticed.
> > > > > > > > >
> > > > > > > > > Could you please send along your .config file?
> > > > > > > >
> > > > > > > > Here it is
> > > > > > >
> > > > > > > Interesting. I don't see RCU stuff all that high on the list, but
> > > > > > > the items I do see lead me to suspect RCU_FAST_NO_HZ, which has some
> > > > > > > relevance to the otherwise inexplicable group of commits you located
> > > > > > > with your bisection. Could you please rerun with CONFIG_RCU_FAST_NO_HZ=n?
> > > > > > >
> > > > > > > If that helps, there are some things I could try.
> > > > > >
> > > > > > It did help. I didn't notice anything unusual when running with CONFIG_RCU_FAST_NO_HZ=n.
> > > > >
> > > > > Interesting. Thank you for trying this -- and we at least have a
> > > > > short-term workaround for this problem. I will put a patch together
> > > > > for further investigation.
> > > >
> > > > I don't specifically need this config option so I'm fine without it in
> > > > the long term, but I guess it's not supposed to behave like that.
> > >
> > > OK, good, we have a long-term workaround for your specific case,
> > > even better. ;-)
> > >
> > > But yes, there are situations where RCU_FAST_NO_HZ needs to work
> > > a bit better. I hope you will bear with me with a bit more
> > > testing...
> >
> > Don't worry, I will :) Unfortunately I didn't have time yesterday and I
> > won't have time today either. But I'll do what you asked tomorrow and I'll
> > send you the results.
>
> Not a problem! I did find one issue that -might- help, please see
> the patch below. (Run with CONFIG_RCU_FAST_NO_HZ=y.)
>
> Please let me know how it goes!
I applied the patch on top of 3.11-rc6, but it did not help; I'm still
hitting this issue.
Let me know if you have anything else that I can test!
Tibor
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: Unusually high system CPU usage with recent kernels
@ 2013-08-24 19:59 Tibor Billes
2013-08-24 21:03 ` Paul E. McKenney
0 siblings, 1 reply; 28+ messages in thread
From: Tibor Billes @ 2013-08-24 19:59 UTC (permalink / raw)
To: paulmck; +Cc: linux-kernel
From: Paul E. McKenney Sent: 08/22/13 12:09 AM
> On Wed, Aug 21, 2013 at 11:05:51PM +0200, Tibor Billes wrote:
> > > From: Paul E. McKenney Sent: 08/21/13 09:12 PM
> > > On Wed, Aug 21, 2013 at 08:14:46PM +0200, Tibor Billes wrote:
> > > > > From: Paul E. McKenney Sent: 08/20/13 11:43 PM
> > > > > On Tue, Aug 20, 2013 at 10:52:26PM +0200, Tibor Billes wrote:
> > > > > > > From: Paul E. McKenney Sent: 08/20/13 04:53 PM
> > > > > > > On Tue, Aug 20, 2013 at 08:01:28AM +0200, Tibor Billes wrote:
> > > > > > > > Hi,
> > > > > > > >
> > > > > > > > I was using the 3.9.7 stable release and tried to upgrade to the 3.10.x series.
> > > > > > > > The 3.10.x series was showing unusually high (>75%) system CPU usage in some
> > > > > > > > situations, making things really slow. The latest stable I tried is 3.10.7.
> > > > > > > > I also tried 3.11-rc5, they both show this behaviour. This behaviour doesn't
> > > > > > > > show up when the system is idling, only when doing some CPU intensive work,
> > > > > > > > like compiling with multiple threads. Compiling with only one thread seems not
> > > > > > > > to trigger this behaviour.
> > > > > > > >
> > > > > > > > To be more precise I did a `perf record -a` while compiling a large C++ program
> > > > > > > > with scons using 4 threads, the result is appended at the end of this email.
> > > > > > >
> > > > > > > New one on me! You are running a mainstream system (x86_64), so I am
> > > > > > > surprised no one else noticed.
> > > > > > >
> > > > > > > Could you please send along your .config file?
> > > > > >
> > > > > > Here it is
> > > > >
> > > > > Interesting. I don't see RCU stuff all that high on the list, but
> > > > > the items I do see lead me to suspect RCU_FAST_NO_HZ, which has some
> > > > > relevance to the otherwise inexplicable group of commits you located
> > > > > with your bisection. Could you please rerun with CONFIG_RCU_FAST_NO_HZ=n?
> > > > >
> > > > > If that helps, there are some things I could try.
> > > >
> > > > It did help. I didn't notice anything unusual when running with CONFIG_RCU_FAST_NO_HZ=n.
> > >
> > > Interesting. Thank you for trying this -- and we at least have a
> > > short-term workaround for this problem. I will put a patch together
> > > for further investigation.
> >
> > I don't specifically need this config option so I'm fine without it in
> > the long term, but I guess it's not supposed to behave like that.
>
> OK, good, we have a long-term workaround for your specific case,
> even better. ;-)
>
> But yes, there are situations where RCU_FAST_NO_HZ needs to work
> a bit better. I hope you will bear with me with a bit more
> testing...
>
> > > In the meantime, could you please tell me how you were measuring
> > > performance for your kernel builds? Wall-clock time required to complete
> > > one build? Number of builds completed per unit time? Something else?
> >
> > Actually, I wasn't all this sophisticated. I have a system monitor
> > applet on my top panel (using MATE, Linux Mint), four little graphs,
> > one of which shows CPU usage. Different colors indicate different kinds
> > of CPU usage. Blue shows user space usage, red shows system usage, and
> > two more for nice and iowait. During a normal compile it's almost
> > completely filled with blue user space CPU usage, only the top few
> > pixels show some iowait and system usage. With CONFIG_RCU_FAST_NO_HZ
> > set, about 3/4 of the graph was red system CPU usage, the rest was
> > blue. So I just looked for a pile of red on my graphs when I tested
> > different kernel builds. But compile speed was also so horrible that I
> > couldn't wait for the build to finish. Even the UI became unresponsive.
>
> We have been having problems with CPU accounting, but this one looks
> quite real.
>
> > Now I did some measuring. In the normal case a compile finished in 36
> > seconds and compiled 315 object files. Here are some output lines from
> > dstat -tasm --vm during compile:
> > ----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
> > time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
> > 21-08 21:48:05| 91 8 2 0 0 0| 0 5852k| 0 0 | 0 0 |1413 1772 | 0 7934M| 581M 58.0M 602M 6553M| 0 71k 46k 54k
> > 21-08 21:48:06| 93 6 1 0 0 0| 0 2064k| 137B 131B| 0 0 |1356 1650 | 0 7934M| 649M 58.0M 604M 6483M| 0 72k 47k 28k
> > 21-08 21:48:07| 86 11 4 0 0 0| 0 5872k| 0 0 | 0 0 |2000 2991 | 0 7934M| 577M 58.0M 627M 6531M| 0 99k 67k 79k
> > 21-08 21:48:08| 87 9 3 0 0 0| 0 2840k| 0 0 | 0 0 |2558 4164 | 0 7934M| 597M 58.0M 632M 6507M| 0 96k 57k 51k
> > 21-08 21:48:09| 93 7 1 0 0 0| 0 3032k| 0 0 | 0 0 |1329 1512 | 0 7934M| 641M 58.0M 626M 6469M| 0 61k 48k 39k
> > 21-08 21:48:10| 93 6 0 0 0 0| 0 4984k| 0 0 | 0 0 |1160 1146 | 0 7934M| 572M 58.0M 628M 6536M| 0 50k 40k 57k
> > 21-08 21:48:11| 86 9 6 0 0 0| 0 2520k| 0 0 | 0 0 |2947 4760 | 0 7934M| 605M 58.0M 631M 6500M| 0 103k 55k 45k
> > 21-08 21:48:12| 90 8 2 0 0 0| 0 2840k| 0 0 | 0 0 |2674 4179 | 0 7934M| 671M 58.0M 635M 6431M| 0 84k 59k 42k
> > 21-08 21:48:13| 90 9 1 0 0 0| 0 4656k| 0 0 | 0 0 |1223 1410 | 0 7934M| 643M 58.0M 638M 6455M| 0 90k 62k 68k
> > 21-08 21:48:14| 91 8 1 0 0 0| 0 3572k| 0 0 | 0 0 |1432 1828 | 0 7934M| 647M 58.0M 641M 6447M| 0 81k 59k 57k
> > 21-08 21:48:15| 91 8 1 0 0 0| 0 5116k| 116B 0 | 0 0 |1194 1295 | 0 7934M| 605M 58.0M 644M 6487M| 0 69k 54k 64k
> > 21-08 21:48:16| 87 10 3 0 0 0| 0 5140k| 0 0 | 0 0 |1761 2586 | 0 7934M| 584M 58.0M 650M 6502M| 0 105k 64k 68k
> >
> > The abnormal case compiled only 182 object files in 6 and a half minutes,
> > then I stopped it. The same dstat output for this case:
> > ----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
> > time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
> > 21-08 22:10:49| 27 62 0 0 11 0| 0 0 | 210B 0 | 0 0 |1414 3137k| 0 7934M| 531M 57.6M 595M 6611M| 0 1628 1250 322
> > 21-08 22:10:50| 25 60 4 0 11 0| 0 88k| 126B 0 | 0 0 |1337 3110k| 0 7934M| 531M 57.6M 595M 6611M| 0 91 128 115
> > 21-08 22:10:51| 26 63 0 0 11 0| 0 184k| 294B 0 | 0 0 |1411 3147k| 0 7934M| 531M 57.6M 595M 6611M| 0 1485 814 815
> > 21-08 22:10:52| 26 63 0 0 11 0| 0 0 | 437B 239B| 0 0 |1355 3160k| 0 7934M| 531M 57.6M 595M 6611M| 0 24 94 97
> > 21-08 22:10:53| 26 63 0 0 11 0| 0 0 | 168B 0 | 0 0 |1397 3155k| 0 7934M| 531M 57.6M 595M 6611M| 0 479 285 273
> > 21-08 22:10:54| 26 63 0 0 11 0| 0 4096B| 396B 324B| 0 0 |1346 3154k| 0 7934M| 531M 57.6M 595M 6611M| 0 27 145 145
> > 21-08 22:10:55| 26 63 0 0 11 0| 0 60k| 0 0 | 0 0 |1353 3148k| 0 7934M| 531M 57.6M 595M 6610M| 0 93 117 36
> > 21-08 22:10:56| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1341 3172k| 0 7934M| 531M 57.6M 595M 6610M| 0 158 87 74
> > 21-08 22:10:57| 26 62 1 0 11 0| 0 0 | 42B 60B| 0 0 |1332 3162k| 0 7934M| 531M 57.6M 595M 6610M| 0 56 82 78
> > 21-08 22:10:58| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1334 3178k| 0 7934M| 531M 57.6M 595M 6610M| 0 26 56 56
> > 21-08 22:10:59| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1336 3179k| 0 7934M| 531M 57.6M 595M 6610M| 0 3 33 32
> > 21-08 22:11:00| 26 63 0 0 11 0| 0 24k| 90B 108B| 0 0 |1347 3172k| 0 7934M| 531M 57.6M 595M 6610M| 0 41 73 71
> >
> > I have four logical cores, so 25% corresponds to one core. I don't know if the ~26% user CPU usage has anything to do with this fact or is just a coincidence. The rest is ~63% system and ~11% hardware interrupt. Do these support what you suspect?
>
> The massive increase in context switches does come as a bit of a surprise!
> It does rule out my initial suspicion of lock contention, but then again
> the fact that you have only four CPUs made that pretty unlikely to begin
> with.
>
> 2.4k average context switches in the good case for the full run vs. 3,156k
> for about half of a run in the bad case. That is an increase of more
> than three orders of magnitude!
>
> Yow!!!
>
> Page faults are actually -higher- in the good case. You have about 6.5GB
> free in both cases, so you are not running out of memory. Lots more disk
> writes in the good case, perhaps consistent with its getting more done.
> Networking is negligible in both cases.
>
> Lots of hardware interrupts in the bad case as well. Would you be willing
> to take a look at /proc/interrupts before and after to see which one you
> are getting hit with? (Or whatever interrupt tracking tool you prefer.)
Here are the results.
Good case before:
CPU0 CPU1 CPU2 CPU3
0: 17 0 0 0 IO-APIC-edge timer
1: 356 1 68 4 IO-APIC-edge i8042
8: 0 0 1 0 IO-APIC-edge rtc0
9: 330 14 449 71 IO-APIC-fasteoi acpi
12: 10 108 269 2696 IO-APIC-edge i8042
16: 36 10 111 2 IO-APIC-fasteoi ehci_hcd:usb1
17: 20 3 25 4 IO-APIC-fasteoi mmc0
21: 3 0 34 0 IO-APIC-fasteoi ehci_hcd:usb2
40: 0 1 12 11 PCI-MSI-edge mei_me
41: 10617 173 9959 292 PCI-MSI-edge ahci
42: 862 11 186 26 PCI-MSI-edge xhci_hcd
43: 107 77 27 102 PCI-MSI-edge i915
44: 5322 20 434 22 PCI-MSI-edge iwlwifi
45: 180 0 183 86 PCI-MSI-edge snd_hda_intel
46: 0 3 0 0 PCI-MSI-edge eth0
NMI: 1 0 0 0 Non-maskable interrupts
LOC: 16312 15177 10840 8995 Local timer interrupts
SPU: 0 0 0 0 Spurious interrupts
PMI: 1 0 0 0 Performance monitoring interrupts
IWI: 1160 523 1031 481 IRQ work interrupts
RTR: 3 0 0 0 APIC ICR read retries
RES: 14976 16135 9973 10784 Rescheduling interrupts
CAL: 482 457 151 370 Function call interrupts
TLB: 70 106 352 230 TLB shootdowns
TRM: 0 0 0 0 Thermal event interrupts
THR: 0 0 0 0 Threshold APIC interrupts
MCE: 0 0 0 0 Machine check exceptions
MCP: 2 2 2 2 Machine check polls
ERR: 0
MIS: 0
Good case after:
CPU0 CPU1 CPU2 CPU3
0: 17 0 0 0 IO-APIC-edge timer
1: 367 1 81 4 IO-APIC-edge i8042
8: 0 0 1 0 IO-APIC-edge rtc0
9: 478 14 460 71 IO-APIC-fasteoi acpi
12: 10 108 269 2696 IO-APIC-edge i8042
16: 36 10 111 2 IO-APIC-fasteoi ehci_hcd:usb1
17: 20 3 25 4 IO-APIC-fasteoi mmc0
21: 3 0 34 0 IO-APIC-fasteoi ehci_hcd:usb2
40: 0 1 12 11 PCI-MSI-edge mei_me
41: 16888 173 9959 292 PCI-MSI-edge ahci
42: 1102 11 186 26 PCI-MSI-edge xhci_hcd
43: 107 132 27 136 PCI-MSI-edge i915
44: 6943 20 434 22 PCI-MSI-edge iwlwifi
45: 180 0 183 86 PCI-MSI-edge snd_hda_intel
46: 0 3 0 0 PCI-MSI-edge eth0
NMI: 4 3 3 3 Non-maskable interrupts
LOC: 26845 24780 19025 17746 Local timer interrupts
SPU: 0 0 0 0 Spurious interrupts
PMI: 4 3 3 3 Performance monitoring interrupts
IWI: 1637 751 1287 695 IRQ work interrupts
RTR: 3 0 0 0 APIC ICR read retries
RES: 26511 26673 18791 20194 Rescheduling interrupts
CAL: 510 480 151 370 Function call interrupts
TLB: 361 292 575 461 TLB shootdowns
TRM: 0 0 0 0 Thermal event interrupts
THR: 0 0 0 0 Threshold APIC interrupts
MCE: 0 0 0 0 Machine check exceptions
MCP: 2 2 2 2 Machine check polls
ERR: 0
MIS: 0
Bad case before:
CPU0 CPU1 CPU2 CPU3
0: 17 0 0 0 IO-APIC-edge timer
1: 172 3 78 3 IO-APIC-edge i8042
8: 0 1 0 0 IO-APIC-edge rtc0
9: 1200 148 395 81 IO-APIC-fasteoi acpi
12: 1625 2 348 10 IO-APIC-edge i8042
16: 26 23 115 0 IO-APIC-fasteoi ehci_hcd:usb1
17: 16 3 12 21 IO-APIC-fasteoi mmc0
21: 2 2 33 0 IO-APIC-fasteoi ehci_hcd:usb2
40: 0 0 14 10 PCI-MSI-edge mei_me
41: 15776 374 8497 687 PCI-MSI-edge ahci
42: 1297 829 115 24 PCI-MSI-edge xhci_hcd
43: 103 149 9 212 PCI-MSI-edge i915
44: 13151 101 511 91 PCI-MSI-edge iwlwifi
45: 153 159 0 122 PCI-MSI-edge snd_hda_intel
46: 0 1 1 0 PCI-MSI-edge eth0
NMI: 32 31 31 31 Non-maskable interrupts
LOC: 82504 82732 74172 75985 Local timer interrupts
SPU: 0 0 0 0 Spurious interrupts
PMI: 32 31 31 31 Performance monitoring interrupts
IWI: 17816 16278 13833 13282 IRQ work interrupts
RTR: 3 0 0 0 APIC ICR read retries
RES: 18784 21084 13313 12946 Rescheduling interrupts
CAL: 393 422 306 356 Function call interrupts
TLB: 231 176 235 191 TLB shootdowns
TRM: 0 0 0 0 Thermal event interrupts
THR: 0 0 0 0 Threshold APIC interrupts
MCE: 0 0 0 0 Machine check exceptions
MCP: 3 3 3 3 Machine check polls
ERR: 0
MIS: 0
Bad case after:
CPU0 CPU1 CPU2 CPU3
0: 17 0 0 0 IO-APIC-edge timer
1: 415 3 85 3 IO-APIC-edge i8042
8: 0 1 0 0 IO-APIC-edge rtc0
9: 1277 148 428 81 IO-APIC-fasteoi acpi
12: 1625 2 348 10 IO-APIC-edge i8042
16: 26 23 115 0 IO-APIC-fasteoi ehci_hcd:usb1
17: 16 3 12 21 IO-APIC-fasteoi mmc0
21: 2 2 33 0 IO-APIC-fasteoi ehci_hcd:usb2
40: 0 0 14 10 PCI-MSI-edge mei_me
41: 17814 374 8497 687 PCI-MSI-edge ahci
42: 1567 829 115 24 PCI-MSI-edge xhci_hcd
43: 103 177 9 242 PCI-MSI-edge i915
44: 14956 101 511 91 PCI-MSI-edge iwlwifi
45: 153 159 0 122 PCI-MSI-edge snd_hda_intel
46: 0 1 1 0 PCI-MSI-edge eth0
NMI: 36 35 34 34 Non-maskable interrupts
LOC: 92429 92708 81714 84071 Local timer interrupts
SPU: 0 0 0 0 Spurious interrupts
PMI: 36 35 34 34 Performance monitoring interrupts
IWI: 22594 19658 17439 14257 IRQ work interrupts
RTR: 3 0 0 0 APIC ICR read retries
RES: 21491 24670 14618 14569 Rescheduling interrupts
CAL: 441 439 306 356 Function call interrupts
TLB: 232 181 274 465 TLB shootdowns
TRM: 0 0 0 0 Thermal event interrupts
THR: 0 0 0 0 Threshold APIC interrupts
MCE: 0 0 0 0 Machine check exceptions
MCP: 3 3 3 3 Machine check polls
ERR: 0
MIS: 0
> One hypothesis is that your workload and configuration are interacting
> with RCU_FAST_NO_HZ to force very large numbers of RCU grace periods.
> Could you please check for this by building with CONFIG_RCU_TRACE=y,
> mounting debugfs somewhere and dumping rcu/rcu_sched/rcugp before and
> after each run?
Good case before:
completed=8756 gpnum=8757 age=0 max=21
after:
completed=14686 gpnum=14687 age=0 max=21
Bad case before:
completed=22970 gpnum=22971 age=0 max=21
after:
completed=26110 gpnum=26111 age=0 max=21
The test scenario was the following in both cases (mixed English and pseudo-bash):
reboot, login, start terminal
cd project
rm -r build
cat /proc/interrupts >> somefile ; cat /sys/kernel/debug/rcu/rcu_sched/rcugp >> somefile
scons -j4
wait ~40 sec (good case finished, Ctrl-C in bad case)
cat /proc/interrupts >> somefile ; cat /sys/kernel/debug/rcu/rcu_sched/rcugp >> somefile
I stopped the build in the bad case after about the same time the good
case finished, so extra interrupts and RCU grace periods from a longer
runtime don't skew the results.
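For what it's worth, the scenario above written out as a script — only a
rough sketch: the project directory and output file follow the description
above, the fixed 40-second window is approximate, and the reboot/login steps
are of course manual:

  #!/bin/bash
  out=somefile                 # same output file as in the description
  cd project                   # the project directory
  rm -r build
  cat /proc/interrupts >> "$out"
  cat /sys/kernel/debug/rcu/rcu_sched/rcugp >> "$out"
  scons -j4 &
  pid=$!
  sleep 40                     # the good case finishes around here
  kill "$pid" 2>/dev/null      # stands in for Ctrl-C in the bad case
  wait
  cat /proc/interrupts >> "$out"
  cat /sys/kernel/debug/rcu/rcu_sched/rcugp >> "$out"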
Tibor
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: Unusually high system CPU usage with recent kernels
2013-08-23 13:20 Tibor Billes
@ 2013-08-24 0:18 ` Paul E. McKenney
0 siblings, 0 replies; 28+ messages in thread
From: Paul E. McKenney @ 2013-08-24 0:18 UTC (permalink / raw)
To: Tibor Billes; +Cc: linux-kernel
On Fri, Aug 23, 2013 at 03:20:25PM +0200, Tibor Billes wrote:
> > From: Paul E. McKenney Sent: 08/22/13 12:09 AM
> > On Wed, Aug 21, 2013 at 11:05:51PM +0200, Tibor Billes wrote:
> > > > From: Paul E. McKenney Sent: 08/21/13 09:12 PM
> > > > On Wed, Aug 21, 2013 at 08:14:46PM +0200, Tibor Billes wrote:
> > > > > > From: Paul E. McKenney Sent: 08/20/13 11:43 PM
> > > > > > On Tue, Aug 20, 2013 at 10:52:26PM +0200, Tibor Billes wrote:
> > > > > > > > From: Paul E. McKenney Sent: 08/20/13 04:53 PM
> > > > > > > > On Tue, Aug 20, 2013 at 08:01:28AM +0200, Tibor Billes wrote:
> > > > > > > > > Hi,
> > > > > > > > >
> > > > > > > > > I was using the 3.9.7 stable release and tried to upgrade to the 3.10.x series.
> > > > > > > > > The 3.10.x series was showing unusually high (>75%) system CPU usage in some
> > > > > > > > > situations, making things really slow. The latest stable I tried is 3.10.7.
> > > > > > > > > I also tried 3.11-rc5, they both show this behaviour. This behaviour doesn't
> > > > > > > > > show up when the system is idling, only when doing some CPU intensive work,
> > > > > > > > > like compiling with multiple threads. Compiling with only one thread seems not
> > > > > > > > > to trigger this behaviour.
> > > > > > > > >
> > > > > > > > > To be more precise I did a `perf record -a` while compiling a large C++ program
> > > > > > > > > with scons using 4 threads, the result is appended at the end of this email.
> > > > > > > >
> > > > > > > > New one on me! You are running a mainstream system (x86_64), so I am
> > > > > > > > surprised no one else noticed.
> > > > > > > >
> > > > > > > > Could you please send along your .config file?
> > > > > > >
> > > > > > > Here it is
> > > > > >
> > > > > > Interesting. I don't see RCU stuff all that high on the list, but
> > > > > > the items I do see lead me to suspect RCU_FAST_NO_HZ, which has some
> > > > > > relevance to the otherwise inexplicable group of commits you located
> > > > > > with your bisection. Could you please rerun with CONFIG_RCU_FAST_NO_HZ=n?
> > > > > >
> > > > > > If that helps, there are some things I could try.
> > > > >
> > > > > It did help. I didn't notice anything unusual when running with CONFIG_RCU_FAST_NO_HZ=n.
> > > >
> > > > Interesting. Thank you for trying this -- and we at least have a
> > > > short-term workaround for this problem. I will put a patch together
> > > > for further investigation.
> > >
> > > I don't specifically need this config option so I'm fine without it in
> > > the long term, but I guess it's not supposed to behave like that.
> >
> > OK, good, we have a long-term workaround for your specific case,
> > even better. ;-)
> >
> > But yes, there are situations where RCU_FAST_NO_HZ needs to work
> > a bit better. I hope you will bear with me with a bit more
> > testing...
>
> Don't worry, I will :) Unfortunately I didn't have time yesterday and I
> won't have time today either. But I'll do what you asked tomorrow and I'll
> send you the results.
Not a problem! I did find one issue that -might- help, please see
the patch below. (Run with CONFIG_RCU_FAST_NO_HZ=y.)
Please let me know how it goes!
Thanx, Paul
------------------------------------------------------------------------
rcu: Remove redundant code from rcu_cleanup_after_idle()
The rcu_try_advance_all_cbs() function returns a bool saying whether or
not there are callbacks ready to invoke, but rcu_cleanup_after_idle()
rechecks this regardless. This commit therefore uses the value returned
by rcu_try_advance_all_cbs() instead of making rcu_cleanup_after_idle()
do this recheck.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
index 01676b7..a538e73 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcutree_plugin.h
@@ -1768,17 +1768,11 @@ static void rcu_prepare_for_idle(int cpu)
  */
 static void rcu_cleanup_after_idle(int cpu)
 {
-	struct rcu_data *rdp;
-	struct rcu_state *rsp;
-
 	if (rcu_is_nocb_cpu(cpu))
 		return;
-	rcu_try_advance_all_cbs();
-	for_each_rcu_flavor(rsp) {
-		rdp = per_cpu_ptr(rsp->rda, cpu);
-		if (cpu_has_callbacks_ready_to_invoke(rdp))
-			invoke_rcu_core();
-	}
+	if (rcu_try_advance_all_cbs())
+		invoke_rcu_core();
 }
 
 /*
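(For completeness, one way to apply and test this — a sketch only; the
source-tree path and patch file name are placeholders:

  cd ~/src/linux-3.11-rc6                     # whichever tree is being tested
  patch -p1 < ~/rcu-cleanup-after-idle.patch  # the patch quoted above, saved to a file
  scripts/config --enable RCU_FAST_NO_HZ      # keep the option on so the fix is exercised
  make olddefconfig && make -j4
)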
^ permalink raw reply related [flat|nested] 28+ messages in thread
* Re: Unusually high system CPU usage with recent kernels
@ 2013-08-23 13:20 Tibor Billes
2013-08-24 0:18 ` Paul E. McKenney
0 siblings, 1 reply; 28+ messages in thread
From: Tibor Billes @ 2013-08-23 13:20 UTC (permalink / raw)
To: paulmck; +Cc: linux-kernel
> From: Paul E. McKenney Sent: 08/22/13 12:09 AM
> On Wed, Aug 21, 2013 at 11:05:51PM +0200, Tibor Billes wrote:
> > > From: Paul E. McKenney Sent: 08/21/13 09:12 PM
> > > On Wed, Aug 21, 2013 at 08:14:46PM +0200, Tibor Billes wrote:
> > > > > From: Paul E. McKenney Sent: 08/20/13 11:43 PM
> > > > > On Tue, Aug 20, 2013 at 10:52:26PM +0200, Tibor Billes wrote:
> > > > > > > From: Paul E. McKenney Sent: 08/20/13 04:53 PM
> > > > > > > On Tue, Aug 20, 2013 at 08:01:28AM +0200, Tibor Billes wrote:
> > > > > > > > Hi,
> > > > > > > >
> > > > > > > > I was using the 3.9.7 stable release and tried to upgrade to the 3.10.x series.
> > > > > > > > The 3.10.x series was showing unusually high (>75%) system CPU usage in some
> > > > > > > > situations, making things really slow. The latest stable I tried is 3.10.7.
> > > > > > > > I also tried 3.11-rc5, they both show this behaviour. This behaviour doesn't
> > > > > > > > show up when the system is idling, only when doing some CPU intensive work,
> > > > > > > > like compiling with multiple threads. Compiling with only one thread seems not
> > > > > > > > to trigger this behaviour.
> > > > > > > >
> > > > > > > > To be more precise I did a `perf record -a` while compiling a large C++ program
> > > > > > > > with scons using 4 threads, the result is appended at the end of this email.
> > > > > > >
> > > > > > > New one on me! You are running a mainstream system (x86_64), so I am
> > > > > > > surprised no one else noticed.
> > > > > > >
> > > > > > > Could you please send along your .config file?
> > > > > >
> > > > > > Here it is
> > > > >
> > > > > Interesting. I don't see RCU stuff all that high on the list, but
> > > > > the items I do see lead me to suspect RCU_FAST_NO_HZ, which has some
> > > > > relevance to the otherwise inexplicable group of commits you located
> > > > > with your bisection. Could you please rerun with CONFIG_RCU_FAST_NO_HZ=n?
> > > > >
> > > > > If that helps, there are some things I could try.
> > > >
> > > > It did help. I didn't notice anything unusual when running with CONFIG_RCU_FAST_NO_HZ=n.
> > >
> > > Interesting. Thank you for trying this -- and we at least have a
> > > short-term workaround for this problem. I will put a patch together
> > > for further investigation.
> >
> > I don't specifically need this config option so I'm fine without it in
> > the long term, but I guess it's not supposed to behave like that.
>
> OK, good, we have a long-term workaround for your specific case,
> even better. ;-)
>
> But yes, there are situations where RCU_FAST_NO_HZ needs to work
> a bit better. I hope you will bear with me with a bit more
> testing...
Don't worry, I will :) Unfortunately I didn't have time yesterday and I
won't have time today either. But I'll do what you asked tomorrow and I'll
send you the results.
Tibor
> > > In the meantime, could you please tell me how you were measuring
> > > performance for your kernel builds? Wall-clock time required to complete
> > > one build? Number of builds completed per unit time? Something else?
> >
> > Actually, I wasn't all this sophisticated. I have a system monitor
> > applet on my top panel (using MATE, Linux Mint), four little graphs,
> > one of which shows CPU usage. Different colors indicate different kinds
> > of CPU usage. Blue shows user space usage, red shows system usage, and
> > two more for nice and iowait. During a normal compile it's almost
> > completely filled with blue user space CPU usage, only the top few
> > pixels show some iowait and system usage. With CONFIG_RCU_FAST_NO_HZ
> > set, about 3/4 of the graph was red system CPU usage, the rest was
> > blue. So I just looked for a pile of red on my graphs when I tested
> > different kernel builds. But compile speed was also so horrible that I
> > couldn't wait for the build to finish. Even the UI became unresponsive.
>
> We have been having problems with CPU accounting, but this one looks
> quite real.
>
> > Now I did some measuring. In the normal case a compile finished in 36
> > seconds and compiled 315 object files. Here are some output lines from
> > dstat -tasm --vm during compile:
> > ----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
> > time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
> > 21-08 21:48:05| 91 8 2 0 0 0| 0 5852k| 0 0 | 0 0 |1413 1772 | 0 7934M| 581M 58.0M 602M 6553M| 0 71k 46k 54k
> > 21-08 21:48:06| 93 6 1 0 0 0| 0 2064k| 137B 131B| 0 0 |1356 1650 | 0 7934M| 649M 58.0M 604M 6483M| 0 72k 47k 28k
> > 21-08 21:48:07| 86 11 4 0 0 0| 0 5872k| 0 0 | 0 0 |2000 2991 | 0 7934M| 577M 58.0M 627M 6531M| 0 99k 67k 79k
> > 21-08 21:48:08| 87 9 3 0 0 0| 0 2840k| 0 0 | 0 0 |2558 4164 | 0 7934M| 597M 58.0M 632M 6507M| 0 96k 57k 51k
> > 21-08 21:48:09| 93 7 1 0 0 0| 0 3032k| 0 0 | 0 0 |1329 1512 | 0 7934M| 641M 58.0M 626M 6469M| 0 61k 48k 39k
> > 21-08 21:48:10| 93 6 0 0 0 0| 0 4984k| 0 0 | 0 0 |1160 1146 | 0 7934M| 572M 58.0M 628M 6536M| 0 50k 40k 57k
> > 21-08 21:48:11| 86 9 6 0 0 0| 0 2520k| 0 0 | 0 0 |2947 4760 | 0 7934M| 605M 58.0M 631M 6500M| 0 103k 55k 45k
> > 21-08 21:48:12| 90 8 2 0 0 0| 0 2840k| 0 0 | 0 0 |2674 4179 | 0 7934M| 671M 58.0M 635M 6431M| 0 84k 59k 42k
> > 21-08 21:48:13| 90 9 1 0 0 0| 0 4656k| 0 0 | 0 0 |1223 1410 | 0 7934M| 643M 58.0M 638M 6455M| 0 90k 62k 68k
> > 21-08 21:48:14| 91 8 1 0 0 0| 0 3572k| 0 0 | 0 0 |1432 1828 | 0 7934M| 647M 58.0M 641M 6447M| 0 81k 59k 57k
> > 21-08 21:48:15| 91 8 1 0 0 0| 0 5116k| 116B 0 | 0 0 |1194 1295 | 0 7934M| 605M 58.0M 644M 6487M| 0 69k 54k 64k
> > 21-08 21:48:16| 87 10 3 0 0 0| 0 5140k| 0 0 | 0 0 |1761 2586 | 0 7934M| 584M 58.0M 650M 6502M| 0 105k 64k 68k
> >
> > The abnormal case compiled only 182 object files in 6 and a half minutes,
> > then I stopped it. The same dstat output for this case:
> > ----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
> > time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
> > 21-08 22:10:49| 27 62 0 0 11 0| 0 0 | 210B 0 | 0 0 |1414 3137k| 0 7934M| 531M 57.6M 595M 6611M| 0 1628 1250 322
> > 21-08 22:10:50| 25 60 4 0 11 0| 0 88k| 126B 0 | 0 0 |1337 3110k| 0 7934M| 531M 57.6M 595M 6611M| 0 91 128 115
> > 21-08 22:10:51| 26 63 0 0 11 0| 0 184k| 294B 0 | 0 0 |1411 3147k| 0 7934M| 531M 57.6M 595M 6611M| 0 1485 814 815
> > 21-08 22:10:52| 26 63 0 0 11 0| 0 0 | 437B 239B| 0 0 |1355 3160k| 0 7934M| 531M 57.6M 595M 6611M| 0 24 94 97
> > 21-08 22:10:53| 26 63 0 0 11 0| 0 0 | 168B 0 | 0 0 |1397 3155k| 0 7934M| 531M 57.6M 595M 6611M| 0 479 285 273
> > 21-08 22:10:54| 26 63 0 0 11 0| 0 4096B| 396B 324B| 0 0 |1346 3154k| 0 7934M| 531M 57.6M 595M 6611M| 0 27 145 145
> > 21-08 22:10:55| 26 63 0 0 11 0| 0 60k| 0 0 | 0 0 |1353 3148k| 0 7934M| 531M 57.6M 595M 6610M| 0 93 117 36
> > 21-08 22:10:56| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1341 3172k| 0 7934M| 531M 57.6M 595M 6610M| 0 158 87 74
> > 21-08 22:10:57| 26 62 1 0 11 0| 0 0 | 42B 60B| 0 0 |1332 3162k| 0 7934M| 531M 57.6M 595M 6610M| 0 56 82 78
> > 21-08 22:10:58| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1334 3178k| 0 7934M| 531M 57.6M 595M 6610M| 0 26 56 56
> > 21-08 22:10:59| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1336 3179k| 0 7934M| 531M 57.6M 595M 6610M| 0 3 33 32
> > 21-08 22:11:00| 26 63 0 0 11 0| 0 24k| 90B 108B| 0 0 |1347 3172k| 0 7934M| 531M 57.6M 595M 6610M| 0 41 73 71
> >
> > I have four logical cores, so 25% corresponds to one core. I don't know if the ~26% user CPU usage has anything to do with this fact or is just a coincidence. The rest is ~63% system and ~11% hardware interrupt. Do these support what you suspect?
>
> The massive increase in context switches does come as a bit of a surprise!
> It does rule out my initial suspicion of lock contention, but then again
> the fact that you have only four CPUs made that pretty unlikely to begin
> with.
>
> 2.4k average context switches in the good case for the full run vs. 3,156k
> for about half of a run in the bad case. That is an increase of more
> than three orders of magnitude!
>
> Yow!!!
>
> Page faults are actually -higher- in the good case. You have about 6.5GB
> free in both cases, so you are not running out of memory. Lots more disk
> writes in the good case, perhaps consistent with its getting more done.
> Networking is negligible in both cases.
>
> Lots of hardware interrupts in the bad case as well. Would you be willing
> to take a look at /proc/interrupts before and after to see which one you
> are getting hit with? (Or whatever interrupt tracking tool you prefer.)
>
> One hypothesis is that your workload and configuration are interacting
> with RCU_FAST_NO_HZ to force very large numbers of RCU grace periods.
> Could you please check for this by building with CONFIG_RCU_TRACE=y,
> mounting debugfs somewhere and dumping rcu/rcu_sched/rcugp before and
> after each run?
>
> Thanx, Paul
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: Unusually high system CPU usage with recent kernels
2013-08-21 21:05 Tibor Billes
@ 2013-08-21 22:09 ` Paul E. McKenney
0 siblings, 0 replies; 28+ messages in thread
From: Paul E. McKenney @ 2013-08-21 22:09 UTC (permalink / raw)
To: Tibor Billes; +Cc: linux-kernel
On Wed, Aug 21, 2013 at 11:05:51PM +0200, Tibor Billes wrote:
> > From: Paul E. McKenney Sent: 08/21/13 09:12 PM
> > On Wed, Aug 21, 2013 at 08:14:46PM +0200, Tibor Billes wrote:
> > > > From: Paul E. McKenney Sent: 08/20/13 11:43 PM
> > > > On Tue, Aug 20, 2013 at 10:52:26PM +0200, Tibor Billes wrote:
> > > > > > From: Paul E. McKenney Sent: 08/20/13 04:53 PM
> > > > > > On Tue, Aug 20, 2013 at 08:01:28AM +0200, Tibor Billes wrote:
> > > > > > > Hi,
> > > > > > >
> > > > > > > I was using the 3.9.7 stable release and tried to upgrade to the 3.10.x series.
> > > > > > > The 3.10.x series was showing unusually high (>75%) system CPU usage in some
> > > > > > > situations, making things really slow. The latest stable I tried is 3.10.7.
> > > > > > > I also tried 3.11-rc5, they both show this behaviour. This behaviour doesn't
> > > > > > > show up when the system is idling, only when doing some CPU intensive work,
> > > > > > > like compiling with multiple threads. Compiling with only one thread seems not
> > > > > > > to trigger this behaviour.
> > > > > > >
> > > > > > > To be more precise I did a `perf record -a` while compiling a large C++ program
> > > > > > > with scons using 4 threads, the result is appended at the end of this email.
> > > > > >
> > > > > > New one on me! You are running a mainstream system (x86_64), so I am
> > > > > > surprised no one else noticed.
> > > > > >
> > > > > > Could you please send along your .config file?
> > > > >
> > > > > Here it is
> > > >
> > > > Interesting. I don't see RCU stuff all that high on the list, but
> > > > the items I do see lead me to suspect RCU_FAST_NO_HZ, which has some
> > > > relevance to the otherwise inexplicable group of commits you located
> > > > with your bisection. Could you please rerun with CONFIG_RCU_FAST_NO_HZ=n?
> > > >
> > > > If that helps, there are some things I could try.
> > >
> > > It did help. I didn't notice anything unusual when running with CONFIG_RCU_FAST_NO_HZ=n.
> >
> > Interesting. Thank you for trying this -- and we at least have a
> > short-term workaround for this problem. I will put a patch together
> > for further investigation.
>
> I don't specifically need this config option so I'm fine without it in
> the long term, but I guess it's not supposed to behave like that.
OK, good, we have a long-term workaround for your specific case,
even better. ;-)
But yes, there are situations where RCU_FAST_NO_HZ needs to work
a bit better. I hope you will bear with me with a bit more
testing...
> > In the meantime, could you please tell me how you were measuring
> > performance for your kernel builds? Wall-clock time required to complete
> > one build? Number of builds completed per unit time? Something else?
>
> Actually, I wasn't all this sophisticated. I have a system monitor
> applet on my top panel (using MATE, Linux Mint), four little graphs,
> one of which shows CPU usage. Different colors indicate different kinds
> of CPU usage. Blue shows user space usage, red shows system usage, and
> two more for nice and iowait. During a normal compile it's almost
> completely filled with blue user space CPU usage, only the top few
> pixels show some iowait and system usage. With CONFIG_RCU_FAST_NO_HZ
> set, about 3/4 of the graph was red system CPU usage, the rest was
> blue. So I just looked for a pile of red on my graphs when I tested
> different kernel builds. But compile speed was also so horrible that I
> couldn't wait for the build to finish. Even the UI became unresponsive.
We have been having problems with CPU accounting, but this one looks
quite real.
> Now I did some measuring. In the normal case a compile finished in 36
> seconds and compiled 315 object files. Here are some output lines from
> dstat -tasm --vm during compile:
> ----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
> time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
> 21-08 21:48:05| 91 8 2 0 0 0| 0 5852k| 0 0 | 0 0 |1413 1772 | 0 7934M| 581M 58.0M 602M 6553M| 0 71k 46k 54k
> 21-08 21:48:06| 93 6 1 0 0 0| 0 2064k| 137B 131B| 0 0 |1356 1650 | 0 7934M| 649M 58.0M 604M 6483M| 0 72k 47k 28k
> 21-08 21:48:07| 86 11 4 0 0 0| 0 5872k| 0 0 | 0 0 |2000 2991 | 0 7934M| 577M 58.0M 627M 6531M| 0 99k 67k 79k
> 21-08 21:48:08| 87 9 3 0 0 0| 0 2840k| 0 0 | 0 0 |2558 4164 | 0 7934M| 597M 58.0M 632M 6507M| 0 96k 57k 51k
> 21-08 21:48:09| 93 7 1 0 0 0| 0 3032k| 0 0 | 0 0 |1329 1512 | 0 7934M| 641M 58.0M 626M 6469M| 0 61k 48k 39k
> 21-08 21:48:10| 93 6 0 0 0 0| 0 4984k| 0 0 | 0 0 |1160 1146 | 0 7934M| 572M 58.0M 628M 6536M| 0 50k 40k 57k
> 21-08 21:48:11| 86 9 6 0 0 0| 0 2520k| 0 0 | 0 0 |2947 4760 | 0 7934M| 605M 58.0M 631M 6500M| 0 103k 55k 45k
> 21-08 21:48:12| 90 8 2 0 0 0| 0 2840k| 0 0 | 0 0 |2674 4179 | 0 7934M| 671M 58.0M 635M 6431M| 0 84k 59k 42k
> 21-08 21:48:13| 90 9 1 0 0 0| 0 4656k| 0 0 | 0 0 |1223 1410 | 0 7934M| 643M 58.0M 638M 6455M| 0 90k 62k 68k
> 21-08 21:48:14| 91 8 1 0 0 0| 0 3572k| 0 0 | 0 0 |1432 1828 | 0 7934M| 647M 58.0M 641M 6447M| 0 81k 59k 57k
> 21-08 21:48:15| 91 8 1 0 0 0| 0 5116k| 116B 0 | 0 0 |1194 1295 | 0 7934M| 605M 58.0M 644M 6487M| 0 69k 54k 64k
> 21-08 21:48:16| 87 10 3 0 0 0| 0 5140k| 0 0 | 0 0 |1761 2586 | 0 7934M| 584M 58.0M 650M 6502M| 0 105k 64k 68k
>
> The abnormal case compiled only 182 object files in 6 and a half minutes,
> then I stopped it. The same dstat output for this case:
> ----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
> time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
> 21-08 22:10:49| 27 62 0 0 11 0| 0 0 | 210B 0 | 0 0 |1414 3137k| 0 7934M| 531M 57.6M 595M 6611M| 0 1628 1250 322
> 21-08 22:10:50| 25 60 4 0 11 0| 0 88k| 126B 0 | 0 0 |1337 3110k| 0 7934M| 531M 57.6M 595M 6611M| 0 91 128 115
> 21-08 22:10:51| 26 63 0 0 11 0| 0 184k| 294B 0 | 0 0 |1411 3147k| 0 7934M| 531M 57.6M 595M 6611M| 0 1485 814 815
> 21-08 22:10:52| 26 63 0 0 11 0| 0 0 | 437B 239B| 0 0 |1355 3160k| 0 7934M| 531M 57.6M 595M 6611M| 0 24 94 97
> 21-08 22:10:53| 26 63 0 0 11 0| 0 0 | 168B 0 | 0 0 |1397 3155k| 0 7934M| 531M 57.6M 595M 6611M| 0 479 285 273
> 21-08 22:10:54| 26 63 0 0 11 0| 0 4096B| 396B 324B| 0 0 |1346 3154k| 0 7934M| 531M 57.6M 595M 6611M| 0 27 145 145
> 21-08 22:10:55| 26 63 0 0 11 0| 0 60k| 0 0 | 0 0 |1353 3148k| 0 7934M| 531M 57.6M 595M 6610M| 0 93 117 36
> 21-08 22:10:56| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1341 3172k| 0 7934M| 531M 57.6M 595M 6610M| 0 158 87 74
> 21-08 22:10:57| 26 62 1 0 11 0| 0 0 | 42B 60B| 0 0 |1332 3162k| 0 7934M| 531M 57.6M 595M 6610M| 0 56 82 78
> 21-08 22:10:58| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1334 3178k| 0 7934M| 531M 57.6M 595M 6610M| 0 26 56 56
> 21-08 22:10:59| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1336 3179k| 0 7934M| 531M 57.6M 595M 6610M| 0 3 33 32
> 21-08 22:11:00| 26 63 0 0 11 0| 0 24k| 90B 108B| 0 0 |1347 3172k| 0 7934M| 531M 57.6M 595M 6610M| 0 41 73 71
>
> I have four logical cores, so 25% corresponds to one core. I don't know if the ~26% user CPU usage has anything to do with this fact or is just a coincidence. The rest is ~63% system and ~11% hardware interrupt. Do these support what you suspect?
The massive increase in context switches does come as a bit of a surprise!
It does rule out my initial suspicion of lock contention, but then again
the fact that you have only four CPUs made that pretty unlikely to begin
with.
2.4k average context switches in the good case for the full run vs. 3,156k
for about half of a run in the bad case. That is an increase of more
than three orders of magnitude!
Yow!!!
Page faults are actually -higher- in the good case. You have about 6.5GB
free in both cases, so you are not running out of memory. Lots more disk
writes in the good case, perhaps consistent with its getting more done.
Networking is negligible in both cases.
Lots of hardware interrupts in the bad case as well. Would you be willing
to take a look at /proc/interrupts before and after to see which one you
are getting hit with? (Or whatever interrupt tracking tool you prefer.)
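(A low-tech way to capture that, as a sketch — the file names are placeholders:

  cat /proc/interrupts > irq.before
  # ... run the build ...
  cat /proc/interrupts > irq.after
  diff -u irq.before irq.after   # every interrupt source whose counts changed shows up here
)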
One hypothesis is that your workload and configuration are interacting
with RCU_FAST_NO_HZ to force very large numbers of RCU grace periods.
Could you please check for this by building with CONFIG_RCU_TRACE=y,
mounting debugfs somewhere and dumping rcu/rcu_sched/rcugp before and
after each run?
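(Concretely, something along these lines — a sketch; the mount point below is
the conventional one, but any debugfs mount location works:

  mount -t debugfs none /sys/kernel/debug 2>/dev/null   # may already be mounted
  cat /sys/kernel/debug/rcu/rcu_sched/rcugp             # before the run
  # ... run the workload ...
  cat /sys/kernel/debug/rcu/rcu_sched/rcugp             # after the run
)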
Thanx, Paul
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: Unusually high system CPU usage with recent kernels
@ 2013-08-21 21:05 Tibor Billes
2013-08-21 22:09 ` Paul E. McKenney
0 siblings, 1 reply; 28+ messages in thread
From: Tibor Billes @ 2013-08-21 21:05 UTC (permalink / raw)
To: paulmck; +Cc: linux-kernel
> From: Paul E. McKenney Sent: 08/21/13 09:12 PM
> On Wed, Aug 21, 2013 at 08:14:46PM +0200, Tibor Billes wrote:
> > > From: Paul E. McKenney Sent: 08/20/13 11:43 PM
> > > On Tue, Aug 20, 2013 at 10:52:26PM +0200, Tibor Billes wrote:
> > > > > From: Paul E. McKenney Sent: 08/20/13 04:53 PM
> > > > > On Tue, Aug 20, 2013 at 08:01:28AM +0200, Tibor Billes wrote:
> > > > > > Hi,
> > > > > >
> > > > > > I was using the 3.9.7 stable release and tried to upgrade to the 3.10.x series.
> > > > > > The 3.10.x series was showing unusually high (>75%) system CPU usage in some
> > > > > > situations, making things really slow. The latest stable I tried is 3.10.7.
> > > > > > I also tried 3.11-rc5, they both show this behaviour. This behaviour doesn't
> > > > > > show up when the system is idling, only when doing some CPU intensive work,
> > > > > > like compiling with multiple threads. Compiling with only one thread seems not
> > > > > > to trigger this behaviour.
> > > > > >
> > > > > > To be more precise I did a `perf record -a` while compiling a large C++ program
> > > > > > with scons using 4 threads, the result is appended at the end of this email.
> > > > >
> > > > > New one on me! You are running a mainstream system (x86_64), so I am
> > > > > surprised no one else noticed.
> > > > >
> > > > > Could you please send along your .config file?
> > > >
> > > > Here it is
> > >
> > > Interesting. I don't see RCU stuff all that high on the list, but
> > > the items I do see lead me to suspect RCU_FAST_NO_HZ, which has some
> > > relevance to the otherwise inexplicable group of commits you located
> > > with your bisection. Could you please rerun with CONFIG_RCU_FAST_NO_HZ=n?
> > >
> > > If that helps, there are some things I could try.
> >
> > It did help. I didn't notice anything unusual when running with CONFIG_RCU_FAST_NO_HZ=n.
>
> Interesting. Thank you for trying this -- and we at least have a
> short-term workaround for this problem. I will put a patch together
> for further investigation.
I don't specifically need this config option so I'm fine without it in
the long term, but I guess it's not supposed to behave like that.
> In the meantime, could you please tell me how you were measuring
> performance for your kernel builds? Wall-clock time required to complete
> one build? Number of builds completed per unit time? Something else?
Actually, I wasn't all this sophisticated. I have a system monitor
applet on my top panel (using MATE, Linux Mint), four little graphs,
one of which shows CPU usage. Different colors indicate different kinds
of CPU usage. Blue shows user space usage, red shows system usage, and
two more for nice and iowait. During a normal compile it's almost
completely filled with blue user space CPU usage, only the top few
pixels show some iowait and system usage. With CONFIG_RCU_FAST_NO_HZ
set, about 3/4 of the graph was red system CPU usage, the rest was
blue. So I just looked for a pile of red on my graphs when I tested
different kernel builds. But compile speed was also so horrible that I
couldn't wait for the build to finish. Even the UI became unresponsive.
Now I did some measuring. In the normal case a compile finished in 36
seconds and compiled 315 object files. Here are some output lines from
dstat -tasm --vm during compile:
----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
21-08 21:48:05| 91 8 2 0 0 0| 0 5852k| 0 0 | 0 0 |1413 1772 | 0 7934M| 581M 58.0M 602M 6553M| 0 71k 46k 54k
21-08 21:48:06| 93 6 1 0 0 0| 0 2064k| 137B 131B| 0 0 |1356 1650 | 0 7934M| 649M 58.0M 604M 6483M| 0 72k 47k 28k
21-08 21:48:07| 86 11 4 0 0 0| 0 5872k| 0 0 | 0 0 |2000 2991 | 0 7934M| 577M 58.0M 627M 6531M| 0 99k 67k 79k
21-08 21:48:08| 87 9 3 0 0 0| 0 2840k| 0 0 | 0 0 |2558 4164 | 0 7934M| 597M 58.0M 632M 6507M| 0 96k 57k 51k
21-08 21:48:09| 93 7 1 0 0 0| 0 3032k| 0 0 | 0 0 |1329 1512 | 0 7934M| 641M 58.0M 626M 6469M| 0 61k 48k 39k
21-08 21:48:10| 93 6 0 0 0 0| 0 4984k| 0 0 | 0 0 |1160 1146 | 0 7934M| 572M 58.0M 628M 6536M| 0 50k 40k 57k
21-08 21:48:11| 86 9 6 0 0 0| 0 2520k| 0 0 | 0 0 |2947 4760 | 0 7934M| 605M 58.0M 631M 6500M| 0 103k 55k 45k
21-08 21:48:12| 90 8 2 0 0 0| 0 2840k| 0 0 | 0 0 |2674 4179 | 0 7934M| 671M 58.0M 635M 6431M| 0 84k 59k 42k
21-08 21:48:13| 90 9 1 0 0 0| 0 4656k| 0 0 | 0 0 |1223 1410 | 0 7934M| 643M 58.0M 638M 6455M| 0 90k 62k 68k
21-08 21:48:14| 91 8 1 0 0 0| 0 3572k| 0 0 | 0 0 |1432 1828 | 0 7934M| 647M 58.0M 641M 6447M| 0 81k 59k 57k
21-08 21:48:15| 91 8 1 0 0 0| 0 5116k| 116B 0 | 0 0 |1194 1295 | 0 7934M| 605M 58.0M 644M 6487M| 0 69k 54k 64k
21-08 21:48:16| 87 10 3 0 0 0| 0 5140k| 0 0 | 0 0 |1761 2586 | 0 7934M| 584M 58.0M 650M 6502M| 0 105k 64k 68k
The abnormal case compiled only 182 object files in 6 and a half minutes,
then I stopped it. The same dstat output for this case:
----system---- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system-- ----swap--- ------memory-usage----- -----virtual-memory----
time |usr sys idl wai hiq siq| read writ| recv send| in out | int csw | used free| used buff cach free|majpf minpf alloc free
21-08 22:10:49| 27 62 0 0 11 0| 0 0 | 210B 0 | 0 0 |1414 3137k| 0 7934M| 531M 57.6M 595M 6611M| 0 1628 1250 322
21-08 22:10:50| 25 60 4 0 11 0| 0 88k| 126B 0 | 0 0 |1337 3110k| 0 7934M| 531M 57.6M 595M 6611M| 0 91 128 115
21-08 22:10:51| 26 63 0 0 11 0| 0 184k| 294B 0 | 0 0 |1411 3147k| 0 7934M| 531M 57.6M 595M 6611M| 0 1485 814 815
21-08 22:10:52| 26 63 0 0 11 0| 0 0 | 437B 239B| 0 0 |1355 3160k| 0 7934M| 531M 57.6M 595M 6611M| 0 24 94 97
21-08 22:10:53| 26 63 0 0 11 0| 0 0 | 168B 0 | 0 0 |1397 3155k| 0 7934M| 531M 57.6M 595M 6611M| 0 479 285 273
21-08 22:10:54| 26 63 0 0 11 0| 0 4096B| 396B 324B| 0 0 |1346 3154k| 0 7934M| 531M 57.6M 595M 6611M| 0 27 145 145
21-08 22:10:55| 26 63 0 0 11 0| 0 60k| 0 0 | 0 0 |1353 3148k| 0 7934M| 531M 57.6M 595M 6610M| 0 93 117 36
21-08 22:10:56| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1341 3172k| 0 7934M| 531M 57.6M 595M 6610M| 0 158 87 74
21-08 22:10:57| 26 62 1 0 11 0| 0 0 | 42B 60B| 0 0 |1332 3162k| 0 7934M| 531M 57.6M 595M 6610M| 0 56 82 78
21-08 22:10:58| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1334 3178k| 0 7934M| 531M 57.6M 595M 6610M| 0 26 56 56
21-08 22:10:59| 26 63 0 0 11 0| 0 0 | 0 0 | 0 0 |1336 3179k| 0 7934M| 531M 57.6M 595M 6610M| 0 3 33 32
21-08 22:11:00| 26 63 0 0 11 0| 0 24k| 90B 108B| 0 0 |1347 3172k| 0 7934M| 531M 57.6M 595M 6610M| 0 41 73 71
I have four logical cores, so 25% corresponds to one core. I don't know if the ~26% user CPU usage has anything to do with this fact or is just a coincidence. The rest is ~63% system and ~11% hardware interrupt. Do these support what you suspect?
Tibor
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: Unusually high system CPU usage with recent kernels
2013-08-21 18:14 Tibor Billes
@ 2013-08-21 19:12 ` Paul E. McKenney
0 siblings, 0 replies; 28+ messages in thread
From: Paul E. McKenney @ 2013-08-21 19:12 UTC (permalink / raw)
To: Tibor Billes; +Cc: linux-kernel
On Wed, Aug 21, 2013 at 08:14:46PM +0200, Tibor Billes wrote:
> > From: Paul E. McKenney Sent: 08/20/13 11:43 PM
> > On Tue, Aug 20, 2013 at 10:52:26PM +0200, Tibor Billes wrote:
> > > > From: Paul E. McKenney Sent: 08/20/13 04:53 PM
> > > > On Tue, Aug 20, 2013 at 08:01:28AM +0200, Tibor Billes wrote:
> > > > > Hi,
> > > > >
> > > > > I was using the 3.9.7 stable release and tried to upgrade to the 3.10.x series.
> > > > > The 3.10.x series was showing unusually high (>75%) system CPU usage in some
> > > > > situations, making things really slow. The latest stable I tried is 3.10.7.
> > > > > I also tried 3.11-rc5, they both show this behaviour. This behaviour doesn't
> > > > > show up when the system is idling, only when doing some CPU intensive work,
> > > > > like compiling with multiple threads. Compiling with only one thread seems not
> > > > > to trigger this behaviour.
> > > > >
> > > > > To be more precise I did a `perf record -a` while compiling a large C++ program
> > > > > with scons using 4 threads, the result is appended at the end of this email.
> > > >
> > > > New one on me! You are running a mainstream system (x86_64), so I am
> > > > surprised no one else noticed.
> > > >
> > > > Could you please send along your .config file?
> > >
> > > Here it is
> >
> > Interesting. I don't see RCU stuff all that high on the list, but
> > the items I do see lead me to suspect RCU_FAST_NO_HZ, which has some
> > relevance to the otherwise inexplicable group of commits you located
> > with your bisection. Could you please rerun with CONFIG_RCU_FAST_NO_HZ=n?
> >
> > If that helps, there are some things I could try.
>
> It did help. I didn't notice anything unusual when running with CONFIG_RCU_FAST_NO_HZ=n.
Interesting. Thank you for trying this -- and we at least have a
short-term workaround for this problem. I will put a patch together
for further investigation.
In the meantime, could you please tell me how you were measuring
performance for your kernel builds? Wall-clock time required to complete
one build? Number of builds completed per unit time? Something else?
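(For example, wall-clock time for a single build could be captured with
something like the following — a sketch, using the same scons invocation as
before:

  /usr/bin/time -v scons -j4   # reports elapsed wall-clock time, CPU time, and context switches
)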
Thanx, Paul
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: Unusually high system CPU usage with recent kernels
@ 2013-08-21 18:14 Tibor Billes
2013-08-21 19:12 ` Paul E. McKenney
0 siblings, 1 reply; 28+ messages in thread
From: Tibor Billes @ 2013-08-21 18:14 UTC (permalink / raw)
To: paulmck; +Cc: linux-kernel
> From: Paul E. McKenney Sent: 08/20/13 11:43 PM
> On Tue, Aug 20, 2013 at 10:52:26PM +0200, Tibor Billes wrote:
> > > From: Paul E. McKenney Sent: 08/20/13 04:53 PM
> > > On Tue, Aug 20, 2013 at 08:01:28AM +0200, Tibor Billes wrote:
> > > > Hi,
> > > >
> > > > I was using the 3.9.7 stable release and tried to upgrade to the 3.10.x series.
> > > > The 3.10.x series was showing unusually high (>75%) system CPU usage in some
> > > > situations, making things really slow. The latest stable I tried is 3.10.7.
> > > > I also tried 3.11-rc5, they both show this behaviour. This behaviour doesn't
> > > > show up when the system is idling, only when doing some CPU intensive work,
> > > > like compiling with multiple threads. Compiling with only one thread seems not
> > > > to trigger this behaviour.
> > > >
> > > > To be more precise I did a `perf record -a` while compiling a large C++ program
> > > > with scons using 4 threads, the result is appended at the end of this email.
> > >
> > > New one on me! You are running a mainstream system (x86_64), so I am
> > > surprised no one else noticed.
> > >
> > > Could you please send along your .config file?
> >
> > Here it is
>
> Interesting. I don't see RCU stuff all that high on the list, but
> the items I do see lead me to suspect RCU_FAST_NO_HZ, which has some
> relevance to the otherwise inexplicable group of commits you located
> with your bisection. Could you please rerun with CONFIG_RCU_FAST_NO_HZ=n?
>
> If that helps, there are some things I could try.
It did help. I didn't notice anything unusual when running with CONFIG_RCU_FAST_NO_HZ=n.
Tibor
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: Unusually high system CPU usage with recent kernels
2013-08-20 20:52 Tibor Billes
@ 2013-08-20 21:43 ` Paul E. McKenney
0 siblings, 0 replies; 28+ messages in thread
From: Paul E. McKenney @ 2013-08-20 21:43 UTC (permalink / raw)
To: Tibor Billes; +Cc: linux-kernel
On Tue, Aug 20, 2013 at 10:52:26PM +0200, Tibor Billes wrote:
> > From: Paul E. McKenney Sent: 08/20/13 04:53 PM
> > On Tue, Aug 20, 2013 at 08:01:28AM +0200, Tibor Billes wrote:
> > > Hi,
> > >
> > > I was using the 3.9.7 stable release and tried to upgrade to the 3.10.x series.
> > > The 3.10.x series was showing unusually high (>75%) system CPU usage in some
> > > situations, making things really slow. The latest stable I tried is 3.10.7.
> > > I also tried 3.11-rc5, they both show this behaviour. This behaviour doesn't
> > > show up when the system is idling, only when doing some CPU intensive work,
> > > like compiling with multiple threads. Compiling with only one thread seems not
> > > to trigger this behaviour.
> > >
> > > To be more precise I did a `perf record -a` while compiling a large C++ program
> > > with scons using 4 threads, the result is appended at the end of this email.
> >
> > New one on me! You are running a mainstream system (x86_64), so I am
> > surprised no one else noticed.
> >
> > Could you please send along your .config file?
>
> Here it is
Interesting. I don't see RCU stuff all that high on the list, but
the items I do see lead me to suspect RCU_FAST_NO_HZ, which has some
relevance to the otherwise inexplicable group of commits you located
with your bisection. Could you please rerun with CONFIG_RCU_FAST_NO_HZ=n?
If that helps, there are some things I could try.
Thanx, Paul
> $ grep ^C .config
> CONFIG_64BIT=y
> CONFIG_X86_64=y
> CONFIG_X86=y
> CONFIG_INSTRUCTION_DECODER=y
> CONFIG_OUTPUT_FORMAT="elf64-x86-64"
> CONFIG_ARCH_DEFCONFIG="arch/x86/configs/x86_64_defconfig"
> CONFIG_LOCKDEP_SUPPORT=y
> CONFIG_STACKTRACE_SUPPORT=y
> CONFIG_HAVE_LATENCYTOP_SUPPORT=y
> CONFIG_MMU=y
> CONFIG_NEED_DMA_MAP_STATE=y
> CONFIG_NEED_SG_DMA_LENGTH=y
> CONFIG_GENERIC_ISA_DMA=y
> CONFIG_GENERIC_BUG=y
> CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
> CONFIG_GENERIC_HWEIGHT=y
> CONFIG_ARCH_MAY_HAVE_PC_FDC=y
> CONFIG_RWSEM_XCHGADD_ALGORITHM=y
> CONFIG_GENERIC_CALIBRATE_DELAY=y
> CONFIG_ARCH_HAS_CPU_RELAX=y
> CONFIG_ARCH_HAS_DEFAULT_IDLE=y
> CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
> CONFIG_ARCH_HAS_CPU_AUTOPROBE=y
> CONFIG_HAVE_SETUP_PER_CPU_AREA=y
> CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
> CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
> CONFIG_ARCH_HIBERNATION_POSSIBLE=y
> CONFIG_ARCH_SUSPEND_POSSIBLE=y
> CONFIG_ZONE_DMA32=y
> CONFIG_AUDIT_ARCH=y
> CONFIG_ARCH_SUPPORTS_OPTIMIZED_INLINING=y
> CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
> CONFIG_HAVE_INTEL_TXT=y
> CONFIG_X86_64_SMP=y
> CONFIG_X86_HT=y
> CONFIG_ARCH_HWEIGHT_CFLAGS="-fcall-saved-rdi -fcall-saved-rsi -fcall-saved-rdx -fcall-saved-rcx -fcall-saved-r8 -fcall-saved-r9 -fcall-saved-r10 -fcall-saved-r11"
> CONFIG_ARCH_CPU_PROBE_RELEASE=y
> CONFIG_ARCH_SUPPORTS_UPROBES=y
> CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
> CONFIG_IRQ_WORK=y
> CONFIG_BUILDTIME_EXTABLE_SORT=y
> CONFIG_EXPERIMENTAL=y
> CONFIG_INIT_ENV_ARG_LIMIT=32
> CONFIG_CROSS_COMPILE=""
> CONFIG_LOCALVERSION=""
> CONFIG_HAVE_KERNEL_GZIP=y
> CONFIG_HAVE_KERNEL_BZIP2=y
> CONFIG_HAVE_KERNEL_LZMA=y
> CONFIG_HAVE_KERNEL_XZ=y
> CONFIG_HAVE_KERNEL_LZO=y
> CONFIG_KERNEL_GZIP=y
> CONFIG_DEFAULT_HOSTNAME="(none)"
> CONFIG_SWAP=y
> CONFIG_SYSVIPC=y
> CONFIG_SYSVIPC_SYSCTL=y
> CONFIG_POSIX_MQUEUE=y
> CONFIG_POSIX_MQUEUE_SYSCTL=y
> CONFIG_FHANDLE=y
> CONFIG_AUDIT=y
> CONFIG_AUDITSYSCALL=y
> CONFIG_AUDIT_WATCH=y
> CONFIG_AUDIT_TREE=y
> CONFIG_HAVE_GENERIC_HARDIRQS=y
> CONFIG_GENERIC_HARDIRQS=y
> CONFIG_GENERIC_IRQ_PROBE=y
> CONFIG_GENERIC_IRQ_SHOW=y
> CONFIG_GENERIC_PENDING_IRQ=y
> CONFIG_IRQ_DOMAIN=y
> CONFIG_IRQ_FORCED_THREADING=y
> CONFIG_SPARSE_IRQ=y
> CONFIG_CLOCKSOURCE_WATCHDOG=y
> CONFIG_ARCH_CLOCKSOURCE_DATA=y
> CONFIG_ALWAYS_USE_PERSISTENT_CLOCK=y
> CONFIG_GENERIC_TIME_VSYSCALL=y
> CONFIG_GENERIC_CLOCKEVENTS=y
> CONFIG_GENERIC_CLOCKEVENTS_BUILD=y
> CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
> CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y
> CONFIG_GENERIC_CMOS_UPDATE=y
> CONFIG_TICK_ONESHOT=y
> CONFIG_NO_HZ=y
> CONFIG_HIGH_RES_TIMERS=y
> CONFIG_VIRT_CPU_ACCOUNTING=y
> CONFIG_VIRT_CPU_ACCOUNTING_GEN=y
> CONFIG_BSD_PROCESS_ACCT=y
> CONFIG_BSD_PROCESS_ACCT_V3=y
> CONFIG_TASKSTATS=y
> CONFIG_TASK_DELAY_ACCT=y
> CONFIG_TASK_XACCT=y
> CONFIG_TASK_IO_ACCOUNTING=y
> CONFIG_TREE_RCU=y
> CONFIG_RCU_STALL_COMMON=y
> CONFIG_CONTEXT_TRACKING=y
> CONFIG_RCU_USER_QS=y
> CONFIG_CONTEXT_TRACKING_FORCE=y
> CONFIG_RCU_FANOUT=64
> CONFIG_RCU_FANOUT_LEAF=16
> CONFIG_RCU_FAST_NO_HZ=y
> CONFIG_RCU_NOCB_CPU=y
> CONFIG_RCU_NOCB_CPU_NONE=y
> CONFIG_IKCONFIG=m
> CONFIG_LOG_BUF_SHIFT=18
> CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y
> CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
> CONFIG_ARCH_WANTS_PROT_NUMA_PROT_NONE=y
> CONFIG_CGROUPS=y
> CONFIG_CGROUP_FREEZER=y
> CONFIG_CGROUP_DEVICE=y
> CONFIG_CPUSETS=y
> CONFIG_PROC_PID_CPUSET=y
> CONFIG_CGROUP_CPUACCT=y
> CONFIG_RESOURCE_COUNTERS=y
> CONFIG_CGROUP_PERF=y
> CONFIG_CGROUP_SCHED=y
> CONFIG_FAIR_GROUP_SCHED=y
> CONFIG_CFS_BANDWIDTH=y
> CONFIG_RT_GROUP_SCHED=y
> CONFIG_BLK_CGROUP=y
> CONFIG_NAMESPACES=y
> CONFIG_UTS_NS=y
> CONFIG_IPC_NS=y
> CONFIG_PID_NS=y
> CONFIG_NET_NS=y
> CONFIG_UIDGID_CONVERTED=y
> CONFIG_SCHED_AUTOGROUP=y
> CONFIG_RELAY=y
> CONFIG_BLK_DEV_INITRD=y
> CONFIG_INITRAMFS_SOURCE=""
> CONFIG_RD_GZIP=y
> CONFIG_SYSCTL=y
> CONFIG_ANON_INODES=y
> CONFIG_EXPERT=y
> CONFIG_HAVE_UID16=y
> CONFIG_UID16=y
> CONFIG_SYSCTL_SYSCALL=y
> CONFIG_SYSCTL_EXCEPTION_TRACE=y
> CONFIG_KALLSYMS=y
> CONFIG_KALLSYMS_ALL=y
> CONFIG_HOTPLUG=y
> CONFIG_PRINTK=y
> CONFIG_BUG=y
> CONFIG_ELF_CORE=y
> CONFIG_PCSPKR_PLATFORM=y
> CONFIG_HAVE_PCSPKR_PLATFORM=y
> CONFIG_BASE_FULL=y
> CONFIG_FUTEX=y
> CONFIG_EPOLL=y
> CONFIG_SIGNALFD=y
> CONFIG_TIMERFD=y
> CONFIG_EVENTFD=y
> CONFIG_SHMEM=y
> CONFIG_AIO=y
> CONFIG_HAVE_PERF_EVENTS=y
> CONFIG_PERF_EVENTS=y
> CONFIG_VM_EVENT_COUNTERS=y
> CONFIG_PCI_QUIRKS=y
> CONFIG_SLUB=y
> CONFIG_PROFILING=y
> CONFIG_TRACEPOINTS=y
> CONFIG_OPROFILE=m
> CONFIG_HAVE_OPROFILE=y
> CONFIG_OPROFILE_NMI_TIMER=y
> CONFIG_KPROBES=y
> CONFIG_JUMP_LABEL=y
> CONFIG_OPTPROBES=y
> CONFIG_KPROBES_ON_FTRACE=y
> CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
> CONFIG_ARCH_USE_BUILTIN_BSWAP=y
> CONFIG_KRETPROBES=y
> CONFIG_HAVE_IOREMAP_PROT=y
> CONFIG_HAVE_KPROBES=y
> CONFIG_HAVE_KRETPROBES=y
> CONFIG_HAVE_OPTPROBES=y
> CONFIG_HAVE_KPROBES_ON_FTRACE=y
> CONFIG_HAVE_ARCH_TRACEHOOK=y
> CONFIG_HAVE_DMA_ATTRS=y
> CONFIG_USE_GENERIC_SMP_HELPERS=y
> CONFIG_GENERIC_SMP_IDLE_THREAD=y
> CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
> CONFIG_HAVE_DMA_API_DEBUG=y
> CONFIG_HAVE_HW_BREAKPOINT=y
> CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
> CONFIG_HAVE_USER_RETURN_NOTIFIER=y
> CONFIG_HAVE_PERF_EVENTS_NMI=y
> CONFIG_HAVE_PERF_REGS=y
> CONFIG_HAVE_PERF_USER_STACK_DUMP=y
> CONFIG_HAVE_ARCH_JUMP_LABEL=y
> CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
> CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y
> CONFIG_HAVE_CMPXCHG_LOCAL=y
> CONFIG_HAVE_CMPXCHG_DOUBLE=y
> CONFIG_ARCH_WANT_COMPAT_IPC_PARSE_VERSION=y
> CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y
> CONFIG_HAVE_VIRT_TO_BUS=y
> CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
> CONFIG_SECCOMP_FILTER=y
> CONFIG_HAVE_CONTEXT_TRACKING=y
> CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
> CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y
> CONFIG_MODULES_USE_ELF_RELA=y
> CONFIG_OLD_SIGSUSPEND3=y
> CONFIG_COMPAT_OLD_SIGACTION=y
> CONFIG_RT_MUTEXES=y
> CONFIG_BASE_SMALL=0
> CONFIG_MODULES=y
> CONFIG_MODULE_UNLOAD=y
> CONFIG_MODVERSIONS=y
> CONFIG_MODULE_SRCVERSION_ALL=y
> CONFIG_STOP_MACHINE=y
> CONFIG_BLOCK=y
> CONFIG_BLK_DEV_BSG=y
> CONFIG_BLK_DEV_BSGLIB=y
> CONFIG_BLK_DEV_INTEGRITY=y
> CONFIG_BLK_DEV_THROTTLING=y
> CONFIG_PARTITION_ADVANCED=y
> CONFIG_MSDOS_PARTITION=y
> CONFIG_LDM_PARTITION=y
> CONFIG_EFI_PARTITION=y
> CONFIG_BLOCK_COMPAT=y
> CONFIG_IOSCHED_NOOP=y
> CONFIG_IOSCHED_CFQ=y
> CONFIG_CFQ_GROUP_IOSCHED=y
> CONFIG_DEFAULT_CFQ=y
> CONFIG_DEFAULT_IOSCHED="cfq"
> CONFIG_INLINE_SPIN_UNLOCK_IRQ=y
> CONFIG_INLINE_READ_UNLOCK=y
> CONFIG_INLINE_READ_UNLOCK_IRQ=y
> CONFIG_INLINE_WRITE_UNLOCK=y
> CONFIG_INLINE_WRITE_UNLOCK_IRQ=y
> CONFIG_MUTEX_SPIN_ON_OWNER=y
> CONFIG_FREEZER=y
> CONFIG_ZONE_DMA=y
> CONFIG_SMP=y
> CONFIG_X86_SUPPORTS_MEMORY_FAILURE=y
> CONFIG_SCHED_OMIT_FRAME_POINTER=y
> CONFIG_NO_BOOTMEM=y
> CONFIG_GENERIC_CPU=y
> CONFIG_X86_INTERNODE_CACHE_SHIFT=6
> CONFIG_X86_L1_CACHE_SHIFT=6
> CONFIG_X86_TSC=y
> CONFIG_X86_CMPXCHG64=y
> CONFIG_X86_CMOV=y
> CONFIG_X86_MINIMUM_CPU_FAMILY=64
> CONFIG_X86_DEBUGCTLMSR=y
> CONFIG_PROCESSOR_SELECT=y
> CONFIG_CPU_SUP_INTEL=y
> CONFIG_HPET_TIMER=y
> CONFIG_HPET_EMULATE_RTC=y
> CONFIG_DMI=y
> CONFIG_CALGARY_IOMMU=y
> CONFIG_CALGARY_IOMMU_ENABLED_BY_DEFAULT=y
> CONFIG_SWIOTLB=y
> CONFIG_IOMMU_HELPER=y
> CONFIG_NR_CPUS=4
> CONFIG_SCHED_SMT=y
> CONFIG_SCHED_MC=y
> CONFIG_PREEMPT_VOLUNTARY=y
> CONFIG_X86_LOCAL_APIC=y
> CONFIG_X86_IO_APIC=y
> CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS=y
> CONFIG_X86_MCE=y
> CONFIG_X86_MCE_INTEL=y
> CONFIG_X86_MCE_THRESHOLD=y
> CONFIG_X86_THERMAL_VECTOR=y
> CONFIG_ARCH_PHYS_ADDR_T_64BIT=y
> CONFIG_ARCH_DMA_ADDR_T_64BIT=y
> CONFIG_DIRECT_GBPAGES=y
> CONFIG_NUMA=y
> CONFIG_X86_64_ACPI_NUMA=y
> CONFIG_NODES_SPAN_OTHER_NODES=y
> CONFIG_NODES_SHIFT=6
> CONFIG_ARCH_SPARSEMEM_ENABLE=y
> CONFIG_ARCH_SPARSEMEM_DEFAULT=y
> CONFIG_ARCH_SELECT_MEMORY_MODEL=y
> CONFIG_ARCH_PROC_KCORE_TEXT=y
> CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
> CONFIG_SELECT_MEMORY_MODEL=y
> CONFIG_SPARSEMEM_MANUAL=y
> CONFIG_SPARSEMEM=y
> CONFIG_NEED_MULTIPLE_NODES=y
> CONFIG_HAVE_MEMORY_PRESENT=y
> CONFIG_SPARSEMEM_EXTREME=y
> CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
> CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER=y
> CONFIG_SPARSEMEM_VMEMMAP=y
> CONFIG_HAVE_MEMBLOCK=y
> CONFIG_HAVE_MEMBLOCK_NODE_MAP=y
> CONFIG_ARCH_DISCARD_MEMBLOCK=y
> CONFIG_MEMORY_ISOLATION=y
> CONFIG_PAGEFLAGS_EXTENDED=y
> CONFIG_SPLIT_PTLOCK_CPUS=4
> CONFIG_COMPACTION=y
> CONFIG_MIGRATION=y
> CONFIG_PHYS_ADDR_T_64BIT=y
> CONFIG_ZONE_DMA_FLAG=1
> CONFIG_BOUNCE=y
> CONFIG_NEED_BOUNCE_POOL=y
> CONFIG_VIRT_TO_BUS=y
> CONFIG_KSM=y
> CONFIG_DEFAULT_MMAP_MIN_ADDR=65536
> CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y
> CONFIG_MEMORY_FAILURE=y
> CONFIG_TRANSPARENT_HUGEPAGE=y
> CONFIG_TRANSPARENT_HUGEPAGE_MADVISE=y
> CONFIG_CLEANCACHE=y
> CONFIG_FRONTSWAP=y
> CONFIG_X86_CHECK_BIOS_CORRUPTION=y
> CONFIG_X86_BOOTPARAM_MEMORY_CORRUPTION_CHECK=y
> CONFIG_X86_RESERVE_LOW=64
> CONFIG_MTRR=y
> CONFIG_MTRR_SANITIZER=y
> CONFIG_MTRR_SANITIZER_ENABLE_DEFAULT=1
> CONFIG_MTRR_SANITIZER_SPARE_REG_NR_DEFAULT=1
> CONFIG_X86_PAT=y
> CONFIG_ARCH_USES_PG_UNCACHED=y
> CONFIG_ARCH_RANDOM=y
> CONFIG_EFI=y
> CONFIG_SECCOMP=y
> CONFIG_CC_STACKPROTECTOR=y
> CONFIG_HZ_250=y
> CONFIG_HZ=250
> CONFIG_SCHED_HRTICK=y
> CONFIG_PHYSICAL_START=0x1000000
> CONFIG_RELOCATABLE=y
> CONFIG_PHYSICAL_ALIGN=0x1000000
> CONFIG_HOTPLUG_CPU=y
> CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
> CONFIG_USE_PERCPU_NUMA_NODE_ID=y
> CONFIG_ARCH_HIBERNATION_HEADER=y
> CONFIG_SUSPEND=y
> CONFIG_SUSPEND_FREEZER=y
> CONFIG_HIBERNATE_CALLBACKS=y
> CONFIG_HIBERNATION=y
> CONFIG_PM_STD_PARTITION=""
> CONFIG_PM_SLEEP=y
> CONFIG_PM_SLEEP_SMP=y
> CONFIG_PM_RUNTIME=y
> CONFIG_PM=y
> CONFIG_PM_DEBUG=y
> CONFIG_PM_SLEEP_DEBUG=y
> CONFIG_PM_TRACE=y
> CONFIG_PM_TRACE_RTC=y
> CONFIG_ACPI=y
> CONFIG_ACPI_SLEEP=y
> CONFIG_ACPI_PROCFS_POWER=y
> CONFIG_ACPI_PROC_EVENT=y
> CONFIG_ACPI_AC=y
> CONFIG_ACPI_BATTERY=y
> CONFIG_ACPI_BUTTON=y
> CONFIG_ACPI_VIDEO=m
> CONFIG_ACPI_FAN=y
> CONFIG_ACPI_DOCK=y
> CONFIG_ACPI_I2C=y
> CONFIG_ACPI_PROCESSOR=y
> CONFIG_ACPI_HOTPLUG_CPU=y
> CONFIG_ACPI_THERMAL=y
> CONFIG_ACPI_NUMA=y
> CONFIG_ACPI_CUSTOM_DSDT_FILE=""
> CONFIG_ACPI_BLACKLIST_YEAR=0
> CONFIG_X86_PM_TIMER=y
> CONFIG_ACPI_CONTAINER=y
> CONFIG_ACPI_HED=y
> CONFIG_ACPI_APEI=y
> CONFIG_ACPI_APEI_GHES=y
> CONFIG_ACPI_APEI_PCIEAER=y
> CONFIG_ACPI_APEI_MEMORY_FAILURE=y
> CONFIG_SFI=y
> CONFIG_CPU_FREQ=y
> CONFIG_CPU_FREQ_TABLE=y
> CONFIG_CPU_FREQ_GOV_COMMON=y
> CONFIG_CPU_FREQ_STAT=y
> CONFIG_CPU_FREQ_STAT_DETAILS=y
> CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE=y
> CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
> CONFIG_CPU_FREQ_GOV_POWERSAVE=y
> CONFIG_CPU_FREQ_GOV_USERSPACE=y
> CONFIG_CPU_FREQ_GOV_ONDEMAND=y
> CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
> CONFIG_X86_ACPI_CPUFREQ=y
> CONFIG_X86_SPEEDSTEP_CENTRINO=y
> CONFIG_CPU_IDLE=y
> CONFIG_CPU_IDLE_GOV_LADDER=y
> CONFIG_CPU_IDLE_GOV_MENU=y
> CONFIG_INTEL_IDLE=y
> CONFIG_PCI=y
> CONFIG_PCI_DIRECT=y
> CONFIG_PCI_MMCONFIG=y
> CONFIG_PCI_DOMAINS=y
> CONFIG_PCIEPORTBUS=y
> CONFIG_PCIEAER=y
> CONFIG_PCIEASPM=y
> CONFIG_PCIEASPM_DEFAULT=y
> CONFIG_PCIE_PME=y
> CONFIG_ARCH_SUPPORTS_MSI=y
> CONFIG_PCI_MSI=y
> CONFIG_HT_IRQ=y
> CONFIG_PCI_ATS=y
> CONFIG_PCI_IOV=y
> CONFIG_PCI_PRI=y
> CONFIG_PCI_PASID=y
> CONFIG_PCI_IOAPIC=y
> CONFIG_PCI_LABEL=y
> CONFIG_ISA_DMA_API=y
> CONFIG_BINFMT_ELF=y
> CONFIG_COMPAT_BINFMT_ELF=y
> CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE=y
> CONFIG_BINFMT_MISC=m
> CONFIG_COREDUMP=y
> CONFIG_IA32_EMULATION=y
> CONFIG_COMPAT=y
> CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
> CONFIG_SYSVIPC_COMPAT=y
> CONFIG_KEYS_COMPAT=y
> CONFIG_HAVE_TEXT_POKE_SMP=y
> CONFIG_X86_DEV_DMA_OPS=y
> CONFIG_NET=y
> CONFIG_COMPAT_NETLINK_MESSAGES=y
> CONFIG_PACKET=y
> CONFIG_UNIX=y
> CONFIG_INET=y
> CONFIG_IP_MULTICAST=y
> CONFIG_SYN_COOKIES=y
> CONFIG_INET_LRO=y
> CONFIG_TCP_CONG_ADVANCED=y
> CONFIG_TCP_CONG_CUBIC=y
> CONFIG_DEFAULT_CUBIC=y
> CONFIG_DEFAULT_TCP_CONG="cubic"
> CONFIG_NETLABEL=y
> CONFIG_NETWORK_SECMARK=y
> CONFIG_NETFILTER=y
> CONFIG_NETFILTER_ADVANCED=y
> CONFIG_NF_CONNTRACK=m
> CONFIG_NF_CONNTRACK_MARK=y
> CONFIG_NF_CONNTRACK_SECMARK=y
> CONFIG_NF_CONNTRACK_EVENTS=y
> CONFIG_NF_CONNTRACK_TIMESTAMP=y
> CONFIG_NETFILTER_XTABLES=m
> CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
> CONFIG_NETFILTER_XT_TARGET_LOG=m
> CONFIG_NETFILTER_XT_MATCH_COMMENT=m
> CONFIG_NETFILTER_XT_MATCH_MULTIPORT=m
> CONFIG_NETFILTER_XT_MATCH_STATE=m
> CONFIG_NF_DEFRAG_IPV4=m
> CONFIG_NF_CONNTRACK_IPV4=m
> CONFIG_IP_NF_IPTABLES=m
> CONFIG_IP_NF_FILTER=m
> CONFIG_IP_NF_TARGET_REJECT=m
> CONFIG_IP_NF_MANGLE=m
> CONFIG_HAVE_NET_DSA=y
> CONFIG_DNS_RESOLVER=y
> CONFIG_RPS=y
> CONFIG_RFS_ACCEL=y
> CONFIG_XPS=y
> CONFIG_BQL=y
> CONFIG_BT=m
> CONFIG_BT_RFCOMM=m
> CONFIG_BT_RFCOMM_TTY=y
> CONFIG_BT_BNEP=m
> CONFIG_BT_BNEP_MC_FILTER=y
> CONFIG_BT_BNEP_PROTO_FILTER=y
> CONFIG_WIRELESS=y
> CONFIG_WEXT_CORE=y
> CONFIG_WEXT_PROC=y
> CONFIG_CFG80211=m
> CONFIG_NL80211_TESTMODE=y
> CONFIG_CFG80211_REG_DEBUG=y
> CONFIG_CFG80211_DEFAULT_PS=y
> CONFIG_CFG80211_DEBUGFS=y
> CONFIG_CFG80211_WEXT=y
> CONFIG_MAC80211=m
> CONFIG_MAC80211_HAS_RC=y
> CONFIG_MAC80211_RC_PID=y
> CONFIG_MAC80211_RC_DEFAULT_PID=y
> CONFIG_MAC80211_RC_DEFAULT="pid"
> CONFIG_MAC80211_MESH=y
> CONFIG_MAC80211_LEDS=y
> CONFIG_MAC80211_DEBUGFS=y
> CONFIG_MAC80211_DEBUG_MENU=y
> CONFIG_RFKILL=y
> CONFIG_RFKILL_LEDS=y
> CONFIG_RFKILL_INPUT=y
> CONFIG_HAVE_BPF_JIT=y
> CONFIG_UEVENT_HELPER_PATH=""
> CONFIG_DEVTMPFS=y
> CONFIG_DEVTMPFS_MOUNT=y
> CONFIG_PREVENT_FIRMWARE_BUILD=y
> CONFIG_FW_LOADER=y
> CONFIG_FIRMWARE_IN_KERNEL=y
> CONFIG_EXTRA_FIRMWARE=""
> CONFIG_FW_LOADER_USER_HELPER=y
> CONFIG_REGMAP=y
> CONFIG_REGMAP_I2C=y
> CONFIG_REGMAP_IRQ=y
> CONFIG_DMA_SHARED_BUFFER=y
> CONFIG_CONNECTOR=y
> CONFIG_PROC_EVENTS=y
> CONFIG_PARPORT=m
> CONFIG_PARPORT_PC=m
> CONFIG_PARPORT_PC_FIFO=y
> CONFIG_PARPORT_1284=y
> CONFIG_PNP=y
> CONFIG_PNP_DEBUG_MESSAGES=y
> CONFIG_PNPACPI=y
> CONFIG_BLK_DEV=y
> CONFIG_BLK_DEV_LOOP=y
> CONFIG_BLK_DEV_LOOP_MIN_COUNT=8
> CONFIG_BLK_DEV_RAM=y
> CONFIG_BLK_DEV_RAM_COUNT=16
> CONFIG_BLK_DEV_RAM_SIZE=65536
> CONFIG_VIRTIO_BLK=y
> CONFIG_INTEL_MEI=y
> CONFIG_INTEL_MEI_ME=y
> CONFIG_HAVE_IDE=y
> CONFIG_SCSI_MOD=y
> CONFIG_SCSI=y
> CONFIG_SCSI_DMA=y
> CONFIG_SCSI_PROC_FS=y
> CONFIG_BLK_DEV_SD=y
> CONFIG_BLK_DEV_SR=y
> CONFIG_CHR_DEV_SG=y
> CONFIG_SCSI_MULTI_LUN=y
> CONFIG_SCSI_CONSTANTS=y
> CONFIG_SCSI_LOGGING=y
> CONFIG_SCSI_SCAN_ASYNC=y
> CONFIG_SCSI_SPI_ATTRS=y
> CONFIG_SCSI_LOWLEVEL=y
> CONFIG_MEGARAID_NEWGEN=y
> CONFIG_SCSI_SYM53C8XX_2=y
> CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=1
> CONFIG_SCSI_SYM53C8XX_DEFAULT_TAGS=16
> CONFIG_SCSI_SYM53C8XX_MAX_TAGS=64
> CONFIG_SCSI_SYM53C8XX_MMIO=y
> CONFIG_SCSI_DH=y
> CONFIG_ATA=y
> CONFIG_ATA_VERBOSE_ERROR=y
> CONFIG_ATA_ACPI=y
> CONFIG_SATA_PMP=y
> CONFIG_SATA_AHCI=y
> CONFIG_ATA_SFF=y
> CONFIG_PDC_ADMA=y
> CONFIG_ATA_BMDMA=y
> CONFIG_ATA_PIIX=y
> CONFIG_PATA_SIS=y
> CONFIG_PATA_ACPI=y
> CONFIG_ATA_GENERIC=y
> CONFIG_MD=y
> CONFIG_BLK_DEV_MD=y
> CONFIG_MD_AUTODETECT=y
> CONFIG_BLK_DEV_DM=y
> CONFIG_DM_CRYPT=m
> CONFIG_DM_UEVENT=y
> CONFIG_NETDEVICES=y
> CONFIG_NET_CORE=y
> CONFIG_TUN=y
> CONFIG_VIRTIO_NET=y
> CONFIG_ETHERNET=y
> CONFIG_MDIO=m
> CONFIG_NET_CADENCE=y
> CONFIG_NET_VENDOR_BROADCOM=y
> CONFIG_BNX2=m
> CONFIG_TIGON3=m
> CONFIG_BNX2X=m
> CONFIG_BNX2X_SRIOV=y
> CONFIG_PHYLIB=y
> CONFIG_BROADCOM_PHY=y
> CONFIG_WLAN=y
> CONFIG_IWLWIFI=m
> CONFIG_IWLDVM=m
> CONFIG_IWLWIFI_DEBUGFS=y
> CONFIG_IWLWIFI_DEVICE_TRACING=y
> CONFIG_IWLWIFI_DEVICE_TESTMODE=y
> CONFIG_IWLWIFI_P2P=y
> CONFIG_INPUT=y
> CONFIG_INPUT_SPARSEKMAP=m
> CONFIG_INPUT_MOUSEDEV=y
> CONFIG_INPUT_MOUSEDEV_PSAUX=y
> CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
> CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
> CONFIG_INPUT_EVDEV=y
> CONFIG_INPUT_KEYBOARD=y
> CONFIG_KEYBOARD_ATKBD=y
> CONFIG_INPUT_MOUSE=y
> CONFIG_MOUSE_PS2=m
> CONFIG_MOUSE_PS2_ALPS=y
> CONFIG_INPUT_MISC=y
> CONFIG_INPUT_UINPUT=y
> CONFIG_SERIO=y
> CONFIG_SERIO_I8042=y
> CONFIG_SERIO_LIBPS2=y
> CONFIG_SERIO_RAW=m
> CONFIG_TTY=y
> CONFIG_VT=y
> CONFIG_CONSOLE_TRANSLATIONS=y
> CONFIG_VT_CONSOLE=y
> CONFIG_VT_CONSOLE_SLEEP=y
> CONFIG_HW_CONSOLE=y
> CONFIG_VT_HW_CONSOLE_BINDING=y
> CONFIG_UNIX98_PTYS=y
> CONFIG_DEVPTS_MULTIPLE_INSTANCES=y
> CONFIG_LEGACY_PTYS=y
> CONFIG_LEGACY_PTY_COUNT=0
> CONFIG_SERIAL_NONSTANDARD=y
> CONFIG_STALDRV=y
> CONFIG_SERIAL_8250=y
> CONFIG_SERIAL_8250_PNP=y
> CONFIG_SERIAL_8250_CONSOLE=y
> CONFIG_FIX_EARLYCON_MEM=y
> CONFIG_SERIAL_8250_DMA=y
> CONFIG_SERIAL_8250_PCI=y
> CONFIG_SERIAL_8250_NR_UARTS=48
> CONFIG_SERIAL_8250_RUNTIME_UARTS=32
> CONFIG_SERIAL_8250_EXTENDED=y
> CONFIG_SERIAL_8250_MANY_PORTS=y
> CONFIG_SERIAL_8250_SHARE_IRQ=y
> CONFIG_SERIAL_8250_RSA=y
> CONFIG_SERIAL_CORE=y
> CONFIG_SERIAL_CORE_CONSOLE=y
> CONFIG_TTY_PRINTK=y
> CONFIG_PRINTER=m
> CONFIG_PPDEV=m
> CONFIG_HW_RANDOM=y
> CONFIG_HPET=y
> CONFIG_HPET_MMAP=y
> CONFIG_DEVPORT=y
> CONFIG_I2C=y
> CONFIG_I2C_BOARDINFO=y
> CONFIG_I2C_COMPAT=y
> CONFIG_I2C_ALGOBIT=m
> CONFIG_PPS=m
> CONFIG_PTP_1588_CLOCK=m
> CONFIG_ARCH_WANT_OPTIONAL_GPIOLIB=y
> CONFIG_GPIO_DEVRES=y
> CONFIG_POWER_SUPPLY=y
> CONFIG_HWMON=y
> CONFIG_SENSORS_CORETEMP=m
> CONFIG_THERMAL=y
> CONFIG_THERMAL_HWMON=y
> CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y
> CONFIG_THERMAL_GOV_STEP_WISE=y
> CONFIG_WATCHDOG=y
> CONFIG_WATCHDOG_CORE=y
> CONFIG_SSB_POSSIBLE=y
> CONFIG_BCMA_POSSIBLE=y
> CONFIG_MFD_CORE=y
> CONFIG_MFD_88PM860X=y
> CONFIG_MFD_TPS6586X=y
> CONFIG_MFD_STMPE=y
> CONFIG_MFD_TC3589X=y
> CONFIG_PMIC_DA903X=y
> CONFIG_PMIC_ADP5520=y
> CONFIG_MFD_MAX8925=y
> CONFIG_MFD_MAX8997=y
> CONFIG_MFD_MAX8998=y
> CONFIG_MFD_WM831X=y
> CONFIG_MFD_WM831X_I2C=y
> CONFIG_MFD_WM8350=y
> CONFIG_MFD_WM8350_I2C=y
> CONFIG_MFD_WM8994=y
> CONFIG_ABX500_CORE=y
> CONFIG_AB3100_CORE=y
> CONFIG_REGULATOR=y
> CONFIG_REGULATOR_88PM8607=y
> CONFIG_MEDIA_SUPPORT=m
> CONFIG_AGP=y
> CONFIG_AGP_INTEL=y
> CONFIG_AGP_VIA=y
> CONFIG_VGA_ARB=y
> CONFIG_VGA_ARB_MAX_GPUS=16
> CONFIG_VGA_SWITCHEROO=y
> CONFIG_DRM=m
> CONFIG_DRM_KMS_HELPER=m
> CONFIG_DRM_LOAD_EDID_FIRMWARE=y
> CONFIG_DRM_I915=m
> CONFIG_DRM_I915_KMS=y
> CONFIG_VIDEO_OUTPUT_CONTROL=m
> CONFIG_HDMI=y
> CONFIG_FB=y
> CONFIG_FIRMWARE_EDID=y
> CONFIG_FB_CFB_FILLRECT=y
> CONFIG_FB_CFB_COPYAREA=y
> CONFIG_FB_CFB_IMAGEBLIT=y
> CONFIG_FB_MODE_HELPERS=y
> CONFIG_FB_TILEBLITTING=y
> CONFIG_FB_ASILIANT=y
> CONFIG_FB_IMSTT=y
> CONFIG_FB_EFI=y
> CONFIG_FB_GEODE=y
> CONFIG_BACKLIGHT_LCD_SUPPORT=y
> CONFIG_BACKLIGHT_CLASS_DEVICE=y
> CONFIG_VGA_CONSOLE=y
> CONFIG_DUMMY_CONSOLE=y
> CONFIG_FRAMEBUFFER_CONSOLE=y
> CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
> CONFIG_FONT_8x8=y
> CONFIG_FONT_8x16=y
> CONFIG_SOUND=m
> CONFIG_SND=m
> CONFIG_SND_TIMER=m
> CONFIG_SND_PCM=m
> CONFIG_SND_HWDEP=m
> CONFIG_SND_RAWMIDI=m
> CONFIG_SND_JACK=y
> CONFIG_SND_SEQUENCER=m
> CONFIG_SND_DYNAMIC_MINORS=y
> CONFIG_SND_SUPPORT_OLD_API=y
> CONFIG_SND_VERBOSE_PROCFS=y
> CONFIG_SND_VMASTER=y
> CONFIG_SND_KCTL_JACK=y
> CONFIG_SND_DMA_SGBUF=y
> CONFIG_SND_RAWMIDI_SEQ=m
> CONFIG_SND_DRIVERS=y
> CONFIG_SND_VIRMIDI=m
> CONFIG_SND_PCI=y
> CONFIG_SND_HDA_INTEL=m
> CONFIG_SND_HDA_PREALLOC_SIZE=64
> CONFIG_SND_HDA_HWDEP=y
> CONFIG_SND_HDA_RECONFIG=y
> CONFIG_SND_HDA_INPUT_BEEP=y
> CONFIG_SND_HDA_INPUT_BEEP_MODE=0
> CONFIG_SND_HDA_INPUT_JACK=y
> CONFIG_SND_HDA_PATCH_LOADER=y
> CONFIG_SND_HDA_CODEC_REALTEK=y
> CONFIG_SND_HDA_CODEC_ANALOG=y
> CONFIG_SND_HDA_CODEC_SIGMATEL=y
> CONFIG_SND_HDA_CODEC_VIA=y
> CONFIG_SND_HDA_CODEC_HDMI=y
> CONFIG_SND_HDA_CODEC_CIRRUS=y
> CONFIG_SND_HDA_CODEC_CONEXANT=y
> CONFIG_SND_HDA_CODEC_CA0110=y
> CONFIG_SND_HDA_CODEC_CA0132=y
> CONFIG_SND_HDA_CODEC_CMEDIA=y
> CONFIG_SND_HDA_CODEC_SI3054=y
> CONFIG_SND_HDA_GENERIC=y
> CONFIG_SND_HDA_POWER_SAVE_DEFAULT=0
> CONFIG_SND_USB=y
> CONFIG_HID=m
> CONFIG_HIDRAW=y
> CONFIG_HID_GENERIC=m
> CONFIG_USB_HID=m
> CONFIG_HID_PID=y
> CONFIG_USB_HIDDEV=y
> CONFIG_USB_ARCH_HAS_OHCI=y
> CONFIG_USB_ARCH_HAS_EHCI=y
> CONFIG_USB_ARCH_HAS_XHCI=y
> CONFIG_USB_SUPPORT=y
> CONFIG_USB_COMMON=y
> CONFIG_USB_ARCH_HAS_HCD=y
> CONFIG_USB=y
> CONFIG_USB_MON=y
> CONFIG_USB_XHCI_HCD=y
> CONFIG_USB_EHCI_HCD=y
> CONFIG_USB_EHCI_ROOT_HUB_TT=y
> CONFIG_USB_EHCI_TT_NEWSCHED=y
> CONFIG_USB_EHCI_PCI=y
> CONFIG_USB_OHCI_HCD=y
> CONFIG_USB_OHCI_LITTLE_ENDIAN=y
> CONFIG_USB_UHCI_HCD=y
> CONFIG_USB_STORAGE=m
> CONFIG_USB_SERIAL=m
> CONFIG_USB_SERIAL_GENERIC=y
> CONFIG_USB_SERIAL_FTDI_SIO=m
> CONFIG_MMC=y
> CONFIG_MMC_BLOCK=m
> CONFIG_MMC_BLOCK_MINORS=8
> CONFIG_MMC_BLOCK_BOUNCE=y
> CONFIG_MMC_SDHCI=m
> CONFIG_MMC_SDHCI_PCI=m
> CONFIG_MMC_RICOH_MMC=y
> CONFIG_NEW_LEDS=y
> CONFIG_LEDS_CLASS=y
> CONFIG_LEDS_TRIGGERS=y
> CONFIG_EDAC=y
> CONFIG_EDAC_LEGACY_SYSFS=y
> CONFIG_RTC_LIB=y
> CONFIG_RTC_CLASS=y
> CONFIG_RTC_INTF_SYSFS=y
> CONFIG_RTC_INTF_PROC=y
> CONFIG_RTC_INTF_DEV=y
> CONFIG_RTC_DRV_CMOS=y
> CONFIG_DMADEVICES=y
> CONFIG_AUXDISPLAY=y
> CONFIG_VIRTIO=y
> CONFIG_VIRTIO_PCI=y
> CONFIG_STAGING=y
> CONFIG_ZSMALLOC=y
> CONFIG_STAGING_MEDIA=y
> CONFIG_NET_VENDOR_SILICOM=y
> CONFIG_ZCACHE=y
> CONFIG_X86_PLATFORM_DEVICES=y
> CONFIG_DELL_LAPTOP=m
> CONFIG_DELL_WMI=m
> CONFIG_ACPI_WMI=m
> CONFIG_CLKEVT_I8253=y
> CONFIG_I8253_LOCK=y
> CONFIG_CLKBLD_I8253=y
> CONFIG_IOMMU_API=y
> CONFIG_IOMMU_SUPPORT=y
> CONFIG_DMAR_TABLE=y
> CONFIG_INTEL_IOMMU=y
> CONFIG_INTEL_IOMMU_FLOPPY_WA=y
> CONFIG_IRQ_REMAP=y
> CONFIG_VIRT_DRIVERS=y
> CONFIG_PM_DEVFREQ=y
> CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND=y
> CONFIG_DEVFREQ_GOV_PERFORMANCE=y
> CONFIG_DEVFREQ_GOV_POWERSAVE=y
> CONFIG_DEVFREQ_GOV_USERSPACE=y
> CONFIG_EDD=y
> CONFIG_EDD_OFF=y
> CONFIG_FIRMWARE_MEMMAP=y
> CONFIG_EFI_VARS=y
> CONFIG_DCDBAS=m
> CONFIG_DMIID=y
> CONFIG_ISCSI_IBFT_FIND=y
> CONFIG_DCACHE_WORD_ACCESS=y
> CONFIG_EXT3_FS=y
> CONFIG_EXT3_DEFAULTS_TO_ORDERED=y
> CONFIG_EXT3_FS_XATTR=y
> CONFIG_EXT3_FS_POSIX_ACL=y
> CONFIG_EXT3_FS_SECURITY=y
> CONFIG_EXT4_FS=y
> CONFIG_EXT4_USE_FOR_EXT23=y
> CONFIG_EXT4_FS_POSIX_ACL=y
> CONFIG_EXT4_FS_SECURITY=y
> CONFIG_JBD=y
> CONFIG_JBD2=y
> CONFIG_FS_MBCACHE=y
> CONFIG_FS_POSIX_ACL=y
> CONFIG_EXPORTFS=y
> CONFIG_FILE_LOCKING=y
> CONFIG_FSNOTIFY=y
> CONFIG_DNOTIFY=y
> CONFIG_INOTIFY_USER=y
> CONFIG_FANOTIFY=y
> CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y
> CONFIG_FUSE_FS=y
> CONFIG_GENERIC_ACL=y
> CONFIG_ISO9660_FS=m
> CONFIG_JOLIET=y
> CONFIG_ZISOFS=y
> CONFIG_UDF_FS=m
> CONFIG_UDF_NLS=y
> CONFIG_FAT_FS=m
> CONFIG_VFAT_FS=m
> CONFIG_FAT_DEFAULT_CODEPAGE=437
> CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
> CONFIG_NTFS_FS=m
> CONFIG_NTFS_RW=y
> CONFIG_PROC_FS=y
> CONFIG_PROC_KCORE=y
> CONFIG_PROC_SYSCTL=y
> CONFIG_PROC_PAGE_MONITOR=y
> CONFIG_SYSFS=y
> CONFIG_TMPFS=y
> CONFIG_TMPFS_POSIX_ACL=y
> CONFIG_TMPFS_XATTR=y
> CONFIG_HUGETLBFS=y
> CONFIG_HUGETLB_PAGE=y
> CONFIG_MISC_FILESYSTEMS=y
> CONFIG_ECRYPT_FS=y
> CONFIG_PSTORE=y
> CONFIG_NETWORK_FILESYSTEMS=y
> CONFIG_NFS_FS=m
> CONFIG_NFS_V3=m
> CONFIG_NFS_V4=m
> CONFIG_NFS_USE_KERNEL_DNS=y
> CONFIG_LOCKD=m
> CONFIG_LOCKD_V4=y
> CONFIG_NFS_COMMON=y
> CONFIG_SUNRPC=m
> CONFIG_SUNRPC_GSS=m
> CONFIG_NLS=y
> CONFIG_NLS_DEFAULT="utf8"
> CONFIG_NLS_CODEPAGE_437=m
> CONFIG_NLS_CODEPAGE_1250=m
> CONFIG_NLS_ISO8859_1=m
> CONFIG_NLS_ISO8859_2=m
> CONFIG_NLS_UTF8=m
> CONFIG_TRACE_IRQFLAGS_SUPPORT=y
> CONFIG_PRINTK_TIME=y
> CONFIG_DEFAULT_MESSAGE_LOGLEVEL=4
> CONFIG_FRAME_WARN=1024
> CONFIG_MAGIC_SYSRQ=y
> CONFIG_UNUSED_SYMBOLS=y
> CONFIG_DEBUG_FS=y
> CONFIG_DEBUG_KERNEL=y
> CONFIG_LOCKUP_DETECTOR=y
> CONFIG_HARDLOCKUP_DETECTOR=y
> CONFIG_BOOTPARAM_HARDLOCKUP_PANIC_VALUE=0
> CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=0
> CONFIG_PANIC_ON_OOPS_VALUE=0
> CONFIG_DETECT_HUNG_TASK=y
> CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=120
> CONFIG_BOOTPARAM_HUNG_TASK_PANIC_VALUE=0
> CONFIG_SCHED_DEBUG=y
> CONFIG_SCHEDSTATS=y
> CONFIG_TIMER_STATS=y
> CONFIG_HAVE_DEBUG_KMEMLEAK=y
> CONFIG_STACKTRACE=y
> CONFIG_DEBUG_BUGVERBOSE=y
> CONFIG_DEBUG_INFO=y
> CONFIG_DEBUG_INFO_REDUCED=y
> CONFIG_ARCH_WANT_FRAME_POINTERS=y
> CONFIG_FRAME_POINTER=y
> CONFIG_BOOT_PRINTK_DELAY=y
> CONFIG_RCU_CPU_STALL_TIMEOUT=60
> CONFIG_USER_STACKTRACE_SUPPORT=y
> CONFIG_NOP_TRACER=y
> CONFIG_HAVE_FUNCTION_TRACER=y
> CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
> CONFIG_HAVE_FUNCTION_GRAPH_FP_TEST=y
> CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST=y
> CONFIG_HAVE_DYNAMIC_FTRACE=y
> CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y
> CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
> CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
> CONFIG_HAVE_FENTRY=y
> CONFIG_HAVE_C_RECORDMCOUNT=y
> CONFIG_TRACER_MAX_TRACE=y
> CONFIG_TRACE_CLOCK=y
> CONFIG_RING_BUFFER=y
> CONFIG_EVENT_TRACING=y
> CONFIG_CONTEXT_SWITCH_TRACER=y
> CONFIG_RING_BUFFER_ALLOW_SWAP=y
> CONFIG_TRACING=y
> CONFIG_GENERIC_TRACER=y
> CONFIG_TRACING_SUPPORT=y
> CONFIG_FTRACE=y
> CONFIG_FUNCTION_TRACER=y
> CONFIG_FUNCTION_GRAPH_TRACER=y
> CONFIG_SCHED_TRACER=y
> CONFIG_FTRACE_SYSCALLS=y
> CONFIG_TRACER_SNAPSHOT=y
> CONFIG_BRANCH_PROFILE_NONE=y
> CONFIG_STACK_TRACER=y
> CONFIG_BLK_DEV_IO_TRACE=y
> CONFIG_KPROBE_EVENT=y
> CONFIG_PROBE_EVENTS=y
> CONFIG_DYNAMIC_FTRACE=y
> CONFIG_DYNAMIC_FTRACE_WITH_REGS=y
> CONFIG_FUNCTION_PROFILER=y
> CONFIG_FTRACE_MCOUNT_RECORD=y
> CONFIG_MMIOTRACE=y
> CONFIG_HAVE_ARCH_KGDB=y
> CONFIG_HAVE_ARCH_KMEMCHECK=y
> CONFIG_STRICT_DEVMEM=y
> CONFIG_EARLY_PRINTK=y
> CONFIG_DEBUG_RODATA=y
> CONFIG_DEBUG_SET_MODULE_RONX=y
> CONFIG_HAVE_MMIOTRACE_SUPPORT=y
> CONFIG_IO_DELAY_TYPE_0X80=0
> CONFIG_IO_DELAY_TYPE_0XED=1
> CONFIG_IO_DELAY_TYPE_UDELAY=2
> CONFIG_IO_DELAY_TYPE_NONE=3
> CONFIG_IO_DELAY_0XED=y
> CONFIG_DEFAULT_IO_DELAY_TYPE=1
> CONFIG_OPTIMIZE_INLINING=y
> CONFIG_KEYS=y
> CONFIG_ENCRYPTED_KEYS=y
> CONFIG_SECURITY=y
> CONFIG_SECURITYFS=y
> CONFIG_SECURITY_NETWORK=y
> CONFIG_SECURITY_PATH=y
> CONFIG_INTEL_TXT=y
> CONFIG_LSM_MMAP_MIN_ADDR=0
> CONFIG_SECURITY_SELINUX=y
> CONFIG_SECURITY_SELINUX_BOOTPARAM=y
> CONFIG_SECURITY_SELINUX_BOOTPARAM_VALUE=0
> CONFIG_SECURITY_SELINUX_DISABLE=y
> CONFIG_SECURITY_SELINUX_DEVELOP=y
> CONFIG_SECURITY_SELINUX_AVC_STATS=y
> CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE=1
> CONFIG_SECURITY_SMACK=y
> CONFIG_SECURITY_TOMOYO=y
> CONFIG_SECURITY_TOMOYO_MAX_ACCEPT_ENTRY=2048
> CONFIG_SECURITY_TOMOYO_MAX_AUDIT_LOG=1024
> CONFIG_SECURITY_TOMOYO_POLICY_LOADER="/sbin/tomoyo-init"
> CONFIG_SECURITY_TOMOYO_ACTIVATION_TRIGGER="/sbin/init"
> CONFIG_SECURITY_APPARMOR=y
> CONFIG_SECURITY_APPARMOR_BOOTPARAM_VALUE=1
> CONFIG_SECURITY_YAMA=y
> CONFIG_INTEGRITY=y
> CONFIG_EVM=y
> CONFIG_EVM_HMAC_VERSION=2
> CONFIG_DEFAULT_SECURITY_APPARMOR=y
> CONFIG_DEFAULT_SECURITY="apparmor"
> CONFIG_CRYPTO=y
> CONFIG_CRYPTO_ALGAPI=y
> CONFIG_CRYPTO_ALGAPI2=y
> CONFIG_CRYPTO_AEAD2=y
> CONFIG_CRYPTO_BLKCIPHER=y
> CONFIG_CRYPTO_BLKCIPHER2=y
> CONFIG_CRYPTO_HASH=y
> CONFIG_CRYPTO_HASH2=y
> CONFIG_CRYPTO_RNG=y
> CONFIG_CRYPTO_RNG2=y
> CONFIG_CRYPTO_PCOMP2=y
> CONFIG_CRYPTO_MANAGER=y
> CONFIG_CRYPTO_MANAGER2=y
> CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y
> CONFIG_CRYPTO_WORKQUEUE=y
> CONFIG_CRYPTO_CBC=y
> CONFIG_CRYPTO_ECB=y
> CONFIG_CRYPTO_HMAC=y
> CONFIG_CRYPTO_CRC32C=y
> CONFIG_CRYPTO_CRC32C_X86_64=y
> CONFIG_CRYPTO_CRC32C_INTEL=y
> CONFIG_CRYPTO_MD5=y
> CONFIG_CRYPTO_SHA1=y
> CONFIG_CRYPTO_SHA256=y
> CONFIG_CRYPTO_AES=y
> CONFIG_CRYPTO_ARC4=m
> CONFIG_CRYPTO_LZO=y
> CONFIG_CRYPTO_HW=y
> CONFIG_CRYPTO_DEV_PADLOCK=y
> CONFIG_HAVE_KVM=y
> CONFIG_BINARY_PRINTF=y
> CONFIG_BITREVERSE=y
> CONFIG_GENERIC_STRNCPY_FROM_USER=y
> CONFIG_GENERIC_STRNLEN_USER=y
> CONFIG_GENERIC_FIND_FIRST_BIT=y
> CONFIG_GENERIC_PCI_IOMAP=y
> CONFIG_GENERIC_IOMAP=y
> CONFIG_GENERIC_IO=y
> CONFIG_CRC16=y
> CONFIG_CRC_T10DIF=y
> CONFIG_CRC_ITU_T=m
> CONFIG_CRC32=y
> CONFIG_CRC32_SLICEBY8=y
> CONFIG_LIBCRC32C=m
> CONFIG_ZLIB_INFLATE=y
> CONFIG_LZO_COMPRESS=y
> CONFIG_LZO_DECOMPRESS=y
> CONFIG_XZ_DEC=y
> CONFIG_XZ_DEC_X86=y
> CONFIG_XZ_DEC_IA64=y
> CONFIG_XZ_DEC_BCJ=y
> CONFIG_DECOMPRESS_GZIP=y
> CONFIG_GENERIC_ALLOCATOR=y
> CONFIG_HAS_IOMEM=y
> CONFIG_HAS_IOPORT=y
> CONFIG_HAS_DMA=y
> CONFIG_CPU_RMAP=y
> CONFIG_DQL=y
> CONFIG_NLATTR=y
> CONFIG_ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE=y
> CONFIG_AVERAGE=y
>
* Re: Unusually high system CPU usage with recent kernels
@ 2013-08-20 20:52 Tibor Billes
2013-08-20 21:43 ` Paul E. McKenney
0 siblings, 1 reply; 28+ messages in thread
From: Tibor Billes @ 2013-08-20 20:52 UTC (permalink / raw)
To: paulmck; +Cc: linux-kernel
> From: Paul E. McKenney Sent: 08/20/13 04:53 PM
> On Tue, Aug 20, 2013 at 08:01:28AM +0200, Tibor Billes wrote:
> > Hi,
> >
> > I was using the 3.9.7 stable release and tried to upgrade to the 3.10.x series.
> > The 3.10.x series was showing unusually high (>75%) system CPU usage in some
> > situations, making things really slow. The latest stable I tried is 3.10.7.
> > I also tried 3.11-rc5, they both show this behaviour. This behaviour doesn't
> > show up when the system is idling, only when doing some CPU intensive work,
> > like compiling with multiple threads. Compiling with only one thread seems not
> > to trigger this behaviour.
> >
> > To be more precise I did a `perf record -a` while compiling a large C++ program
> > with scons using 4 threads, the result is appended at the end of this email.
>
> New one on me! You are running a mainstream system (x86_64), so I am
> surprised no one else noticed.
>
> Could you please send along your .config file?
Here it is
$ grep ^C .config
CONFIG_64BIT=y
CONFIG_X86_64=y
CONFIG_X86=y
CONFIG_INSTRUCTION_DECODER=y
CONFIG_OUTPUT_FORMAT="elf64-x86-64"
CONFIG_ARCH_DEFCONFIG="arch/x86/configs/x86_64_defconfig"
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_HAVE_LATENCYTOP_SUPPORT=y
CONFIG_MMU=y
CONFIG_NEED_DMA_MAP_STATE=y
CONFIG_NEED_SG_DMA_LENGTH=y
CONFIG_GENERIC_ISA_DMA=y
CONFIG_GENERIC_BUG=y
CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
CONFIG_GENERIC_HWEIGHT=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_ARCH_HAS_CPU_RELAX=y
CONFIG_ARCH_HAS_DEFAULT_IDLE=y
CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
CONFIG_ARCH_HAS_CPU_AUTOPROBE=y
CONFIG_HAVE_SETUP_PER_CPU_AREA=y
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
CONFIG_ARCH_HIBERNATION_POSSIBLE=y
CONFIG_ARCH_SUSPEND_POSSIBLE=y
CONFIG_ZONE_DMA32=y
CONFIG_AUDIT_ARCH=y
CONFIG_ARCH_SUPPORTS_OPTIMIZED_INLINING=y
CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
CONFIG_HAVE_INTEL_TXT=y
CONFIG_X86_64_SMP=y
CONFIG_X86_HT=y
CONFIG_ARCH_HWEIGHT_CFLAGS="-fcall-saved-rdi -fcall-saved-rsi -fcall-saved-rdx -fcall-saved-rcx -fcall-saved-r8 -fcall-saved-r9 -fcall-saved-r10 -fcall-saved-r11"
CONFIG_ARCH_CPU_PROBE_RELEASE=y
CONFIG_ARCH_SUPPORTS_UPROBES=y
CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
CONFIG_IRQ_WORK=y
CONFIG_BUILDTIME_EXTABLE_SORT=y
CONFIG_EXPERIMENTAL=y
CONFIG_INIT_ENV_ARG_LIMIT=32
CONFIG_CROSS_COMPILE=""
CONFIG_LOCALVERSION=""
CONFIG_HAVE_KERNEL_GZIP=y
CONFIG_HAVE_KERNEL_BZIP2=y
CONFIG_HAVE_KERNEL_LZMA=y
CONFIG_HAVE_KERNEL_XZ=y
CONFIG_HAVE_KERNEL_LZO=y
CONFIG_KERNEL_GZIP=y
CONFIG_DEFAULT_HOSTNAME="(none)"
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
CONFIG_SYSVIPC_SYSCTL=y
CONFIG_POSIX_MQUEUE=y
CONFIG_POSIX_MQUEUE_SYSCTL=y
CONFIG_FHANDLE=y
CONFIG_AUDIT=y
CONFIG_AUDITSYSCALL=y
CONFIG_AUDIT_WATCH=y
CONFIG_AUDIT_TREE=y
CONFIG_HAVE_GENERIC_HARDIRQS=y
CONFIG_GENERIC_HARDIRQS=y
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_GENERIC_IRQ_SHOW=y
CONFIG_GENERIC_PENDING_IRQ=y
CONFIG_IRQ_DOMAIN=y
CONFIG_IRQ_FORCED_THREADING=y
CONFIG_SPARSE_IRQ=y
CONFIG_CLOCKSOURCE_WATCHDOG=y
CONFIG_ARCH_CLOCKSOURCE_DATA=y
CONFIG_ALWAYS_USE_PERSISTENT_CLOCK=y
CONFIG_GENERIC_TIME_VSYSCALL=y
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_GENERIC_CLOCKEVENTS_BUILD=y
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y
CONFIG_GENERIC_CMOS_UPDATE=y
CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_VIRT_CPU_ACCOUNTING=y
CONFIG_VIRT_CPU_ACCOUNTING_GEN=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
CONFIG_TASKSTATS=y
CONFIG_TASK_DELAY_ACCT=y
CONFIG_TASK_XACCT=y
CONFIG_TASK_IO_ACCOUNTING=y
CONFIG_TREE_RCU=y
CONFIG_RCU_STALL_COMMON=y
CONFIG_CONTEXT_TRACKING=y
CONFIG_RCU_USER_QS=y
CONFIG_CONTEXT_TRACKING_FORCE=y
CONFIG_RCU_FANOUT=64
CONFIG_RCU_FANOUT_LEAF=16
CONFIG_RCU_FAST_NO_HZ=y
CONFIG_RCU_NOCB_CPU=y
CONFIG_RCU_NOCB_CPU_NONE=y
CONFIG_IKCONFIG=m
CONFIG_LOG_BUF_SHIFT=18
CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y
CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
CONFIG_ARCH_WANTS_PROT_NUMA_PROT_NONE=y
CONFIG_CGROUPS=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CPUSETS=y
CONFIG_PROC_PID_CPUSET=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_RESOURCE_COUNTERS=y
CONFIG_CGROUP_PERF=y
CONFIG_CGROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y
CONFIG_CFS_BANDWIDTH=y
CONFIG_RT_GROUP_SCHED=y
CONFIG_BLK_CGROUP=y
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_IPC_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
CONFIG_UIDGID_CONVERTED=y
CONFIG_SCHED_AUTOGROUP=y
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_RD_GZIP=y
CONFIG_SYSCTL=y
CONFIG_ANON_INODES=y
CONFIG_EXPERT=y
CONFIG_HAVE_UID16=y
CONFIG_UID16=y
CONFIG_SYSCTL_SYSCALL=y
CONFIG_SYSCTL_EXCEPTION_TRACE=y
CONFIG_KALLSYMS=y
CONFIG_KALLSYMS_ALL=y
CONFIG_HOTPLUG=y
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
CONFIG_PCSPKR_PLATFORM=y
CONFIG_HAVE_PCSPKR_PLATFORM=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_EPOLL=y
CONFIG_SIGNALFD=y
CONFIG_TIMERFD=y
CONFIG_EVENTFD=y
CONFIG_SHMEM=y
CONFIG_AIO=y
CONFIG_HAVE_PERF_EVENTS=y
CONFIG_PERF_EVENTS=y
CONFIG_VM_EVENT_COUNTERS=y
CONFIG_PCI_QUIRKS=y
CONFIG_SLUB=y
CONFIG_PROFILING=y
CONFIG_TRACEPOINTS=y
CONFIG_OPROFILE=m
CONFIG_HAVE_OPROFILE=y
CONFIG_OPROFILE_NMI_TIMER=y
CONFIG_KPROBES=y
CONFIG_JUMP_LABEL=y
CONFIG_OPTPROBES=y
CONFIG_KPROBES_ON_FTRACE=y
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
CONFIG_ARCH_USE_BUILTIN_BSWAP=y
CONFIG_KRETPROBES=y
CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_HAVE_OPTPROBES=y
CONFIG_HAVE_KPROBES_ON_FTRACE=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
CONFIG_HAVE_DMA_ATTRS=y
CONFIG_USE_GENERIC_SMP_HELPERS=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
CONFIG_HAVE_DMA_API_DEBUG=y
CONFIG_HAVE_HW_BREAKPOINT=y
CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
CONFIG_HAVE_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_PERF_EVENTS_NMI=y
CONFIG_HAVE_PERF_REGS=y
CONFIG_HAVE_PERF_USER_STACK_DUMP=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y
CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y
CONFIG_HAVE_CMPXCHG_LOCAL=y
CONFIG_HAVE_CMPXCHG_DOUBLE=y
CONFIG_ARCH_WANT_COMPAT_IPC_PARSE_VERSION=y
CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y
CONFIG_HAVE_VIRT_TO_BUS=y
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
CONFIG_SECCOMP_FILTER=y
CONFIG_HAVE_CONTEXT_TRACKING=y
CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y
CONFIG_MODULES_USE_ELF_RELA=y
CONFIG_OLD_SIGSUSPEND3=y
CONFIG_COMPAT_OLD_SIGACTION=y
CONFIG_RT_MUTEXES=y
CONFIG_BASE_SMALL=0
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODVERSIONS=y
CONFIG_MODULE_SRCVERSION_ALL=y
CONFIG_STOP_MACHINE=y
CONFIG_BLOCK=y
CONFIG_BLK_DEV_BSG=y
CONFIG_BLK_DEV_BSGLIB=y
CONFIG_BLK_DEV_INTEGRITY=y
CONFIG_BLK_DEV_THROTTLING=y
CONFIG_PARTITION_ADVANCED=y
CONFIG_MSDOS_PARTITION=y
CONFIG_LDM_PARTITION=y
CONFIG_EFI_PARTITION=y
CONFIG_BLOCK_COMPAT=y
CONFIG_IOSCHED_NOOP=y
CONFIG_IOSCHED_CFQ=y
CONFIG_CFQ_GROUP_IOSCHED=y
CONFIG_DEFAULT_CFQ=y
CONFIG_DEFAULT_IOSCHED="cfq"
CONFIG_INLINE_SPIN_UNLOCK_IRQ=y
CONFIG_INLINE_READ_UNLOCK=y
CONFIG_INLINE_READ_UNLOCK_IRQ=y
CONFIG_INLINE_WRITE_UNLOCK=y
CONFIG_INLINE_WRITE_UNLOCK_IRQ=y
CONFIG_MUTEX_SPIN_ON_OWNER=y
CONFIG_FREEZER=y
CONFIG_ZONE_DMA=y
CONFIG_SMP=y
CONFIG_X86_SUPPORTS_MEMORY_FAILURE=y
CONFIG_SCHED_OMIT_FRAME_POINTER=y
CONFIG_NO_BOOTMEM=y
CONFIG_GENERIC_CPU=y
CONFIG_X86_INTERNODE_CACHE_SHIFT=6
CONFIG_X86_L1_CACHE_SHIFT=6
CONFIG_X86_TSC=y
CONFIG_X86_CMPXCHG64=y
CONFIG_X86_CMOV=y
CONFIG_X86_MINIMUM_CPU_FAMILY=64
CONFIG_X86_DEBUGCTLMSR=y
CONFIG_PROCESSOR_SELECT=y
CONFIG_CPU_SUP_INTEL=y
CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
CONFIG_DMI=y
CONFIG_CALGARY_IOMMU=y
CONFIG_CALGARY_IOMMU_ENABLED_BY_DEFAULT=y
CONFIG_SWIOTLB=y
CONFIG_IOMMU_HELPER=y
CONFIG_NR_CPUS=4
CONFIG_SCHED_SMT=y
CONFIG_SCHED_MC=y
CONFIG_PREEMPT_VOLUNTARY=y
CONFIG_X86_LOCAL_APIC=y
CONFIG_X86_IO_APIC=y
CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS=y
CONFIG_X86_MCE=y
CONFIG_X86_MCE_INTEL=y
CONFIG_X86_MCE_THRESHOLD=y
CONFIG_X86_THERMAL_VECTOR=y
CONFIG_ARCH_PHYS_ADDR_T_64BIT=y
CONFIG_ARCH_DMA_ADDR_T_64BIT=y
CONFIG_DIRECT_GBPAGES=y
CONFIG_NUMA=y
CONFIG_X86_64_ACPI_NUMA=y
CONFIG_NODES_SPAN_OTHER_NODES=y
CONFIG_NODES_SHIFT=6
CONFIG_ARCH_SPARSEMEM_ENABLE=y
CONFIG_ARCH_SPARSEMEM_DEFAULT=y
CONFIG_ARCH_SELECT_MEMORY_MODEL=y
CONFIG_ARCH_PROC_KCORE_TEXT=y
CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
CONFIG_SELECT_MEMORY_MODEL=y
CONFIG_SPARSEMEM_MANUAL=y
CONFIG_SPARSEMEM=y
CONFIG_NEED_MULTIPLE_NODES=y
CONFIG_HAVE_MEMORY_PRESENT=y
CONFIG_SPARSEMEM_EXTREME=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER=y
CONFIG_SPARSEMEM_VMEMMAP=y
CONFIG_HAVE_MEMBLOCK=y
CONFIG_HAVE_MEMBLOCK_NODE_MAP=y
CONFIG_ARCH_DISCARD_MEMBLOCK=y
CONFIG_MEMORY_ISOLATION=y
CONFIG_PAGEFLAGS_EXTENDED=y
CONFIG_SPLIT_PTLOCK_CPUS=4
CONFIG_COMPACTION=y
CONFIG_MIGRATION=y
CONFIG_PHYS_ADDR_T_64BIT=y
CONFIG_ZONE_DMA_FLAG=1
CONFIG_BOUNCE=y
CONFIG_NEED_BOUNCE_POOL=y
CONFIG_VIRT_TO_BUS=y
CONFIG_KSM=y
CONFIG_DEFAULT_MMAP_MIN_ADDR=65536
CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y
CONFIG_MEMORY_FAILURE=y
CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_TRANSPARENT_HUGEPAGE_MADVISE=y
CONFIG_CLEANCACHE=y
CONFIG_FRONTSWAP=y
CONFIG_X86_CHECK_BIOS_CORRUPTION=y
CONFIG_X86_BOOTPARAM_MEMORY_CORRUPTION_CHECK=y
CONFIG_X86_RESERVE_LOW=64
CONFIG_MTRR=y
CONFIG_MTRR_SANITIZER=y
CONFIG_MTRR_SANITIZER_ENABLE_DEFAULT=1
CONFIG_MTRR_SANITIZER_SPARE_REG_NR_DEFAULT=1
CONFIG_X86_PAT=y
CONFIG_ARCH_USES_PG_UNCACHED=y
CONFIG_ARCH_RANDOM=y
CONFIG_EFI=y
CONFIG_SECCOMP=y
CONFIG_CC_STACKPROTECTOR=y
CONFIG_HZ_250=y
CONFIG_HZ=250
CONFIG_SCHED_HRTICK=y
CONFIG_PHYSICAL_START=0x1000000
CONFIG_RELOCATABLE=y
CONFIG_PHYSICAL_ALIGN=0x1000000
CONFIG_HOTPLUG_CPU=y
CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
CONFIG_USE_PERCPU_NUMA_NODE_ID=y
CONFIG_ARCH_HIBERNATION_HEADER=y
CONFIG_SUSPEND=y
CONFIG_SUSPEND_FREEZER=y
CONFIG_HIBERNATE_CALLBACKS=y
CONFIG_HIBERNATION=y
CONFIG_PM_STD_PARTITION=""
CONFIG_PM_SLEEP=y
CONFIG_PM_SLEEP_SMP=y
CONFIG_PM_RUNTIME=y
CONFIG_PM=y
CONFIG_PM_DEBUG=y
CONFIG_PM_SLEEP_DEBUG=y
CONFIG_PM_TRACE=y
CONFIG_PM_TRACE_RTC=y
CONFIG_ACPI=y
CONFIG_ACPI_SLEEP=y
CONFIG_ACPI_PROCFS_POWER=y
CONFIG_ACPI_PROC_EVENT=y
CONFIG_ACPI_AC=y
CONFIG_ACPI_BATTERY=y
CONFIG_ACPI_BUTTON=y
CONFIG_ACPI_VIDEO=m
CONFIG_ACPI_FAN=y
CONFIG_ACPI_DOCK=y
CONFIG_ACPI_I2C=y
CONFIG_ACPI_PROCESSOR=y
CONFIG_ACPI_HOTPLUG_CPU=y
CONFIG_ACPI_THERMAL=y
CONFIG_ACPI_NUMA=y
CONFIG_ACPI_CUSTOM_DSDT_FILE=""
CONFIG_ACPI_BLACKLIST_YEAR=0
CONFIG_X86_PM_TIMER=y
CONFIG_ACPI_CONTAINER=y
CONFIG_ACPI_HED=y
CONFIG_ACPI_APEI=y
CONFIG_ACPI_APEI_GHES=y
CONFIG_ACPI_APEI_PCIEAER=y
CONFIG_ACPI_APEI_MEMORY_FAILURE=y
CONFIG_SFI=y
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_TABLE=y
CONFIG_CPU_FREQ_GOV_COMMON=y
CONFIG_CPU_FREQ_STAT=y
CONFIG_CPU_FREQ_STAT_DETAILS=y
CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE=y
CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
CONFIG_CPU_FREQ_GOV_USERSPACE=y
CONFIG_CPU_FREQ_GOV_ONDEMAND=y
CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
CONFIG_X86_ACPI_CPUFREQ=y
CONFIG_X86_SPEEDSTEP_CENTRINO=y
CONFIG_CPU_IDLE=y
CONFIG_CPU_IDLE_GOV_LADDER=y
CONFIG_CPU_IDLE_GOV_MENU=y
CONFIG_INTEL_IDLE=y
CONFIG_PCI=y
CONFIG_PCI_DIRECT=y
CONFIG_PCI_MMCONFIG=y
CONFIG_PCI_DOMAINS=y
CONFIG_PCIEPORTBUS=y
CONFIG_PCIEAER=y
CONFIG_PCIEASPM=y
CONFIG_PCIEASPM_DEFAULT=y
CONFIG_PCIE_PME=y
CONFIG_ARCH_SUPPORTS_MSI=y
CONFIG_PCI_MSI=y
CONFIG_HT_IRQ=y
CONFIG_PCI_ATS=y
CONFIG_PCI_IOV=y
CONFIG_PCI_PRI=y
CONFIG_PCI_PASID=y
CONFIG_PCI_IOAPIC=y
CONFIG_PCI_LABEL=y
CONFIG_ISA_DMA_API=y
CONFIG_BINFMT_ELF=y
CONFIG_COMPAT_BINFMT_ELF=y
CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE=y
CONFIG_BINFMT_MISC=m
CONFIG_COREDUMP=y
CONFIG_IA32_EMULATION=y
CONFIG_COMPAT=y
CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
CONFIG_SYSVIPC_COMPAT=y
CONFIG_KEYS_COMPAT=y
CONFIG_HAVE_TEXT_POKE_SMP=y
CONFIG_X86_DEV_DMA_OPS=y
CONFIG_NET=y
CONFIG_COMPAT_NETLINK_MESSAGES=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_SYN_COOKIES=y
CONFIG_INET_LRO=y
CONFIG_TCP_CONG_ADVANCED=y
CONFIG_TCP_CONG_CUBIC=y
CONFIG_DEFAULT_CUBIC=y
CONFIG_DEFAULT_TCP_CONG="cubic"
CONFIG_NETLABEL=y
CONFIG_NETWORK_SECMARK=y
CONFIG_NETFILTER=y
CONFIG_NETFILTER_ADVANCED=y
CONFIG_NF_CONNTRACK=m
CONFIG_NF_CONNTRACK_MARK=y
CONFIG_NF_CONNTRACK_SECMARK=y
CONFIG_NF_CONNTRACK_EVENTS=y
CONFIG_NF_CONNTRACK_TIMESTAMP=y
CONFIG_NETFILTER_XTABLES=m
CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
CONFIG_NETFILTER_XT_TARGET_LOG=m
CONFIG_NETFILTER_XT_MATCH_COMMENT=m
CONFIG_NETFILTER_XT_MATCH_MULTIPORT=m
CONFIG_NETFILTER_XT_MATCH_STATE=m
CONFIG_NF_DEFRAG_IPV4=m
CONFIG_NF_CONNTRACK_IPV4=m
CONFIG_IP_NF_IPTABLES=m
CONFIG_IP_NF_FILTER=m
CONFIG_IP_NF_TARGET_REJECT=m
CONFIG_IP_NF_MANGLE=m
CONFIG_HAVE_NET_DSA=y
CONFIG_DNS_RESOLVER=y
CONFIG_RPS=y
CONFIG_RFS_ACCEL=y
CONFIG_XPS=y
CONFIG_BQL=y
CONFIG_BT=m
CONFIG_BT_RFCOMM=m
CONFIG_BT_RFCOMM_TTY=y
CONFIG_BT_BNEP=m
CONFIG_BT_BNEP_MC_FILTER=y
CONFIG_BT_BNEP_PROTO_FILTER=y
CONFIG_WIRELESS=y
CONFIG_WEXT_CORE=y
CONFIG_WEXT_PROC=y
CONFIG_CFG80211=m
CONFIG_NL80211_TESTMODE=y
CONFIG_CFG80211_REG_DEBUG=y
CONFIG_CFG80211_DEFAULT_PS=y
CONFIG_CFG80211_DEBUGFS=y
CONFIG_CFG80211_WEXT=y
CONFIG_MAC80211=m
CONFIG_MAC80211_HAS_RC=y
CONFIG_MAC80211_RC_PID=y
CONFIG_MAC80211_RC_DEFAULT_PID=y
CONFIG_MAC80211_RC_DEFAULT="pid"
CONFIG_MAC80211_MESH=y
CONFIG_MAC80211_LEDS=y
CONFIG_MAC80211_DEBUGFS=y
CONFIG_MAC80211_DEBUG_MENU=y
CONFIG_RFKILL=y
CONFIG_RFKILL_LEDS=y
CONFIG_RFKILL_INPUT=y
CONFIG_HAVE_BPF_JIT=y
CONFIG_UEVENT_HELPER_PATH=""
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_PREVENT_FIRMWARE_BUILD=y
CONFIG_FW_LOADER=y
CONFIG_FIRMWARE_IN_KERNEL=y
CONFIG_EXTRA_FIRMWARE=""
CONFIG_FW_LOADER_USER_HELPER=y
CONFIG_REGMAP=y
CONFIG_REGMAP_I2C=y
CONFIG_REGMAP_IRQ=y
CONFIG_DMA_SHARED_BUFFER=y
CONFIG_CONNECTOR=y
CONFIG_PROC_EVENTS=y
CONFIG_PARPORT=m
CONFIG_PARPORT_PC=m
CONFIG_PARPORT_PC_FIFO=y
CONFIG_PARPORT_1284=y
CONFIG_PNP=y
CONFIG_PNP_DEBUG_MESSAGES=y
CONFIG_PNPACPI=y
CONFIG_BLK_DEV=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_LOOP_MIN_COUNT=8
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=65536
CONFIG_VIRTIO_BLK=y
CONFIG_INTEL_MEI=y
CONFIG_INTEL_MEI_ME=y
CONFIG_HAVE_IDE=y
CONFIG_SCSI_MOD=y
CONFIG_SCSI=y
CONFIG_SCSI_DMA=y
CONFIG_SCSI_PROC_FS=y
CONFIG_BLK_DEV_SD=y
CONFIG_BLK_DEV_SR=y
CONFIG_CHR_DEV_SG=y
CONFIG_SCSI_MULTI_LUN=y
CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_LOGGING=y
CONFIG_SCSI_SCAN_ASYNC=y
CONFIG_SCSI_SPI_ATTRS=y
CONFIG_SCSI_LOWLEVEL=y
CONFIG_MEGARAID_NEWGEN=y
CONFIG_SCSI_SYM53C8XX_2=y
CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=1
CONFIG_SCSI_SYM53C8XX_DEFAULT_TAGS=16
CONFIG_SCSI_SYM53C8XX_MAX_TAGS=64
CONFIG_SCSI_SYM53C8XX_MMIO=y
CONFIG_SCSI_DH=y
CONFIG_ATA=y
CONFIG_ATA_VERBOSE_ERROR=y
CONFIG_ATA_ACPI=y
CONFIG_SATA_PMP=y
CONFIG_SATA_AHCI=y
CONFIG_ATA_SFF=y
CONFIG_PDC_ADMA=y
CONFIG_ATA_BMDMA=y
CONFIG_ATA_PIIX=y
CONFIG_PATA_SIS=y
CONFIG_PATA_ACPI=y
CONFIG_ATA_GENERIC=y
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
CONFIG_MD_AUTODETECT=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=m
CONFIG_DM_UEVENT=y
CONFIG_NETDEVICES=y
CONFIG_NET_CORE=y
CONFIG_TUN=y
CONFIG_VIRTIO_NET=y
CONFIG_ETHERNET=y
CONFIG_MDIO=m
CONFIG_NET_CADENCE=y
CONFIG_NET_VENDOR_BROADCOM=y
CONFIG_BNX2=m
CONFIG_TIGON3=m
CONFIG_BNX2X=m
CONFIG_BNX2X_SRIOV=y
CONFIG_PHYLIB=y
CONFIG_BROADCOM_PHY=y
CONFIG_WLAN=y
CONFIG_IWLWIFI=m
CONFIG_IWLDVM=m
CONFIG_IWLWIFI_DEBUGFS=y
CONFIG_IWLWIFI_DEVICE_TRACING=y
CONFIG_IWLWIFI_DEVICE_TESTMODE=y
CONFIG_IWLWIFI_P2P=y
CONFIG_INPUT=y
CONFIG_INPUT_SPARSEKMAP=m
CONFIG_INPUT_MOUSEDEV=y
CONFIG_INPUT_MOUSEDEV_PSAUX=y
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
CONFIG_INPUT_EVDEV=y
CONFIG_INPUT_KEYBOARD=y
CONFIG_KEYBOARD_ATKBD=y
CONFIG_INPUT_MOUSE=y
CONFIG_MOUSE_PS2=m
CONFIG_MOUSE_PS2_ALPS=y
CONFIG_INPUT_MISC=y
CONFIG_INPUT_UINPUT=y
CONFIG_SERIO=y
CONFIG_SERIO_I8042=y
CONFIG_SERIO_LIBPS2=y
CONFIG_SERIO_RAW=m
CONFIG_TTY=y
CONFIG_VT=y
CONFIG_CONSOLE_TRANSLATIONS=y
CONFIG_VT_CONSOLE=y
CONFIG_VT_CONSOLE_SLEEP=y
CONFIG_HW_CONSOLE=y
CONFIG_VT_HW_CONSOLE_BINDING=y
CONFIG_UNIX98_PTYS=y
CONFIG_DEVPTS_MULTIPLE_INSTANCES=y
CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=0
CONFIG_SERIAL_NONSTANDARD=y
CONFIG_STALDRV=y
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_PNP=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_FIX_EARLYCON_MEM=y
CONFIG_SERIAL_8250_DMA=y
CONFIG_SERIAL_8250_PCI=y
CONFIG_SERIAL_8250_NR_UARTS=48
CONFIG_SERIAL_8250_RUNTIME_UARTS=32
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_MANY_PORTS=y
CONFIG_SERIAL_8250_SHARE_IRQ=y
CONFIG_SERIAL_8250_RSA=y
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
CONFIG_TTY_PRINTK=y
CONFIG_PRINTER=m
CONFIG_PPDEV=m
CONFIG_HW_RANDOM=y
CONFIG_HPET=y
CONFIG_HPET_MMAP=y
CONFIG_DEVPORT=y
CONFIG_I2C=y
CONFIG_I2C_BOARDINFO=y
CONFIG_I2C_COMPAT=y
CONFIG_I2C_ALGOBIT=m
CONFIG_PPS=m
CONFIG_PTP_1588_CLOCK=m
CONFIG_ARCH_WANT_OPTIONAL_GPIOLIB=y
CONFIG_GPIO_DEVRES=y
CONFIG_POWER_SUPPLY=y
CONFIG_HWMON=y
CONFIG_SENSORS_CORETEMP=m
CONFIG_THERMAL=y
CONFIG_THERMAL_HWMON=y
CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y
CONFIG_THERMAL_GOV_STEP_WISE=y
CONFIG_WATCHDOG=y
CONFIG_WATCHDOG_CORE=y
CONFIG_SSB_POSSIBLE=y
CONFIG_BCMA_POSSIBLE=y
CONFIG_MFD_CORE=y
CONFIG_MFD_88PM860X=y
CONFIG_MFD_TPS6586X=y
CONFIG_MFD_STMPE=y
CONFIG_MFD_TC3589X=y
CONFIG_PMIC_DA903X=y
CONFIG_PMIC_ADP5520=y
CONFIG_MFD_MAX8925=y
CONFIG_MFD_MAX8997=y
CONFIG_MFD_MAX8998=y
CONFIG_MFD_WM831X=y
CONFIG_MFD_WM831X_I2C=y
CONFIG_MFD_WM8350=y
CONFIG_MFD_WM8350_I2C=y
CONFIG_MFD_WM8994=y
CONFIG_ABX500_CORE=y
CONFIG_AB3100_CORE=y
CONFIG_REGULATOR=y
CONFIG_REGULATOR_88PM8607=y
CONFIG_MEDIA_SUPPORT=m
CONFIG_AGP=y
CONFIG_AGP_INTEL=y
CONFIG_AGP_VIA=y
CONFIG_VGA_ARB=y
CONFIG_VGA_ARB_MAX_GPUS=16
CONFIG_VGA_SWITCHEROO=y
CONFIG_DRM=m
CONFIG_DRM_KMS_HELPER=m
CONFIG_DRM_LOAD_EDID_FIRMWARE=y
CONFIG_DRM_I915=m
CONFIG_DRM_I915_KMS=y
CONFIG_VIDEO_OUTPUT_CONTROL=m
CONFIG_HDMI=y
CONFIG_FB=y
CONFIG_FIRMWARE_EDID=y
CONFIG_FB_CFB_FILLRECT=y
CONFIG_FB_CFB_COPYAREA=y
CONFIG_FB_CFB_IMAGEBLIT=y
CONFIG_FB_MODE_HELPERS=y
CONFIG_FB_TILEBLITTING=y
CONFIG_FB_ASILIANT=y
CONFIG_FB_IMSTT=y
CONFIG_FB_EFI=y
CONFIG_FB_GEODE=y
CONFIG_BACKLIGHT_LCD_SUPPORT=y
CONFIG_BACKLIGHT_CLASS_DEVICE=y
CONFIG_VGA_CONSOLE=y
CONFIG_DUMMY_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
CONFIG_FONT_8x8=y
CONFIG_FONT_8x16=y
CONFIG_SOUND=m
CONFIG_SND=m
CONFIG_SND_TIMER=m
CONFIG_SND_PCM=m
CONFIG_SND_HWDEP=m
CONFIG_SND_RAWMIDI=m
CONFIG_SND_JACK=y
CONFIG_SND_SEQUENCER=m
CONFIG_SND_DYNAMIC_MINORS=y
CONFIG_SND_SUPPORT_OLD_API=y
CONFIG_SND_VERBOSE_PROCFS=y
CONFIG_SND_VMASTER=y
CONFIG_SND_KCTL_JACK=y
CONFIG_SND_DMA_SGBUF=y
CONFIG_SND_RAWMIDI_SEQ=m
CONFIG_SND_DRIVERS=y
CONFIG_SND_VIRMIDI=m
CONFIG_SND_PCI=y
CONFIG_SND_HDA_INTEL=m
CONFIG_SND_HDA_PREALLOC_SIZE=64
CONFIG_SND_HDA_HWDEP=y
CONFIG_SND_HDA_RECONFIG=y
CONFIG_SND_HDA_INPUT_BEEP=y
CONFIG_SND_HDA_INPUT_BEEP_MODE=0
CONFIG_SND_HDA_INPUT_JACK=y
CONFIG_SND_HDA_PATCH_LOADER=y
CONFIG_SND_HDA_CODEC_REALTEK=y
CONFIG_SND_HDA_CODEC_ANALOG=y
CONFIG_SND_HDA_CODEC_SIGMATEL=y
CONFIG_SND_HDA_CODEC_VIA=y
CONFIG_SND_HDA_CODEC_HDMI=y
CONFIG_SND_HDA_CODEC_CIRRUS=y
CONFIG_SND_HDA_CODEC_CONEXANT=y
CONFIG_SND_HDA_CODEC_CA0110=y
CONFIG_SND_HDA_CODEC_CA0132=y
CONFIG_SND_HDA_CODEC_CMEDIA=y
CONFIG_SND_HDA_CODEC_SI3054=y
CONFIG_SND_HDA_GENERIC=y
CONFIG_SND_HDA_POWER_SAVE_DEFAULT=0
CONFIG_SND_USB=y
CONFIG_HID=m
CONFIG_HIDRAW=y
CONFIG_HID_GENERIC=m
CONFIG_USB_HID=m
CONFIG_HID_PID=y
CONFIG_USB_HIDDEV=y
CONFIG_USB_ARCH_HAS_OHCI=y
CONFIG_USB_ARCH_HAS_EHCI=y
CONFIG_USB_ARCH_HAS_XHCI=y
CONFIG_USB_SUPPORT=y
CONFIG_USB_COMMON=y
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB=y
CONFIG_USB_MON=y
CONFIG_USB_XHCI_HCD=y
CONFIG_USB_EHCI_HCD=y
CONFIG_USB_EHCI_ROOT_HUB_TT=y
CONFIG_USB_EHCI_TT_NEWSCHED=y
CONFIG_USB_EHCI_PCI=y
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_OHCI_LITTLE_ENDIAN=y
CONFIG_USB_UHCI_HCD=y
CONFIG_USB_STORAGE=m
CONFIG_USB_SERIAL=m
CONFIG_USB_SERIAL_GENERIC=y
CONFIG_USB_SERIAL_FTDI_SIO=m
CONFIG_MMC=y
CONFIG_MMC_BLOCK=m
CONFIG_MMC_BLOCK_MINORS=8
CONFIG_MMC_BLOCK_BOUNCE=y
CONFIG_MMC_SDHCI=m
CONFIG_MMC_SDHCI_PCI=m
CONFIG_MMC_RICOH_MMC=y
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
CONFIG_LEDS_TRIGGERS=y
CONFIG_EDAC=y
CONFIG_EDAC_LEGACY_SYSFS=y
CONFIG_RTC_LIB=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_INTF_SYSFS=y
CONFIG_RTC_INTF_PROC=y
CONFIG_RTC_INTF_DEV=y
CONFIG_RTC_DRV_CMOS=y
CONFIG_DMADEVICES=y
CONFIG_AUXDISPLAY=y
CONFIG_VIRTIO=y
CONFIG_VIRTIO_PCI=y
CONFIG_STAGING=y
CONFIG_ZSMALLOC=y
CONFIG_STAGING_MEDIA=y
CONFIG_NET_VENDOR_SILICOM=y
CONFIG_ZCACHE=y
CONFIG_X86_PLATFORM_DEVICES=y
CONFIG_DELL_LAPTOP=m
CONFIG_DELL_WMI=m
CONFIG_ACPI_WMI=m
CONFIG_CLKEVT_I8253=y
CONFIG_I8253_LOCK=y
CONFIG_CLKBLD_I8253=y
CONFIG_IOMMU_API=y
CONFIG_IOMMU_SUPPORT=y
CONFIG_DMAR_TABLE=y
CONFIG_INTEL_IOMMU=y
CONFIG_INTEL_IOMMU_FLOPPY_WA=y
CONFIG_IRQ_REMAP=y
CONFIG_VIRT_DRIVERS=y
CONFIG_PM_DEVFREQ=y
CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND=y
CONFIG_DEVFREQ_GOV_PERFORMANCE=y
CONFIG_DEVFREQ_GOV_POWERSAVE=y
CONFIG_DEVFREQ_GOV_USERSPACE=y
CONFIG_EDD=y
CONFIG_EDD_OFF=y
CONFIG_FIRMWARE_MEMMAP=y
CONFIG_EFI_VARS=y
CONFIG_DCDBAS=m
CONFIG_DMIID=y
CONFIG_ISCSI_IBFT_FIND=y
CONFIG_DCACHE_WORD_ACCESS=y
CONFIG_EXT3_FS=y
CONFIG_EXT3_DEFAULTS_TO_ORDERED=y
CONFIG_EXT3_FS_XATTR=y
CONFIG_EXT3_FS_POSIX_ACL=y
CONFIG_EXT3_FS_SECURITY=y
CONFIG_EXT4_FS=y
CONFIG_EXT4_USE_FOR_EXT23=y
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
CONFIG_JBD=y
CONFIG_JBD2=y
CONFIG_FS_MBCACHE=y
CONFIG_FS_POSIX_ACL=y
CONFIG_EXPORTFS=y
CONFIG_FILE_LOCKING=y
CONFIG_FSNOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_INOTIFY_USER=y
CONFIG_FANOTIFY=y
CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y
CONFIG_FUSE_FS=y
CONFIG_GENERIC_ACL=y
CONFIG_ISO9660_FS=m
CONFIG_JOLIET=y
CONFIG_ZISOFS=y
CONFIG_UDF_FS=m
CONFIG_UDF_NLS=y
CONFIG_FAT_FS=m
CONFIG_VFAT_FS=m
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
CONFIG_NTFS_FS=m
CONFIG_NTFS_RW=y
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_PROC_SYSCTL=y
CONFIG_PROC_PAGE_MONITOR=y
CONFIG_SYSFS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_TMPFS_XATTR=y
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_MISC_FILESYSTEMS=y
CONFIG_ECRYPT_FS=y
CONFIG_PSTORE=y
CONFIG_NETWORK_FILESYSTEMS=y
CONFIG_NFS_FS=m
CONFIG_NFS_V3=m
CONFIG_NFS_V4=m
CONFIG_NFS_USE_KERNEL_DNS=y
CONFIG_LOCKD=m
CONFIG_LOCKD_V4=y
CONFIG_NFS_COMMON=y
CONFIG_SUNRPC=m
CONFIG_SUNRPC_GSS=m
CONFIG_NLS=y
CONFIG_NLS_DEFAULT="utf8"
CONFIG_NLS_CODEPAGE_437=m
CONFIG_NLS_CODEPAGE_1250=m
CONFIG_NLS_ISO8859_1=m
CONFIG_NLS_ISO8859_2=m
CONFIG_NLS_UTF8=m
CONFIG_TRACE_IRQFLAGS_SUPPORT=y
CONFIG_PRINTK_TIME=y
CONFIG_DEFAULT_MESSAGE_LOGLEVEL=4
CONFIG_FRAME_WARN=1024
CONFIG_MAGIC_SYSRQ=y
CONFIG_UNUSED_SYMBOLS=y
CONFIG_DEBUG_FS=y
CONFIG_DEBUG_KERNEL=y
CONFIG_LOCKUP_DETECTOR=y
CONFIG_HARDLOCKUP_DETECTOR=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC_VALUE=0
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=0
CONFIG_PANIC_ON_OOPS_VALUE=0
CONFIG_DETECT_HUNG_TASK=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=120
CONFIG_BOOTPARAM_HUNG_TASK_PANIC_VALUE=0
CONFIG_SCHED_DEBUG=y
CONFIG_SCHEDSTATS=y
CONFIG_TIMER_STATS=y
CONFIG_HAVE_DEBUG_KMEMLEAK=y
CONFIG_STACKTRACE=y
CONFIG_DEBUG_BUGVERBOSE=y
CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_INFO_REDUCED=y
CONFIG_ARCH_WANT_FRAME_POINTERS=y
CONFIG_FRAME_POINTER=y
CONFIG_BOOT_PRINTK_DELAY=y
CONFIG_RCU_CPU_STALL_TIMEOUT=60
CONFIG_USER_STACKTRACE_SUPPORT=y
CONFIG_NOP_TRACER=y
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_FP_TEST=y
CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
CONFIG_HAVE_FENTRY=y
CONFIG_HAVE_C_RECORDMCOUNT=y
CONFIG_TRACER_MAX_TRACE=y
CONFIG_TRACE_CLOCK=y
CONFIG_RING_BUFFER=y
CONFIG_EVENT_TRACING=y
CONFIG_CONTEXT_SWITCH_TRACER=y
CONFIG_RING_BUFFER_ALLOW_SWAP=y
CONFIG_TRACING=y
CONFIG_GENERIC_TRACER=y
CONFIG_TRACING_SUPPORT=y
CONFIG_FTRACE=y
CONFIG_FUNCTION_TRACER=y
CONFIG_FUNCTION_GRAPH_TRACER=y
CONFIG_SCHED_TRACER=y
CONFIG_FTRACE_SYSCALLS=y
CONFIG_TRACER_SNAPSHOT=y
CONFIG_BRANCH_PROFILE_NONE=y
CONFIG_STACK_TRACER=y
CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_KPROBE_EVENT=y
CONFIG_PROBE_EVENTS=y
CONFIG_DYNAMIC_FTRACE=y
CONFIG_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_FUNCTION_PROFILER=y
CONFIG_FTRACE_MCOUNT_RECORD=y
CONFIG_MMIOTRACE=y
CONFIG_HAVE_ARCH_KGDB=y
CONFIG_HAVE_ARCH_KMEMCHECK=y
CONFIG_STRICT_DEVMEM=y
CONFIG_EARLY_PRINTK=y
CONFIG_DEBUG_RODATA=y
CONFIG_DEBUG_SET_MODULE_RONX=y
CONFIG_HAVE_MMIOTRACE_SUPPORT=y
CONFIG_IO_DELAY_TYPE_0X80=0
CONFIG_IO_DELAY_TYPE_0XED=1
CONFIG_IO_DELAY_TYPE_UDELAY=2
CONFIG_IO_DELAY_TYPE_NONE=3
CONFIG_IO_DELAY_0XED=y
CONFIG_DEFAULT_IO_DELAY_TYPE=1
CONFIG_OPTIMIZE_INLINING=y
CONFIG_KEYS=y
CONFIG_ENCRYPTED_KEYS=y
CONFIG_SECURITY=y
CONFIG_SECURITYFS=y
CONFIG_SECURITY_NETWORK=y
CONFIG_SECURITY_PATH=y
CONFIG_INTEL_TXT=y
CONFIG_LSM_MMAP_MIN_ADDR=0
CONFIG_SECURITY_SELINUX=y
CONFIG_SECURITY_SELINUX_BOOTPARAM=y
CONFIG_SECURITY_SELINUX_BOOTPARAM_VALUE=0
CONFIG_SECURITY_SELINUX_DISABLE=y
CONFIG_SECURITY_SELINUX_DEVELOP=y
CONFIG_SECURITY_SELINUX_AVC_STATS=y
CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE=1
CONFIG_SECURITY_SMACK=y
CONFIG_SECURITY_TOMOYO=y
CONFIG_SECURITY_TOMOYO_MAX_ACCEPT_ENTRY=2048
CONFIG_SECURITY_TOMOYO_MAX_AUDIT_LOG=1024
CONFIG_SECURITY_TOMOYO_POLICY_LOADER="/sbin/tomoyo-init"
CONFIG_SECURITY_TOMOYO_ACTIVATION_TRIGGER="/sbin/init"
CONFIG_SECURITY_APPARMOR=y
CONFIG_SECURITY_APPARMOR_BOOTPARAM_VALUE=1
CONFIG_SECURITY_YAMA=y
CONFIG_INTEGRITY=y
CONFIG_EVM=y
CONFIG_EVM_HMAC_VERSION=2
CONFIG_DEFAULT_SECURITY_APPARMOR=y
CONFIG_DEFAULT_SECURITY="apparmor"
CONFIG_CRYPTO=y
CONFIG_CRYPTO_ALGAPI=y
CONFIG_CRYPTO_ALGAPI2=y
CONFIG_CRYPTO_AEAD2=y
CONFIG_CRYPTO_BLKCIPHER=y
CONFIG_CRYPTO_BLKCIPHER2=y
CONFIG_CRYPTO_HASH=y
CONFIG_CRYPTO_HASH2=y
CONFIG_CRYPTO_RNG=y
CONFIG_CRYPTO_RNG2=y
CONFIG_CRYPTO_PCOMP2=y
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_MANAGER2=y
CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y
CONFIG_CRYPTO_WORKQUEUE=y
CONFIG_CRYPTO_CBC=y
CONFIG_CRYPTO_ECB=y
CONFIG_CRYPTO_HMAC=y
CONFIG_CRYPTO_CRC32C=y
CONFIG_CRYPTO_CRC32C_X86_64=y
CONFIG_CRYPTO_CRC32C_INTEL=y
CONFIG_CRYPTO_MD5=y
CONFIG_CRYPTO_SHA1=y
CONFIG_CRYPTO_SHA256=y
CONFIG_CRYPTO_AES=y
CONFIG_CRYPTO_ARC4=m
CONFIG_CRYPTO_LZO=y
CONFIG_CRYPTO_HW=y
CONFIG_CRYPTO_DEV_PADLOCK=y
CONFIG_HAVE_KVM=y
CONFIG_BINARY_PRINTF=y
CONFIG_BITREVERSE=y
CONFIG_GENERIC_STRNCPY_FROM_USER=y
CONFIG_GENERIC_STRNLEN_USER=y
CONFIG_GENERIC_FIND_FIRST_BIT=y
CONFIG_GENERIC_PCI_IOMAP=y
CONFIG_GENERIC_IOMAP=y
CONFIG_GENERIC_IO=y
CONFIG_CRC16=y
CONFIG_CRC_T10DIF=y
CONFIG_CRC_ITU_T=m
CONFIG_CRC32=y
CONFIG_CRC32_SLICEBY8=y
CONFIG_LIBCRC32C=m
CONFIG_ZLIB_INFLATE=y
CONFIG_LZO_COMPRESS=y
CONFIG_LZO_DECOMPRESS=y
CONFIG_XZ_DEC=y
CONFIG_XZ_DEC_X86=y
CONFIG_XZ_DEC_IA64=y
CONFIG_XZ_DEC_BCJ=y
CONFIG_DECOMPRESS_GZIP=y
CONFIG_GENERIC_ALLOCATOR=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT=y
CONFIG_HAS_DMA=y
CONFIG_CPU_RMAP=y
CONFIG_DQL=y
CONFIG_NLATTR=y
CONFIG_ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE=y
CONFIG_AVERAGE=y
* Re: Unusually high system CPU usage with recent kernels
2013-08-20 6:01 Tibor Billes
@ 2013-08-20 14:53 ` Paul E. McKenney
0 siblings, 0 replies; 28+ messages in thread
From: Paul E. McKenney @ 2013-08-20 14:53 UTC (permalink / raw)
To: Tibor Billes; +Cc: linux-kernel
On Tue, Aug 20, 2013 at 08:01:28AM +0200, Tibor Billes wrote:
> Hi,
>
> I was using the 3.9.7 stable release and tried to upgrade to the 3.10.x series.
> The 3.10.x series was showing unusually high (>75%) system CPU usage in some
> situations, making things really slow. The latest stable I tried is 3.10.7.
> I also tried 3.11-rc5, they both show this behaviour. This behaviour doesn't
> show up when the system is idling, only when doing some CPU intensive work,
> like compiling with multiple threads. Compiling with only one thread seems not
> to trigger this behaviour.
>
> To be more precise I did a `perf record -a` while compiling a large C++ program
> with scons using 4 threads, the result is appended at the end of this email.
New one on me! You are running a mainstream system (x86_64), so I am
surprised no one else noticed.
Could you please send along your .config file?
Thanx, Paul
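(The bisection in the quoted report below is a `git bisect` run in which unbootable commits were skipped; a rough sketch of that workflow, with the good and bad revisions as placeholders rather than the exact ones used, is:

$ git bisect start
$ git bisect bad v3.10          # a revision showing the high system CPU usage
$ git bisect good v3.9          # a known-good revision
$ # build and boot each kernel git checks out, then mark it:
$ git bisect good               # or: git bisect bad
$ git bisect skip               # if the candidate kernel does not boot
$ git bisect reset              # when finished

When enough neighbouring commits are skipped, git can no longer isolate a single first bad commit and instead prints the remaining candidates, which is why the report lists several commit IDs.)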
> I also tried bisecting it and got the following:
> There are only 'skip'ped commits left to test.
> The first bad commit could be any of:
> b8462084a2a88a6a0489f9bb7d8b1bb95bc455ab
> bd9f0686fc8c9a01c6850b1c611d1c9ad80b86d6
> 8b425aa8f1acfe48aed919c7aadff2ed290fe969
> b92db6cb7efcbd41e469e1d757c47da4865f7622
> 0446be489795d8bb994125a916ef03211f539e54
> c0f4dfd4f90f1667d234d21f15153ea09a2eaa66
> 910ee45db2f4837c8440e770474758493ab94bf7
> We cannot bisect more!
>
> I skipped these commits because they didn't boot. (I didn't spend any time
> figuring out why.) They are all RCU-related changes. The perf I mentioned above
> was done using commit 910ee45db2f4837c8440e770474758493ab94bf7,
> which is the first bad commit I could boot into.
>
> Does anyone have any idea why this is happening?
>
> I'd be happy to provide more information if needed.
>
> Tibor
>
> Here's the perf report:
> Note: I truncated the output, so it doesn't show lines with less than 0.01%
> overhead.
> # ========
> # captured on: Sun Aug 11 23:08:27 2013
> # hostname : altair
> # os release : 3.9.0-rc2+
> # perf version : 3.9.3.g4bb086
> # arch : x86_64
> # nrcpus online : 4
> # nrcpus avail : 4
> # cpudesc : Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz
> # cpuid : GenuineIntel,6,58,9
> # total memory : 7981164 kB
> # cmdline : /home/tbilles/linux/stable/tools/perf/perf record -a
> # event : name = cycles, type = 0, config = 0x0, config1 = 0x0, config2 = 0x0, excl_usr = 0, excl_kern = 0, excl_host = 0, excl_guest = 1, precise_ip = 0
> # HEADER_CPU_TOPOLOGY info available, use -I to display
> # HEADER_NUMA_TOPOLOGY info available, use -I to display
> # pmu mappings: cpu = 4, software = 1, tracepoint = 2, breakpoint = 5
> # ========
> #
> # Samples: 1M of event 'cycles'
> # Event count (approx.): 1179107753326
> #
> # Overhead Command Shared Object Symbol
> # ........ ............... ............................... ......................................................
> #
> 2.49% cc1plus [kernel.kallsyms] [k] __switch_to
> 2.06% cc1plus [kernel.kallsyms] [k] __schedule
> 1.58% cc1plus [kernel.kallsyms] [k] native_sched_clock
> 1.36% cc1plus [kernel.kallsyms] [k] _raw_spin_lock
> 1.32% ksoftirqd/1 [kernel.kallsyms] [k] __switch_to
> 1.28% ksoftirqd/3 [kernel.kallsyms] [k] __switch_to
> 1.24% python python2.7 [.] 0x0000000000023f62
> 1.20% cc1plus [kernel.kallsyms] [k] update_curr
> 1.18% ksoftirqd/3 [kernel.kallsyms] [k] __schedule
> 1.18% ksoftirqd/0 [kernel.kallsyms] [k] __switch_to
> 1.13% ksoftirqd/1 [kernel.kallsyms] [k] __schedule
> 1.10% ksoftirqd/2 [kernel.kallsyms] [k] __switch_to
> 1.08% cc1plus [kernel.kallsyms] [k] enqueue_entity
> 1.05% ksoftirqd/0 [kernel.kallsyms] [k] __schedule
> 1.03% ksoftirqd/2 [kernel.kallsyms] [k] __schedule
> 0.98% cc1plus [kernel.kallsyms] [k] local_clock
> 0.95% cc1plus [kernel.kallsyms] [k] put_prev_task_fair
> 0.92% cc1plus [kernel.kallsyms] [k] __acct_update_integrals
> 0.90% python [kernel.kallsyms] [k] __switch_to
> 0.84% python [kernel.kallsyms] [k] __schedule
> 0.78% cc1plus [kernel.kallsyms] [k] try_to_wake_up
> 0.75% cc1plus [kernel.kallsyms] [k] sched_clock_cpu
> 0.70% ksoftirqd/0 [kernel.kallsyms] [k] native_sched_clock
> 0.70% ksoftirqd/3 [kernel.kallsyms] [k] native_sched_clock
> 0.69% ksoftirqd/2 [kernel.kallsyms] [k] native_sched_clock
> 0.67% ksoftirqd/1 [kernel.kallsyms] [k] native_sched_clock
> 0.65% cc1plus [kernel.kallsyms] [k] gs_change
> 0.63% ksoftirqd/2 [kernel.kallsyms] [k] gs_change
> 0.61% python [kernel.kallsyms] [k] native_sched_clock
> 0.59% cc1plus [kernel.kallsyms] [k] rcu_eqs_enter_common.isra.44
> 0.58% cc1plus [kernel.kallsyms] [k] rcu_eqs_exit_common.isra.43
> 0.58% cc1plus [kernel.kallsyms] [k] enqueue_task_fair
> 0.57% cc1plus [kernel.kallsyms] [k] account_system_time
> 0.57% cc1plus [kernel.kallsyms] [k] __enqueue_entity
> 0.55% ksoftirqd/2 [kernel.kallsyms] [k] rcu_process_callbacks
> 0.55% ksoftirqd/0 [kernel.kallsyms] [k] rcu_process_callbacks
> 0.54% ksoftirqd/3 [kernel.kallsyms] [k] rcu_process_callbacks
> 0.53% cc1plus [kernel.kallsyms] [k] user_enter
> 0.53% ksoftirqd/1 [kernel.kallsyms] [k] rcu_process_callbacks
> 0.53% cc1plus [kernel.kallsyms] [k] check_preempt_wakeup
> 0.53% python [kernel.kallsyms] [k] _raw_spin_lock
> 0.50% ksoftirqd/0 [kernel.kallsyms] [k] gs_change
> 0.50% ksoftirqd/3 [kernel.kallsyms] [k] _raw_spin_lock
> 0.49% ksoftirqd/1 [kernel.kallsyms] [k] _raw_spin_lock
> 0.48% ksoftirqd/0 [kernel.kallsyms] [k] _raw_spin_lock
> 0.48% ksoftirqd/2 [kernel.kallsyms] [k] _raw_spin_lock
> 0.46% python [kernel.kallsyms] [k] update_curr
> 0.46% cc1plus [kernel.kallsyms] [k] get_vtime_delta
> 0.45% ksoftirqd/0 [kernel.kallsyms] [k] smpboot_thread_fn
> 0.45% ksoftirqd/2 [kernel.kallsyms] [k] smpboot_thread_fn
> 0.45% sh [kernel.kallsyms] [k] __switch_to
> 0.44% ksoftirqd/3 [kernel.kallsyms] [k] smpboot_thread_fn
> 0.44% ksoftirqd/1 [kernel.kallsyms] [k] smpboot_thread_fn
> 0.44% cc1plus [kernel.kallsyms] [k] user_exit
> 0.43% cc1plus [kernel.kallsyms] [k] context_tracking_task_switch
> 0.43% ksoftirqd/3 [kernel.kallsyms] [k] local_clock
> 0.43% ksoftirqd/0 [kernel.kallsyms] [k] local_clock
> 0.42% ksoftirqd/2 [kernel.kallsyms] [k] local_clock
> 0.42% cc1plus [kernel.kallsyms] [k] resched_task
> 0.42% ksoftirqd/1 [kernel.kallsyms] [k] local_clock
> 0.41% cc1plus [kernel.kallsyms] [k] raise_softirq
> 0.40% cc1plus [kernel.kallsyms] [k] finish_task_switch
> 0.38% sh [kernel.kallsyms] [k] __schedule
> 0.37% ksoftirqd/1 [kernel.kallsyms] [k] gs_change
> 0.37% python [kernel.kallsyms] [k] local_clock
> 0.37% ksoftirqd/3 [kernel.kallsyms] [k] gs_change
> 0.37% ksoftirqd/2 [kernel.kallsyms] [k] account_system_time
> 0.36% python [kernel.kallsyms] [k] enqueue_entity
> 0.36% ksoftirqd/1 [kernel.kallsyms] [k] check_for_new_grace_period.isra.37
> 0.36% ksoftirqd/0 [kernel.kallsyms] [k] check_for_new_grace_period.isra.37
> 0.36% ksoftirqd/3 [kernel.kallsyms] [k] check_for_new_grace_period.isra.37
> 0.35% ksoftirqd/2 [kernel.kallsyms] [k] __do_softirq
> 0.35% ksoftirqd/2 [kernel.kallsyms] [k] check_for_new_grace_period.isra.37
> 0.35% ksoftirqd/0 [kernel.kallsyms] [k] account_system_time
> 0.35% ksoftirqd/3 [kernel.kallsyms] [k] dequeue_entity
> 0.35% ksoftirqd/0 [kernel.kallsyms] [k] __do_softirq
> 0.35% ksoftirqd/3 [kernel.kallsyms] [k] __do_softirq
> 0.35% ksoftirqd/1 [kernel.kallsyms] [k] account_system_time
> 0.35% ksoftirqd/1 [kernel.kallsyms] [k] __do_softirq
> 0.35% python [kernel.kallsyms] [k] put_prev_task_fair
> 0.34% ksoftirqd/0 [kernel.kallsyms] [k] rcu_process_gp_end
> 0.34% ksoftirqd/3 [kernel.kallsyms] [k] account_system_time
> 0.34% ksoftirqd/1 [kernel.kallsyms] [k] dequeue_entity
> 0.34% python [kernel.kallsyms] [k] __acct_update_integrals
> 0.34% ksoftirqd/2 [kernel.kallsyms] [k] rcu_process_gp_end
> 0.34% ksoftirqd/1 [kernel.kallsyms] [k] rcu_process_gp_end
> 0.33% cc1plus [kernel.kallsyms] [k] select_task_rq_fair
> 0.33% ksoftirqd/2 [kernel.kallsyms] [k] sched_clock_cpu
> 0.33% ksoftirqd/3 [kernel.kallsyms] [k] rcu_process_gp_end
> 0.33% ksoftirqd/0 [kernel.kallsyms] [k] dequeue_entity
> 0.33% cc1plus [kernel.kallsyms] [k] int_careful
> 0.32% ksoftirqd/2 [kernel.kallsyms] [k] dequeue_entity
> 0.32% ksoftirqd/0 [kernel.kallsyms] [k] sched_clock_cpu
> 0.32% cc1plus [kernel.kallsyms] [k] pick_next_task_fair
> 0.31% ksoftirqd/1 [kernel.kallsyms] [k] sched_clock_cpu
> 0.31% ksoftirqd/3 [kernel.kallsyms] [k] sched_clock_cpu
> 0.30% cc1plus [kernel.kallsyms] [k] jiffies_to_timeval
> 0.29% cc1plus [kernel.kallsyms] [k] set_next_entity
> 0.29% cc1plus [kernel.kallsyms] [k] vtime_account_user
> 0.29% python [kernel.kallsyms] [k] sched_clock_cpu
> 0.29% sh [kernel.kallsyms] [k] native_sched_clock
> 0.28% ksoftirqd/0 [kernel.kallsyms] [k] dequeue_task_fair
> 0.28% python [kernel.kallsyms] [k] try_to_wake_up
> 0.28% ksoftirqd/0 [kernel.kallsyms] [k] set_next_entity
> 0.28% cc1plus [kernel.kallsyms] [k] clear_buddies
> 0.27% ksoftirqd/2 [kernel.kallsyms] [k] dequeue_task_fair
> 0.27% cc1plus [kernel.kallsyms] [k] _raw_spin_lock_irqsave
> 0.27% cc1plus [kernel.kallsyms] [k] update_stats_wait_end
> 0.27% ksoftirqd/1 [kernel.kallsyms] [k] set_next_entity
> 0.27% ksoftirqd/3 [kernel.kallsyms] [k] set_next_entity
> 0.26% ksoftirqd/2 [kernel.kallsyms] [k] set_next_entity
> 0.26% cc1plus [kernel.kallsyms] [k] place_entity
> 0.26% ksoftirqd/0 [kernel.kallsyms] [k] __acct_update_integrals
> 0.26% arm-none-eabi-g [kernel.kallsyms] [k] __switch_to
> 0.26% ksoftirqd/1 [kernel.kallsyms] [k] dequeue_task_fair
> 0.26% cc1plus [kernel.kallsyms] [k] rb_insert_color
> 0.26% ksoftirqd/3 [kernel.kallsyms] [k] __acct_update_integrals
> 0.25% sh [kernel.kallsyms] [k] _raw_spin_lock
> 0.25% ksoftirqd/1 [kernel.kallsyms] [k] __acct_update_integrals
> 0.25% ksoftirqd/2 [kernel.kallsyms] [k] __acct_update_integrals
> 0.25% ksoftirqd/3 [kernel.kallsyms] [k] dequeue_task_fair
> 0.24% cc1plus [kernel.kallsyms] [k] _raw_spin_lock_irq
> 0.23% ksoftirqd/3 [kernel.kallsyms] [k] update_stats_wait_end
> 0.23% python [kernel.kallsyms] [k] rcu_eqs_enter_common.isra.44
> 0.23% cc1plus [kernel.kallsyms] [k] cpuacct_charge
> 0.23% sh [kernel.kallsyms] [k] update_curr
> 0.23% ksoftirqd/1 [kernel.kallsyms] [k] update_stats_wait_end
> 0.22% cc1plus [kernel.kallsyms] [k] __vtime_account_system
> 0.22% ksoftirqd/2 [kernel.kallsyms] [k] update_stats_wait_end
> 0.22% cc1plus [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore
> 0.22% python [kernel.kallsyms] [k] account_system_time
> 0.22% python [kernel.kallsyms] [k] __enqueue_entity
> 0.22% python [kernel.kallsyms] [k] rcu_eqs_exit_common.isra.43
> 0.22% python [kernel.kallsyms] [k] enqueue_task_fair
> 0.22% ksoftirqd/1 [kernel.kallsyms] [k] update_curr
> 0.22% cc1plus [kernel.kallsyms] [k] arch_vtime_task_switch
> 0.22% ksoftirqd/3 [kernel.kallsyms] [k] update_curr
> 0.21% ksoftirqd/0 [kernel.kallsyms] [k] update_stats_wait_end
> 0.21% ksoftirqd/3 [kernel.kallsyms] [k] get_vtime_delta
> 0.21% cc1plus [kernel.kallsyms] [k] enqueue_task
> 0.20% arm-none-eabi-g [kernel.kallsyms] [k] __schedule
> 0.20% ksoftirqd/0 [kernel.kallsyms] [k] context_tracking_task_switch
> 0.20% ksoftirqd/2 [kernel.kallsyms] [k] context_tracking_task_switch
> 0.20% python [kernel.kallsyms] [k] user_enter
> 0.20% ksoftirqd/2 [kernel.kallsyms] [k] update_curr
> 0.19% cc1plus [kernel.kallsyms] [k] account_user_time
> 0.19% ksoftirqd/3 [kernel.kallsyms] [k] context_tracking_task_switch
> 0.19% ksoftirqd/1 [kernel.kallsyms] [k] pick_next_task_fair
> 0.19% ksoftirqd/1 [kernel.kallsyms] [k] context_tracking_task_switch
> 0.19% ksoftirqd/0 [kernel.kallsyms] [k] update_curr
> 0.19% cc1plus [kernel.kallsyms] [k] check_preempt_curr
> 0.19% ksoftirqd/0 [kernel.kallsyms] [k] get_vtime_delta
> 0.18% ksoftirqd/2 [kernel.kallsyms] [k] pick_next_task_fair
> 0.18% ksoftirqd/1 [kernel.kallsyms] [k] get_vtime_delta
> 0.18% python [kernel.kallsyms] [k] finish_task_switch
> 0.18% python [kernel.kallsyms] [k] check_preempt_wakeup
> 0.18% sh [kernel.kallsyms] [k] local_clock
> 0.18% ksoftirqd/2 [kernel.kallsyms] [k] get_vtime_delta
> 0.18% ksoftirqd/0 [kernel.kallsyms] [k] pick_next_task_fair
> 0.18% python [kernel.kallsyms] [k] get_vtime_delta
> 0.18% perf [kernel.kallsyms] [k] aes_encrypt
> 0.17% python [kernel.kallsyms] [k] user_exit
> 0.17% sh [kernel.kallsyms] [k] enqueue_entity
> 0.17% python [kernel.kallsyms] [k] context_tracking_task_switch
> 0.17% sh [kernel.kallsyms] [k] __acct_update_integrals
> 0.16% sh [kernel.kallsyms] [k] put_prev_task_fair
> 0.16% cc1plus [kernel.kallsyms] [k] rcu_try_advance_all_cbs
> 0.16% ksoftirqd/3 [kernel.kallsyms] [k] pick_next_task_fair
> 0.16% cc1plus [kernel.kallsyms] [k] ttwu_do_wakeup
> 0.15% cc1plus [kernel.kallsyms] [k] set_next_buddy
> 0.15% ksoftirqd/1 [kernel.kallsyms] [k] vtime_account_system
> 0.15% python [kernel.kallsyms] [k] raise_softirq
> 0.15% cc1plus [kernel.kallsyms] [k] update_cfs_shares
> 0.15% cc1plus [kernel.kallsyms] [k] update_cfs_rq_blocked_load
> 0.15% arm-none-eabi-g [kernel.kallsyms] [k] native_sched_clock
> 0.15% ksoftirqd/2 [kernel.kallsyms] [k] vtime_account_system
> 0.15% cc1plus [kernel.kallsyms] [k] wakeup_preempt_entity.isra.39
> 0.15% cc1plus [kernel.kallsyms] [k] acct_account_cputime
> 0.15% ksoftirqd/3 [kernel.kallsyms] [k] vtime_account_system
> 0.15% cc1plus [kernel.kallsyms] [k] update_rq_clock
> 0.14% ksoftirqd/0 [kernel.kallsyms] [k] vtime_account_system
> 0.14% python [kernel.kallsyms] [k] resched_task
> 0.14% cc1plus [kernel.kallsyms] [k] rcu_eqs_exit
> 0.14% sh [kernel.kallsyms] [k] sched_clock_cpu
> 0.13% cc1plus [kernel.kallsyms] [k] vtime_user_enter
> 0.13% ksoftirqd/2 [kernel.kallsyms] [k] rb_erase
> 0.13% cc1plus [kernel.kallsyms] [k] vtime_account_system
> 0.13% ksoftirqd/2 [kernel.kallsyms] [k] __vtime_account_system
> 0.13% ksoftirqd/1 [kernel.kallsyms] [k] __vtime_account_system
> 0.13% ksoftirqd/0 [kernel.kallsyms] [k] rb_erase
> 0.13% cc1plus [kernel.kallsyms] [k] perf_event_task_sched_out
> 0.13% ksoftirqd/0 [kernel.kallsyms] [k] dequeue_task
> 0.13% sh [kernel.kallsyms] [k] try_to_wake_up
> 0.13% arm-none-eabi-g [kernel.kallsyms] [k] _raw_spin_lock
> 0.13% ksoftirqd/1 [kernel.kallsyms] [k] rb_erase
> 0.13% ksoftirqd/0 [kernel.kallsyms] [k] clear_buddies
> 0.13% arm-none-eabi-g [kernel.kallsyms] [k] update_curr
> 0.12% ksoftirqd/2 [kernel.kallsyms] [k] dequeue_task
> 0.12% cc1plus [kernel.kallsyms] [k] schedule_user
> 0.12% ksoftirqd/0 [kernel.kallsyms] [k] __vtime_account_system
> 0.12% ksoftirqd/3 [kernel.kallsyms] [k] __vtime_account_system
> 0.12% ksoftirqd/1 [kernel.kallsyms] [k] dequeue_task
> 0.12% cc1plus [kernel.kallsyms] [k] ttwu_stat
> 0.12% ksoftirqd/3 [kernel.kallsyms] [k] dequeue_task
> 0.12% python [kernel.kallsyms] [k] select_task_rq_fair
> 0.12% cc1plus [kernel.kallsyms] [k] ttwu_do_activate.constprop.87
> 0.12% ksoftirqd/3 [kernel.kallsyms] [k] rb_erase
> 0.12% cc1plus [kernel.kallsyms] [k] __raise_softirq_irqoff
> 0.12% cc1plus [kernel.kallsyms] [k] rb_erase
> 0.12% python [kernel.kallsyms] [k] pick_next_task_fair
> 0.12% cc1plus [kernel.kallsyms] [k] rcu_note_context_switch
> 0.11% cc1plus cc1plus [.] 0x00000000009bd77f
> 0.11% cc1plus [kernel.kallsyms] [k] rcu_eqs_enter
> 0.11% as [kernel.kallsyms] [k] __switch_to
> 0.11% python [kernel.kallsyms] [k] jiffies_to_timeval
> 0.11% sh [kernel.kallsyms] [k] __enqueue_entity
> 0.11% ksoftirqd/2 [kernel.kallsyms] [k] clear_buddies
> 0.11% ksoftirqd/2 [kernel.kallsyms] [k] _raw_spin_lock_irq
> 0.11% ksoftirqd/3 [kernel.kallsyms] [k] clear_buddies
> 0.11% python [kernel.kallsyms] [k] set_next_entity
> 0.11% ksoftirqd/0 [kernel.kallsyms] [k] _raw_spin_lock_irq
> 0.11% sh [kernel.kallsyms] [k] enqueue_task_fair
> 0.11% python libc-2.15.so [.] 0x0000000000147fc5
> 0.11% ksoftirqd/1 [kernel.kallsyms] [k] clear_buddies
> 0.11% ksoftirqd/1 [kernel.kallsyms] [k] _raw_spin_lock_irq
> 0.11% sh [kernel.kallsyms] [k] rcu_eqs_exit_common.isra.43
> 0.10% ksoftirqd/3 [kernel.kallsyms] [k] finish_task_switch
> 0.10% sh [kernel.kallsyms] [k] rcu_eqs_enter_common.isra.44
> 0.10% python [kernel.kallsyms] [k] clear_buddies
> 0.10% python [kernel.kallsyms] [k] update_stats_wait_end
> 0.10% ksoftirqd/3 [kernel.kallsyms] [k] _raw_spin_lock_irq
> 0.10% ksoftirqd/0 [kernel.kallsyms] [k] cpuacct_charge
> 0.10% python [kernel.kallsyms] [k] _raw_spin_lock_irqsave
> 0.10% sh [kernel.kallsyms] [k] account_system_time
> 0.10% ksoftirqd/3 [kernel.kallsyms] [k] arch_vtime_task_switch
> 0.10% ksoftirqd/1 [kernel.kallsyms] [k] finish_task_switch
> 0.10% arm-none-eabi-g [kernel.kallsyms] [k] enqueue_entity
> 0.10% sh [kernel.kallsyms] [k] user_enter
> 0.10% python [kernel.kallsyms] [k] _raw_spin_lock_irq
> 0.10% python [kernel.kallsyms] [k] vtime_account_user
> 0.10% python [kernel.kallsyms] [k] int_careful
> 0.10% ksoftirqd/2 [kernel.kallsyms] [k] rcu_note_context_switch
> 0.10% ksoftirqd/0 [kernel.kallsyms] [k] rcu_note_context_switch
> 0.10% python [kernel.kallsyms] [k] place_entity
> 0.10% arm-none-eabi-g [kernel.kallsyms] [k] put_prev_task_fair
> 0.09% ksoftirqd/1 [kernel.kallsyms] [k] arch_vtime_task_switch
> 0.09% cc1 cc1 [.] 0x00000000000c4674
> 0.09% ksoftirqd/2 [kernel.kallsyms] [k] put_prev_task_fair
> 0.09% as [kernel.kallsyms] [k] __schedule
> 0.09% ksoftirqd/1 [kernel.kallsyms] [k] rcu_note_context_switch
> 0.09% cc1 [kernel.kallsyms] [k] __switch_to
> 0.09% sh [kernel.kallsyms] [k] check_preempt_wakeup
> 0.09% ksoftirqd/2 [kernel.kallsyms] [k] cpuacct_charge
> 0.09% ksoftirqd/3 [kernel.kallsyms] [k] rcu_note_context_switch
> 0.09% arm-none-eabi-g [kernel.kallsyms] [k] local_clock
> 0.09% ksoftirqd/0 [kernel.kallsyms] [k] arch_vtime_task_switch
> 0.09% ksoftirqd/0 [kernel.kallsyms] [k] cpu_needs_another_gp
> 0.09% ksoftirqd/0 [kernel.kallsyms] [k] finish_task_switch
> 0.09% ksoftirqd/2 [kernel.kallsyms] [k] cpu_needs_another_gp
> 0.09% ksoftirqd/3 [kernel.kallsyms] [k] cpu_needs_another_gp
> 0.09% ksoftirqd/2 [kernel.kallsyms] [k] finish_task_switch
> 0.09% ksoftirqd/1 [kernel.kallsyms] [k] put_prev_task_fair
> 0.09% ksoftirqd/2 [kernel.kallsyms] [k] arch_vtime_task_switch
> 0.09% python [kernel.kallsyms] [k] __vtime_account_system
> 0.09% ksoftirqd/3 [kernel.kallsyms] [k] put_prev_task_fair
> 0.09% cc1plus [kernel.kallsyms] [k] rb_next
> 0.09% python [kernel.kallsyms] [k] cpuacct_charge
> 0.09% ksoftirqd/0 [kernel.kallsyms] [k] put_prev_task_fair
> 0.09% cc1plus [kernel.kallsyms] [k] wake_up_process
> 0.09% python [kernel.kallsyms] [k] arch_vtime_task_switch
> 0.09% sh [kernel.kallsyms] [k] get_vtime_delta
> 0.09% ksoftirqd/0 [kernel.kallsyms] [k] rb_next
> 0.09% arm-none-eabi-g [kernel.kallsyms] [k] __acct_update_integrals
> 0.08% ksoftirqd/1 [kernel.kallsyms] [k] cpu_needs_another_gp
> 0.08% sh [kernel.kallsyms] [k] context_tracking_task_switch
> 0.08% python [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore
> 0.08% python [kernel.kallsyms] [k] enqueue_task
> 0.08% ksoftirqd/2 [kernel.kallsyms] [k] rb_next
> 0.08% sh [kernel.kallsyms] [k] finish_task_switch
> 0.08% sh [kernel.kallsyms] [k] user_exit
> 0.08% ksoftirqd/1 [kernel.kallsyms] [k] rb_next
> 0.08% cc1 [kernel.kallsyms] [k] __schedule
> 0.08% ksoftirqd/0 [kernel.kallsyms] [k] __local_bh_enable
> 0.08% cc1plus [kernel.kallsyms] [k] int_with_check
> 0.08% ksoftirqd/0 [kernel.kallsyms] [k] update_cfs_shares
> 0.08% python [kernel.kallsyms] [k] rb_insert_color
> 0.08% ksoftirqd/1 [kernel.kallsyms] [k] __local_bh_enable
> 0.08% ksoftirqd/3 [kernel.kallsyms] [k] __local_bh_enable
> 0.07% cc1plus [kernel.kallsyms] [k] invoke_rcu_core
> 0.07% ksoftirqd/2 [kernel.kallsyms] [k] __local_bh_enable
> 0.07% python [kernel.kallsyms] [k] account_user_time
> 0.07% sh [kernel.kallsyms] [k] raise_softirq
> 0.07% arm-none-eabi-g [kernel.kallsyms] [k] sched_clock_cpu
> 0.07% cc1plus [kernel.kallsyms] [k] vtime_task_switch
> 0.07% as [kernel.kallsyms] [k] native_sched_clock
> 0.07% ksoftirqd/2 [kernel.kallsyms] [k] kthread_should_stop
> 0.07% arm-none-eabi-g [kernel.kallsyms] [k] try_to_wake_up
> 0.07% cc1plus [kernel.kallsyms] [k] calc_delta_mine
> 0.07% ksoftirqd/1 [kernel.kallsyms] [k] update_cfs_shares
> 0.07% ksoftirqd/3 [kernel.kallsyms] [k] cpuacct_charge
> 0.07% ksoftirqd/3 [kernel.kallsyms] [k] update_cfs_shares
> 0.07% ksoftirqd/2 [kernel.kallsyms] [k] update_cfs_shares
> 0.07% ksoftirqd/3 [kernel.kallsyms] [k] rb_next
> 0.07% cc1plus [kernel.kallsyms] [k] wakeup_softirqd
> 0.07% python [kernel.kallsyms] [k] check_preempt_curr
> 0.07% ksoftirqd/0 [kernel.kallsyms] [k] kthread_should_stop
> 0.07% ksoftirqd/3 [kernel.kallsyms] [k] kthread_should_stop
> 0.07% sh [kernel.kallsyms] [k] resched_task
> 0.07% as [kernel.kallsyms] [k] _raw_spin_lock
> 0.06% ksoftirqd/0 [kernel.kallsyms] [k] acct_account_cputime
> 0.06% ksoftirqd/1 [kernel.kallsyms] [k] cpuacct_charge
> 0.06% ksoftirqd/0 [kernel.kallsyms] [k] jiffies_to_timeval
> 0.06% ksoftirqd/3 [kernel.kallsyms] [k] jiffies_to_timeval
> 0.06% cc1plus [kernel.kallsyms] [k] schedule
> 0.06% ksoftirqd/1 [kernel.kallsyms] [k] acct_account_cputime
> 0.06% ksoftirqd/3 [kernel.kallsyms] [k] acct_account_cputime
> 0.06% python [kernel.kallsyms] [k] update_cfs_shares
> 0.06% sh [kernel.kallsyms] [k] int_careful
> 0.06% ksoftirqd/1 [kernel.kallsyms] [k] jiffies_to_timeval
> 0.06% python [kernel.kallsyms] [k] ttwu_do_wakeup
> 0.06% cc1 [kernel.kallsyms] [k] native_sched_clock
> 0.06% ksoftirqd/2 [kernel.kallsyms] [k] jiffies_to_timeval
> 0.06% python [kernel.kallsyms] [k] rcu_try_advance_all_cbs
> 0.06% python [kernel.kallsyms] [k] acct_account_cputime
> 0.06% python [kernel.kallsyms] [k] update_cfs_rq_blocked_load
> 0.06% ksoftirqd/2 [kernel.kallsyms] [k] acct_account_cputime
> 0.06% arm-none-eabi-g [kernel.kallsyms] [k] __enqueue_entity
> 0.06% python [kernel.kallsyms] [k] wakeup_preempt_entity.isra.39
> 0.06% ksoftirqd/1 [kernel.kallsyms] [k] kthread_should_stop
> 0.06% arm-none-eabi-g [kernel.kallsyms] [k] enqueue_task_fair
> 0.06% sh [kernel.kallsyms] [k] pick_next_task_fair
> 0.06% python [kernel.kallsyms] [k] update_rq_clock
> 0.06% python [kernel.kallsyms] [k] vtime_user_enter
> 0.06% as [kernel.kallsyms] [k] update_curr
> 0.06% sh [kernel.kallsyms] [k] select_task_rq_fair
> 0.06% arm-none-eabi-g [kernel.kallsyms] [k] rcu_eqs_exit_common.isra.43
> 0.06% cc1plus [kernel.kallsyms] [k] task_waking_fair
> 0.05% sh [kernel.kallsyms] [k] jiffies_to_timeval
> 0.05% ksoftirqd/0 [kernel.kallsyms] [k] kthread_should_park
> 0.05% arm-none-eabi-g [kernel.kallsyms] [k] user_enter
> 0.05% ksoftirqd/0 [kernel.kallsyms] [k] vtime_account_irq_exit
> 0.05% cc1 [kernel.kallsyms] [k] _raw_spin_lock
> 0.05% python [kernel.kallsyms] [k] rcu_eqs_exit
> 0.05% python [kernel.kallsyms] [k] set_next_buddy
> 0.05% ksoftirqd/3 [kernel.kallsyms] [k] vtime_account_irq_exit
> 0.05% ksoftirqd/3 [kernel.kallsyms] [k] vtime_task_switch
> 0.05% sh [kernel.kallsyms] [k] set_next_entity
> 0.05% python [kernel.kallsyms] [k] vtime_account_system
> 0.05% arm-none-eabi-g [kernel.kallsyms] [k] account_system_time
> 0.05% ksoftirqd/2 [kernel.kallsyms] [k] vtime_task_switch
> 0.05% ksoftirqd/1 [kernel.kallsyms] [k] vtime_account_irq_exit
> 0.05% arm-none-eabi-g [kernel.kallsyms] [k] rcu_eqs_enter_common.isra.44
> 0.05% ksoftirqd/2 [kernel.kallsyms] [k] vtime_account_irq_exit
> 0.05% ksoftirqd/1 [kernel.kallsyms] [k] vtime_task_switch
> 0.05% sh [kernel.kallsyms] [k] _raw_spin_lock_irqsave
> 0.05% ksoftirqd/0 [kernel.kallsyms] [k] vtime_task_switch
> 0.05% ksoftirqd/1 [kernel.kallsyms] [k] update_cfs_rq_blocked_load
> 0.05% ksoftirqd/3 [kernel.kallsyms] [k] kthread_should_park
> 0.05% python [kernel.kallsyms] [k] perf_event_task_sched_out
> 0.05% ksoftirqd/3 [kernel.kallsyms] [k] update_cfs_rq_blocked_load
> 0.05% ksoftirqd/0 [kernel.kallsyms] [k] update_cfs_rq_blocked_load
> 0.05% as [kernel.kallsyms] [k] __acct_update_integrals
> 0.05% as as [.] 0x000000000000df9c
> 0.05% as [kernel.kallsyms] [k] local_clock
> 0.05% python [kernel.kallsyms] [k] ttwu_stat
> 0.05% ksoftirqd/2 [kernel.kallsyms] [k] update_rq_clock
> 0.05% python [kernel.kallsyms] [k] ttwu_do_activate.constprop.87
> 0.05% sh [kernel.kallsyms] [k] update_stats_wait_end
> 0.05% ksoftirqd/2 [kernel.kallsyms] [k] update_cfs_rq_blocked_load
> 0.05% ksoftirqd/1 [kernel.kallsyms] [k] update_rq_clock
> 0.05% as [kernel.kallsyms] [k] enqueue_entity
> 0.05% ksoftirqd/0 [kernel.kallsyms] [k] update_rq_clock
> 0.05% arm-none-eabi-g [kernel.kallsyms] [k] get_vtime_delta
> 0.05% arm-none-eabi-g [kernel.kallsyms] [k] check_preempt_wakeup
> 0.05% cc1 [kernel.kallsyms] [k] update_curr
> 0.04% ksoftirqd/3 [kernel.kallsyms] [k] update_rq_clock
> 0.04% python [kernel.kallsyms] [k] rb_erase
> 0.04% as [kernel.kallsyms] [k] put_prev_task_fair
> 0.04% ksoftirqd/3 [kernel.kallsyms] [k] schedule
> 0.04% sh [kernel.kallsyms] [k] _raw_spin_lock_irq
> 0.04% ksoftirqd/0 [kernel.kallsyms] [k] schedule
> 0.04% arm-none-eabi-g [kernel.kallsyms] [k] gs_change
> 0.04% sh [kernel.kallsyms] [k] place_entity
> 0.04% python [kernel.kallsyms] [k] rcu_note_context_switch
> 0.04% python [kernel.kallsyms] [k] schedule_user
> 0.04% python [kernel.kallsyms] [k] __raise_softirq_irqoff
> 0.04% postgres postgres [.] 0x00000000001fab33
> 0.04% sh [kernel.kallsyms] [k] cpuacct_charge
> 0.04% sh [kernel.kallsyms] [k] vtime_account_user
> 0.04% ksoftirqd/2 [kernel.kallsyms] [k] kthread_should_park
> 0.04% cc1plus [kernel.kallsyms] [k] native_load_gs_index
> 0.04% sh [kernel.kallsyms] [k] __vtime_account_system
> 0.04% sh [kernel.kallsyms] [k] arch_vtime_task_switch
> 0.04% ksoftirqd/2 [kernel.kallsyms] [k] _cond_resched
> 0.04% ksoftirqd/2 [kernel.kallsyms] [k] schedule
> 0.04% arm-none-eabi-g [kernel.kallsyms] [k] finish_task_switch
> 0.04% ksoftirqd/3 [kernel.kallsyms] [k] _cond_resched
> 0.04% arm-none-eabi-g [kernel.kallsyms] [k] user_exit
> 0.04% ksoftirqd/0 [kernel.kallsyms] [k] run_ksoftirqd
> 0.04% arm-none-eabi-g [kernel.kallsyms] [k] context_tracking_task_switch
> 0.04% python [kernel.kallsyms] [k] rcu_eqs_enter
> 0.04% sh [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore
> 0.04% python [kernel.kallsyms] [k] retint_careful
> 0.04% ksoftirqd/1 [kernel.kallsyms] [k] kthread_should_park
> 0.04% cc1 [kernel.kallsyms] [k] enqueue_entity
> 0.04% ksoftirqd/3 [kernel.kallsyms] [k] run_ksoftirqd
> 0.04% ksoftirqd/0 [kernel.kallsyms] [k] _cond_resched
> 0.04% ksoftirqd/1 [kernel.kallsyms] [k] _cond_resched
> 0.04% ksoftirqd/2 [kernel.kallsyms] [k] vtime_account_irq_enter
> 0.04% ksoftirqd/2 [kernel.kallsyms] [k] run_ksoftirqd
> 0.04% arm-none-eabi-g [kernel.kallsyms] [k] raise_softirq
> 0.04% ksoftirqd/1 [kernel.kallsyms] [k] schedule
> 0.04% ksoftirqd/1 [kernel.kallsyms] [k] vtime_account_irq_enter
> 0.04% sh [kernel.kallsyms] [k] rb_insert_color
> 0.04% ksoftirqd/3 [kernel.kallsyms] [k] vtime_account_irq_enter
> 0.04% arm-none-eabi-g [kernel.kallsyms] [k] resched_task
> 0.04% sh [kernel.kallsyms] [k] clear_buddies
> 0.04% cc1plus [kernel.kallsyms] [k] activate_task
> 0.04% sh [kernel.kallsyms] [k] enqueue_task
> 0.04% ksoftirqd/0 [kernel.kallsyms] [k] vtime_account_irq_enter
> 0.04% cc1plus [kernel.kallsyms] [k] rcu_user_enter
> 0.04% ksoftirqd/3 [kernel.kallsyms] [k] perf_event_task_sched_out
> 0.04% cc1 [kernel.kallsyms] [k] local_clock
> 0.04% ksoftirqd/0 [kernel.kallsyms] [k] perf_event_task_sched_out
> 0.04% ksoftirqd/1 [kernel.kallsyms] [k] perf_event_task_sched_out
> 0.04% as [kernel.kallsyms] [k] try_to_wake_up
> 0.04% ksoftirqd/1 [kernel.kallsyms] [k] run_ksoftirqd
> 0.03% as [kernel.kallsyms] [k] sched_clock_cpu
> 0.03% cc1plus [kernel.kallsyms] [k] rcu_user_exit
> 0.03% mate-terminal libgdk_pixbuf-2.0.so.0.2600.1 [.] 0x0000000000014868
> 0.03% cc1 [kernel.kallsyms] [k] put_prev_task_fair
> 0.03% sh [kernel.kallsyms] [k] check_preempt_curr
> 0.03% python [kernel.kallsyms] [k] rb_next
> 0.03% cc1 [kernel.kallsyms] [k] __acct_update_integrals
> 0.03% arm-none-eabi-g [kernel.kallsyms] [k] int_careful
> 0.03% sh [kernel.kallsyms] [k] account_user_time
> 0.03% sh [kernel.kallsyms] [k] update_cfs_shares
> 0.03% ksoftirqd/2 [kernel.kallsyms] [k] perf_event_task_sched_out
> 0.03% arm-none-eabi-g [kernel.kallsyms] [k] jiffies_to_timeval
> 0.03% arm-none-eabi-g [kernel.kallsyms] [k] vtime_account_user
> 0.03% arm-none-eabi-g [kernel.kallsyms] [k] pick_next_task_fair
> 0.03% ksoftirqd/2 [kernel.kallsyms] [k] ksoftirqd_should_run
> 0.03% swapper [kernel.kallsyms] [k] intel_idle
> 0.03% ksoftirqd/0 [kernel.kallsyms] [k] rcu_bh_qs
> 0.03% python [kernel.kallsyms] [k] wakeup_softirqd
> 0.03% ksoftirqd/2 [kernel.kallsyms] [k] rcu_bh_qs
> 0.03% cc1 [kernel.kallsyms] [k] try_to_wake_up
> 0.03% python [kernel.kallsyms] [k] wake_up_process
> 0.03% cc1 [kernel.kallsyms] [k] sched_clock_cpu
> 0.03% arm-none-eabi-g [kernel.kallsyms] [k] select_task_rq_fair
> 0.03% ksoftirqd/1 [kernel.kallsyms] [k] ksoftirqd_should_run
> 0.03% as [kernel.kallsyms] [k] __enqueue_entity
> 0.03% ksoftirqd/1 [kernel.kallsyms] [k] rcu_bh_qs
> 0.03% sh [kernel.kallsyms] [k] acct_account_cputime
> 0.03% arm-none-eabi-g [kernel.kallsyms] [k] set_next_entity
> 0.03% cc1plus [kernel.kallsyms] [k] hrtick_update
> 0.03% sh [kernel.kallsyms] [k] rcu_try_advance_all_cbs
> 0.03% ksoftirqd/3 [kernel.kallsyms] [k] rcu_bh_qs
> 0.03% arm-none-eabi-g [kernel.kallsyms] [k] clear_buddies
> 0.03% arm-none-eabi-g [kernel.kallsyms] [k] rb_insert_color
> 0.03% sh [kernel.kallsyms] [k] update_rq_clock
> 0.03% sh [kernel.kallsyms] [k] vtime_user_enter
> 0.03% as [kernel.kallsyms] [k] user_enter
> 0.03% cc1 [kernel.kallsyms] [k] gs_change
> 0.03% sh [kernel.kallsyms] [k] ttwu_do_wakeup
> 0.03% ksoftirqd/0 [kernel.kallsyms] [k] ksoftirqd_should_run
> 0.03% as [kernel.kallsyms] [k] account_system_time
> 0.03% sh [kernel.kallsyms] [k] set_next_buddy
> 0.03% python [kernel.kallsyms] [k] vtime_task_switch
> 0.03% perf [kernel.kallsyms] [k] memcpy
> 0.03% sh [kernel.kallsyms] [k] wakeup_preempt_entity.isra.39
> 0.03% as [kernel.kallsyms] [k] enqueue_task_fair
> 0.03% arm-none-eabi-g [kernel.kallsyms] [k] _raw_spin_lock_irqsave
> 0.03% sh [kernel.kallsyms] [k] update_cfs_rq_blocked_load
> 0.03% as [kernel.kallsyms] [k] rcu_eqs_enter_common.isra.44
> 0.03% arm-none-eabi-g [kernel.kallsyms] [k] place_entity
> 0.02% arm-none-eabi-g [kernel.kallsyms] [k] update_stats_wait_end
> 0.02% python [kernel.kallsyms] [k] calc_delta_mine
> 0.02% ksoftirqd/3 [kernel.kallsyms] [k] ksoftirqd_should_run
> 0.02% as [kernel.kallsyms] [k] rcu_eqs_exit_common.isra.43
> 0.02% arm-none-eabi-g [kernel.kallsyms] [k] _raw_spin_lock_irq
> 0.02% python [kernel.kallsyms] [k] invoke_rcu_core
> 0.02% cc1 [kernel.kallsyms] [k] __enqueue_entity
> 0.02% cc1plus libc-2.15.so [.] 0x0000000000127d50
> 0.02% sh [kernel.kallsyms] [k] schedule_user
> 0.02% python [kernel.kallsyms] [k] schedule
> 0.02% python [kernel.kallsyms] [k] int_with_check
> 0.02% sh [kernel.kallsyms] [k] rcu_eqs_exit
> 0.02% sh [kernel.kallsyms] [k] vtime_account_system
> 0.02% cc1 [kernel.kallsyms] [k] rcu_eqs_exit_common.isra.43
> 0.02% cc1 [kernel.kallsyms] [k] enqueue_task_fair
> 0.02% arm-none-eabi-g [kernel.kallsyms] [k] cpuacct_charge
> 0.02% sh [kernel.kallsyms] [k] __raise_softirq_irqoff
> 0.02% sh [kernel.kallsyms] [k] ttwu_stat
> 0.02% as [kernel.kallsyms] [k] check_preempt_wakeup
> 0.02% cc1 [kernel.kallsyms] [k] account_system_time
> 0.02% cc1 [kernel.kallsyms] [k] rcu_eqs_enter_common.isra.44
> 0.02% sh [kernel.kallsyms] [k] rb_erase
> 0.02% cc1plus [kernel.kallsyms] [k] retint_careful
> 0.02% as [kernel.kallsyms] [k] context_tracking_task_switch
> 0.02% cc1 libc-2.15.so [.] 0x0000000000127cd7
> 0.02% as [kernel.kallsyms] [k] get_vtime_delta
> 0.02% python [kernel.kallsyms] [k] task_waking_fair
> 0.02% sh [kernel.kallsyms] [k] ttwu_do_activate.constprop.87
> 0.02% sh [kernel.kallsyms] [k] rcu_note_context_switch
> 0.02% arm-none-eabi-g [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore
> 0.02% as [kernel.kallsyms] [k] user_exit
> 0.02% arm-none-eabi-g [kernel.kallsyms] [k] check_preempt_curr
> 0.02% ksoftirqd/2 [kernel.kallsyms] [k] deactivate_task
> 0.02% cc1 [kernel.kallsyms] [k] user_enter
> 0.02% arm-none-eabi-g [kernel.kallsyms] [k] __vtime_account_system
> 0.02% ksoftirqd/2 [kernel.kallsyms] [k] msecs_to_jiffies
> 0.02% ksoftirqd/3 [kernel.kallsyms] [k] msecs_to_jiffies
> 0.02% cc1 [kernel.kallsyms] [k] get_vtime_delta
> 0.02% cc1 [kernel.kallsyms] [k] check_preempt_wakeup
> 0.02% sh [kernel.kallsyms] [k] rcu_eqs_enter
> 0.02% arm-none-eabi-g [kernel.kallsyms] [k] arch_vtime_task_switch
> 0.02% arm-none-eabi-g [kernel.kallsyms] [k] enqueue_task
> 0.02% as [kernel.kallsyms] [k] select_task_rq_fair
> 0.02% ksoftirqd/0 [kernel.kallsyms] [k] deactivate_task
> 0.02% as [kernel.kallsyms] [k] raise_softirq
> 0.02% ksoftirqd/1 [kernel.kallsyms] [k] deactivate_task
> 0.02% as [kernel.kallsyms] [k] finish_task_switch
> 0.02% ksoftirqd/1 [kernel.kallsyms] [k] msecs_to_jiffies
> 0.02% sh [kernel.kallsyms] [k] wakeup_softirqd
> 0.02% ksoftirqd/3 [kernel.kallsyms] [k] hrtick_update
> 0.02% arm-none-eabi-g [kernel.kallsyms] [k] update_cfs_shares
> 0.02% ksoftirqd/0 [kernel.kallsyms] [k] msecs_to_jiffies
> 0.02% cc1 [kernel.kallsyms] [k] user_exit
> 0.02% as [kernel.kallsyms] [k] resched_task
> 0.02% ksoftirqd/2 [kernel.kallsyms] [k] native_load_gs_index
> 0.02% sh [kernel.kallsyms] [k] int_with_check
> 0.02% ksoftirqd/3 [kernel.kallsyms] [k] deactivate_task
> 0.02% arm-none-eabi-g [kernel.kallsyms] [k] rcu_try_advance_all_cbs
> 0.02% python [kernel.kallsyms] [k] activate_task
> 0.02% cc1 [kernel.kallsyms] [k] context_tracking_task_switch
> 0.02% cc1 [kernel.kallsyms] [k] resched_task
> 0.02% arm-none-eabi-g [kernel.kallsyms] [k] account_user_time
> 0.02% sh [kernel.kallsyms] [k] wake_up_process
> 0.02% as [kernel.kallsyms] [k] int_careful
> 0.02% sh [kernel.kallsyms] [k] perf_event_task_sched_out
> 0.02% cc1 [kernel.kallsyms] [k] raise_softirq
> 0.02% arm-none-eabi-g [kernel.kallsyms] [k] ttwu_do_wakeup
> 0.02% as [kernel.kallsyms] [k] pick_next_task_fair
> 0.01% sh [kernel.kallsyms] [k] rb_next
> 0.01% arm-none-eabi-g [kernel.kallsyms] [k] ttwu_stat
> 0.01% arm-none-eabi-g [kernel.kallsyms] [k] acct_account_cputime
> 0.01% perf [kernel.kallsyms] [k] crypto_xor
> 0.01% arm-none-eabi-g [kernel.kallsyms] [k] wakeup_preempt_entity.isra.39
> 0.01% python [kernel.kallsyms] [k] rcu_user_enter
> 0.01% cc1 [kernel.kallsyms] [k] finish_task_switch
> 0.01% as [kernel.kallsyms] [k] set_next_entity
> 0.01% sh [kernel.kallsyms] [k] calc_delta_mine
> 0.01% arm-none-eabi-g [kernel.kallsyms] [k] set_next_buddy
> 0.01% ksoftirqd/1 [kernel.kallsyms] [k] hrtick_update
> 0.01% arm-none-eabi-g [kernel.kallsyms] [k] update_cfs_rq_blocked_load
> 0.01% as [kernel.kallsyms] [k] _raw_spin_lock_irqsave
> 0.01% as [kernel.kallsyms] [k] jiffies_to_timeval
> 0.01% python [kernel.kallsyms] [k] rcu_user_exit
> 0.01% cc1 [kernel.kallsyms] [k] select_task_rq_fair
> 0.01% python [kernel.kallsyms] [k] aes_decrypt
> 0.01% as [kernel.kallsyms] [k] rb_insert_color
> 0.01% sh [kernel.kallsyms] [k] vtime_task_switch
> 0.01% arm-none-eabi-g [kernel.kallsyms] [k] update_rq_clock
> 0.01% arm-none-eabi-g [kernel.kallsyms] [k] rcu_eqs_exit
> 0.01% arm-none-eabi-g [kernel.kallsyms] [k] schedule_user
> 0.01% sh [kernel.kallsyms] [k] invoke_rcu_core
> 0.01% cc1 [kernel.kallsyms] [k] vtime_account_user
> 0.01% ksoftirqd/2 [kernel.kallsyms] [k] hrtick_update
> 0.01% arm-none-eabi-g [kernel.kallsyms] [k] vtime_user_enter
> 0.01% sh [kernel.kallsyms] [k] schedule
> 0.01% ksoftirqd/0 [kernel.kallsyms] [k] hrtick_update
> 0.01% arm-none-eabi-g [kernel.kallsyms] [k] rcu_eqs_enter
> 0.01% cc1 [kernel.kallsyms] [k] clear_buddies
> 0.01% as [kernel.kallsyms] [k] vtime_account_user
> 0.01% cc1 [kernel.kallsyms] [k] pick_next_task_fair
> 0.01% cc1 [kernel.kallsyms] [k] update_stats_wait_end
> 0.01% arm-none-eabi-g [kernel.kallsyms] [k] ttwu_do_activate.constprop.87
> 0.01% arm-none-eabi-g [kernel.kallsyms] [k] rcu_note_context_switch
> 0.01% as [kernel.kallsyms] [k] gs_change
> 0.01% Xorg Xorg [.] 0x0000000000046479
> 0.01% arm-none-eabi-g [kernel.kallsyms] [k] vtime_account_system
> 0.01% cc1 [kernel.kallsyms] [k] jiffies_to_timeval
> 0.01% as [kernel.kallsyms] [k] arch_vtime_task_switch
> 0.01% as [kernel.kallsyms] [k] clear_buddies
> 0.01% as [kernel.kallsyms] [k] update_stats_wait_end
> 0.01% as [kernel.kallsyms] [k] aes_encrypt
> 0.01% cc1 [kernel.kallsyms] [k] int_careful
> 0.01% python [kernel.kallsyms] [k] hrtick_update
> 0.01% ksoftirqd/0 [kernel.kallsyms] [k] native_load_gs_index
> 0.01% as [kernel.kallsyms] [k] place_entity
> 0.01% cc1 [kernel.kallsyms] [k] set_next_entity
> 0.01% arm-none-eabi-g [kernel.kallsyms] [k] rb_erase
> 0.01% cc1 [kernel.kallsyms] [k] _raw_spin_lock_irqsave
> 0.01% arm-none-eabi-g [kernel.kallsyms] [k] perf_event_task_sched_out
> 0.01% as [kernel.kallsyms] [k] cpuacct_charge
> 0.01% as [kernel.kallsyms] [k] _raw_spin_lock_irq
> 0.01% as [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore
> 0.01% sh [kernel.kallsyms] [k] task_waking_fair
> 0.01% ksoftirqd/1 [kernel.kallsyms] [k] native_load_gs_index
> 0.01% cc1 [kernel.kallsyms] [k] _raw_spin_lock_irq
> 0.01% as [kernel.kallsyms] [k] __vtime_account_system
> 0.01% arm-none-eabi-g [kernel.kallsyms] [k] __raise_softirq_irqoff
> 0.01% cc1 [kernel.kallsyms] [k] place_entity
> 0.01% python [kernel.kallsyms] [k] aes_encrypt
> 0.01% ksoftirqd/3 [kernel.kallsyms] [k] native_load_gs_index
> 0.01% as libc-2.15.so [.] 0x0000000000134c3a
> 0.01% as [kernel.kallsyms] [k] enqueue_task
> 0.01% as [kernel.kallsyms] [k] account_user_time
> 0.01% arm-none-eabi-g [kernel.kallsyms] [k] rb_next
> 0.01% cc1 [kernel.kallsyms] [k] __vtime_account_system
> 0.01% cc1 [kernel.kallsyms] [k] rb_insert_color
> 0.01% arm-none-eabi-g [kernel.kallsyms] [k] wakeup_softirqd
> 0.01% arm-none-eabi-g [kernel.kallsyms] [k] int_with_check
> 0.01% cc1 [kernel.kallsyms] [k] arch_vtime_task_switch
> 0.01% as [kernel.kallsyms] [k] check_preempt_curr
> 0.01% cc1 [kernel.kallsyms] [k] cpuacct_charge
> 0.01% arm-none-eabi-g [kernel.kallsyms] [k] wake_up_process
> 0.01% as [kernel.kallsyms] [k] update_cfs_shares
> 0.01% multiload-apple libc-2.15.so [.] 0x00000000000f4f57
> 0.01% as [kernel.kallsyms] [k] update_rq_clock
> 0.01% cc1 [kernel.kallsyms] [k] account_user_time
> 0.01% perf [kernel.kallsyms] [k] crypto_cbc_encrypt
> 0.01% as [kernel.kallsyms] [k] acct_account_cputime
> 0.01% cc1 [kernel.kallsyms] [k] enqueue_task
> 0.01% cc1 [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore
> 0.01% as [kernel.kallsyms] [k] ttwu_do_wakeup
> 0.01% cc1 [kernel.kallsyms] [k] check_preempt_curr
> 0.01% arm-none-eabi-g [kernel.kallsyms] [k] vtime_task_switch
> 0.01% python [kernel.kallsyms] [k] page_fault
> 0.01% as [kernel.kallsyms] [k] update_cfs_rq_blocked_load
> 0.01% sh [kernel.kallsyms] [k] activate_task
> 0.01% arm-none-eabi-g [kernel.kallsyms] [k] schedule
> 0.01% as [kernel.kallsyms] [k] rcu_try_advance_all_cbs
> 0.01% sh [kernel.kallsyms] [k] rcu_user_exit
> 0.01% sh [kernel.kallsyms] [k] rcu_user_enter
> 0.01% as [kernel.kallsyms] [k] set_next_buddy
> 0.01% python [kernel.kallsyms] [k] crypto_aes_expand_key
> 0.01% as [kernel.kallsyms] [k] ttwu_stat
> 0.01% ksoftirqd/0 [iwlwifi] [k] iwl_trans_pcie_read32
> 0.01% cc1 [kernel.kallsyms] [k] update_cfs_rq_blocked_load
> 0.01% cc1 [kernel.kallsyms] [k] ttwu_do_wakeup
> 0.01% arm-none-eabi-g [kernel.kallsyms] [k] invoke_rcu_core
> 0.01% cc1 [kernel.kallsyms] [k] update_cfs_shares
> 0.01% as [kernel.kallsyms] [k] rb_erase
> 0.01% cc1plus [iwlwifi] [k] iwl_trans_pcie_read32
> 0.01% arm-none-eabi-g [kernel.kallsyms] [k] calc_delta_mine
> 0.01% as [kernel.kallsyms] [k] rcu_eqs_exit
> 0.01% as [kernel.kallsyms] [k] vtime_account_system
> 0.01% cc1 [kernel.kallsyms] [k] acct_account_cputime
> 0.01% postgres libc-2.15.so [.] 0x00000000000c0707
> 0.01% as [kernel.kallsyms] [k] rcu_note_context_switch
> 0.01% cc1 [kernel.kallsyms] [k] rcu_eqs_enter
> 0.01% as [kernel.kallsyms] [k] vtime_user_enter
> 0.01% cc1 [kernel.kallsyms] [k] rcu_try_advance_all_cbs
> 0.01% cc1 [kernel.kallsyms] [k] vtime_user_enter
> 0.01% as [kernel.kallsyms] [k] rcu_eqs_enter
> 0.01% cc1 [kernel.kallsyms] [k] wakeup_preempt_entity.isra.39
> 0.01% as [kernel.kallsyms] [k] schedule_user
> 0.01% perf [kernel.kallsyms] [k] __switch_to
> 0.01% as [kernel.kallsyms] [k] wakeup_preempt_entity.isra.39
> 0.01% cc1 [kernel.kallsyms] [k] perf_event_task_sched_out
> 0.01% python [kernel.kallsyms] [k] str2hashbuf_signed
> 0.01% as [kernel.kallsyms] [k] __raise_softirq_irqoff
> 0.01% as [kernel.kallsyms] [k] aes_decrypt
> 0.01% Xorg libc-2.15.so [.] 0x0000000000080b76
> 0.01% zabbix_agentd libc-2.15.so [.] 0x000000000007c65d
> 0.01% cc1 [kernel.kallsyms] [k] vtime_account_system
> 0.01% as [kernel.kallsyms] [k] ttwu_do_activate.constprop.87
* Unusually high system CPU usage with recent kernels
@ 2013-08-20 6:01 Tibor Billes
2013-08-20 14:53 ` Paul E. McKenney
0 siblings, 1 reply; 28+ messages in thread
From: Tibor Billes @ 2013-08-20 6:01 UTC (permalink / raw)
To: linux-kernel
Hi,
I was using the 3.9.7 stable release and tried to upgrade to the 3.10.x series.
The 3.10.x series was showing unusually high (>75%) system CPU usage in some
situations, making things really slow. The latest stable I tried is 3.10.7.
I also tried 3.11-rc5; both show this behaviour. This behaviour doesn't
show up when the system is idling, only when doing CPU-intensive work,
like compiling with multiple threads. Compiling with only one thread does not
seem to trigger this behaviour.
To be more precise, I did a `perf record -a` while compiling a large C++ program
with scons using 4 threads; the result is appended at the end of this email.
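In terms of commands, the capture went roughly like this (the exact scons
invocation shown here is only illustrative):

  # terminal 1: run the parallel build that triggers the problem
  $ scons -j4

  # terminal 2: sample all CPUs system-wide while the build is running,
  # then stop with Ctrl-C
  $ perf record -a

  # summarize the collected samples per symbol
  $ perf report --stdio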
I also tried bisecting it and got the following:
There are only 'skip'ped commits left to test.
The first bad commit could be any of:
b8462084a2a88a6a0489f9bb7d8b1bb95bc455ab
bd9f0686fc8c9a01c6850b1c611d1c9ad80b86d6
8b425aa8f1acfe48aed919c7aadff2ed290fe969
b92db6cb7efcbd41e469e1d757c47da4865f7622
0446be489795d8bb994125a916ef03211f539e54
c0f4dfd4f90f1667d234d21f15153ea09a2eaa66
910ee45db2f4837c8440e770474758493ab94bf7
We cannot bisect more!
I skipped these commits because they didn't boot. (I didn't spend any time
figuring out why.) They are all RCU-related changes. The perf recording I
mentioned above was done using commit 910ee45db2f4837c8440e770474758493ab94bf7,
which is the first bad commit I could boot into.
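For reference, the bisection followed the usual `git bisect` flow, roughly as
sketched below (the good/bad endpoints are illustrative, based on the versions
mentioned above):

  $ git bisect start
  $ git bisect bad v3.10   # first series showing the high system CPU usage
  $ git bisect good v3.9   # last series that behaved normally
  # build and boot each commit git checks out, then report the result:
  $ git bisect good        # problem not present
  $ git bisect bad         # problem present
  $ git bisect skip        # kernel doesn't boot, cannot be tested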
Does anyone have any idea why this is happening?
I'd be happy to provide more information if needed.
Tibor
Here's the perf report:
Note: I truncated the output, so it doesn't show lines with less than 0.01%
overhead.
# ========
# captured on: Sun Aug 11 23:08:27 2013
# hostname : altair
# os release : 3.9.0-rc2+
# perf version : 3.9.3.g4bb086
# arch : x86_64
# nrcpus online : 4
# nrcpus avail : 4
# cpudesc : Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz
# cpuid : GenuineIntel,6,58,9
# total memory : 7981164 kB
# cmdline : /home/tbilles/linux/stable/tools/perf/perf record -a
# event : name = cycles, type = 0, config = 0x0, config1 = 0x0, config2 = 0x0, excl_usr = 0, excl_kern = 0, excl_host = 0, excl_guest = 1, precise_ip = 0
# HEADER_CPU_TOPOLOGY info available, use -I to display
# HEADER_NUMA_TOPOLOGY info available, use -I to display
# pmu mappings: cpu = 4, software = 1, tracepoint = 2, breakpoint = 5
# ========
#
# Samples: 1M of event 'cycles'
# Event count (approx.): 1179107753326
#
# Overhead Command Shared Object Symbol
# ........ ............... ............................... ......................................................
#
2.49% cc1plus [kernel.kallsyms] [k] __switch_to
2.06% cc1plus [kernel.kallsyms] [k] __schedule
1.58% cc1plus [kernel.kallsyms] [k] native_sched_clock
1.36% cc1plus [kernel.kallsyms] [k] _raw_spin_lock
1.32% ksoftirqd/1 [kernel.kallsyms] [k] __switch_to
1.28% ksoftirqd/3 [kernel.kallsyms] [k] __switch_to
1.24% python python2.7 [.] 0x0000000000023f62
1.20% cc1plus [kernel.kallsyms] [k] update_curr
1.18% ksoftirqd/3 [kernel.kallsyms] [k] __schedule
1.18% ksoftirqd/0 [kernel.kallsyms] [k] __switch_to
1.13% ksoftirqd/1 [kernel.kallsyms] [k] __schedule
1.10% ksoftirqd/2 [kernel.kallsyms] [k] __switch_to
1.08% cc1plus [kernel.kallsyms] [k] enqueue_entity
1.05% ksoftirqd/0 [kernel.kallsyms] [k] __schedule
1.03% ksoftirqd/2 [kernel.kallsyms] [k] __schedule
0.98% cc1plus [kernel.kallsyms] [k] local_clock
0.95% cc1plus [kernel.kallsyms] [k] put_prev_task_fair
0.92% cc1plus [kernel.kallsyms] [k] __acct_update_integrals
0.90% python [kernel.kallsyms] [k] __switch_to
0.84% python [kernel.kallsyms] [k] __schedule
0.78% cc1plus [kernel.kallsyms] [k] try_to_wake_up
0.75% cc1plus [kernel.kallsyms] [k] sched_clock_cpu
0.70% ksoftirqd/0 [kernel.kallsyms] [k] native_sched_clock
0.70% ksoftirqd/3 [kernel.kallsyms] [k] native_sched_clock
0.69% ksoftirqd/2 [kernel.kallsyms] [k] native_sched_clock
0.67% ksoftirqd/1 [kernel.kallsyms] [k] native_sched_clock
0.65% cc1plus [kernel.kallsyms] [k] gs_change
0.63% ksoftirqd/2 [kernel.kallsyms] [k] gs_change
0.61% python [kernel.kallsyms] [k] native_sched_clock
0.59% cc1plus [kernel.kallsyms] [k] rcu_eqs_enter_common.isra.44
0.58% cc1plus [kernel.kallsyms] [k] rcu_eqs_exit_common.isra.43
0.58% cc1plus [kernel.kallsyms] [k] enqueue_task_fair
0.57% cc1plus [kernel.kallsyms] [k] account_system_time
0.57% cc1plus [kernel.kallsyms] [k] __enqueue_entity
0.55% ksoftirqd/2 [kernel.kallsyms] [k] rcu_process_callbacks
0.55% ksoftirqd/0 [kernel.kallsyms] [k] rcu_process_callbacks
0.54% ksoftirqd/3 [kernel.kallsyms] [k] rcu_process_callbacks
0.53% cc1plus [kernel.kallsyms] [k] user_enter
0.53% ksoftirqd/1 [kernel.kallsyms] [k] rcu_process_callbacks
0.53% cc1plus [kernel.kallsyms] [k] check_preempt_wakeup
0.53% python [kernel.kallsyms] [k] _raw_spin_lock
0.50% ksoftirqd/0 [kernel.kallsyms] [k] gs_change
0.50% ksoftirqd/3 [kernel.kallsyms] [k] _raw_spin_lock
0.49% ksoftirqd/1 [kernel.kallsyms] [k] _raw_spin_lock
0.48% ksoftirqd/0 [kernel.kallsyms] [k] _raw_spin_lock
0.48% ksoftirqd/2 [kernel.kallsyms] [k] _raw_spin_lock
0.46% python [kernel.kallsyms] [k] update_curr
0.46% cc1plus [kernel.kallsyms] [k] get_vtime_delta
0.45% ksoftirqd/0 [kernel.kallsyms] [k] smpboot_thread_fn
0.45% ksoftirqd/2 [kernel.kallsyms] [k] smpboot_thread_fn
0.45% sh [kernel.kallsyms] [k] __switch_to
0.44% ksoftirqd/3 [kernel.kallsyms] [k] smpboot_thread_fn
0.44% ksoftirqd/1 [kernel.kallsyms] [k] smpboot_thread_fn
0.44% cc1plus [kernel.kallsyms] [k] user_exit
0.43% cc1plus [kernel.kallsyms] [k] context_tracking_task_switch
0.43% ksoftirqd/3 [kernel.kallsyms] [k] local_clock
0.43% ksoftirqd/0 [kernel.kallsyms] [k] local_clock
0.42% ksoftirqd/2 [kernel.kallsyms] [k] local_clock
0.42% cc1plus [kernel.kallsyms] [k] resched_task
0.42% ksoftirqd/1 [kernel.kallsyms] [k] local_clock
0.41% cc1plus [kernel.kallsyms] [k] raise_softirq
0.40% cc1plus [kernel.kallsyms] [k] finish_task_switch
0.38% sh [kernel.kallsyms] [k] __schedule
0.37% ksoftirqd/1 [kernel.kallsyms] [k] gs_change
0.37% python [kernel.kallsyms] [k] local_clock
0.37% ksoftirqd/3 [kernel.kallsyms] [k] gs_change
0.37% ksoftirqd/2 [kernel.kallsyms] [k] account_system_time
0.36% python [kernel.kallsyms] [k] enqueue_entity
0.36% ksoftirqd/1 [kernel.kallsyms] [k] check_for_new_grace_period.isra.37
0.36% ksoftirqd/0 [kernel.kallsyms] [k] check_for_new_grace_period.isra.37
0.36% ksoftirqd/3 [kernel.kallsyms] [k] check_for_new_grace_period.isra.37
0.35% ksoftirqd/2 [kernel.kallsyms] [k] __do_softirq
0.35% ksoftirqd/2 [kernel.kallsyms] [k] check_for_new_grace_period.isra.37
0.35% ksoftirqd/0 [kernel.kallsyms] [k] account_system_time
0.35% ksoftirqd/3 [kernel.kallsyms] [k] dequeue_entity
0.35% ksoftirqd/0 [kernel.kallsyms] [k] __do_softirq
0.35% ksoftirqd/3 [kernel.kallsyms] [k] __do_softirq
0.35% ksoftirqd/1 [kernel.kallsyms] [k] account_system_time
0.35% ksoftirqd/1 [kernel.kallsyms] [k] __do_softirq
0.35% python [kernel.kallsyms] [k] put_prev_task_fair
0.34% ksoftirqd/0 [kernel.kallsyms] [k] rcu_process_gp_end
0.34% ksoftirqd/3 [kernel.kallsyms] [k] account_system_time
0.34% ksoftirqd/1 [kernel.kallsyms] [k] dequeue_entity
0.34% python [kernel.kallsyms] [k] __acct_update_integrals
0.34% ksoftirqd/2 [kernel.kallsyms] [k] rcu_process_gp_end
0.34% ksoftirqd/1 [kernel.kallsyms] [k] rcu_process_gp_end
0.33% cc1plus [kernel.kallsyms] [k] select_task_rq_fair
0.33% ksoftirqd/2 [kernel.kallsyms] [k] sched_clock_cpu
0.33% ksoftirqd/3 [kernel.kallsyms] [k] rcu_process_gp_end
0.33% ksoftirqd/0 [kernel.kallsyms] [k] dequeue_entity
0.33% cc1plus [kernel.kallsyms] [k] int_careful
0.32% ksoftirqd/2 [kernel.kallsyms] [k] dequeue_entity
0.32% ksoftirqd/0 [kernel.kallsyms] [k] sched_clock_cpu
0.32% cc1plus [kernel.kallsyms] [k] pick_next_task_fair
0.31% ksoftirqd/1 [kernel.kallsyms] [k] sched_clock_cpu
0.31% ksoftirqd/3 [kernel.kallsyms] [k] sched_clock_cpu
0.30% cc1plus [kernel.kallsyms] [k] jiffies_to_timeval
0.29% cc1plus [kernel.kallsyms] [k] set_next_entity
0.29% cc1plus [kernel.kallsyms] [k] vtime_account_user
0.29% python [kernel.kallsyms] [k] sched_clock_cpu
0.29% sh [kernel.kallsyms] [k] native_sched_clock
0.28% ksoftirqd/0 [kernel.kallsyms] [k] dequeue_task_fair
0.28% python [kernel.kallsyms] [k] try_to_wake_up
0.28% ksoftirqd/0 [kernel.kallsyms] [k] set_next_entity
0.28% cc1plus [kernel.kallsyms] [k] clear_buddies
0.27% ksoftirqd/2 [kernel.kallsyms] [k] dequeue_task_fair
0.27% cc1plus [kernel.kallsyms] [k] _raw_spin_lock_irqsave
0.27% cc1plus [kernel.kallsyms] [k] update_stats_wait_end
0.27% ksoftirqd/1 [kernel.kallsyms] [k] set_next_entity
0.27% ksoftirqd/3 [kernel.kallsyms] [k] set_next_entity
0.26% ksoftirqd/2 [kernel.kallsyms] [k] set_next_entity
0.26% cc1plus [kernel.kallsyms] [k] place_entity
0.26% ksoftirqd/0 [kernel.kallsyms] [k] __acct_update_integrals
0.26% arm-none-eabi-g [kernel.kallsyms] [k] __switch_to
0.26% ksoftirqd/1 [kernel.kallsyms] [k] dequeue_task_fair
0.26% cc1plus [kernel.kallsyms] [k] rb_insert_color
0.26% ksoftirqd/3 [kernel.kallsyms] [k] __acct_update_integrals
0.25% sh [kernel.kallsyms] [k] _raw_spin_lock
0.25% ksoftirqd/1 [kernel.kallsyms] [k] __acct_update_integrals
0.25% ksoftirqd/2 [kernel.kallsyms] [k] __acct_update_integrals
0.25% ksoftirqd/3 [kernel.kallsyms] [k] dequeue_task_fair
0.24% cc1plus [kernel.kallsyms] [k] _raw_spin_lock_irq
0.23% ksoftirqd/3 [kernel.kallsyms] [k] update_stats_wait_end
0.23% python [kernel.kallsyms] [k] rcu_eqs_enter_common.isra.44
0.23% cc1plus [kernel.kallsyms] [k] cpuacct_charge
0.23% sh [kernel.kallsyms] [k] update_curr
0.23% ksoftirqd/1 [kernel.kallsyms] [k] update_stats_wait_end
0.22% cc1plus [kernel.kallsyms] [k] __vtime_account_system
0.22% ksoftirqd/2 [kernel.kallsyms] [k] update_stats_wait_end
0.22% cc1plus [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore
0.22% python [kernel.kallsyms] [k] account_system_time
0.22% python [kernel.kallsyms] [k] __enqueue_entity
0.22% python [kernel.kallsyms] [k] rcu_eqs_exit_common.isra.43
0.22% python [kernel.kallsyms] [k] enqueue_task_fair
0.22% ksoftirqd/1 [kernel.kallsyms] [k] update_curr
0.22% cc1plus [kernel.kallsyms] [k] arch_vtime_task_switch
0.22% ksoftirqd/3 [kernel.kallsyms] [k] update_curr
0.21% ksoftirqd/0 [kernel.kallsyms] [k] update_stats_wait_end
0.21% ksoftirqd/3 [kernel.kallsyms] [k] get_vtime_delta
0.21% cc1plus [kernel.kallsyms] [k] enqueue_task
0.20% arm-none-eabi-g [kernel.kallsyms] [k] __schedule
0.20% ksoftirqd/0 [kernel.kallsyms] [k] context_tracking_task_switch
0.20% ksoftirqd/2 [kernel.kallsyms] [k] context_tracking_task_switch
0.20% python [kernel.kallsyms] [k] user_enter
0.20% ksoftirqd/2 [kernel.kallsyms] [k] update_curr
0.19% cc1plus [kernel.kallsyms] [k] account_user_time
0.19% ksoftirqd/3 [kernel.kallsyms] [k] context_tracking_task_switch
0.19% ksoftirqd/1 [kernel.kallsyms] [k] pick_next_task_fair
0.19% ksoftirqd/1 [kernel.kallsyms] [k] context_tracking_task_switch
0.19% ksoftirqd/0 [kernel.kallsyms] [k] update_curr
0.19% cc1plus [kernel.kallsyms] [k] check_preempt_curr
0.19% ksoftirqd/0 [kernel.kallsyms] [k] get_vtime_delta
0.18% ksoftirqd/2 [kernel.kallsyms] [k] pick_next_task_fair
0.18% ksoftirqd/1 [kernel.kallsyms] [k] get_vtime_delta
0.18% python [kernel.kallsyms] [k] finish_task_switch
0.18% python [kernel.kallsyms] [k] check_preempt_wakeup
0.18% sh [kernel.kallsyms] [k] local_clock
0.18% ksoftirqd/2 [kernel.kallsyms] [k] get_vtime_delta
0.18% ksoftirqd/0 [kernel.kallsyms] [k] pick_next_task_fair
0.18% python [kernel.kallsyms] [k] get_vtime_delta
0.18% perf [kernel.kallsyms] [k] aes_encrypt
0.17% python [kernel.kallsyms] [k] user_exit
0.17% sh [kernel.kallsyms] [k] enqueue_entity
0.17% python [kernel.kallsyms] [k] context_tracking_task_switch
0.17% sh [kernel.kallsyms] [k] __acct_update_integrals
0.16% sh [kernel.kallsyms] [k] put_prev_task_fair
0.16% cc1plus [kernel.kallsyms] [k] rcu_try_advance_all_cbs
0.16% ksoftirqd/3 [kernel.kallsyms] [k] pick_next_task_fair
0.16% cc1plus [kernel.kallsyms] [k] ttwu_do_wakeup
0.15% cc1plus [kernel.kallsyms] [k] set_next_buddy
0.15% ksoftirqd/1 [kernel.kallsyms] [k] vtime_account_system
0.15% python [kernel.kallsyms] [k] raise_softirq
0.15% cc1plus [kernel.kallsyms] [k] update_cfs_shares
0.15% cc1plus [kernel.kallsyms] [k] update_cfs_rq_blocked_load
0.15% arm-none-eabi-g [kernel.kallsyms] [k] native_sched_clock
0.15% ksoftirqd/2 [kernel.kallsyms] [k] vtime_account_system
0.15% cc1plus [kernel.kallsyms] [k] wakeup_preempt_entity.isra.39
0.15% cc1plus [kernel.kallsyms] [k] acct_account_cputime
0.15% ksoftirqd/3 [kernel.kallsyms] [k] vtime_account_system
0.15% cc1plus [kernel.kallsyms] [k] update_rq_clock
0.14% ksoftirqd/0 [kernel.kallsyms] [k] vtime_account_system
0.14% python [kernel.kallsyms] [k] resched_task
0.14% cc1plus [kernel.kallsyms] [k] rcu_eqs_exit
0.14% sh [kernel.kallsyms] [k] sched_clock_cpu
0.13% cc1plus [kernel.kallsyms] [k] vtime_user_enter
0.13% ksoftirqd/2 [kernel.kallsyms] [k] rb_erase
0.13% cc1plus [kernel.kallsyms] [k] vtime_account_system
0.13% ksoftirqd/2 [kernel.kallsyms] [k] __vtime_account_system
0.13% ksoftirqd/1 [kernel.kallsyms] [k] __vtime_account_system
0.13% ksoftirqd/0 [kernel.kallsyms] [k] rb_erase
0.13% cc1plus [kernel.kallsyms] [k] perf_event_task_sched_out
0.13% ksoftirqd/0 [kernel.kallsyms] [k] dequeue_task
0.13% sh [kernel.kallsyms] [k] try_to_wake_up
0.13% arm-none-eabi-g [kernel.kallsyms] [k] _raw_spin_lock
0.13% ksoftirqd/1 [kernel.kallsyms] [k] rb_erase
0.13% ksoftirqd/0 [kernel.kallsyms] [k] clear_buddies
0.13% arm-none-eabi-g [kernel.kallsyms] [k] update_curr
0.12% ksoftirqd/2 [kernel.kallsyms] [k] dequeue_task
0.12% cc1plus [kernel.kallsyms] [k] schedule_user
0.12% ksoftirqd/0 [kernel.kallsyms] [k] __vtime_account_system
0.12% ksoftirqd/3 [kernel.kallsyms] [k] __vtime_account_system
0.12% ksoftirqd/1 [kernel.kallsyms] [k] dequeue_task
0.12% cc1plus [kernel.kallsyms] [k] ttwu_stat
0.12% ksoftirqd/3 [kernel.kallsyms] [k] dequeue_task
0.12% python [kernel.kallsyms] [k] select_task_rq_fair
0.12% cc1plus [kernel.kallsyms] [k] ttwu_do_activate.constprop.87
0.12% ksoftirqd/3 [kernel.kallsyms] [k] rb_erase
0.12% cc1plus [kernel.kallsyms] [k] __raise_softirq_irqoff
0.12% cc1plus [kernel.kallsyms] [k] rb_erase
0.12% python [kernel.kallsyms] [k] pick_next_task_fair
0.12% cc1plus [kernel.kallsyms] [k] rcu_note_context_switch
0.11% cc1plus cc1plus [.] 0x00000000009bd77f
0.11% cc1plus [kernel.kallsyms] [k] rcu_eqs_enter
0.11% as [kernel.kallsyms] [k] __switch_to
0.11% python [kernel.kallsyms] [k] jiffies_to_timeval
0.11% sh [kernel.kallsyms] [k] __enqueue_entity
0.11% ksoftirqd/2 [kernel.kallsyms] [k] clear_buddies
0.11% ksoftirqd/2 [kernel.kallsyms] [k] _raw_spin_lock_irq
0.11% ksoftirqd/3 [kernel.kallsyms] [k] clear_buddies
0.11% python [kernel.kallsyms] [k] set_next_entity
0.11% ksoftirqd/0 [kernel.kallsyms] [k] _raw_spin_lock_irq
0.11% sh [kernel.kallsyms] [k] enqueue_task_fair
0.11% python libc-2.15.so [.] 0x0000000000147fc5
0.11% ksoftirqd/1 [kernel.kallsyms] [k] clear_buddies
0.11% ksoftirqd/1 [kernel.kallsyms] [k] _raw_spin_lock_irq
0.11% sh [kernel.kallsyms] [k] rcu_eqs_exit_common.isra.43
0.10% ksoftirqd/3 [kernel.kallsyms] [k] finish_task_switch
0.10% sh [kernel.kallsyms] [k] rcu_eqs_enter_common.isra.44
0.10% python [kernel.kallsyms] [k] clear_buddies
0.10% python [kernel.kallsyms] [k] update_stats_wait_end
0.10% ksoftirqd/3 [kernel.kallsyms] [k] _raw_spin_lock_irq
0.10% ksoftirqd/0 [kernel.kallsyms] [k] cpuacct_charge
0.10% python [kernel.kallsyms] [k] _raw_spin_lock_irqsave
0.10% sh [kernel.kallsyms] [k] account_system_time
0.10% ksoftirqd/3 [kernel.kallsyms] [k] arch_vtime_task_switch
0.10% ksoftirqd/1 [kernel.kallsyms] [k] finish_task_switch
0.10% arm-none-eabi-g [kernel.kallsyms] [k] enqueue_entity
0.10% sh [kernel.kallsyms] [k] user_enter
0.10% python [kernel.kallsyms] [k] _raw_spin_lock_irq
0.10% python [kernel.kallsyms] [k] vtime_account_user
0.10% python [kernel.kallsyms] [k] int_careful
0.10% ksoftirqd/2 [kernel.kallsyms] [k] rcu_note_context_switch
0.10% ksoftirqd/0 [kernel.kallsyms] [k] rcu_note_context_switch
0.10% python [kernel.kallsyms] [k] place_entity
0.10% arm-none-eabi-g [kernel.kallsyms] [k] put_prev_task_fair
0.09% ksoftirqd/1 [kernel.kallsyms] [k] arch_vtime_task_switch
0.09% cc1 cc1 [.] 0x00000000000c4674
0.09% ksoftirqd/2 [kernel.kallsyms] [k] put_prev_task_fair
0.09% as [kernel.kallsyms] [k] __schedule
0.09% ksoftirqd/1 [kernel.kallsyms] [k] rcu_note_context_switch
0.09% cc1 [kernel.kallsyms] [k] __switch_to
0.09% sh [kernel.kallsyms] [k] check_preempt_wakeup
0.09% ksoftirqd/2 [kernel.kallsyms] [k] cpuacct_charge
0.09% ksoftirqd/3 [kernel.kallsyms] [k] rcu_note_context_switch
0.09% arm-none-eabi-g [kernel.kallsyms] [k] local_clock
0.09% ksoftirqd/0 [kernel.kallsyms] [k] arch_vtime_task_switch
0.09% ksoftirqd/0 [kernel.kallsyms] [k] cpu_needs_another_gp
0.09% ksoftirqd/0 [kernel.kallsyms] [k] finish_task_switch
0.09% ksoftirqd/2 [kernel.kallsyms] [k] cpu_needs_another_gp
0.09% ksoftirqd/3 [kernel.kallsyms] [k] cpu_needs_another_gp
0.09% ksoftirqd/2 [kernel.kallsyms] [k] finish_task_switch
0.09% ksoftirqd/1 [kernel.kallsyms] [k] put_prev_task_fair
0.09% ksoftirqd/2 [kernel.kallsyms] [k] arch_vtime_task_switch
0.09% python [kernel.kallsyms] [k] __vtime_account_system
0.09% ksoftirqd/3 [kernel.kallsyms] [k] put_prev_task_fair
0.09% cc1plus [kernel.kallsyms] [k] rb_next
0.09% python [kernel.kallsyms] [k] cpuacct_charge
0.09% ksoftirqd/0 [kernel.kallsyms] [k] put_prev_task_fair
0.09% cc1plus [kernel.kallsyms] [k] wake_up_process
0.09% python [kernel.kallsyms] [k] arch_vtime_task_switch
0.09% sh [kernel.kallsyms] [k] get_vtime_delta
0.09% ksoftirqd/0 [kernel.kallsyms] [k] rb_next
0.09% arm-none-eabi-g [kernel.kallsyms] [k] __acct_update_integrals
0.08% ksoftirqd/1 [kernel.kallsyms] [k] cpu_needs_another_gp
0.08% sh [kernel.kallsyms] [k] context_tracking_task_switch
0.08% python [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore
0.08% python [kernel.kallsyms] [k] enqueue_task
0.08% ksoftirqd/2 [kernel.kallsyms] [k] rb_next
0.08% sh [kernel.kallsyms] [k] finish_task_switch
0.08% sh [kernel.kallsyms] [k] user_exit
0.08% ksoftirqd/1 [kernel.kallsyms] [k] rb_next
0.08% cc1 [kernel.kallsyms] [k] __schedule
0.08% ksoftirqd/0 [kernel.kallsyms] [k] __local_bh_enable
0.08% cc1plus [kernel.kallsyms] [k] int_with_check
0.08% ksoftirqd/0 [kernel.kallsyms] [k] update_cfs_shares
0.08% python [kernel.kallsyms] [k] rb_insert_color
0.08% ksoftirqd/1 [kernel.kallsyms] [k] __local_bh_enable
0.08% ksoftirqd/3 [kernel.kallsyms] [k] __local_bh_enable
0.07% cc1plus [kernel.kallsyms] [k] invoke_rcu_core
0.07% ksoftirqd/2 [kernel.kallsyms] [k] __local_bh_enable
0.07% python [kernel.kallsyms] [k] account_user_time
0.07% sh [kernel.kallsyms] [k] raise_softirq
0.07% arm-none-eabi-g [kernel.kallsyms] [k] sched_clock_cpu
0.07% cc1plus [kernel.kallsyms] [k] vtime_task_switch
0.07% as [kernel.kallsyms] [k] native_sched_clock
0.07% ksoftirqd/2 [kernel.kallsyms] [k] kthread_should_stop
0.07% arm-none-eabi-g [kernel.kallsyms] [k] try_to_wake_up
0.07% cc1plus [kernel.kallsyms] [k] calc_delta_mine
0.07% ksoftirqd/1 [kernel.kallsyms] [k] update_cfs_shares
0.07% ksoftirqd/3 [kernel.kallsyms] [k] cpuacct_charge
0.07% ksoftirqd/3 [kernel.kallsyms] [k] update_cfs_shares
0.07% ksoftirqd/2 [kernel.kallsyms] [k] update_cfs_shares
0.07% ksoftirqd/3 [kernel.kallsyms] [k] rb_next
0.07% cc1plus [kernel.kallsyms] [k] wakeup_softirqd
0.07% python [kernel.kallsyms] [k] check_preempt_curr
0.07% ksoftirqd/0 [kernel.kallsyms] [k] kthread_should_stop
0.07% ksoftirqd/3 [kernel.kallsyms] [k] kthread_should_stop
0.07% sh [kernel.kallsyms] [k] resched_task
0.07% as [kernel.kallsyms] [k] _raw_spin_lock
0.06% ksoftirqd/0 [kernel.kallsyms] [k] acct_account_cputime
0.06% ksoftirqd/1 [kernel.kallsyms] [k] cpuacct_charge
0.06% ksoftirqd/0 [kernel.kallsyms] [k] jiffies_to_timeval
0.06% ksoftirqd/3 [kernel.kallsyms] [k] jiffies_to_timeval
0.06% cc1plus [kernel.kallsyms] [k] schedule
0.06% ksoftirqd/1 [kernel.kallsyms] [k] acct_account_cputime
0.06% ksoftirqd/3 [kernel.kallsyms] [k] acct_account_cputime
0.06% python [kernel.kallsyms] [k] update_cfs_shares
0.06% sh [kernel.kallsyms] [k] int_careful
0.06% ksoftirqd/1 [kernel.kallsyms] [k] jiffies_to_timeval
0.06% python [kernel.kallsyms] [k] ttwu_do_wakeup
0.06% cc1 [kernel.kallsyms] [k] native_sched_clock
0.06% ksoftirqd/2 [kernel.kallsyms] [k] jiffies_to_timeval
0.06% python [kernel.kallsyms] [k] rcu_try_advance_all_cbs
0.06% python [kernel.kallsyms] [k] acct_account_cputime
0.06% python [kernel.kallsyms] [k] update_cfs_rq_blocked_load
0.06% ksoftirqd/2 [kernel.kallsyms] [k] acct_account_cputime
0.06% arm-none-eabi-g [kernel.kallsyms] [k] __enqueue_entity
0.06% python [kernel.kallsyms] [k] wakeup_preempt_entity.isra.39
0.06% ksoftirqd/1 [kernel.kallsyms] [k] kthread_should_stop
0.06% arm-none-eabi-g [kernel.kallsyms] [k] enqueue_task_fair
0.06% sh [kernel.kallsyms] [k] pick_next_task_fair
0.06% python [kernel.kallsyms] [k] update_rq_clock
0.06% python [kernel.kallsyms] [k] vtime_user_enter
0.06% as [kernel.kallsyms] [k] update_curr
0.06% sh [kernel.kallsyms] [k] select_task_rq_fair
0.06% arm-none-eabi-g [kernel.kallsyms] [k] rcu_eqs_exit_common.isra.43
0.06% cc1plus [kernel.kallsyms] [k] task_waking_fair
0.05% sh [kernel.kallsyms] [k] jiffies_to_timeval
0.05% ksoftirqd/0 [kernel.kallsyms] [k] kthread_should_park
0.05% arm-none-eabi-g [kernel.kallsyms] [k] user_enter
0.05% ksoftirqd/0 [kernel.kallsyms] [k] vtime_account_irq_exit
0.05% cc1 [kernel.kallsyms] [k] _raw_spin_lock
0.05% python [kernel.kallsyms] [k] rcu_eqs_exit
0.05% python [kernel.kallsyms] [k] set_next_buddy
0.05% ksoftirqd/3 [kernel.kallsyms] [k] vtime_account_irq_exit
0.05% ksoftirqd/3 [kernel.kallsyms] [k] vtime_task_switch
0.05% sh [kernel.kallsyms] [k] set_next_entity
0.05% python [kernel.kallsyms] [k] vtime_account_system
0.05% arm-none-eabi-g [kernel.kallsyms] [k] account_system_time
0.05% ksoftirqd/2 [kernel.kallsyms] [k] vtime_task_switch
0.05% ksoftirqd/1 [kernel.kallsyms] [k] vtime_account_irq_exit
0.05% arm-none-eabi-g [kernel.kallsyms] [k] rcu_eqs_enter_common.isra.44
0.05% ksoftirqd/2 [kernel.kallsyms] [k] vtime_account_irq_exit
0.05% ksoftirqd/1 [kernel.kallsyms] [k] vtime_task_switch
0.05% sh [kernel.kallsyms] [k] _raw_spin_lock_irqsave
0.05% ksoftirqd/0 [kernel.kallsyms] [k] vtime_task_switch
0.05% ksoftirqd/1 [kernel.kallsyms] [k] update_cfs_rq_blocked_load
0.05% ksoftirqd/3 [kernel.kallsyms] [k] kthread_should_park
0.05% python [kernel.kallsyms] [k] perf_event_task_sched_out
0.05% ksoftirqd/3 [kernel.kallsyms] [k] update_cfs_rq_blocked_load
0.05% ksoftirqd/0 [kernel.kallsyms] [k] update_cfs_rq_blocked_load
0.05% as [kernel.kallsyms] [k] __acct_update_integrals
0.05% as as [.] 0x000000000000df9c
0.05% as [kernel.kallsyms] [k] local_clock
0.05% python [kernel.kallsyms] [k] ttwu_stat
0.05% ksoftirqd/2 [kernel.kallsyms] [k] update_rq_clock
0.05% python [kernel.kallsyms] [k] ttwu_do_activate.constprop.87
0.05% sh [kernel.kallsyms] [k] update_stats_wait_end
0.05% ksoftirqd/2 [kernel.kallsyms] [k] update_cfs_rq_blocked_load
0.05% ksoftirqd/1 [kernel.kallsyms] [k] update_rq_clock
0.05% as [kernel.kallsyms] [k] enqueue_entity
0.05% ksoftirqd/0 [kernel.kallsyms] [k] update_rq_clock
0.05% arm-none-eabi-g [kernel.kallsyms] [k] get_vtime_delta
0.05% arm-none-eabi-g [kernel.kallsyms] [k] check_preempt_wakeup
0.05% cc1 [kernel.kallsyms] [k] update_curr
0.04% ksoftirqd/3 [kernel.kallsyms] [k] update_rq_clock
0.04% python [kernel.kallsyms] [k] rb_erase
0.04% as [kernel.kallsyms] [k] put_prev_task_fair
0.04% ksoftirqd/3 [kernel.kallsyms] [k] schedule
0.04% sh [kernel.kallsyms] [k] _raw_spin_lock_irq
0.04% ksoftirqd/0 [kernel.kallsyms] [k] schedule
0.04% arm-none-eabi-g [kernel.kallsyms] [k] gs_change
0.04% sh [kernel.kallsyms] [k] place_entity
0.04% python [kernel.kallsyms] [k] rcu_note_context_switch
0.04% python [kernel.kallsyms] [k] schedule_user
0.04% python [kernel.kallsyms] [k] __raise_softirq_irqoff
0.04% postgres postgres [.] 0x00000000001fab33
0.04% sh [kernel.kallsyms] [k] cpuacct_charge
0.04% sh [kernel.kallsyms] [k] vtime_account_user
0.04% ksoftirqd/2 [kernel.kallsyms] [k] kthread_should_park
0.04% cc1plus [kernel.kallsyms] [k] native_load_gs_index
0.04% sh [kernel.kallsyms] [k] __vtime_account_system
0.04% sh [kernel.kallsyms] [k] arch_vtime_task_switch
0.04% ksoftirqd/2 [kernel.kallsyms] [k] _cond_resched
0.04% ksoftirqd/2 [kernel.kallsyms] [k] schedule
0.04% arm-none-eabi-g [kernel.kallsyms] [k] finish_task_switch
0.04% ksoftirqd/3 [kernel.kallsyms] [k] _cond_resched
0.04% arm-none-eabi-g [kernel.kallsyms] [k] user_exit
0.04% ksoftirqd/0 [kernel.kallsyms] [k] run_ksoftirqd
0.04% arm-none-eabi-g [kernel.kallsyms] [k] context_tracking_task_switch
0.04% python [kernel.kallsyms] [k] rcu_eqs_enter
0.04% sh [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore
0.04% python [kernel.kallsyms] [k] retint_careful
0.04% ksoftirqd/1 [kernel.kallsyms] [k] kthread_should_park
0.04% cc1 [kernel.kallsyms] [k] enqueue_entity
0.04% ksoftirqd/3 [kernel.kallsyms] [k] run_ksoftirqd
0.04% ksoftirqd/0 [kernel.kallsyms] [k] _cond_resched
0.04% ksoftirqd/1 [kernel.kallsyms] [k] _cond_resched
0.04% ksoftirqd/2 [kernel.kallsyms] [k] vtime_account_irq_enter
0.04% ksoftirqd/2 [kernel.kallsyms] [k] run_ksoftirqd
0.04% arm-none-eabi-g [kernel.kallsyms] [k] raise_softirq
0.04% ksoftirqd/1 [kernel.kallsyms] [k] schedule
0.04% ksoftirqd/1 [kernel.kallsyms] [k] vtime_account_irq_enter
0.04% sh [kernel.kallsyms] [k] rb_insert_color
0.04% ksoftirqd/3 [kernel.kallsyms] [k] vtime_account_irq_enter
0.04% arm-none-eabi-g [kernel.kallsyms] [k] resched_task
0.04% sh [kernel.kallsyms] [k] clear_buddies
0.04% cc1plus [kernel.kallsyms] [k] activate_task
0.04% sh [kernel.kallsyms] [k] enqueue_task
0.04% ksoftirqd/0 [kernel.kallsyms] [k] vtime_account_irq_enter
0.04% cc1plus [kernel.kallsyms] [k] rcu_user_enter
0.04% ksoftirqd/3 [kernel.kallsyms] [k] perf_event_task_sched_out
0.04% cc1 [kernel.kallsyms] [k] local_clock
0.04% ksoftirqd/0 [kernel.kallsyms] [k] perf_event_task_sched_out
0.04% ksoftirqd/1 [kernel.kallsyms] [k] perf_event_task_sched_out
0.04% as [kernel.kallsyms] [k] try_to_wake_up
0.04% ksoftirqd/1 [kernel.kallsyms] [k] run_ksoftirqd
0.03% as [kernel.kallsyms] [k] sched_clock_cpu
0.03% cc1plus [kernel.kallsyms] [k] rcu_user_exit
0.03% mate-terminal libgdk_pixbuf-2.0.so.0.2600.1 [.] 0x0000000000014868
0.03% cc1 [kernel.kallsyms] [k] put_prev_task_fair
0.03% sh [kernel.kallsyms] [k] check_preempt_curr
0.03% python [kernel.kallsyms] [k] rb_next
0.03% cc1 [kernel.kallsyms] [k] __acct_update_integrals
0.03% arm-none-eabi-g [kernel.kallsyms] [k] int_careful
0.03% sh [kernel.kallsyms] [k] account_user_time
0.03% sh [kernel.kallsyms] [k] update_cfs_shares
0.03% ksoftirqd/2 [kernel.kallsyms] [k] perf_event_task_sched_out
0.03% arm-none-eabi-g [kernel.kallsyms] [k] jiffies_to_timeval
0.03% arm-none-eabi-g [kernel.kallsyms] [k] vtime_account_user
0.03% arm-none-eabi-g [kernel.kallsyms] [k] pick_next_task_fair
0.03% ksoftirqd/2 [kernel.kallsyms] [k] ksoftirqd_should_run
0.03% swapper [kernel.kallsyms] [k] intel_idle
0.03% ksoftirqd/0 [kernel.kallsyms] [k] rcu_bh_qs
0.03% python [kernel.kallsyms] [k] wakeup_softirqd
0.03% ksoftirqd/2 [kernel.kallsyms] [k] rcu_bh_qs
0.03% cc1 [kernel.kallsyms] [k] try_to_wake_up
0.03% python [kernel.kallsyms] [k] wake_up_process
0.03% cc1 [kernel.kallsyms] [k] sched_clock_cpu
0.03% arm-none-eabi-g [kernel.kallsyms] [k] select_task_rq_fair
0.03% ksoftirqd/1 [kernel.kallsyms] [k] ksoftirqd_should_run
0.03% as [kernel.kallsyms] [k] __enqueue_entity
0.03% ksoftirqd/1 [kernel.kallsyms] [k] rcu_bh_qs
0.03% sh [kernel.kallsyms] [k] acct_account_cputime
0.03% arm-none-eabi-g [kernel.kallsyms] [k] set_next_entity
0.03% cc1plus [kernel.kallsyms] [k] hrtick_update
0.03% sh [kernel.kallsyms] [k] rcu_try_advance_all_cbs
0.03% ksoftirqd/3 [kernel.kallsyms] [k] rcu_bh_qs
0.03% arm-none-eabi-g [kernel.kallsyms] [k] clear_buddies
0.03% arm-none-eabi-g [kernel.kallsyms] [k] rb_insert_color
0.03% sh [kernel.kallsyms] [k] update_rq_clock
0.03% sh [kernel.kallsyms] [k] vtime_user_enter
0.03% as [kernel.kallsyms] [k] user_enter
0.03% cc1 [kernel.kallsyms] [k] gs_change
0.03% sh [kernel.kallsyms] [k] ttwu_do_wakeup
0.03% ksoftirqd/0 [kernel.kallsyms] [k] ksoftirqd_should_run
0.03% as [kernel.kallsyms] [k] account_system_time
0.03% sh [kernel.kallsyms] [k] set_next_buddy
0.03% python [kernel.kallsyms] [k] vtime_task_switch
0.03% perf [kernel.kallsyms] [k] memcpy
0.03% sh [kernel.kallsyms] [k] wakeup_preempt_entity.isra.39
0.03% as [kernel.kallsyms] [k] enqueue_task_fair
0.03% arm-none-eabi-g [kernel.kallsyms] [k] _raw_spin_lock_irqsave
0.03% sh [kernel.kallsyms] [k] update_cfs_rq_blocked_load
0.03% as [kernel.kallsyms] [k] rcu_eqs_enter_common.isra.44
0.03% arm-none-eabi-g [kernel.kallsyms] [k] place_entity
0.02% arm-none-eabi-g [kernel.kallsyms] [k] update_stats_wait_end
0.02% python [kernel.kallsyms] [k] calc_delta_mine
0.02% ksoftirqd/3 [kernel.kallsyms] [k] ksoftirqd_should_run
0.02% as [kernel.kallsyms] [k] rcu_eqs_exit_common.isra.43
0.02% arm-none-eabi-g [kernel.kallsyms] [k] _raw_spin_lock_irq
0.02% python [kernel.kallsyms] [k] invoke_rcu_core
0.02% cc1 [kernel.kallsyms] [k] __enqueue_entity
0.02% cc1plus libc-2.15.so [.] 0x0000000000127d50
0.02% sh [kernel.kallsyms] [k] schedule_user
0.02% python [kernel.kallsyms] [k] schedule
0.02% python [kernel.kallsyms] [k] int_with_check
0.02% sh [kernel.kallsyms] [k] rcu_eqs_exit
0.02% sh [kernel.kallsyms] [k] vtime_account_system
0.02% cc1 [kernel.kallsyms] [k] rcu_eqs_exit_common.isra.43
0.02% cc1 [kernel.kallsyms] [k] enqueue_task_fair
0.02% arm-none-eabi-g [kernel.kallsyms] [k] cpuacct_charge
0.02% sh [kernel.kallsyms] [k] __raise_softirq_irqoff
0.02% sh [kernel.kallsyms] [k] ttwu_stat
0.02% as [kernel.kallsyms] [k] check_preempt_wakeup
0.02% cc1 [kernel.kallsyms] [k] account_system_time
0.02% cc1 [kernel.kallsyms] [k] rcu_eqs_enter_common.isra.44
0.02% sh [kernel.kallsyms] [k] rb_erase
0.02% cc1plus [kernel.kallsyms] [k] retint_careful
0.02% as [kernel.kallsyms] [k] context_tracking_task_switch
0.02% cc1 libc-2.15.so [.] 0x0000000000127cd7
0.02% as [kernel.kallsyms] [k] get_vtime_delta
0.02% python [kernel.kallsyms] [k] task_waking_fair
0.02% sh [kernel.kallsyms] [k] ttwu_do_activate.constprop.87
0.02% sh [kernel.kallsyms] [k] rcu_note_context_switch
0.02% arm-none-eabi-g [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore
0.02% as [kernel.kallsyms] [k] user_exit
0.02% arm-none-eabi-g [kernel.kallsyms] [k] check_preempt_curr
0.02% ksoftirqd/2 [kernel.kallsyms] [k] deactivate_task
0.02% cc1 [kernel.kallsyms] [k] user_enter
0.02% arm-none-eabi-g [kernel.kallsyms] [k] __vtime_account_system
0.02% ksoftirqd/2 [kernel.kallsyms] [k] msecs_to_jiffies
0.02% ksoftirqd/3 [kernel.kallsyms] [k] msecs_to_jiffies
0.02% cc1 [kernel.kallsyms] [k] get_vtime_delta
0.02% cc1 [kernel.kallsyms] [k] check_preempt_wakeup
0.02% sh [kernel.kallsyms] [k] rcu_eqs_enter
0.02% arm-none-eabi-g [kernel.kallsyms] [k] arch_vtime_task_switch
0.02% arm-none-eabi-g [kernel.kallsyms] [k] enqueue_task
0.02% as [kernel.kallsyms] [k] select_task_rq_fair
0.02% ksoftirqd/0 [kernel.kallsyms] [k] deactivate_task
0.02% as [kernel.kallsyms] [k] raise_softirq
0.02% ksoftirqd/1 [kernel.kallsyms] [k] deactivate_task
0.02% as [kernel.kallsyms] [k] finish_task_switch
0.02% ksoftirqd/1 [kernel.kallsyms] [k] msecs_to_jiffies
0.02% sh [kernel.kallsyms] [k] wakeup_softirqd
0.02% ksoftirqd/3 [kernel.kallsyms] [k] hrtick_update
0.02% arm-none-eabi-g [kernel.kallsyms] [k] update_cfs_shares
0.02% ksoftirqd/0 [kernel.kallsyms] [k] msecs_to_jiffies
0.02% cc1 [kernel.kallsyms] [k] user_exit
0.02% as [kernel.kallsyms] [k] resched_task
0.02% ksoftirqd/2 [kernel.kallsyms] [k] native_load_gs_index
0.02% sh [kernel.kallsyms] [k] int_with_check
0.02% ksoftirqd/3 [kernel.kallsyms] [k] deactivate_task
0.02% arm-none-eabi-g [kernel.kallsyms] [k] rcu_try_advance_all_cbs
0.02% python [kernel.kallsyms] [k] activate_task
0.02% cc1 [kernel.kallsyms] [k] context_tracking_task_switch
0.02% cc1 [kernel.kallsyms] [k] resched_task
0.02% arm-none-eabi-g [kernel.kallsyms] [k] account_user_time
0.02% sh [kernel.kallsyms] [k] wake_up_process
0.02% as [kernel.kallsyms] [k] int_careful
0.02% sh [kernel.kallsyms] [k] perf_event_task_sched_out
0.02% cc1 [kernel.kallsyms] [k] raise_softirq
0.02% arm-none-eabi-g [kernel.kallsyms] [k] ttwu_do_wakeup
0.02% as [kernel.kallsyms] [k] pick_next_task_fair
0.01% sh [kernel.kallsyms] [k] rb_next
0.01% arm-none-eabi-g [kernel.kallsyms] [k] ttwu_stat
0.01% arm-none-eabi-g [kernel.kallsyms] [k] acct_account_cputime
0.01% perf [kernel.kallsyms] [k] crypto_xor
0.01% arm-none-eabi-g [kernel.kallsyms] [k] wakeup_preempt_entity.isra.39
0.01% python [kernel.kallsyms] [k] rcu_user_enter
0.01% cc1 [kernel.kallsyms] [k] finish_task_switch
0.01% as [kernel.kallsyms] [k] set_next_entity
0.01% sh [kernel.kallsyms] [k] calc_delta_mine
0.01% arm-none-eabi-g [kernel.kallsyms] [k] set_next_buddy
0.01% ksoftirqd/1 [kernel.kallsyms] [k] hrtick_update
0.01% arm-none-eabi-g [kernel.kallsyms] [k] update_cfs_rq_blocked_load
0.01% as [kernel.kallsyms] [k] _raw_spin_lock_irqsave
0.01% as [kernel.kallsyms] [k] jiffies_to_timeval
0.01% python [kernel.kallsyms] [k] rcu_user_exit
0.01% cc1 [kernel.kallsyms] [k] select_task_rq_fair
0.01% python [kernel.kallsyms] [k] aes_decrypt
0.01% as [kernel.kallsyms] [k] rb_insert_color
0.01% sh [kernel.kallsyms] [k] vtime_task_switch
0.01% arm-none-eabi-g [kernel.kallsyms] [k] update_rq_clock
0.01% arm-none-eabi-g [kernel.kallsyms] [k] rcu_eqs_exit
0.01% arm-none-eabi-g [kernel.kallsyms] [k] schedule_user
0.01% sh [kernel.kallsyms] [k] invoke_rcu_core
0.01% cc1 [kernel.kallsyms] [k] vtime_account_user
0.01% ksoftirqd/2 [kernel.kallsyms] [k] hrtick_update
0.01% arm-none-eabi-g [kernel.kallsyms] [k] vtime_user_enter
0.01% sh [kernel.kallsyms] [k] schedule
0.01% ksoftirqd/0 [kernel.kallsyms] [k] hrtick_update
0.01% arm-none-eabi-g [kernel.kallsyms] [k] rcu_eqs_enter
0.01% cc1 [kernel.kallsyms] [k] clear_buddies
0.01% as [kernel.kallsyms] [k] vtime_account_user
0.01% cc1 [kernel.kallsyms] [k] pick_next_task_fair
0.01% cc1 [kernel.kallsyms] [k] update_stats_wait_end
0.01% arm-none-eabi-g [kernel.kallsyms] [k] ttwu_do_activate.constprop.87
0.01% arm-none-eabi-g [kernel.kallsyms] [k] rcu_note_context_switch
0.01% as [kernel.kallsyms] [k] gs_change
0.01% Xorg Xorg [.] 0x0000000000046479
0.01% arm-none-eabi-g [kernel.kallsyms] [k] vtime_account_system
0.01% cc1 [kernel.kallsyms] [k] jiffies_to_timeval
0.01% as [kernel.kallsyms] [k] arch_vtime_task_switch
0.01% as [kernel.kallsyms] [k] clear_buddies
0.01% as [kernel.kallsyms] [k] update_stats_wait_end
0.01% as [kernel.kallsyms] [k] aes_encrypt
0.01% cc1 [kernel.kallsyms] [k] int_careful
0.01% python [kernel.kallsyms] [k] hrtick_update
0.01% ksoftirqd/0 [kernel.kallsyms] [k] native_load_gs_index
0.01% as [kernel.kallsyms] [k] place_entity
0.01% cc1 [kernel.kallsyms] [k] set_next_entity
0.01% arm-none-eabi-g [kernel.kallsyms] [k] rb_erase
0.01% cc1 [kernel.kallsyms] [k] _raw_spin_lock_irqsave
0.01% arm-none-eabi-g [kernel.kallsyms] [k] perf_event_task_sched_out
0.01% as [kernel.kallsyms] [k] cpuacct_charge
0.01% as [kernel.kallsyms] [k] _raw_spin_lock_irq
0.01% as [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore
0.01% sh [kernel.kallsyms] [k] task_waking_fair
0.01% ksoftirqd/1 [kernel.kallsyms] [k] native_load_gs_index
0.01% cc1 [kernel.kallsyms] [k] _raw_spin_lock_irq
0.01% as [kernel.kallsyms] [k] __vtime_account_system
0.01% arm-none-eabi-g [kernel.kallsyms] [k] __raise_softirq_irqoff
0.01% cc1 [kernel.kallsyms] [k] place_entity
0.01% python [kernel.kallsyms] [k] aes_encrypt
0.01% ksoftirqd/3 [kernel.kallsyms] [k] native_load_gs_index
0.01% as libc-2.15.so [.] 0x0000000000134c3a
0.01% as [kernel.kallsyms] [k] enqueue_task
0.01% as [kernel.kallsyms] [k] account_user_time
0.01% arm-none-eabi-g [kernel.kallsyms] [k] rb_next
0.01% cc1 [kernel.kallsyms] [k] __vtime_account_system
0.01% cc1 [kernel.kallsyms] [k] rb_insert_color
0.01% arm-none-eabi-g [kernel.kallsyms] [k] wakeup_softirqd
0.01% arm-none-eabi-g [kernel.kallsyms] [k] int_with_check
0.01% cc1 [kernel.kallsyms] [k] arch_vtime_task_switch
0.01% as [kernel.kallsyms] [k] check_preempt_curr
0.01% cc1 [kernel.kallsyms] [k] cpuacct_charge
0.01% arm-none-eabi-g [kernel.kallsyms] [k] wake_up_process
0.01% as [kernel.kallsyms] [k] update_cfs_shares
0.01% multiload-apple libc-2.15.so [.] 0x00000000000f4f57
0.01% as [kernel.kallsyms] [k] update_rq_clock
0.01% cc1 [kernel.kallsyms] [k] account_user_time
0.01% perf [kernel.kallsyms] [k] crypto_cbc_encrypt
0.01% as [kernel.kallsyms] [k] acct_account_cputime
0.01% cc1 [kernel.kallsyms] [k] enqueue_task
0.01% cc1 [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore
0.01% as [kernel.kallsyms] [k] ttwu_do_wakeup
0.01% cc1 [kernel.kallsyms] [k] check_preempt_curr
0.01% arm-none-eabi-g [kernel.kallsyms] [k] vtime_task_switch
0.01% python [kernel.kallsyms] [k] page_fault
0.01% as [kernel.kallsyms] [k] update_cfs_rq_blocked_load
0.01% sh [kernel.kallsyms] [k] activate_task
0.01% arm-none-eabi-g [kernel.kallsyms] [k] schedule
0.01% as [kernel.kallsyms] [k] rcu_try_advance_all_cbs
0.01% sh [kernel.kallsyms] [k] rcu_user_exit
0.01% sh [kernel.kallsyms] [k] rcu_user_enter
0.01% as [kernel.kallsyms] [k] set_next_buddy
0.01% python [kernel.kallsyms] [k] crypto_aes_expand_key
0.01% as [kernel.kallsyms] [k] ttwu_stat
0.01% ksoftirqd/0 [iwlwifi] [k] iwl_trans_pcie_read32
0.01% cc1 [kernel.kallsyms] [k] update_cfs_rq_blocked_load
0.01% cc1 [kernel.kallsyms] [k] ttwu_do_wakeup
0.01% arm-none-eabi-g [kernel.kallsyms] [k] invoke_rcu_core
0.01% cc1 [kernel.kallsyms] [k] update_cfs_shares
0.01% as [kernel.kallsyms] [k] rb_erase
0.01% cc1plus [iwlwifi] [k] iwl_trans_pcie_read32
0.01% arm-none-eabi-g [kernel.kallsyms] [k] calc_delta_mine
0.01% as [kernel.kallsyms] [k] rcu_eqs_exit
0.01% as [kernel.kallsyms] [k] vtime_account_system
0.01% cc1 [kernel.kallsyms] [k] acct_account_cputime
0.01% postgres libc-2.15.so [.] 0x00000000000c0707
0.01% as [kernel.kallsyms] [k] rcu_note_context_switch
0.01% cc1 [kernel.kallsyms] [k] rcu_eqs_enter
0.01% as [kernel.kallsyms] [k] vtime_user_enter
0.01% cc1 [kernel.kallsyms] [k] rcu_try_advance_all_cbs
0.01% cc1 [kernel.kallsyms] [k] vtime_user_enter
0.01% as [kernel.kallsyms] [k] rcu_eqs_enter
0.01% cc1 [kernel.kallsyms] [k] wakeup_preempt_entity.isra.39
0.01% as [kernel.kallsyms] [k] schedule_user
0.01% perf [kernel.kallsyms] [k] __switch_to
0.01% as [kernel.kallsyms] [k] wakeup_preempt_entity.isra.39
0.01% cc1 [kernel.kallsyms] [k] perf_event_task_sched_out
0.01% python [kernel.kallsyms] [k] str2hashbuf_signed
0.01% as [kernel.kallsyms] [k] __raise_softirq_irqoff
0.01% as [kernel.kallsyms] [k] aes_decrypt
0.01% Xorg libc-2.15.so [.] 0x0000000000080b76
0.01% zabbix_agentd libc-2.15.so [.] 0x000000000007c65d
0.01% cc1 [kernel.kallsyms] [k] vtime_account_system
0.01% as [kernel.kallsyms] [k] ttwu_do_activate.constprop.87