From: Con Kolivas <kernel@kolivas.org>
To: malc <av1474@comtv.ru>
Cc: Pavel Machek <pavel@ucw.cz>, linux-kernel@vger.kernel.org
Subject: Re: CPU load
Date: Wed, 14 Feb 2007 19:09:09 +1100
Message-ID: <200702141909.09848.kernel@kolivas.org>
In-Reply-To: <Pine.LNX.4.64.0702141023000.672@home.oyster.ru>
On Wednesday 14 February 2007 18:28, malc wrote:
> On Wed, 14 Feb 2007, Con Kolivas wrote:
> > On Wednesday 14 February 2007 09:01, malc wrote:
> >> On Mon, 12 Feb 2007, Pavel Machek wrote:
> >>> Hi!
>
> [..snip..]
>
> >>> I have (had?) code that 'exploits' this. I believe I could eat 90% of
> >>> cpu without being noticed.
> >>
> >> A slightly changed version of hog (around 3 lines changed in total)
> >> does that easily on 2.6.18.3 on PPC.
> >>
> >> http://www.boblycat.org/~malc/apc/load-hog-ppc.png
> >
> > I guess it's worth mentioning that this is _only_ about how cpu
> > usage is displayed to userspace; the cpu scheduler tracks each
> > task's accounting by other means. This behaviour cannot be used to
> > exploit the cpu scheduler into a starvation situation. Using the
> > discrete per-process accounting to accumulate the values displayed
> > to userspace would fix this problem, but would be expensive.
>
> Guess you are right, but, once again, the problem is not so much about
> fooling the system into doing something or other as about confusing the
> user:
Yes, and I am certainly not arguing against that.
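
To make the hole concrete for anyone following along, here is a rough
sketch of the principle - not malc's hog (his changed version wasn't
posted), and it assumes HZ=100 with tick-granular sleeps, as on 2.6.18
without high resolution timers:

/*
 * Burn cpu for most of a timer tick, then sleep across the tick
 * boundary.  Tick-sampled accounting charges the whole tick to
 * whoever is running when the timer interrupt fires, so this
 * process is seen as ~0% busy while eating ~90% of the cpu.
 */
#include <time.h>

static long long now_ns(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

int main(void)
{
	const long long burn_ns = 9 * 1000 * 1000; /* ~90% of a 10ms tick */

	for (;;) {
		long long start = now_ns();
		while (now_ns() - start < burn_ns)
			; /* spin; real work would go here */
		/*
		 * Any sleep is rounded up to the next tick, so we are
		 * guaranteed to be off the cpu when the sampling
		 * interrupt fires.
		 */
		struct timespec ts = { 0, 1 };
		nanosleep(&ts, NULL);
	}
}

Each iteration wakes just after a tick, spins for ~9ms, and is asleep
again before the next tick arrives, so the sampler never catches it.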
>
> a. Everything is fine - the load is 0% - so the fact that the system
>    is overheating and/or that some processes do not do as much as they
>    could is probably due to bad hardware.
>
> b. The weird load pattern must be the result of bugs in my code.
>    (And then a whole lot of time/effort is poured into fixing a
>    problem which is simply not there.)
>
> The current situation ought to be documented. Better yet, some flag
> could be introduced somewhere in the system so that it exports real
> values to /proc, not estimations that are inaccurate in some cases
> (like hog).
I wouldn't argue against either of those. I believe schedstats, with
userspace tools to interpret the data, will give better information.
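
For example, with CONFIG_SCHEDSTATS enabled each task already exports
its accounting via /proc/<pid>/schedstat; a trivial reader might look
like this (just a sketch - the units of the fields have varied between
kernel versions, so check the sched-stats documentation in your tree):

/*
 * Print the three /proc/<pid>/schedstat fields: time spent running,
 * time spent waiting on a runqueue, and number of timeslices run.
 */
#include <stdio.h>

int main(int argc, char **argv)
{
	char path[64];
	unsigned long long ran, waited, slices;
	FILE *f;

	snprintf(path, sizeof path, "/proc/%s/schedstat",
		 argc > 1 ? argv[1] : "self");

	f = fopen(path, "r");
	if (!f || fscanf(f, "%llu %llu %llu", &ran, &waited, &slices) != 3) {
		perror(path);
		return 1;
	}
	fclose(f);

	printf("ran=%llu waited=%llu timeslices=%llu\n",
	       ran, waited, slices);
	return 0;
}

Sampling the first field twice over a known interval gives a per-task
utilisation figure, though how well it resists a hog depends on whether
the kernel accounts it with a fine grained clock at context switch or
in jiffies; in the latter case it inherits the same granularity
problem.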
--
-ck