From: "Martin J. Bligh" <mbligh@aracnet.com>
To: William Lee Irwin III <wli@holomorphy.com>
Cc: linux-kernel@vger.kernel.org
Subject: Re: percpu-2.5.63-bk5-1 (properly generated)
Date: Sun, 02 Mar 2003 16:07:01 -0800
Message-ID: <88060000.1046650020@[10.10.2.4]>
In-Reply-To: <20030302234252.GL1195@holomorphy.com>
>> Still degraded: diffprofile:
>>    781    1.6%  total
>>    346    1.0%  default_idle
>>    217   10.1%  __down
>>     79   12.0%  __wake_up
>>     51   70.8%  page_address
>>     32   66.7%  kmap_atomic
>>     24    5.3%  page_remove_rmap
>>     16   19.3%  clear_page_tables
>>     14    4.6%  release_pages
>>     13   33.3%  path_release
>>     13    6.7%  __copy_to_user_ll
>>     13  260.0%  bad_range
>>     11    1.3%  do_schedule
>>     10   15.6%  pte_alloc_one
>
> The largest issue is probably idle time, which appears to have gone up
> enormously in absolute terms. I'll split the pieces out and see what
> happens. From this it looks like the indirection is a slowdown, but the
> cost in absolute terms is insignificant, as there aren't enough samples.
>
> There's no clear reason __down() or __wake_up() should have become
> more expensive; I'd really like an instruction-level profile. AFAICT
> node_nr_running is 100% harmless instruction-wise, unless the copy
> propagated a nonzero value (which would be a bug). The per_cpu
> runqueues are largely an unknown, but their cost would be accounted
> to schedule(), which is not particularly offensive with respect to
> additional cpu time.
>
> Some kind of dump of internal scheduler statistics to verify they've
> been faithfully preserved would help also. Instruction-level cpu and
> cache profiling would also be helpful. There may very well be an odd
> cache coloring conflict at work here. If it's too big to take on, I
> might need some kind of help or a pointer to a package so I don't have
> to crap all over userspace for the benchmark. I may also need a .config
> in order to reproduce the usual bullcrap like (#@%$ing) link order.
I think you'd be better off profiling the improvement you saw, and working
out where that comes from.
Failing that, if you can split it into 3 or 4 patches along the lines I
suggested earlier, I'll try benchmarking each piece separately for you.
M.
Thread overview: 16+ messages
2003-03-02 18:24 percpu-2.5.63-bk5-1 (properly generated) Martin J. Bligh
2003-03-02 20:24 ` William Lee Irwin III
2003-03-02 20:46 ` Martin J. Bligh
2003-03-02 21:06 ` William Lee Irwin III
2003-03-02 21:58 ` Martin J. Bligh
2003-03-02 22:10 ` William Lee Irwin III
2003-03-02 23:13 ` Martin J. Bligh
2003-03-02 23:42 ` William Lee Irwin III
2003-03-03 0:07 ` Martin J. Bligh [this message]
2003-03-03 1:43 ` William Lee Irwin III
2003-03-03 17:40 ` Martin J. Bligh
2003-03-03 22:51 ` William Lee Irwin III
2003-03-03 23:30 ` Martin J. Bligh
2003-03-04 0:14 ` William Lee Irwin III
-- strict thread matches above, loose matches on Subject: below --
2003-03-02 11:07 William Lee Irwin III
2003-03-02 13:15 ` William Lee Irwin III