From: Mike Galbraith <bitbucket@online.de>
To: Michael Wang <wangyun@linux.vnet.ibm.com>
Cc: linux-kernel@vger.kernel.org, mingo@redhat.com,
peterz@infradead.org, mingo@kernel.org, a.p.zijlstra@chello.nl
Subject: Re: [RFC PATCH 0/2] sched: simplify the select_task_rq_fair()
Date: Mon, 21 Jan 2013 10:11:16 +0100 [thread overview]
Message-ID: <1358759476.4994.110.camel@marge.simpson.net> (raw)
In-Reply-To: <50FD005C.8040402@linux.vnet.ibm.com>
On Mon, 2013-01-21 at 16:46 +0800, Michael Wang wrote:
> On 01/21/2013 04:26 PM, Mike Galbraith wrote:
> > On Mon, 2013-01-21 at 15:34 +0800, Michael Wang wrote:
> >> On 01/21/2013 02:42 PM, Mike Galbraith wrote:
> >>> On Mon, 2013-01-21 at 13:07 +0800, Michael Wang wrote:
> >>>
> >>>> That seems like the default one, could you please show me the numbers in
> >>>> your datapoint file?
> >>>
> >>> Yup, I didn't touch the workfile. The datapoints are what you see in
> >>> the tabulated result...
> >>>
> >>> 1
> >>> 1
> >>> 1
> >>> 5
> >>> 5
> >>> 5
> >>> 10
> >>> 10
> >>> 10
> >>> ...
> >>>
> >>> so it does three consecutive runs at each load level. I quiesce the
> >>> box, set the governor to performance, echo 250 32000 32 4096
> >>> > /proc/sys/kernel/sem, then run ./multitask -nl -f and point it
> >>> at ./datapoints.
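
[Editorial aside: /proc/sys/kernel/sem packs four System V semaphore limits
into one line; per proc(5) the field order is SEMMSL, SEMMNS, SEMOPM, SEMMNI.
A minimal sketch of reading a setting like the one quoted above; the
parse_sem helper name is mine, not part of any kernel interface.]

```python
# Sketch: split a /proc/sys/kernel/sem line into its four named fields.
# Field order (SEMMSL, SEMMNS, SEMOPM, SEMMNI) follows proc(5); the
# helper itself is purely illustrative.
def parse_sem(line):
    names = ("SEMMSL", "SEMMNS", "SEMOPM", "SEMMNI")
    return dict(zip(names, map(int, line.split())))

print(parse_sem("250 32000 32 4096"))
# {'SEMMSL': 250, 'SEMMNS': 32000, 'SEMOPM': 32, 'SEMMNI': 4096}
```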
> >>
> >> I have changed the "/proc/sys/kernel/sem" to:
> >>
> >> 2000 2048000 256 1024
> >>
> >> and ran a few rounds; it seems I can't reproduce this issue on my
> >> 12-cpu x86 server:
> >>
> >>                  prev        post
> >> Tasks        jobs/min    jobs/min
> >>     1          508.39      506.69
> >>     5         2792.63     2792.63
> >>    10         5454.55     5449.64
> >>    20        10262.49    10271.19
> >>    40        18089.55    18184.55
> >>    80        28995.22    28960.57
> >>   160        41365.19    41613.73
> >>   320        53099.67    52767.35
> >>   640        61308.88    61483.83
> >>  1280        66707.95    66484.96
> >>  2560        69736.58    69350.02
> >>
> >> Almost nothing changed... I'd like to find another machine and redo
> >> the test later.
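
[Editorial aside: "almost nothing changed" can be quantified from the table
above. A quick sketch comparing a few load levels, with the jobs/min
figures copied verbatim from the quoted results:]

```python
# Sketch: per-load-level throughput delta (post vs. prev), in percent,
# for a sample of the load levels tabulated above.
prev = {1: 508.39, 320: 53099.67, 2560: 69736.58}
post = {1: 506.69, 320: 52767.35, 2560: 69350.02}

for tasks in prev:
    delta = (post[tasks] - prev[tasks]) / prev[tasks] * 100.0
    print(f"{tasks:>5} tasks: {delta:+.2f}%")
```

Every delta is well under 1%, i.e. noise-level for this benchmark.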
> >
> > Hm. Those numbers look odd. OK, I've got 8 more cores, but your
> > heavy-load throughput is low. When I look at the low-end numbers, your
> > cores seem more macho than my 2.27 GHz EX cores, so the results should
> > have been a lot closer. Oh wait, you said "12 cpu".. so one 6-core
> > package + HT? This box is 2 NUMA nodes (was 4), with 2 (was 4) 10-core
> > packages + HT.
>
> It's a 12-core package, with only 1 physical cpu:
>
> Intel(R) Xeon(R) CPU X5690 @ 3.47GHz
>
> So does that mean the issue is related to the case where there are
> multiple nodes?
Seems likely. I had 4 nodes earlier though, and did NOT see collapse.
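
[Editorial aside: for anyone wanting to check whether their own test box
spans multiple nodes, a hedged sketch that counts the sysfs node
directories numactl --hardware reports from; Linux-only, path assumed from
the standard sysfs layout:]

```python
import os

# Sketch: count NUMA nodes by listing /sys/devices/system/node, the
# sysfs directory NUMA tools read (Linux-only; path is an assumption).
def numa_node_count(sysfs="/sys/devices/system/node"):
    try:
        entries = os.listdir(sysfs)
    except FileNotFoundError:
        return 1  # no sysfs here: treat as a single node
    nodes = [d for d in entries if d.startswith("node") and d[4:].isdigit()]
    return max(1, len(nodes))
```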
-Mike