From: Doug Smythies <dsmythies@telus.net>
To: Pratik Sampat <psampat@linux.ibm.com>
Cc: rjw@rjwysocki.net, Daniel Lezcano <daniel.lezcano@linaro.org>,
	shuah@kernel.org, ego@linux.vnet.ibm.com, svaidy@linux.ibm.com,
	Linux PM list <linux-pm@vger.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	linux-kselftest@vger.kernel.org, pratik.r.sampat@gmail.com,
	dsmythies <dsmythies@telus.net>
Subject: Re: [RFC v3 0/2] CPU-Idle latency selftest framework
Date: Fri, 9 Apr 2021 07:26:38 -0700
Message-ID: <CAAYoRsXqUpkVxDuRUoapBJ__EUPbMBSWJ7QigVcKbr6ApRxzbg@mail.gmail.com>
In-Reply-To: <0a4b32e0-426e-4886-ae37-6d0bdafdea7f@linux.ibm.com>

On Fri, Apr 9, 2021 at 12:43 AM Pratik Sampat <psampat@linux.ibm.com> wrote:
> On 09/04/21 10:53 am, Doug Smythies wrote:
> > I tried V3 on an Intel i5-10600K processor with 6 cores and 12 CPUs.
> > The core to cpu mappings are:
> > core 0 has cpus 0 and 6
> > core 1 has cpus 1 and 7
> > core 2 has cpus 2 and 8
> > core 3 has cpus 3 and 9
> > core 4 has cpus 4 and 10
> > core 5 has cpus 5 and 11
> >
> > By default, it will test CPUs 0,2,4,6,8,10 on cores 0,2,4,0,2,4.
> > Wouldn't it make more sense to test each core once?
>
> Ideally it would be better to run on all the CPUs; however, on the larger
> systems I'm testing on, with hundreds of cores and a high thread count,
> the execution time increases without bringing any additional information
> to the table.
>
> That is why it made sense to run on only one of the threads of each core,
> making the experiment faster while preserving accuracy.
>
> To handle various thread topologies, it may be worthwhile to parse
> /sys/devices/system/cpu/cpuX/topology/thread_siblings_list for each CPU
> and use that information to run only once per physical core, rather than
> assuming the topology.
>
> What are your thoughts on a mechanism like this?

Yes, seems like a good solution.
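
Something along those lines should work. Here is a rough, untested sketch
(mine, not from the patch set), assuming the usual sysfs cpulist formats
such as "0,6" or "0-1":

    # Sketch only: visit each physical core exactly once by keeping
    # the first listed sibling of every core.
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        siblings=$(cat "$cpu"/topology/thread_siblings_list)
        first=${siblings%%[,-]*}   # first CPU of the sibling list
        [ "$(basename "$cpu")" = "cpu$first" ] && echo "$first"
    done

On the i5-10600K above, that would print 0 through 5: one CPU per core,
so each core gets tested exactly once.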

... Doug

Thread overview: 9+ messages
2021-04-04  8:33 [RFC v3 0/2] CPU-Idle latency selftest framework Pratik Rajesh Sampat
2021-04-04  8:33 ` [RFC v3 1/2] cpuidle: Extract IPI based and timer based wakeup latency from idle states Pratik Rajesh Sampat
2021-04-04  8:33 ` [RFC v3 2/2] selftest/cpuidle: Add support for cpuidle latency measurement Pratik Rajesh Sampat
2021-04-09  5:23 ` [RFC v3 0/2] CPU-Idle latency selftest framework Doug Smythies
2021-04-09  7:43   ` Pratik Sampat
2021-04-09 14:26     ` Doug Smythies [this message]
2023-09-11  5:36 Aboorva Devarajan
2023-09-25  5:06 ` Aboorva Devarajan
2023-10-12  4:48   ` Aboorva Devarajan
