From: Jiri Olsa <jolsa@redhat.com>
To: "Jin, Yao" <yao.jin@linux.intel.com>
Cc: acme@kernel.org, jolsa@kernel.org, peterz@infradead.org,
	mingo@redhat.com, alexander.shishkin@linux.intel.com,
	linux-kernel@vger.kernel.org, ak@linux.intel.com,
	kan.liang@intel.com, yao.jin@intel.com
Subject: Re: [PATCH] perf evsel: Get group fd from CPU0 for system wide event
Date: Tue, 5 May 2020 02:03:52 +0200
Message-ID: <20200505000352.GH1916255@krava>
In-Reply-To: <b799b66a-42aa-6c55-647e-7b718473632a@linux.intel.com>

On Sat, May 02, 2020 at 10:33:59AM +0800, Jin, Yao wrote:

SNIP

> > > @@ -1461,6 +1461,9 @@ static int get_group_fd(struct evsel *evsel, int cpu, int thread)
> > >   	BUG_ON(!leader->core.fd);
> > >   	fd = FD(leader, cpu, thread);
> > > +	if (fd == -1 && leader->core.system_wide)
> > 
> > fd does not need to be -1 here.. in my setup cstate_pkg/c2-residency/
> > has cpumask 0, so the other cpus never get opened and stay 0, and the
> > whole thing ends up with:
> > 
> > 	sys_perf_event_open: pid -1  cpu 1  group_fd 0  flags 0
> > 	sys_perf_event_open failed, error -9
> > 
> > I actually thought we put -1 into the fd array but couldn't find it.. perhaps we should do that
> > 
> > 
> 
> I have tested on two platforms. On a KBL desktop fd is 0 in this case, but
> on a Cascade Lake (cascadelakex) server fd is -1, so the BUG_ON(fd == -1)
> is triggered.
> 
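If the fd table is just the zero-allocated xyarray from tools/lib/perf
(an assumption on my side; it would match the 0 seen on KBL, though I
have not tracked down where the -1 on the cascadelakex box comes from),
the stale-slot behaviour is easy to reproduce in plain C. This is only
an illustration, with calloc() standing in for the real xyarray:

	#include <stdio.h>
	#include <stdlib.h>

	int main(void)
	{
		int ncpus = 4, nthreads = 1;

		/* stands in for evsel->core.fd, which starts out zeroed */
		int *fd = calloc(ncpus * nthreads, sizeof(int));

		if (!fd)
			return 1;

		/* leader has cpumask 0, so only the cpu 0 slot is filled */
		fd[0] = 42;	/* pretend fd from sys_perf_event_open() */

		/*
		 * a member event opening on cpu 1 picks up the stale 0
		 * as group_fd, and the kernel rejects it with EBADF (-9),
		 * matching the trace above
		 */
		printf("group_fd for cpu 1: %d\n", fd[1]);

		free(fd);
		return 0;
	}
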
> > > +		fd = FD(leader, 0, thread);
> > > +
> > 
> > so how do we group following events?
> > 
> >    cstate_pkg/c2-residency/ - cpumask 0
> >    msr/tsc/                 - all cpus
> > 
> 
> I'm not sure it's enough to use only cpumask 0, because
> cstate_pkg/c2-residency/ should be per-socket.
> 
> > cpu 0 is fine.. the rest I have no idea ;-)
> > 
> 
> Perhaps we should just remove the BUG_ON(fd == -1) assertion?

I think we need to make clear how to deal with grouping over
events that come from different cpus

	so how do we group following events?
	
	   cstate_pkg/c2-residency/ - cpumask 0
	   msr/tsc/                 - all cpus


what's the reason for grouping the above events, and what output is
expected? it seems to make sense only if we limit msr/tsc/ to cpumask 0
as well
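
To make that concrete, a rough sketch of what limiting the group to the
leader's cpus could look like at setup time. Both the function name and
cpu_map__intersect() are hypothetical (nothing in the tree does this
today), and cpu map refcounting is ignored, so read it as a sketch of
the idea rather than a patch:

	/*
	 * Clamp each group member to its leader's cpu map before the
	 * events are opened, so that FD(leader, cpu, thread) is valid
	 * for every cpu a member opens on.  cpu_map__intersect() is a
	 * hypothetical helper: "keep only cpus present in both maps".
	 */
	static void evlist__limit_group_cpus(struct evlist *evlist)
	{
		struct evsel *evsel;

		evlist__for_each_entry(evlist, evsel) {
			struct evsel *leader = evsel->leader;

			if (evsel == leader)
				continue;

			/*
			 * e.g. leader cstate_pkg/c2-residency/ has cpumask 0
			 * while member msr/tsc/ covers all cpus: after this
			 * the member opens only on cpu 0 as well
			 */
			evsel->core.cpus = cpu_map__intersect(evsel->core.cpus,
							      leader->core.cpus);
		}
	}

With something like that, get_group_fd() would never look up a leader fd
on a cpu the leader was not opened on, and the BUG_ON() could stay.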

jirka

