From: Song Liu <songliubraving@fb.com>
To: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@kernel.org>,
	Arnaldo Carvalho de Melo <acme@kernel.org>,
	Jiri Olsa <jolsa@redhat.com>, Mark Rutland <mark.rutland@arm.com>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	LKML <linux-kernel@vger.kernel.org>,
	Stephane Eranian <eranian@google.com>,
	Andi Kleen <ak@linux.intel.com>,
	"Ian Rogers" <irogers@google.com>, Tejun Heo <tj@kernel.org>
Subject: Re: [PATCH 1/2] perf/core: Share an event with multiple cgroups
Date: Sun, 28 Mar 2021 17:17:03 +0000
Message-ID: <180E871A-7C5E-4DB2-80F4-643237A7FA69@fb.com>
In-Reply-To: <20210323162156.1340260-2-namhyung@kernel.org>



> On Mar 23, 2021, at 9:21 AM, Namhyung Kim <namhyung@kernel.org> wrote:
> 
> As we can run many jobs (in containers) on a big machine, we want to
> measure each job's performance during the run.  To do that, a
> perf_event can be associated with a cgroup so that it measures only
> that cgroup.
> 
> However, such cgroup events need to be opened separately, which causes
> significant overhead in event multiplexing during context switches, as
> well as resource consumption in file descriptors and memory footprint.
> 
> As a cgroup event is basically a cpu event, we can share a single cpu
> event among multiple cgroups.  All we need is a separate counter (and
> two timing variables) for each cgroup.  I added a hash table to map
> from a cgroup id to its attached cgroup info.
> 
> With this change, the cpu event needs to calculate a delta of the
> event counter values when the cgroups of the current task and the
> next task differ, and it attributes the delta to the current task's
> cgroup.
> 
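So IIUC, conceptually the switch-out path does something like the sketch
below (the field and helper names here are mine for illustration, not
the patch's):

	/* charge the count accrued since the last cgroup switch to the
	 * outgoing task's cgroup node, then rebase for the next period */
	u64 now = read_counter(event);                /* hypothetical helper */
	node = find_cgroup_node(event, prev_cgrp_id); /* hash-table lookup */
	node->count += now - event->cgrp_prev_count;
	event->cgrp_prev_count = now;
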
> This patch adds two new ioctl commands to perf_event for lightweight
> cgroup event counting (i.e. perf stat).
> 
> * PERF_EVENT_IOC_ATTACH_CGROUP - takes a buffer consisting of a
>     64-bit array to attach the given cgroups.  The first element is
>     the number of cgroups in the buffer, and the rest is the list of
>     cgroup ids for which cgroup info is added to the given event.
> 
> * PERF_EVENT_IOC_READ_CGROUP - takes a buffer consisting of a 64-bit
>     array to get the event counter values.  The first element is the
>     size of the array in bytes, and the second element is the cgroup
>     id to read.  The rest is used to save the counter value and
>     timings.
> 
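IIUC, user space would drive these roughly as below (a sketch under my
reading of the layout above; event_fd, cgrp_id_a, and cgrp_id_b are
placeholders, and the 64-bit cgroup ids would come e.g. from
name_to_handle_at() on the cgroup directory):

	/* attach: element 0 is the number of cgroups, then the ids */
	__u64 attach_buf[3] = { 2, cgrp_id_a, cgrp_id_b };
	ioctl(event_fd, PERF_EVENT_IOC_ATTACH_CGROUP, attach_buf);

	/* read: element 0 is the buffer size in bytes, element 1 the
	 * cgroup id to read; the counter value and timings are written
	 * back into the remaining slots */
	__u64 read_buf[4] = { sizeof(read_buf), cgrp_id_a, 0, 0 };
	ioctl(event_fd, PERF_EVENT_IOC_READ_CGROUP, read_buf);
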
> This attaches all cgroups in a single syscall; I deliberately didn't
> add a DETACH command, to keep the implementation simple.  The attached
> cgroup nodes are deleted when the file descriptor of the perf_event is
> closed.
> 
> Cc: Tejun Heo <tj@kernel.org>
> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> ---
> include/linux/perf_event.h      |  22 ++
> include/uapi/linux/perf_event.h |   2 +
> kernel/events/core.c            | 474 ++++++++++++++++++++++++++++++--
> 3 files changed, 471 insertions(+), 27 deletions(-)

[...]

> @@ -4461,6 +4787,8 @@ static void __perf_event_init_context(struct perf_event_context *ctx)
> 	INIT_LIST_HEAD(&ctx->event_list);
> 	INIT_LIST_HEAD(&ctx->pinned_active);
> 	INIT_LIST_HEAD(&ctx->flexible_active);
> +	INIT_LIST_HEAD(&ctx->cgrp_ctx_entry);
> +	INIT_LIST_HEAD(&ctx->cgrp_node_list);

I guess we need an #ifdef CONFIG_CGROUP_PERF around these two?
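Something like this, assuming the new cgrp_ctx_entry and cgrp_node_list
fields are themselves only declared under CONFIG_CGROUP_PERF:

	#ifdef CONFIG_CGROUP_PERF
		INIT_LIST_HEAD(&ctx->cgrp_ctx_entry);
		INIT_LIST_HEAD(&ctx->cgrp_node_list);
	#endif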

> 	refcount_set(&ctx->refcount, 1);
> }
> 
> @@ -4851,6 +5179,8 @@ static void _free_event(struct perf_event *event)
> 	if (is_cgroup_event(event))
> 		perf_detach_cgroup(event);
> 
> +	perf_event_destroy_cgroup_nodes(event);
> +
> 	if (!event->parent) {
> 		if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN)
> 			put_callchain_buffers();

[...]

> +static void perf_sched_enable(void)
> +{
> +	/*
> +	 * We need the mutex here because static_branch_enable()
> +	 * must complete *before* the perf_sched_count increment
> +	 * becomes visible.
> +	 */
> +	if (atomic_inc_not_zero(&perf_sched_count))
> +		return;

Why don't we use perf_cgroup_events for the new use case? 

> +
> +	mutex_lock(&perf_sched_mutex);
> +	if (!atomic_read(&perf_sched_count)) {
> +		static_branch_enable(&perf_sched_events);
> +		/*
> +		 * Guarantee that all CPUs observe the key change and
> +		 * call the perf scheduling hooks before proceeding to
> +		 * install events that need them.
> +		 */
> +		synchronize_rcu();
> +	}
> +	/*
> +	 * Now that we have waited for the sync_sched(), allow further
> +	 * increments to by-pass the mutex.
> +	 */
> +	atomic_inc(&perf_sched_count);
> +	mutex_unlock(&perf_sched_mutex);
> +}
> +
> static long perf_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
> {
> 	struct perf_event *event = file->private_data;
> 	struct perf_event_context *ctx;
> +	bool do_sched_enable = false;
> 	long ret;
> 
> 	/* Treat ioctl like writes as it is likely a mutating operation. */
> @@ -5595,9 +6006,19 @@ static long perf_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
> 		return ret;
> 
> 	ctx = perf_event_ctx_lock(event);
> +	/* ATTACH_CGROUP requires context switch callback */
> +	if (cmd == PERF_EVENT_IOC_ATTACH_CGROUP && !event_has_cgroup_node(event))
> +		do_sched_enable = true;
> 	ret = _perf_ioctl(event, cmd, arg);
> 	perf_event_ctx_unlock(event, ctx);
> 
> +	/*
> +	 * Due to a circular lock dependency, we cannot call
> +	 * static_branch_enable() under the ctx->mutex.
> +	 */
> +	if (do_sched_enable && ret >= 0)
> +		perf_sched_enable();
> +
> 	return ret;
> }
> 
> @@ -11240,33 +11661,8 @@ static void account_event(struct perf_event *event)
> 	if (event->attr.text_poke)
> 		atomic_inc(&nr_text_poke_events);
> 
> -	if (inc) {
> -		/*
> -		 * We need the mutex here because static_branch_enable()
> -		 * must complete *before* the perf_sched_count increment
> -		 * becomes visible.
> -		 */
> -		if (atomic_inc_not_zero(&perf_sched_count))
> -			goto enabled;
> -
> -		mutex_lock(&perf_sched_mutex);
> -		if (!atomic_read(&perf_sched_count)) {
> -			static_branch_enable(&perf_sched_events);
> -			/*
> -			 * Guarantee that all CPUs observe they key change and
> -			 * call the perf scheduling hooks before proceeding to
> -			 * install events that need them.
> -			 */
> -			synchronize_rcu();
> -		}
> -		/*
> -		 * Now that we have waited for the sync_sched(), allow further
> -		 * increments to by-pass the mutex.
> -		 */
> -		atomic_inc(&perf_sched_count);
> -		mutex_unlock(&perf_sched_mutex);
> -	}
> -enabled:
> +	if (inc)
> +		perf_sched_enable();
> 
> 	account_event_cpu(event, event->cpu);
> 
> @@ -13008,6 +13404,7 @@ static void __init perf_event_init_all_cpus(void)
> 
> #ifdef CONFIG_CGROUP_PERF
> 		INIT_LIST_HEAD(&per_cpu(cgrp_cpuctx_list, cpu));
> +		INIT_LIST_HEAD(&per_cpu(cgroup_ctx_list, cpu));
> #endif
> 		INIT_LIST_HEAD(&per_cpu(sched_cb_list, cpu));
> 	}
> @@ -13218,6 +13615,28 @@ static int perf_cgroup_css_online(struct cgroup_subsys_state *css)
> 	return 0;
> }
> 
> +static int __perf_cgroup_update_node(void *info)
> +{
> +	struct task_struct *task = info;
> +
> +	rcu_read_lock();
> +	cgroup_node_sched_out(task);
> +	rcu_read_unlock();
> +
> +	return 0;
> +}
> +
> +static int perf_cgroup_can_attach(struct cgroup_taskset *tset)
> +{
> +	struct task_struct *task;
> +	struct cgroup_subsys_state *css;
> +
> +	cgroup_taskset_for_each(task, css, tset)
> +		task_function_call(task, __perf_cgroup_update_node, task);
> +
> +	return 0;
> +}

Could you please explain why we need this logic in can_attach?  IIUC it
flushes the accumulated count to the task's current cgroup before the
task moves to a new one, so the delta isn't misattributed; is that
right?

Thanks,
Song


Thread overview: 16+ messages
2021-03-23 16:21 [RFC 0/2] perf core: Sharing events with multiple cgroups Namhyung Kim
2021-03-23 16:21 ` [PATCH 1/2] perf/core: Share an event " Namhyung Kim
2021-03-24  0:30   ` Song Liu
2021-03-24  1:06     ` Namhyung Kim
2021-03-25  0:55   ` Song Liu
2021-03-25  2:44     ` Namhyung Kim
2021-03-25 12:57     ` Arnaldo Carvalho de Melo
2021-03-28 17:17   ` Song Liu [this message]
2021-03-29 11:33     ` Namhyung Kim
2021-03-30  6:33       ` Song Liu
2021-03-30 15:11         ` Namhyung Kim
2021-04-01  6:00           ` Stephane Eranian
2021-04-01  6:19           ` Song Liu
2021-03-23 16:21 ` [PATCH 2/2] perf/core: Support reading group events with shared cgroups Namhyung Kim
2021-03-28 17:31   ` Song Liu
2021-03-29 11:36     ` Namhyung Kim
