From: Peter Zijlstra <peterz@infradead.org>
To: Jiri Olsa <jolsa@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	Ingo Molnar <mingo@kernel.org>,
	Namhyung Kim <namhyung@kernel.org>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Andi Kleen <andi@firstfloor.org>
Subject: Re: [PATCH] perf: Prevent recursion in ring buffer
Date: Thu, 13 Sep 2018 09:40:42 +0200
Message-ID: <20180913074042.GU24124@hirez.programming.kicks-ass.net>
In-Reply-To: <20180912193317.10339-1-jolsa@kernel.org>

On Wed, Sep 12, 2018 at 09:33:17PM +0200, Jiri Olsa wrote:

>   # perf record -e 'sched:sched_switch,sched:sched_wakeup' perf bench sched messaging

> The reason for the corruptions is that some of the scheduling
> tracepoints have __perf_task defined and thus allow storing data to
> another cpu's ring buffer (see the sketch after the list):
> 
>   sched_waking
>   sched_wakeup
>   sched_wakeup_new
>   sched_stat_wait
>   sched_stat_sleep
>   sched_stat_iowait
>   sched_stat_blocked
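
For reference, the redirect comes from the __perf_task() annotation in
the tracepoint's event class. Roughly, the generated perf glue looks
like the sketch below; this is a paraphrase of include/trace/perf.h and
include/trace/events/sched.h, not the verbatim source, and details vary
by kernel version:

  /*
   * Sketch of the generated perf_trace_<event>() handler.  The buffer
   * setup via perf_trace_buf_alloc() and the locals it fills in
   * (entry, size, rctx, regs, head) are elided for brevity.
   */
  static notrace void perf_trace_sched_stat_sleep(void *__data,
                                                  struct task_struct *tsk,
                                                  u64 delay)
  {
          struct task_struct *__task = NULL;  /* NULL == deliver to current */

          /* __perf_task(tsk) in the event class expands to: */
          __task = tsk;

          /*
           * perf_trace_buf_submit() forwards __task to perf_tp_event(),
           * which then also walks @tsk's context -- the loop quoted
           * below.
           */
          perf_trace_buf_submit(entry, size, rctx, event_type, count,
                                regs, head, __task);
  }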

> And perf_tp_event() then iterates the events of the 'task' and stores
> the sample for any of the task's events that pass the tracepoint
> checks:
> 
>   ctx = rcu_dereference(task->perf_event_ctxp[perf_sw_context]);
> 
>   list_for_each_entry_rcu(event, &ctx->event_list, event_entry) {
>     if (event->attr.type != PERF_TYPE_TRACEPOINT)
>       continue;
>     if (event->attr.config != entry->type)
>       continue;
> 
>     perf_swevent_event(event, count, &data, regs);
>   }
> 
> The above code can race with the same code running on another cpu,
> ending up with two cpus trying to store into the same ring buffer,
> which is not handled at the moment.
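
To make the corruption concrete: the ring-buffer output path assumes a
single writing CPU per buffer.  Nesting (an IRQ/NMI landing mid-write)
is serialized with local_t operations, which are atomic only against
the CPU that owns the variable (Documentation/core-api/local_ops.rst).
Two CPUs running this code against the same buffer can both believe
they are the outermost writer and publish data_head over each other's
in-flight data.  Paraphrased from kernel/events/ring_buffer.c (circa
v4.18, not verbatim):

  static void perf_output_get_handle(struct perf_output_handle *handle)
  {
          struct ring_buffer *rb = handle->rb;

          preempt_disable();
          /* local_inc() is safe against local IRQ/NMI nesting only;
           * a second CPU incrementing rb->nest can lose updates */
          local_inc(&rb->nest);
          handle->wakeup = local_read(&rb->wakeup);
  }

  static void perf_output_put_handle(struct perf_output_handle *handle)
  {
          struct ring_buffer *rb = handle->rb;
          unsigned long head;

  again:
          head = local_read(&rb->head);

          /* only the outermost writer -- per CPU! -- publishes data_head */
          if (!local_dec_and_test(&rb->nest))
                  goto out;

          /* order the data stores before the data_head update */
          smp_wmb();
          rb->user_page->data_head = head;

          /* a nested (local) writer may have moved rb->head meanwhile */
          if (unlikely(head != local_read(&rb->head))) {
                  local_inc(&rb->nest);
                  goto again;
          }
  out:
          preempt_enable();
  }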

It can, yes; however, the only way I can see this breaking is if we use
!inherited events with a strict per-task buffer, but your record command
doesn't use that.

Now, your test case uses inherited events, which would all share the
buffer; however, IIRC inherited events require per-task-per-cpu buffers,
because there is no guarantee the various tasks run on the same CPU in
the first place.
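
In this context, per-task-per-cpu buffers means the tool opens one
event, and mmap()s one ring buffer, per CPU for the monitored task,
with attr.inherit set; IIRC the kernel refuses to mmap() an inherited
per-task (cpu == -1) event for exactly this reason.  A minimal
userspace sketch of that setup follows; the tracepoint id would come
from .../tracing/events/<...>/id, and error handling is trimmed:

  #include <linux/perf_event.h>
  #include <sys/mman.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
                             int cpu, int group_fd, unsigned long flags)
  {
          return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd,
                         flags);
  }

  /* one event + one buffer per CPU for task @pid; @pages must be 2^n */
  void open_task_buffers(pid_t pid, __u64 tp_id, int ncpus,
                         int *fds, void **bufs, size_t pages)
  {
          struct perf_event_attr attr = {
                  .type          = PERF_TYPE_TRACEPOINT,
                  .size          = sizeof(attr),
                  .config        = tp_id,  /* e.g. sched:sched_switch id */
                  .sample_type   = PERF_SAMPLE_RAW | PERF_SAMPLE_TID,
                  .sample_period = 1,
                  .inherit       = 1,      /* children count into these */
          };
          long psz = sysconf(_SC_PAGESIZE);

          for (int cpu = 0; cpu < ncpus; cpu++) {
                  /* the task's event for this CPU ... */
                  fds[cpu] = perf_event_open(&attr, pid, cpu, -1, 0);
                  /* ... and its ring buffer: 1 meta page + data pages */
                  bufs[cpu] = mmap(NULL, (pages + 1) * psz,
                                   PROT_READ | PROT_WRITE, MAP_SHARED,
                                   fds[cpu], 0);
          }
  }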

This means we _should_ write to the @task's local CPU buffer, and that
would work again.
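
If that holds, one way to restore the single-writer assumption in the
loop quoted above is to deliver only to the task's events bound to the
current CPU, so the store always lands in the @task's local-CPU buffer.
A sketch of such a check -- not necessarily what the eventual fix does,
see the follow-up patch in the thread overview below:

  list_for_each_entry_rcu(event, &ctx->event_list, event_entry) {
          /*
           * Only the event bound to this CPU may write here: its
           * buffer is @task's local-CPU buffer, so no other CPU can
           * be storing into it concurrently.
           */
          if (event->cpu != smp_processor_id())
                  continue;
          if (event->attr.type != PERF_TYPE_TRACEPOINT)
                  continue;
          if (event->attr.config != entry->type)
                  continue;

          perf_swevent_event(event, count, &data, regs);
  }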

Let me try and figure out where this is going wrong.


Thread overview: 9+ messages
2018-09-12 19:33 [PATCH] perf: Prevent recursion in ring buffer Jiri Olsa
2018-09-13  7:07 ` Peter Zijlstra
2018-09-13  7:41   ` Jiri Olsa
2018-09-13  7:46   ` Jiri Olsa
2018-09-13  9:37     ` Peter Zijlstra
2018-09-23 16:13       ` [PATCHv2] perf: Prevent concurent ring buffer access Jiri Olsa
2018-10-02 10:01         ` [tip:perf/core] perf/ring_buffer: " tip-bot for Jiri Olsa
2018-09-13  7:40 ` Peter Zijlstra [this message]
2018-09-13  7:53   ` [PATCH] perf: Prevent recursion in ring buffer Jiri Olsa
