From: Rob Herring <robh@kernel.org>
To: Will Deacon <will@kernel.org>, Catalin Marinas <catalin.marinas@arm.com>,
	Peter Zijlstra <peterz@infradead.org>, Ingo Molnar <mingo@redhat.com>,
	Mark Rutland <mark.rutland@arm.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>, Jiri Olsa <jolsa@redhat.com>,
	Kan Liang <kan.liang@linux.intel.com>, Ian Rogers <irogers@google.com>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	honnappa.nagarahalli@arm.com, Zachary.Leaf@arm.com,
	Raphael Gault <raphael.gault@arm.com>,
	Jonathan Cameron <Jonathan.Cameron@huawei.com>,
	Namhyung Kim <namhyung@kernel.org>,
	Itaru Kitayama <itaru.kitayama@gmail.com>,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v8 2/5] perf: Track per-PMU sched_task() callback users
Date: Mon, 17 May 2021 14:54:02 -0500	[thread overview]
Message-ID: <20210517195405.3079458-3-robh@kernel.org> (raw)
In-Reply-To: <20210517195405.3079458-1-robh@kernel.org>

From: Kan Liang <kan.liang@linux.intel.com>

Currently, perf tracks only the per-CPU sched_task() callback users,
which does not work if a callback user is a task. For example, the dirty
counters have to be cleared to prevent data leakage when a new userspace
access task is scheduled in. The task may be created on one CPU but run
on another CPU, so it cannot be tracked by the per-CPU variable. A
global variable will not work either because of hybrid PMUs. Add a
per-PMU variable to track the callback users.
Suggested-by: Rob Herring <robh@kernel.org>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
[robh: Also call sched_task() for sched out cases]
Signed-off-by: Rob Herring <robh@kernel.org>
---
 include/linux/perf_event.h | 3 +++
 kernel/events/core.c       | 8 +++++---
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 4cf081e22b76..a88d52e80864 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -300,6 +300,9 @@ struct pmu {
 	/* number of address filters this PMU can do */
 	unsigned int			nr_addr_filters;
 
+	/* Track the per PMU sched_task() callback users */
+	atomic_t			sched_cb_usage;
+
 	/*
 	 * Fully disable/enable this PMU, can be used to protect from the PMI
 	 * as well as for lazy/batch writing of the MSRs.
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2e947a485898..6d0507c23240 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3448,7 +3448,8 @@ static void perf_event_context_sched_out(struct task_struct *task, int ctxn,
 
 	perf_pmu_disable(pmu);
 
-	if (cpuctx->sched_cb_usage && pmu->sched_task)
+	if (pmu->sched_task &&
+	    (cpuctx->sched_cb_usage || atomic_read(&pmu->sched_cb_usage)))
 		pmu->sched_task(ctx, false);
 
 	/*
@@ -3488,7 +3489,8 @@ static void perf_event_context_sched_out(struct task_struct *task, int ctxn,
 	raw_spin_lock(&ctx->lock);
 	perf_pmu_disable(pmu);
 
-	if (cpuctx->sched_cb_usage && pmu->sched_task)
+	if (pmu->sched_task &&
+	    (cpuctx->sched_cb_usage || atomic_read(&pmu->sched_cb_usage)))
 		pmu->sched_task(ctx, false);
 
 	task_ctx_sched_out(cpuctx, ctx, EVENT_ALL);
@@ -3851,7 +3853,7 @@ static void perf_event_context_sched_in(struct perf_event_context *ctx,
 		cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
 	perf_event_sched_in(cpuctx, ctx, task);
 
-	if (cpuctx->sched_cb_usage && pmu->sched_task)
+	if (pmu->sched_task && (cpuctx->sched_cb_usage || atomic_read(&pmu->sched_cb_usage)))
 		pmu->sched_task(cpuctx->task_ctx, true);
 
 	perf_pmu_enable(pmu);
-- 
2.27.0