Date: Fri, 16 Apr 2021 15:01:48 -0000
From: "tip-bot2 for Namhyung Kim"
Sender: tip-bot2@linutronix.de
Reply-to: linux-kernel@vger.kernel.org
To: linux-tip-commits@vger.kernel.org
Cc: Peter Zijlstra, Namhyung Kim, x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [tip: perf/core] perf core: Factor out __perf_sw_event_sched
In-Reply-To: <20210210083327.22726-1-namhyung@kernel.org>
References: <20210210083327.22726-1-namhyung@kernel.org>
Message-ID: <161858530815.29796.14637571571889250954.tip-bot2@tip-bot2>

The following commit has been merged into the perf/core branch of tip:

Commit-ID:     64f6aeb6dc7a2426278fd9017264cf24bfdbebd6
Gitweb:        https://git.kernel.org/tip/64f6aeb6dc7a2426278fd9017264cf24bfdbebd6
Author:        Namhyung Kim
AuthorDate:    Wed, 10 Feb 2021 17:33:25 +09:00
Committer:     Peter Zijlstra
CommitterDate: Fri, 16 Apr 2021 16:32:43 +02:00

perf core: Factor out __perf_sw_event_sched

In some cases, we need to check more than
whether the software event is enabled. So split the condition check
and the actual event handling. This is a preparation for the next
change.

Suggested-by: Peter Zijlstra
Signed-off-by: Namhyung Kim
Signed-off-by: Peter Zijlstra (Intel)
Link: https://lkml.kernel.org/r/20210210083327.22726-1-namhyung@kernel.org
---
 include/linux/perf_event.h | 33 ++++++++++++---------------------
 1 file changed, 12 insertions(+), 21 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 7d7280a..92d51a7 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1178,30 +1178,24 @@ DECLARE_PER_CPU(struct pt_regs, __perf_regs[4]);
  * which is guaranteed by us not actually scheduling inside other swevents
  * because those disable preemption.
  */
-static __always_inline void
-perf_sw_event_sched(u32 event_id, u64 nr, u64 addr)
+static __always_inline void __perf_sw_event_sched(u32 event_id, u64 nr, u64 addr)
 {
-	if (static_key_false(&perf_swevent_enabled[event_id])) {
-		struct pt_regs *regs = this_cpu_ptr(&__perf_regs[0]);
+	struct pt_regs *regs = this_cpu_ptr(&__perf_regs[0]);
 
-		perf_fetch_caller_regs(regs);
-		___perf_sw_event(event_id, nr, regs, addr);
-	}
+	perf_fetch_caller_regs(regs);
+	___perf_sw_event(event_id, nr, regs, addr);
 }
 
 extern struct static_key_false perf_sched_events;
 
-static __always_inline bool
-perf_sw_migrate_enabled(void)
+static __always_inline bool __perf_sw_enabled(int swevt)
 {
-	if (static_key_false(&perf_swevent_enabled[PERF_COUNT_SW_CPU_MIGRATIONS]))
-		return true;
-	return false;
+	return static_key_false(&perf_swevent_enabled[swevt]);
 }
 
 static inline void perf_event_task_migrate(struct task_struct *task)
 {
-	if (perf_sw_migrate_enabled())
+	if (__perf_sw_enabled(PERF_COUNT_SW_CPU_MIGRATIONS))
 		task->sched_migrated = 1;
 }
 
@@ -1211,11 +1205,9 @@ static inline void perf_event_task_sched_in(struct task_struct *prev,
 	if (static_branch_unlikely(&perf_sched_events))
 		__perf_event_task_sched_in(prev, task);
 
-	if (perf_sw_migrate_enabled() && task->sched_migrated) {
-		struct pt_regs *regs = this_cpu_ptr(&__perf_regs[0]);
-
-		perf_fetch_caller_regs(regs);
-		___perf_sw_event(PERF_COUNT_SW_CPU_MIGRATIONS, 1, regs, 0);
+	if (__perf_sw_enabled(PERF_COUNT_SW_CPU_MIGRATIONS) &&
+	    task->sched_migrated) {
+		__perf_sw_event_sched(PERF_COUNT_SW_CPU_MIGRATIONS, 1, 0);
 		task->sched_migrated = 0;
 	}
 }
@@ -1223,7 +1215,8 @@ static inline void perf_event_task_sched_in(struct task_struct *prev,
 static inline void perf_event_task_sched_out(struct task_struct *prev,
 					     struct task_struct *next)
 {
-	perf_sw_event_sched(PERF_COUNT_SW_CONTEXT_SWITCHES, 1, 0);
+	if (__perf_sw_enabled(PERF_COUNT_SW_CONTEXT_SWITCHES))
+		__perf_sw_event_sched(PERF_COUNT_SW_CONTEXT_SWITCHES, 1, 0);
 
 	if (static_branch_unlikely(&perf_sched_events))
 		__perf_event_task_sched_out(prev, next);
@@ -1480,8 +1473,6 @@ static inline int perf_event_refresh(struct perf_event *event, int refresh)
 static inline void
 perf_sw_event(u32 event_id, u64 nr, struct pt_regs *regs, u64 addr)	{ }
 static inline void
-perf_sw_event_sched(u32 event_id, u64 nr, u64 addr)			{ }
-static inline void
 perf_bp_event(struct perf_event *event, void *data)			{ }
 
 static inline int perf_register_guest_info_callbacks
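
The point of the split is visible at the call sites: the static-key check
(__perf_sw_enabled) can now be combined with extra, caller-specific
conditions before the pt_regs snapshot and event emission
(__perf_sw_event_sched) are paid for. A minimal sketch of such a call
site, using only the helpers introduced above; the extra_condition()
predicate is hypothetical, standing in for whatever "the next change"
gates on:

	/*
	 * Hypothetical caller: emit a context-switch software event only
	 * when an additional, caller-specific condition also holds.
	 * extra_condition() is a stand-in, not part of this patch.
	 */
	static inline void
	perf_event_task_sched_out_example(struct task_struct *prev,
					  struct task_struct *next)
	{
		if (__perf_sw_enabled(PERF_COUNT_SW_CONTEXT_SWITCHES) &&
		    extra_condition(prev, next))
			__perf_sw_event_sched(PERF_COUNT_SW_CONTEXT_SWITCHES, 1, 0);
	}

With the old perf_sw_event_sched(), the enabled check was buried inside
the always-inlined body, so a caller wanting an extra condition either
open-coded the pt_regs setup by hand (as perf_event_task_sched_in() did
for CPU migrations before this patch) or evaluated its condition even
when the event was disabled.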