Message-Id: <20180123152638.571651113@infradead.org>
User-Agent: quilt/0.63-1
Date: Tue, 23 Jan 2018 16:25:49 +0100
From: Peter Zijlstra
To: David Woodhouse, Thomas Gleixner, Josh Poimboeuf
Cc: linux-kernel@vger.kernel.org, Dave Hansen, Ashok Raj, Tim Chen,
 Andy Lutomirski, Linus Torvalds, Greg KH, Andrea Arcangeli, Andi Kleen,
 Arjan Van De Ven, Dan Williams, Paolo Bonzini, Jun Nakajima, Asit Mallick,
 Jason Baron, Peter Zijlstra
Subject: [PATCH 10/24] sched: Optimize ttwu_stat()
References: <20180123152539.374360046@infradead.org>
Content-Disposition: inline; filename=peterz-sched-opt-ttwu_stat.patch

The whole of ttwu_stat() is guarded by a single schedstat_enabled() check,
so there is no point in issuing another static_branch test for every single
schedstat_inc() in there.

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/sched/core.c  | 16 ++++++++--------
 kernel/sched/stats.h |  2 ++
 2 files changed, 10 insertions(+), 8 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1630,16 +1630,16 @@ ttwu_stat(struct task_struct *p, int cpu
 
 #ifdef CONFIG_SMP
 	if (cpu == rq->cpu) {
-		schedstat_inc(rq->ttwu_local);
-		schedstat_inc(p->se.statistics.nr_wakeups_local);
+		__schedstat_inc(rq->ttwu_local);
+		__schedstat_inc(p->se.statistics.nr_wakeups_local);
 	} else {
 		struct sched_domain *sd;
 
-		schedstat_inc(p->se.statistics.nr_wakeups_remote);
+		__schedstat_inc(p->se.statistics.nr_wakeups_remote);
 		rcu_read_lock();
 		for_each_domain(rq->cpu, sd) {
 			if (cpumask_test_cpu(cpu, sched_domain_span(sd))) {
-				schedstat_inc(sd->ttwu_wake_remote);
+				__schedstat_inc(sd->ttwu_wake_remote);
 				break;
 			}
 		}
@@ -1647,14 +1647,14 @@ ttwu_stat(struct task_struct *p, int cpu
 	}
 
 	if (wake_flags & WF_MIGRATED)
-		schedstat_inc(p->se.statistics.nr_wakeups_migrate);
+		__schedstat_inc(p->se.statistics.nr_wakeups_migrate);
 #endif /* CONFIG_SMP */
 
-	schedstat_inc(rq->ttwu_count);
-	schedstat_inc(p->se.statistics.nr_wakeups);
+	__schedstat_inc(rq->ttwu_count);
+	__schedstat_inc(p->se.statistics.nr_wakeups);
 
 	if (wake_flags & WF_SYNC)
-		schedstat_inc(p->se.statistics.nr_wakeups_sync);
+		__schedstat_inc(p->se.statistics.nr_wakeups_sync);
 }
 
 static inline void ttwu_activate(struct rq *rq, struct task_struct *p, int en_flags)
--- a/kernel/sched/stats.h
+++ b/kernel/sched/stats.h
@@ -31,6 +31,7 @@ rq_sched_info_dequeued(struct rq *rq, un
 		rq->rq_sched_info.run_delay += delta;
 }
 #define schedstat_enabled()		static_branch_unlikely(&sched_schedstats)
+#define __schedstat_inc(var)		do { var++; } while (0)
 #define schedstat_inc(var)		do { if (schedstat_enabled()) { var++; } } while (0)
 #define schedstat_add(var, amt)		do { if (schedstat_enabled()) { var += (amt); } } while (0)
 #define schedstat_set(var, val)		do { if (schedstat_enabled()) { var = (val); } } while (0)
@@ -48,6 +49,7 @@ static inline void
 rq_sched_info_depart(struct rq *rq, unsigned long long delta)
 {}
 #define schedstat_enabled()		0
+#define __schedstat_inc(var)		do { } while (0)
 #define schedstat_inc(var)		do { } while (0)
 #define schedstat_add(var, amt)		do { } while (0)
 #define schedstat_set(var, val)		do { } while (0)
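
For illustration only, not part of the patch: a minimal, self-contained C
sketch of the guard-once pattern the change relies on. The global flag,
struct, counter and function names below are made up for the example; in the
kernel the guard is the sched_schedstats static branch and the counters live
in struct rq and struct sched_statistics.

/*
 * Standalone sketch (assumed names, userspace C): schedstat_enabled() is
 * modelled by a plain global flag instead of static_branch_unlikely().
 */
#include <stdio.h>
#include <stdbool.h>

static bool schedstats_on = true;	/* stands in for the static branch */

#define schedstat_enabled()	(schedstats_on)
/* guarded form: tests the flag on every single increment */
#define schedstat_inc(var)	do { if (schedstat_enabled()) { (var)++; } } while (0)
/* unguarded form: caller must already have checked schedstat_enabled() */
#define __schedstat_inc(var)	do { (var)++; } while (0)

struct stats {
	unsigned long wakeups;
	unsigned long wakeups_local;
};

static void account_wakeup(struct stats *s, bool local)
{
	if (!schedstat_enabled())	/* one test up front ...            */
		return;

	__schedstat_inc(s->wakeups);	/* ... so no per-counter test needed */
	if (local)
		__schedstat_inc(s->wakeups_local);
}

int main(void)
{
	struct stats s = { 0, 0 };

	account_wakeup(&s, true);
	account_wakeup(&s, false);
	printf("wakeups=%lu local=%lu\n", s.wakeups, s.wakeups_local);
	return 0;
}

Using schedstat_inc() inside account_wakeup() would also be correct, it would
merely re-test a condition the function has already established; dropping to
__schedstat_inc() removes that redundant test, which is the whole point of
the patch above.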