Message-Id: <20170407140308.502725512@goodmis.org>
Date: Fri, 07 Apr 2017 10:01:08 -0400
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Ingo Molnar, Andrew Morton, "Paul E. McKenney"
Subject: [PATCH 2/5 v2] tracing: Replace the per_cpu() with this_cpu() in trace_stack.c
References: <20170407140106.051135969@goodmis.org>

From: "Steven Rostedt (VMware)"

The trace_active per-CPU variable can be updated with the this_cpu_*()
functions, as it is only ever updated on the CPU that the variable
belongs to.

Signed-off-by: Steven Rostedt (VMware)
---
 kernel/trace/trace_stack.c | 23 +++++++----------------
 1 file changed, 7 insertions(+), 16 deletions(-)

diff --git a/kernel/trace/trace_stack.c b/kernel/trace/trace_stack.c
index 5fb1f2c87e6b..05ad2b86461e 100644
--- a/kernel/trace/trace_stack.c
+++ b/kernel/trace/trace_stack.c
@@ -207,13 +207,12 @@ stack_trace_call(unsigned long ip, unsigned long parent_ip,
 		 struct ftrace_ops *op, struct pt_regs *pt_regs)
 {
 	unsigned long stack;
-	int cpu;
 
 	preempt_disable_notrace();
 
-	cpu = raw_smp_processor_id();
 	/* no atomic needed, we only modify this variable by this cpu */
-	if (per_cpu(trace_active, cpu)++ != 0)
+	this_cpu_inc(trace_active);
+	if (this_cpu_read(trace_active) != 1)
 		goto out;
 
 	ip += MCOUNT_INSN_SIZE;
@@ -221,7 +220,7 @@ stack_trace_call(unsigned long ip, unsigned long parent_ip,
 	check_stack(ip, &stack);
 
  out:
-	per_cpu(trace_active, cpu)--;
+	this_cpu_dec(trace_active);
 	/* prevent recursion in schedule */
 	preempt_enable_notrace();
 }
@@ -253,7 +252,6 @@ stack_max_size_write(struct file *filp, const char __user *ubuf,
 	long *ptr = filp->private_data;
 	unsigned long val, flags;
 	int ret;
-	int cpu;
 
 	ret = kstrtoul_from_user(ubuf, count, 10, &val);
 	if (ret)
@@ -266,14 +264,13 @@ stack_max_size_write(struct file *filp, const char __user *ubuf,
 	 * we will cause circular lock, so we also need to increase
 	 * the percpu trace_active here.
 	 */
-	cpu = smp_processor_id();
-	per_cpu(trace_active, cpu)++;
+	this_cpu_inc(trace_active);
 
 	arch_spin_lock(&stack_trace_max_lock);
 	*ptr = val;
 	arch_spin_unlock(&stack_trace_max_lock);
 
-	per_cpu(trace_active, cpu)--;
+	this_cpu_dec(trace_active);
 	local_irq_restore(flags);
 
 	return count;
@@ -307,12 +304,9 @@ t_next(struct seq_file *m, void *v, loff_t *pos)
 
 static void *t_start(struct seq_file *m, loff_t *pos)
 {
-	int cpu;
-
 	local_irq_disable();
 
-	cpu = smp_processor_id();
-	per_cpu(trace_active, cpu)++;
+	this_cpu_inc(trace_active);
 
 	arch_spin_lock(&stack_trace_max_lock);
 
@@ -324,12 +318,9 @@ static void *t_start(struct seq_file *m, loff_t *pos)
 
 static void t_stop(struct seq_file *m, void *p)
 {
-	int cpu;
-
 	arch_spin_unlock(&stack_trace_max_lock);
 
-	cpu = smp_processor_id();
-	per_cpu(trace_active, cpu)--;
+	this_cpu_dec(trace_active);
 	local_irq_enable();
 }
-- 
2.10.2
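
For readers less familiar with the two accessor families, the change boils
down to the pattern in the minimal sketch below. This is illustration
only, not part of the patch: my_counter and the example_*() functions are
stand-in names.

	#include <linux/percpu.h>
	#include <linux/smp.h>

	/* Illustrative only: my_counter stands in for a real per-CPU
	 * variable such as trace_active in trace_stack.c.
	 */
	static DEFINE_PER_CPU(int, my_counter);

	/* Old pattern: look up the current CPU id, then index the
	 * per-CPU variable with per_cpu(). Preemption must already be
	 * disabled so the task cannot migrate between the id lookup
	 * and the accesses.
	 */
	static void example_old(void)
	{
		int cpu = raw_smp_processor_id();

		per_cpu(my_counter, cpu)++;
		/* ... work ... */
		per_cpu(my_counter, cpu)--;
	}

	/* New pattern: this_cpu_inc()/this_cpu_dec() operate on the
	 * current CPU's instance directly, with the CPU lookup and the
	 * update done as one operation (a single instruction on some
	 * architectures), so no explicit id variable is needed.
	 */
	static void example_new(void)
	{
		this_cpu_inc(my_counter);
		/* ... work ... */
		this_cpu_dec(my_counter);
	}

Since this_cpu_inc() does not return the previous value the way
per_cpu(trace_active, cpu)++ did, the patch pairs it with
this_cpu_read() in stack_trace_call() to keep the recursion check.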