From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 6 Apr 2017 13:21:17 -0700
From: "Paul E. McKenney"
To: Steven Rostedt
Cc: linux-kernel@vger.kernel.org, Ingo Molnar , Andrew Morton
Subject: Re: [PATCH 3/4] tracing: Add stack_tracer_disable/enable() functions
Reply-To: paulmck@linux.vnet.ibm.com
References: <20170406164237.874767449@goodmis.org> <20170406164432.361457723@goodmis.org> <20170406181222.GH1600@linux.vnet.ibm.com> <20170406144803.63ee287c@gandalf.local.home>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20170406144803.63ee287c@gandalf.local.home>
User-Agent: Mutt/1.5.21 (2010-09-15)
Message-Id: <20170406202117.GK1600@linux.vnet.ibm.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Apr 06, 2017 at 02:48:03PM -0400, Steven Rostedt wrote:
> On Thu, 6 Apr 2017 11:12:22 -0700
> "Paul E. McKenney" wrote:
> 
> > On Thu, Apr 06, 2017 at 12:42:40PM -0400, Steven Rostedt wrote:
> > > From: "Steven Rostedt (VMware)"
> > > 
> > > There are certain parts of the kernel that can not let stack tracing
> > > proceed (namely in RCU), because the stack tracer uses RCU, and parts of RCU
> > > internals can not handle having RCU read side locks taken.
> > > 
> > > Add stack_tracer_disable() and stack_tracer_enable() functions to let RCU
> > > stop stack tracing on the current CPU as it is in those critical sections.
> > 
> > s/as it is in/when it is in/?
> > 
> > > Signed-off-by: Steven Rostedt (VMware)
> > 
> > One quibble above, one objection below.
> > 
> > 							Thanx, Paul
> > 
> > > ---
> > >  include/linux/ftrace.h     |  6 ++++++
> > >  kernel/trace/trace_stack.c | 28 ++++++++++++++++++++++++++++
> > >  2 files changed, 34 insertions(+)
> > > 
> > > diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
> > > index ef7123219f14..40afee35565a 100644
> > > --- a/include/linux/ftrace.h
> > > +++ b/include/linux/ftrace.h
> > > @@ -286,6 +286,12 @@ int
> > >  stack_trace_sysctl(struct ctl_table *table, int write,
> > >  		   void __user *buffer, size_t *lenp,
> > >  		   loff_t *ppos);
> > > +
> > > +void stack_tracer_disable(void);
> > > +void stack_tracer_enable(void);
> > > +#else
> > > +static inline void stack_tracer_disable(void) { }
> > > +static inline void stack_tracer_enabe(void) { }
> > >  #endif
> > > 
> > >  struct ftrace_func_command {
> > > diff --git a/kernel/trace/trace_stack.c b/kernel/trace/trace_stack.c
> > > index 05ad2b86461e..5adbb73ec2ec 100644
> > > --- a/kernel/trace/trace_stack.c
> > > +++ b/kernel/trace/trace_stack.c
> > > @@ -41,6 +41,34 @@ static DEFINE_MUTEX(stack_sysctl_mutex);
> > >  int stack_tracer_enabled;
> > >  static int last_stack_tracer_enabled;
> > > 
> > > +/**
> > > + * stack_tracer_disable - temporarily disable the stack tracer
> > > + *
> > > + * There's a few locations (namely in RCU) where stack tracing
> > > + * can not be executed. This function is used to disable stack
> > > + * tracing during those critical sections.
> > > + *
> > > + * This function will disable preemption. stack_tracer_enable()
> > > + * must be called shortly after this is called.
> > > + */
> > > +void stack_tracer_disable(void)
> > > +{
> > > +	preempt_disable_notrace();
> > 
> > Interrupts are disabled in all current call points, so you don't really
> > need to disable preemption.  I would normally not worry, given the
> > ease-of-use improvements, but some people get annoyed about even slight
> > increases in idle-entry overhead.
> 
> My worry is that we add another caller that doesn't disable interrupts
> or preemption.
> 
> I could add a __stack_trace_disable() that skips the disabling of
> preemption, as the "__" usually denotes the call is "special".

Given that interrupts are disabled at that point, and given also that
NMI skips stack tracing if growth is required, could we just leave out
the stack_tracer_disable() and stack_tracer_enable()?

							Thanx, Paul

> -- Steve
> 
> > > +	this_cpu_inc(trace_active);
> > > +}
> > > +
> > > +/**
> > > + * stack_tracer_enable - re-enable the stack tracer
> > > + *
> > > + * After stack_tracer_disable() is called, stack_tracer_enable()
> > > + * must shortly be called afterward.
> > > + */
> > > +void stack_tracer_enable(void)
> > > +{
> > > +	this_cpu_dec(trace_active);
> > > +	preempt_enable_notrace();
> > 
> > Ditto...
> > 
> > > +}
> > > +
> > >  void stack_trace_print(void)
> > >  {
> > >  	long i;
> > > -- 
> > > 2.10.2
> > > 
> > 
> 
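For concreteness, a minimal sketch of the two shapes discussed above: the
stack_tracer_disable()/stack_tracer_enable() pair from the posted patch, which
disables preemption itself, and a hypothetical double-underscore variant along
the lines Steve floats (his mail says "__stack_trace_disable()"), which assumes
the caller already runs with interrupts, and therefore preemption, disabled.
The __stack_tracer_disable()/__stack_tracer_enable() names below are
illustrative only and are not part of the posted patch.

	/*
	 * Sketch only -- not the posted patch.  Mirrors the per-CPU
	 * trace_active counter used by kernel/trace/trace_stack.c.
	 */
	#include <linux/percpu.h>
	#include <linux/preempt.h>

	static DEFINE_PER_CPU(int, trace_active);

	/*
	 * Hypothetical "__" variant: the caller guarantees that interrupts
	 * (and hence preemption) are already disabled, as in the RCU
	 * idle-entry path, so no preempt_disable_notrace() pair is needed.
	 */
	static inline void __stack_tracer_disable(void)
	{
		this_cpu_inc(trace_active);
	}

	static inline void __stack_tracer_enable(void)
	{
		this_cpu_dec(trace_active);
	}

	/* General form, as in the patch: safe to call with preemption enabled. */
	static inline void stack_tracer_disable(void)
	{
		preempt_disable_notrace();
		this_cpu_inc(trace_active);
	}

	static inline void stack_tracer_enable(void)
	{
		this_cpu_dec(trace_active);
		preempt_enable_notrace();
	}

On a sketch like this, a path that already runs with interrupts off (for
example the RCU idle-entry code the thread is concerned with) would use the
"__" pair and avoid the extra preempt-count traffic Paul objects to, while
general callers keep the safer form.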