Date: Wed, 29 Jun 2016 21:43:43 +0900
From: Byungchul Park <byungchul.park@lge.com>
To: peterz@infradead.org, mingo@kernel.org
Cc: linux-kernel@vger.kernel.org, npiggin@suse.de, walken@google.com,
	ak@suse.de, xinhui.pan@linux.vnet.ibm.com
Subject: Re: [RFC 12/12] x86/dumpstack: Optimize save_stack_trace
Message-ID: <20160629124343.GQ2279@X58A-UD3R>
References: <1466398527-1122-1-git-send-email-byungchul.park@lge.com>
	<1466398527-1122-13-git-send-email-byungchul.park@lge.com>
	<57679B57.40905@linux.vnet.ibm.com>
	<000101d1cac8$6f321c40$4d9654c0$@lge.com>
In-Reply-To: <000101d1cac8$6f321c40$4d9654c0$@lge.com>

On Mon, Jun 20, 2016 at 04:50:37PM +0900, byungchul.park wrote:
> > -----Original Message-----
> > From: xinhui [mailto:xinhui.pan@linux.vnet.ibm.com]
> > Sent: Monday, June 20, 2016 4:29 PM
> > To: Byungchul Park; peterz@infradead.org; mingo@kernel.org
> > Cc: linux-kernel@vger.kernel.org; npiggin@suse.de; walken@google.com;
> > ak@suse.de; tglx@inhelltoy.tec.linutronix.de
> > Subject: Re: [RFC 12/12] x86/dumpstack: Optimize save_stack_trace
> >
> > On June 20, 2016 at 12:55, Byungchul Park wrote:
> > > Currently, the x86 implementation of save_stack_trace() walks the whole
> > > stack region word by word, regardless of what trace->max_entries is.
> > > However, it is unnecessary to keep walking once the caller's requirement
> > > has already been fulfilled, that is, once
> > > trace->nr_entries >= trace->max_entries is true.
> > >
> > > For example, the CONFIG_LOCKDEP_CROSSRELEASE implementation calls
> > > save_stack_trace() with max_entries = 5 frequently. I measured its
> > > overhead by printing the difference of sched_clock() on my QEMU x86
> > > machine.
> > >
> > > The latency improved by over 70% when trace->max_entries = 5.
> > >
> > [snip]
> >
> > > +static int save_stack_end(void *data)
> > > +{
> > > +	struct stack_trace *trace = data;
> > > +	return trace->nr_entries >= trace->max_entries;
> > > +}
> > > +
> > >  static const struct stacktrace_ops save_stack_ops = {
> > >  	.stack		= save_stack_stack,
> > >  	.address	= save_stack_address,
> > then why not check the return value of ->address()? -1 indicates there is
> > no room to store any pointer.
>
> Hello,
>
> Indeed. That also looks good to me, even though the condition has to be
> propagated between the callback functions. I will modify it if that is better.

Do you also think it would be better to propagate the result of ->address()
rather than add a new callback, say, end_walk?

>
> Thank you.
> Byungchul
>
> >
> > >  	.walk_stack	= print_context_stack,
> > > +	.end_walk	= save_stack_end,
> > >  };
> > >
> > >  static const struct stacktrace_ops save_stack_ops_nosched = {
> > >
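
For reference, here is a minimal standalone sketch of the two approaches being
compared above. The types and the walker below are simplified stand-ins (not
the kernel's actual struct stack_trace, stacktrace_ops, or
print_context_stack()); only save_stack_address() and save_stack_end() mirror
what the quoted patch touches.

/*
 * Sketch of the two stopping strategies, with simplified stand-in types.
 */
#include <stddef.h>

struct stack_trace {
	unsigned int nr_entries, max_entries;
	unsigned long *entries;
};

struct stacktrace_ops {
	/* returns a negative value when there is no room for another entry */
	int (*address)(void *data, unsigned long addr, int reliable);
	/* approach (a): a dedicated "are we done?" callback */
	int (*end_walk)(void *data);
};

static int save_stack_address(void *data, unsigned long addr, int reliable)
{
	struct stack_trace *trace = data;

	if (trace->nr_entries < trace->max_entries) {
		trace->entries[trace->nr_entries++] = addr;
		return 0;
	}
	return -1;	/* no room left; the walker may use this to stop */
}

static int save_stack_end(void *data)
{
	struct stack_trace *trace = data;

	return trace->nr_entries >= trace->max_entries;
}

/* toy walker standing in for print_context_stack() */
static void walk_stack_words(unsigned long *stack, size_t nr_words,
			     const struct stacktrace_ops *ops, void *data)
{
	size_t i;

	for (i = 0; i < nr_words; i++) {
		/* approach (b): stop when ->address() reports no room */
		if (ops->address(data, stack[i], 1) < 0)
			break;

		/* approach (a): stop when the extra callback says so */
		if (ops->end_walk && ops->end_walk(data))
			break;
	}
}

Either way the walk stops as soon as trace->max_entries entries have been
saved; propagating ->address()'s return value avoids a second indirect call
per stack word, while the end_walk callback keeps the saving and the stopping
logic separate.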