From: Joel Fernandes
Date: Mon, 25 Sep 2017 15:57:30 -0700
Subject: Re: [PATCH v6 2/2] tracing: Add support for preempt and irq enable/disable events
To: Steven Rostedt
Cc: Peter Zijlstra, LKML, kernel-team@android.com

On Mon, Sep 25, 2017 at 3:52 AM, Steven Rostedt wrote:
> On Mon, 25 Sep 2017 12:32:23 +0200
> Peter Zijlstra wrote:
>
>> > You mean you want to trace all calls to preempt and irq off even if
>> > preempt and irqs are already off?
>>
>> Sure, why not? This stuff naturally nests, and who is to say it's not a
>> useful thing to trace all of them?
>>
>> By also tracing the nested sections you can, for instance, see how much
>> you'd really win by getting rid of the outer one. If, for instance, the
>> outer only accounts for 1% of the time, while the inner ones are
>> interlinked and span the other 99%, there's more work to do than if that
>> were not the case.
>
> If we do this we need a field to record if the preemption or irqs were
> toggled by that call. Something that filters could easily be added to
> only show what this patch set has.

I request that we please not do this for my patchset; there are a number of
reasons in my mind:

1. trace_preempt_off in the existing code is only called the first time
preemption is turned off. That is the definition of "preempt off": it's the
first time preemption is actually turned off, and it has little to do with
going to a deeper preempt count. Whether the count increases further or not,
preemption is already off, and that's confirmed by the first preempt-off
event. This is how I read the comment in sched/core.c as well:

 * If the value passed in is equal to the current preempt count
 * then we just disabled preemption.

This is how I based this patchset as well. Again, it's not my use case, and
tracing the nested sections can be a future patch if that turns out to be
useful.

2. This stuff is already a bit trace-heavy; I'd prefer not to generate an
event every time the preempt count increases. Of course there are filters,
but then we still have the filtering overhead for a use case that I am not
really targeting with this patchset.

3. It will complicate the patch more: apart from adding filters as Steven
suggested, it would also mean changing how preempt_latency_start in
sched/core.c works (a rough sketch of what that function does today is
below).

Do you mind if we please keep this as a 'future' use case for a future
patch? It's not my use case at all for this patchset and not what I was
intending.
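To illustrate points 1 and 3, this is roughly what the existing check in
kernel/sched/core.c looks like (paraphrased from memory, so the exact
details may differ across kernel versions):

static inline void preempt_latency_start(int val)
{
	/*
	 * Fire the tracepoint only when this call took preempt_count
	 * from zero to non-zero, i.e. the outermost preempt_disable().
	 * Nested disables just bump the count and are not traced.
	 */
	if (preempt_count() == val) {
		unsigned long ip = get_lock_parent_ip();
#ifdef CONFIG_DEBUG_PREEMPT
		current->preempt_disable_ip = ip;
#endif
		trace_preempt_off(CALLER_ADDR0, ip);
	}
}

So trace_preempt_off naturally fires only for the outermost disable; tracing
the nested sections as well would mean relaxing or changing that check,
which is the extra churn I was referring to in 3.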
I will reply to Peter's other comments in the other email shortly.

Thanks,

- Joel