From: Joel Fernandes <joel@joelfernandes.org>
To: Viktor Rosendahl <viktor.rosendahl@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>,
Ingo Molnar <mingo@redhat.com>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v6 1/4] ftrace: Implement fs notification for tracing_max_latency
Date: Sat, 7 Sep 2019 19:38:01 -0400 [thread overview]
Message-ID: <20190907233801.GA117656@google.com> (raw)
In-Reply-To: <c35722db-bb79-7e09-ac02-e82ab827e1e3@gmail.com>
On Sat, Sep 07, 2019 at 11:12:59PM +0200, Viktor Rosendahl wrote:
> On 9/6/19 4:17 PM, Joel Fernandes wrote:
> > On Thu, Sep 05, 2019 at 03:25:45PM +0200, Viktor Rosendahl wrote:
> <clip>
> > > +
> > > +__init static int latency_fsnotify_init(void)
> > > +{
> > > + fsnotify_wq = alloc_workqueue("tr_max_lat_wq",
> > > + WQ_UNBOUND | WQ_HIGHPRI, 0);
> > > + if (!fsnotify_wq) {
> > > + pr_err("Unable to allocate tr_max_lat_wq\n");
> > > + return -ENOMEM;
> > > + }
> >
> > Why not just use the system workqueue instead of adding another workqueue?
> >
>
> For the latency-collector to work properly in the worst case, when a
> new latency occurs immediately, the fsnotify event must be received in
> less time than the threshold is set to. If we are always slower, we
> will always lose certain latencies.
>
> My intention was to minimize latency in some important cases, so that
> user space receives the notification sooner rather than later.
>
> There doesn't seem to be any system workqueue with WQ_UNBOUND and
> WQ_HIGHPRI. My thinking was that WQ_UNBOUND might help with the latency
> in some important cases.
>
> If we use:
>
> queue_work(system_highpri_wq, &tr->fsnotify_work);
>
> then the work will (almost) always execute on the same CPU, but if we
> are unlucky that CPU could be too busy while another CPU in the system
> would be able to process the work soon enough.
>
> queue_work_on() could be used to queue the work on another CPU but it
> seems difficult to select the right CPU.
Ok, a separate WQ is fine with me as such, since the preempt/irq events are
on a debug kernel anyway.
I'll keep reviewing your patches over the next few days; I am at the LPC
conference, so I might be a bit slow. Overall I think the series looks like
it's maturing and getting close.
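
For reference, a minimal sketch of the approach being discussed (hypothetical,
condensed from the quoted patch; the work function body and the DECLARE_WORK
placement are illustrative assumptions, not the actual patch code):

	/* Sketch: a dedicated unbound, high-priority workqueue, so the
	 * notification work can run on any idle CPU instead of being
	 * (almost always) bound to the possibly-busy queueing CPU. */
	#include <linux/workqueue.h>

	static struct workqueue_struct *fsnotify_wq;

	static void latency_fsnotify_workfn(struct work_struct *work)
	{
		/* send the fsnotify event to user space here */
	}

	static DECLARE_WORK(fsnotify_work, latency_fsnotify_workfn);

	__init static int latency_fsnotify_init(void)
	{
		/* WQ_UNBOUND: workers are not pinned to the queueing CPU;
		 * WQ_HIGHPRI: workers run at elevated priority. */
		fsnotify_wq = alloc_workqueue("tr_max_lat_wq",
					      WQ_UNBOUND | WQ_HIGHPRI, 0);
		if (!fsnotify_wq)
			return -ENOMEM;
		return 0;
	}

	/* When a new max latency is recorded, either of these would queue
	 * the work:
	 *   queue_work(system_highpri_wq, &fsnotify_work); // per-CPU queue
	 *   queue_work(fsnotify_wq, &fsnotify_work);       // any idle CPU
	 */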
thanks,
- Joel
Thread overview: 9+ messages
2019-09-05 13:25 [PATCH v6 0/4] Some new features for the preempt/irqsoff tracers Viktor Rosendahl
2019-09-05 13:25 ` [PATCH v6 1/4] ftrace: Implement fs notification for tracing_max_latency Viktor Rosendahl
2019-09-06 14:17 ` Joel Fernandes
2019-09-07 21:12 ` Viktor Rosendahl
2019-09-07 23:38 ` Joel Fernandes [this message]
2019-09-08 17:05 ` Viktor Rosendahl
2019-09-05 13:25 ` [PATCH v6 2/4] preemptirq_delay_test: Add the burst feature and a sysfs trigger Viktor Rosendahl
2019-09-05 13:25 ` [PATCH v6 3/4] Add the latency-collector to tools Viktor Rosendahl
2019-09-05 13:25 ` [PATCH v6 4/4] ftrace: Add an option for tracing console latencies Viktor Rosendahl