* [RFC] ftrace / perf 'recursion'
@ 2016-08-17  9:19 Peter Zijlstra
  2016-08-17 10:33 ` Peter Zijlstra
  0 siblings, 1 reply; 8+ messages in thread
From: Peter Zijlstra @ 2016-08-17  9:19 UTC (permalink / raw)
  To: Steven Rostedt, Thomas Gleixner, Ingo Molnar, Alexander Shishkin
  Cc: linux-kernel


blergh, now with LKML added...

---

Much like: d525211f9d1b ("perf: Fix irq_work 'tail' recursion")

I found another infinite recursion problem with irq_work:

 <IRQ>  [<ffffffff811bb985>] ? perf_output_begin_forward+0x5/0x1e0
 [<ffffffff81067835>] ? arch_irq_work_raise+0x5/0x40
 [<ffffffff811ba170>] ? perf_event_output_forward+0x30/0x60
 [<ffffffff81067835>] arch_irq_work_raise+0x5/0x40
 [<ffffffff811ab547>] irq_work_queue+0x97/0xa0
 [<ffffffff81067835>] ? arch_irq_work_raise+0x5/0x40
 [<ffffffff811ab547>] ? irq_work_queue+0x97/0xa0
 [<ffffffff811af88f>] __perf_event_overflow+0xcf/0x1b0
 [<ffffffff811afa0a>] perf_swevent_overflow+0x9a/0xc0
 [<ffffffff811afa8d>] perf_swevent_event+0x5d/0x80
 [<ffffffff811b0472>] perf_tp_event+0x1a2/0x1b0
 [<ffffffff81a559b0>] ? _raw_spin_trylock+0x30/0x30
 [<ffffffff8119dc73>] ? perf_ftrace_function_call+0x83/0xd0
 [<ffffffff8117db25>] ? ftrace_ops_assist_func+0xb5/0x110
 [<ffffffff8117db25>] ? ftrace_ops_assist_func+0xb5/0x110
 [<ffffffff810df52d>] ? do_send_sig_info+0x5d/0x80
 [<ffffffff81a559b0>] ? _raw_spin_trylock+0x30/0x30
 [<ffffffff8119db3f>] ? perf_trace_buf_alloc+0x1f/0xa0
 [<ffffffff8124740b>] ? kill_fasync+0x6b/0x90
 [<ffffffff81a559b0>] ? _raw_spin_trylock+0x30/0x30
 [<ffffffff8119dc73>] ? perf_ftrace_function_call+0x83/0xd0
 [<ffffffff81067753>] ? smp_irq_work_interrupt+0x33/0x40
 [<ffffffff810d6f20>] ? irq_enter+0x70/0x70
 [<ffffffff8119dcaf>] perf_ftrace_function_call+0xbf/0xd0
 [<ffffffff8117db25>] ? ftrace_ops_assist_func+0xb5/0x110
 [<ffffffff8117db25>] ftrace_ops_assist_func+0xb5/0x110
 [<ffffffff81067753>] ? smp_irq_work_interrupt+0x33/0x40
 [<ffffffff810d6f20>] ? irq_enter+0x70/0x70
 [<ffffffffa157e077>] 0xffffffffa157e077
 [<ffffffff8124740b>] ? kill_fasync+0x6b/0x90
 [<ffffffff810d6f25>] ? irq_exit+0x5/0xb0
 [<ffffffff810d6f25>] irq_exit+0x5/0xb0
 [<ffffffff81067753>] smp_irq_work_interrupt+0x33/0x40
 [<ffffffff810d6f25>] ? irq_exit+0x5/0xb0
 [<ffffffff81067753>] ? smp_irq_work_interrupt+0x33/0x40
 [<ffffffff81a580b9>] irq_work_interrupt+0x89/0x90
 <EOI>

Here every irq_work execution triggers another irq_work queue, which
gets us stuck in an IRQ loop ad infinitum.

This is through function tracing of irq_exit(), but the same can be done
through function tracing of pretty much anything else around there and
through the explicit IRQ_WORK_VECTOR tracepoints.
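For illustration, the feedback loop can be modeled in plain user-space C (all names below are invented for the sketch; this is a model of the mechanism, not kernel code):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model: a pending flag stands in for irq_work_queue(),
 * and the traced exit path stands in for a perf event that queues
 * another irq_work from inside irq_exit(). */

static bool work_pending;

static void irq_work_queue_model(void)
{
	work_pending = true;
}

/* One IRQ delivery per iteration: run the work (clearing the pending
 * flag, as irq_work_run() does), then "trace" the exit path.  If that
 * trace event re-queues the work, the loop never terminates unless
 * something external bounds it. */
static int run_irq_loop(bool trace_exit_path, int max_irqs)
{
	int irqs_taken = 0;

	while (work_pending && irqs_taken < max_irqs) {
		irqs_taken++;
		work_pending = false;		/* irq_work_run() */
		if (trace_exit_path)
			irq_work_queue_model();	/* traced irq_exit() raises again */
	}
	return irqs_taken;
}
```

With the exit path untraced the loop takes exactly one IRQ; with it traced, it only stops at whatever bound you impose, which is the "ad infinitum" above.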

The only 'solution' is something like the below, which I absolutely
detest because it makes the irq_work code slower for everyone.

Also, this doesn't fix the problem for any other arch :/

I would much rather tag the whole irq_work thing notrace and remove the
tracepoints, but I'm sure that'll not be a popular solution either :/



---
 arch/x86/kernel/irq_work.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/irq_work.c b/arch/x86/kernel/irq_work.c
index 3512ba607361..a8a7999f1147 100644
--- a/arch/x86/kernel/irq_work.c
+++ b/arch/x86/kernel/irq_work.c
@@ -10,26 +10,41 @@
 #include <asm/apic.h>
 #include <asm/trace/irq_vectors.h>
 
+/*
+ * I'm sure header recursion will bite my head off
+ */
+#ifdef CONFIG_PERF_EVENTS
+extern int perf_swevent_get_recursion_context(void);
+extern void perf_swevent_put_recursion_context(int rctx);
+#else
+static inline int  perf_swevent_get_recursion_context(void)		{ return -1; }
+static inline void perf_swevent_put_recursion_context(int rctx)		{ }
+#endif
+
 static inline void __smp_irq_work_interrupt(void)
 {
 	inc_irq_stat(apic_irq_work_irqs);
 	irq_work_run();
 }
 
-__visible void smp_irq_work_interrupt(struct pt_regs *regs)
+__visible notrace void smp_irq_work_interrupt(struct pt_regs *regs)
 {
+	int rctx = perf_swevent_get_recursion_context();
 	ipi_entering_ack_irq();
 	__smp_irq_work_interrupt();
 	exiting_irq();
+	perf_swevent_put_recursion_context(rctx);
 }
 
-__visible void smp_trace_irq_work_interrupt(struct pt_regs *regs)
+__visible notrace void smp_trace_irq_work_interrupt(struct pt_regs *regs)
 {
+	int rctx = perf_swevent_get_recursion_context();
 	ipi_entering_ack_irq();
 	trace_irq_work_entry(IRQ_WORK_VECTOR);
 	__smp_irq_work_interrupt();
 	trace_irq_work_exit(IRQ_WORK_VECTOR);
 	exiting_irq();
+	perf_swevent_put_recursion_context(rctx);
 }
 
 void arch_irq_work_raise(void)

^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [RFC] ftrace / perf 'recursion'
  2016-08-17  9:19 [RFC] ftrace / perf 'recursion' Peter Zijlstra
@ 2016-08-17 10:33 ` Peter Zijlstra
  2016-08-17 10:57   ` Peter Zijlstra
  0 siblings, 1 reply; 8+ messages in thread
From: Peter Zijlstra @ 2016-08-17 10:33 UTC (permalink / raw)
  To: Steven Rostedt, Thomas Gleixner, Ingo Molnar, Alexander Shishkin
  Cc: linux-kernel

On Wed, Aug 17, 2016 at 11:19:53AM +0200, Peter Zijlstra wrote:
> 
> blergh, now with LKML added...
> 
> ---
> 
> Much like: d525211f9d1b ("perf: Fix irq_work 'tail' recursion")
> 
> I found another infinite recursion problem with irq_work:
> 
>  <IRQ>  [<ffffffff811bb985>] ? perf_output_begin_forward+0x5/0x1e0
>  [<ffffffff81067835>] ? arch_irq_work_raise+0x5/0x40
>  [<ffffffff811ba170>] ? perf_event_output_forward+0x30/0x60
>  [<ffffffff81067835>] arch_irq_work_raise+0x5/0x40
>  [<ffffffff811ab547>] irq_work_queue+0x97/0xa0
>  [<ffffffff81067835>] ? arch_irq_work_raise+0x5/0x40
>  [<ffffffff811ab547>] ? irq_work_queue+0x97/0xa0
>  [<ffffffff811af88f>] __perf_event_overflow+0xcf/0x1b0
>  [<ffffffff811afa0a>] perf_swevent_overflow+0x9a/0xc0
>  [<ffffffff811afa8d>] perf_swevent_event+0x5d/0x80
>  [<ffffffff811b0472>] perf_tp_event+0x1a2/0x1b0
>  [<ffffffff81a559b0>] ? _raw_spin_trylock+0x30/0x30
>  [<ffffffff8119dc73>] ? perf_ftrace_function_call+0x83/0xd0
>  [<ffffffff8117db25>] ? ftrace_ops_assist_func+0xb5/0x110
>  [<ffffffff8117db25>] ? ftrace_ops_assist_func+0xb5/0x110
>  [<ffffffff810df52d>] ? do_send_sig_info+0x5d/0x80
>  [<ffffffff81a559b0>] ? _raw_spin_trylock+0x30/0x30
>  [<ffffffff8119db3f>] ? perf_trace_buf_alloc+0x1f/0xa0
>  [<ffffffff8124740b>] ? kill_fasync+0x6b/0x90
>  [<ffffffff81a559b0>] ? _raw_spin_trylock+0x30/0x30
>  [<ffffffff8119dc73>] ? perf_ftrace_function_call+0x83/0xd0
>  [<ffffffff81067753>] ? smp_irq_work_interrupt+0x33/0x40
>  [<ffffffff810d6f20>] ? irq_enter+0x70/0x70
>  [<ffffffff8119dcaf>] perf_ftrace_function_call+0xbf/0xd0
>  [<ffffffff8117db25>] ? ftrace_ops_assist_func+0xb5/0x110
>  [<ffffffff8117db25>] ftrace_ops_assist_func+0xb5/0x110
>  [<ffffffff81067753>] ? smp_irq_work_interrupt+0x33/0x40
>  [<ffffffff810d6f20>] ? irq_enter+0x70/0x70
>  [<ffffffffa157e077>] 0xffffffffa157e077
>  [<ffffffff8124740b>] ? kill_fasync+0x6b/0x90
>  [<ffffffff810d6f25>] ? irq_exit+0x5/0xb0
>  [<ffffffff810d6f25>] irq_exit+0x5/0xb0
>  [<ffffffff81067753>] smp_irq_work_interrupt+0x33/0x40
>  [<ffffffff810d6f25>] ? irq_exit+0x5/0xb0
>  [<ffffffff81067753>] ? smp_irq_work_interrupt+0x33/0x40
>  [<ffffffff81a580b9>] irq_work_interrupt+0x89/0x90
>  <EOI>
> 
> Here every irq_work execution triggers another irq_work queue, which
> gets us stuck in an IRQ loop ad infinitum.
> 
> This is through function tracing of irq_exit(), but the same can be done
> through function tracing of pretty much anything else around there and
> through the explicit IRQ_WORK_VECTOR tracepoints.
> 
> The only 'solution' is something like the below, which I absolutely
> detest because it makes the irq_work code slower for everyone.
> 
> Also, this doesn't fix the problem for any other arch :/
> 
> I would much rather tag the whole irq_work thing notrace and remove the
> tracepoints, but I'm sure that'll not be a popular solution either :/

So I also found: d5b5f391d434 ("ftrace, perf: Avoid infinite event
generation loop") which is very similar.

I suppose that the entry tracepoint is harmless because you cannot queue
an irq_work that is already queued, so there it doesn't cause the
recursion.
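A rough user-space model of why the entry side is safe (simplified sketch; the real irq_work_claim() uses an atomic cmpxchg on the work's flags rather than a plain bool):

```c
#include <assert.h>
#include <stdbool.h>

/* Model of irq_work's claim step: an IRQ_WORK_PENDING-style flag is
 * test-and-set on queue, so queueing an already-queued work is a no-op
 * and does not raise another IPI. */

static bool pending;

static bool irq_work_claim_model(void)
{
	if (pending)
		return false;	/* already queued: no new IPI */
	pending = true;
	return true;
}

/* The entry tracepoint runs while PENDING is still set, so a re-queue
 * attempt from its event is absorbed.  The exit tracepoint runs after
 * irq_work_run() has cleared the flag, so the same attempt re-arms
 * the IPI and closes the loop. */
static bool event_requeues(bool flag_already_cleared)
{
	if (flag_already_cleared)
		pending = false;	/* as after irq_work_run() */
	return irq_work_claim_model();
}
```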

So how to extend the same to function tracer .... we'd have to mark
exiting_irq() -> irq_exit() and everything from that as notrace, which
seems somewhat excessive, fragile and undesired because tracing those
functions is useful in other contexts :/


* Re: [RFC] ftrace / perf 'recursion'
  2016-08-17 10:33 ` Peter Zijlstra
@ 2016-08-17 10:57   ` Peter Zijlstra
  2016-08-17 13:49     ` Steven Rostedt
  0 siblings, 1 reply; 8+ messages in thread
From: Peter Zijlstra @ 2016-08-17 10:57 UTC (permalink / raw)
  To: Steven Rostedt, Thomas Gleixner, Ingo Molnar, Alexander Shishkin
  Cc: linux-kernel

On Wed, Aug 17, 2016 at 12:33:06PM +0200, Peter Zijlstra wrote:

> So how to extend the same to function tracer .... we'd have to mark
> exiting_irq() -> irq_exit() and everything from that as notrace, which
> seems somewhat excessive, fragile and undesired because tracing those
> functions is useful in other contexts :/

Steve, would something like so work? It would completely kill function
tracing for the irq_work exit path, but that seems fairly sane over-all.
After all, all the common irq_work code is already notrace as well.

 arch/x86/kernel/irq_work.c | 23 ++++++++++++++++++++---
 1 file changed, 20 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/irq_work.c b/arch/x86/kernel/irq_work.c
index 3512ba607361..24de793c35c6 100644
--- a/arch/x86/kernel/irq_work.c
+++ b/arch/x86/kernel/irq_work.c
@@ -10,17 +10,34 @@
 #include <asm/apic.h>
 #include <asm/trace/irq_vectors.h>
 
-static inline void __smp_irq_work_interrupt(void)
+static inline notrace void __smp_irq_work_interrupt(void)
 {
 	inc_irq_stat(apic_irq_work_irqs);
 	irq_work_run();
 }
 
+static inline notrace void exiting_irq_work(void)
+{
+#ifdef CONFIG_TRACING
+	if (unlikely(1 /* function_tracing_enabled() */)) {
+		unsigned long trace_recursion = current->trace_recursion;
+
+		current->trace_recursion |= 1 << 10; /* TRACE_INTERNAL_IRQ_BIT */
+		barrier();
+		exiting_irq();
+		barrier();
+		current->trace_recursion = trace_recursion;
+		return;
+	}
+#endif
+	exiting_irq();
+}
+
 __visible void smp_irq_work_interrupt(struct pt_regs *regs)
 {
 	ipi_entering_ack_irq();
 	__smp_irq_work_interrupt();
-	exiting_irq();
+	exiting_irq_work();
 }
 
 __visible void smp_trace_irq_work_interrupt(struct pt_regs *regs)
@@ -29,7 +46,7 @@ __visible void smp_trace_irq_work_interrupt(struct pt_regs *regs)
 	trace_irq_work_entry(IRQ_WORK_VECTOR);
 	__smp_irq_work_interrupt();
 	trace_irq_work_exit(IRQ_WORK_VECTOR);
-	exiting_irq();
+	exiting_irq_work();
 }
 
 void arch_irq_work_raise(void)


* Re: [RFC] ftrace / perf 'recursion'
  2016-08-17 10:57   ` Peter Zijlstra
@ 2016-08-17 13:49     ` Steven Rostedt
  2016-08-17 14:06       ` Peter Zijlstra
  0 siblings, 1 reply; 8+ messages in thread
From: Steven Rostedt @ 2016-08-17 13:49 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Thomas Gleixner, Ingo Molnar, Alexander Shishkin, linux-kernel

On Wed, 17 Aug 2016 12:57:16 +0200
Peter Zijlstra <peterz@infradead.org> wrote:

> On Wed, Aug 17, 2016 at 12:33:06PM +0200, Peter Zijlstra wrote:
> 
> > So how to extend the same to function tracer .... we'd have to mark
> > exiting_irq() -> irq_exit() and everything from that as notrace, which
> > seems somewhat excessive, fragile and undesired because tracing those
> > functions is useful in other contexts :/
> 
> Steve, would something like so work? It would completely kill function
> tracing for the irq_work exit path, but that seems fairly sane over-all.
> After all, all the common irq_work code is already notrace as well.
> 
>  arch/x86/kernel/irq_work.c | 23 ++++++++++++++++++++---
>  1 file changed, 20 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kernel/irq_work.c b/arch/x86/kernel/irq_work.c
> index 3512ba607361..24de793c35c6 100644
> --- a/arch/x86/kernel/irq_work.c
> +++ b/arch/x86/kernel/irq_work.c
> @@ -10,17 +10,34 @@
>  #include <asm/apic.h>
>  #include <asm/trace/irq_vectors.h>
>  
> -static inline void __smp_irq_work_interrupt(void)
> +static inline notrace void __smp_irq_work_interrupt(void)

FYI, anything marked "inline" is also marked "notrace", because such a
function only gets traced if gcc decides not to inline it, and because
that "randomness" caused issues in the past, we define all "inline"s to
include "notrace"; a function marked inline will therefore never be
traced, regardless of whether gcc decides to inline it.


>  {
>  	inc_irq_stat(apic_irq_work_irqs);
>  	irq_work_run();
>  }
>  
> +static inline notrace void exiting_irq_work(void)
> +{
> +#ifdef CONFIG_TRACING
> +	if (unlikely(1 /* function_tracing_enabled() */)) {
> +		unsigned long trace_recursion = current->trace_recursion;
> +
> +		current->trace_recursion |= 1 << 10; /* TRACE_INTERNAL_IRQ_BIT */
> +		barrier();
> +		exiting_irq();
> +		barrier();
> +		current->trace_recursion = trace_recursion;
> +		return;
> +	}
> +#endif

yuck. This looks very fragile. What happens if perf gets hooked to
function graph tracing? Then this won't help on function exit.

Also, it will prevent any tracing of NMIs that occur in there.

I would really like to keep this fix within perf if possible. If
anything, the flag should just tell the perf function handler not to
trace, this shouldn't stop all function handlers.

Maybe just have a flag that says you are in an irq_work, and if
that is set, have the perf tracepoints and function handler not
trigger more events?
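As a sketch of that suggestion (the names here are invented for illustration, not an existing kernel API):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical "in_irq_work" flag set around the irq_work handler;
 * the perf function handler bails out instead of generating another
 * event while it is set.  Other function handlers would be unaffected. */

static bool in_irq_work;
static int events_emitted;

static void perf_handler_model(void)
{
	if (in_irq_work)
		return;		/* suppress perf events from inside irq_work */
	events_emitted++;
}

static void irq_work_interrupt_model(void)
{
	in_irq_work = true;
	perf_handler_model();	/* a traced function in the exit path */
	in_irq_work = false;
}
```

Events generated outside the irq_work window still go through; only the self-feeding ones from inside the handler are dropped.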

-- Steve


> +	exiting_irq();
> +}
> +
>  __visible void smp_irq_work_interrupt(struct pt_regs *regs)
>  {
>  	ipi_entering_ack_irq();
>  	__smp_irq_work_interrupt();
> -	exiting_irq();
> +	exiting_irq_work();
>  }
>  
>  __visible void smp_trace_irq_work_interrupt(struct pt_regs *regs)
> @@ -29,7 +46,7 @@ __visible void smp_trace_irq_work_interrupt(struct pt_regs *regs)
>  	trace_irq_work_entry(IRQ_WORK_VECTOR);
>  	__smp_irq_work_interrupt();
>  	trace_irq_work_exit(IRQ_WORK_VECTOR);
> -	exiting_irq();
> +	exiting_irq_work();
>  }
>  
>  void arch_irq_work_raise(void)


* Re: [RFC] ftrace / perf 'recursion'
  2016-08-17 13:49     ` Steven Rostedt
@ 2016-08-17 14:06       ` Peter Zijlstra
  2016-08-17 14:25         ` Steven Rostedt
  0 siblings, 1 reply; 8+ messages in thread
From: Peter Zijlstra @ 2016-08-17 14:06 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Thomas Gleixner, Ingo Molnar, Alexander Shishkin, linux-kernel

On Wed, Aug 17, 2016 at 09:49:32AM -0400, Steven Rostedt wrote:
> On Wed, 17 Aug 2016 12:57:16 +0200
> Peter Zijlstra <peterz@infradead.org> wrote:
> 

> > +static inline notrace void __smp_irq_work_interrupt(void)
> 
> FYI, anything marked "inline" is also marked "notrace", because such a
> function only gets traced if gcc decides not to inline it, and because
> that "randomness" caused issues in the past, we define all "inline"s to
> include "notrace"; a function marked inline will therefore never be
> traced, regardless of whether gcc decides to inline it.

Ah, missed that.

> > +static inline notrace void exiting_irq_work(void)
> > +{
> > +#ifdef CONFIG_TRACING
> > +	if (unlikely(1 /* function_tracing_enabled() */)) {
> > +		unsigned long trace_recursion = current->trace_recursion;
> > +
> > +		current->trace_recursion |= 1 << 10; /* TRACE_INTERNAL_IRQ_BIT */
> > +		barrier();
> > +		exiting_irq();
> > +		barrier();
> > +		current->trace_recursion = trace_recursion;
> > +		return;
> > +	}
> > +#endif
> 
> yuck. 

Well, yes ;-)

> This looks very fragile. What happens if perf gets hooked to
> function graph tracing? Then this won't help on function exit.

Not sure what you mean, all callers of this are also notrace. There
should not be any return trampoline pending.

> Also, it will prevent any tracing of NMIs that occur in there.

It should not, see how I only mark the IRQ bit, not the NMI bit.

Could be I misunderstand your recursion bits though....
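For reference, the per-context idea can be modeled like this (bit layout invented for illustration, not the kernel's actual trace_recursion encoding):

```c
#include <assert.h>

/* Per-context recursion bits: setting only the IRQ bit blocks
 * re-entry from IRQ context, while an NMI arriving in the same
 * window uses a different bit and can still be traced. */

enum {
	CTX_IRQ_BIT = 1 << 0,
	CTX_NMI_BIT = 1 << 1,
};

static unsigned long trace_recursion;

static int trace_test_and_set(unsigned long bit)
{
	if (trace_recursion & bit)
		return -1;	/* recursion in this context: refuse to trace */
	trace_recursion |= bit;
	return 0;
}
```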

> I would really like to keep this fix within perf if possible. If
> anything, the flag should just tell the perf function handler not to
> trace, this shouldn't stop all function handlers.

Well, my thinking was that there's a reason most of irq_work is already
notrace. kernel/irq_work.c has CC_FLAGS_FTRACE removed. That seems to
suggest that tracing irq_work is a problem.

tracing also seems to use irq_work..


* Re: [RFC] ftrace / perf 'recursion'
  2016-08-17 14:06       ` Peter Zijlstra
@ 2016-08-17 14:25         ` Steven Rostedt
  2016-08-17 14:57           ` Peter Zijlstra
  0 siblings, 1 reply; 8+ messages in thread
From: Steven Rostedt @ 2016-08-17 14:25 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Thomas Gleixner, Ingo Molnar, Alexander Shishkin, linux-kernel

On Wed, 17 Aug 2016 16:06:12 +0200
Peter Zijlstra <peterz@infradead.org> wrote:


> > Also, it will prevent any tracing of NMIs that occur in there.  
> 
> It should not, see how I only mark the IRQ bit, not the NMI bit.

Ah, I didn't look deep at what you set there. Maybe that would work.
Still pretty hacky.

> 
> Could be I misunderstand your recursion bits though....

No, I think you are correct.

> 
> > I would really like to keep this fix within perf if possible. If
> > anything, the flag should just tell the perf function handler not to
> > trace, this shouldn't stop all function handlers.  
> 
> Well, my thinking was that there's a reason most of irq_work is already
> notrace. kernel/irq_work.c has CC_FLAGS_FTRACE removed. That seems to
> suggest that tracing irq_work is a problem.

Well, you were the one that added that ;-)

> 
> tracing also seems to use irq_work..

Yep, but it only calls irq_work if there's a reader waiting, and once
it calls it, it won't call it again until the reader wakes up. I have no
issues in tracing that, as it will trace the wake up as well.

Are you sending a signal to userspace via the irq work? Maybe we should
have a kernel thread that does that instead. That way, the irq works
can be suspended until the kernel thread gets to run. Then even though
the waking of the thread will cause more events, it will be spaced out
enough not to cause an irq work storm.

-- Steve


* Re: [RFC] ftrace / perf 'recursion'
  2016-08-17 14:25         ` Steven Rostedt
@ 2016-08-17 14:57           ` Peter Zijlstra
  2016-08-17 15:04             ` Steven Rostedt
  0 siblings, 1 reply; 8+ messages in thread
From: Peter Zijlstra @ 2016-08-17 14:57 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Thomas Gleixner, Ingo Molnar, Alexander Shishkin, linux-kernel

On Wed, Aug 17, 2016 at 10:25:59AM -0400, Steven Rostedt wrote:

> > > Also, it will prevent any tracing of NMIs that occur in there.  
> > 
> > It should not, see how I only mark the IRQ bit, not the NMI bit.
> 
> Ah, I didn't look deep at what you set there. Maybe that would work.
> Still pretty hacky.

Sure :-)

> > > I would really like to keep this fix within perf if possible. If
> > > anything, the flag should just tell the perf function handler not to
> > > trace, this shouldn't stop all function handlers.  
> > 
> > Well, my thinking was that there's a reason most of irq_work is already
> > notrace. kernel/irq_work.c has CC_FLAGS_FTRACE removed. That seems to
> > suggest that tracing irq_work is a problem.
> 
> Well, you were the one that added that ;-)

OK, I suppose I can do the same for perf only, which is basically the
first patch on this thread. And then remove the notrace muck for
irq_work.c.

> Are you calling a signal to userspace via the irq work? Maybe we should
> have a kernel thread that does that instead. That way, the irq works
> can be suspended until the kernel thread gets to run. Then even though
> the waking of the thread will cause more events, it will be spaced out
> enough not to cause an irq work storm.

Nah, that'd wreck the desired semantics. We could maybe use a task_work
for the signal cruft though, and only generate the signal on the return
to userspace. But I'm not sure that will cure the problem.

We'd still need the irq_work to wake tasks stuck in poll() and friends.
And once we're over the watermark, every new event will trigger that
wakeup, and the wakeup will generate a new event etc..


* Re: [RFC] ftrace / perf 'recursion'
  2016-08-17 14:57           ` Peter Zijlstra
@ 2016-08-17 15:04             ` Steven Rostedt
  0 siblings, 0 replies; 8+ messages in thread
From: Steven Rostedt @ 2016-08-17 15:04 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Thomas Gleixner, Ingo Molnar, Alexander Shishkin, linux-kernel

On Wed, 17 Aug 2016 16:57:09 +0200
Peter Zijlstra <peterz@infradead.org> wrote:


> We'd still need the irq_work to wake tasks stuck in poll() and friends.
> And once we're over the watermark, every new event will trigger that
> wakeup, and the wakeup will generate a new event etc..

I just bite the bullet on those (first-world OS) issues. If one wants
to trace wake-ups, and the wake-up of the trace task gets traced, which
causes the trace task to wake up more often and add more events, then
so be it. I like to see how the tracer affects the system as well. I've
noticed that hiding the tracer can make it confusing if one sees gaps
in the trace that were in fact caused by the tracer.

-- Steve


end of thread, other threads:[~2016-08-17 15:04 UTC | newest]

Thread overview: 8+ messages
-- links below jump to the message on this page --
2016-08-17  9:19 [RFC] ftrace / perf 'recursion' Peter Zijlstra
2016-08-17 10:33 ` Peter Zijlstra
2016-08-17 10:57   ` Peter Zijlstra
2016-08-17 13:49     ` Steven Rostedt
2016-08-17 14:06       ` Peter Zijlstra
2016-08-17 14:25         ` Steven Rostedt
2016-08-17 14:57           ` Peter Zijlstra
2016-08-17 15:04             ` Steven Rostedt
