* [PATCH] ftrace: Remove use of control list and ops
@ 2015-11-30 22:36 Steven Rostedt
2015-12-01 13:42 ` Jiri Olsa
0 siblings, 1 reply; 14+ messages in thread
From: Steven Rostedt @ 2015-11-30 22:36 UTC (permalink / raw)
To: Jiri Olsa; +Cc: LKML, Ingo Molnar, Peter Zijlstra
[ Jiri, can you take a look at this. You can also apply it on top of my
branch ftrace/core, and run any specific tests. I just need to nuke
that control structure for further updates with ftrace. ]
Currently perf has its own list function within the ftrace infrastructure
that appears to be used only to give it per-cpu disabling, as well as a
check to make sure that it is not called while RCU is not watching. It
uses something called the "control_ops", which is used to iterate over the
ops under it with the control_list_func().

The problem is that the control_ops and control_list_func unnecessarily
complicate the code. By replacing FTRACE_OPS_FL_CONTROL with two new flags
(FTRACE_OPS_FL_RCU and FTRACE_OPS_FL_PER_CPU) we can remove all the code
that is special to the control ops and add the needed checks within the
generic ftrace_ops_list_func().
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
include/linux/ftrace.h | 35 +++++------
kernel/trace/ftrace.c | 126 ++++++++++++----------------------------
kernel/trace/trace.h | 2 -
kernel/trace/trace_event_perf.c | 2 +-
4 files changed, 57 insertions(+), 108 deletions(-)
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 134f8d45b35b..4736a826baf5 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -76,8 +76,8 @@ ftrace_func_t ftrace_ops_get_func(struct ftrace_ops *ops);
* ENABLED - set/unset when ftrace_ops is registered/unregistered
* DYNAMIC - set when ftrace_ops is registered to denote dynamically
* allocated ftrace_ops which need special care
- * CONTROL - set manualy by ftrace_ops user to denote the ftrace_ops
- * could be controled by following calls:
+ * PER_CPU - set manually by ftrace_ops user to denote the ftrace_ops
+ * could be controlled by the following calls:
* ftrace_function_local_enable
* ftrace_function_local_disable
* SAVE_REGS - The ftrace_ops wants regs saved at each function called
@@ -121,7 +121,7 @@ ftrace_func_t ftrace_ops_get_func(struct ftrace_ops *ops);
enum {
FTRACE_OPS_FL_ENABLED = 1 << 0,
FTRACE_OPS_FL_DYNAMIC = 1 << 1,
- FTRACE_OPS_FL_CONTROL = 1 << 2,
+ FTRACE_OPS_FL_PER_CPU = 1 << 2,
FTRACE_OPS_FL_SAVE_REGS = 1 << 3,
FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED = 1 << 4,
FTRACE_OPS_FL_RECURSION_SAFE = 1 << 5,
@@ -134,6 +134,7 @@ enum {
FTRACE_OPS_FL_ALLOC_TRAMP = 1 << 12,
FTRACE_OPS_FL_IPMODIFY = 1 << 13,
FTRACE_OPS_FL_PID = 1 << 14,
+ FTRACE_OPS_FL_RCU = 1 << 15,
};
#ifdef CONFIG_DYNAMIC_FTRACE
@@ -146,11 +147,11 @@ struct ftrace_ops_hash {
#endif
/*
- * Note, ftrace_ops can be referenced outside of RCU protection.
- * (Although, for perf, the control ops prevent that). If ftrace_ops is
- * allocated and not part of kernel core data, the unregistering of it will
- * perform a scheduling on all CPUs to make sure that there are no more users.
- * Depending on the load of the system that may take a bit of time.
+ * Note, ftrace_ops can be referenced outside of RCU protection, unless
+ * the RCU flag is set. If ftrace_ops is allocated and not part of kernel
+ * core data, the unregistering of it will perform a scheduling on all CPUs
+ * to make sure that there are no more users. Depending on the load of the
+ * system that may take a bit of time.
*
* Any private data added must also take care not to be freed and if private
* data is added to a ftrace_ops that is in core code, the user of the
@@ -196,34 +197,34 @@ int unregister_ftrace_function(struct ftrace_ops *ops);
void clear_ftrace_function(void);
/**
- * ftrace_function_local_enable - enable controlled ftrace_ops on current cpu
+ * ftrace_function_local_enable - enable ftrace_ops on current cpu
*
* This function enables tracing on current cpu by decreasing
* the per cpu control variable.
* It must be called with preemption disabled and only on ftrace_ops
- * registered with FTRACE_OPS_FL_CONTROL. If called without preemption
+ * registered with FTRACE_OPS_FL_PER_CPU. If called without preemption
* disabled, this_cpu_ptr will complain when CONFIG_DEBUG_PREEMPT is enabled.
*/
static inline void ftrace_function_local_enable(struct ftrace_ops *ops)
{
- if (WARN_ON_ONCE(!(ops->flags & FTRACE_OPS_FL_CONTROL)))
+ if (WARN_ON_ONCE(!(ops->flags & FTRACE_OPS_FL_PER_CPU)))
return;
(*this_cpu_ptr(ops->disabled))--;
}
/**
- * ftrace_function_local_disable - enable controlled ftrace_ops on current cpu
+ * ftrace_function_local_disable - disable ftrace_ops on current cpu
*
- * This function enables tracing on current cpu by decreasing
+ * This function disables tracing on current cpu by increasing
* the per cpu control variable.
* It must be called with preemption disabled and only on ftrace_ops
- * registered with FTRACE_OPS_FL_CONTROL. If called without preemption
+ * registered with FTRACE_OPS_FL_PER_CPU. If called without preemption
* disabled, this_cpu_ptr will complain when CONFIG_DEBUG_PREEMPT is enabled.
*/
static inline void ftrace_function_local_disable(struct ftrace_ops *ops)
{
- if (WARN_ON_ONCE(!(ops->flags & FTRACE_OPS_FL_CONTROL)))
+ if (WARN_ON_ONCE(!(ops->flags & FTRACE_OPS_FL_PER_CPU)))
return;
(*this_cpu_ptr(ops->disabled))++;
@@ -235,12 +236,12 @@ static inline void ftrace_function_local_disable(struct ftrace_ops *ops)
*
* This function returns value of ftrace_ops::disabled on current cpu.
* It must be called with preemption disabled and only on ftrace_ops
- * registered with FTRACE_OPS_FL_CONTROL. If called without preemption
+ * registered with FTRACE_OPS_FL_PER_CPU. If called without preemption
* disabled, this_cpu_ptr will complain when CONFIG_DEBUG_PREEMPT is enabled.
*/
static inline int ftrace_function_local_disabled(struct ftrace_ops *ops)
{
- WARN_ON_ONCE(!(ops->flags & FTRACE_OPS_FL_CONTROL));
+ WARN_ON_ONCE(!(ops->flags & FTRACE_OPS_FL_PER_CPU));
return *this_cpu_ptr(ops->disabled);
}
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index e371aed51fcf..4e4df8e8674b 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -62,8 +62,6 @@
#define FTRACE_HASH_DEFAULT_BITS 10
#define FTRACE_HASH_MAX_BITS 12
-#define FL_GLOBAL_CONTROL_MASK (FTRACE_OPS_FL_CONTROL)
-
#ifdef CONFIG_DYNAMIC_FTRACE
#define INIT_OPS_HASH(opsname) \
.func_hash = &opsname.local_hash, \
@@ -113,11 +111,9 @@ static int ftrace_disabled __read_mostly;
static DEFINE_MUTEX(ftrace_lock);
-static struct ftrace_ops *ftrace_control_list __read_mostly = &ftrace_list_end;
static struct ftrace_ops *ftrace_ops_list __read_mostly = &ftrace_list_end;
ftrace_func_t ftrace_trace_function __read_mostly = ftrace_stub;
static struct ftrace_ops global_ops;
-static struct ftrace_ops control_ops;
static void ftrace_ops_recurs_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct pt_regs *regs);
@@ -203,7 +199,7 @@ void clear_ftrace_function(void)
ftrace_trace_function = ftrace_stub;
}
-static void control_ops_disable_all(struct ftrace_ops *ops)
+static void per_cpu_ops_disable_all(struct ftrace_ops *ops)
{
int cpu;
@@ -211,16 +207,19 @@ static void control_ops_disable_all(struct ftrace_ops *ops)
*per_cpu_ptr(ops->disabled, cpu) = 1;
}
-static int control_ops_alloc(struct ftrace_ops *ops)
+static int per_cpu_ops_alloc(struct ftrace_ops *ops)
{
int __percpu *disabled;
+ if (WARN_ON_ONCE(!(ops->flags & FTRACE_OPS_FL_PER_CPU)))
+ return -EINVAL;
+
disabled = alloc_percpu(int);
if (!disabled)
return -ENOMEM;
ops->disabled = disabled;
- control_ops_disable_all(ops);
+ per_cpu_ops_disable_all(ops);
return 0;
}
@@ -256,10 +255,11 @@ static inline void update_function_graph_func(void) { }
static ftrace_func_t ftrace_ops_get_list_func(struct ftrace_ops *ops)
{
/*
- * If this is a dynamic ops or we force list func,
+ * If this is a dynamic, RCU, or per CPU ops, or we force list func,
* then it needs to call the list anyway.
*/
- if (ops->flags & FTRACE_OPS_FL_DYNAMIC || FTRACE_FORCE_LIST_FUNC)
+ if (ops->flags & (FTRACE_OPS_FL_DYNAMIC | FTRACE_OPS_FL_PER_CPU |
+ FTRACE_OPS_FL_RCU) || FTRACE_FORCE_LIST_FUNC)
return ftrace_ops_list_func;
return ftrace_ops_get_func(ops);
@@ -383,26 +383,6 @@ static int remove_ftrace_ops(struct ftrace_ops **list, struct ftrace_ops *ops)
return 0;
}
-static void add_ftrace_list_ops(struct ftrace_ops **list,
- struct ftrace_ops *main_ops,
- struct ftrace_ops *ops)
-{
- int first = *list == &ftrace_list_end;
- add_ftrace_ops(list, ops);
- if (first)
- add_ftrace_ops(&ftrace_ops_list, main_ops);
-}
-
-static int remove_ftrace_list_ops(struct ftrace_ops **list,
- struct ftrace_ops *main_ops,
- struct ftrace_ops *ops)
-{
- int ret = remove_ftrace_ops(list, ops);
- if (!ret && *list == &ftrace_list_end)
- ret = remove_ftrace_ops(&ftrace_ops_list, main_ops);
- return ret;
-}
-
static void ftrace_update_trampoline(struct ftrace_ops *ops);
static int __register_ftrace_function(struct ftrace_ops *ops)
@@ -430,14 +410,12 @@ static int __register_ftrace_function(struct ftrace_ops *ops)
if (!core_kernel_data((unsigned long)ops))
ops->flags |= FTRACE_OPS_FL_DYNAMIC;
- if (ops->flags & FTRACE_OPS_FL_CONTROL) {
- if (control_ops_alloc(ops))
+ if (ops->flags & FTRACE_OPS_FL_PER_CPU) {
+ if (per_cpu_ops_alloc(ops))
return -ENOMEM;
- add_ftrace_list_ops(&ftrace_control_list, &control_ops, ops);
- /* The control_ops needs the trampoline update */
- ops = &control_ops;
- } else
- add_ftrace_ops(&ftrace_ops_list, ops);
+ }
+
+ add_ftrace_ops(&ftrace_ops_list, ops);
/* Always save the function, and reset at unregistering */
ops->saved_func = ops->func;
@@ -460,11 +438,7 @@ static int __unregister_ftrace_function(struct ftrace_ops *ops)
if (WARN_ON(!(ops->flags & FTRACE_OPS_FL_ENABLED)))
return -EBUSY;
- if (ops->flags & FTRACE_OPS_FL_CONTROL) {
- ret = remove_ftrace_list_ops(&ftrace_control_list,
- &control_ops, ops);
- } else
- ret = remove_ftrace_ops(&ftrace_ops_list, ops);
+ ret = remove_ftrace_ops(&ftrace_ops_list, ops);
if (ret < 0)
return ret;
@@ -2630,7 +2604,7 @@ void __weak arch_ftrace_trampoline_free(struct ftrace_ops *ops)
{
}
-static void control_ops_free(struct ftrace_ops *ops)
+static void per_cpu_ops_free(struct ftrace_ops *ops)
{
free_percpu(ops->disabled);
}
@@ -2731,13 +2705,13 @@ static int ftrace_shutdown(struct ftrace_ops *ops, int command)
if (!command || !ftrace_enabled) {
/*
- * If these are control ops, they still need their
+ * If these are per_cpu ops, they still need their
* per_cpu field freed. Since, function tracing is
* not currently active, we can just free them
* without synchronizing all CPUs.
*/
- if (ops->flags & FTRACE_OPS_FL_CONTROL)
- control_ops_free(ops);
+ if (ops->flags & FTRACE_OPS_FL_PER_CPU)
+ per_cpu_ops_free(ops);
return 0;
}
@@ -2778,7 +2752,7 @@ static int ftrace_shutdown(struct ftrace_ops *ops, int command)
/*
* Dynamic ops may be freed, we must make sure that all
* callers are done before leaving this function.
- * The same goes for freeing the per_cpu data of the control
+ * The same goes for freeing the per_cpu data of the per_cpu
* ops.
*
* Again, normal synchronize_sched() is not good enough.
@@ -2789,13 +2763,13 @@ static int ftrace_shutdown(struct ftrace_ops *ops, int command)
* infrastructure to do the synchronization, thus we must do it
* ourselves.
*/
- if (ops->flags & (FTRACE_OPS_FL_DYNAMIC | FTRACE_OPS_FL_CONTROL)) {
+ if (ops->flags & (FTRACE_OPS_FL_DYNAMIC | FTRACE_OPS_FL_PER_CPU)) {
schedule_on_each_cpu(ftrace_sync);
arch_ftrace_trampoline_free(ops);
- if (ops->flags & FTRACE_OPS_FL_CONTROL)
- control_ops_free(ops);
+ if (ops->flags & FTRACE_OPS_FL_PER_CPU)
+ per_cpu_ops_free(ops);
}
return 0;
@@ -5184,44 +5158,6 @@ void ftrace_reset_array_ops(struct trace_array *tr)
tr->ops->func = ftrace_stub;
}
-static void
-ftrace_ops_control_func(unsigned long ip, unsigned long parent_ip,
- struct ftrace_ops *op, struct pt_regs *regs)
-{
- if (unlikely(trace_recursion_test(TRACE_CONTROL_BIT)))
- return;
-
- /*
- * Some of the ops may be dynamically allocated,
- * they must be freed after a synchronize_sched().
- */
- preempt_disable_notrace();
- trace_recursion_set(TRACE_CONTROL_BIT);
-
- /*
- * Control funcs (perf) uses RCU. Only trace if
- * RCU is currently active.
- */
- if (!rcu_is_watching())
- goto out;
-
- do_for_each_ftrace_op(op, ftrace_control_list) {
- if (!(op->flags & FTRACE_OPS_FL_STUB) &&
- !ftrace_function_local_disabled(op) &&
- ftrace_ops_test(op, ip, regs))
- op->func(ip, parent_ip, op, regs);
- } while_for_each_ftrace_op(op);
- out:
- trace_recursion_clear(TRACE_CONTROL_BIT);
- preempt_enable_notrace();
-}
-
-static struct ftrace_ops control_ops = {
- .func = ftrace_ops_control_func,
- .flags = FTRACE_OPS_FL_RECURSION_SAFE | FTRACE_OPS_FL_INITIALIZED,
- INIT_OPS_HASH(control_ops)
-};
-
static inline void
__ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *ignored, struct pt_regs *regs)
@@ -5238,8 +5174,22 @@ __ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
* they must be freed after a synchronize_sched().
*/
preempt_disable_notrace();
+
do_for_each_ftrace_op(op, ftrace_ops_list) {
- if (ftrace_ops_test(op, ip, regs)) {
+ /*
+ * Check the following for each ops before calling their func:
+ * if RCU flag is set, then rcu_is_watching() must be true
+ * if PER_CPU is set, then ftrace_function_local_disabled()
+ * must be false
+ * Otherwise test if the ip matches the ops filter
+ *
+ * If any of the above fails then the op->func() is not executed.
+ */
+ if ((!(op->flags & FTRACE_OPS_FL_RCU) || rcu_is_watching()) &&
+ (!(op->flags & FTRACE_OPS_FL_PER_CPU) ||
+ !ftrace_function_local_disabled(op)) &&
+ ftrace_ops_test(op, ip, regs)) {
+
if (FTRACE_WARN_ON(!op->func)) {
pr_warn("op=%p %pS\n", op, op);
goto out;
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 919d9d07686f..d3980b87bf04 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -467,8 +467,6 @@ enum {
TRACE_INTERNAL_IRQ_BIT,
TRACE_INTERNAL_SIRQ_BIT,
- TRACE_CONTROL_BIT,
-
TRACE_BRANCH_BIT,
/*
* Abuse of the trace_recursion.
diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
index abfc903e741e..2649c85cd162 100644
--- a/kernel/trace/trace_event_perf.c
+++ b/kernel/trace/trace_event_perf.c
@@ -334,7 +334,7 @@ static int perf_ftrace_function_register(struct perf_event *event)
{
struct ftrace_ops *ops = &event->ftrace_ops;
- ops->flags |= FTRACE_OPS_FL_CONTROL;
+ ops->flags |= FTRACE_OPS_FL_PER_CPU | FTRACE_OPS_FL_RCU;
ops->func = perf_ftrace_function_call;
return register_ftrace_function(ops);
}
--
1.8.3.1
* Re: [PATCH] ftrace: Remove use of control list and ops
2015-11-30 22:36 [PATCH] ftrace: Remove use of control list and ops Steven Rostedt
@ 2015-12-01 13:42 ` Jiri Olsa
2015-12-01 14:55 ` Steven Rostedt
From: Jiri Olsa @ 2015-12-01 13:42 UTC (permalink / raw)
To: Steven Rostedt; +Cc: LKML, Ingo Molnar, Peter Zijlstra
On Mon, Nov 30, 2015 at 05:36:40PM -0500, Steven Rostedt wrote:
>
> [ Jiri, can you take a look at this. You can also apply it on top of my
> branch ftrace/core, and run any specific tests. I just need to nuke
> that control structure for further updates with ftrace. ]
>
>
> Currently perf has its own list function within the ftrace infrastructure
> that seems to be used only to allow for it to have per-cpu disabling as well
> as a check to make sure that it's not called while RCU is not watching. It
> uses something called the "control_ops" which is used to iterate over ops
> under it with the control_list_func().
>
> The problem is that this control_ops and control_list_func unnecessarily
> complicates the code. By replacing FTRACE_OPS_FL_CONTROL with two new flags
> (FTRACE_OPS_FL_RCU and FTRACE_OPS_FL_PER_CPU) we can remove all the code
> that is special with the control ops and add the needed checks within the
> generic ftrace_list_func().
hum,
do we need also change for the trampoline, something like below?
I needed attached patch to get the perf ftrace:function
event work properly..
thanks,
jirka
---
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index c4d881200d1f..2705ac2f3487 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -5179,6 +5179,26 @@ static void ftrace_ops_recurs_func(unsigned long ip, unsigned long parent_ip,
trace_clear_recursion(bit);
}
+static void ftrace_ops_per_cpu_func(unsigned long ip, unsigned long parent_ip,
+ struct ftrace_ops *op, struct pt_regs *regs)
+{
+ int bit;
+
+ bit = trace_test_and_set_recursion(TRACE_LIST_START, TRACE_LIST_MAX);
+ if (bit < 0)
+ return;
+
+ preempt_disable_notrace();
+
+ if ((!(op->flags & FTRACE_OPS_FL_RCU) || rcu_is_watching()) &&
+ (!(op->flags & FTRACE_OPS_FL_PER_CPU) || !ftrace_function_local_disabled(op))) {
+ op->func(ip, parent_ip, op, regs);
+ }
+
+ preempt_enable_notrace();
+ trace_clear_recursion(bit);
+}
+
/**
* ftrace_ops_get_func - get the function a trampoline should call
* @ops: the ops to get the function for
@@ -5192,6 +5212,11 @@ static void ftrace_ops_recurs_func(unsigned long ip, unsigned long parent_ip,
*/
ftrace_func_t ftrace_ops_get_func(struct ftrace_ops *ops)
{
+ if (ops->flags & (FTRACE_OPS_FL_PER_CPU|FTRACE_OPS_FL_RCU)) {
+ printk("used per cpu trampoline function\n");
+ return ftrace_ops_per_cpu_func;
+ }
+
/*
* If the func handles its own recursion, call it directly.
* Otherwise call the recursion protected function that
* Re: [PATCH] ftrace: Remove use of control list and ops
2015-12-01 13:42 ` Jiri Olsa
@ 2015-12-01 14:55 ` Steven Rostedt
2015-12-01 15:57 ` Jiri Olsa
From: Steven Rostedt @ 2015-12-01 14:55 UTC (permalink / raw)
To: Jiri Olsa; +Cc: LKML, Ingo Molnar, Peter Zijlstra
On Tue, 1 Dec 2015 14:42:36 +0100
Jiri Olsa <jolsa@redhat.com> wrote:
> On Mon, Nov 30, 2015 at 05:36:40PM -0500, Steven Rostedt wrote:
> >
> > [ Jiri, can you take a look at this. You can also apply it on top of my
> > branch ftrace/core, and run any specific tests. I just need to nuke
> > that control structure for further updates with ftrace. ]
> >
> >
> > Currently perf has its own list function within the ftrace infrastructure
> > that seems to be used only to allow for it to have per-cpu disabling as well
> > as a check to make sure that it's not called while RCU is not watching. It
> > uses something called the "control_ops" which is used to iterate over ops
> > under it with the control_list_func().
> >
> > The problem is that this control_ops and control_list_func unnecessarily
> > complicates the code. By replacing FTRACE_OPS_FL_CONTROL with two new flags
> > (FTRACE_OPS_FL_RCU and FTRACE_OPS_FL_PER_CPU) we can remove all the code
> > that is special with the control ops and add the needed checks within the
> > generic ftrace_list_func().
>
> hum,
> do we need also change for the trampoline, something like below?
>
> I needed attached patch to get the perf ftrace:function
> event work properly..
Hmm, I thought that I forced the list function when RCU or PER_CPU
was set. Oh wait. I have CONFIG_PREEMPT set, which will change the
logic slightly. I'm guessing you have PREEMPT_VOLUNTARY set. I'll try
that out.
>
> thanks,
> jirka
>
>
> ---
> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> index c4d881200d1f..2705ac2f3487 100644
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -5179,6 +5179,26 @@ static void ftrace_ops_recurs_func(unsigned long ip, unsigned long parent_ip,
> trace_clear_recursion(bit);
> }
>
> +static void ftrace_ops_per_cpu_func(unsigned long ip, unsigned long parent_ip,
> + struct ftrace_ops *op, struct pt_regs *regs)
> +{
> + int bit;
> +
> + bit = trace_test_and_set_recursion(TRACE_LIST_START, TRACE_LIST_MAX);
> + if (bit < 0)
> + return;
> +
> + preempt_disable_notrace();
> +
> + if ((!(op->flags & FTRACE_OPS_FL_RCU) || rcu_is_watching()) &&
> + (!(op->flags & FTRACE_OPS_FL_PER_CPU) || !ftrace_function_local_disabled(op))) {
> + op->func(ip, parent_ip, op, regs);
> + }
> +
> + preempt_enable_notrace();
> + trace_clear_recursion(bit);
> +}
> +
> /**
> * ftrace_ops_get_func - get the function a trampoline should call
> * @ops: the ops to get the function for
> @@ -5192,6 +5212,11 @@ static void ftrace_ops_recurs_func(unsigned long ip, unsigned long parent_ip,
> */
> ftrace_func_t ftrace_ops_get_func(struct ftrace_ops *ops)
> {
> + if (ops->flags & (FTRACE_OPS_FL_PER_CPU|FTRACE_OPS_FL_RCU)) {
> + printk("used per cpu trampoline function\n");
> + return ftrace_ops_per_cpu_func;
I have a slight different idea on how to handle this.
-- Steve
> + }
> +
> /*
> * If the func handles its own recursion, call it directly.
> * Otherwise call the recursion protected function that
* Re: [PATCH] ftrace: Remove use of control list and ops
2015-12-01 14:55 ` Steven Rostedt
@ 2015-12-01 15:57 ` Jiri Olsa
2015-12-01 16:56 ` Steven Rostedt
From: Jiri Olsa @ 2015-12-01 15:57 UTC (permalink / raw)
To: Steven Rostedt; +Cc: LKML, Ingo Molnar, Peter Zijlstra
On Tue, Dec 01, 2015 at 09:55:54AM -0500, Steven Rostedt wrote:
> On Tue, 1 Dec 2015 14:42:36 +0100
> Jiri Olsa <jolsa@redhat.com> wrote:
>
> > On Mon, Nov 30, 2015 at 05:36:40PM -0500, Steven Rostedt wrote:
> > >
> > > [ Jiri, can you take a look at this. You can also apply it on top of my
> > > branch ftrace/core, and run any specific tests. I just need to nuke
> > > that control structure for further updates with ftrace. ]
> > >
> > >
> > > Currently perf has its own list function within the ftrace infrastructure
> > > that seems to be used only to allow for it to have per-cpu disabling as well
> > > as a check to make sure that it's not called while RCU is not watching. It
> > > uses something called the "control_ops" which is used to iterate over ops
> > > under it with the control_list_func().
> > >
> > > The problem is that this control_ops and control_list_func unnecessarily
> > > complicates the code. By replacing FTRACE_OPS_FL_CONTROL with two new flags
> > > (FTRACE_OPS_FL_RCU and FTRACE_OPS_FL_PER_CPU) we can remove all the code
> > > that is special with the control ops and add the needed checks within the
> > > generic ftrace_list_func().
> >
> > hum,
> > do we need also change for the trampoline, something like below?
> >
> > I needed attached patch to get the perf ftrace:function
> > event work properly..
>
> Hmm, I thought that I forced the list function when RCU or PER_CPU
> was set. Oh wait. I have CONFIG_PREEMPT set, which will change the
> logic slightly. I'm guessing you have PREEMPT_VOLUNTARY set. I'll try
> that out.
yep, but the trampoline has separate code path
to set the ops func
jirka
* Re: [PATCH] ftrace: Remove use of control list and ops
2015-12-01 15:57 ` Jiri Olsa
@ 2015-12-01 16:56 ` Steven Rostedt
2015-12-02 7:59 ` Jiri Olsa
2015-12-02 8:58 ` Jiri Olsa
From: Steven Rostedt @ 2015-12-01 16:56 UTC (permalink / raw)
To: Jiri Olsa; +Cc: LKML, Ingo Molnar, Peter Zijlstra
On Tue, 1 Dec 2015 16:57:44 +0100
Jiri Olsa <jolsa@redhat.com> wrote:
> > Hmm, I thought that I forced the list function when RCU or PER_CPU
> > was set. Oh wait. I have CONFIG_PREEMPT set, which will change the
> > logic slightly. I'm guessing you have PREEMPT_VOLUNTARY set. I'll try
> > that out.
>
> yep, but the trampoline has separate code path
> to set the ops func
Right, but that only gets called for DYNAMIC ops (which perf is) when
CONFIG_PREEMPT is not set. What about this patch instead? (on top of my
other one)
It's a lot like yours, but instead of creating a new helper function, I
just reuse the recurs_func instead.
-- Steve
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 8b22e681baf8..05ed87d06bb0 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -5230,20 +5230,29 @@ static void ftrace_ops_no_ops(unsigned long ip, unsigned long parent_ip)
/*
* If there's only one function registered but it does not support
- * recursion, this function will be called by the mcount trampoline.
- * This function will handle recursion protection.
+ * recursion, needs RCU protection and/or requires per cpu handling, then
+ * this function will be called by the mcount trampoline.
*/
-static void ftrace_ops_recurs_func(unsigned long ip, unsigned long parent_ip,
+static void ftrace_ops_assist_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct pt_regs *regs)
{
int bit;
+ if ((op->flags & FTRACE_OPS_FL_RCU) && !rcu_is_watching())
+ return;
+
bit = trace_test_and_set_recursion(TRACE_LIST_START, TRACE_LIST_MAX);
if (bit < 0)
return;
- op->func(ip, parent_ip, op, regs);
+ preempt_disable_notrace();
+ if (!(op->flags & FTRACE_OPS_FL_PER_CPU) ||
+ ftrace_function_local_disabled(op)) {
+ op->func(ip, parent_ip, op, regs);
+ }
+
+ preempt_enable_notrace();
trace_clear_recursion(bit);
}
@@ -5261,12 +5270,12 @@ static void ftrace_ops_recurs_func(unsigned long ip, unsigned long parent_ip,
ftrace_func_t ftrace_ops_get_func(struct ftrace_ops *ops)
{
/*
- * If the func handles its own recursion, call it directly.
- * Otherwise call the recursion protected function that
- * will call the ftrace ops function.
+ * If the function does not handle recursion, needs to be RCU safe,
+ * or does per cpu logic, then we need to call the assist handler.
*/
- if (!(ops->flags & FTRACE_OPS_FL_RECURSION_SAFE))
- return ftrace_ops_recurs_func;
+ if (!(ops->flags & FTRACE_OPS_FL_RECURSION_SAFE) ||
+ ops->flags & (FTRACE_OPS_FL_RCU | FTRACE_OPS_FL_PER_CPU))
+ return ftrace_ops_assist_func;
return ops->func;
}
* Re: [PATCH] ftrace: Remove use of control list and ops
2015-12-01 16:56 ` Steven Rostedt
@ 2015-12-02 7:59 ` Jiri Olsa
2015-12-02 14:15 ` Steven Rostedt
2015-12-02 8:58 ` Jiri Olsa
From: Jiri Olsa @ 2015-12-02 7:59 UTC (permalink / raw)
To: Steven Rostedt; +Cc: LKML, Ingo Molnar, Peter Zijlstra
On Tue, Dec 01, 2015 at 11:56:55AM -0500, Steven Rostedt wrote:
> On Tue, 1 Dec 2015 16:57:44 +0100
> Jiri Olsa <jolsa@redhat.com> wrote:
>
>
> > > Hmm, I thought that I forced the list function when RCU or PER_CPU
> > > was set. Oh wait. I have CONFIG_PREEMPT set, which will change the
> > > logic slightly. I'm guessing you have PREEMPT_VOLUNTARY set. I'll try
> > > that out.
> >
> > yep, but the trampoline has separate code path
> > to set the ops func
>
> Right, but that only gets called for DYNAMIC ops (which perf is) when
> CONFIG_PREEMPT is not set. What about this patch instead? (on top of my
> other one)
>
> It's a lot like yours, but instead of creating a new helper function, I
> just reuse the recurs_func instead.
>
> -- Steve
>
> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> index 8b22e681baf8..05ed87d06bb0 100644
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -5230,20 +5230,29 @@ static void ftrace_ops_no_ops(unsigned long ip, unsigned long parent_ip)
>
> /*
> * If there's only one function registered but it does not support
> - * recursion, this function will be called by the mcount trampoline.
> - * This function will handle recursion protection.
> + * recursion, needs RCU protection and/or requires per cpu handling, then
> + * this function will be called by the mcount trampoline.
> */
> -static void ftrace_ops_recurs_func(unsigned long ip, unsigned long parent_ip,
> +static void ftrace_ops_assist_func(unsigned long ip, unsigned long parent_ip,
> struct ftrace_ops *op, struct pt_regs *regs)
> {
> int bit;
>
> + if ((op->flags & FTRACE_OPS_FL_RCU) && !rcu_is_watching())
> + return;
> +
> bit = trace_test_and_set_recursion(TRACE_LIST_START, TRACE_LIST_MAX);
> if (bit < 0)
> return;
>
> - op->func(ip, parent_ip, op, regs);
> + preempt_disable_notrace();
I was wondering about not disabling preemption in the original
ftrace_ops_recurs_func function.. that's why I added a new one ;-)
I'll test the patch
thanks,
jirka
>
> + if (!(op->flags & FTRACE_OPS_FL_PER_CPU) ||
> + ftrace_function_local_disabled(op)) {
> + op->func(ip, parent_ip, op, regs);
> + }
> +
> + preempt_enable_notrace();
> trace_clear_recursion(bit);
* Re: [PATCH] ftrace: Remove use of control list and ops
2015-12-02 7:59 ` Jiri Olsa
@ 2015-12-02 14:15 ` Steven Rostedt
From: Steven Rostedt @ 2015-12-02 14:15 UTC (permalink / raw)
To: Jiri Olsa; +Cc: LKML, Ingo Molnar, Peter Zijlstra
On Wed, 2 Dec 2015 08:59:11 +0100
Jiri Olsa <jolsa@redhat.com> wrote:
> > - op->func(ip, parent_ip, op, regs);
> > + preempt_disable_notrace();
>
> I was wondering about not disabling preemption in the original
> ftrace_ops_recurs_func function.. that's why I added a new one ;-)
Yeah, I paused on that too. But then I realized that pretty much all
callers of function tracing disable preemption too, so it shouldn't
really cause any more overhead.
-- Steve
* Re: [PATCH] ftrace: Remove use of control list and ops
2015-12-01 16:56 ` Steven Rostedt
2015-12-02 7:59 ` Jiri Olsa
@ 2015-12-02 8:58 ` Jiri Olsa
2015-12-02 14:27 ` Steven Rostedt
From: Jiri Olsa @ 2015-12-02 8:58 UTC (permalink / raw)
To: Steven Rostedt; +Cc: LKML, Ingo Molnar, Peter Zijlstra
On Tue, Dec 01, 2015 at 11:56:55AM -0500, Steven Rostedt wrote:
SNIP
> -static void ftrace_ops_recurs_func(unsigned long ip, unsigned long parent_ip,
> +static void ftrace_ops_assist_func(unsigned long ip, unsigned long parent_ip,
> struct ftrace_ops *op, struct pt_regs *regs)
> {
> int bit;
>
> + if ((op->flags & FTRACE_OPS_FL_RCU) && !rcu_is_watching())
> + return;
> +
> bit = trace_test_and_set_recursion(TRACE_LIST_START, TRACE_LIST_MAX);
> if (bit < 0)
> return;
>
> - op->func(ip, parent_ip, op, regs);
> + preempt_disable_notrace();
>
> + if (!(op->flags & FTRACE_OPS_FL_PER_CPU) ||
> + ftrace_function_local_disabled(op)) {
should be !ftrace_function_local_disabled(op) in here,
I passed my test with attached patch
thanks,
jirka
> + op->func(ip, parent_ip, op, regs);
> + }
> +
> + preempt_enable_notrace();
> trace_clear_recursion(bit);
> }
SNIP
---
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 710430fcfd9c..ad0e0e7eaea2 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -115,7 +115,7 @@ static struct ftrace_ops *ftrace_ops_list __read_mostly = &ftrace_list_end;
ftrace_func_t ftrace_trace_function __read_mostly = ftrace_stub;
static struct ftrace_ops global_ops;
-static void ftrace_ops_recurs_func(unsigned long ip, unsigned long parent_ip,
+static void ftrace_ops_assist_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct pt_regs *regs);
#if ARCH_SUPPORTS_FTRACE_OPS
@@ -5180,7 +5180,7 @@ static void ftrace_ops_assist_func(unsigned long ip, unsigned long parent_ip,
preempt_disable_notrace();
if (!(op->flags & FTRACE_OPS_FL_PER_CPU) ||
- ftrace_function_local_disabled(op)) {
+ !ftrace_function_local_disabled(op)) {
op->func(ip, parent_ip, op, regs);
}
* Re: [PATCH] ftrace: Remove use of control list and ops
2015-12-02 8:58 ` Jiri Olsa
@ 2015-12-02 14:27 ` Steven Rostedt
2015-12-02 14:50 ` Jiri Olsa
From: Steven Rostedt @ 2015-12-02 14:27 UTC (permalink / raw)
To: Jiri Olsa; +Cc: LKML, Ingo Molnar, Peter Zijlstra
On Wed, 2 Dec 2015 09:58:26 +0100
Jiri Olsa <jolsa@redhat.com> wrote:
> On Tue, Dec 01, 2015 at 11:56:55AM -0500, Steven Rostedt wrote:
>
> SNIP
>
> > -static void ftrace_ops_recurs_func(unsigned long ip, unsigned long parent_ip,
> > +static void ftrace_ops_assist_func(unsigned long ip, unsigned long parent_ip,
> > struct ftrace_ops *op, struct pt_regs *regs)
> > {
> > int bit;
> >
> > + if ((op->flags & FTRACE_OPS_FL_RCU) && !rcu_is_watching())
> > + return;
> > +
> > bit = trace_test_and_set_recursion(TRACE_LIST_START, TRACE_LIST_MAX);
> > if (bit < 0)
> > return;
> >
> > - op->func(ip, parent_ip, op, regs);
> > + preempt_disable_notrace();
> >
> > + if (!(op->flags & FTRACE_OPS_FL_PER_CPU) ||
> > + ftrace_function_local_disabled(op)) {
>
> should be !ftrace_function_local_disabled(op) in here,
> I passed my test with attached patch
>
Can you retest with this patch:
-- Steve
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index bc7f4eb6b4b0..e290a30f2d0b 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -115,9 +115,6 @@ static struct ftrace_ops *ftrace_ops_list __read_mostly = &ftrace_list_end;
ftrace_func_t ftrace_trace_function __read_mostly = ftrace_stub;
static struct ftrace_ops global_ops;
-static void ftrace_ops_recurs_func(unsigned long ip, unsigned long parent_ip,
- struct ftrace_ops *op, struct pt_regs *regs);
-
#if ARCH_SUPPORTS_FTRACE_OPS
static void ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct pt_regs *regs);
@@ -5231,20 +5228,29 @@ static void ftrace_ops_no_ops(unsigned long ip, unsigned long parent_ip)
/*
* If there's only one function registered but it does not support
- * recursion, this function will be called by the mcount trampoline.
- * This function will handle recursion protection.
+ * recursion, needs RCU protection and/or requires per cpu handling, then
+ * this function will be called by the mcount trampoline.
*/
-static void ftrace_ops_recurs_func(unsigned long ip, unsigned long parent_ip,
+static void ftrace_ops_assist_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct pt_regs *regs)
{
int bit;
+ if ((op->flags & FTRACE_OPS_FL_RCU) && !rcu_is_watching())
+ return;
+
bit = trace_test_and_set_recursion(TRACE_LIST_START, TRACE_LIST_MAX);
if (bit < 0)
return;
- op->func(ip, parent_ip, op, regs);
+ preempt_disable_notrace();
+ if (!(op->flags & FTRACE_OPS_FL_PER_CPU) ||
+ !ftrace_function_local_disabled(op)) {
+ op->func(ip, parent_ip, op, regs);
+ }
+
+ preempt_enable_notrace();
trace_clear_recursion(bit);
}
@@ -5262,12 +5268,12 @@ static void ftrace_ops_recurs_func(unsigned long ip, unsigned long parent_ip,
ftrace_func_t ftrace_ops_get_func(struct ftrace_ops *ops)
{
/*
- * If the func handles its own recursion, call it directly.
- * Otherwise call the recursion protected function that
- * will call the ftrace ops function.
+ * If the function does not handle recursion, needs to be RCU safe,
+ * or does per cpu logic, then we need to call the assist handler.
*/
- if (!(ops->flags & FTRACE_OPS_FL_RECURSION_SAFE))
- return ftrace_ops_recurs_func;
+ if (!(ops->flags & FTRACE_OPS_FL_RECURSION_SAFE) ||
+ ops->flags & (FTRACE_OPS_FL_RCU | FTRACE_OPS_FL_PER_CPU))
+ return ftrace_ops_assist_func;
return ops->func;
}
* Re: [PATCH] ftrace: Remove use of control list and ops
2015-12-02 14:27 ` Steven Rostedt
@ 2015-12-02 14:50 ` Jiri Olsa
2015-12-02 15:03 ` Steven Rostedt
0 siblings, 1 reply; 14+ messages in thread
From: Jiri Olsa @ 2015-12-02 14:50 UTC (permalink / raw)
To: Steven Rostedt; +Cc: LKML, Ingo Molnar, Peter Zijlstra
On Wed, Dec 02, 2015 at 09:27:00AM -0500, Steven Rostedt wrote:
> On Wed, 2 Dec 2015 09:58:26 +0100
> Jiri Olsa <jolsa@redhat.com> wrote:
>
> > On Tue, Dec 01, 2015 at 11:56:55AM -0500, Steven Rostedt wrote:
> >
> > SNIP
> >
> > > -static void ftrace_ops_recurs_func(unsigned long ip, unsigned long parent_ip,
> > > +static void ftrace_ops_assist_func(unsigned long ip, unsigned long parent_ip,
> > > struct ftrace_ops *op, struct pt_regs *regs)
> > > {
> > > int bit;
> > >
> > > + if ((op->flags & FTRACE_OPS_FL_RCU) && !rcu_is_watching())
> > > + return;
> > > +
> > > bit = trace_test_and_set_recursion(TRACE_LIST_START, TRACE_LIST_MAX);
> > > if (bit < 0)
> > > return;
> > >
> > > - op->func(ip, parent_ip, op, regs);
> > > + preempt_disable_notrace();
> > >
> > > + if (!(op->flags & FTRACE_OPS_FL_PER_CPU) ||
> > > + ftrace_function_local_disabled(op)) {
> >
> > should be !ftrace_function_local_disabled(op) in here,
> > I passed my test with attached patch
> >
>
> Can you retest with this patch:
sure, but other than the declaration removal it's the same change
jirka
* Re: [PATCH] ftrace: Remove use of control list and ops
2015-12-02 14:50 ` Jiri Olsa
@ 2015-12-02 15:03 ` Steven Rostedt
2015-12-02 15:23 ` Jiri Olsa
0 siblings, 1 reply; 14+ messages in thread
From: Steven Rostedt @ 2015-12-02 15:03 UTC (permalink / raw)
To: Jiri Olsa; +Cc: LKML, Ingo Molnar, Peter Zijlstra
On Wed, 2 Dec 2015 15:50:32 +0100
Jiri Olsa <jolsa@redhat.com> wrote:
> On Wed, Dec 02, 2015 at 09:27:00AM -0500, Steven Rostedt wrote:
> > On Wed, 2 Dec 2015 09:58:26 +0100
> > Jiri Olsa <jolsa@redhat.com> wrote:
> >
> > > On Tue, Dec 01, 2015 at 11:56:55AM -0500, Steven Rostedt wrote:
> > >
> > > SNIP
> > >
> > > > -static void ftrace_ops_recurs_func(unsigned long ip, unsigned long parent_ip,
> > > > +static void ftrace_ops_assist_func(unsigned long ip, unsigned long parent_ip,
> > > > struct ftrace_ops *op, struct pt_regs *regs)
> > > > {
> > > > int bit;
> > > >
> > > > + if ((op->flags & FTRACE_OPS_FL_RCU) && !rcu_is_watching())
> > > > + return;
> > > > +
> > > > bit = trace_test_and_set_recursion(TRACE_LIST_START, TRACE_LIST_MAX);
> > > > if (bit < 0)
> > > > return;
> > > >
> > > > - op->func(ip, parent_ip, op, regs);
> > > > + preempt_disable_notrace();
> > > >
> > > > + if (!(op->flags & FTRACE_OPS_FL_PER_CPU) ||
> > > > + ftrace_function_local_disabled(op)) {
> > >
> > > should be !ftrace_function_local_disabled(op) in here,
> > > I passed my test with attached patch
> > >
> >
> > Can you retest with this patch:
>
> sure, but other than the declaration removal it's the same change
>
Yep, but I'm paranoid :-) Any change I like to get retested. But if
you're fine, can I get a 'Tested-by' from you?
-- Steve
* Re: [PATCH] ftrace: Remove use of control list and ops
2015-12-02 15:03 ` Steven Rostedt
@ 2015-12-02 15:23 ` Jiri Olsa
2015-12-18 10:18 ` Jiri Olsa
0 siblings, 1 reply; 14+ messages in thread
From: Jiri Olsa @ 2015-12-02 15:23 UTC (permalink / raw)
To: Steven Rostedt; +Cc: LKML, Ingo Molnar, Peter Zijlstra
On Wed, Dec 02, 2015 at 10:03:46AM -0500, Steven Rostedt wrote:
> On Wed, 2 Dec 2015 15:50:32 +0100
> Jiri Olsa <jolsa@redhat.com> wrote:
>
> > On Wed, Dec 02, 2015 at 09:27:00AM -0500, Steven Rostedt wrote:
> > > On Wed, 2 Dec 2015 09:58:26 +0100
> > > Jiri Olsa <jolsa@redhat.com> wrote:
> > >
> > > > On Tue, Dec 01, 2015 at 11:56:55AM -0500, Steven Rostedt wrote:
> > > >
> > > > SNIP
> > > >
> > > > > -static void ftrace_ops_recurs_func(unsigned long ip, unsigned long parent_ip,
> > > > > +static void ftrace_ops_assist_func(unsigned long ip, unsigned long parent_ip,
> > > > > struct ftrace_ops *op, struct pt_regs *regs)
> > > > > {
> > > > > int bit;
> > > > >
> > > > > + if ((op->flags & FTRACE_OPS_FL_RCU) && !rcu_is_watching())
> > > > > + return;
> > > > > +
> > > > > bit = trace_test_and_set_recursion(TRACE_LIST_START, TRACE_LIST_MAX);
> > > > > if (bit < 0)
> > > > > return;
> > > > >
> > > > > - op->func(ip, parent_ip, op, regs);
> > > > > + preempt_disable_notrace();
> > > > >
> > > > > + if (!(op->flags & FTRACE_OPS_FL_PER_CPU) ||
> > > > > + ftrace_function_local_disabled(op)) {
> > > >
> > > > should be !ftrace_function_local_disabled(op) in here,
> > > > I passed my test with attached patch
> > > >
> > >
> > > Can you retest with this patch:
> >
> > sure, but other than the declaration removal it's the same change
> >
>
> Yep, but I'm paranoid :-) Any change I like to get retested. But if
> you're fine, can I get a 'Tested-by' from you?
yep, I re-tested ;-)
Tested-by: Jiri Olsa <jolsa@kernel.org>
thanks,
jirka
* Re: [PATCH] ftrace: Remove use of control list and ops
2015-12-02 15:23 ` Jiri Olsa
@ 2015-12-18 10:18 ` Jiri Olsa
2015-12-18 14:35 ` Steven Rostedt
0 siblings, 1 reply; 14+ messages in thread
From: Jiri Olsa @ 2015-12-18 10:18 UTC (permalink / raw)
To: Steven Rostedt; +Cc: LKML, Ingo Molnar, Peter Zijlstra
On Wed, Dec 02, 2015 at 04:23:50PM +0100, Jiri Olsa wrote:
SNIP
> > > > >
> > > > > should be !ftrace_function_local_disabled(op) in here,
> > > > > I passed my test with attached patch
> > > > >
> > > >
> > > > Can you retest with this patch:
> > >
> > > sure, but other than the declaration removal it's the same change
> > >
> >
> > Yep, but I'm paranoid :-) Any change I like to get retested. But if
> > you're fine, can I get a 'Tested-by' from you?
>
> yep, I re-tested ;-)
>
> Tested-by: Jiri Olsa <jolsa@kernel.org>
>
hi,
what's the status of this one? I cannot find it out there ;-)
thanks,
jirka
* Re: [PATCH] ftrace: Remove use of control list and ops
2015-12-18 10:18 ` Jiri Olsa
@ 2015-12-18 14:35 ` Steven Rostedt
0 siblings, 0 replies; 14+ messages in thread
From: Steven Rostedt @ 2015-12-18 14:35 UTC (permalink / raw)
To: Jiri Olsa; +Cc: LKML, Ingo Molnar, Peter Zijlstra
On Fri, 18 Dec 2015 11:18:58 +0100
Jiri Olsa <jolsa@redhat.com> wrote:
> hi,
> what's the status of this one? I cannot find it out there ;-)
Yeah, I haven't gotten it through my tests yet. I was testing some
upstream bug fixes to push for this -rc, and my test box HD crashed.
It's definitely in the queue, but the queue is currently stuck :-/
-- Steve
Thread overview: 14+ messages
2015-11-30 22:36 [PATCH] ftrace: Remove use of control list and ops Steven Rostedt
2015-12-01 13:42 ` Jiri Olsa
2015-12-01 14:55 ` Steven Rostedt
2015-12-01 15:57 ` Jiri Olsa
2015-12-01 16:56 ` Steven Rostedt
2015-12-02 7:59 ` Jiri Olsa
2015-12-02 14:15 ` Steven Rostedt
2015-12-02 8:58 ` Jiri Olsa
2015-12-02 14:27 ` Steven Rostedt
2015-12-02 14:50 ` Jiri Olsa
2015-12-02 15:03 ` Steven Rostedt
2015-12-02 15:23 ` Jiri Olsa
2015-12-18 10:18 ` Jiri Olsa
2015-12-18 14:35 ` Steven Rostedt