* [for-next][PATCH 0/4] tracing: updates for 4.1
@ 2015-03-25 13:00 Steven Rostedt
  2015-03-25 13:00 ` [for-next][PATCH 1/4] ring-buffer: Replace this_cpu_*() with __this_cpu_*() Steven Rostedt
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Steven Rostedt @ 2015-03-25 13:00 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton

  git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace.git
for-next

Head SHA1: d9a16d3ab8770357015c85a07387f1d2676a4773


He Kuang (1):
      tracing: remove ftrace:function TRACE_EVENT_FL_USE_CALL_FILTER flag

Scott Wood (1):
      tracing: %pF is only for function pointers

Stephen Rothwell (1):
      trace: Don't use __weak in header files

Steven Rostedt (1):
      ring-buffer: Replace this_cpu_*() with __this_cpu_*()

----
 include/trace/events/btrfs.h       |  4 ++--
 include/trace/events/ext3.h        |  2 +-
 include/trace/events/ext4.h        |  6 +++---
 include/trace/events/module.h      |  4 ++--
 include/trace/events/random.h      | 10 +++++-----
 kernel/trace/ring_buffer.c         | 11 +++++------
 kernel/trace/trace_entries.h       |  6 +++---
 kernel/trace/trace_export.c        |  2 +-
 kernel/trace/trace_kprobe.c        |  5 +++--
 kernel/trace/trace_probe.c         | 19 +++++++------------
 kernel/trace/trace_probe.h         | 10 ++--------
 kernel/trace/trace_uprobe.c        |  5 +++--
 tools/lib/traceevent/event-parse.c |  2 +-
 13 files changed, 38 insertions(+), 48 deletions(-)


* [for-next][PATCH 1/4] ring-buffer: Replace this_cpu_*() with __this_cpu_*()
  2015-03-25 13:00 [for-next][PATCH 0/4] tracing: updates for 4.1 Steven Rostedt
@ 2015-03-25 13:00 ` Steven Rostedt
  2015-03-27 19:41   ` Christoph Lameter
  2015-03-25 13:00 ` [for-next][PATCH 2/4] tracing: %pF is only for function pointers Steven Rostedt
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 11+ messages in thread
From: Steven Rostedt @ 2015-03-25 13:00 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Andrew Morton, stable, Christoph Lameter, Uwe Kleine-Koenig

[-- Attachment #1: 0001-ring-buffer-Replace-this_cpu_-with-__this_cpu_.patch --]
[-- Type: text/plain, Size: 3003 bytes --]

From: Steven Rostedt <rostedt@goodmis.org>

It has come to my attention that this_cpu_read/write are horrible on
architectures other than x86. Worse yet, they actually disable
preemption or interrupts! This caused some unexpected tracing results
on ARM.

   101.356868: preempt_count_add <-ring_buffer_lock_reserve
   101.356870: preempt_count_sub <-ring_buffer_lock_reserve

The ring_buffer_lock_reserve() has recursion protection that requires
accessing a per-cpu variable. But since preempt_disable() is traced, it
too got traced while accessing the variable that is supposed to prevent
recursion like this.

The generic version of this_cpu_read() and write() are:

 #define this_cpu_generic_read(pcp)					\
 ({	typeof(pcp) ret__;						\
	preempt_disable();						\
	ret__ = *this_cpu_ptr(&(pcp));					\
	preempt_enable();						\
	ret__;								\
 })

 #define this_cpu_generic_to_op(pcp, val, op)				\
 do {									\
	unsigned long flags;						\
	raw_local_irq_save(flags);					\
	*__this_cpu_ptr(&(pcp)) op val;					\
	raw_local_irq_restore(flags);					\
 } while (0)

Which is unacceptable for locations that are known to be within
preempt-disabled or interrupt-disabled sections.

Paul McKenney stated that __this_cpu_() versions produce much better code on
other architectures than this_cpu_() does, if we know that the call is done in
a preempt disabled location.
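
For reference, the pattern being converted to looks roughly like the
following (an illustrative sketch with a made-up per-cpu variable, not
code from the tree):

 /* needs <linux/percpu.h> */
 static DEFINE_PER_CPU(unsigned int, demo_context);

 /*
  * The caller already runs with preemption (or interrupts) disabled,
  * e.g. inside ring_buffer_lock_reserve(), so the extra protection that
  * the generic this_cpu_*() versions add would be pure overhead here.
  */
 static void demo_update(void)
 {
	unsigned int val = __this_cpu_read(demo_context);

	__this_cpu_write(demo_context, val | 1);
 }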

I also changed the recursive_unlock() to use two local variables instead
of accessing the per_cpu variable twice.

Link: http://lkml.kernel.org/r/20150317114411.GE3589@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/20150317104038.312e73d1@gandalf.local.home

Cc: stable@vger.kernel.org
Acked-by: Christoph Lameter <cl@linux.com>
Reported-by: Uwe Kleine-Koenig <u.kleine-koenig@pengutronix.de>
Tested-by: Uwe Kleine-Koenig <u.kleine-koenig@pengutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/ring_buffer.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 5040d44fe5a3..922048a0f7ea 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -2679,7 +2679,7 @@ static DEFINE_PER_CPU(unsigned int, current_context);
 
 static __always_inline int trace_recursive_lock(void)
 {
-	unsigned int val = this_cpu_read(current_context);
+	unsigned int val = __this_cpu_read(current_context);
 	int bit;
 
 	if (in_interrupt()) {
@@ -2696,18 +2696,17 @@ static __always_inline int trace_recursive_lock(void)
 		return 1;
 
 	val |= (1 << bit);
-	this_cpu_write(current_context, val);
+	__this_cpu_write(current_context, val);
 
 	return 0;
 }
 
 static __always_inline void trace_recursive_unlock(void)
 {
-	unsigned int val = this_cpu_read(current_context);
+	unsigned int val = __this_cpu_read(current_context);
 
-	val--;
-	val &= this_cpu_read(current_context);
-	this_cpu_write(current_context, val);
+	val &= val & (val - 1);
+	__this_cpu_write(current_context, val);
 }
 
 #else
-- 
2.1.4




* [for-next][PATCH 2/4] tracing: %pF is only for function pointers
  2015-03-25 13:00 [for-next][PATCH 0/4] tracing: updates for 4.1 Steven Rostedt
  2015-03-25 13:00 ` [for-next][PATCH 1/4] ring-buffer: Replace this_cpu_*() with __this_cpu_*() Steven Rostedt
@ 2015-03-25 13:00 ` Steven Rostedt
  2015-03-25 13:00 ` [for-next][PATCH 3/4] tracing: remove ftrace:function TRACE_EVENT_FL_USE_CALL_FILTER flag Steven Rostedt
  2015-03-25 13:00 ` [for-next][PATCH 4/4] trace: Don't use __weak in header files Steven Rostedt
  3 siblings, 0 replies; 11+ messages in thread
From: Steven Rostedt @ 2015-03-25 13:00 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, Scott Wood

[-- Attachment #1: 0002-tracing-pF-is-only-for-function-pointers.patch --]
[-- Type: text/plain, Size: 6523 bytes --]

From: Scott Wood <scottwood@freescale.com>

Use %pS for actual addresses; otherwise you'll get bad output
on arches like ppc64, where %pF expects a function descriptor.
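
As a quick illustration (hypothetical helper, not part of this patch):
%pS takes a plain text address such as _RET_IP_ and resolves it directly,
while %pF expects a function pointer and, on descriptor-based ABIs such
as ppc64, dereferences the descriptor first.

 /* needs <linux/printk.h> */
 static void show_caller(unsigned long ip)
 {
	/*
	 * ip is a plain address, so %pS is correct; %pF would treat it
	 * as a function descriptor on ppc64 and resolve the wrong symbol.
	 */
	pr_info("caller = %pS\n", (void *)ip);
 }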

Link: http://lkml.kernel.org/r/1426130037-17956-22-git-send-email-scottwood@freescale.com

Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 include/trace/events/btrfs.h       |  4 ++--
 include/trace/events/ext3.h        |  2 +-
 include/trace/events/ext4.h        |  6 +++---
 include/trace/events/module.h      |  4 ++--
 include/trace/events/random.h      | 10 +++++-----
 kernel/trace/trace_entries.h       |  6 +++---
 tools/lib/traceevent/event-parse.c |  2 +-
 7 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/include/trace/events/btrfs.h b/include/trace/events/btrfs.h
index 1faecea101f3..572e6503394a 100644
--- a/include/trace/events/btrfs.h
+++ b/include/trace/events/btrfs.h
@@ -962,7 +962,7 @@ TRACE_EVENT(alloc_extent_state,
 		__entry->ip	= IP
 	),
 
-	TP_printk("state=%p; mask = %s; caller = %pF", __entry->state,
+	TP_printk("state=%p; mask = %s; caller = %pS", __entry->state,
 		  show_gfp_flags(__entry->mask), (void *)__entry->ip)
 );
 
@@ -982,7 +982,7 @@ TRACE_EVENT(free_extent_state,
 		__entry->ip = IP
 	),
 
-	TP_printk(" state=%p; caller = %pF", __entry->state,
+	TP_printk(" state=%p; caller = %pS", __entry->state,
 		  (void *)__entry->ip)
 );
 
diff --git a/include/trace/events/ext3.h b/include/trace/events/ext3.h
index 6797b9de90ed..7f20707849bb 100644
--- a/include/trace/events/ext3.h
+++ b/include/trace/events/ext3.h
@@ -144,7 +144,7 @@ TRACE_EVENT(ext3_mark_inode_dirty,
 		__entry->ip	= IP;
 	),
 
-	TP_printk("dev %d,%d ino %lu caller %pF",
+	TP_printk("dev %d,%d ino %lu caller %pS",
 		  MAJOR(__entry->dev), MINOR(__entry->dev),
 		  (unsigned long) __entry->ino, (void *)__entry->ip)
 );
diff --git a/include/trace/events/ext4.h b/include/trace/events/ext4.h
index 6e5abd6d38a2..47fca36ee426 100644
--- a/include/trace/events/ext4.h
+++ b/include/trace/events/ext4.h
@@ -240,7 +240,7 @@ TRACE_EVENT(ext4_mark_inode_dirty,
 		__entry->ip	= IP;
 	),
 
-	TP_printk("dev %d,%d ino %lu caller %pF",
+	TP_printk("dev %d,%d ino %lu caller %pS",
 		  MAJOR(__entry->dev), MINOR(__entry->dev),
 		  (unsigned long) __entry->ino, (void *)__entry->ip)
 );
@@ -1762,7 +1762,7 @@ TRACE_EVENT(ext4_journal_start,
 		__entry->rsv_blocks	 = rsv_blocks;
 	),
 
-	TP_printk("dev %d,%d blocks, %d rsv_blocks, %d caller %pF",
+	TP_printk("dev %d,%d blocks, %d rsv_blocks, %d caller %pS",
 		  MAJOR(__entry->dev), MINOR(__entry->dev),
 		  __entry->blocks, __entry->rsv_blocks, (void *)__entry->ip)
 );
@@ -1784,7 +1784,7 @@ TRACE_EVENT(ext4_journal_start_reserved,
 		__entry->blocks		 = blocks;
 	),
 
-	TP_printk("dev %d,%d blocks, %d caller %pF",
+	TP_printk("dev %d,%d blocks, %d caller %pS",
 		  MAJOR(__entry->dev), MINOR(__entry->dev),
 		  __entry->blocks, (void *)__entry->ip)
 );
diff --git a/include/trace/events/module.h b/include/trace/events/module.h
index 81c4c183d348..28c45997e451 100644
--- a/include/trace/events/module.h
+++ b/include/trace/events/module.h
@@ -84,7 +84,7 @@ DECLARE_EVENT_CLASS(module_refcnt,
 		__assign_str(name, mod->name);
 	),
 
-	TP_printk("%s call_site=%pf refcnt=%d",
+	TP_printk("%s call_site=%ps refcnt=%d",
 		  __get_str(name), (void *)__entry->ip, __entry->refcnt)
 );
 
@@ -121,7 +121,7 @@ TRACE_EVENT(module_request,
 		__assign_str(name, name);
 	),
 
-	TP_printk("%s wait=%d call_site=%pf",
+	TP_printk("%s wait=%d call_site=%ps",
 		  __get_str(name), (int)__entry->wait, (void *)__entry->ip)
 );
 
diff --git a/include/trace/events/random.h b/include/trace/events/random.h
index 805af6db41cc..4684de344c5d 100644
--- a/include/trace/events/random.h
+++ b/include/trace/events/random.h
@@ -22,7 +22,7 @@ TRACE_EVENT(add_device_randomness,
 		__entry->IP		= IP;
 	),
 
-	TP_printk("bytes %d caller %pF",
+	TP_printk("bytes %d caller %pS",
 		__entry->bytes, (void *)__entry->IP)
 );
 
@@ -43,7 +43,7 @@ DECLARE_EVENT_CLASS(random__mix_pool_bytes,
 		__entry->IP		= IP;
 	),
 
-	TP_printk("%s pool: bytes %d caller %pF",
+	TP_printk("%s pool: bytes %d caller %pS",
 		  __entry->pool_name, __entry->bytes, (void *)__entry->IP)
 );
 
@@ -82,7 +82,7 @@ TRACE_EVENT(credit_entropy_bits,
 	),
 
 	TP_printk("%s pool: bits %d entropy_count %d entropy_total %d "
-		  "caller %pF", __entry->pool_name, __entry->bits,
+		  "caller %pS", __entry->pool_name, __entry->bits,
 		  __entry->entropy_count, __entry->entropy_total,
 		  (void *)__entry->IP)
 );
@@ -207,7 +207,7 @@ DECLARE_EVENT_CLASS(random__get_random_bytes,
 		__entry->IP		= IP;
 	),
 
-	TP_printk("nbytes %d caller %pF", __entry->nbytes, (void *)__entry->IP)
+	TP_printk("nbytes %d caller %pS", __entry->nbytes, (void *)__entry->IP)
 );
 
 DEFINE_EVENT(random__get_random_bytes, get_random_bytes,
@@ -242,7 +242,7 @@ DECLARE_EVENT_CLASS(random__extract_entropy,
 		__entry->IP		= IP;
 	),
 
-	TP_printk("%s pool: nbytes %d entropy_count %d caller %pF",
+	TP_printk("%s pool: nbytes %d entropy_count %d caller %pS",
 		  __entry->pool_name, __entry->nbytes, __entry->entropy_count,
 		  (void *)__entry->IP)
 );
diff --git a/kernel/trace/trace_entries.h b/kernel/trace/trace_entries.h
index e2d027ac66a2..ee7b94a4810a 100644
--- a/kernel/trace/trace_entries.h
+++ b/kernel/trace/trace_entries.h
@@ -223,7 +223,7 @@ FTRACE_ENTRY(bprint, bprint_entry,
 		__dynamic_array(	u32,	buf	)
 	),
 
-	F_printk("%pf: %s",
+	F_printk("%ps: %s",
 		 (void *)__entry->ip, __entry->fmt),
 
 	FILTER_OTHER
@@ -238,7 +238,7 @@ FTRACE_ENTRY(print, print_entry,
 		__dynamic_array(	char,	buf	)
 	),
 
-	F_printk("%pf: %s",
+	F_printk("%ps: %s",
 		 (void *)__entry->ip, __entry->buf),
 
 	FILTER_OTHER
@@ -253,7 +253,7 @@ FTRACE_ENTRY(bputs, bputs_entry,
 		__field(	const char *,	str	)
 	),
 
-	F_printk("%pf: %s",
+	F_printk("%ps: %s",
 		 (void *)__entry->ip, __entry->str),
 
 	FILTER_OTHER
diff --git a/tools/lib/traceevent/event-parse.c b/tools/lib/traceevent/event-parse.c
index afe20ed9fac8..2c0bd8f2aad0 100644
--- a/tools/lib/traceevent/event-parse.c
+++ b/tools/lib/traceevent/event-parse.c
@@ -3976,7 +3976,7 @@ static struct print_arg *make_bprint_args(char *fmt, void *data, int size, struc
 	if (asprintf(&arg->atom.atom, "%lld", ip) < 0)
 		goto out_free;
 
-	/* skip the first "%pf: " */
+	/* skip the first "%ps: " */
 	for (ptr = fmt + 5, bptr = data + field->offset;
 	     bptr < data + size && *ptr; ptr++) {
 		int ls = 0;
-- 
2.1.4




* [for-next][PATCH 3/4] tracing: remove ftrace:function TRACE_EVENT_FL_USE_CALL_FILTER flag
  2015-03-25 13:00 [for-next][PATCH 0/4] tracing: updates for 4.1 Steven Rostedt
  2015-03-25 13:00 ` [for-next][PATCH 1/4] ring-buffer: Replace this_cpu_*() with __this_cpu_*() Steven Rostedt
  2015-03-25 13:00 ` [for-next][PATCH 2/4] tracing: %pF is only for function pointers Steven Rostedt
@ 2015-03-25 13:00 ` Steven Rostedt
  2015-03-25 13:00 ` [for-next][PATCH 4/4] trace: Don't use __weak in header files Steven Rostedt
  3 siblings, 0 replies; 11+ messages in thread
From: Steven Rostedt @ 2015-03-25 13:00 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Andrew Morton, He Kuang

[-- Attachment #1: 0003-tracing-remove-ftrace-function-TRACE_EVENT_FL_USE_CA.patch --]
[-- Type: text/plain, Size: 1583 bytes --]

From: He Kuang <hekuang@huawei.com>

The TRACE_EVENT_FL_USE_CALL_FILTER flag in the ftrace:function event can be
removed. This flag was first introduced in commit
f306cc82a93d ("tracing: Update event filters for multibuffer").

Now the only place that uses this flag is ftrace:function, but the
ftrace:function filter takes a different code path from events/syscalls
and events/tracepoints. It uses ftrace_filter_write() and perf's
ftrace_profile_set_filter() to set the filter; the functionality of the
file 'tracing/events/ftrace/function/filter' is bypassed in init_pred(),
in which case neither call->filter nor file->filter is used.

So we can safely remove the TRACE_EVENT_FL_USE_CALL_FILTER flag from
ftrace:function events.

Link: http://lkml.kernel.org/r/1425367294-27852-1-git-send-email-hekuang@huawei.com

Signed-off-by: He Kuang <hekuang@huawei.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace_export.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/trace/trace_export.c b/kernel/trace/trace_export.c
index 12e2b99be862..174a6a71146c 100644
--- a/kernel/trace/trace_export.c
+++ b/kernel/trace/trace_export.c
@@ -177,7 +177,7 @@ struct ftrace_event_call __used event_##call = {			\
 	},								\
 	.event.type		= etype,				\
 	.print_fmt		= print,				\
-	.flags			= TRACE_EVENT_FL_IGNORE_ENABLE | TRACE_EVENT_FL_USE_CALL_FILTER, \
+	.flags			= TRACE_EVENT_FL_IGNORE_ENABLE,		\
 };									\
 struct ftrace_event_call __used						\
 __attribute__((section("_ftrace_events"))) *__event_##call = &event_##call;
-- 
2.1.4




* [for-next][PATCH 4/4] trace: Don't use __weak in header files
  2015-03-25 13:00 [for-next][PATCH 0/4] tracing: updates for 4.1 Steven Rostedt
                   ` (2 preceding siblings ...)
  2015-03-25 13:00 ` [for-next][PATCH 3/4] tracing: remove ftrace:function TRACE_EVENT_FL_USE_CALL_FILTER flag Steven Rostedt
@ 2015-03-25 13:00 ` Steven Rostedt
  3 siblings, 0 replies; 11+ messages in thread
From: Steven Rostedt @ 2015-03-25 13:00 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Andrew Morton, Masami Hiramatsu, Stephen Rothwell

[-- Attachment #1: 0004-trace-Don-t-use-__weak-in-header-files.patch --]
[-- Type: text/plain, Size: 6140 bytes --]

From: Stephen Rothwell <sfr@canb.auug.org.au>

The commit that added a check for this to checkpatch says:

"Using weak declarations can have unintended link defects.  The __weak on
the declaration causes non-weak definitions to become weak."

In this case, when a PowerPC kernel is built with CONFIG_KPROBE_EVENT
but not CONFIG_UPROBE_EVENT, it generates the following warning:

WARNING: 1 bad relocations
c0000000014f2190 R_PPC64_ADDR64    uprobes_fetch_type_table

This is fixed by passing the fetch_table arrays to
traceprobe_parse_probe_arg(), which also means that they can never be NULL.
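
In other words, a __weak on a declaration in a shared header weakens the
definition that follows it. A condensed illustration with hypothetical
names (not the actual declarations):

 /* demo_fetch.h, a shared header */
 extern __weak const struct fetch_type demo_fetch_type_table[];

 /* demo_probe.c, which includes demo_fetch.h */
 const struct fetch_type demo_fetch_type_table[] = { };
 /*
  * Because of the __weak seen through the header, this definition is now
  * weak as well.  If the file providing the other table is not built
  * (e.g. CONFIG_UPROBE_EVENT=n), references to that table are left
  * dangling, which is what the ppc64 relocation warning points at.
  * Passing the table in as a function argument removes the header
  * declaration entirely.
  */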

Link: http://lkml.kernel.org/r/20150312165834.4482cb48@canb.auug.org.au

Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace_kprobe.c |  5 +++--
 kernel/trace/trace_probe.c  | 19 +++++++------------
 kernel/trace/trace_probe.h  | 10 ++--------
 kernel/trace/trace_uprobe.c |  5 +++--
 4 files changed, 15 insertions(+), 24 deletions(-)

diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
index d73f565b4e06..f34c3ad1b5f4 100644
--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -250,7 +250,7 @@ DEFINE_FETCH_symbol(string_size)
 #define fetch_file_offset_string_size	NULL
 
 /* Fetch type information table */
-const struct fetch_type kprobes_fetch_type_table[] = {
+static const struct fetch_type kprobes_fetch_type_table[] = {
 	/* Special types */
 	[FETCH_TYPE_STRING] = __ASSIGN_FETCH_TYPE("string", string, string,
 					sizeof(u32), 1, "__data_loc char[]"),
@@ -760,7 +760,8 @@ static int create_trace_kprobe(int argc, char **argv)
 
 		/* Parse fetch argument */
 		ret = traceprobe_parse_probe_arg(arg, &tk->tp.size, parg,
-						is_return, true);
+						is_return, true,
+						kprobes_fetch_type_table);
 		if (ret) {
 			pr_info("Parse error at argument[%d]. (%d)\n", i, ret);
 			goto error;
diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c
index b983b2fd2ca1..1769a81da8a7 100644
--- a/kernel/trace/trace_probe.c
+++ b/kernel/trace/trace_probe.c
@@ -356,17 +356,14 @@ static int parse_probe_vars(char *arg, const struct fetch_type *t,
 
 /* Recursive argument parser */
 static int parse_probe_arg(char *arg, const struct fetch_type *t,
-		     struct fetch_param *f, bool is_return, bool is_kprobe)
+		     struct fetch_param *f, bool is_return, bool is_kprobe,
+		     const struct fetch_type *ftbl)
 {
-	const struct fetch_type *ftbl;
 	unsigned long param;
 	long offset;
 	char *tmp;
 	int ret = 0;
 
-	ftbl = is_kprobe ? kprobes_fetch_type_table : uprobes_fetch_type_table;
-	BUG_ON(ftbl == NULL);
-
 	switch (arg[0]) {
 	case '$':
 		ret = parse_probe_vars(arg + 1, t, f, is_return, is_kprobe);
@@ -447,7 +444,7 @@ static int parse_probe_arg(char *arg, const struct fetch_type *t,
 			dprm->fetch_size = get_fetch_size_function(t,
 							dprm->fetch, ftbl);
 			ret = parse_probe_arg(arg, t2, &dprm->orig, is_return,
-							is_kprobe);
+							is_kprobe, ftbl);
 			if (ret)
 				kfree(dprm);
 			else {
@@ -505,15 +502,12 @@ static int __parse_bitfield_probe_arg(const char *bf,
 
 /* String length checking wrapper */
 int traceprobe_parse_probe_arg(char *arg, ssize_t *size,
-		struct probe_arg *parg, bool is_return, bool is_kprobe)
+		struct probe_arg *parg, bool is_return, bool is_kprobe,
+		const struct fetch_type *ftbl)
 {
-	const struct fetch_type *ftbl;
 	const char *t;
 	int ret;
 
-	ftbl = is_kprobe ? kprobes_fetch_type_table : uprobes_fetch_type_table;
-	BUG_ON(ftbl == NULL);
-
 	if (strlen(arg) > MAX_ARGSTR_LEN) {
 		pr_info("Argument is too long.: %s\n",  arg);
 		return -ENOSPC;
@@ -535,7 +529,8 @@ int traceprobe_parse_probe_arg(char *arg, ssize_t *size,
 	}
 	parg->offset = *size;
 	*size += parg->type->size;
-	ret = parse_probe_arg(arg, parg->type, &parg->fetch, is_return, is_kprobe);
+	ret = parse_probe_arg(arg, parg->type, &parg->fetch, is_return,
+			      is_kprobe, ftbl);
 
 	if (ret >= 0 && t != NULL)
 		ret = __parse_bitfield_probe_arg(t, parg->type, &parg->fetch);
diff --git a/kernel/trace/trace_probe.h b/kernel/trace/trace_probe.h
index 4f815fbce16d..e30f6cce4af6 100644
--- a/kernel/trace/trace_probe.h
+++ b/kernel/trace/trace_probe.h
@@ -229,13 +229,6 @@ ASSIGN_FETCH_FUNC(file_offset, ftype),			\
 #define FETCH_TYPE_STRING	0
 #define FETCH_TYPE_STRSIZE	1
 
-/*
- * Fetch type information table.
- * It's declared as a weak symbol due to conditional compilation.
- */
-extern __weak const struct fetch_type kprobes_fetch_type_table[];
-extern __weak const struct fetch_type uprobes_fetch_type_table[];
-
 #ifdef CONFIG_KPROBE_EVENT
 struct symbol_cache;
 unsigned long update_symbol_cache(struct symbol_cache *sc);
@@ -333,7 +326,8 @@ find_event_file_link(struct trace_probe *tp, struct ftrace_event_file *file)
 }
 
 extern int traceprobe_parse_probe_arg(char *arg, ssize_t *size,
-		   struct probe_arg *parg, bool is_return, bool is_kprobe);
+		   struct probe_arg *parg, bool is_return, bool is_kprobe,
+		   const struct fetch_type *ftbl);
 
 extern int traceprobe_conflict_field_name(const char *name,
 			       struct probe_arg *args, int narg);
diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
index 7dc1c8abecd6..74865465e0b7 100644
--- a/kernel/trace/trace_uprobe.c
+++ b/kernel/trace/trace_uprobe.c
@@ -196,7 +196,7 @@ DEFINE_FETCH_file_offset(string)
 DEFINE_FETCH_file_offset(string_size)
 
 /* Fetch type information table */
-const struct fetch_type uprobes_fetch_type_table[] = {
+static const struct fetch_type uprobes_fetch_type_table[] = {
 	/* Special types */
 	[FETCH_TYPE_STRING] = __ASSIGN_FETCH_TYPE("string", string, string,
 					sizeof(u32), 1, "__data_loc char[]"),
@@ -535,7 +535,8 @@ static int create_trace_uprobe(int argc, char **argv)
 
 		/* Parse fetch argument */
 		ret = traceprobe_parse_probe_arg(arg, &tu->tp.size, parg,
-						 is_return, false);
+						 is_return, false,
+						 uprobes_fetch_type_table);
 		if (ret) {
 			pr_info("Parse error at argument[%d]. (%d)\n", i, ret);
 			goto error;
-- 
2.1.4




* Re: [for-next][PATCH 1/4] ring-buffer: Replace this_cpu_*() with __this_cpu_*()
  2015-03-25 13:00 ` [for-next][PATCH 1/4] ring-buffer: Replace this_cpu_*() with __this_cpu_*() Steven Rostedt
@ 2015-03-27 19:41   ` Christoph Lameter
  2015-03-27 20:11     ` Steven Rostedt
  2015-03-27 21:50     ` [PATCH] ring-buffer: Remove duplicate use of '&' in recursive code Steven Rostedt
  0 siblings, 2 replies; 11+ messages in thread
From: Christoph Lameter @ 2015-03-27 19:41 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, Ingo Molnar, Andrew Morton, stable, Uwe Kleine-Koenig

On Wed, 25 Mar 2015, Steven Rostedt wrote:

> It has come to my attention that this_cpu_read/write are horrible on
> architectures other than x86. Worse yet, they actually disable
> preemption or interrupts! This caused some unexpected tracing results
> on ARM.

This isn't something new, and I thought the comment was dropped from the
patch? This is a plain error in using this_cpu_* where __this_cpu_* would
have been sufficient. The code was uselessly disabling preemption twice.

> Which is unacceptable for locations that are known to be within
> preempt-disabled or interrupt-disabled sections.

Well yes. That's why the __this_cpu ops are there to avoid this
overhead.

> I also changed the recursive_unlock() to use two local variables instead
> of accessing the per_cpu variable twice.

Ok gotta look at that.

>  static __always_inline void trace_recursive_unlock(void)
>  {
> -	unsigned int val = this_cpu_read(current_context);
> +	unsigned int val = __this_cpu_read(current_context);
>
> -	val--;
> -	val &= this_cpu_read(current_context);
> -	this_cpu_write(current_context, val);
> +	val &= val & (val - 1);
> +	__this_cpu_write(current_context, val);
>  }

Ummm... This does not look like an equivalent thing. Should this not
be:

	unsigned int val = __this_cpu_read(current_context);
	unsigned int newval = val - 1;

	newval &= val;
	__this_cpu_write(current_context, newval);

or more compact

	unsigned int val = __this_cpu_read(current_context);

	__this_cpu_write(current_context, val & (val - 1));



* Re: [for-next][PATCH 1/4] ring-buffer: Replace this_cpu_*() with __this_cpu_*()
  2015-03-27 19:41   ` Christoph Lameter
@ 2015-03-27 20:11     ` Steven Rostedt
  2015-03-30 12:44       ` Christoph Lameter
  2015-03-27 21:50     ` [PATCH] ring-buffer: Remove duplicate use of '&' in recursive code Steven Rostedt
  1 sibling, 1 reply; 11+ messages in thread
From: Steven Rostedt @ 2015-03-27 20:11 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: linux-kernel, Ingo Molnar, Andrew Morton, stable, Uwe Kleine-Koenig

On Fri, 27 Mar 2015 14:41:44 -0500 (CDT)
Christoph Lameter <cl@linux.com> wrote:

> On Wed, 25 Mar 2015, Steven Rostedt wrote:
> 
> > It has come to my attention that this_cpu_read/write are horrible on
> > architectures other than x86. Worse yet, they actually disable
> > preemption or interrupts! This caused some unexpected tracing results
> > on ARM.
> 
> This isn't something new, and I thought the comment was dropped from the
> patch? This is a plain error in using this_cpu_* where __this_cpu_* would
> have been sufficient. The code was uselessly disabling preemption twice.
> 

Where in the patch do you see the comment? Or were you talking about
the change log? The original patch did have a comment, and it was
dropped; that's what I thought you were talking about.

> > Which is unacceptable for locations that are known to be within
> > preempt-disabled or interrupt-disabled sections.
> 
> Well yes. That's why the __this_cpu ops are there to avoid this
> overhead.
> 
> > I also changed the recursive_unlock() to use two local variables instead
> > of accessing the per_cpu variable twice.
> 
> Ok gotta look at that.
> 
> >  static __always_inline void trace_recursive_unlock(void)
> >  {
> > -	unsigned int val = this_cpu_read(current_context);
> > +	unsigned int val = __this_cpu_read(current_context);
> >
> > -	val--;
> > -	val &= this_cpu_read(current_context);
> > -	this_cpu_write(current_context, val);
> > +	val &= val & (val - 1);
> > +	__this_cpu_write(current_context, val);
> >  }
> 
> Ummm... This does not look like an equivalent thing. Should this not
> be:
> 
> 	unsigned int val = __this_cpu_read(current_context);
> 	unsigned int newval = val - 1;
> 
> 	newval &= val;
> 	__this_cpu_write(current_context, newval);

Actually, it is equivalent, but I do see an issue with my patch.

	val &= val & (val - 1);

is the same as the more reasonable:

	val &= val - 1;

I think I meant to replace &= with = :-/

> 
> or more compact
> 
> 	unsigned int val = __this_cpu_read(current_context);
> 
> 	__this_cpu_write(current_context, val & (val - 1));

Maybe I'll just use your compact version.

Thanks,

-- Steve


* [PATCH] ring-buffer: Remove duplicate use of '&' in recursive code
  2015-03-27 19:41   ` Christoph Lameter
  2015-03-27 20:11     ` Steven Rostedt
@ 2015-03-27 21:50     ` Steven Rostedt
  1 sibling, 0 replies; 11+ messages in thread
From: Steven Rostedt @ 2015-03-27 21:50 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: linux-kernel, Ingo Molnar, Andrew Morton, stable, Uwe Kleine-Koenig


Steven Rostedt (Red Hat) (1):
      ring-buffer: Remove duplicate use of '&' in recursive code

----
 kernel/trace/ring_buffer.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
---------------------------
commit 7eb867195b9f3990da60738b1f26d0a71f37f77f
Author: Steven Rostedt (Red Hat) <rostedt@goodmis.org>
Date:   Fri Mar 27 17:39:49 2015 -0400

    ring-buffer: Remove duplicate use of '&' in recursive code
    
    A clean up of the recursive protection code changed
    
      val = this_cpu_read(current_context);
      val--;
      val &= this_cpu_read(current_context);
    
    to
    
      val = this_cpu_read(current_context);
      val &= val & (val - 1);
    
    Which has a duplicate use of '&' as the above is the same as
    
      val = val & (val - 1);
    
    Actually, it would be best to remove that line altogether and
    just add it to where it is used.
    
    Link: http://lkml.kernel.org/alpine.DEB.2.11.1503271423580.23114@gentwo.org
    
    Suggested-by: Christoph Lameter <cl@linux.com>
    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 922048a0f7ea..93caf56567cb 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -2705,8 +2705,7 @@ static __always_inline void trace_recursive_unlock(void)
 {
 	unsigned int val = __this_cpu_read(current_context);
 
-	val &= val & (val - 1);
-	__this_cpu_write(current_context, val);
+	__this_cpu_write(current_context, val & (val - 1));
 }
 
 #else
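
For reference, val & (val - 1) clears the lowest set bit of val, which is
why the original two-step form, the intermediate "val &= val & (val - 1)",
and the compact form above all compute the same value. A standalone sanity
check in plain userspace C (not kernel code):

 #include <assert.h>
 #include <stdio.h>

 int main(void)
 {
	unsigned int val = 0x5;                 /* bits 0 and 2 set */
	unsigned int cleared = val & (val - 1); /* lowest set bit cleared */

	assert(cleared == 0x4);
	assert(cleared == ((val - 1) & val));         /* old two-step form */
	assert(cleared == (val & (val & (val - 1)))); /* redundant '&' form */
	printf("0x%x -> 0x%x\n", val, cleared);
	return 0;
 }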


* Re: [for-next][PATCH 1/4] ring-buffer: Replace this_cpu_*() with __this_cpu_*()
  2015-03-27 20:11     ` Steven Rostedt
@ 2015-03-30 12:44       ` Christoph Lameter
  2015-03-30 13:37         ` Steven Rostedt
  0 siblings, 1 reply; 11+ messages in thread
From: Christoph Lameter @ 2015-03-30 12:44 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, Ingo Molnar, Andrew Morton, stable, Uwe Kleine-Koenig

On Fri, 27 Mar 2015, Steven Rostedt wrote:

> Where in the patch do you see the comment? Or were you talking about
> the change log? The original patch did have a comment, and it was
> dropped; that's what I thought you were talking about.

Sorry yes the changelog.

> Actually, it is equivalent, but I do see an issue with my patch.
>
> 	val &= val & (val - 1);
>
> is the same as the more reasonable:
>
> 	val &= val - 1;
>
> I think I meant to replace &= with = :-/
>
> >
> > or more compact
> >
> > 	unsigned int val = __this_cpu_read(current_context);
> >
> > 	__this_cpu_write(current_context, val & (val - 1));
>
> Maybe I'll just use your compact version.

Hmmm... It could even be more compact

__this_cpu_and(current_context, __this_cpu_read(current_context) - 1);



* Re: [for-next][PATCH 1/4] ring-buffer: Replace this_cpu_*() with __this_cpu_*()
  2015-03-30 12:44       ` Christoph Lameter
@ 2015-03-30 13:37         ` Steven Rostedt
  2015-03-30 14:32           ` Christoph Lameter
  0 siblings, 1 reply; 11+ messages in thread
From: Steven Rostedt @ 2015-03-30 13:37 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: linux-kernel, Ingo Molnar, Andrew Morton, stable, Uwe Kleine-Koenig

On Mon, 30 Mar 2015 07:44:30 -0500 (CDT)
Christoph Lameter <cl@linux.com> wrote:


> > >
> > > or more compact
> > >
> > > 	unsigned int val = __this_cpu_read(current_context);
> > >
> > > 	__this_cpu_write(current_context, val & (val - 1));
> >
> > Maybe I'll just use your compact version.
> 
> Hmmm... It could even be more compact
> 
> __this_cpu_and(current_context, __this_cpu_read(current_context) - 1);

Hmm,  I didn't realize there was an "and" version. I'm guessing this
would bring down the instruction count even more?

/me tries it.

I just finished testing my previous version. If this does prove to be
more compact, I'll have to replace that one with this one.

-- Steve



* Re: [for-next][PATCH 1/4] ring-buffer: Replace this_cpu_*() with __this_cpu_*()
  2015-03-30 13:37         ` Steven Rostedt
@ 2015-03-30 14:32           ` Christoph Lameter
  0 siblings, 0 replies; 11+ messages in thread
From: Christoph Lameter @ 2015-03-30 14:32 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, Ingo Molnar, Andrew Morton, stable, Uwe Kleine-Koenig

On Mon, 30 Mar 2015, Steven Rostedt wrote:

> Hmm,  I didn't realize there was an "and" version. I'm guessing this
> would bring down the instruction count even more?

Yes, two segment-prefixed instructions and a decrement.
