linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/6] ftrace: trivial cleanup
@ 2020-08-31  3:10 Wei Yang
  2020-08-31  3:10 ` [PATCH 1/6] ftrace: define seq_file only for FMODE_READ Wei Yang
                   ` (5 more replies)
  0 siblings, 6 replies; 11+ messages in thread
From: Wei Yang @ 2020-08-31  3:10 UTC (permalink / raw)
  To: rostedt, mingo; +Cc: linux-kernel, Wei Yang

Trivial cleanups related to ftrace.

Wei Yang (6):
  ftrace: define seq_file only for FMODE_READ
  ftrace: use fls() to get the bits for dup_hash()
  ftrace: simplify the dyn_ftrace->flags macro
  ftrace: simplify the calculation of page number for
    ftrace_page->records
  ftrace: replace do_for_each_ftrace_rec() with for_ftrace_rec_iter()
  ftrace: ftrace_global_list is renamed to ftrace_ops_list

 include/linux/ftrace.h |  11 ++---
 kernel/trace/ftrace.c  | 108 ++++++++++++++++++++---------------------
 2 files changed, 56 insertions(+), 63 deletions(-)

-- 
2.20.1 (Apple Git-117)

* [PATCH 1/6] ftrace: define seq_file only for FMODE_READ
  2020-08-31  3:10 [PATCH 0/6] ftrace: trivial cleanup Wei Yang
@ 2020-08-31  3:10 ` Wei Yang
  2020-10-06 14:36   ` Steven Rostedt
  2020-08-31  3:11 ` [PATCH 2/6] ftrace: use fls() to get the bits for dup_hash() Wei Yang
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 11+ messages in thread
From: Wei Yang @ 2020-08-31  3:10 UTC (permalink / raw)
  To: rostedt, mingo; +Cc: linux-kernel, Wei Yang

The purpose of the operation is to get the ftrace_iterator, which is
stored in file->private_data for the FMODE_WRITE case and in the
seq_file's private data for the FMODE_READ case. Since there is no
seq_file in the FMODE_WRITE case, it is meaningless to cast
file->private_data to a seq_file there.

Move the seq_file definition into the branch where a valid seq_file
exists.
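
For reference, a minimal sketch of the resulting flow (abbreviated here,
not the full function body):

        if (file->f_mode & FMODE_READ) {
                /* seq_open() stored the seq_file; the iterator hangs off it */
                struct seq_file *m = file->private_data;

                iter = m->private;
                seq_release(inode, file);
        } else
                /* write-only open: private_data holds the iterator directly */
                iter = file->private_data;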

Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
---
 kernel/trace/ftrace.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index edc233122598..12cb535769bc 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -5558,7 +5558,6 @@ static void __init set_ftrace_early_filters(void)
 
 int ftrace_regex_release(struct inode *inode, struct file *file)
 {
-	struct seq_file *m = (struct seq_file *)file->private_data;
 	struct ftrace_iterator *iter;
 	struct ftrace_hash **orig_hash;
 	struct trace_parser *parser;
@@ -5566,6 +5565,7 @@ int ftrace_regex_release(struct inode *inode, struct file *file)
 	int ret;
 
 	if (file->f_mode & FMODE_READ) {
+		struct seq_file *m = file->private_data;
 		iter = m->private;
 		seq_release(inode, file);
 	} else
-- 
2.20.1 (Apple Git-117)


* [PATCH 2/6] ftrace: use fls() to get the bits for dup_hash()
  2020-08-31  3:10 [PATCH 0/6] ftrace: trivial cleanup Wei Yang
  2020-08-31  3:10 ` [PATCH 1/6] ftrace: define seq_file only for FMODE_READ Wei Yang
@ 2020-08-31  3:11 ` Wei Yang
  2020-08-31  3:11 ` [PATCH 3/6] ftrace: simplify the dyn_ftrace->flags macro Wei Yang
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Wei Yang @ 2020-08-31  3:11 UTC (permalink / raw)
  To: rostedt, mingo; +Cc: linux-kernel, Wei Yang

The intent here is to compute the number of bits, so let's use fls() to
do the job.
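
As a quick sanity check (illustration only, not part of the patch), the
old loop and the fls() form compute the same value:

        int bits_loop = 0, bits_fls, s;

        for (s = size / 2; s; s >>= 1)
                bits_loop++;

        bits_fls = fls(size);
        if (bits_fls)
                bits_fls--;

        /* both are floor(log2(size)), e.g. size = 5 -> 2, size = 8 -> 3 */
        WARN_ON(bits_loop != bits_fls);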

Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
---
 kernel/trace/ftrace.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 12cb535769bc..9021e16fa079 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -1370,8 +1370,9 @@ static struct ftrace_hash *dup_hash(struct ftrace_hash *src, int size)
 	/*
 	 * Make the hash size about 1/2 the # found
 	 */
-	for (size /= 2; size; size >>= 1)
-		bits++;
+	bits = fls(size);
+	if (bits)
+		bits--;
 
 	/* Don't allocate too much */
 	if (bits > FTRACE_HASH_MAX_BITS)
-- 
2.20.1 (Apple Git-117)


* [PATCH 3/6] ftrace: simplify the dyn_ftrace->flags macro
  2020-08-31  3:10 [PATCH 0/6] ftrace: trivial cleanup Wei Yang
  2020-08-31  3:10 ` [PATCH 1/6] ftrace: define seq_file only for FMODE_READ Wei Yang
  2020-08-31  3:11 ` [PATCH 2/6] ftrace: use fls() to get the bits for dup_hash() Wei Yang
@ 2020-08-31  3:11 ` Wei Yang
  2020-08-31  3:11 ` [PATCH 4/6] ftrace: simplify the calculation of page number for ftrace_page->records Wei Yang
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Wei Yang @ 2020-08-31  3:11 UTC (permalink / raw)
  To: rostedt, mingo; +Cc: linux-kernel, Wei Yang

All three macros are defined only for use by ftrace_rec_count(), and the
same result can be obtained with (flags & FTRACE_REF_MAX) directly.

Since no other place uses these macros, remove them for clarity.

Also fix a typo in the comment.
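
For illustration (not part of the patch), the two masks select the same
low bits, given the layout implied by FTRACE_REF_MAX_SHIFT and the former
FTRACE_FL_BITS:

        /*
         * rec->flags layout (only bits 0..31 are used):
         *   bits  0..22  reference count   -> FTRACE_REF_MAX      = 0x007fffff
         *   bits 23..31  FTRACE_FL_* bits  -> old FTRACE_FL_MASK  = 0xff800000
         *
         * so for any such flags value:
         *   (flags & ~FTRACE_FL_MASK) == (flags & FTRACE_REF_MAX)
         */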

Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
---
 include/linux/ftrace.h | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 3e3fe779a8c2..23c4d6526998 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -433,7 +433,7 @@ bool is_ftrace_trampoline(unsigned long addr);
  *  DIRECT   - there is a direct function to call
  *
  * When a new ftrace_ops is registered and wants a function to save
- * pt_regs, the rec->flag REGS is set. When the function has been
+ * pt_regs, the rec->flags REGS is set. When the function has been
  * set up to save regs, the REG_EN flag is set. Once a function
  * starts saving regs it will do so until all ftrace_ops are removed
  * from tracing that function.
@@ -451,12 +451,9 @@ enum {
 };
 
 #define FTRACE_REF_MAX_SHIFT	23
-#define FTRACE_FL_BITS		9
-#define FTRACE_FL_MASKED_BITS	((1UL << FTRACE_FL_BITS) - 1)
-#define FTRACE_FL_MASK		(FTRACE_FL_MASKED_BITS << FTRACE_REF_MAX_SHIFT)
 #define FTRACE_REF_MAX		((1UL << FTRACE_REF_MAX_SHIFT) - 1)
 
-#define ftrace_rec_count(rec)	((rec)->flags & ~FTRACE_FL_MASK)
+#define ftrace_rec_count(rec)	((rec)->flags & FTRACE_REF_MAX)
 
 struct dyn_ftrace {
 	unsigned long		ip; /* address of mcount call-site */
-- 
2.20.1 (Apple Git-117)


* [PATCH 4/6] ftrace: simplify the calculation of page number for ftrace_page->records
  2020-08-31  3:10 [PATCH 0/6] ftrace: trivial cleanup Wei Yang
                   ` (2 preceding siblings ...)
  2020-08-31  3:11 ` [PATCH 3/6] ftrace: simplify the dyn_ftrace->flags macro Wei Yang
@ 2020-08-31  3:11 ` Wei Yang
  2020-08-31  3:11 ` [PATCH 5/6] ftrace: replace do_for_each_ftrace_rec() with for_ftrace_rec_iter() Wei Yang
  2020-08-31  3:11 ` [PATCH 6/6] ftrace: ftrace_global_list is renamed to ftrace_ops_list Wei Yang
  5 siblings, 0 replies; 11+ messages in thread
From: Wei Yang @ 2020-08-31  3:11 UTC (permalink / raw)
  To: rostedt, mingo; +Cc: linux-kernel, Wei Yang

The calculation can be simplified based on the following two observations:

  - If the page count after rounding up is not a power of 2, the order
    returned by get_count_order() definitely leaves more than one page
    empty.
  - get_count_order() rounds up to the next order, so in that case one
    order lower still meets the requirement.

So lower the order by one level when the page count is not a power of 2.
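
A worked example may help (hypothetical numbers, assuming 8 records fit
in one page):

        count = 33, so pages = DIV_ROUND_UP(33, 8) = 5
        get_count_order(5) = 3   -> 8 pages (64 records), almost 4 pages empty
        5 is not a power of 2    -> order-- gives 2, i.e. 4 pages (32 records)

The old loop stops at the same point: 64 >= 33 + 8 is true (decrement once),
32 >= 33 + 8 is false (stop). The records that do not fit simply go into
the next ftrace_page allocation.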

Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
---
 kernel/trace/ftrace.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 9021e16fa079..15fcfa16895d 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -3127,18 +3127,19 @@ static int ftrace_update_code(struct module *mod, struct ftrace_page *new_pgs)
 static int ftrace_allocate_records(struct ftrace_page *pg, int count)
 {
 	int order;
-	int cnt;
+	int pages, cnt;
 
 	if (WARN_ON(!count))
 		return -EINVAL;
 
-	order = get_count_order(DIV_ROUND_UP(count, ENTRIES_PER_PAGE));
+	pages = DIV_ROUND_UP(count, ENTRIES_PER_PAGE);
+	order = get_count_order(pages);
 
 	/*
 	 * We want to fill as much as possible. No more than a page
 	 * may be empty.
 	 */
-	while ((PAGE_SIZE << order) / ENTRY_SIZE >= count + ENTRIES_PER_PAGE)
+	if (!is_power_of_2(pages))
 		order--;
 
  again:
-- 
2.20.1 (Apple Git-117)


* [PATCH 5/6] ftrace: replace do_for_each_ftrace_rec() with for_ftrace_rec_iter()
  2020-08-31  3:10 [PATCH 0/6] ftrace: trivial cleanup Wei Yang
                   ` (3 preceding siblings ...)
  2020-08-31  3:11 ` [PATCH 4/6] ftrace: simplify the calculation of page number for ftrace_page->records Wei Yang
@ 2020-08-31  3:11 ` Wei Yang
  2020-10-06 14:42   ` Steven Rostedt
  2020-08-31  3:11 ` [PATCH 6/6] ftrace: ftrace_global_list is renamed to ftrace_ops_list Wei Yang
  5 siblings, 1 reply; 11+ messages in thread
From: Wei Yang @ 2020-08-31  3:11 UTC (permalink / raw)
  To: rostedt, mingo; +Cc: linux-kernel, Wei Yang

Now we have two similar pieces of infrastructure for iterating over
ftrace_page and dyn_ftrace records:

  * do_for_each_ftrace_rec()
  * for_ftrace_rec_iter()

The second one, for_ftrace_rec_iter(), is the more generic of the two, so
keep it and replace all uses of do_for_each_ftrace_rec() with it.
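
For reference, the usage pattern after the conversion looks roughly like
this (sketch, not taken verbatim from the diff below):

        struct ftrace_rec_iter *iter;
        struct dyn_ftrace *rec;

        /* for_ftrace_rec_iter() expands to:
         * for (iter = ftrace_rec_iter_start(); iter;
         *      iter = ftrace_rec_iter_next(iter))
         */
        for_ftrace_rec_iter(iter) {
                rec = ftrace_rec_iter_record(iter);

                if (rec->flags & FTRACE_FL_DISABLED)
                        continue;
                /* ... operate on rec ... */
        }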

Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
---
 kernel/trace/ftrace.c | 94 ++++++++++++++++++++-----------------------
 1 file changed, 44 insertions(+), 50 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 15fcfa16895d..4def668f45ba 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -1501,21 +1501,6 @@ ftrace_ops_test(struct ftrace_ops *ops, unsigned long ip, void *regs)
 	return ret;
 }
 
-/*
- * This is a double for. Do not use 'break' to break out of the loop,
- * you must use a goto.
- */
-#define do_for_each_ftrace_rec(pg, rec)					\
-	for (pg = ftrace_pages_start; pg; pg = pg->next) {		\
-		int _____i;						\
-		for (_____i = 0; _____i < pg->index; _____i++) {	\
-			rec = &pg->records[_____i];
-
-#define while_for_each_ftrace_rec()		\
-		}				\
-	}
-
-
 static int ftrace_cmp_recs(const void *a, const void *b)
 {
 	const struct dyn_ftrace *key = a;
@@ -1638,7 +1623,7 @@ static bool __ftrace_hash_rec_update(struct ftrace_ops *ops,
 {
 	struct ftrace_hash *hash;
 	struct ftrace_hash *other_hash;
-	struct ftrace_page *pg;
+	struct ftrace_rec_iter *iter;
 	struct dyn_ftrace *rec;
 	bool update = false;
 	int count = 0;
@@ -1676,11 +1661,13 @@ static bool __ftrace_hash_rec_update(struct ftrace_ops *ops,
 			return false;
 	}
 
-	do_for_each_ftrace_rec(pg, rec) {
+	for_ftrace_rec_iter(iter) {
 		int in_other_hash = 0;
 		int in_hash = 0;
 		int match = 0;
 
+		rec = ftrace_rec_iter_record(iter);
+
 		if (rec->flags & FTRACE_FL_DISABLED)
 			continue;
 
@@ -1797,7 +1784,7 @@ static bool __ftrace_hash_rec_update(struct ftrace_ops *ops,
 		/* Shortcut, if we handled all records, we are done. */
 		if (!all && count == hash->count)
 			return update;
-	} while_for_each_ftrace_rec();
+	}
 
 	return update;
 }
@@ -1862,7 +1849,7 @@ static int __ftrace_hash_update_ipmodify(struct ftrace_ops *ops,
 					 struct ftrace_hash *old_hash,
 					 struct ftrace_hash *new_hash)
 {
-	struct ftrace_page *pg;
+	struct ftrace_rec_iter *iter;
 	struct dyn_ftrace *rec, *end = NULL;
 	int in_old, in_new;
 
@@ -1881,7 +1868,8 @@ static int __ftrace_hash_update_ipmodify(struct ftrace_ops *ops,
 		return -EINVAL;
 
 	/* Update rec->flags */
-	do_for_each_ftrace_rec(pg, rec) {
+	for_ftrace_rec_iter(iter) {
+		rec = ftrace_rec_iter_record(iter);
 
 		if (rec->flags & FTRACE_FL_DISABLED)
 			continue;
@@ -1899,7 +1887,7 @@ static int __ftrace_hash_update_ipmodify(struct ftrace_ops *ops,
 			rec->flags |= FTRACE_FL_IPMODIFY;
 		} else /* Removed entry */
 			rec->flags &= ~FTRACE_FL_IPMODIFY;
-	} while_for_each_ftrace_rec();
+	}
 
 	return 0;
 
@@ -1907,7 +1895,8 @@ static int __ftrace_hash_update_ipmodify(struct ftrace_ops *ops,
 	end = rec;
 
 	/* Roll back what we did above */
-	do_for_each_ftrace_rec(pg, rec) {
+	for_ftrace_rec_iter(iter) {
+		rec = ftrace_rec_iter_record(iter);
 
 		if (rec->flags & FTRACE_FL_DISABLED)
 			continue;
@@ -1924,7 +1913,7 @@ static int __ftrace_hash_update_ipmodify(struct ftrace_ops *ops,
 			rec->flags &= ~FTRACE_FL_IPMODIFY;
 		else
 			rec->flags |= FTRACE_FL_IPMODIFY;
-	} while_for_each_ftrace_rec();
+	}
 
 err_out:
 	return -EBUSY;
@@ -2517,8 +2506,8 @@ __ftrace_replace_code(struct dyn_ftrace *rec, bool enable)
 
 void __weak ftrace_replace_code(int mod_flags)
 {
+	struct ftrace_rec_iter *iter;
 	struct dyn_ftrace *rec;
-	struct ftrace_page *pg;
 	bool enable = mod_flags & FTRACE_MODIFY_ENABLE_FL;
 	int schedulable = mod_flags & FTRACE_MODIFY_MAY_SLEEP_FL;
 	int failed;
@@ -2526,7 +2515,8 @@ void __weak ftrace_replace_code(int mod_flags)
 	if (unlikely(ftrace_disabled))
 		return;
 
-	do_for_each_ftrace_rec(pg, rec) {
+	for_ftrace_rec_iter(iter) {
+		rec = ftrace_rec_iter_record(iter);
 
 		if (rec->flags & FTRACE_FL_DISABLED)
 			continue;
@@ -2539,7 +2529,7 @@ void __weak ftrace_replace_code(int mod_flags)
 		}
 		if (schedulable)
 			cond_resched();
-	} while_for_each_ftrace_rec();
+	}
 }
 
 struct ftrace_rec_iter {
@@ -2940,14 +2930,15 @@ int ftrace_shutdown(struct ftrace_ops *ops, int command)
 	 */
 	if (rcu_dereference_protected(ftrace_ops_list,
 			lockdep_is_held(&ftrace_lock)) == &ftrace_list_end) {
-		struct ftrace_page *pg;
+		struct ftrace_rec_iter *iter;
 		struct dyn_ftrace *rec;
 
-		do_for_each_ftrace_rec(pg, rec) {
+		for_ftrace_rec_iter(iter) {
+			rec = ftrace_rec_iter_record(iter);
 			if (FTRACE_WARN_ON_ONCE(rec->flags & ~FTRACE_FL_DISABLED))
 				pr_warn("  %pS flags:%lx\n",
 					(void *)rec->ip, rec->flags);
-		} while_for_each_ftrace_rec();
+		}
 	}
 
 	ops->old_hash.filter_hash = NULL;
@@ -3917,6 +3908,7 @@ add_rec_by_index(struct ftrace_hash *hash, struct ftrace_glob *func_g,
 		 int clear_filter)
 {
 	long index = simple_strtoul(func_g->search, NULL, 0);
+	struct ftrace_rec_iter *iter;
 	struct ftrace_page *pg;
 	struct dyn_ftrace *rec;
 
@@ -3924,16 +3916,17 @@ add_rec_by_index(struct ftrace_hash *hash, struct ftrace_glob *func_g,
 	if (--index < 0)
 		return 0;
 
-	do_for_each_ftrace_rec(pg, rec) {
+	for_ftrace_rec_iter(iter) {
+		pg = iter->pg;
 		if (pg->index <= index) {
+			iter->index = pg->index;
 			index -= pg->index;
-			/* this is a double loop, break goes to the next page */
-			break;
+			continue;
 		}
 		rec = &pg->records[index];
 		enter_record(hash, rec, clear_filter);
 		return 1;
-	} while_for_each_ftrace_rec();
+	}
 	return 0;
 }
 
@@ -3978,7 +3971,7 @@ ftrace_match_record(struct dyn_ftrace *rec, struct ftrace_glob *func_g,
 static int
 match_records(struct ftrace_hash *hash, char *func, int len, char *mod)
 {
-	struct ftrace_page *pg;
+	struct ftrace_rec_iter *iter;
 	struct dyn_ftrace *rec;
 	struct ftrace_glob func_g = { .type = MATCH_FULL };
 	struct ftrace_glob mod_g = { .type = MATCH_FULL };
@@ -4010,7 +4003,8 @@ match_records(struct ftrace_hash *hash, char *func, int len, char *mod)
 		goto out_unlock;
 	}
 
-	do_for_each_ftrace_rec(pg, rec) {
+	for_ftrace_rec_iter(iter) {
+		rec = ftrace_rec_iter_record(iter);
 
 		if (rec->flags & FTRACE_FL_DISABLED)
 			continue;
@@ -4023,7 +4017,7 @@ match_records(struct ftrace_hash *hash, char *func, int len, char *mod)
 			}
 			found = 1;
 		}
-	} while_for_each_ftrace_rec();
+	}
  out_unlock:
 	mutex_unlock(&ftrace_lock);
 
@@ -5945,7 +5939,7 @@ ftrace_graph_set_hash(struct ftrace_hash *hash, char *buffer)
 {
 	struct ftrace_glob func_g;
 	struct dyn_ftrace *rec;
-	struct ftrace_page *pg;
+	struct ftrace_rec_iter *iter;
 	struct ftrace_func_entry *entry;
 	int fail = 1;
 	int not;
@@ -5963,7 +5957,8 @@ ftrace_graph_set_hash(struct ftrace_hash *hash, char *buffer)
 		return -ENODEV;
 	}
 
-	do_for_each_ftrace_rec(pg, rec) {
+	for_ftrace_rec_iter(iter) {
+		rec = ftrace_rec_iter_record(iter);
 
 		if (rec->flags & FTRACE_FL_DISABLED)
 			continue;
@@ -5985,7 +5980,7 @@ ftrace_graph_set_hash(struct ftrace_hash *hash, char *buffer)
 				}
 			}
 		}
-	} while_for_each_ftrace_rec();
+	}
 out:
 	mutex_unlock(&ftrace_lock);
 
@@ -6407,7 +6402,7 @@ void ftrace_release_mod(struct module *mod)
 void ftrace_module_enable(struct module *mod)
 {
 	struct dyn_ftrace *rec;
-	struct ftrace_page *pg;
+	struct ftrace_rec_iter *iter;
 
 	mutex_lock(&ftrace_lock);
 
@@ -6430,17 +6425,16 @@ void ftrace_module_enable(struct module *mod)
 	if (ftrace_start_up)
 		ftrace_arch_code_modify_prepare();
 
-	do_for_each_ftrace_rec(pg, rec) {
+	for_ftrace_rec_iter(iter) {
 		int cnt;
-		/*
-		 * do_for_each_ftrace_rec() is a double loop.
-		 * module text shares the pg. If a record is
-		 * not part of this module, then skip this pg,
-		 * which the "break" will do.
-		 */
+		rec = ftrace_rec_iter_record(iter);
+
 		if (!within_module_core(rec->ip, mod) &&
-		    !within_module_init(rec->ip, mod))
-			break;
+		    !within_module_init(rec->ip, mod)) {
+			/* skip current ftrace_page */
+			iter->index = iter->pg->index;
+			continue;
+		}
 
 		cnt = 0;
 
@@ -6464,7 +6458,7 @@ void ftrace_module_enable(struct module *mod)
 			}
 		}
 
-	} while_for_each_ftrace_rec();
+	}
 
  out_loop:
 	if (ftrace_start_up)
-- 
2.20.1 (Apple Git-117)


* [PATCH 6/6] ftrace: ftrace_global_list is renamed to ftrace_ops_list
  2020-08-31  3:10 [PATCH 0/6] ftrace: trivial cleanup Wei Yang
                   ` (4 preceding siblings ...)
  2020-08-31  3:11 ` [PATCH 5/6] ftrace: replace do_for_each_ftrace_rec() with for_ftrace_rec_iter() Wei Yang
@ 2020-08-31  3:11 ` Wei Yang
  5 siblings, 0 replies; 11+ messages in thread
From: Wei Yang @ 2020-08-31  3:11 UTC (permalink / raw)
  To: rostedt, mingo; +Cc: linux-kernel, Wei Yang

Fix the comment to match the code: ftrace_global_list was renamed to
ftrace_ops_list.

Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
---
 include/linux/ftrace.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 23c4d6526998..8e1fd97343c6 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -218,11 +218,11 @@ extern struct ftrace_ops __rcu *ftrace_ops_list;
 extern struct ftrace_ops ftrace_list_end;
 
 /*
- * Traverse the ftrace_global_list, invoking all entries.  The reason that we
+ * Traverse the ftrace_ops_list, invoking all entries.  The reason that we
  * can use rcu_dereference_raw_check() is that elements removed from this list
  * are simply leaked, so there is no need to interact with a grace-period
  * mechanism.  The rcu_dereference_raw_check() calls are needed to handle
- * concurrent insertions into the ftrace_global_list.
+ * concurrent insertions into the ftrace_ops_list.
  *
  * Silly Alpha and silly pointer-speculation compiler optimizations!
  */
-- 
2.20.1 (Apple Git-117)


* Re: [PATCH 1/6] ftrace: define seq_file only for FMODE_READ
  2020-08-31  3:10 ` [PATCH 1/6] ftrace: define seq_file only for FMODE_READ Wei Yang
@ 2020-10-06 14:36   ` Steven Rostedt
  2020-10-08  3:34     ` Wei Yang
  0 siblings, 1 reply; 11+ messages in thread
From: Steven Rostedt @ 2020-10-06 14:36 UTC (permalink / raw)
  To: Wei Yang; +Cc: mingo, linux-kernel

On Mon, 31 Aug 2020 11:10:59 +0800
Wei Yang <richard.weiyang@linux.alibaba.com> wrote:

> The purpose of the operation is to get ftrace_iterator, which is embedded
> in file or seq_file for FMODE_WRITE/FMODE_READ respectively. Since we
> don't have a seq_file for FMODE_WRITE case, it is meaningless to cast
> file->private_data to seq_file.
> 
> Let's move the definition when there is a valid seq_file.

I didn't pull in this patch because I find the original more expressive,
and there's really no benefit in changing it.

-- Steve


> 
> Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
> ---
>  kernel/trace/ftrace.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> index edc233122598..12cb535769bc 100644
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -5558,7 +5558,6 @@ static void __init set_ftrace_early_filters(void)
>  
>  int ftrace_regex_release(struct inode *inode, struct file *file)
>  {
> -	struct seq_file *m = (struct seq_file *)file->private_data;
>  	struct ftrace_iterator *iter;
>  	struct ftrace_hash **orig_hash;
>  	struct trace_parser *parser;
> @@ -5566,6 +5565,7 @@ int ftrace_regex_release(struct inode *inode, struct file *file)
>  	int ret;
>  
>  	if (file->f_mode & FMODE_READ) {
> +		struct seq_file *m = file->private_data;
>  		iter = m->private;
>  		seq_release(inode, file);
>  	} else


* Re: [PATCH 5/6] ftrace: replace do_for_each_ftrace_rec() with for_ftrace_rec_iter()
  2020-08-31  3:11 ` [PATCH 5/6] ftrace: replace do_for_each_ftrace_rec() with for_ftrace_rec_iter() Wei Yang
@ 2020-10-06 14:42   ` Steven Rostedt
  2020-10-08  3:34     ` Wei Yang
  0 siblings, 1 reply; 11+ messages in thread
From: Steven Rostedt @ 2020-10-06 14:42 UTC (permalink / raw)
  To: Wei Yang; +Cc: mingo, linux-kernel

On Mon, 31 Aug 2020 11:11:03 +0800
Wei Yang <richard.weiyang@linux.alibaba.com> wrote:

> Now we have two similar infrastructure to iterate ftrace_page and
> dyn_ftrace:
> 
>   * do_for_each_ftrace_rec()
>   * for_ftrace_rec_iter()
> 
> The 2nd one, for_ftrace_rec_iter(), looks more generic, so preserve it
> and replace do_for_each_ftrace_rec() with it.
> 

I also didn't pull in this patch. The reason is that
for_ftrace_rec_iter() is specifically for external use (for archs to use)
and requires two function calls to do the iteration.

The do_for_each_ftrace_rec() is a faster, lightweight version, but for
internal use only.

I'd rather keep both, as each has its own, slightly different, use
cases.

-- Steve

* Re: [PATCH 1/6] ftrace: define seq_file only for FMODE_READ
  2020-10-06 14:36   ` Steven Rostedt
@ 2020-10-08  3:34     ` Wei Yang
  0 siblings, 0 replies; 11+ messages in thread
From: Wei Yang @ 2020-10-08  3:34 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: Wei Yang, mingo, linux-kernel

On Tue, Oct 06, 2020 at 10:36:38AM -0400, Steven Rostedt wrote:
>On Mon, 31 Aug 2020 11:10:59 +0800
>Wei Yang <richard.weiyang@linux.alibaba.com> wrote:
>
>> The purpose of the operation is to get ftrace_iterator, which is embedded
>> in file or seq_file for FMODE_WRITE/FMODE_READ respectively. Since we
>> don't have a seq_file for FMODE_WRITE case, it is meaningless to cast
>> file->private_data to seq_file.
>> 
>> Let's move the definition when there is a valid seq_file.
>
>I didn't pull in this patch because I find the original more expressive,
>and there's really no benefit in changing it.
>

Got it, thanks

>-- Steve
>
>
>> 
>> Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
>> ---
>>  kernel/trace/ftrace.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>> 
>> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
>> index edc233122598..12cb535769bc 100644
>> --- a/kernel/trace/ftrace.c
>> +++ b/kernel/trace/ftrace.c
>> @@ -5558,7 +5558,6 @@ static void __init set_ftrace_early_filters(void)
>>  
>>  int ftrace_regex_release(struct inode *inode, struct file *file)
>>  {
>> -	struct seq_file *m = (struct seq_file *)file->private_data;
>>  	struct ftrace_iterator *iter;
>>  	struct ftrace_hash **orig_hash;
>>  	struct trace_parser *parser;
>> @@ -5566,6 +5565,7 @@ int ftrace_regex_release(struct inode *inode, struct file *file)
>>  	int ret;
>>  
>>  	if (file->f_mode & FMODE_READ) {
>> +		struct seq_file *m = file->private_data;
>>  		iter = m->private;
>>  		seq_release(inode, file);
>>  	} else

-- 
Wei Yang
Help you, Help me

* Re: [PATCH 5/6] ftrace: replace do_for_each_ftrace_rec() with for_ftrace_rec_iter()
  2020-10-06 14:42   ` Steven Rostedt
@ 2020-10-08  3:34     ` Wei Yang
  0 siblings, 0 replies; 11+ messages in thread
From: Wei Yang @ 2020-10-08  3:34 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: Wei Yang, mingo, linux-kernel

On Tue, Oct 06, 2020 at 10:42:17AM -0400, Steven Rostedt wrote:
>On Mon, 31 Aug 2020 11:11:03 +0800
>Wei Yang <richard.weiyang@linux.alibaba.com> wrote:
>
>> Now we have two similar infrastructure to iterate ftrace_page and
>> dyn_ftrace:
>> 
>>   * do_for_each_ftrace_rec()
>>   * for_ftrace_rec_iter()
>> 
>> The 2nd one, for_ftrace_rec_iter(), looks more generic, so preserve it
>> and replace do_for_each_ftrace_rec() with it.
>> 
>
>I also didn't pull in this patch. The reason is that the
>for_ftrace_rec_iter() is specific for external usage (for archs to use) and
>requires two function calls to do the iterations.
>
>The do_for_each_ftrace_rec() is a faster, light weight version, but for
>internal use only.
>
>I rather keep both, as each has their own, but slightly different, use
>cases.
>

Got it, thanks

>-- Steve

-- 
Wei Yang
Help you, Help me

Thread overview: 11+ messages
2020-08-31  3:10 [PATCH 0/6] ftrace: trivial cleanup Wei Yang
2020-08-31  3:10 ` [PATCH 1/6] ftrace: define seq_file only for FMODE_READ Wei Yang
2020-10-06 14:36   ` Steven Rostedt
2020-10-08  3:34     ` Wei Yang
2020-08-31  3:11 ` [PATCH 2/6] ftrace: use fls() to get the bits for dup_hash() Wei Yang
2020-08-31  3:11 ` [PATCH 3/6] ftrace: simplify the dyn_ftrace->flags macro Wei Yang
2020-08-31  3:11 ` [PATCH 4/6] ftrace: simplify the calculation of page number for ftrace_page->records Wei Yang
2020-08-31  3:11 ` [PATCH 5/6] ftrace: replace do_for_each_ftrace_rec() with for_ftrace_rec_iter() Wei Yang
2020-10-06 14:42   ` Steven Rostedt
2020-10-08  3:34     ` Wei Yang
2020-08-31  3:11 ` [PATCH 6/6] ftrace: ftrace_global_list is renamed to ftrace_ops_list Wei Yang
