* [PATCH 1/3] tracing: Fix double free when function profile init failed
@ 2013-04-01 12:46 Namhyung Kim
2013-04-01 12:46 ` [PATCH 2/3] tracing: Fix off-by-one on allocating stat->pages Namhyung Kim
2013-04-01 12:46 ` [PATCH 3/3] ftrace: Constify ftrace_profile_bits Namhyung Kim
0 siblings, 2 replies; 7+ messages in thread
From: Namhyung Kim @ 2013-04-01 12:46 UTC (permalink / raw)
To: Steven Rostedt, Frederic Weisbecker; +Cc: LKML, Namhyung Kim
From: Namhyung Kim <namhyung.kim@lge.com>
On the failure path, stat->start and stat->pages refer to the same page,
so the cleanup code will attempt to free that page twice and cause a
kernel panic.
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
kernel/trace/ftrace.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 25770824598f..65bc47472b1b 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -694,7 +694,6 @@ int ftrace_profile_pages_init(struct ftrace_profile_stat *stat)
free_page(tmp);
}
- free_page((unsigned long)stat->pages);
stat->pages = NULL;
stat->start = NULL;
--
1.7.11.7
^ permalink raw reply related [flat|nested] 7+ messages in thread
* [PATCH 2/3] tracing: Fix off-by-one on allocating stat->pages
2013-04-01 12:46 [PATCH 1/3] tracing: Fix double free when function profile init failed Namhyung Kim
@ 2013-04-01 12:46 ` Namhyung Kim
2013-04-01 12:46 ` [PATCH 3/3] ftrace: Constify ftrace_profile_bits Namhyung Kim
1 sibling, 0 replies; 7+ messages in thread
From: Namhyung Kim @ 2013-04-01 12:46 UTC (permalink / raw)
To: Steven Rostedt, Frederic Weisbecker; +Cc: LKML, Namhyung Kim
From: Namhyung Kim <namhyung.kim@lge.com>
The first page was allocated separately before the loop, so there is no
need to start the loop index from 0; doing so allocated one extra page.
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
kernel/trace/ftrace.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 65bc47472b1b..d38ad7145f2f 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -676,7 +676,7 @@ int ftrace_profile_pages_init(struct ftrace_profile_stat *stat)
pages = DIV_ROUND_UP(functions, PROFILES_PER_PAGE);
- for (i = 0; i < pages; i++) {
+ for (i = 1; i < pages; i++) {
pg->next = (void *)get_zeroed_page(GFP_KERNEL);
if (!pg->next)
goto out_free;
--
1.7.11.7
* [PATCH 3/3] ftrace: Constify ftrace_profile_bits
2013-04-01 12:46 [PATCH 1/3] tracing: Fix double free when function profile init failed Namhyung Kim
2013-04-01 12:46 ` [PATCH 2/3] tracing: Fix off-by-one on allocating stat->pages Namhyung Kim
@ 2013-04-01 12:46 ` Namhyung Kim
2013-04-09 22:58 ` Steven Rostedt
2013-04-09 23:03 ` Steven Rostedt
1 sibling, 2 replies; 7+ messages in thread
From: Namhyung Kim @ 2013-04-01 12:46 UTC (permalink / raw)
To: Steven Rostedt, Frederic Weisbecker; +Cc: LKML, Namhyung Kim
From: Namhyung Kim <namhyung.kim@lge.com>
The function profiler's hash size is fixed at 1024. Make
ftrace_profile_bits const and update the hash size macro accordingly.
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
kernel/trace/ftrace.c | 13 ++++---------
1 file changed, 4 insertions(+), 9 deletions(-)
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index d38ad7145f2f..08bbc5952a3a 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -486,7 +486,6 @@ struct ftrace_profile_stat {
#define PROFILES_PER_PAGE \
(PROFILE_RECORDS_SIZE / sizeof(struct ftrace_profile))
-static int ftrace_profile_bits __read_mostly;
static int ftrace_profile_enabled __read_mostly;
/* ftrace_profile_lock - synchronize the enable and disable of the profiler */
@@ -494,7 +493,10 @@ static DEFINE_MUTEX(ftrace_profile_lock);
static DEFINE_PER_CPU(struct ftrace_profile_stat, ftrace_profile_stats);
-#define FTRACE_PROFILE_HASH_SIZE 1024 /* must be power of 2 */
+#define FTRACE_PROFILE_HASH_BITS 10
+#define FTRACE_PROFILE_HASH_SIZE (1 << FTRACE_PROFILE_HASH_BITS)
+
+static const int ftrace_profile_bits = FTRACE_PROFILE_HASH_BITS;
static void *
function_stat_next(void *v, int idx)
@@ -724,13 +726,6 @@ static int ftrace_profile_init_cpu(int cpu)
if (!stat->hash)
return -ENOMEM;
- if (!ftrace_profile_bits) {
- size--;
-
- for (; size; size >>= 1)
- ftrace_profile_bits++;
- }
-
/* Preallocate the function profiling pages */
if (ftrace_profile_pages_init(stat) < 0) {
kfree(stat->hash);
--
1.7.11.7
* Re: [PATCH 3/3] ftrace: Constify ftrace_profile_bits
2013-04-01 12:46 ` [PATCH 3/3] ftrace: Constify ftrace_profile_bits Namhyung Kim
@ 2013-04-09 22:58 ` Steven Rostedt
2013-04-09 23:03 ` Steven Rostedt
1 sibling, 0 replies; 7+ messages in thread
From: Steven Rostedt @ 2013-04-09 22:58 UTC (permalink / raw)
To: Namhyung Kim; +Cc: Frederic Weisbecker, LKML, Namhyung Kim
On Mon, 2013-04-01 at 21:46 +0900, Namhyung Kim wrote:
> From: Namhyung Kim <namhyung.kim@lge.com>
>
> The function profiler's hash size is fixed at 1024. Make
> ftrace_profile_bits const and update the hash size macro accordingly.
>
Thanks for the fixes. As patch 1/3 fixes a real bug (one that can cause
a panic), I've labeled it as stable and will push it for 3.9.
Patch 2/3 just saves memory. I'll mark it stable, but it can wait to go
into 3.10.
This patch is more of a clean up, and will only go into 3.10.
Thanks!
-- Steve
* Re: [PATCH 3/3] ftrace: Constify ftrace_profile_bits
2013-04-01 12:46 ` [PATCH 3/3] ftrace: Constify ftrace_profile_bits Namhyung Kim
2013-04-09 22:58 ` Steven Rostedt
@ 2013-04-09 23:03 ` Steven Rostedt
2013-04-09 23:55 ` [PATCH v2 3/3] ftrace: Get rid of ftrace_profile_bits Namhyung Kim
1 sibling, 1 reply; 7+ messages in thread
From: Steven Rostedt @ 2013-04-09 23:03 UTC (permalink / raw)
To: Namhyung Kim; +Cc: Frederic Weisbecker, LKML, Namhyung Kim
On Mon, 2013-04-01 at 21:46 +0900, Namhyung Kim wrote:
> From: Namhyung Kim <namhyung.kim@lge.com>
>
> The function profiler's hash size is fixed at 1024. Make
> ftrace_profile_bits const and update the hash size macro accordingly.
>
> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> ---
> kernel/trace/ftrace.c | 13 ++++---------
> 1 file changed, 4 insertions(+), 9 deletions(-)
>
> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> index d38ad7145f2f..08bbc5952a3a 100644
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -486,7 +486,6 @@ struct ftrace_profile_stat {
> #define PROFILES_PER_PAGE \
> (PROFILE_RECORDS_SIZE / sizeof(struct ftrace_profile))
>
> -static int ftrace_profile_bits __read_mostly;
> static int ftrace_profile_enabled __read_mostly;
>
> /* ftrace_profile_lock - synchronize the enable and disable of the profiler */
> @@ -494,7 +493,10 @@ static DEFINE_MUTEX(ftrace_profile_lock);
>
> static DEFINE_PER_CPU(struct ftrace_profile_stat, ftrace_profile_stats);
>
> -#define FTRACE_PROFILE_HASH_SIZE 1024 /* must be power of 2 */
> +#define FTRACE_PROFILE_HASH_BITS 10
> +#define FTRACE_PROFILE_HASH_SIZE (1 << FTRACE_PROFILE_HASH_BITS)
> +
> +static const int ftrace_profile_bits = FTRACE_PROFILE_HASH_BITS;
Actually, can you resubmit this, and remove ftrace_profile_bits totally,
and just use FTRACE_PROFILE_HASH_BITS directly?
Thanks,
-- Steve
>
> static void *
> function_stat_next(void *v, int idx)
> @@ -724,13 +726,6 @@ static int ftrace_profile_init_cpu(int cpu)
> if (!stat->hash)
> return -ENOMEM;
>
> - if (!ftrace_profile_bits) {
> - size--;
> -
> - for (; size; size >>= 1)
> - ftrace_profile_bits++;
> - }
> -
> /* Preallocate the function profiling pages */
> if (ftrace_profile_pages_init(stat) < 0) {
> kfree(stat->hash);
* [PATCH v2 3/3] ftrace: Get rid of ftrace_profile_bits
2013-04-09 23:03 ` Steven Rostedt
@ 2013-04-09 23:55 ` Namhyung Kim
2013-04-10 0:15 ` Steven Rostedt
0 siblings, 1 reply; 7+ messages in thread
From: Namhyung Kim @ 2013-04-09 23:55 UTC (permalink / raw)
To: Steven Rostedt; +Cc: Frederic Weisbecker, LKML, Namhyung Kim
From: Namhyung Kim <namhyung.kim@lge.com>
The function profiler's hash size is fixed at 1024. Add
FTRACE_PROFILE_HASH_BITS, use it directly, and update the hash size macro.
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
kernel/trace/ftrace.c | 15 ++++-----------
1 file changed, 4 insertions(+), 11 deletions(-)
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index d38ad7145f2f..78f4398cb608 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -486,7 +486,6 @@ struct ftrace_profile_stat {
#define PROFILES_PER_PAGE \
(PROFILE_RECORDS_SIZE / sizeof(struct ftrace_profile))
-static int ftrace_profile_bits __read_mostly;
static int ftrace_profile_enabled __read_mostly;
/* ftrace_profile_lock - synchronize the enable and disable of the profiler */
@@ -494,7 +493,8 @@ static DEFINE_MUTEX(ftrace_profile_lock);
static DEFINE_PER_CPU(struct ftrace_profile_stat, ftrace_profile_stats);
-#define FTRACE_PROFILE_HASH_SIZE 1024 /* must be power of 2 */
+#define FTRACE_PROFILE_HASH_BITS 10
+#define FTRACE_PROFILE_HASH_SIZE (1 << FTRACE_PROFILE_HASH_BITS)
static void *
function_stat_next(void *v, int idx)
@@ -724,13 +724,6 @@ static int ftrace_profile_init_cpu(int cpu)
if (!stat->hash)
return -ENOMEM;
- if (!ftrace_profile_bits) {
- size--;
-
- for (; size; size >>= 1)
- ftrace_profile_bits++;
- }
-
/* Preallocate the function profiling pages */
if (ftrace_profile_pages_init(stat) < 0) {
kfree(stat->hash);
@@ -764,7 +757,7 @@ ftrace_find_profiled_func(struct ftrace_profile_stat *stat, unsigned long ip)
struct hlist_node *n;
unsigned long key;
- key = hash_long(ip, ftrace_profile_bits);
+ key = hash_long(ip, FTRACE_PROFILE_HASH_BITS);
hhd = &stat->hash[key];
if (hlist_empty(hhd))
@@ -783,7 +776,7 @@ static void ftrace_add_profile(struct ftrace_profile_stat *stat,
{
unsigned long key;
- key = hash_long(rec->ip, ftrace_profile_bits);
+ key = hash_long(rec->ip, FTRACE_PROFILE_HASH_BITS);
hlist_add_head_rcu(&rec->node, &stat->hash[key]);
}
--
1.7.11.7
* Re: [PATCH v2 3/3] ftrace: Get rid of ftrace_profile_bits
2013-04-09 23:55 ` [PATCH v2 3/3] ftrace: Get rid of ftrace_profile_bits Namhyung Kim
@ 2013-04-10 0:15 ` Steven Rostedt
0 siblings, 0 replies; 7+ messages in thread
From: Steven Rostedt @ 2013-04-10 0:15 UTC (permalink / raw)
To: Namhyung Kim; +Cc: Frederic Weisbecker, LKML, Namhyung Kim
On Wed, 2013-04-10 at 08:55 +0900, Namhyung Kim wrote:
> From: Namhyung Kim <namhyung.kim@lge.com>
>
> The function profiler's hash size is fixed at 1024. Add
> FTRACE_PROFILE_HASH_BITS, use it directly, and update the hash size macro.
Thanks! I'll queue it in my 3.10 branch.
-- Steve
>
> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> ---