Message-Id: <20190520142157.921603945@goodmis.org>
Date: Mon, 20 May 2019 10:20:10 -0400
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, Peter Zijlstra,
    Masami Hiramatsu, Josh Poimboeuf, Frederic Weisbecker, Joel Fernandes,
    Andy Lutomirski, Mark Rutland, Namhyung Kim, "Frank Ch. Eigler"
Subject: [RFC][PATCH 09/14 v2] function_graph: Add "task variables" per task for fgraph_ops
References: <20190520142001.270067280@goodmis.org>

From: "Steven Rostedt (VMware)"

Add a "task variables" array on the task's shadow ret_stack, with one long
reserved for each possible registered fgraph_ops. That is 16 slots in total,
taking up 8 * 16 = 128 bytes of the 4k page used for the shadow stack. This
gives each fgraph_ops a way to maintain state for each task, which lets
fgraph_ops users implement features on a per-task basis.
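For illustration only, a user of this interface might look like the sketch
below. It is not part of this patch; the callback signatures follow the
gops-passing convention used elsewhere in this series, and my_entry,
my_return and my_gops are made-up names. It relies on the reserved slot
being zeroed when the task's ret_stack is set up:

/*
 * Hypothetical fgraph_ops user: count how many traced function entries
 * each task has seen, using the task variable reserved for this
 * fgraph_ops on the task's shadow ret_stack.
 */
#include <linux/kernel.h>
#include <linux/ftrace.h>
#include <linux/sched.h>

static int my_entry(struct ftrace_graph_ent *trace, struct fgraph_ops *gops)
{
	unsigned long *count = fgraph_get_task_var(gops);

	/* Per-task storage: only this task touches it, no locking needed */
	(*count)++;

	return 1;
}

static void my_return(struct ftrace_graph_ret *trace, struct fgraph_ops *gops)
{
	unsigned long *count = fgraph_get_task_var(gops);

	trace_printk("%s: %lu function entries so far\n",
		     current->comm, *count);
}

static struct fgraph_ops my_gops = {
	.entryfunc	= my_entry,
	.retfunc	= my_return,
};

/* Registered with: register_ftrace_graph(&my_gops); */

Because every registered fgraph_ops gets its own slot on each task, two such
users never step on each other's per-task state.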
Signed-off-by: Steven Rostedt (VMware)
---
 include/linux/ftrace.h |  2 ++
 kernel/trace/fgraph.c  | 73 +++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 74 insertions(+), 1 deletion(-)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index e6a596e7cdf4..a0bdd1745e56 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -754,6 +754,7 @@ struct fgraph_ops {
 	trace_func_graph_ret_t retfunc;
 	struct ftrace_ops ops; /* for the hash lists */
 	void *private;
+	int idx;
 };
 
 /*
@@ -792,6 +793,7 @@ ftrace_graph_get_ret_stack(struct task_struct *task, int idx);
 unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx,
 				    unsigned long ret, unsigned long *retp);
+unsigned long *fgraph_get_task_var(struct fgraph_ops *gops);
 
 int function_graph_enter(unsigned long ret, unsigned long func,
 			 unsigned long frame_pointer, unsigned long *retp);
diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
index 06511f5192b6..c225b04bcd00 100644
--- a/kernel/trace/fgraph.c
+++ b/kernel/trace/fgraph.c
@@ -91,10 +91,18 @@ enum {
 #define SHADOW_STACK_INDEX			\
 	(ALIGN(SHADOW_STACK_SIZE, sizeof(long)) / sizeof(long))
 /* Leave on a buffer at the end */
-#define SHADOW_STACK_MAX_INDEX (SHADOW_STACK_INDEX - (FGRAPH_RET_INDEX + 1))
+#define SHADOW_STACK_MAX_INDEX				\
+	(SHADOW_STACK_INDEX - (FGRAPH_RET_INDEX + 1 + FGRAPH_ARRAY_SIZE))
 
 #define RET_STACK(t, index) ((struct ftrace_ret_stack *)(&(t)->ret_stack[index]))
 
+/*
+ * Each fgraph_ops has a reserved unsigned long at the end (top) of the
+ * ret_stack to store task specific state.
+ */
+#define SHADOW_STACK_TASK_VARS(ret_stack) \
+	((unsigned long *)(&(ret_stack)[SHADOW_STACK_INDEX - FGRAPH_ARRAY_SIZE]))
+
 static bool kill_ftrace_graph;
 int ftrace_graph_active;
 
@@ -131,6 +139,44 @@ static void return_run(struct ftrace_graph_ret *trace, struct fgraph_ops *ops)
 	return;
 }
 
+static void ret_stack_set_task_var(struct task_struct *t, int idx, long val)
+{
+	unsigned long *gvals = SHADOW_STACK_TASK_VARS(t->ret_stack);
+
+	gvals[idx] = val;
+}
+
+static unsigned long *
+ret_stack_get_task_var(struct task_struct *t, int idx)
+{
+	unsigned long *gvals = SHADOW_STACK_TASK_VARS(t->ret_stack);
+
+	return &gvals[idx];
+}
+
+static void ret_stack_init_task_vars(unsigned long *ret_stack)
+{
+	unsigned long *gvals = SHADOW_STACK_TASK_VARS(ret_stack);
+
+	memset(gvals, 0, sizeof(*gvals) * FGRAPH_ARRAY_SIZE);
+}
+
+/**
+ * fgraph_get_task_var - retrieve a task specific state variable
+ * @gops: The ftrace_ops that owns the task specific variable
+ *
+ * Every registered fgraph_ops has a task state variable
+ * reserved on the task's ret_stack. This function returns the
+ * address to that variable.
+ *
+ * Returns the address to the fgraph_ops @gops task specific
+ * unsigned long variable.
+ */
+unsigned long *fgraph_get_task_var(struct fgraph_ops *gops)
+{
+	return ret_stack_get_task_var(current, gops->idx);
+}
+
 /*
  * @offset: The index into @t->ret_stack to find the ret_stack entry
  * @index: Where to place the index into @t->ret_stack of that entry
@@ -643,6 +689,7 @@ static int alloc_retstack_tasklist(unsigned long **ret_stack_list)
 		if (t->ret_stack == NULL) {
 			atomic_set(&t->tracing_graph_pause, 0);
 			atomic_set(&t->trace_overrun, 0);
+			ret_stack_init_task_vars(ret_stack_list[start]);
 			t->curr_ret_stack = 0;
 			t->curr_ret_depth = -1;
 			/* Make sure the tasks see the 0 first: */
@@ -702,6 +749,7 @@ graph_init_task(struct task_struct *t, unsigned long *ret_stack)
 {
 	atomic_set(&t->tracing_graph_pause, 0);
 	atomic_set(&t->trace_overrun, 0);
+	ret_stack_init_task_vars(ret_stack);
 	t->ftrace_timestamp = 0;
 	t->curr_ret_stack = 0;
 	t->curr_ret_depth = -1;
@@ -800,6 +848,24 @@ static int start_graph_tracing(void)
 	return ret;
 }
 
+static void init_task_vars(int idx)
+{
+	struct task_struct *g, *t;
+	int cpu;
+
+	for_each_online_cpu(cpu) {
+		if (idle_task(cpu)->ret_stack)
+			ret_stack_set_task_var(idle_task(cpu), idx, 0);
+	}
+
+	read_lock(&tasklist_lock);
+	do_each_thread(g, t) {
+		if (t->ret_stack)
+			ret_stack_set_task_var(t, idx, 0);
+	} while_each_thread(g, t);
+	read_unlock(&tasklist_lock);
+}
+
 int register_ftrace_graph(struct fgraph_ops *gops)
 {
 	int command = 0;
@@ -836,6 +902,7 @@ int register_ftrace_graph(struct fgraph_ops *gops)
 	fgraph_array[i] = gops;
 	if (i + 1 > fgraph_array_cnt)
 		fgraph_array_cnt = i + 1;
+	gops->idx = i;
 
 	ftrace_graph_active++;
 
@@ -853,6 +920,8 @@ int register_ftrace_graph(struct fgraph_ops *gops)
 		ftrace_graph_return = return_run;
 		ftrace_graph_entry = entry_run;
 		command = FTRACE_START_FUNC_RET;
+	} else {
+		init_task_vars(gops->idx);
 	}
 
 	ret = ftrace_startup(&gops->ops, command);
@@ -877,6 +946,8 @@ void unregister_ftrace_graph(struct fgraph_ops *gops)
 	if (i >= fgraph_array_cnt)
 		goto out;
 
+	WARN_ON_ONCE(gops->idx != i);
+
 	fgraph_array[i] = &fgraph_stub;
 
 	if (i + 1 == fgraph_array_cnt) {
 		for (; i >= 0; i--)
-- 
2.20.1