From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH 1/2] bpf: add a bpf_override_function helper
From: Alexei Starovoitov
To: Josef Bacik
Date: Mon, 30 Oct 2017 18:35:41 -0700
References: <1509398375-11927-1-git-send-email-josef@toxicpanda.com> <1509398375-11927-2-git-send-email-josef@toxicpanda.com>
In-Reply-To: <1509398375-11927-2-git-send-email-josef@toxicpanda.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="windows-1252"; format=flowed
X-Mailing-List: linux-kernel@vger.kernel.org

On 10/30/17 2:19 PM, Josef Bacik wrote:
> From: Josef Bacik
>
> Error injection is sloppy and very ad-hoc.
> BPF could fill this niche
> perfectly with its kprobe functionality. We could make sure errors are
> only triggered in specific call chains that we care about with very
> specific situations. Accomplish this with the bpf_override_function
> helper. This will modify the probed caller's return value to the
> specified value and set the PC to an override function that simply
> returns, bypassing the originally probed function. This gives us a nice
> clean way to implement systematic error injection for all of our code
> paths.
>
> Signed-off-by: Josef Bacik
> ---
>  arch/Kconfig                     |  3 +++
>  arch/x86/Kconfig                 |  1 +
>  arch/x86/include/asm/kprobes.h   |  4 ++++
>  arch/x86/include/asm/ptrace.h    |  5 +++++
>  arch/x86/kernel/kprobes/ftrace.c | 14 ++++++++++++
>  include/uapi/linux/bpf.h         |  7 +++++-
>  kernel/trace/Kconfig             | 11 ++++++++++
>  kernel/trace/bpf_trace.c         | 47 +++++++++++++++++++++++++++++++++++-----
>  kernel/trace/trace.h             |  6 +++++
>  kernel/trace/trace_kprobe.c      | 23 ++++++++++++++------
>  10 files changed, 108 insertions(+), 13 deletions(-)
>
> diff --git a/arch/Kconfig b/arch/Kconfig
> index d789a89cb32c..4fb618082259 100644
> --- a/arch/Kconfig
> +++ b/arch/Kconfig
> @@ -195,6 +195,9 @@ config HAVE_OPTPROBES
>  config HAVE_KPROBES_ON_FTRACE
>  	bool
>
> +config HAVE_KPROBE_OVERRIDE
> +	bool
> +
>  config HAVE_NMI
>  	bool
>
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index 971feac13506..5126d2750dd0 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -152,6 +152,7 @@ config X86
>  	select HAVE_KERNEL_XZ
>  	select HAVE_KPROBES
>  	select HAVE_KPROBES_ON_FTRACE
> +	select HAVE_KPROBE_OVERRIDE
>  	select HAVE_KRETPROBES
>  	select HAVE_KVM
>  	select HAVE_LIVEPATCH if X86_64
> diff --git a/arch/x86/include/asm/kprobes.h b/arch/x86/include/asm/kprobes.h
> index 6cf65437b5e5..c6c3b1f4306a 100644
> --- a/arch/x86/include/asm/kprobes.h
> +++ b/arch/x86/include/asm/kprobes.h
> @@ -67,6 +67,10 @@ extern const int kretprobe_blacklist_size;
>  void arch_remove_kprobe(struct kprobe *p);
>  asmlinkage void kretprobe_trampoline(void);
>
> +#ifdef CONFIG_KPROBES_ON_FTRACE
> +extern void arch_ftrace_kprobe_override_function(struct pt_regs *regs);
> +#endif
> +
>  /* Architecture specific copy of original instruction*/
>  struct arch_specific_insn {
>  	/* copy of the original instruction */
> diff --git a/arch/x86/include/asm/ptrace.h b/arch/x86/include/asm/ptrace.h
> index 91c04c8e67fa..f04e71800c2f 100644
> --- a/arch/x86/include/asm/ptrace.h
> +++ b/arch/x86/include/asm/ptrace.h
> @@ -108,6 +108,11 @@ static inline unsigned long regs_return_value(struct pt_regs *regs)
>  	return regs->ax;
>  }
>
> +static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
> +{
> +	regs->ax = rc;
> +}
> +
>  /*
>   * user_mode(regs) determines whether a register set came from user
>   * mode. On x86_32, this is true if V8086 mode was enabled OR if the
> diff --git a/arch/x86/kernel/kprobes/ftrace.c b/arch/x86/kernel/kprobes/ftrace.c
> index 041f7b6dfa0f..3c455bf490cb 100644
> --- a/arch/x86/kernel/kprobes/ftrace.c
> +++ b/arch/x86/kernel/kprobes/ftrace.c
> @@ -97,3 +97,17 @@ int arch_prepare_kprobe_ftrace(struct kprobe *p)
>  	p->ainsn.boostable = false;
>  	return 0;
>  }
> +
> +asmlinkage void override_func(void);
> +asm(
> +	".type override_func, @function\n"
> +	"override_func:\n"
> +	"	ret\n"
> +	".size override_func, .-override_func\n"
> +);
> +
> +void arch_ftrace_kprobe_override_function(struct pt_regs *regs)
> +{
> +	regs->ip = (unsigned long)&override_func;
> +}
> +NOKPROBE_SYMBOL(arch_ftrace_kprobe_override_function);
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 0b7b54d898bd..1ad5b87a42f6 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -673,6 +673,10 @@ union bpf_attr {
>   *     @buf: buf to fill
>   *     @buf_size: size of the buf
>   *     Return : 0 on success or negative error code
> + *
> + * int bpf_override_return(pt_regs, rc)
> + *     @pt_regs: pointer to struct pt_regs
> + *     @rc: the return value to set
>   */
>  #define __BPF_FUNC_MAPPER(FN)		\
>  	FN(unspec),			\
> @@ -732,7 +736,8 @@ union bpf_attr {
>  	FN(xdp_adjust_meta),		\
>  	FN(perf_event_read_value),	\
>  	FN(perf_prog_read_value),	\
> -	FN(getsockopt),
> +	FN(getsockopt),			\
> +	FN(override_return),
>
>  /* integer value in 'imm' field of BPF_CALL instruction selects which helper
>   * function eBPF program intends to call
> diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
> index 434c840e2d82..9dc0deeaad2b 100644
> --- a/kernel/trace/Kconfig
> +++ b/kernel/trace/Kconfig
> @@ -518,6 +518,17 @@ config FUNCTION_PROFILER
>
>  	  If in doubt, say N.
>
> +config BPF_KPROBE_OVERRIDE
> +	bool "Enable BPF programs to override a kprobed function"
> +	depends on BPF_EVENTS
> +	depends on KPROBES_ON_FTRACE
> +	depends on HAVE_KPROBE_OVERRIDE
> +	depends on DYNAMIC_FTRACE_WITH_REGS
> +	default n
> +	help
> +	  Allows BPF to override the execution of a probed function and
> +	  set a different return value.  This is used for error injection.
> +
>  config FTRACE_MCOUNT_RECORD
>  	def_bool y
>  	depends on DYNAMIC_FTRACE
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 136aa6bb0422..38b6d6016b71 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -13,10 +13,14 @@
>  #include
>  #include
>  #include
> +#include
> +
>  #include "trace.h"
>
>  u64 bpf_get_stackid(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);
>
> +static DEFINE_PER_CPU(int, pc_modified);
> +
>  /**
>   * trace_call_bpf - invoke BPF program
>   * @call: tracepoint event
> @@ -27,16 +31,18 @@ u64 bpf_get_stackid(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);
>   *
>   * Return: BPF programs always return an integer which is interpreted by
>   * kprobe handler as:
> - * 0 - return from kprobe (event is filtered out)
> - * 1 - store kprobe event into ring buffer
> - * Other values are reserved and currently alias to 1
> + * TRACE_KPROBE_SKIP - return from kprobe (event is filtered out)
> + * TRACE_KPROBE_STORE - store kprobe event into ring buffer
> + * TRACE_KPROBE_MODIFIED - we modified the registers, make sure the dispatcher
> + *	skips the event and returns so the kprobe infrastructure
> + *	doesn't mess with the next instruction.
>   */
>  unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx)
>  {
>  	unsigned int ret;
>
>  	if (in_nmi()) /* not supported yet */
> -		return 1;
> +		return TRACE_KPROBE_STORE;
>
>  	preempt_disable();
>
> @@ -47,7 +53,7 @@ unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx)
>  		 * and don't send kprobe event into ring-buffer,
>  		 * so return zero here
>  		 */
> -		ret = 0;
> +		ret = TRACE_KPROBE_SKIP;
>  		goto out;
>  	}
>
> @@ -67,7 +73,13 @@ unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx)
>  	 * rcu_dereference() which is accepted risk.
>  	 */
>  	ret = BPF_PROG_RUN_ARRAY_CHECK(call->prog_array, ctx, BPF_PROG_RUN);
> +	if (ret)
> +		ret = TRACE_KPROBE_STORE;
>
> +	if (__this_cpu_read(pc_modified)) {
> +		__this_cpu_write(pc_modified, 0);
> +		ret = TRACE_KPROBE_MODIFIED;

we probably need to fork trace_call_bpf() specifically for kprobes,
since this new functionality is not applicable to tracepoints and uprobes.
Like perf_event type bpf prog is using bpf_overflow_handler().

> +	}
>  out:
>  	__this_cpu_dec(bpf_prog_active);
>  	preempt_enable();
> @@ -76,6 +88,29 @@ unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx)
>  }
>  EXPORT_SYMBOL_GPL(trace_call_bpf);
>
> +#ifdef CONFIG_BPF_KPROBE_OVERRIDE
> +BPF_CALL_2(bpf_override_return, struct pt_regs *, regs, unsigned long, rc)
> +{
> +	__this_cpu_write(pc_modified, 1);
> +	regs_set_return_value(regs, rc);
> +	arch_ftrace_kprobe_override_function(regs);
> +	return 0;
> +}
> +#else
> +BPF_CALL_2(bpf_override_return, struct pt_regs *, regs, unsigned long, rc)
> +{
> +	return -EINVAL;
> +}
> +#endif
> +
> +static const struct bpf_func_proto bpf_override_return_proto = {
> +	.func		= bpf_override_return,
> +	.gpl_only	= true,
> +	.ret_type	= RET_INTEGER,
> +	.arg1_type	= ARG_PTR_TO_CTX,
> +	.arg2_type	= ARG_ANYTHING,
> +};
> +
>  BPF_CALL_3(bpf_probe_read, void *, dst, u32, size, const void *, unsafe_ptr)
>  {
>  	int ret;
> @@ -551,6 +586,8 @@ static const struct bpf_func_proto *kprobe_prog_func_proto(enum bpf_func_id func
>  		return &bpf_get_stackid_proto;
>  	case BPF_FUNC_perf_event_read_value:
>  		return &bpf_perf_event_read_value_proto;
> +	case BPF_FUNC_override_return:
> +		return &bpf_override_return_proto;

good call to allow it on kprobes only. It probably needs to be
tightened further to allow it in ftrace-based kprobes only.
imo 'depends on KPROBES_ON_FTRACE' isn't enough, since a kprobe in
the middle of a function will still work via trap and won't work with
this override_func().
>  	default:
>  		return tracing_func_proto(func_id);
>  	}
> diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
> index 652c682707cd..317ff2e961ac 100644
> --- a/kernel/trace/trace.h
> +++ b/kernel/trace/trace.h
> @@ -278,6 +278,12 @@ enum {
>  	TRACE_ARRAY_FL_GLOBAL	= (1 << 0)
>  };
>
> +enum {
> +	TRACE_KPROBE_SKIP = 0,
> +	TRACE_KPROBE_STORE,
> +	TRACE_KPROBE_MODIFIED,
> +};
> +
>  extern struct list_head ftrace_trace_arrays;
>
>  extern struct mutex trace_types_lock;
> diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
> index abf92e478cfb..722fc6568134 100644
> --- a/kernel/trace/trace_kprobe.c
> +++ b/kernel/trace/trace_kprobe.c
> @@ -1170,7 +1170,7 @@ static int kretprobe_event_define_fields(struct trace_event_call *event_call)
>  #ifdef CONFIG_PERF_EVENTS
>
>  /* Kprobe profile handler */
> -static void
> +static int
>  kprobe_perf_func(struct trace_kprobe *tk, struct pt_regs *regs)
>  {
>  	struct trace_event_call *call = &tk->tp.call;
> @@ -1179,12 +1179,19 @@ kprobe_perf_func(struct trace_kprobe *tk, struct pt_regs *regs)
>  	int size, __size, dsize;
>  	int rctx;
>
> -	if (bpf_prog_array_valid(call) && !trace_call_bpf(call, regs))
> -		return;
> +	if (bpf_prog_array_valid(call)) {
> +		int ret = trace_call_bpf(call, regs);

actually, can we keep trace_call_bpf() as-is and move the
if (__this_cpu_read(pc_modified)) logic into here?
I think kprobe_perf_func() runs with preemption disabled.
Maybe a specialized trace_call_kprobe_bpf() would be better still,
to avoid double preempt_disable.