linux-kernel.vger.kernel.org archive mirror
* [PATCH v4 00/10] user_events: Enable user processes to create and write to trace events
@ 2021-11-04 17:04 Beau Belgrave
  2021-11-04 17:04 ` [PATCH v4 01/10] user_events: Add UABI header for user access to user_events Beau Belgrave
                   ` (9 more replies)
  0 siblings, 10 replies; 47+ messages in thread
From: Beau Belgrave @ 2021-11-04 17:04 UTC (permalink / raw)
  To: rostedt, mhiramat; +Cc: linux-trace-devel, linux-kernel, beaub

User mode processes that wish to use trace events to get data into
ftrace, perf, eBPF, etc. are limited to uprobes today. The user_events
feature enables an ABI for user mode processes to create and write to
trace events that are isolated from kernel level trace events. This
enables a faster path for tracing user mode data and opens trace
events up to managed code, where stub locations are dynamic.

User processes often want to trace only when it's useful. To enable
this, a set of pages is mapped into the user process space to indicate
the current state of the registered user events. User processes can
check if their event is hooked to a trace/probe, and if it is, emit
the event data out via the write() syscall.

Two new files are introduced into tracefs to accomplish this:
user_events_status - This file is mmap'd into participating user mode
processes to indicate event status.

user_events_data - This file is opened and register/delete ioctl's are
issued to create/open/delete trace events that can be used for tracing.

The typical scenario is to mmap user_events_status on process start.
Processes then register the events they plan to use via the REG ioctl. The
ioctl reads and updates the passed-in user_reg struct. The status_index of
the struct identifies the byte in the status page to check for that event.
The write_index of the struct identifies that event when writing out to the
fd that was used for the ioctl call. The data must always start with this
index when writing out data for an event. Data can be written either by
write() or by writev().

For example, in memory:
int index;
char data[];

Pseudo code example of typical usage:
struct user_reg reg;

int page_fd = open("user_events_status", O_RDWR);
char *page_data = mmap(NULL, PAGE_SIZE, PROT_READ, MAP_SHARED, page_fd, 0);
close(page_fd);

int data_fd = open("user_events_data", O_RDWR);

reg.size = sizeof(reg);
reg.name_args = (__u64)(uintptr_t)"test";

ioctl(data_fd, DIAG_IOCSREG, &reg);
int status_id = reg.status_index;
int write_id = reg.write_index;

struct iovec io[2];
io[0].iov_base = &write_id;
io[0].iov_len = sizeof(write_id);
io[1].iov_base = payload;
io[1].iov_len = sizeof(payload);

if (page_data[status_id])
	writev(data_fd, io, 2);

User events are also exposed via the dynamic_events tracefs file for
both create and delete. Current status is exposed via the user_events_status
tracefs file.

Simple example to register a user event via dynamic_events:
	echo u:test >> dynamic_events
	cat dynamic_events
	u:test

If an event is hooked to a probe, the hooked probe shows up:
	echo 1 > events/user_events/test/enable
	cat user_events_status
	1:test # Used by ftrace

	Active: 1
	Busy: 1
	Max: 4096

If an event is not hooked to a probe, no probe status shows up:
	echo 0 > events/user_events/test/enable
	cat user_events_status
	1:test

	Active: 1
	Busy: 0
	Max: 4096

Users can describe the trace event format via the following format:
	name[:FLAG1[,FLAG2...]] [field1[;field2...]]

Each field has the following format:
	type name

Example for char array with a size of 20 named msg:
	echo 'u:detailed char[20] msg' >> dynamic_events
	cat dynamic_events
	u:detailed char[20] msg

Data offsets are based on the data written out via write() and will be
updated to reflect the correct offset in the trace_event fields. For dynamic
data it is recommended to use the new __rel_loc data type. This type is the
same as __data_loc, but the offset is relative to this entry. This allows
user_events to ignore whatever common fields are inserted before the data.

The above format is valid for both the ioctl and the dynamic_events file.

V2:
Fixed kmalloc vs kzalloc for register_page.
Renamed user_event_mmap to user_event_status.
Renamed user_event prefix from ue to u.
Added seq_* operations to user_event_status to enable cat output.
Aligned field parsing to synth_events format (+ size specifier for
custom/user types).
Added uapi header user_events.h to align kernel and user ABI definitions.

V3:
Updated ABI to handle single FD into many events via an int header.
Added iovec/writev support to enable int header without payload changes.
Updated bpf context to describe if data is coming from user, kernel or
raw iovec.
Added flag support for registering an event, allows forcing BPF to always
receive the direct iovecs for sensitive code paths that do not want
copies.

V4:
Moved to struct user_reg for registering events via ioctl.
Added unit tests for ftrace, dyn_events and perf integration.
Added print_fmt generation and proper dyn_events matching statements.
Reduced time in preemption disabled paths.
Added documentation file.
Pre-fault in data when preemption is enabled and use no-fault copy in probes.
Fixed MIPS missing PAGE_READONLY define.

Beau Belgrave (10):
  user_events: Add UABI header for user access to user_events
  user_events: Add minimal support for trace_event into ftrace
  user_events: Add print_fmt generation support for basic types
  user_events: Handle matching arguments from dyn_events
  user_events: Add basic perf and eBPF support
  user_events: Add self-test for ftrace integration
  user_events: Add self-test for dynamic_events integration
  user_events: Add self-test for perf_event integration
  user_events: Optimize writing events by only copying data once
  user_events: Add documentation file

 Documentation/trace/user_events.rst           |  298 ++++
 include/uapi/linux/user_events.h              |   68 +
 kernel/trace/Kconfig                          |   15 +
 kernel/trace/Makefile                         |    1 +
 kernel/trace/trace_events_user.c              | 1413 +++++++++++++++++
 tools/testing/selftests/user_events/Makefile  |    9 +
 .../testing/selftests/user_events/dyn_test.c  |  122 ++
 .../selftests/user_events/ftrace_test.c       |  205 +++
 .../testing/selftests/user_events/perf_test.c |  168 ++
 tools/testing/selftests/user_events/settings  |    1 +
 10 files changed, 2300 insertions(+)
 create mode 100644 Documentation/trace/user_events.rst
 create mode 100644 include/uapi/linux/user_events.h
 create mode 100644 kernel/trace/trace_events_user.c
 create mode 100644 tools/testing/selftests/user_events/Makefile
 create mode 100644 tools/testing/selftests/user_events/dyn_test.c
 create mode 100644 tools/testing/selftests/user_events/ftrace_test.c
 create mode 100644 tools/testing/selftests/user_events/perf_test.c
 create mode 100644 tools/testing/selftests/user_events/settings

-- 
2.17.1



* [PATCH v4 01/10] user_events: Add UABI header for user access to user_events
  2021-11-04 17:04 [PATCH v4 00/10] user_events: Enable user processes to create and write to trace events Beau Belgrave
@ 2021-11-04 17:04 ` Beau Belgrave
  2021-11-04 17:04 ` [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace Beau Belgrave
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 47+ messages in thread
From: Beau Belgrave @ 2021-11-04 17:04 UTC (permalink / raw)
  To: rostedt, mhiramat; +Cc: linux-trace-devel, linux-kernel, beaub

Define the basic structs and ioctl commands that allow user processes to
interact with user_events.

Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
---
 include/uapi/linux/user_events.h | 68 ++++++++++++++++++++++++++++++++
 1 file changed, 68 insertions(+)
 create mode 100644 include/uapi/linux/user_events.h

diff --git a/include/uapi/linux/user_events.h b/include/uapi/linux/user_events.h
new file mode 100644
index 000000000000..5bff99418deb
--- /dev/null
+++ b/include/uapi/linux/user_events.h
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * Copyright (c) 2021, Microsoft Corporation.
+ *
+ * Authors:
+ *   Beau Belgrave <beaub@linux.microsoft.com>
+ */
+#ifndef _UAPI_LINUX_USER_EVENTS_H
+#define _UAPI_LINUX_USER_EVENTS_H
+
+#include <linux/types.h>
+#include <linux/ioctl.h>
+
+#ifdef __KERNEL__
+#include <linux/uio.h>
+#else
+#include <sys/uio.h>
+#endif
+
+#define USER_EVENTS_SYSTEM "user_events"
+#define USER_EVENTS_PREFIX "u:"
+
+/* Bits 0-6 are for known probe types, Bit 7 is for unknown probes */
+#define EVENT_BIT_FTRACE 0
+#define EVENT_BIT_PERF 1
+#define EVENT_BIT_OTHER 7
+
+#define EVENT_STATUS_FTRACE (1 << EVENT_BIT_FTRACE)
+#define EVENT_STATUS_PERF (1 << EVENT_BIT_PERF)
+#define EVENT_STATUS_OTHER (1 << EVENT_BIT_OTHER)
+
+/* Use raw iterator for attached BPF program(s), no effect on ftrace/perf */
+#define FLAG_BPF_ITER (1 << 0)
+
+struct user_reg {
+	__u32 size;
+	__u64 name_args;
+	__u32 status_index;
+	__u32 write_index;
+};
+
+#define DIAG_IOC_MAGIC '*'
+#define DIAG_IOCSREG _IOWR(DIAG_IOC_MAGIC, 0, struct user_reg*)
+#define DIAG_IOCSDEL _IOW(DIAG_IOC_MAGIC, 1, char*)
+
+enum {
+	USER_BPF_DATA_KERNEL,
+	USER_BPF_DATA_USER,
+	USER_BPF_DATA_ITER,
+};
+
+struct user_bpf_iter {
+	__u32 iov_offset;
+	__u32 nr_segs;
+	const struct iovec *iov;
+};
+
+struct user_bpf_context {
+	__u32 data_type;
+	__u32 data_len;
+	union {
+		void *kdata;
+		void *udata;
+		struct user_bpf_iter *iter;
+	};
+};
+
+#endif /* _UAPI_LINUX_USER_EVENTS_H */
-- 
2.17.1



* [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace
  2021-11-04 17:04 [PATCH v4 00/10] user_events: Enable user processes to create and write to trace events Beau Belgrave
  2021-11-04 17:04 ` [PATCH v4 01/10] user_events: Add UABI header for user access to user_events Beau Belgrave
@ 2021-11-04 17:04 ` Beau Belgrave
  2021-11-04 21:34   ` kernel test robot
                     ` (2 more replies)
  2021-11-04 17:04 ` [PATCH v4 03/10] user_events: Add print_fmt generation support for basic types Beau Belgrave
                   ` (7 subsequent siblings)
  9 siblings, 3 replies; 47+ messages in thread
From: Beau Belgrave @ 2021-11-04 17:04 UTC (permalink / raw)
  To: rostedt, mhiramat; +Cc: linux-trace-devel, linux-kernel, beaub

Minimal support for interacting with dynamic events, trace_event and
ftrace. Core outline of flow between user process, ioctl and trace_event
APIs.

Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
---
 kernel/trace/Kconfig             |   15 +
 kernel/trace/Makefile            |    1 +
 kernel/trace/trace_events_user.c | 1140 ++++++++++++++++++++++++++++++
 3 files changed, 1156 insertions(+)
 create mode 100644 kernel/trace/trace_events_user.c

diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index 3ee23f4d437f..deaaad421be4 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -725,6 +725,21 @@ config SYNTH_EVENTS
 
 	  If in doubt, say N.
 
+config USER_EVENTS
+	bool "User trace events"
+	select TRACING
+	select DYNAMIC_EVENTS
+	default n
+	help
+	  User trace events are user-defined trace events that
+	  can be used like an existing kernel trace event.  User trace
+	  events are generated by writing to a tracefs file.  User
+	  processes can determine if their tracing events should be
+	  generated by memory mapping a tracefs file and checking for
+	  an associated byte being non-zero.
+
+	  If in doubt, say N.
+
 config HIST_TRIGGERS
 	bool "Histogram triggers"
 	depends on ARCH_HAVE_NMI_SAFE_CMPXCHG
diff --git a/kernel/trace/Makefile b/kernel/trace/Makefile
index b1c47ccf4f73..a653b255e89c 100644
--- a/kernel/trace/Makefile
+++ b/kernel/trace/Makefile
@@ -80,6 +80,7 @@ obj-$(CONFIG_EVENT_TRACING) += trace_events_trigger.o
 obj-$(CONFIG_TRACE_EVENT_INJECT) += trace_events_inject.o
 obj-$(CONFIG_SYNTH_EVENTS) += trace_events_synth.o
 obj-$(CONFIG_HIST_TRIGGERS) += trace_events_hist.o
+obj-$(CONFIG_USER_EVENTS) += trace_events_user.o
 obj-$(CONFIG_BPF_EVENTS) += bpf_trace.o
 obj-$(CONFIG_KPROBE_EVENTS) += trace_kprobe.o
 obj-$(CONFIG_TRACEPOINTS) += error_report-traces.o
diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
new file mode 100644
index 000000000000..a68017ad7fdd
--- /dev/null
+++ b/kernel/trace/trace_events_user.c
@@ -0,0 +1,1140 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, Microsoft Corporation.
+ *
+ * Authors:
+ *   Beau Belgrave <beaub@linux.microsoft.com>
+ */
+
+#include <linux/bitmap.h>
+#include <linux/cdev.h>
+#include <linux/hashtable.h>
+#include <linux/list.h>
+#include <linux/io.h>
+#include <linux/uio.h>
+#include <linux/ioctl.h>
+#include <linux/jhash.h>
+#include <linux/trace_events.h>
+#include <linux/tracefs.h>
+#include <linux/types.h>
+#include <linux/uaccess.h>
+#include <uapi/linux/user_events.h>
+#include "trace.h"
+#include "trace_dynevent.h"
+
+#define USER_EVENTS_PREFIX_LEN (sizeof(USER_EVENTS_PREFIX)-1)
+
+#define FIELD_DEPTH_TYPE 0
+#define FIELD_DEPTH_NAME 1
+#define FIELD_DEPTH_SIZE 2
+
+/*
+ * Limits how many trace_event calls user processes can create:
+ * Must be multiple of PAGE_SIZE.
+ */
+#define MAX_PAGES 1
+#define MAX_EVENTS (MAX_PAGES * PAGE_SIZE)
+
+/* Limit the length of an event name plus args within the subsystem. */
+#define MAX_EVENT_DESC 512
+#define EVENT_NAME(user_event) ((user_event)->tracepoint.name)
+
+static char *register_page_data;
+
+static DEFINE_MUTEX(reg_mutex);
+static DEFINE_HASHTABLE(register_table, 4);
+static DECLARE_BITMAP(page_bitmap, MAX_EVENTS);
+
+struct user_event {
+	struct tracepoint tracepoint;
+	struct trace_event_call call;
+	struct trace_event_class class;
+	struct dyn_event devent;
+	struct hlist_node node;
+	struct list_head fields;
+	atomic_t refcnt;
+	int index;
+	int flags;
+};
+
+struct user_event_refs {
+	struct rcu_head rcu;
+	int count;
+	struct user_event *events[];
+};
+
+typedef void (*user_event_func_t) (struct user_event *user,
+				   void *data, u32 datalen,
+				   void *tpdata);
+
+static int user_event_parse(char *name, char *args, char *flags,
+			    struct user_event **newuser);
+
+static u32 user_event_key(char *name)
+{
+	return jhash(name, strlen(name), 0);
+}
+
+static struct list_head *user_event_get_fields(struct trace_event_call *call)
+{
+	struct user_event *user = (struct user_event *)call->data;
+
+	return &user->fields;
+}
+
+/*
+ * Parses a register command for user_events
+ * Format: event_name[:FLAG1[,FLAG2...]] [field1[;field2...]]
+ *
+ * Example event named test with a 20 char msg field with an unsigned int after:
+ * test char[20] msg;unsigned int id
+ *
+ * NOTE: Offsets are from the user data perspective, they are not from the
+ * trace_entry/buffer perspective. We automatically add the common properties
+ * sizes to the offset for the user.
+ */
+static int user_event_parse_cmd(char *raw_command, struct user_event **newuser)
+{
+	char *name = raw_command;
+	char *args = strpbrk(name, " ");
+	char *flags;
+
+	if (args)
+		*args++ = 0;
+
+	flags = strpbrk(name, ":");
+
+	if (flags)
+		*flags++ = 0;
+
+	return user_event_parse(name, args, flags, newuser);
+}
+
+static int user_field_array_size(const char *type)
+{
+	const char *start = strchr(type, '[');
+	int size = 0;
+
+	if (start == NULL)
+		return -EINVAL;
+
+	start++;
+
+	while (*start >= '0' && *start <= '9')
+		size = (size * 10) + (*start++ - '0');
+
+	if (*start != ']')
+		return -EINVAL;
+
+	return size;
+}
+
+static int user_field_size(const char *type)
+{
+	/* long is not allowed from a user, since it's ambiguous in size */
+	if (strcmp(type, "s64") == 0)
+		return sizeof(s64);
+	if (strcmp(type, "u64") == 0)
+		return sizeof(u64);
+	if (strcmp(type, "s32") == 0)
+		return sizeof(s32);
+	if (strcmp(type, "u32") == 0)
+		return sizeof(u32);
+	if (strcmp(type, "int") == 0)
+		return sizeof(int);
+	if (strcmp(type, "unsigned int") == 0)
+		return sizeof(unsigned int);
+	if (strcmp(type, "s16") == 0)
+		return sizeof(s16);
+	if (strcmp(type, "u16") == 0)
+		return sizeof(u16);
+	if (strcmp(type, "short") == 0)
+		return sizeof(short);
+	if (strcmp(type, "unsigned short") == 0)
+		return sizeof(unsigned short);
+	if (strcmp(type, "s8") == 0)
+		return sizeof(s8);
+	if (strcmp(type, "u8") == 0)
+		return sizeof(u8);
+	if (strcmp(type, "char") == 0)
+		return sizeof(char);
+	if (strcmp(type, "unsigned char") == 0)
+		return sizeof(unsigned char);
+	if (strstr(type, "char[") == type)
+		return user_field_array_size(type);
+	if (strstr(type, "unsigned char[") == type)
+		return user_field_array_size(type);
+	if (strstr(type, "__data_loc ") == type)
+		return sizeof(u32);
+	if (strstr(type, "__rel_loc ") == type)
+		return sizeof(u32);
+
+	/* Unknown basic type, error */
+	return -EINVAL;
+}
+
+static void user_event_destroy_fields(struct user_event *user)
+{
+	struct ftrace_event_field *field, *next;
+	struct list_head *head = &user->fields;
+
+	list_for_each_entry_safe(field, next, head, link) {
+		list_del(&field->link);
+		kfree(field);
+	}
+}
+
+static int user_event_add_field(struct user_event *user, const char *type,
+				const char *name, int offset, int size,
+				int is_signed, int filter_type)
+{
+	struct ftrace_event_field *field;
+
+	field = kmalloc(sizeof(*field), GFP_KERNEL);
+
+	if (!field)
+		return -ENOMEM;
+
+	field->type = type;
+	field->name = name;
+	field->offset = offset;
+	field->size = size;
+	field->is_signed = is_signed;
+	field->filter_type = filter_type;
+
+	list_add(&field->link, &user->fields);
+
+	return 0;
+}
+
+/*
+ * Parses the values of a field within the description
+ * Format: type name [size]
+ */
+static int user_event_parse_field(char *field, struct user_event *user,
+				  u32 *offset)
+{
+	char *part, *type, *name;
+	u32 depth = 0, saved_offset = *offset;
+	int size = -EINVAL;
+	bool is_struct = false;
+
+	field = skip_spaces(field);
+
+	if (*field == 0)
+		return 0;
+
+	/* Handle types that have a space within */
+	if (strstr(field, "unsigned ") == field) {
+		type = field;
+		field = strpbrk(field + sizeof("unsigned"), " ");
+		goto skip_next;
+	} else if (strstr(field, "struct ") == field) {
+		type = field;
+		field = strpbrk(field + sizeof("struct"), " ");
+		is_struct = true;
+		goto skip_next;
+	} else if (strstr(field, "__data_loc unsigned ") == field) {
+		type = field;
+		field = strpbrk(field + sizeof("__data_loc unsigned"), " ");
+		goto skip_next;
+	} else if (strstr(field, "__data_loc ") == field) {
+		type = field;
+		field = strpbrk(field + sizeof("__data_loc"), " ");
+		goto skip_next;
+	} else if (strstr(field, "__rel_loc unsigned ") == field) {
+		type = field;
+		field = strpbrk(field + sizeof("__rel_loc unsigned"), " ");
+		goto skip_next;
+	} else if (strstr(field, "__rel_loc ") == field) {
+		type = field;
+		field = strpbrk(field + sizeof("__rel_loc"), " ");
+		goto skip_next;
+	}
+	goto parse;
+skip_next:
+	if (field == NULL)
+		return -EINVAL;
+
+	*field++ = 0;
+	depth++;
+parse:
+	while ((part = strsep(&field, " ")) != NULL) {
+		switch (depth++) {
+		case FIELD_DEPTH_TYPE:
+			type = part;
+			break;
+		case FIELD_DEPTH_NAME:
+			name = part;
+			break;
+		case FIELD_DEPTH_SIZE:
+			if (!is_struct)
+				return -EINVAL;
+
+			if (kstrtou32(part, 10, &size))
+				return -EINVAL;
+			break;
+		default:
+			return -EINVAL;
+		}
+	}
+
+	if (depth < FIELD_DEPTH_SIZE)
+		return -EINVAL;
+
+	if (depth == FIELD_DEPTH_SIZE)
+		size = user_field_size(type);
+
+	if (size == 0)
+		return -EINVAL;
+
+	if (size < 0)
+		return size;
+
+	*offset = saved_offset + size;
+
+	return user_event_add_field(user, type, name, saved_offset, size,
+				    type[0] != 'u', FILTER_OTHER);
+}
+
+static void user_event_parse_flags(struct user_event *user, char *flags)
+{
+	char *flag;
+
+	if (flags == NULL)
+		return;
+
+	while ((flag = strsep(&flags, ",")) != NULL) {
+		if (strcmp(flag, "BPF_ITER") == 0)
+			user->flags |= FLAG_BPF_ITER;
+	}
+}
+
+static int user_event_parse_fields(struct user_event *user, char *args)
+{
+	char *field;
+	u32 offset = sizeof(struct trace_entry);
+	int ret = -EINVAL;
+
+	if (args == NULL)
+		return 0;
+
+	while ((field = strsep(&args, ";")) != NULL) {
+		ret = user_event_parse_field(field, user, &offset);
+
+		if (ret)
+			break;
+	}
+
+	return ret;
+}
+
+static struct trace_event_fields user_event_fields_array[] = {
+	{}
+};
+
+static enum print_line_t user_event_print_trace(struct trace_iterator *iter,
+						int flags,
+						struct trace_event *event)
+{
+	/* Unsafe to try to decode user provided print_fmt, use hex */
+	trace_print_hex_dump_seq(&iter->seq, "", DUMP_PREFIX_OFFSET, 16,
+				 1, iter->ent, iter->ent_size, true);
+
+	return trace_handle_return(&iter->seq);
+}
+
+static struct trace_event_functions user_event_funcs = {
+	.trace = user_event_print_trace,
+};
+
+static int destroy_user_event(struct user_event *user)
+{
+	int ret = 0;
+
+	/* Must destroy fields before call removal */
+	user_event_destroy_fields(user);
+
+	ret = trace_remove_event_call(&user->call);
+
+	if (ret)
+		return ret;
+
+	dyn_event_remove(&user->devent);
+
+	register_page_data[user->index] = 0;
+	clear_bit(user->index, page_bitmap);
+	hash_del(&user->node);
+
+	kfree(EVENT_NAME(user));
+	kfree(user);
+
+	return ret;
+}
+
+static struct user_event *find_user_event(char *name, u32 *outkey)
+{
+	struct user_event *user;
+	u32 key = user_event_key(name);
+
+	*outkey = key;
+
+	hash_for_each_possible(register_table, user, node, key)
+		if (!strcmp(EVENT_NAME(user), name))
+			return user;
+
+	return NULL;
+}
+
+/*
+ * Writes the user supplied payload out to a trace file.
+ */
+static void user_event_ftrace(struct user_event *user, void *data, u32 datalen,
+			      void *tpdata)
+{
+	struct trace_event_file *file;
+	struct trace_entry *entry;
+	struct trace_event_buffer event_buffer;
+
+	file = (struct trace_event_file *)tpdata;
+
+	if (!file ||
+	    !(file->flags & EVENT_FILE_FL_ENABLED) ||
+	    trace_trigger_soft_disabled(file))
+		return;
+
+	entry = trace_event_buffer_reserve(&event_buffer, file,
+					   sizeof(*entry) + datalen);
+
+	if (unlikely(!entry))
+		return;
+
+	memcpy(entry + 1, data, datalen);
+
+	trace_event_buffer_commit(&event_buffer);
+}
+
+/*
+ * Update the register page that is shared between user processes.
+ */
+static void update_reg_page_for(struct user_event *user)
+{
+	struct tracepoint *tp = &user->tracepoint;
+	char status = 0;
+
+	if (atomic_read(&tp->key.enabled) > 0) {
+		struct tracepoint_func *probe_func_ptr;
+		user_event_func_t probe_func;
+
+		rcu_read_lock_sched();
+
+		probe_func_ptr = rcu_dereference_sched(tp->funcs);
+
+		if (probe_func_ptr) {
+			do {
+				probe_func = probe_func_ptr->func;
+
+				if (probe_func == user_event_ftrace)
+					status |= EVENT_STATUS_FTRACE;
+				else
+					status |= EVENT_STATUS_OTHER;
+			} while ((++probe_func_ptr)->func);
+		}
+
+		rcu_read_unlock_sched();
+	}
+
+	register_page_data[user->index] = status;
+}
+
+/*
+ * Register callback for our events from tracing sub-systems.
+ */
+static int user_event_reg(struct trace_event_call *call,
+			  enum trace_reg type,
+			  void *data)
+{
+	struct user_event *user = (struct user_event *)call->data;
+	int ret = 0;
+
+	if (!user)
+		return -ENOENT;
+
+	switch (type) {
+	case TRACE_REG_REGISTER:
+		ret = tracepoint_probe_register(call->tp,
+						call->class->probe,
+						data);
+		if (!ret)
+			goto inc;
+		break;
+
+	case TRACE_REG_UNREGISTER:
+		tracepoint_probe_unregister(call->tp,
+					    call->class->probe,
+					    data);
+		goto dec;
+
+#ifdef CONFIG_PERF_EVENTS
+	case TRACE_REG_PERF_REGISTER:
+	case TRACE_REG_PERF_UNREGISTER:
+	case TRACE_REG_PERF_OPEN:
+	case TRACE_REG_PERF_CLOSE:
+	case TRACE_REG_PERF_ADD:
+	case TRACE_REG_PERF_DEL:
+		break;
+#endif
+	}
+
+	return ret;
+inc:
+	atomic_inc(&user->refcnt);
+	update_reg_page_for(user);
+	return 0;
+dec:
+	update_reg_page_for(user);
+	atomic_dec(&user->refcnt);
+	return 0;
+}
+
+static int user_event_create(const char *raw_command)
+{
+	struct user_event *user;
+	char *name;
+	int ret;
+
+	if (strstr(raw_command, USER_EVENTS_PREFIX) != raw_command)
+		return -ECANCELED;
+
+	raw_command += USER_EVENTS_PREFIX_LEN;
+	raw_command = skip_spaces(raw_command);
+
+	name = kstrdup(raw_command, GFP_KERNEL);
+
+	if (!name)
+		return -ENOMEM;
+
+	mutex_lock(&reg_mutex);
+	ret = user_event_parse_cmd(name, &user);
+	mutex_unlock(&reg_mutex);
+
+	return ret;
+}
+
+static int user_event_show(struct seq_file *m, struct dyn_event *ev)
+{
+	struct user_event *user = container_of(ev, struct user_event, devent);
+	struct ftrace_event_field *field, *next;
+	struct list_head *head;
+	int depth = 0;
+
+	seq_printf(m, "%s%s", USER_EVENTS_PREFIX, EVENT_NAME(user));
+
+	head = trace_get_fields(&user->call);
+
+	list_for_each_entry_safe_reverse(field, next, head, link) {
+		if (depth == 0)
+			seq_puts(m, " ");
+		else
+			seq_puts(m, "; ");
+		seq_printf(m, "%s %s", field->type, field->name);
+		depth++;
+	}
+
+	seq_puts(m, "\n");
+
+	return 0;
+}
+
+static bool user_event_is_busy(struct dyn_event *ev)
+{
+	struct user_event *user = container_of(ev, struct user_event, devent);
+
+	return atomic_read(&user->refcnt) != 0;
+}
+
+static int user_event_free(struct dyn_event *ev)
+{
+	struct user_event *user = container_of(ev, struct user_event, devent);
+
+	return destroy_user_event(user);
+}
+
+static bool user_event_match(const char *system, const char *event,
+			     int argc, const char **argv, struct dyn_event *ev)
+{
+	struct user_event *user = container_of(ev, struct user_event, devent);
+
+	return strcmp(EVENT_NAME(user), event) == 0 &&
+		(!system || strcmp(system, USER_EVENTS_SYSTEM) == 0);
+}
+
+static struct dyn_event_operations user_event_dops = {
+	.create = user_event_create,
+	.show = user_event_show,
+	.is_busy = user_event_is_busy,
+	.free = user_event_free,
+	.match = user_event_match,
+};
+
+static int user_event_trace_register(struct user_event *user)
+{
+	int ret;
+
+	ret = register_trace_event(&user->call.event);
+
+	if (!ret)
+		return -ENODEV;
+
+	ret = trace_add_event_call(&user->call);
+
+	if (ret)
+		unregister_trace_event(&user->call.event);
+
+	return ret;
+}
+
+/*
+ * Parses the event name, arguments and flags then registers if successful.
+ */
+static int user_event_parse(char *name, char *args, char *flags,
+			    struct user_event **newuser)
+{
+	int ret;
+	int index;
+	u32 key;
+	struct user_event *user = find_user_event(name, &key);
+
+	if (user) {
+		*newuser = user;
+		ret = 0;
+		goto put_name;
+	}
+
+	index = find_first_zero_bit(page_bitmap, MAX_EVENTS);
+
+	if (index == MAX_EVENTS) {
+		ret = -EMFILE;
+		goto put_name;
+	}
+
+	user = kzalloc(sizeof(*user), GFP_KERNEL);
+
+	if (!user) {
+		ret = -ENOMEM;
+		goto put_name;
+	}
+
+	INIT_LIST_HEAD(&user->class.fields);
+	INIT_LIST_HEAD(&user->fields);
+
+	user->tracepoint.name = name;
+
+	user_event_parse_flags(user, flags);
+
+	ret = user_event_parse_fields(user, args);
+
+	if (ret)
+		goto put_user;
+
+	/* Minimal print format */
+	user->call.print_fmt = "\"\"";
+
+	user->call.data = user;
+	user->call.class = &user->class;
+	user->call.name = name;
+	user->call.flags = TRACE_EVENT_FL_TRACEPOINT;
+	user->call.tp = &user->tracepoint;
+	user->call.event.funcs = &user_event_funcs;
+
+	user->class.system = USER_EVENTS_SYSTEM;
+	user->class.fields_array = user_event_fields_array;
+	user->class.get_fields = user_event_get_fields;
+	user->class.reg = user_event_reg;
+	user->class.probe = user_event_ftrace;
+
+	mutex_lock(&event_mutex);
+	ret = user_event_trace_register(user);
+	mutex_unlock(&event_mutex);
+
+	if (ret)
+		goto put_user;
+
+	user->index = index;
+	dyn_event_init(&user->devent, &user_event_dops);
+	dyn_event_add(&user->devent);
+	set_bit(user->index, page_bitmap);
+	hash_add(register_table, &user->node, key);
+
+	*newuser = user;
+	return 0;
+put_user:
+	user_event_destroy_fields(user);
+	kfree(user);
+put_name:
+	kfree(name);
+	return ret;
+}
+
+/*
+ * Deletes a previously created event if it is no longer being used.
+ */
+static int delete_user_event(char *name)
+{
+	u32 key;
+	int ret;
+	struct user_event *user = find_user_event(name, &key);
+
+	if (!user)
+		return -ENOENT;
+
+	if (atomic_read(&user->refcnt) != 0)
+		return -EBUSY;
+
+	mutex_lock(&event_mutex);
+	ret = destroy_user_event(user);
+	mutex_unlock(&event_mutex);
+
+	return ret;
+}
+
+/*
+ * Validates the user payload and writes via iterator.
+ */
+static ssize_t user_events_write_core(struct file *file, struct iov_iter *i)
+{
+	struct user_event_refs *refs;
+	struct user_event *user = NULL;
+	struct tracepoint *tp;
+	ssize_t ret = i->count;
+	int idx;
+
+	if (unlikely(copy_from_iter(&idx, sizeof(idx), i) != sizeof(idx)))
+		return -EFAULT;
+
+	rcu_read_lock_sched();
+
+	refs = rcu_dereference_sched(file->private_data);
+
+	if (likely(refs && idx < refs->count))
+		user = refs->events[idx];
+
+	rcu_read_unlock_sched();
+
+	if (unlikely(user == NULL))
+		return -ENOENT;
+
+	tp = &user->tracepoint;
+
+	if (likely(atomic_read(&tp->key.enabled) > 0)) {
+		struct tracepoint_func *probe_func_ptr;
+		user_event_func_t probe_func;
+		void *tpdata;
+		void *kdata;
+		u32 datalen;
+
+		kdata = kmalloc(i->count, GFP_KERNEL);
+
+		if (unlikely(!kdata))
+			return -ENOMEM;
+
+		datalen = copy_from_iter(kdata, i->count, i);
+
+		rcu_read_lock_sched();
+
+		probe_func_ptr = rcu_dereference_sched(tp->funcs);
+
+		if (probe_func_ptr) {
+			do {
+				probe_func = probe_func_ptr->func;
+				tpdata = probe_func_ptr->data;
+				probe_func(user, kdata, datalen, tpdata);
+			} while ((++probe_func_ptr)->func);
+		}
+
+		rcu_read_unlock_sched();
+
+		kfree(kdata);
+	}
+
+	return ret;
+}
+
+static ssize_t user_events_write(struct file *file, const char __user *ubuf,
+				 size_t count, loff_t *ppos)
+{
+	struct iovec iov;
+	struct iov_iter i;
+
+	if (unlikely(*ppos != 0))
+		return -EFAULT;
+
+	if (unlikely(import_single_range(READ, (char *)ubuf, count, &iov, &i)))
+		return -EFAULT;
+
+	return user_events_write_core(file, &i);
+}
+
+static ssize_t user_events_write_iter(struct kiocb *kp, struct iov_iter *i)
+{
+	return user_events_write_core(kp->ki_filp, i);
+}
+
+static int user_events_ref_add(struct file *file, struct user_event *user)
+{
+	struct user_event_refs *refs, *new_refs;
+	int i, size, count = 0;
+
+	rcu_read_lock_sched();
+	refs = rcu_dereference_sched(file->private_data);
+	rcu_read_unlock_sched();
+
+	if (refs) {
+		count = refs->count;
+
+		for (i = 0; i < count; ++i)
+			if (refs->events[i] == user)
+				return i;
+	}
+
+	size = sizeof(*refs) + (sizeof(struct user_event *) * (count + 1));
+
+	new_refs = kzalloc(size, GFP_KERNEL);
+
+	if (!new_refs)
+		return -ENOMEM;
+
+	new_refs->count = count + 1;
+
+	for (i = 0; i < count; ++i)
+		new_refs->events[i] = refs->events[i];
+
+	new_refs->events[i] = user;
+
+	atomic_inc(&user->refcnt);
+
+	rcu_assign_pointer(file->private_data, new_refs);
+
+	if (refs)
+		kfree_rcu(refs, rcu);
+
+	return i;
+}
+
+static long user_reg_get(struct user_reg __user *ureg, struct user_reg *kreg)
+{
+	u32 size;
+	long ret;
+
+	ret = get_user(size, &ureg->size);
+
+	if (ret)
+		return ret;
+
+	if (size > PAGE_SIZE)
+		return -E2BIG;
+
+	return copy_struct_from_user(kreg, sizeof(*kreg), ureg, size);
+}
+
+/*
+ * Registers a user_event on behalf of a user process.
+ */
+static long user_events_ioctl_reg(struct file *file, unsigned long uarg)
+{
+	struct user_reg __user *ureg = (struct user_reg __user *)uarg;
+	struct user_reg reg;
+	struct user_event *user;
+	char *name;
+	long ret;
+
+	ret = user_reg_get(ureg, &reg);
+
+	if (ret)
+		return ret;
+
+	name = strndup_user((const char __user *)(uintptr_t)reg.name_args,
+			    MAX_EVENT_DESC);
+
+	if (IS_ERR(name)) {
+		ret = PTR_ERR(name);
+		return ret;
+	}
+
+	ret = user_event_parse_cmd(name, &user);
+
+	if (ret < 0)
+		return ret;
+
+	ret = user_events_ref_add(file, user);
+
+	if (ret < 0)
+		return ret;
+
+	if (put_user((u32)ret, &ureg->write_index) ||
+	    put_user(user->index, &ureg->status_index))
+		return -EFAULT;
+
+	return 0;
+}
+
+/*
+ * Deletes a user_event on behalf of a user process.
+ */
+static long user_events_ioctl_del(struct file *file, unsigned long uarg)
+{
+	void __user *ubuf = (void __user *)uarg;
+	char *name;
+	long ret;
+
+	name = strndup_user(ubuf, MAX_EVENT_DESC);
+
+	if (IS_ERR(name))
+		return PTR_ERR(name);
+
+	ret = delete_user_event(name);
+
+	kfree(name);
+
+	return ret;
+}
+
+/*
+ * Handles the ioctl from user mode to register or alter operations.
+ */
+static long user_events_ioctl(struct file *file, unsigned int cmd,
+			      unsigned long uarg)
+{
+	long ret = -ENOTTY;
+
+	switch (cmd) {
+	case DIAG_IOCSREG:
+		mutex_lock(&reg_mutex);
+		ret = user_events_ioctl_reg(file, uarg);
+		mutex_unlock(&reg_mutex);
+		break;
+
+	case DIAG_IOCSDEL:
+		mutex_lock(&reg_mutex);
+		ret = user_events_ioctl_del(file, uarg);
+		mutex_unlock(&reg_mutex);
+		break;
+	}
+
+	return ret;
+}
+
+/*
+ * Handles the final close of the file from user mode.
+ */
+static int user_events_release(struct inode *node, struct file *file)
+{
+	struct user_event_refs *refs;
+	struct user_event *user;
+	int i;
+
+	rcu_read_lock_sched();
+	refs = rcu_dereference_sched(file->private_data);
+	rcu_read_unlock_sched();
+
+	if (!refs)
+		goto out;
+
+	for (i = 0; i < refs->count; ++i) {
+		user = refs->events[i];
+
+		if (user)
+			atomic_dec(&user->refcnt);
+	}
+
+	kfree_rcu(refs, rcu);
+out:
+	return 0;
+}
+
+static const struct file_operations user_data_fops = {
+	.write = user_events_write,
+	.write_iter = user_events_write_iter,
+	.unlocked_ioctl	= user_events_ioctl,
+	.release = user_events_release,
+};
+
+/*
+ * Maps the shared page into the user process for checking if event is enabled.
+ */
+static int user_status_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	unsigned long size = vma->vm_end - vma->vm_start;
+
+	if (size != MAX_EVENTS)
+		return -EINVAL;
+
+	return remap_pfn_range(vma, vma->vm_start,
+			       virt_to_phys(register_page_data) >> PAGE_SHIFT,
+			       size, vm_get_page_prot(VM_READ));
+}
+
+static int user_status_show(struct seq_file *m, void *p)
+{
+	struct user_event *user;
+	char status;
+	int i, active = 0, busy = 0, flags;
+
+	mutex_lock(&reg_mutex);
+
+	hash_for_each(register_table, i, user, node) {
+		status = register_page_data[user->index];
+		flags = user->flags;
+
+		seq_printf(m, "%d:%s", user->index, EVENT_NAME(user));
+
+		if (flags != 0 || status != 0)
+			seq_puts(m, " #");
+
+		if (status != 0) {
+			seq_puts(m, " Used by");
+			if (status & EVENT_STATUS_FTRACE)
+				seq_puts(m, " ftrace");
+			if (status & EVENT_STATUS_PERF)
+				seq_puts(m, " perf");
+			if (status & EVENT_STATUS_OTHER)
+				seq_puts(m, " other");
+			busy++;
+		}
+
+		if (flags & FLAG_BPF_ITER)
+			seq_puts(m, " FLAG:BPF_ITER");
+
+		seq_puts(m, "\n");
+		active++;
+	}
+
+	mutex_unlock(&reg_mutex);
+
+	seq_puts(m, "\n");
+	seq_printf(m, "Active: %d\n", active);
+	seq_printf(m, "Busy: %d\n", busy);
+	seq_printf(m, "Max: %ld\n", MAX_EVENTS);
+
+	return 0;
+}
+
+static ssize_t user_status_read(struct file *file, char __user *ubuf,
+				size_t count, loff_t *ppos)
+{
+	/*
+	 * Delay allocation of seq data until requested, most callers
+	 * will never read the status file. They will only mmap.
+	 */
+	if (file->private_data == NULL) {
+		int ret;
+
+		if (*ppos != 0)
+			return -EINVAL;
+
+		ret = single_open(file, user_status_show, NULL);
+
+		if (ret)
+			return ret;
+	}
+
+	return seq_read(file, ubuf, count, ppos);
+}
+
+static loff_t user_status_seek(struct file *file, loff_t offset, int whence)
+{
+	if (file->private_data == NULL)
+		return 0;
+
+	return seq_lseek(file, offset, whence);
+}
+
+static int user_status_release(struct inode *node, struct file *file)
+{
+	if (file->private_data == NULL)
+		return 0;
+
+	return single_release(node, file);
+}
+
+static const struct file_operations user_status_fops = {
+	.mmap = user_status_mmap,
+	.read = user_status_read,
+	.llseek  = user_status_seek,
+	.release = user_status_release,
+};
+
+/*
+ * Creates a set of tracefs files to allow user mode interactions.
+ */
+static int create_user_tracefs(void)
+{
+	struct dentry *edata, *emmap;
+
+	edata = tracefs_create_file("user_events_data", 0644, NULL,
+				    NULL, &user_data_fops);
+
+	if (!edata) {
+		pr_warn("Could not create tracefs 'user_events_data' entry\n");
+		goto err;
+	}
+
+	/* mmap with MAP_SHARED requires writable fd */
+	emmap = tracefs_create_file("user_events_status", 0644, NULL,
+				    NULL, &user_status_fops);
+
+	if (!emmap) {
+		tracefs_remove(edata);
+		pr_warn("Could not create tracefs 'user_events_status' entry\n");
+		goto err;
+	}
+
+	return 0;
+err:
+	return -ENODEV;
+}
+
+static void set_page_reservations(bool set)
+{
+	int page;
+
+	for (page = 0; page < MAX_PAGES; ++page) {
+		void *addr = register_page_data + (PAGE_SIZE * page);
+
+		if (set)
+			SetPageReserved(virt_to_page(addr));
+		else
+			ClearPageReserved(virt_to_page(addr));
+	}
+}
+
+static int __init trace_events_user_init(void)
+{
+	int ret;
+
+	/* Zero all bits except bit 0 (which is reserved for failures) */
+	bitmap_zero(page_bitmap, MAX_EVENTS);
+	set_bit(0, page_bitmap);
+
+	register_page_data = kzalloc(MAX_EVENTS, GFP_KERNEL);
+
+	if (!register_page_data)
+		return -ENOMEM;
+
+	set_page_reservations(true);
+
+	ret = create_user_tracefs();
+
+	if (ret) {
+		pr_warn("user_events could not register with tracefs\n");
+		set_page_reservations(false);
+		kfree(register_page_data);
+		return ret;
+	}
+
+	if (dyn_event_register(&user_event_dops))
+		pr_warn("user_events could not register with dyn_events\n");
+
+	return 0;
+}
+
+fs_initcall(trace_events_user_init);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v4 03/10] user_events: Add print_fmt generation support for basic types
  2021-11-04 17:04 [PATCH v4 00/10] user_events: Enable user processes to create and write to trace events Beau Belgrave
  2021-11-04 17:04 ` [PATCH v4 01/10] user_events: Add UABI header for user access to user_events Beau Belgrave
  2021-11-04 17:04 ` [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace Beau Belgrave
@ 2021-11-04 17:04 ` Beau Belgrave
  2021-11-08 22:03   ` Steven Rostedt
  2021-11-04 17:04 ` [PATCH v4 04/10] user_events: Handle matching arguments from dyn_events Beau Belgrave
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 47+ messages in thread
From: Beau Belgrave @ 2021-11-04 17:04 UTC (permalink / raw)
  To: rostedt, mhiramat; +Cc: linux-trace-devel, linux-kernel, beaub

Adds print_fmt generation for the basic types that are supported for
user processes. Only sizes that are the same on 32-bit and 64-bit are
supported.

Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
---
 kernel/trace/trace_events_user.c | 108 ++++++++++++++++++++++++++++++-
 1 file changed, 106 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index a68017ad7fdd..479a9ced3281 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -329,6 +329,107 @@ static int user_event_parse_fields(struct user_event *user, char *args)
 	return ret;
 }
 
+static const char *user_field_format(const char *type)
+{
+	if (strcmp(type, "s64") == 0)
+		return "%lld";
+	if (strcmp(type, "u64") == 0)
+		return "%llu";
+	if (strcmp(type, "s32") == 0)
+		return "%d";
+	if (strcmp(type, "u32") == 0)
+		return "%u";
+	if (strcmp(type, "int") == 0)
+		return "%d";
+	if (strcmp(type, "unsigned int") == 0)
+		return "%u";
+	if (strcmp(type, "s16") == 0)
+		return "%d";
+	if (strcmp(type, "u16") == 0)
+		return "%u";
+	if (strcmp(type, "short") == 0)
+		return "%d";
+	if (strcmp(type, "unsigned short") == 0)
+		return "%u";
+	if (strcmp(type, "s8") == 0)
+		return "%d";
+	if (strcmp(type, "u8") == 0)
+		return "%u";
+	if (strcmp(type, "char") == 0)
+		return "%d";
+	if (strcmp(type, "unsigned char") == 0)
+		return "%u";
+	if (strstr(type, "char[") != 0)
+		return "%s";
+
+	/* Unknown (likely a struct); allowed, treat as 64-bit */
+	return "%llu";
+}
+
+static bool user_field_is_dyn_string(const char *type)
+{
+	if (strstr(type, "__data_loc ") == type ||
+	    strstr(type, "__rel_loc ") == type) {
+		if (strstr(type, "char[") != 0)
+			return true;
+	}
+
+	return false;
+}
+
+#define LEN_OR_ZERO (len ? len - pos : 0)
+static int user_event_set_print_fmt(struct user_event *user, char *buf, int len)
+{
+	struct ftrace_event_field *field, *next;
+	struct list_head *head = &user->fields;
+	int pos = 0, depth = 0;
+
+	pos += snprintf(buf + pos, LEN_OR_ZERO, "\"");
+
+	list_for_each_entry_safe_reverse(field, next, head, link) {
+		if (depth != 0)
+			pos += snprintf(buf + pos, LEN_OR_ZERO, " ");
+
+		pos += snprintf(buf + pos, LEN_OR_ZERO, "%s=%s",
+				field->name, user_field_format(field->type));
+
+		depth++;
+	}
+
+	pos += snprintf(buf + pos, LEN_OR_ZERO, "\"");
+
+	list_for_each_entry_safe_reverse(field, next, head, link) {
+		if (user_field_is_dyn_string(field->type))
+			pos += snprintf(buf + pos, LEN_OR_ZERO,
+					", __get_str(%s)", field->name);
+		else
+			pos += snprintf(buf + pos, LEN_OR_ZERO,
+					", REC->%s", field->name);
+	}
+
+	return pos + 1;
+}
+#undef LEN_OR_ZERO
+
+static int user_event_create_print_fmt(struct user_event *user)
+{
+	char *print_fmt;
+	int len;
+
+	len = user_event_set_print_fmt(user, NULL, 0);
+
+	print_fmt = kmalloc(len, GFP_KERNEL);
+
+	if (!print_fmt)
+		return -ENOMEM;
+
+	user_event_set_print_fmt(user, print_fmt, len);
+
+	user->call.print_fmt = print_fmt;
+
+	return 0;
+}
+
 static struct trace_event_fields user_event_fields_array[] = {
 	{}
 };
@@ -366,6 +467,7 @@ static int destroy_user_event(struct user_event *user)
 	clear_bit(user->index, page_bitmap);
 	hash_del(&user->node);
 
+	kfree(user->call.print_fmt);
 	kfree(EVENT_NAME(user));
 	kfree(user);
 
@@ -637,8 +739,10 @@ static int user_event_parse(char *name, char *args, char *flags,
 	if (ret)
 		goto put_user;
 
-	/* Minimal print format */
-	user->call.print_fmt = "\"\"";
+	ret = user_event_create_print_fmt(user);
+
+	if (ret)
+		goto put_user;
 
 	user->call.data = user;
 	user->call.class = &user->class;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v4 04/10] user_events: Handle matching arguments from dyn_events
  2021-11-04 17:04 [PATCH v4 00/10] user_events: Enable user processes to create and write to trace events Beau Belgrave
                   ` (2 preceding siblings ...)
  2021-11-04 17:04 ` [PATCH v4 03/10] user_events: Add print_fmt generation support for basic types Beau Belgrave
@ 2021-11-04 17:04 ` Beau Belgrave
  2021-11-08 22:05   ` Steven Rostedt
  2021-11-04 17:04 ` [PATCH v4 05/10] user_events: Add basic perf and eBPF support Beau Belgrave
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 47+ messages in thread
From: Beau Belgrave @ 2021-11-04 17:04 UTC (permalink / raw)
  To: rostedt, mhiramat; +Cc: linux-trace-devel, linux-kernel, beaub

Ensures that when dynamic_events requests a match with arguments, the
arguments match those recorded in the user_event.

Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
---
 kernel/trace/trace_events_user.c | 67 +++++++++++++++++++++++++++++++-
 1 file changed, 66 insertions(+), 1 deletion(-)

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index 479a9ced3281..cd78cc481557 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -662,13 +662,78 @@ static int user_event_free(struct dyn_event *ev)
 	return destroy_user_event(user);
 }
 
+static bool user_field_match(struct ftrace_event_field *field, int argc,
+			    const char **argv, int *iout)
+{
+	char field_name[256];
+	char arg_name[256];
+	int len, pos, i = *iout;
+	bool colon = false;
+
+	if (i >= argc)
+		return false;
+
+	len = sizeof(arg_name);
+	pos = 0;
+
+	for (; i < argc; ++i) {
+		if (i != *iout)
+			pos += snprintf(arg_name + pos, len - pos, " ");
+
+		pos += snprintf(arg_name + pos, len - pos, "%s", argv[i]);
+
+		if (strchr(argv[i], ';')) {
+			++i;
+			colon = true;
+			break;
+		}
+	}
+
+	len = sizeof(field_name);
+	pos = 0;
+
+	pos += snprintf(field_name + pos, len - pos, "%s", field->type);
+	pos += snprintf(field_name + pos, len - pos, " ");
+	pos += snprintf(field_name + pos, len - pos, "%s", field->name);
+
+	if (colon)
+		pos += snprintf(field_name + pos, len - pos, ";");
+
+	*iout = i;
+
+	return strcmp(arg_name, field_name) == 0;
+}
+
+static bool user_fields_match(struct user_event *user, int argc,
+			      const char **argv)
+{
+	struct ftrace_event_field *field, *next;
+	struct list_head *head = &user->fields;
+	int i = 0;
+
+	list_for_each_entry_safe_reverse(field, next, head, link)
+		if (!user_field_match(field, argc, argv, &i))
+			return false;
+
+	if (i != argc)
+		return false;
+
+	return true;
+}
+
 static bool user_event_match(const char *system, const char *event,
 			     int argc, const char **argv, struct dyn_event *ev)
 {
 	struct user_event *user = container_of(ev, struct user_event, devent);
+	bool match;
 
-	return strcmp(EVENT_NAME(user), event) == 0 &&
+	match = strcmp(EVENT_NAME(user), event) == 0 &&
 		(!system || strcmp(system, USER_EVENTS_SYSTEM) == 0);
+
+	if (match && argc > 0)
+		match = user_fields_match(user, argc, argv);
+
+	return match;
 }
 
 static struct dyn_event_operations user_event_dops = {
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v4 05/10] user_events: Add basic perf and eBPF support
  2021-11-04 17:04 [PATCH v4 00/10] user_events: Enable user processes to create and write to trace events Beau Belgrave
                   ` (3 preceding siblings ...)
  2021-11-04 17:04 ` [PATCH v4 04/10] user_events: Handle matching arguments from dyn_events Beau Belgrave
@ 2021-11-04 17:04 ` Beau Belgrave
  2021-11-04 17:04 ` [PATCH v4 06/10] user_events: Add self-test for ftrace integration Beau Belgrave
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 47+ messages in thread
From: Beau Belgrave @ 2021-11-04 17:04 UTC (permalink / raw)
  To: rostedt, mhiramat; +Cc: linux-trace-devel, linux-kernel, beaub

Adds support to write out user_event data to perf_probe/perf files as
well as to any attached eBPF program.

Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
---
 kernel/trace/trace_events_user.c | 63 ++++++++++++++++++++++++++++++++
 1 file changed, 63 insertions(+)

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index cd78cc481557..b5fe0550b489 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -516,6 +516,50 @@ static void user_event_ftrace(struct user_event *user, void *data, u32 datalen,
 	trace_event_buffer_commit(&event_buffer);
 }
 
+#ifdef CONFIG_PERF_EVENTS
+/*
+ * Writes the user supplied payload out to perf ring buffer or eBPF program.
+ */
+static void user_event_perf(struct user_event *user, void *data, u32 datalen,
+			    void *tpdata)
+{
+	struct hlist_head *perf_head;
+
+	if (bpf_prog_array_valid(&user->call)) {
+		struct user_bpf_context context = {0};
+
+		context.data_len = datalen;
+		context.data_type = USER_BPF_DATA_KERNEL;
+		context.kdata = data;
+
+		trace_call_bpf(&user->call, &context);
+	}
+
+	perf_head = this_cpu_ptr(user->call.perf_events);
+
+	if (perf_head && !hlist_empty(perf_head)) {
+		struct trace_entry *perf_entry;
+		struct pt_regs *regs;
+		size_t size = sizeof(*perf_entry) + datalen;
+		int context;
+
+		perf_entry = perf_trace_buf_alloc(ALIGN(size, 8),
+						  &regs, &context);
+
+		if (unlikely(!perf_entry))
+			return;
+
+		perf_fetch_caller_regs(regs);
+
+		memcpy(perf_entry + 1, data, datalen);
+
+		perf_trace_buf_submit(perf_entry, size, context,
+				      user->call.event.type, 1, regs,
+				      perf_head, NULL);
+	}
+}
+#endif
+
 /*
  * Update the register page that is shared between user processes.
  */
@@ -538,6 +582,10 @@ static void update_reg_page_for(struct user_event *user)
 
 				if (probe_func == user_event_ftrace)
 					status |= EVENT_STATUS_FTRACE;
+#ifdef CONFIG_PERF_EVENTS
+				else if (probe_func == user_event_perf)
+					status |= EVENT_STATUS_PERF;
+#endif
 				else
 					status |= EVENT_STATUS_OTHER;
 			} while ((++probe_func_ptr)->func);
@@ -579,7 +627,19 @@ static int user_event_reg(struct trace_event_call *call,
 
 #ifdef CONFIG_PERF_EVENTS
 	case TRACE_REG_PERF_REGISTER:
+		ret = tracepoint_probe_register(call->tp,
+						call->class->perf_probe,
+						data);
+		if (!ret)
+			goto inc;
+		break;
+
 	case TRACE_REG_PERF_UNREGISTER:
+		tracepoint_probe_unregister(call->tp,
+					    call->class->perf_probe,
+					    data);
+		goto dec;
+
 	case TRACE_REG_PERF_OPEN:
 	case TRACE_REG_PERF_CLOSE:
 	case TRACE_REG_PERF_ADD:
@@ -821,6 +881,9 @@ static int user_event_parse(char *name, char *args, char *flags,
 	user->class.get_fields = user_event_get_fields;
 	user->class.reg = user_event_reg;
 	user->class.probe = user_event_ftrace;
+#ifdef CONFIG_PERF_EVENTS
+	user->class.perf_probe = user_event_perf;
+#endif
 
 	mutex_lock(&event_mutex);
 	ret = user_event_trace_register(user);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v4 06/10] user_events: Add self-test for ftrace integration
  2021-11-04 17:04 [PATCH v4 00/10] user_events: Enable user processes to create and write to trace events Beau Belgrave
                   ` (4 preceding siblings ...)
  2021-11-04 17:04 ` [PATCH v4 05/10] user_events: Add basic perf and eBPF support Beau Belgrave
@ 2021-11-04 17:04 ` Beau Belgrave
  2021-11-04 17:04 ` [PATCH v4 07/10] user_events: Add self-test for dynamic_events integration Beau Belgrave
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 47+ messages in thread
From: Beau Belgrave @ 2021-11-04 17:04 UTC (permalink / raw)
  To: rostedt, mhiramat; +Cc: linux-trace-devel, linux-kernel, beaub

Tests the basic functionality of registering/deregistering events,
checking event status, and writing data out via the ftrace mechanisms
within user_events.

Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
---
 tools/testing/selftests/user_events/Makefile  |   9 +
 .../selftests/user_events/ftrace_test.c       | 205 ++++++++++++++++++
 tools/testing/selftests/user_events/settings  |   1 +
 3 files changed, 215 insertions(+)
 create mode 100644 tools/testing/selftests/user_events/Makefile
 create mode 100644 tools/testing/selftests/user_events/ftrace_test.c
 create mode 100644 tools/testing/selftests/user_events/settings

diff --git a/tools/testing/selftests/user_events/Makefile b/tools/testing/selftests/user_events/Makefile
new file mode 100644
index 000000000000..d66c551a6fe3
--- /dev/null
+++ b/tools/testing/selftests/user_events/Makefile
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: GPL-2.0
+CFLAGS += -Wl,-no-as-needed -Wall -I../../../../usr/include
+LDLIBS += -lrt -lpthread -lm
+
+TEST_GEN_PROGS = ftrace_test
+
+TEST_FILES := settings
+
+include ../lib.mk
diff --git a/tools/testing/selftests/user_events/ftrace_test.c b/tools/testing/selftests/user_events/ftrace_test.c
new file mode 100644
index 000000000000..9d53717139e6
--- /dev/null
+++ b/tools/testing/selftests/user_events/ftrace_test.c
@@ -0,0 +1,205 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * User Events FTrace Test Program
+ *
+ * Copyright (c) 2021 Beau Belgrave <beaub@linux.microsoft.com>
+ */
+
+#include <errno.h>
+#include <linux/user_events.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <fcntl.h>
+#include <sys/ioctl.h>
+#include <sys/mman.h>
+#include <sys/stat.h>
+#include <sys/uio.h>
+#include <unistd.h>
+
+#include "../kselftest_harness.h"
+
+const char *data_file = "/sys/kernel/debug/tracing/user_events_data";
+const char *status_file = "/sys/kernel/debug/tracing/user_events_status";
+const char *enable_file = "/sys/kernel/debug/tracing/events/user_events/__test_event/enable";
+const char *trace_file = "/sys/kernel/debug/tracing/trace";
+
+static int trace_bytes(void)
+{
+	int fd = open(trace_file, O_RDONLY);
+	char buf[256];
+	int bytes = 0, got;
+
+	if (fd == -1)
+		return -1;
+
+	while (true) {
+		got = read(fd, buf, sizeof(buf));
+
+		if (got == -1)
+			return -1;
+
+		if (got == 0)
+			break;
+
+		bytes += got;
+	}
+
+	close(fd);
+
+	return bytes;
+}
+
+FIXTURE(user) {
+	int status_fd;
+	int data_fd;
+	int enable_fd;
+};
+
+FIXTURE_SETUP(user) {
+	self->status_fd = open(status_file, O_RDONLY);
+	ASSERT_NE(-1, self->status_fd);
+
+	self->data_fd = open(data_file, O_RDWR);
+	ASSERT_NE(-1, self->data_fd);
+
+	self->enable_fd = -1;
+}
+
+FIXTURE_TEARDOWN(user) {
+	close(self->status_fd);
+	close(self->data_fd);
+
+	if (self->enable_fd != -1) {
+		write(self->enable_fd, "0", sizeof("0"));
+		close(self->enable_fd);
+	}
+}
+
+TEST_F(user, register_events) {
+	struct user_reg reg = {0};
+	int page_size = sysconf(_SC_PAGESIZE);
+	char *status_page;
+
+	reg.size = sizeof(reg);
+	reg.name_args = (__u64)"__test_event u32 field1; u32 field2";
+
+	status_page = mmap(NULL, page_size, PROT_READ, MAP_SHARED,
+			   self->status_fd, 0);
+
+	/* Register should work */
+	ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSREG, &reg));
+	ASSERT_EQ(0, reg.write_index);
+	ASSERT_NE(0, reg.status_index);
+
+	/* Multiple registers should result in same index */
+	ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSREG, &reg));
+	ASSERT_EQ(0, reg.write_index);
+	ASSERT_NE(0, reg.status_index);
+
+	/* Ensure disabled */
+	self->enable_fd = open(enable_file, O_RDWR);
+	ASSERT_NE(-1, self->enable_fd);
+	ASSERT_NE(-1, write(self->enable_fd, "0", sizeof("0")));
+
+	/* MMAP should work and be zero'd */
+	ASSERT_NE(MAP_FAILED, status_page);
+	ASSERT_NE(NULL, status_page);
+	ASSERT_EQ(0, status_page[reg.status_index]);
+
+	/* Enable event and ensure bits updated in status */
+	ASSERT_NE(-1, write(self->enable_fd, "1", sizeof("1")));
+	ASSERT_EQ(EVENT_STATUS_FTRACE, status_page[reg.status_index]);
+
+	/* Disable event and ensure bits updated in status */
+	ASSERT_NE(-1, write(self->enable_fd, "0", sizeof("0")));
+	ASSERT_EQ(0, status_page[reg.status_index]);
+
+	/* File still open should return -EBUSY for delete */
+	ASSERT_EQ(-1, ioctl(self->data_fd, DIAG_IOCSDEL, "__test_event"));
+	ASSERT_EQ(EBUSY, errno);
+
+	/* Delete should work only after close */
+	close(self->data_fd);
+	self->data_fd = open(data_file, O_RDWR);
+	ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSDEL, "__test_event"));
+
+	/* Unmap should work */
+	ASSERT_EQ(0, munmap(status_page, page_size));
+}
+
+TEST_F(user, write_events) {
+	struct user_reg reg = {0};
+	struct iovec io[3];
+	__u32 field1, field2;
+	int before = 0, after = 0;
+
+	reg.size = sizeof(reg);
+	reg.name_args = (__u64)"__test_event u32 field1; u32 field2";
+
+	field1 = 1;
+	field2 = 2;
+
+	io[0].iov_base = &reg.write_index;
+	io[0].iov_len = sizeof(reg.write_index);
+	io[1].iov_base = &field1;
+	io[1].iov_len = sizeof(field1);
+	io[2].iov_base = &field2;
+	io[2].iov_len = sizeof(field2);
+
+	/* Register should work */
+	ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSREG, &reg));
+	ASSERT_EQ(0, reg.write_index);
+	ASSERT_NE(0, reg.status_index);
+
+	/* Write should fail on invalid slot with ENOENT */
+	io[0].iov_base = &field2;
+	io[0].iov_len = sizeof(field2);
+	ASSERT_EQ(-1, writev(self->data_fd, (const struct iovec *)io, 3));
+	ASSERT_EQ(ENOENT, errno);
+	io[0].iov_base = &reg.write_index;
+	io[0].iov_len = sizeof(reg.write_index);
+
+	/* Enable event */
+	self->enable_fd = open(enable_file, O_RDWR);
+	ASSERT_NE(-1, write(self->enable_fd, "1", sizeof("1")));
+
+	/* Write should make it out to ftrace buffers */
+	before = trace_bytes();
+	ASSERT_NE(-1, writev(self->data_fd, (const struct iovec *)io, 3));
+	after = trace_bytes();
+	ASSERT_GT(after, before);
+}
+
+TEST_F(user, write_fault) {
+	struct user_reg reg = {0};
+	struct iovec io[2];
+	int l = sizeof(__u64);
+	void *anon;
+
+	reg.size = sizeof(reg);
+	reg.name_args = (__u64)"__test_event u64 anon";
+
+	anon = mmap(NULL, l, PROT_READ, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+	ASSERT_NE(MAP_FAILED, anon);
+
+	io[0].iov_base = &reg.write_index;
+	io[0].iov_len = sizeof(reg.write_index);
+	io[1].iov_base = anon;
+	io[1].iov_len = l;
+
+	/* Register should work */
+	ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSREG, &reg));
+	ASSERT_EQ(0, reg.write_index);
+	ASSERT_NE(0, reg.status_index);
+
+	/* Write should work normally */
+	ASSERT_NE(-1, writev(self->data_fd, (const struct iovec *)io, 2));
+
+	/* Faulted data should zero fill and work */
+	ASSERT_EQ(0, madvise(anon, l, MADV_DONTNEED));
+	ASSERT_NE(-1, writev(self->data_fd, (const struct iovec *)io, 2));
+	ASSERT_EQ(0, munmap(anon, l));
+}
+
+int main(int argc, char **argv)
+{
+	return test_harness_run(argc, argv);
+}
diff --git a/tools/testing/selftests/user_events/settings b/tools/testing/selftests/user_events/settings
new file mode 100644
index 000000000000..ba4d85f74cd6
--- /dev/null
+++ b/tools/testing/selftests/user_events/settings
@@ -0,0 +1 @@
+timeout=90
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v4 07/10] user_events: Add self-test for dynamic_events integration
  2021-11-04 17:04 [PATCH v4 00/10] user_events: Enable user processes to create and write to trace events Beau Belgrave
                   ` (5 preceding siblings ...)
  2021-11-04 17:04 ` [PATCH v4 06/10] user_events: Add self-test for ftrace integration Beau Belgrave
@ 2021-11-04 17:04 ` Beau Belgrave
  2021-11-04 17:04 ` [PATCH v4 08/10] user_events: Add self-test for perf_event integration Beau Belgrave
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 47+ messages in thread
From: Beau Belgrave @ 2021-11-04 17:04 UTC (permalink / raw)
  To: rostedt, mhiramat; +Cc: linux-trace-devel, linux-kernel, beaub

Tests matching on delete, and creation of basic and complex types.
Ensures common patterns work correctly when interacting with the
dynamic_events file.

Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
---
 tools/testing/selftests/user_events/Makefile  |   2 +-
 .../testing/selftests/user_events/dyn_test.c  | 122 ++++++++++++++++++
 2 files changed, 123 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/user_events/dyn_test.c

diff --git a/tools/testing/selftests/user_events/Makefile b/tools/testing/selftests/user_events/Makefile
index d66c551a6fe3..e824b9c2cae7 100644
--- a/tools/testing/selftests/user_events/Makefile
+++ b/tools/testing/selftests/user_events/Makefile
@@ -2,7 +2,7 @@
 CFLAGS += -Wl,-no-as-needed -Wall -I../../../../usr/include
 LDLIBS += -lrt -lpthread -lm
 
-TEST_GEN_PROGS = ftrace_test
+TEST_GEN_PROGS = ftrace_test dyn_test
 
 TEST_FILES := settings
 
diff --git a/tools/testing/selftests/user_events/dyn_test.c b/tools/testing/selftests/user_events/dyn_test.c
new file mode 100644
index 000000000000..b06c1b79a6a2
--- /dev/null
+++ b/tools/testing/selftests/user_events/dyn_test.c
@@ -0,0 +1,122 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * User Events Dyn Events Test Program
+ *
+ * Copyright (c) 2021 Beau Belgrave <beaub@linux.microsoft.com>
+ */
+
+#include <errno.h>
+#include <linux/user_events.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <fcntl.h>
+#include <sys/ioctl.h>
+#include <sys/stat.h>
+#include <unistd.h>
+
+#include "../kselftest_harness.h"
+
+const char *dyn_file = "/sys/kernel/debug/tracing/dynamic_events";
+const char *clear = "!u:__test_event";
+
+static int Append(const char *value)
+{
+	int fd = open(dyn_file, O_RDWR | O_APPEND);
+	int ret;
+
+	if (fd == -1)
+		return -1;
+
+	ret = write(fd, value, strlen(value));
+
+	close(fd);
+	return ret;
+}
+
+#define CLEAR() \
+do { \
+	int ret = Append(clear); \
+	if (ret == -1) \
+		ASSERT_EQ(ENOENT, errno); \
+} while (0)
+
+#define TEST_PARSE(x) \
+do { \
+	ASSERT_NE(-1, Append(x)); \
+	CLEAR(); \
+} while (0)
+
+#define TEST_NPARSE(x) ASSERT_EQ(-1, Append(x))
+
+FIXTURE(user) {
+};
+
+FIXTURE_SETUP(user) {
+	CLEAR();
+}
+
+FIXTURE_TEARDOWN(user) {
+	CLEAR();
+}
+
+TEST_F(user, basic_types) {
+	/* All should work */
+	TEST_PARSE("u:__test_event u64 a");
+	TEST_PARSE("u:__test_event u32 a");
+	TEST_PARSE("u:__test_event u16 a");
+	TEST_PARSE("u:__test_event u8 a");
+	TEST_PARSE("u:__test_event char a");
+	TEST_PARSE("u:__test_event unsigned char a");
+	TEST_PARSE("u:__test_event int a");
+	TEST_PARSE("u:__test_event unsigned int a");
+	TEST_PARSE("u:__test_event short a");
+	TEST_PARSE("u:__test_event unsigned short a");
+	TEST_PARSE("u:__test_event char[20] a");
+	TEST_PARSE("u:__test_event unsigned char[20] a");
+}
+
+TEST_F(user, loc_types) {
+	/* All should work */
+	TEST_PARSE("u:__test_event __data_loc char[] a");
+	TEST_PARSE("u:__test_event __data_loc unsigned char[] a");
+	TEST_PARSE("u:__test_event __rel_loc char[] a");
+	TEST_PARSE("u:__test_event __rel_loc unsigned char[] a");
+}
+
+TEST_F(user, size_types) {
+	/* Should work */
+	TEST_PARSE("u:__test_event struct custom a 20");
+	/* Size not specified on struct should fail */
+	TEST_NPARSE("u:__test_event struct custom a");
+	/* Size specified on non-struct should fail */
+	TEST_NPARSE("u:__test_event char a 20");
+}
+
+TEST_F(user, flags) {
+	/* Should work */
+	TEST_PARSE("u:__test_event:FLAG_BPF_ITER u32 a");
+	/* Forward compat */
+	TEST_PARSE("u:__test_event:FLAG_BPF_ITER,FLAG_FUTURE u32 a");
+}
+
+TEST_F(user, matching) {
+	/* Register */
+	ASSERT_NE(-1, Append("u:__test_event struct custom a 20"));
+	/* Should not match */
+	TEST_NPARSE("!u:__test_event struct custom b");
+	/* Should match */
+	TEST_PARSE("!u:__test_event struct custom a");
+	/* Multi field reg */
+	ASSERT_NE(-1, Append("u:__test_event u32 a; u32 b"));
+	/* Non matching cases */
+	TEST_NPARSE("!u:__test_event u32 a");
+	TEST_NPARSE("!u:__test_event u32 b");
+	TEST_NPARSE("!u:__test_event u32 a; u32 ");
+	TEST_NPARSE("!u:__test_event u32 a; u32 a");
+	/* Matching case */
+	TEST_PARSE("!u:__test_event u32 a; u32 b");
+	/* Register */
+	ASSERT_NE(-1, Append("u:__test_event u32 a; u32 b"));
+	/* Ensure trailing semi-colon case */
+	TEST_PARSE("!u:__test_event u32 a; u32 b;");
+}
+
+int main(int argc, char **argv)
+{
+	return test_harness_run(argc, argv);
+}
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v4 08/10] user_events: Add self-test for perf_event integration
  2021-11-04 17:04 [PATCH v4 00/10] user_events: Enable user processes to create and write to trace events Beau Belgrave
                   ` (6 preceding siblings ...)
  2021-11-04 17:04 ` [PATCH v4 07/10] user_events: Add self-test for dynamic_events integration Beau Belgrave
@ 2021-11-04 17:04 ` Beau Belgrave
  2021-11-04 17:04 ` [PATCH v4 09/10] user_events: Optimize writing events by only copying data once Beau Belgrave
  2021-11-04 17:04 ` [PATCH v4 10/10] user_events: Add documentation file Beau Belgrave
  9 siblings, 0 replies; 47+ messages in thread
From: Beau Belgrave @ 2021-11-04 17:04 UTC (permalink / raw)
  To: rostedt, mhiramat; +Cc: linux-trace-devel, linux-kernel, beaub

Tests that perf can be attached to user_events and that data is written
out correctly. Ensures attaching updates the status bits visible to user
programs.

Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
---
 tools/testing/selftests/user_events/Makefile  |   2 +-
 .../testing/selftests/user_events/perf_test.c | 168 ++++++++++++++++++
 2 files changed, 169 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/user_events/perf_test.c

diff --git a/tools/testing/selftests/user_events/Makefile b/tools/testing/selftests/user_events/Makefile
index e824b9c2cae7..c765d8635d9a 100644
--- a/tools/testing/selftests/user_events/Makefile
+++ b/tools/testing/selftests/user_events/Makefile
@@ -2,7 +2,7 @@
 CFLAGS += -Wl,-no-as-needed -Wall -I../../../../usr/include
 LDLIBS += -lrt -lpthread -lm
 
-TEST_GEN_PROGS = ftrace_test dyn_test
+TEST_GEN_PROGS = ftrace_test dyn_test perf_test
 
 TEST_FILES := settings
 
diff --git a/tools/testing/selftests/user_events/perf_test.c b/tools/testing/selftests/user_events/perf_test.c
new file mode 100644
index 000000000000..26851d51d6bb
--- /dev/null
+++ b/tools/testing/selftests/user_events/perf_test.c
@@ -0,0 +1,168 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * User Events Perf Events Test Program
+ *
+ * Copyright (c) 2021 Beau Belgrave <beaub@linux.microsoft.com>
+ */
+
+#include <errno.h>
+#include <linux/user_events.h>
+#include <linux/perf_event.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <fcntl.h>
+#include <sys/ioctl.h>
+#include <sys/stat.h>
+#include <unistd.h>
+#include <asm/unistd.h>
+
+#include "../kselftest_harness.h"
+
+const char *data_file = "/sys/kernel/debug/tracing/user_events_data";
+const char *status_file = "/sys/kernel/debug/tracing/user_events_status";
+const char *id_file = "/sys/kernel/debug/tracing/events/user_events/__test_event/id";
+const char *fmt_file = "/sys/kernel/debug/tracing/events/user_events/__test_event/format";
+
+struct event {
+	__u32 index;
+	__u32 field1;
+	__u32 field2;
+};
+
+static long perf_event_open(struct perf_event_attr *pe, pid_t pid,
+			    int cpu, int group_fd, unsigned long flags)
+{
+	return syscall(__NR_perf_event_open, pe, pid, cpu, group_fd, flags);
+}
+
+static int get_id(void)
+{
+	FILE *fp = fopen(id_file, "r");
+	int ret, id = 0;
+
+	if (!fp)
+		return -1;
+
+	ret = fscanf(fp, "%d", &id);
+	fclose(fp);
+
+	if (ret != 1)
+		return -1;
+
+	return id;
+}
+
+static int get_offset(void)
+{
+	FILE *fp = fopen(fmt_file, "r");
+	int ret, c, last = 0, offset = 0;
+
+	if (!fp)
+		return -1;
+
+	/* Read until empty line */
+	while (true) {
+		c = getc(fp);
+
+		if (c == EOF)
+			break;
+
+		if (last == '\n' && c == '\n')
+			break;
+
+		last = c;
+	}
+
+	ret = fscanf(fp, "\tfield:u32 field1;\toffset:%d;", &offset);
+	fclose(fp);
+
+	if (ret != 1)
+		return -1;
+
+	return offset;
+}
+
+FIXTURE(user) {
+	int status_fd;
+	int data_fd;
+};
+
+FIXTURE_SETUP(user) {
+	self->status_fd = open(status_file, O_RDONLY);
+	ASSERT_NE(-1, self->status_fd);
+
+	self->data_fd = open(data_file, O_RDWR);
+	ASSERT_NE(-1, self->data_fd);
+}
+
+FIXTURE_TEARDOWN(user) {
+	close(self->status_fd);
+	close(self->data_fd);
+}
+
+TEST_F(user, perf_write) {
+	struct perf_event_attr pe = {0};
+	struct user_reg reg = {0};
+	int page_size = sysconf(_SC_PAGESIZE);
+	char *status_page;
+	struct event event;
+	struct perf_event_mmap_page *perf_page;
+	int id, fd, offset;
+	__u32 *val;
+
+	reg.size = sizeof(reg);
+	reg.name_args = (__u64)"__test_event u32 field1; u32 field2";
+
+	status_page = mmap(NULL, page_size, PROT_READ, MAP_SHARED,
+			   self->status_fd, 0);
+	ASSERT_NE(MAP_FAILED, status_page);
+
+	/* Register should work */
+	ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSREG, &reg));
+	ASSERT_EQ(0, reg.write_index);
+	ASSERT_NE(0, reg.status_index);
+	ASSERT_EQ(0, status_page[reg.status_index]);
+
+	/* Id should be there */
+	id = get_id();
+	ASSERT_NE(-1, id);
+	offset = get_offset();
+	ASSERT_NE(-1, offset);
+
+	pe.type = PERF_TYPE_TRACEPOINT;
+	pe.size = sizeof(pe);
+	pe.config = id;
+	pe.sample_type = PERF_SAMPLE_RAW;
+	pe.sample_period = 1;
+	pe.wakeup_events = 1;
+
+	/* Tracepoint attach should work */
+	fd = perf_event_open(&pe, 0, -1, -1, 0);
+	ASSERT_NE(-1, fd);
+
+	perf_page = mmap(NULL, page_size * 2, PROT_READ, MAP_SHARED, fd, 0);
+	ASSERT_NE(MAP_FAILED, perf_page);
+
+	/* Status should be updated */
+	ASSERT_EQ(EVENT_STATUS_PERF, status_page[reg.status_index]);
+
+	event.index = reg.write_index;
+	event.field1 = 0xc001;
+	event.field2 = 0xc01a;
+
+	/* Ensure write shows up at correct offset */
+	ASSERT_NE(-1, write(self->data_fd, &event, sizeof(event)));
+	val = (void *)(((char *)perf_page) + perf_page->data_offset);
+	ASSERT_EQ(PERF_RECORD_SAMPLE, *val);
+	/* Skip over header and size, move to offset */
+	val += 3;
+	val = (void *)((char *)val) + offset;
+	/* Ensure correct */
+	ASSERT_EQ(event.field1, *val++);
+	ASSERT_EQ(event.field2, *val++);
+}
+
+int main(int argc, char **argv)
+{
+	return test_harness_run(argc, argv);
+}
-- 
2.17.1



* [PATCH v4 09/10] user_events: Optimize writing events by only copying data once
  2021-11-04 17:04 [PATCH v4 00/10] user_events: Enable user processes to create and write to trace events Beau Belgrave
                   ` (7 preceding siblings ...)
  2021-11-04 17:04 ` [PATCH v4 08/10] user_events: Add self-test for perf_event integration Beau Belgrave
@ 2021-11-04 17:04 ` Beau Belgrave
  2021-11-08 22:45   ` Steven Rostedt
  2021-11-04 17:04 ` [PATCH v4 10/10] user_events: Add documentation file Beau Belgrave
  9 siblings, 1 reply; 47+ messages in thread
From: Beau Belgrave @ 2021-11-04 17:04 UTC (permalink / raw)
  To: rostedt, mhiramat; +Cc: linux-trace-devel, linux-kernel, beaub

Pass the iterator through to the probes to allow copying data directly into
the probe buffers instead of taking multiple copies. This also exposes the
eBPF user and raw iterator types to programs for no-copy scenarios.

Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
---
 kernel/trace/trace_events_user.c | 97 +++++++++++++++++++++++---------
 1 file changed, 69 insertions(+), 28 deletions(-)

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index b5fe0550b489..d50118b9630a 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -39,6 +39,10 @@
 #define MAX_EVENT_DESC 512
 #define EVENT_NAME(user_event) ((user_event)->tracepoint.name)
 
+#define MAX_BPF_COPY_SIZE PAGE_SIZE
+#define MAX_STACK_BPF_DATA 512
+#define copy_nofault copy_from_iter_nocache
+
 static char *register_page_data;
 
 static DEFINE_MUTEX(reg_mutex);
@@ -63,8 +67,7 @@ struct user_event_refs {
 	struct user_event *events[];
 };
 
-typedef void (*user_event_func_t) (struct user_event *user,
-				   void *data, u32 datalen,
+typedef void (*user_event_func_t) (struct user_event *user, struct iov_iter *i,
 				   void *tpdata);
 
 static int user_event_parse(char *name, char *args, char *flags,
@@ -491,7 +494,7 @@ static struct user_event *find_user_event(char *name, u32 *outkey)
 /*
  * Writes the user supplied payload out to a trace file.
  */
-static void user_event_ftrace(struct user_event *user, void *data, u32 datalen,
+static void user_event_ftrace(struct user_event *user, struct iov_iter *i,
 			      void *tpdata)
 {
 	struct trace_event_file *file;
@@ -506,41 +509,82 @@ static void user_event_ftrace(struct user_event *user, void *data, u32 datalen,
 		return;
 
 	entry = trace_event_buffer_reserve(&event_buffer, file,
-					   sizeof(*entry) + datalen);
+					   sizeof(*entry) + i->count);
 
 	if (unlikely(!entry))
 		return;
 
-	memcpy(entry + 1, data, datalen);
+	if (unlikely(!copy_nofault(entry + 1, i->count, i)))
+		return;
 
 	trace_event_buffer_commit(&event_buffer);
 }
 
 #ifdef CONFIG_PERF_EVENTS
+static void user_event_bpf(struct user_event *user, struct iov_iter *i)
+{
+	struct user_bpf_context context;
+	struct user_bpf_iter bpf_i;
+	char fast_data[MAX_STACK_BPF_DATA];
+	void *temp = NULL;
+
+	if ((user->flags & FLAG_BPF_ITER) && iter_is_iovec(i)) {
+		/* Raw iterator */
+		context.data_type = USER_BPF_DATA_ITER;
+		context.data_len = i->count;
+		context.iter = &bpf_i;
+
+		bpf_i.iov_offset = i->iov_offset;
+		bpf_i.iov = i->iov;
+		bpf_i.nr_segs = i->nr_segs;
+	} else if (i->nr_segs == 1 && iter_is_iovec(i)) {
+		/* Single buffer from user */
+		context.data_type = USER_BPF_DATA_USER;
+		context.data_len = i->count;
+		context.udata = i->iov->iov_base + i->iov_offset;
+	} else {
+		/* Multi buffer from user */
+		struct iov_iter copy = *i;
+		size_t copy_size = min(i->count, MAX_BPF_COPY_SIZE);
+
+		context.data_type = USER_BPF_DATA_KERNEL;
+		context.kdata = fast_data;
+
+		if (unlikely(copy_size > sizeof(fast_data))) {
+			temp = kmalloc(copy_size, GFP_NOWAIT);
+
+			if (temp)
+				context.kdata = temp;
+			else
+				copy_size = sizeof(fast_data);
+		}
+
+		context.data_len = copy_nofault(context.kdata,
+						copy_size, &copy);
+	}
+
+	trace_call_bpf(&user->call, &context);
+
+	kfree(temp);
+}
+
 /*
  * Writes the user supplied payload out to perf ring buffer or eBPF program.
  */
-static void user_event_perf(struct user_event *user, void *data, u32 datalen,
+static void user_event_perf(struct user_event *user, struct iov_iter *i,
 			    void *tpdata)
 {
 	struct hlist_head *perf_head;
 
-	if (bpf_prog_array_valid(&user->call)) {
-		struct user_bpf_context context = {0};
-
-		context.data_len = datalen;
-		context.data_type = USER_BPF_DATA_KERNEL;
-		context.kdata = data;
-
-		trace_call_bpf(&user->call, &context);
-	}
+	if (bpf_prog_array_valid(&user->call))
+		user_event_bpf(user, i);
 
 	perf_head = this_cpu_ptr(user->call.perf_events);
 
 	if (perf_head && !hlist_empty(perf_head)) {
 		struct trace_entry *perf_entry;
 		struct pt_regs *regs;
-		size_t size = sizeof(*perf_entry) + datalen;
+		size_t size = sizeof(*perf_entry) + i->count;
 		int context;
 
 		perf_entry = perf_trace_buf_alloc(ALIGN(size, 8),
@@ -551,7 +595,8 @@ static void user_event_perf(struct user_event *user, void *data, u32 datalen,
 
 		perf_fetch_caller_regs(regs);
 
-		memcpy(perf_entry + 1, data, datalen);
+		if (unlikely(!copy_nofault(perf_entry + 1, i->count, i)))
+			return;
 
 		perf_trace_buf_submit(perf_entry, size, context,
 				      user->call.event.type, 1, regs,
@@ -961,32 +1006,28 @@ static ssize_t user_events_write_core(struct file *file, struct iov_iter *i)
 	if (likely(atomic_read(&tp->key.enabled) > 0)) {
 		struct tracepoint_func *probe_func_ptr;
 		user_event_func_t probe_func;
+		struct iov_iter copy;
 		void *tpdata;
-		void *kdata;
-		u32 datalen;
-
-		kdata = kmalloc(i->count, GFP_KERNEL);
 
-		if (unlikely(!kdata))
-			return -ENOMEM;
-
-		datalen = copy_from_iter(kdata, i->count, i);
+		if (unlikely(iov_iter_fault_in_readable(i, i->count)))
+			return -EFAULT;
 
 		rcu_read_lock_sched();
+		pagefault_disable();
 
 		probe_func_ptr = rcu_dereference_sched(tp->funcs);
 
 		if (probe_func_ptr) {
 			do {
+				copy = *i;
 				probe_func = probe_func_ptr->func;
 				tpdata = probe_func_ptr->data;
-				probe_func(user, kdata, datalen, tpdata);
+				probe_func(user, &copy, tpdata);
 			} while ((++probe_func_ptr)->func);
 		}
 
+		pagefault_enable();
 		rcu_read_unlock_sched();
-
-		kfree(kdata);
 	}
 
 	return ret;
-- 
2.17.1



* [PATCH v4 10/10] user_events: Add documentation file
  2021-11-04 17:04 [PATCH v4 00/10] user_events: Enable user processes to create and write to trace events Beau Belgrave
                   ` (8 preceding siblings ...)
  2021-11-04 17:04 ` [PATCH v4 09/10] user_events: Optimize writing events by only copying data once Beau Belgrave
@ 2021-11-04 17:04 ` Beau Belgrave
  2021-11-04 19:05   ` Jonathan Corbet
  9 siblings, 1 reply; 47+ messages in thread
From: Beau Belgrave @ 2021-11-04 17:04 UTC (permalink / raw)
  To: rostedt, mhiramat; +Cc: linux-trace-devel, linux-kernel, beaub

Add a documentation file about user_events with example code, etc.
explaining how it may be used.

Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
---
 Documentation/trace/user_events.rst | 298 ++++++++++++++++++++++++++++
 1 file changed, 298 insertions(+)
 create mode 100644 Documentation/trace/user_events.rst

diff --git a/Documentation/trace/user_events.rst b/Documentation/trace/user_events.rst
new file mode 100644
index 000000000000..d79c9f07d012
--- /dev/null
+++ b/Documentation/trace/user_events.rst
@@ -0,0 +1,298 @@
+=========================================
+user_events: User-based Event Tracing
+=========================================
+
+:Author: Beau Belgrave
+
+Overview
+--------
+User based trace events allow user processes to create events and trace data
+that can be viewed via existing tools, such as ftrace, perf and eBPF.
+To enable this feature, build your kernel with CONFIG_USER_EVENTS=y.
+
+Programs can view the status of the events via
+/sys/kernel/debug/tracing/user_events_status and can both register and write
+data out via /sys/kernel/debug/tracing/user_events_data.
+
+Programs can also use /sys/kernel/debug/tracing/dynamic_events to register and
+delete user based events via the u: prefix. The format of the command to
+dynamic_events is the same as the ioctl with the u: prefix applied.
+
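+For example, assuming tracefs is mounted at the default debugfs path, a user
+event could be created from the shell as a quick sketch (the event name
+"test" and its field are only an example, mirroring the command string format
+described below):

```shell
# Assumes CONFIG_USER_EVENTS=y and a default tracefs mount;
# the u: prefix marks this as a user based event.
echo 'u:test u32 count' >> /sys/kernel/debug/tracing/dynamic_events

# The event now shows up like any other trace event:
cat /sys/kernel/debug/tracing/events/user_events/test/format
```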
+Typically programs will register a set of events that they wish to expose to
+tools that can read trace_events (such as ftrace and perf). The registration
+process gives back two ints to the program for each event. The first int is the
+status index. This index describes which byte in the 
+/sys/kernel/debug/tracing/user_events_status file represents this event. The
+second int is the write index. This index describes the data when a write() or
+writev() is called on the /sys/kernel/debug/tracing/user_events_data file.
+
+The structures referenced in this document are contained within the
+include/uapi/linux/user_events.h file in the source tree.
+
+**NOTE:** *Both user_events_status and user_events_data are under the
+tracefs filesystem and may be mounted at different paths than above.*
+
+Registering
+-----------
+Registering within a user process is done via ioctl() out to the
+/sys/kernel/debug/tracing/user_events_data file. The command to issue is
+DIAG_IOCSREG. This command takes a struct user_reg as an argument.
+
+The struct user_reg requires two values, the first is the size of the structure
+to ensure forward and backward compatibility. The second is the command string
+to issue for registering.
+
+User based events show up under tracefs like any other event under the subsystem
+named "user_events". This means tools that wish to attach to the events need to
+use /sys/kernel/debug/tracing/events/user_events/[name]/enable or perf record
+-e user_events:[name] when attaching/recording.
+
+**NOTE:** *The write_index returned is only valid for the FD that was used*
+
+Command Format
+^^^^^^^^^^^^^^
+The command string format is as follows:
+
+::
+
+  name[:FLAG1[,FLAG2...]] [Field1[;Field2...]]
+
+Supported Flags
+^^^^^^^^^^^^^^^
+**BPF_ITER** - EBPF programs attached to this event will get the raw iovec
+struct instead of any data copies for max performance.
+
+Field Format
+^^^^^^^^^^^^
+
+::
+
+  type name [size]
+
+Basic types are supported (__data_loc, u32, u64, int, char, char[20]).
+User programs are encouraged to use clearly sized types like u32.
+
+**NOTE:** *Long is not supported since its size can vary between user space
+and the kernel.*
+
+The size is only valid for types that start with a struct prefix.
+This allows user programs to describe custom structs out to tools, if required.
+
+For example, a struct in C that looks like this:
+
+::
+
+  struct mytype {
+    char data[20];
+  };
+
+Would be represented by the following field:
+
+::
+
+  struct mytype myname 20
+
+Status
+------
+When tools attach/record user based events the status of the event is updated
+in realtime. This allows user programs to only incur the cost of the write() or
+writev() calls when something is actively attached to the event.
+
+User programs call mmap() on /sys/kernel/debug/tracing/user_events_status to
+check the status for each event that is registered. The byte to check in the
+file is given back after the register ioctl() via user_reg.status_index.
+Currently the size of user_events_status is a single page, however, custom
+kernel configurations can change this size to allow more user based events. In
+all cases the size of the file is a multiple of the page size.
+
+For example, if the register ioctl() gives back a status_index of 3 you would
+check byte 3 of the returned mmap data to see if anything is attached to that
+event.
+
+Administrators can easily check the status of all registered events by reading
+the user_events_status file directly via a terminal. The output is as follows:
+
+::
+
+  Byte:Name [# Comments]
+  ...
+
+  Active: ActiveCount
+  Busy: BusyCount
+  Max: MaxCount
+
+For example, on a system that has a single event the output looks like this:
+
+::
+
+  1:test
+
+  Active: 1
+  Busy: 0
+  Max: 4096
+
+If a user enables the user event via ftrace, the output would change to this:
+
+:: 
+
+  1:test # Used by ftrace
+
+  Active: 1
+  Busy: 1
+  Max: 4096
+
+**NOTE:** *A status index of 0 will never be returned. This allows user
+programs to have an index that can be used in error cases.*
+
+Status Bits
+^^^^^^^^^^^
+The byte being checked will be non-zero if anything is attached. Programs can
+check specific bits in the byte to see what mechanism has been attached.
+
+The following values are defined to aid in checking what has been attached:
+**EVENT_STATUS_FTRACE** - Bit set if ftrace has been attached (Bit 0).
+
+**EVENT_STATUS_PERF** - Bit set if perf/eBPF has been attached (Bit 1).
+
+Writing Data
+------------
+After registering an event the same fd that was used to register can be used
+to write an entry for that event. The write_index returned must be at the start
+of the data, then the remaining data is treated as the payload of the event.
+
+For example, if the returned write_index was 1 and I wanted to write out an
+int payload for the event, then the data would have to be 8 bytes (2 ints)
+long, with the first 4 bytes being equal to 1 and the last 4 bytes being
+equal to the value I want as the payload.
+
+In memory this would look like this:
+
+::
+
+  int index;
+  int payload;
+
+User programs might have well-known structs that they wish to emit as
+payloads. In those cases writev() can be used, with the first vector being
+the index and the following vector(s) being the actual event payload.
+
+For example, if I have a struct like this:
+
+::
+
+  struct payload {
+        int src;
+        int dst;
+        int flags;
+  };
+
+It's advised for user programs to do the following:
+
+:: 
+
+  struct iovec io[2];
+  struct payload e;
+
+  io[0].iov_base = &write_index;
+  io[0].iov_len = sizeof(write_index);
+  io[1].iov_base = &e;
+  io[1].iov_len = sizeof(e);
+
+  writev(fd, (const struct iovec*)io, 2);
+
+**NOTE:** *The write_index is not emitted out into the trace being recorded.*
+
+EBPF
+----
+EBPF programs that attach to a user-based event tracepoint are given a pointer
+to a struct user_bpf_context. The bpf context contains the data type (which can
+be a user or kernel buffer, or can be a pointer to the iovec) and the data
+length that was emitted (minus the write_index).
+
+Example Code
+------------
+
+::
+
+  #include <errno.h>
+  #include <sys/ioctl.h>
+  #include <sys/mman.h>
+  #include <fcntl.h>
+  #include <stdio.h>
+  #include <unistd.h>
+  #include <linux/user_events.h>
+  
+  /* Assumes debugfs is mounted */
+  const char *data_file = "/sys/kernel/debug/tracing/user_events_data";
+  const char *status_file = "/sys/kernel/debug/tracing/user_events_status";
+  
+  static int event_status(char **status)
+  {
+  	int fd = open(status_file, O_RDONLY);
+  
+  	*status = mmap(NULL, sysconf(_SC_PAGESIZE), PROT_READ,
+  		       MAP_SHARED, fd, 0);
+  
+  	close(fd);
+  
+  	if (*status == MAP_FAILED)
+  	      return -1;
+  
+  	return 0;
+  }
+  
+  static int event_reg(int fd, const char *command, int *status, int *write)
+  {
+  	struct user_reg reg = {0};
+  
+  	reg.size = sizeof(reg);
+  	reg.name_args = (__u64)command;
+  
+  	if (ioctl(fd, DIAG_IOCSREG, &reg) == -1)
+  		return -1;
+  
+  	*status = reg.status_index;
+  	*write = reg.write_index;
+  
+  	return 0;
+  }
+  
+  int main(int argc, char **argv)
+  {
+  	int data_fd, status, write;
+  	char *status_page;
+  	struct iovec io[2];
+  	__u32 count = 0;
+  
+  	if (event_status(&status_page) == -1)
+  		return errno;
+  
+  	data_fd = open(data_file, O_RDWR);
+  
+  	if (event_reg(data_fd, "test u32 count", &status, &write) == -1)
+  		return errno;
+  
+  	/* Setup iovec */
+  	io[0].iov_base = &status;
+  	io[0].iov_len = sizeof(status);
+  	io[1].iov_base = &count;
+  	io[1].iov_len = sizeof(count);
+  
+  ask:
+  	printf("Press enter to check status...\n");
+  	getchar();
+  
+  	/* Check if anyone is listening */
+  	if (status_page[status]) {
+  		/* Yep, trace out our data */
+  		writev(data_fd, (const struct iovec*)io, 2);
+  
+  		/* Increase the count */
+  		count++;
+  
+  		printf("Something was attached, wrote data\n");
+  	}
+  
+  	goto ask;
+  
+  	return 0;
+  }
-- 
2.17.1



* Re: [PATCH v4 10/10] user_events: Add documentation file
  2021-11-04 17:04 ` [PATCH v4 10/10] user_events: Add documentation file Beau Belgrave
@ 2021-11-04 19:05   ` Jonathan Corbet
  2021-11-04 21:08     ` Beau Belgrave
  0 siblings, 1 reply; 47+ messages in thread
From: Jonathan Corbet @ 2021-11-04 19:05 UTC (permalink / raw)
  To: Beau Belgrave, rostedt, mhiramat; +Cc: linux-trace-devel, linux-kernel, beaub

Beau Belgrave <beaub@linux.microsoft.com> writes:

> Add a documentation file about user_events with example code, etc.
> explaining how it may be used.

Yay documentation!  Some nits for a slow moment...

> Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
> ---
>  Documentation/trace/user_events.rst | 298 ++++++++++++++++++++++++++++
>  1 file changed, 298 insertions(+)
>  create mode 100644 Documentation/trace/user_events.rst

Actually, this isn't really a nit.  When you add a new RST file, you
need to add it to the index.rst file too so that it gets pulled into the
docs build.  Otherwise you'll get the warning you doubtless saw when you
tried building the docs after adding this file...:)

> diff --git a/Documentation/trace/user_events.rst b/Documentation/trace/user_events.rst
> new file mode 100644
> index 000000000000..d79c9f07d012
> --- /dev/null
> +++ b/Documentation/trace/user_events.rst
> @@ -0,0 +1,298 @@
> +=========================================
> +user_events: User-based Event Tracing
> +=========================================
> +
> +:Author: Beau Belgrave
> +
> +Overview
> +--------
> +User based trace events allow user processes to create events and trace data
> +that can be viewed via existing tools, such as ftrace, perf and eBPF.
> +To enable this feature, build your kernel with CONFIG_USER_EVENTS=y.
> +
> +Programs can view status of the events via 
> +/sys/kernel/debug/tracing/user_events_status and can both register and write
> +data out via /sys/kernel/debug/tracing/user_events_data.
> +
> +Programs can also use /sys/kernel/debug/tracing/dynamic_events to register and
> +delete user based events via the u: prefix. The format of the command to
> +dynamic_events is the same as the ioctl with the u: prefix applied.
> +
> +Typically programs will register a set of events that they wish to expose to
> +tools that can read trace_events (such as ftrace and perf). The registration
> +process gives back two ints to the program for each event. The first int is the
> +status index. This index describes which byte in the 
> +/sys/kernel/debug/tracing/user_events_status file represents this event. The
> +second int is the write index. This index describes the data when a write() or
> +writev() is called on the /sys/kernel/debug/tracing/user_events_data file.
> +
> +The structures referenced in this document are contained with the
> +/include/uap/linux/user_events.h file in the source tree.
> +
> +**NOTE:** *Both user_events_status and user_events_data are under the tracefs filesystem
> +and may be mounted at different paths than above.*

Best to stick with the 80-column guideline for docs, please.

This also won't render the way you expect and may add a warning.

> +Registering
> +-----------
> +Registering within a user process is done via ioctl() out to the
> +/sys/kernel/debug/tracing/user_events_data file. The command to issue is
> +DIAG_IOCSREG. This command takes a struct user_reg as an argument.
> +
> +The struct user_reg requires two values, the first is the size of the structure
> +to ensure forward and backward compatibility. The second is the command string
> +to issue for registering.
> +
> +User based events show up under tracefs like any other event under the subsystem
> +named "user_events". This means tools that wish to attach to the events need to
> +use /sys/kernel/debug/tracing/events/user_events/[name]/enable or perf record
> +-e user_events:[name] when attaching/recording.
> +
> +**NOTE:** *The write_index returned is only valid for the FD that was used*
> +
> +Command Format
> +^^^^^^^^^^^^^^
> +The command string format is as follows:
> +
> +::

You can express this more concisely (and readably) as

  The command string format is as follows::

(there's a bunch of these in this file)

> +  name[:FLAG1[,FLAG2...]] [Field1[;Field2...]]
> +
> +Supported Flags
> +^^^^^^^^^^^^^^^
> +**BPF_ITER** - EBPF programs attached to this event will get the raw iovec
> +struct instead of any data copies for max performance.
> +
> +Field Format
> +^^^^^^^^^^^^
> +
> +::
> +
> +  type name [size]
> +
> +Basic types are supported (__data_loc, u32, u64, int, char, char[20]).
> +User programs are encouraged to use clearly sized types like u32.
> +
> +**NOTE:** *Long is not supported since size can vary between user and kernel.*
> +
> +The size is only valid for types that start with a struct prefix.
> +This allows user programs to describe custom structs out to tools, if required.
> +
> +For example, a struct in C that looks like this:
> +
> +::
> +
> +  struct mytype {
> +    char data[20];
> +  };
> +
> +Would be represented by the following field:
> +
> +::
> +
> +  struct mytype myname 20
> +
> +Status
> +------
> +When tools attach/record user based events the status of the event is updated
> +in realtime. This allows user programs to only incur the cost of the write() or
> +writev() calls when something is actively attached to the event.
> +
> +User programs call mmap() on /sys/kernel/debug/tracing/user_events_status to
> +check the status for each event that is registered. The byte to check in the
> +file is given back after the register ioctl() via user_reg.status_index.
> +Currently the size of user_events_status is a single page, however, custom
> +kernel configurations can change this size to allow more user based events. In
> +all cases the size of the file is a multiple of a page size.
> +
> +For example, if the register ioctl() gives back a status_index of 3 you would
> +check byte 3 of the returned mmap data to see if anything is attached to that
> +event.
> +
> +Administrators can easily check the status of all registered events by reading
> +the user_events_status file directly via a terminal. The output is as follows:
> +
> +::
> +
> +  Byte:Name [# Comments]
> +  ...
> +
> +  Active: ActiveCount
> +  Buisy: BusyCount
> +  Max: MaxCount
> +
> +For example, on a system that has a single event the output looks like this:
> +
> +::
> +
> +  1:test
> +
> +  Active: 1
> +  Busy: 0
> +  Max: 4096
> +
> +If a user enables the user event via ftrace, the output would change to this:
> +
> +:: 
> +
> +  1:test # Used by ftrace
> +
> +  Active: 1
> +  Busy: 1
> +  Max: 4096
> +
> +**NOTE:** *A status index of 0 will never be returned. This allows user 
> +programs to have an index that can be used on error cases.*
> +
> +Status Bits
> +^^^^^^^^^^^
> +The byte being checked will be non-zero if anything is attached. Programs can
> +check specific bits in the byte to see what mechanism has been attached.
> +
> +The following values are defined to aid in checking what has been attached:
> +**EVENT_STATUS_FTRACE** - Bit set if ftrace has been attached (Bit 0).
> +
> +**EVENT_STATUS_PERF** - Bit set if perf/eBPF has been attached (Bit 1).
> +
> +Writing Data
> +------------
> +After registering an event the same fd that was used to register can be used
> +to write an entry for that event. The write_index returned must be at the start
> +of the data, then the remaining data is treated as the payload of the event.
> +
> +For example, if write_index returned was 1 and I wanted to write out an int
> +payload of the event. Then the data would have to be 8 bytes (2 ints) long,
> +with the first 4 bytes being equal to 1 and the last 4 bytes being equal to the
> +value I want as the payload.
> +
> +In memory this would look like this:
> +
> +::
> +
> +  int index;
> +  int payload;
> +
> +User programs might have well known structs that they wish to use to emit out
> +as payloads. In those cases writev() can be used, with the first vector being
> +the index and the following vector(s) being the actual event payload.
> +
> +For example, if I have a struct like this:
> +
> +::
> +
> +  struct payload {
> +        int src;
> +        int dst;
> +        int flags;
> +  };
> +
> +It's advised for user programs to do the following:
> +
> +:: 
> +
> +  struct iovec io[2];
> +  struct payload e;
> +
> +  io[0].iov_base = &write_index;
> +  io[0].iov_len = sizeof(write_index);
> +  io[1].iov_base = &e;
> +  io[1].iov_len = sizeof(e);
> +
> +  writev(fd, (const struct iovec*)io, 2);
> +
> +**NOTE:** *The write_index is not emitted out into the trace being recorded.*
> +
> +EBPF
> +----
> +EBPF programs that attach to a user-based event tracepoint are given a pointer
> +to a struct user_bpf_context. The bpf context contains the data type (which can
> +be a user or kernel buffer, or can be a pointer to the iovec) and the data
> +length that was emitted (minus the write_index).
> +
> +Example Code
> +------------

Examples are great, but they are best placed under samples/ (or tools/
if they do something useful) where readers can build and run them.

> +::
> +
> +  #include <errno.h>
> +  #include <sys/ioctl.h>
> +  #include <sys/mman.h>
> +  #include <fcntl.h>
> +  #include <stdio.h>
> +  #include <unistd.h>
> +  #include <linux/user_events.h>
> +  
> +  /* Assumes debugfs is mounted */
> +  const char *data_file = "/sys/kernel/debug/tracing/user_events_data";
> +  const char *status_file = "/sys/kernel/debug/tracing/user_events_status";
> +  
> +  static int event_status(char **status)
> +  {
> +  	int fd = open(status_file, O_RDONLY);
> +  
> +  	*status = mmap(NULL, sysconf(_SC_PAGESIZE), PROT_READ,
> +  		       MAP_SHARED, fd, 0);
> +  
> +  	close(fd);
> +  
> +  	if (*status == MAP_FAILED)
> +  	      return -1;
> +  
> +  	return 0;
> +  }
> +  
> +  static int event_reg(int fd, const char *command, int *status, int *write)
> +  {
> +  	struct user_reg reg = {0};
> +  
> +  	reg.size = sizeof(reg);
> +  	reg.name_args = (__u64)command;
> +  
> +  	if (ioctl(fd, DIAG_IOCSREG, &reg) == -1)
> +  		return -1;
> +  
> +  	*status = reg.status_index;
> +  	*write = reg.write_index;
> +  
> +  	return 0;
> +  }
> +  
> +  int main(int argc, char **argv)
> +  {
> +  	int data_fd, status, write;
> +  	char *status_page;
> +  	struct iovec io[2];
> +  	__u32 count = 0;
> +  
> +  	if (event_status(&status_page) == -1)
> +  		return errno;
> +  
> +  	data_fd = open(data_file, O_RDWR);
> +  
> +  	if (event_reg(data_fd, "test u32 count", &status, &write) == -1)
> +  		return errno;
> +  
> +  	/* Setup iovec */
> +  	io[0].iov_base = &status;
> +  	io[0].iov_len = sizeof(status);
> +  	io[1].iov_base = &count;
> +  	io[1].iov_len = sizeof(count);
> +  
> +  ask:
> +  	printf("Press enter to check status...\n");
> +  	getchar();
> +  
> +  	/* Check if anyone is listening */
> +  	if (status_page[status]) {
> +  		/* Yep, trace out our data */
> +  		writev(data_fd, (const struct iovec*)io, 2);
> +  
> +  		/* Increase the count */
> +  		count++;
> +  
> +  		printf("Something was attached, wrote data\n");
> +  	}
> +  
> +  	goto ask;
> +  
> +  	return 0;
> +  }

Thanks,

jon


* Re: [PATCH v4 10/10] user_events: Add documentation file
  2021-11-04 19:05   ` Jonathan Corbet
@ 2021-11-04 21:08     ` Beau Belgrave
  2021-11-04 21:18       ` Jonathan Corbet
  0 siblings, 1 reply; 47+ messages in thread
From: Beau Belgrave @ 2021-11-04 21:08 UTC (permalink / raw)
  To: Jonathan Corbet; +Cc: rostedt, mhiramat, linux-trace-devel, linux-kernel

On Thu, Nov 04, 2021 at 01:05:51PM -0600, Jonathan Corbet wrote:
> Beau Belgrave <beaub@linux.microsoft.com> writes:
> 
> > Add a documentation file about user_events with example code, etc.
> > explaining how it may be used.
> 
> Yay documentation!  Some nits for a slow moment...
> 
> > Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
> > ---
> >  Documentation/trace/user_events.rst | 298 ++++++++++++++++++++++++++++
> >  1 file changed, 298 insertions(+)
> >  create mode 100644 Documentation/trace/user_events.rst
> 
> Actually, this isn't really a nit.  When you add a new RST file, you
> need to add it to the index.rst file too so that it gets pulled into the
> docs build.  Otherwise you'll get the warning you doubtless saw when you
> tried building the docs after adding this file...:)
> 

Thanks for the review, I'll fix these things up!

> > +Example Code
> > +------------
> 
> Examples are great, but they are best placed under samples/ (or tools/
> if they do something useful) where readers can build and run them.
> 

Ok, sounds good. Is it fine to include user mode samples in there? I
quickly checked and most appear to be modules. Maybe there is a better
place for user mode examples?

Thanks,
-Beau

* Re: [PATCH v4 10/10] user_events: Add documentation file
  2021-11-04 21:08     ` Beau Belgrave
@ 2021-11-04 21:18       ` Jonathan Corbet
  0 siblings, 0 replies; 47+ messages in thread
From: Jonathan Corbet @ 2021-11-04 21:18 UTC (permalink / raw)
  To: Beau Belgrave; +Cc: rostedt, mhiramat, linux-trace-devel, linux-kernel

Beau Belgrave <beaub@linux.microsoft.com> writes:

> On Thu, Nov 04, 2021 at 01:05:51PM -0600, Jonathan Corbet wrote:
>> Examples are great, but they are best placed under samples/ (or tools/
>> if they do something useful) where readers can build and run them.
>> 
> Ok, sounds good. Is it fine to include user mode samples in there? I
> quickly checked and most appear to be modules. Maybe there is a better
> place for user mode examples?

User-mode stuff is fine - at least, *I* don't object to it! :)

Thanks,

jon

* Re: [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace
  2021-11-04 17:04 ` [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace Beau Belgrave
@ 2021-11-04 21:34   ` kernel test robot
  2021-11-08  2:32     ` Masami Hiramatsu
  2021-11-07 14:31   ` Masami Hiramatsu
  2021-11-07 18:18   ` Steven Rostedt
  2 siblings, 1 reply; 47+ messages in thread
From: kernel test robot @ 2021-11-04 21:34 UTC (permalink / raw)
  To: Beau Belgrave, rostedt, mhiramat
  Cc: kbuild-all, linux-trace-devel, linux-kernel, beaub

Hi Beau,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on rostedt-trace/for-next]
[also build test ERROR on shuah-kselftest/next linux/master linus/master v5.15 next-20211104]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Beau-Belgrave/user_events-Enable-user-processes-to-create-and-write-to-trace-events/20211105-010650
base:   https://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace.git for-next
config: powerpc-allmodconfig (attached as .config)
compiler: powerpc-linux-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/da0961ad45aa1192b47b8a80de6b17437434ae4a
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Beau-Belgrave/user_events-Enable-user-processes-to-create-and-write-to-trace-events/20211105-010650
        git checkout da0961ad45aa1192b47b8a80de6b17437434ae4a
        # save the attached .config to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross O=build_dir ARCH=powerpc SHELL=/bin/bash kernel/trace/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   kernel/trace/trace_events_user.c: In function 'user_event_parse':
>> kernel/trace/trace_events_user.c:665:9: error: too few arguments to function 'dyn_event_add'
     665 |         dyn_event_add(&user->devent);
         |         ^~~~~~~~~~~~~
   In file included from kernel/trace/trace_events_user.c:23:
   kernel/trace/trace_dynevent.h:79:19: note: declared here
      79 | static inline int dyn_event_add(struct dyn_event *ev,
         |                   ^~~~~~~~~~~~~


vim +/dyn_event_add +665 kernel/trace/trace_events_user.c

   596	
   597	/*
   598	 * Parses the event name, arguments and flags then registers if successful.
   599	 */
   600	static int user_event_parse(char *name, char *args, char *flags,
   601				    struct user_event **newuser)
   602	{
   603		int ret;
   604		int index;
   605		u32 key;
   606		struct user_event *user = find_user_event(name, &key);
   607	
   608		if (user) {
   609			*newuser = user;
   610			ret = 0;
   611			goto put_name;
   612		}
   613	
   614		index = find_first_zero_bit(page_bitmap, MAX_EVENTS);
   615	
   616		if (index == MAX_EVENTS) {
   617			ret = -EMFILE;
   618			goto put_name;
   619		}
   620	
   621		user = kzalloc(sizeof(*user), GFP_KERNEL);
   622	
   623		if (!user) {
   624			ret = -ENOMEM;
   625			goto put_name;
   626		}
   627	
   628		INIT_LIST_HEAD(&user->class.fields);
   629		INIT_LIST_HEAD(&user->fields);
   630	
   631		user->tracepoint.name = name;
   632	
   633		user_event_parse_flags(user, flags);
   634	
   635		ret = user_event_parse_fields(user, args);
   636	
   637		if (ret)
   638			goto put_user;
   639	
   640		/* Minimal print format */
   641		user->call.print_fmt = "\"\"";
   642	
   643		user->call.data = user;
   644		user->call.class = &user->class;
   645		user->call.name = name;
   646		user->call.flags = TRACE_EVENT_FL_TRACEPOINT;
   647		user->call.tp = &user->tracepoint;
   648		user->call.event.funcs = &user_event_funcs;
   649	
   650		user->class.system = USER_EVENTS_SYSTEM;
   651		user->class.fields_array = user_event_fields_array;
   652		user->class.get_fields = user_event_get_fields;
   653		user->class.reg = user_event_reg;
   654		user->class.probe = user_event_ftrace;
   655	
   656		mutex_lock(&event_mutex);
   657		ret = user_event_trace_register(user);
   658		mutex_unlock(&event_mutex);
   659	
   660		if (ret)
   661			goto put_user;
   662	
   663		user->index = index;
   664		dyn_event_init(&user->devent, &user_event_dops);
 > 665		dyn_event_add(&user->devent);
   666		set_bit(user->index, page_bitmap);
   667		hash_add(register_table, &user->node, key);
   668	
   669		*newuser = user;
   670		return 0;
   671	put_user:
   672		user_event_destroy_fields(user);
   673		kfree(user);
   674	put_name:
   675		kfree(name);
   676		return ret;
   677	}
   678	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 72031 bytes --]

* Re: [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace
  2021-11-04 17:04 ` [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace Beau Belgrave
  2021-11-04 21:34   ` kernel test robot
@ 2021-11-07 14:31   ` Masami Hiramatsu
  2021-11-08 17:13     ` Beau Belgrave
  2021-11-07 18:18   ` Steven Rostedt
  2 siblings, 1 reply; 47+ messages in thread
From: Masami Hiramatsu @ 2021-11-07 14:31 UTC (permalink / raw)
  To: Beau Belgrave; +Cc: rostedt, linux-trace-devel, linux-kernel

Hi Beau,

At first, thanks for breaking down your patch into this series!

Now I have found a suspicious security design issue in this patch.

On Thu,  4 Nov 2021 10:04:25 -0700
Beau Belgrave <beaub@linux.microsoft.com> wrote:

> +
> +static enum print_line_t user_event_print_trace(struct trace_iterator *iter,
> +						int flags,
> +						struct trace_event *event)
> +{
> +	/* Unsafe to try to decode user provided print_fmt, use hex */
> +	trace_print_hex_dump_seq(&iter->seq, "", DUMP_PREFIX_OFFSET, 16,
> +				 1, iter->ent, iter->ent_size, true);

You said this is "Unsafe to try to decode user provided" here because it
does not check the sanity of the event data, especially non-fixed-size data.

However, it is not enough to avoid decoding it here, because synthetic
events (histograms) and event filters still need to decode this recorded
data entry using the event format information.

This means this can cause a buffer overrun issue on the ring buffer, because
__data_loc (and __rel_loc too) can be compromised by the user.

If you want to just trace user events with numeric parameters, there is
a way to close this issue - support only the fixed-size types (IOW, drop
__data_loc/__rel_loc support) and always check the 'length' of the written
data. This ensures that those filters/synthetic events can access those
parameters as 'values'. Maybe eprobes still have to reject the user
events, but the other parts will work well.
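
As a standalone userspace sketch of that length check (the struct and
function names here are made up for illustration; the kernel side would
hang the expected size off the registered event):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical: payload_size is the sum of the fixed-size field sizes,
 * computed once when the event is registered. */
struct fixed_event {
	size_t payload_size;
};

/* Reject any write whose length does not match the fixed layout, so
 * filters and synthetic events can read every field as a plain value. */
static int validate_write_len(const struct fixed_event *ev, size_t datalen)
{
	return datalen == ev->payload_size ? 0 : -1;
}
```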

If you want to log a "string", it is harder. It may need a special
check when writing the event (address, length, and null termination),
but that will increase the recording overhead.

Thank you,


-- 
Masami Hiramatsu <mhiramat@kernel.org>

* Re: [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace
  2021-11-04 17:04 ` [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace Beau Belgrave
  2021-11-04 21:34   ` kernel test robot
  2021-11-07 14:31   ` Masami Hiramatsu
@ 2021-11-07 18:18   ` Steven Rostedt
  2021-11-08 19:56     ` Beau Belgrave
  2 siblings, 1 reply; 47+ messages in thread
From: Steven Rostedt @ 2021-11-07 18:18 UTC (permalink / raw)
  To: Beau Belgrave; +Cc: mhiramat, linux-trace-devel, linux-kernel

On Thu,  4 Nov 2021 10:04:25 -0700
Beau Belgrave <beaub@linux.microsoft.com> wrote:
> index 000000000000..a68017ad7fdd
> --- /dev/null
> +++ b/kernel/trace/trace_events_user.c
> @@ -0,0 +1,1140 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Copyright (c) 2021, Microsoft Corporation.
> + *
> + * Authors:
> + *   Beau Belgrave <beaub@linux.microsoft.com>
> + */
> +
> +#include <linux/bitmap.h>
> +#include <linux/cdev.h>
> +#include <linux/hashtable.h>
> +#include <linux/list.h>
> +#include <linux/io.h>
> +#include <linux/uio.h>
> +#include <linux/ioctl.h>
> +#include <linux/jhash.h>
> +#include <linux/trace_events.h>
> +#include <linux/tracefs.h>
> +#include <linux/types.h>
> +#include <linux/uaccess.h>
> +#include <uapi/linux/user_events.h>
> +#include "trace.h"
> +#include "trace_dynevent.h"
> +
> +#define USER_EVENTS_PREFIX_LEN (sizeof(USER_EVENTS_PREFIX)-1)
> +
> +#define FIELD_DEPTH_TYPE 0
> +#define FIELD_DEPTH_NAME 1
> +#define FIELD_DEPTH_SIZE 2
> +
> +/*
> + * Limits how many trace_event calls user processes can create:
> + * Must be multiple of PAGE_SIZE.
> + */
> +#define MAX_PAGES 1
> +#define MAX_EVENTS (MAX_PAGES * PAGE_SIZE)
> +
> +/* Limit how long an event name plus args can be within the subsystem. */
> +#define MAX_EVENT_DESC 512
> +#define EVENT_NAME(user_event) ((user_event)->tracepoint.name)
> +
> +static char *register_page_data;
> +
> +static DEFINE_MUTEX(reg_mutex);
> +static DEFINE_HASHTABLE(register_table, 4);
> +static DECLARE_BITMAP(page_bitmap, MAX_EVENTS);
> +
> +struct user_event {
> +	struct tracepoint tracepoint;
> +	struct trace_event_call call;
> +	struct trace_event_class class;
> +	struct dyn_event devent;
> +	struct hlist_node node;
> +	struct list_head fields;
> +	atomic_t refcnt;
> +	int index;
> +	int flags;
> +};
> +
> +struct user_event_refs {
> +	struct rcu_head rcu;
> +	int count;
> +	struct user_event *events[];
> +};
> +
> +typedef void (*user_event_func_t) (struct user_event *user,
> +				   void *data, u32 datalen,
> +				   void *tpdata);
> +
> +static int user_event_parse(char *name, char *args, char *flags,
> +			    struct user_event **newuser);
> +
> +static u32 user_event_key(char *name)
> +{
> +	return jhash(name, strlen(name), 0);
> +}
> +
> +static struct list_head *user_event_get_fields(struct trace_event_call *call)
> +{
> +	struct user_event *user = (struct user_event *)call->data;
> +
> +	return &user->fields;
> +}
> +
> +/*
> + * Parses a register command for user_events
> + * Format: event_name[:FLAG1[,FLAG2...]] [field1[;field2...]]
> + *
> + * Example event named test with a 20 char msg field with an unsigned int after:
> + * test char[20] msg;unsigned int id
> + *
> + * NOTE: Offsets are from the user data perspective, they are not from the
> + * trace_entry/buffer perspective. We automatically add the common properties
> + * sizes to the offset for the user.
> + */
> +static int user_event_parse_cmd(char *raw_command, struct user_event **newuser)
> +{
> +	char *name = raw_command;
> +	char *args = strpbrk(name, " ");
> +	char *flags;
> +
> +	if (args)
> +		*args++ = 0;
> +
> +	flags = strpbrk(name, ":");
> +
> +	if (flags)
> +		*flags++ = 0;
> +
> +	return user_event_parse(name, args, flags, newuser);
> +}
> +
> +static int user_field_array_size(const char *type)
> +{
> +	const char *start = strchr(type, '[');
> +	int size = 0;
> +
> +	if (start == NULL)
> +		return -EINVAL;
> +
> +	start++;
> +
> +	while (*start >= '0' && *start <= '9')

The kernel has include/linux/ctype.h

	while (isdigit(*start))

> +		size = (size * 10) + (*start++ - '0');

So you only allow decimal digits? No hex?

Also, is there anything to check if the size overflows?

I'm assuming that other patches will add checking if the size is
greater than some amount?
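
A userspace sketch of that suggestion - isdigit() plus an explicit
overflow guard (standalone C here; the kernel would get isdigit() from
linux/ctype.h, and the exact error handling is illustrative):

```c
#include <assert.h>
#include <ctype.h>
#include <limits.h>

/* Parse the digits of "name[NN]" starting just past the '[', as
 * user_field_array_size() does, but reject overflow instead of
 * silently wrapping. */
static int parse_array_size(const char *start)
{
	int size = 0;

	while (isdigit((unsigned char)*start)) {
		int digit = *start++ - '0';

		if (size > (INT_MAX - digit) / 10)
			return -1; /* would overflow */

		size = size * 10 + digit;
	}

	if (*start != ']')
		return -1;

	return size;
}
```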

> +
> +	if (*start != ']')
> +		return -EINVAL;
> +
> +	return size;
> +}
> +
> +static int user_field_size(const char *type)
> +{
> +	/* long is not allowed from a user, since it's ambiguous in size */
> +	if (strcmp(type, "s64") == 0)
> +		return sizeof(s64);
> +	if (strcmp(type, "u64") == 0)
> +		return sizeof(u64);
> +	if (strcmp(type, "s32") == 0)
> +		return sizeof(s32);
> +	if (strcmp(type, "u32") == 0)
> +		return sizeof(u32);
> +	if (strcmp(type, "int") == 0)
> +		return sizeof(int);
> +	if (strcmp(type, "unsigned int") == 0)
> +		return sizeof(unsigned int);
> +	if (strcmp(type, "s16") == 0)
> +		return sizeof(s16);
> +	if (strcmp(type, "u16") == 0)
> +		return sizeof(u16);
> +	if (strcmp(type, "short") == 0)
> +		return sizeof(short);
> +	if (strcmp(type, "unsigned short") == 0)
> +		return sizeof(unsigned short);
> +	if (strcmp(type, "s8") == 0)
> +		return sizeof(s8);
> +	if (strcmp(type, "u8") == 0)
> +		return sizeof(u8);
> +	if (strcmp(type, "char") == 0)
> +		return sizeof(char);
> +	if (strcmp(type, "unsigned char") == 0)
> +		return sizeof(unsigned char);
> +	if (strstr(type, "char[") == type)
> +		return user_field_array_size(type);
> +	if (strstr(type, "unsigned char[") == type)
> +		return user_field_array_size(type);
> +	if (strstr(type, "__data_loc ") == type)
> +		return sizeof(u32);
> +	if (strstr(type, "__rel_loc ") == type)
> +		return sizeof(u32);
> +
> +	/* Unknown basic type, error */
> +	return -EINVAL;
> +}
> +
> +static void user_event_destroy_fields(struct user_event *user)
> +{
> +	struct ftrace_event_field *field, *next;
> +	struct list_head *head = &user->fields;
> +
> +	list_for_each_entry_safe(field, next, head, link) {
> +		list_del(&field->link);
> +		kfree(field);
> +	}
> +}
> +
> +static int user_event_add_field(struct user_event *user, const char *type,
> +				const char *name, int offset, int size,
> +				int is_signed, int filter_type)
> +{
> +	struct ftrace_event_field *field;
> +
> +	field = kmalloc(sizeof(*field), GFP_KERNEL);
> +
> +	if (!field)
> +		return -ENOMEM;
> +
> +	field->type = type;
> +	field->name = name;
> +	field->offset = offset;
> +	field->size = size;
> +	field->is_signed = is_signed;
> +	field->filter_type = filter_type;
> +
> +	list_add(&field->link, &user->fields);
> +
> +	return 0;
> +}
> +
> +/*
> + * Parses the values of a field within the description
> + * Format: type name [size]
> + */
> +static int user_event_parse_field(char *field, struct user_event *user,
> +				  u32 *offset)
> +{
> +	char *part, *type, *name;
> +	u32 depth = 0, saved_offset = *offset;
> +	int size = -EINVAL;
> +	bool is_struct = false;
> +
> +	field = skip_spaces(field);
> +
> +	if (*field == 0)
> +		return 0;
> +
> +	/* Handle types that have a space within */
> +	if (strstr(field, "unsigned ") == field) {

These should use str_has_prefix(field, "unsigned ") etc.

It also returns the length of the prefix so you don't need to add
sizeof() of the string you checked for.
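
For reference, a userspace stand-in for the kernel helper being
suggested, mimicking its contract of returning the matched prefix
length (or 0), so the caller can advance past the prefix without a
separate sizeof():

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Userspace equivalent of the kernel's str_has_prefix(): returns the
 * length of prefix when str starts with it, 0 otherwise. */
static size_t str_has_prefix(const char *str, const char *prefix)
{
	size_t len = strlen(prefix);

	return strncmp(str, prefix, len) == 0 ? len : 0;
}
```

So a branch like the one above could become
`len = str_has_prefix(field, "unsigned ");` followed by
`field = strpbrk(field + len, " ");` when len is non-zero.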


> +		type = field;
> +		field = strpbrk(field + sizeof("unsigned"), " ");
> +		goto skip_next;
> +	} else if (strstr(field, "struct ") == field) {
> +		type = field;
> +		field = strpbrk(field + sizeof("struct"), " ");
> +		is_struct = true;
> +		goto skip_next;
> +	} else if (strstr(field, "__data_loc unsigned ") == field) {
> +		type = field;
> +		field = strpbrk(field + sizeof("__data_loc unsigned"), " ");
> +		goto skip_next;
> +	} else if (strstr(field, "__data_loc ") == field) {
> +		type = field;
> +		field = strpbrk(field + sizeof("__data_loc"), " ");
> +		goto skip_next;
> +	} else if (strstr(field, "__rel_loc unsigned ") == field) {
> +		type = field;
> +		field = strpbrk(field + sizeof("__rel_loc unsigned"), " ");
> +		goto skip_next;
> +	} else if (strstr(field, "__rel_loc ") == field) {
> +		type = field;
> +		field = strpbrk(field + sizeof("__rel_loc"), " ");
> +		goto skip_next;
> +	}
> +	goto parse;
> +skip_next:
> +	if (field == NULL)
> +		return -EINVAL;
> +
> +	*field++ = 0;
> +	depth++;
> +parse:
> +	while ((part = strsep(&field, " ")) != NULL) {
> +		switch (depth++) {
> +		case FIELD_DEPTH_TYPE:
> +			type = part;
> +			break;
> +		case FIELD_DEPTH_NAME:
> +			name = part;
> +			break;
> +		case FIELD_DEPTH_SIZE:
> +			if (!is_struct)
> +				return -EINVAL;
> +
> +			if (kstrtou32(part, 10, &size))
> +				return -EINVAL;
> +			break;
> +		default:
> +			return -EINVAL;
> +		}
> +	}
> +
> +	if (depth < FIELD_DEPTH_SIZE)
> +		return -EINVAL;
> +
> +	if (depth == FIELD_DEPTH_SIZE)
> +		size = user_field_size(type);
> +
> +	if (size == 0)
> +		return -EINVAL;
> +
> +	if (size < 0)
> +		return size;
> +
> +	*offset = saved_offset + size;
> +
> +	return user_event_add_field(user, type, name, saved_offset, size,
> +				    type[0] != 'u', FILTER_OTHER);
> +}
> +
> +static void user_event_parse_flags(struct user_event *user, char *flags)
> +{
> +	char *flag;
> +
> +	if (flags == NULL)
> +		return;
> +
> +	while ((flag = strsep(&flags, ",")) != NULL) {
> +		if (strcmp(flag, "BPF_ITER") == 0)
> +			user->flags |= FLAG_BPF_ITER;
> +	}
> +}
> +
> +static int user_event_parse_fields(struct user_event *user, char *args)
> +{
> +	char *field;
> +	u32 offset = sizeof(struct trace_entry);
> +	int ret = -EINVAL;
> +
> +	if (args == NULL)
> +		return 0;
> +
> +	while ((field = strsep(&args, ";")) != NULL) {
> +		ret = user_event_parse_field(field, user, &offset);
> +
> +		if (ret)
> +			break;
> +	}
> +
> +	return ret;
> +}
> +
> +static struct trace_event_fields user_event_fields_array[] = {
> +	{}
> +};

Isn't the above just a fancy way of writing:

static struct trace_event_fields user_event_fields_array[1];

?

> +
> +static enum print_line_t user_event_print_trace(struct trace_iterator *iter,
> +						int flags,
> +						struct trace_event *event)
> +{
> +	/* Unsafe to try to decode user provided print_fmt, use hex */
> +	trace_print_hex_dump_seq(&iter->seq, "", DUMP_PREFIX_OFFSET, 16,
> +				 1, iter->ent, iter->ent_size, true);

Hmm, I need to look at this when I have time to apply the patches. (On
weekends, I only look at the patches) I'm adding this to remind me to
relook at this. ;-)

> +
> +	return trace_handle_return(&iter->seq);
> +}
> +
> +static struct trace_event_functions user_event_funcs = {
> +	.trace = user_event_print_trace,
> +};
> +
> +static int destroy_user_event(struct user_event *user)
> +{
> +	int ret = 0;
> +
> +	/* Must destroy fields before call removal */
> +	user_event_destroy_fields(user);
> +
> +	ret = trace_remove_event_call(&user->call);
> +
> +	if (ret)
> +		return ret;
> +
> +	dyn_event_remove(&user->devent);
> +
> +	register_page_data[user->index] = 0;
> +	clear_bit(user->index, page_bitmap);
> +	hash_del(&user->node);
> +
> +	kfree(EVENT_NAME(user));
> +	kfree(user);
> +
> +	return ret;
> +}
> +
> +static struct user_event *find_user_event(char *name, u32 *outkey)
> +{
> +	struct user_event *user;
> +	u32 key = user_event_key(name);
> +
> +	*outkey = key;
> +
> +	hash_for_each_possible(register_table, user, node, key)
> +		if (!strcmp(EVENT_NAME(user), name))
> +			return user;
> +
> +	return NULL;
> +}
> +
> +/*
> + * Writes the user supplied payload out to a trace file.
> + */
> +static void user_event_ftrace(struct user_event *user, void *data, u32 datalen,
> +			      void *tpdata)
> +{
> +	struct trace_event_file *file;
> +	struct trace_entry *entry;
> +	struct trace_event_buffer event_buffer;
> +
> +	file = (struct trace_event_file *)tpdata;
> +
> +	if (!file ||
> +	    !(file->flags & EVENT_FILE_FL_ENABLED) ||
> +	    trace_trigger_soft_disabled(file))
> +		return;
> +
> +	entry = trace_event_buffer_reserve(&event_buffer, file,
> +					   sizeof(*entry) + datalen);
> +
> +	if (unlikely(!entry))
> +		return;
> +

Might want to add a comment here that the trace_event_buffer_reserve()
will fill in the struct trace_entry, which explains the "entry+1" below.

I also need to add comments to trace_event_buffer_reserve() that it does so :-p
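
The idiom in question, in self-contained form (the header struct here
is only a stand-in for struct trace_entry, which the reserve call fills
in before returning):

```c
#include <assert.h>
#include <string.h>

/* Stand-in for struct trace_entry; only the layout matters here. */
struct entry_hdr {
	unsigned short type;
	int pid;
};

/* "entry + 1" is pointer arithmetic on struct entry_hdr *, so it
 * points at the first byte after the header: the payload area. */
static void write_payload(struct entry_hdr *entry,
			  const void *data, size_t datalen)
{
	memcpy(entry + 1, data, datalen);
}
```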

> +	memcpy(entry + 1, data, datalen);
> +
> +	trace_event_buffer_commit(&event_buffer);
> +}
> +
> +/*
> + * Update the register page that is shared between user processes.
> + */
> +static void update_reg_page_for(struct user_event *user)
> +{
> +	struct tracepoint *tp = &user->tracepoint;
> +	char status = 0;
> +
> +	if (atomic_read(&tp->key.enabled) > 0) {
> +		struct tracepoint_func *probe_func_ptr;
> +		user_event_func_t probe_func;
> +
> +		rcu_read_lock_sched();
> +
> +		probe_func_ptr = rcu_dereference_sched(tp->funcs);
> +
> +		if (probe_func_ptr) {
> +			do {
> +				probe_func = probe_func_ptr->func;
> +
> +				if (probe_func == user_event_ftrace)
> +					status |= EVENT_STATUS_FTRACE;
> +				else
> +					status |= EVENT_STATUS_OTHER;
> +			} while ((++probe_func_ptr)->func);
> +		}
> +
> +		rcu_read_unlock_sched();
> +	}
> +
> +	register_page_data[user->index] = status;

Should there be some kind of memory barriers here? That is, isn't this
the page that user space sees? The user space code should probably have
some kind of read memory barrier as well.
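
A userspace analogue of the pairing being asked about, using C11
atomics (the kernel side would use smp_wmb()/smp_rmb() or
release/acquire stores; the byte here models one slot of the shared
status page, and the function names are illustrative):

```c
#include <assert.h>
#include <stdatomic.h>

static _Atomic char status_byte;

/* Writer side: publish the new status with release semantics so that
 * earlier updates to the event state are visible before the flag flips. */
static void publish_status(char status)
{
	atomic_store_explicit(&status_byte, status, memory_order_release);
}

/* Reader side: the acquire load pairs with the release store, so a
 * non-zero status guarantees the state it depends on is visible. */
static char read_status(void)
{
	return atomic_load_explicit(&status_byte, memory_order_acquire);
}
```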

> +}
> +
> +/*
> + * Register callback for our events from tracing sub-systems.
> + */
> +static int user_event_reg(struct trace_event_call *call,
> +			  enum trace_reg type,
> +			  void *data)
> +{
> +	struct user_event *user = (struct user_event *)call->data;
> +	int ret = 0;
> +
> +	if (!user)
> +		return -ENOENT;
> +
> +	switch (type) {
> +	case TRACE_REG_REGISTER:
> +		ret = tracepoint_probe_register(call->tp,
> +						call->class->probe,
> +						data);
> +		if (!ret)
> +			goto inc;
> +		break;
> +
> +	case TRACE_REG_UNREGISTER:
> +		tracepoint_probe_unregister(call->tp,
> +					    call->class->probe,
> +					    data);
> +		goto dec;
> +
> +#ifdef CONFIG_PERF_EVENTS
> +	case TRACE_REG_PERF_REGISTER:
> +	case TRACE_REG_PERF_UNREGISTER:
> +	case TRACE_REG_PERF_OPEN:
> +	case TRACE_REG_PERF_CLOSE:
> +	case TRACE_REG_PERF_ADD:
> +	case TRACE_REG_PERF_DEL:
> +		break;
> +#endif
> +	}
> +
> +	return ret;
> +inc:
> +	atomic_inc(&user->refcnt);
> +	update_reg_page_for(user);
> +	return 0;
> +dec:
> +	update_reg_page_for(user);
> +	atomic_dec(&user->refcnt);
> +	return 0;
> +}
> +
> +static int user_event_create(const char *raw_command)
> +{
> +	struct user_event *user;
> +	char *name;
> +	int ret;
> +
> +	if (strstr(raw_command, USER_EVENTS_PREFIX) != raw_command)
> +		return -ECANCELED;
> +
> +	raw_command += USER_EVENTS_PREFIX_LEN;
> +	raw_command = skip_spaces(raw_command);
> +
> +	name = kstrdup(raw_command, GFP_KERNEL);

name is allocated here, so it really needs to be freed in this function
as well. I see that user_event_parse() will free it, but that is extremely
error prone to have a dependency like that. If name needs to be saved
by user_event_parse_cmd() then that should be shown in the return value
of that function. And if it fails the freeing of name should be in this
function. Also, if it is saved, then there should be a comment in this
function stating that.

> +
> +	if (!name)
> +		return -ENOMEM;
> +
> +	mutex_lock(&reg_mutex);
> +	ret = user_event_parse_cmd(name, &user);
> +	mutex_unlock(&reg_mutex);
> +
> +	return ret;
> +}
> +
> +static int user_event_show(struct seq_file *m, struct dyn_event *ev)
> +{
> +	struct user_event *user = container_of(ev, struct user_event, devent);
> +	struct ftrace_event_field *field, *next;
> +	struct list_head *head;
> +	int depth = 0;
> +
> +	seq_printf(m, "%s%s", USER_EVENTS_PREFIX, EVENT_NAME(user));
> +
> +	head = trace_get_fields(&user->call);
> +
> +	list_for_each_entry_safe_reverse(field, next, head, link) {
> +		if (depth == 0)
> +			seq_puts(m, " ");
> +		else
> +			seq_puts(m, "; ");
> +		seq_printf(m, "%s %s", field->type, field->name);
> +		depth++;
> +	}
> +
> +	seq_puts(m, "\n");
> +
> +	return 0;
> +}
> +
> +static bool user_event_is_busy(struct dyn_event *ev)
> +{
> +	struct user_event *user = container_of(ev, struct user_event, devent);
> +
> +	return atomic_read(&user->refcnt) != 0;
> +}
> +
> +static int user_event_free(struct dyn_event *ev)
> +{
> +	struct user_event *user = container_of(ev, struct user_event, devent);
> +

Shouldn't this check if the event is busy first?

> +	return destroy_user_event(user);
> +}
> +
> +static bool user_event_match(const char *system, const char *event,
> +			     int argc, const char **argv, struct dyn_event *ev)
> +{
> +	struct user_event *user = container_of(ev, struct user_event, devent);
> +
> +	return strcmp(EVENT_NAME(user), event) == 0 &&
> +		(!system || strcmp(system, USER_EVENTS_SYSTEM) == 0);
> +}
> +
> +static struct dyn_event_operations user_event_dops = {
> +	.create = user_event_create,
> +	.show = user_event_show,
> +	.is_busy = user_event_is_busy,
> +	.free = user_event_free,
> +	.match = user_event_match,
> +};
> +
> +static int user_event_trace_register(struct user_event *user)
> +{
> +	int ret;
> +
> +	ret = register_trace_event(&user->call.event);
> +
> +	if (!ret)
> +		return -ENODEV;
> +
> +	ret = trace_add_event_call(&user->call);
> +
> +	if (ret)
> +		unregister_trace_event(&user->call.event);
> +
> +	return ret;
> +}
> +
> +/*
> + * Parses the event name, arguments and flags then registers if successful.
> + */
> +static int user_event_parse(char *name, char *args, char *flags,
> +			    struct user_event **newuser)
> +{
> +	int ret;
> +	int index;
> +	u32 key;
> +	struct user_event *user = find_user_event(name, &key);
> +
> +	if (user) {
> +		*newuser = user;
> +		ret = 0;
> +		goto put_name;
> +	}
> +
> +	index = find_first_zero_bit(page_bitmap, MAX_EVENTS);
> +
> +	if (index == MAX_EVENTS) {
> +		ret = -EMFILE;
> +		goto put_name;
> +	}
> +
> +	user = kzalloc(sizeof(*user), GFP_KERNEL);
> +
> +	if (!user) {
> +		ret = -ENOMEM;
> +		goto put_name;
> +	}
> +
> +	INIT_LIST_HEAD(&user->class.fields);
> +	INIT_LIST_HEAD(&user->fields);
> +
> +	user->tracepoint.name = name;
> +
> +	user_event_parse_flags(user, flags);
> +
> +	ret = user_event_parse_fields(user, args);
> +
> +	if (ret)
> +		goto put_user;
> +
> +	/* Minimal print format */
> +	user->call.print_fmt = "\"\"";
> +
> +	user->call.data = user;
> +	user->call.class = &user->class;
> +	user->call.name = name;
> +	user->call.flags = TRACE_EVENT_FL_TRACEPOINT;
> +	user->call.tp = &user->tracepoint;
> +	user->call.event.funcs = &user_event_funcs;
> +
> +	user->class.system = USER_EVENTS_SYSTEM;
> +	user->class.fields_array = user_event_fields_array;
> +	user->class.get_fields = user_event_get_fields;
> +	user->class.reg = user_event_reg;
> +	user->class.probe = user_event_ftrace;
> +
> +	mutex_lock(&event_mutex);
> +	ret = user_event_trace_register(user);
> +	mutex_unlock(&event_mutex);
> +
> +	if (ret)
> +		goto put_user;
> +
> +	user->index = index;
> +	dyn_event_init(&user->devent, &user_event_dops);
> +	dyn_event_add(&user->devent);
> +	set_bit(user->index, page_bitmap);
> +	hash_add(register_table, &user->node, key);
> +
> +	*newuser = user;
> +	return 0;
> +put_user:
> +	user_event_destroy_fields(user);
> +	kfree(user);
> +put_name:
> +	kfree(name);
> +	return ret;
> +}
> +
> +/*
> + * Deletes a previously created event if it is no longer being used.
> + */
> +static int delete_user_event(char *name)
> +{
> +	u32 key;
> +	int ret;
> +	struct user_event *user = find_user_event(name, &key);
> +
> +	if (!user)
> +		return -ENOENT;
> +
> +	if (atomic_read(&user->refcnt) != 0)
> +		return -EBUSY;
> +
> +	mutex_lock(&event_mutex);
> +	ret = destroy_user_event(user);
> +	mutex_unlock(&event_mutex);
> +
> +	return ret;
> +}
> +
> +/*
> + * Validates the user payload and writes via iterator.
> + */
> +static ssize_t user_events_write_core(struct file *file, struct iov_iter *i)
> +{
> +	struct user_event_refs *refs;
> +	struct user_event *user = NULL;
> +	struct tracepoint *tp;
> +	ssize_t ret = i->count;
> +	int idx;
> +
> +	if (unlikely(copy_from_iter(&idx, sizeof(idx), i) != sizeof(idx)))
> +		return -EFAULT;
> +
> +	rcu_read_lock_sched();
> +
> +	refs = rcu_dereference_sched(file->private_data);
> +
> +	if (likely(refs && idx < refs->count))
> +		user = refs->events[idx];
> +
> +	rcu_read_unlock_sched();
> +
> +	if (unlikely(user == NULL))
> +		return -ENOENT;
> +
> +	tp = &user->tracepoint;

What protects user here? You released the rcu lock.

> +
> +	if (likely(atomic_read(&tp->key.enabled) > 0)) {
> +		struct tracepoint_func *probe_func_ptr;
> +		user_event_func_t probe_func;
> +		void *tpdata;
> +		void *kdata;
> +		u32 datalen;
> +
> +		kdata = kmalloc(i->count, GFP_KERNEL);

The allocation would need to be done first, before grabbing the rcu
lock, and all this would need to be done within the rcu lock, because
"tp" may not even exist anymore.

> +
> +		if (unlikely(!kdata))
> +			return -ENOMEM;
> +
> +		datalen = copy_from_iter(kdata, i->count, i);
> +
> +		rcu_read_lock_sched();
> +
> +		probe_func_ptr = rcu_dereference_sched(tp->funcs);
> +
> +		if (probe_func_ptr) {
> +			do {
> +				probe_func = probe_func_ptr->func;
> +				tpdata = probe_func_ptr->data;
> +				probe_func(user, kdata, datalen, tpdata);
> +			} while ((++probe_func_ptr)->func);
> +		}
> +
> +		rcu_read_unlock_sched();
> +
> +		kfree(kdata);
> +	}
> +
> +	return ret;
> +}
> +
> +static ssize_t user_events_write(struct file *file, const char __user *ubuf,
> +				 size_t count, loff_t *ppos)
> +{
> +	struct iovec iov;
> +	struct iov_iter i;
> +
> +	if (unlikely(*ppos != 0))
> +		return -EFAULT;
> +
> +	if (unlikely(import_single_range(READ, (char *)ubuf, count, &iov, &i)))
> +		return -EFAULT;
> +
> +	return user_events_write_core(file, &i);
> +}
> +
> +static ssize_t user_events_write_iter(struct kiocb *kp, struct iov_iter *i)
> +{
> +	return user_events_write_core(kp->ki_filp, i);
> +}
> +
> +static int user_events_ref_add(struct file *file, struct user_event *user)
> +{
> +	struct user_event_refs *refs, *new_refs;
> +	int i, size, count = 0;
> +
> +	rcu_read_lock_sched();

rcu lock is not needed, but you may want to use:

  rcu_dereference_protected()

and list the lock that protects the modifications here.

> +	refs = rcu_dereference_sched(file->private_data);
> +	rcu_read_unlock_sched();
> +
> +	if (refs) {
> +		count = refs->count;
> +
> +		for (i = 0; i < count; ++i)
> +			if (refs->events[i] == user)
> +				return i;
> +	}
> +
> +	size = sizeof(*refs) + (sizeof(struct user_event *) * (count + 1));
> +
> +	new_refs = kzalloc(size, GFP_KERNEL);
> +
> +	if (!new_refs)
> +		return -ENOMEM;
> +
> +	new_refs->count = count + 1;
> +
> +	for (i = 0; i < count; ++i)
> +		new_refs->events[i] = refs->events[i];
> +
> +	new_refs->events[i] = user;
> +
> +	atomic_inc(&user->refcnt);
> +
> +	rcu_assign_pointer(file->private_data, new_refs);
> +
> +	if (refs)
> +		kfree_rcu(refs, rcu);
> +
> +	return i;
> +}

I just skimmed the rest, and didn't see anything that stuck out.

But maybe I'll get time to look deeper at it later.

-- Steve

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace
  2021-11-04 21:34   ` kernel test robot
@ 2021-11-08  2:32     ` Masami Hiramatsu
  2021-11-08 16:59       ` Beau Belgrave
  0 siblings, 1 reply; 47+ messages in thread
From: Masami Hiramatsu @ 2021-11-08  2:32 UTC (permalink / raw)
  To: kernel test robot
  Cc: Beau Belgrave, rostedt, mhiramat, kbuild-all, linux-trace-devel,
	linux-kernel

On Fri, 5 Nov 2021 05:34:31 +0800
kernel test robot <lkp@intel.com> wrote:

> Hi Beau,
> 
> Thank you for the patch! Yet something to improve:
> 
> [auto build test ERROR on rostedt-trace/for-next]
> [also build test ERROR on shuah-kselftest/next linux/master linus/master v5.15 next-20211104]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch]
> 
> url:    https://github.com/0day-ci/linux/commits/Beau-Belgrave/user_events-Enable-user-processes-to-create-and-write-to-trace-events/20211105-010650
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace.git for-next
> config: powerpc-allmodconfig (attached as .config)
> compiler: powerpc-linux-gcc (GCC) 11.2.0
> reproduce (this is a W=1 build):
>         wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
>         chmod +x ~/bin/make.cross
>         # https://github.com/0day-ci/linux/commit/da0961ad45aa1192b47b8a80de6b17437434ae4a
>         git remote add linux-review https://github.com/0day-ci/linux
>         git fetch --no-tags linux-review Beau-Belgrave/user_events-Enable-user-processes-to-create-and-write-to-trace-events/20211105-010650
>         git checkout da0961ad45aa1192b47b8a80de6b17437434ae4a
>         # save the attached .config to linux build tree
>         mkdir build_dir
>         COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross O=build_dir ARCH=powerpc SHELL=/bin/bash kernel/trace/
> 
> If you fix the issue, kindly add following tag as appropriate
> Reported-by: kernel test robot <lkp@intel.com>
> 
> All errors (new ones prefixed by >>):
> 
>    kernel/trace/trace_events_user.c: In function 'user_event_parse':
> >> kernel/trace/trace_events_user.c:665:9: error: too few arguments to function 'dyn_event_add'
>      665 |         dyn_event_add(&user->devent);
>          |         ^~~~~~~~~~~~~
>    In file included from kernel/trace/trace_events_user.c:23:
>    kernel/trace/trace_dynevent.h:79:19: note: declared here
>       79 | static inline int dyn_event_add(struct dyn_event *ev,
>          |                   ^~~~~~~~~~~~~

You need to pass &user->call too :)

Thank you,

-- 
Masami Hiramatsu <mhiramat@kernel.org>


* Re: [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace
  2021-11-08  2:32     ` Masami Hiramatsu
@ 2021-11-08 16:59       ` Beau Belgrave
  0 siblings, 0 replies; 47+ messages in thread
From: Beau Belgrave @ 2021-11-08 16:59 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: kernel test robot, rostedt, kbuild-all, linux-trace-devel, linux-kernel

On Mon, Nov 08, 2021 at 11:32:25AM +0900, Masami Hiramatsu wrote:
> On Fri, 5 Nov 2021 05:34:31 +0800
> kernel test robot <lkp@intel.com> wrote:
> 
> > Hi Beau,
> > 
> > Thank you for the patch! Yet something to improve:
> > 
> > [auto build test ERROR on rostedt-trace/for-next]
> > [also build test ERROR on shuah-kselftest/next linux/master linus/master v5.15 next-20211104]
> > [If your patch is applied to the wrong git tree, kindly drop us a note.
> > And when submitting patch, we suggest to use '--base' as documented in
> > https://git-scm.com/docs/git-format-patch]
> > 
> > url:    https://github.com/0day-ci/linux/commits/Beau-Belgrave/user_events-Enable-user-processes-to-create-and-write-to-trace-events/20211105-010650
> > base:   https://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace.git for-next
> > config: powerpc-allmodconfig (attached as .config)
> > compiler: powerpc-linux-gcc (GCC) 11.2.0
> > reproduce (this is a W=1 build):
> >         wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
> >         chmod +x ~/bin/make.cross
> >         # https://github.com/0day-ci/linux/commit/da0961ad45aa1192b47b8a80de6b17437434ae4a
> >         git remote add linux-review https://github.com/0day-ci/linux
> >         git fetch --no-tags linux-review Beau-Belgrave/user_events-Enable-user-processes-to-create-and-write-to-trace-events/20211105-010650
> >         git checkout da0961ad45aa1192b47b8a80de6b17437434ae4a
> >         # save the attached .config to linux build tree
> >         mkdir build_dir
> >         COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross O=build_dir ARCH=powerpc SHELL=/bin/bash kernel/trace/
> > 
> > If you fix the issue, kindly add following tag as appropriate
> > Reported-by: kernel test robot <lkp@intel.com>
> > 
> > All errors (new ones prefixed by >>):
> > 
> >    kernel/trace/trace_events_user.c: In function 'user_event_parse':
> > >> kernel/trace/trace_events_user.c:665:9: error: too few arguments to function 'dyn_event_add'
> >      665 |         dyn_event_add(&user->devent);
> >          |         ^~~~~~~~~~~~~
> >    In file included from kernel/trace/trace_events_user.c:23:
> >    kernel/trace/trace_dynevent.h:79:19: note: declared here
> >       79 | static inline int dyn_event_add(struct dyn_event *ev,
> >          |                   ^~~~~~~~~~~~~
> 
> You need to pass &user->call too :)
> 
> Thank you,
> 
> -- 
> Masami Hiramatsu <mhiramat@kernel.org>

:)

Yep, these so far have been based on the perf/core branch of tip; I've moved
the next iteration over to the for-next branch of linux-trace to ensure
alignment.

Thanks,
-Beau


* Re: [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace
  2021-11-07 14:31   ` Masami Hiramatsu
@ 2021-11-08 17:13     ` Beau Belgrave
  2021-11-08 18:16       ` Steven Rostedt
  0 siblings, 1 reply; 47+ messages in thread
From: Beau Belgrave @ 2021-11-08 17:13 UTC (permalink / raw)
  To: Masami Hiramatsu; +Cc: rostedt, linux-trace-devel, linux-kernel

On Sun, Nov 07, 2021 at 11:31:15PM +0900, Masami Hiramatsu wrote:
> Hi Beau,
> 
> At first, thanks for breaking down your patch into this series!
> 
> Now I found that a suspicious security design issue in this patch.
> 
> On Thu,  4 Nov 2021 10:04:25 -0700
> Beau Belgrave <beaub@linux.microsoft.com> wrote:
> 
> > +
> > +static enum print_line_t user_event_print_trace(struct trace_iterator *iter,
> > +						int flags,
> > +						struct trace_event *event)
> > +{
> > +	/* Unsafe to try to decode user provided print_fmt, use hex */
> > +	trace_print_hex_dump_seq(&iter->seq, "", DUMP_PREFIX_OFFSET, 16,
> > +				 1, iter->ent, iter->ent_size, true);
> 
> You said this is "Unsafe to try to decode user provided" here, because
> this doesn't check the event data sanity, especially non-fixed size data.
> 
> However, it is not enough that you don't decode it here. Because synthetic
> events (histograms) and event filters need to decode this recorded data entry
> using the event format information.
> 
> This means this can cause a buffer overrun issue on the ring buffer, because
> __data_loc (and __rel_loc too) can be compromised by the user.
> 
> If you want to just trace the user events with digit parameters, there is
> a way to close this issue - support only the fixed size types (IOW, drop
> __data_loc/rel_loc support) and always checks the 'length' of the written
> data size. This ensures that those filters/synthetic events can access
> those parameters as 'values'. Maybe eprobes still has to reject the user
> events but the other parts will work well.
> 
> If you want to log some "string", it is hard. Maybe it needs a special
> check when writing the event (address, length, and null termination.),
> but it will increase the recording overhead.
> 
> Thank you,
> 
> 
> -- 
> Masami Hiramatsu <mhiramat@kernel.org>

Does that mean the decoders in eprobes/histogram don't check event
record sizes before accessing the data? Shouldn't that get fixed
centrally? That would mean a loaded module could do the same thing
(user_events only works if the user has access to tracefs, so it's not
like it's open to all users).

Is it possible to mark trace_events with a flag that says "don't trust
this"? That way eBPF, ftrace and perf would still allow recording and
decoding offline (in a safer context).

From user_events code, an entry is always allocated with enough data and
if it fails (either too big or alloc failure) the event is not written
out. This appears to be purely in the decode logic that is not within
this patch?

Thanks,
-Beau


* Re: [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace
  2021-11-08 17:13     ` Beau Belgrave
@ 2021-11-08 18:16       ` Steven Rostedt
  2021-11-08 20:25         ` Beau Belgrave
  0 siblings, 1 reply; 47+ messages in thread
From: Steven Rostedt @ 2021-11-08 18:16 UTC (permalink / raw)
  To: Beau Belgrave; +Cc: Masami Hiramatsu, linux-trace-devel, linux-kernel

On Mon, 8 Nov 2021 09:13:36 -0800
Beau Belgrave <beaub@linux.microsoft.com> wrote:


> Does that mean the decoders in eprobes/histogram don't check event
> record sizes before accessing the data? Shouldn't that get fixed
> centrally? That would mean a loaded module could do the same thing
> (user_events only works if the user has access to tracefs, so it's not
> like it's open to all users).

There's checks to make sure everything fits in eprobes and kprobes. If it
doesn't then the event is simply dropped.

For example, if you look at __eprobe_trace_func() in trace_eprobe.c, you'll
see that it calls get_eprobe_size(), which goes through and just reads what
it is about to accept. Then it reserves the amount of data on the ring
buffer, and then calls store_trace_args() which also passes in the size
that it found, in case things change. If it's too big, it only records what
it originally intended.
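That measure/reserve/clamp sequence can be sketched in plain user-space C (measure_args() and store_args() here are hypothetical stand-ins for get_eprobe_size() and store_trace_args(); the real code reserves ring-buffer space between the two passes):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* First pass: measure how much data is about to be accepted
 * (stand-in for get_eprobe_size()). */
static size_t measure_args(const char *src)
{
	return strlen(src) + 1;
}

/* Second pass: copy at most the previously reserved size
 * (stand-in for store_trace_args()).  If the source grew in
 * between, only the originally intended amount is recorded. */
static size_t store_args(char *dst, size_t reserved, const char *src)
{
	size_t len = strlen(src) + 1;

	if (len > reserved)
		len = reserved;		/* clamp to the reservation */

	memcpy(dst, src, len);
	dst[reserved - 1] = '\0';	/* keep the record NUL-terminated */
	return len;
}
```

The key property is that the second pass can never write past what the first pass reserved, even if the source data changed in between.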

-- Steve


* Re: [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace
  2021-11-07 18:18   ` Steven Rostedt
@ 2021-11-08 19:56     ` Beau Belgrave
  2021-11-08 20:53       ` Steven Rostedt
  0 siblings, 1 reply; 47+ messages in thread
From: Beau Belgrave @ 2021-11-08 19:56 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: mhiramat, linux-trace-devel, linux-kernel

On Sun, Nov 07, 2021 at 01:18:50PM -0500, Steven Rostedt wrote:
> On Thu,  4 Nov 2021 10:04:25 -0700
> Beau Belgrave <beaub@linux.microsoft.com> wrote:
> > +static int user_field_array_size(const char *type)
> > +{
> > +	const char *start = strchr(type, '[');
> > +	int size = 0;
> > +
> > +	if (start == NULL)
> > +		return -EINVAL;
> > +
> > +	start++;
> > +
> > +	while (*start >= '0' && *start <= '9')
> 
> The kernel has include/linux/ctype.h
> 
> 	while (isdigit(*start))
> 
> > +		size = (size * 10) + (*start++ - '0');
> 
> So you only allow decimal digits? No hex?
> 

Happy to change it, I only expected decimal to be allowed.

Is there a strong need for hex? (IE: kstrtouint(start, 0, ...))?

> Also, is there anything to check if the size overflows?
> 
> I'm assuming that other patches will add checking if the size is
> greater than some amount?
> 

I can switch to kstrtouint and use a max check, however:
The max checks are weird here because eBPF has no limits, while ftrace 
and perf both do (and I believe they are different max sizes?)

If someone really wanted a large array of characters and it can fit
within perf but not ftrace, is it correct to block it here? My thinking
was to allow each trace buffer reserve call to either work or not based
on what the user requested depending on what was hooked.

No strong opinion here though, just thoughts about what is a reasonable
max given the 3 technologies.
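A bounded parse along these lines can be sketched in user-space C, with strtoul() standing in for kstrtouint() (base 0 accepts both decimal and hex; the cap is purely illustrative, not a value from the patch):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>
#include <string.h>

#define MAX_ARRAY_SIZE 1024	/* illustrative cap, not from the patch */

/* Parse "type[size]"; accept decimal or 0x-prefixed hex via base 0.
 * Returns the size, or -EINVAL on malformed or oversized input. */
static int field_array_size(const char *type)
{
	const char *start = strchr(type, '[');
	char *end;
	unsigned long size;

	if (!start)
		return -EINVAL;

	errno = 0;
	size = strtoul(start + 1, &end, 0);

	if (errno || end == start + 1 || *end != ']')
		return -EINVAL;

	if (size == 0 || size > MAX_ARRAY_SIZE)
		return -EINVAL;

	return (int)size;
}
```

Note that negative or overflowing values wrap to large unsigned numbers and are rejected by the cap, so the overflow concern above is handled as a side effect.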

> > +/*
> > + * Parses the values of a field within the description
> > + * Format: type name [size]
> > + */
> > +static int user_event_parse_field(char *field, struct user_event *user,
> > +				  u32 *offset)
> > +{
> > +	char *part, *type, *name;
> > +	u32 depth = 0, saved_offset = *offset;
> > +	int size = -EINVAL;
> > +	bool is_struct = false;
> > +
> > +	field = skip_spaces(field);
> > +
> > +	if (*field == 0)
> > +		return 0;
> > +
> > +	/* Handle types that have a space within */
> > +	if (strstr(field, "unsigned ") == field) {
> 
> These should use str_has_prefix(field, "unsigned ") etc.
> 
> It also returns the length of the prefix so you don't need to add
> sizeof() of the string you checked for.
> 

Nice, will use that.
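For reference, the kernel helper returns the prefix length on a match and 0 otherwise; a minimal user-space equivalent shows the shape:

```c
#include <assert.h>
#include <string.h>

/* Minimal user-space equivalent of the kernel's str_has_prefix():
 * returns the length of @prefix when @str starts with it, else 0. */
static size_t str_has_prefix(const char *str, const char *prefix)
{
	size_t len = strlen(prefix);

	return strncmp(str, prefix, len) == 0 ? len : 0;
}
```

The returned length lets the parser test for and skip the prefix in one step, e.g. `len = str_has_prefix(field, "unsigned "); if (len) field += len;`, with no separate sizeof.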

> > +
> > +static struct trace_event_fields user_event_fields_array[] = {
> > +	{}
> > +};
> 
> Isn't the above just a fancy way of writing:
> 
> static struct trace_event_fields user_event_fields_array[1];
> 
> ?
> 

Yes, as long as it gets init'd to zero. My understanding was that {}
would force zeroing, but I could totally be wrong.

> > +/*
> > + * Writes the user supplied payload out to a trace file.
> > + */
> > +static void user_event_ftrace(struct user_event *user, void *data, u32 datalen,
> > +			      void *tpdata)
> > +{
> > +	struct trace_event_file *file;
> > +	struct trace_entry *entry;
> > +	struct trace_event_buffer event_buffer;
> > +
> > +	file = (struct trace_event_file *)tpdata;
> > +
> > +	if (!file ||
> > +	    !(file->flags & EVENT_FILE_FL_ENABLED) ||
> > +	    trace_trigger_soft_disabled(file))
> > +		return;
> > +
> > +	entry = trace_event_buffer_reserve(&event_buffer, file,
> > +					   sizeof(*entry) + datalen);
> > +
> > +	if (unlikely(!entry))
> > +		return;
> > +
> 
> Might want to add a comment here that the trace_event_buffer_reserve()
> will fill in the struct trace_entry, which explains the "entry+1" below.
> 
> I also need to add comments to trace_event_buffer_reserve() that it does so :-p
> 

Will do.

> > +/*
> > + * Update the register page that is shared between user processes.
> > + */
> > +static void update_reg_page_for(struct user_event *user)
> > +{
> > +	struct tracepoint *tp = &user->tracepoint;
> > +	char status = 0;
> > +
> > +	if (atomic_read(&tp->key.enabled) > 0) {
> > +		struct tracepoint_func *probe_func_ptr;
> > +		user_event_func_t probe_func;
> > +
> > +		rcu_read_lock_sched();
> > +
> > +		probe_func_ptr = rcu_dereference_sched(tp->funcs);
> > +
> > +		if (probe_func_ptr) {
> > +			do {
> > +				probe_func = probe_func_ptr->func;
> > +
> > +				if (probe_func == user_event_ftrace)
> > +					status |= EVENT_STATUS_FTRACE;
> > +				else
> > +					status |= EVENT_STATUS_OTHER;
> > +			} while ((++probe_func_ptr)->func);
> > +		}
> > +
> > +		rcu_read_unlock_sched();
> > +	}
> > +
> > +	register_page_data[user->index] = status;
> 
> Should there be some kind of memory barriers here? That is, isn't this
> the page that user space sees? The user space code should probably have
> some kind of read memory barrier as well.
> 

I'm glad you brought this up. I wanted to ensure a balance between
eventual enablement of the event in the user mode process vs the cost
of simultaneous enablement of the event (stalls, etc).

We haven't seen this become an issue for our teams in our other
telemetry sources (with no barriers), which seems to indicate eventual
agreement of the page data works well as a tradeoff.
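For context, a user-space reader of the mmap'd status page with an explicit acquire barrier might look like this (a sketch; the flag values are assumptions here, and dropping the barrier gives the eventual-consistency behavior just described):

```c
#include <assert.h>

/* Illustrative flag values; the real ones come from the user_events
 * UAPI header in this series. */
#define EVENT_STATUS_FTRACE (1 << 0)
#define EVENT_STATUS_OTHER  (1 << 7)

/* Check the mmap'd status page for an event.  The acquire load would
 * pair with a release store on the kernel side; with plain loads the
 * reader still converges eventually, which is the tradeoff described
 * above. */
static int event_enabled(const char *status_page, int status_index)
{
	return __atomic_load_n(&status_page[status_index],
			       __ATOMIC_ACQUIRE) != 0;
}
```

(`__atomic_load_n` is the GCC/Clang builtin; on x86 an acquire load of a byte compiles to a plain load anyway, which is one reason the barrier-free variant works out in practice.)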

> > +static int user_event_create(const char *raw_command)
> > +{
> > +	struct user_event *user;
> > +	char *name;
> > +	int ret;
> > +
> > +	if (strstr(raw_command, USER_EVENTS_PREFIX) != raw_command)
> > +		return -ECANCELED;
> > +
> > +	raw_command += USER_EVENTS_PREFIX_LEN;
> > +	raw_command = skip_spaces(raw_command);
> > +
> > +	name = kstrdup(raw_command, GFP_KERNEL);
> 
> name is allocated here, it really needs to be freed in this function as
> well. I see that user_event_parse() will free it, but that is extremely
> error prone to have a dependency like that. If name needs to be saved
> by user_event_parse_cmd() then that should be shown in the return value
> of that function. And if it fails the freeing of name should be in this
> function. Also, if it is saved, then there should be a comment in this
> function stating that.
> 

It's a bit tricky, because if the event already exists, the name is
freed. If the function fails, the name is freed. If the event has
never been seen before then it is saved.

I'll try to make this clearer; my thought is to add an explicit
out-argument that tells the caller whether the string should be freed,
so the ownership is obvious while reading the code.

> > +static bool user_event_is_busy(struct dyn_event *ev)
> > +{
> > +	struct user_event *user = container_of(ev, struct user_event, devent);
> > +
> > +	return atomic_read(&user->refcnt) != 0;
> > +}
> > +
> > +static int user_event_free(struct dyn_event *ev)
> > +{
> > +	struct user_event *user = container_of(ev, struct user_event, devent);
> > +
> 
> Shouldn't this check if the event is busy first?

Yes, you are right. In the release-all case, busy is checked by
dyn_events for me. However, in the individual case it is not. I'll fix
this.

> > +/*
> > + * Validates the user payload and writes via iterator.
> > + */
> > +static ssize_t user_events_write_core(struct file *file, struct iov_iter *i)
> > +{
> > +	struct user_event_refs *refs;
> > +	struct user_event *user = NULL;
> > +	struct tracepoint *tp;
> > +	ssize_t ret = i->count;
> > +	int idx;
> > +
> > +	if (unlikely(copy_from_iter(&idx, sizeof(idx), i) != sizeof(idx)))
> > +		return -EFAULT;
> > +
> > +	rcu_read_lock_sched();
> > +
> > +	refs = rcu_dereference_sched(file->private_data);
> > +
> > +	if (likely(refs && idx < refs->count))
> > +		user = refs->events[idx];
> > +
> > +	rcu_read_unlock_sched();
> > +
> > +	if (unlikely(user == NULL))
> > +		return -ENOENT;
> > +
> > +	tp = &user->tracepoint;
> 
> What protects user here? You released the rcu lock.

user is ref counted by the file, so before the final close all events
linked to the file via the reg ioctl are ref'd and cannot go away.

> 
> > +
> > +	if (likely(atomic_read(&tp->key.enabled) > 0)) {
> > +		struct tracepoint_func *probe_func_ptr;
> > +		user_event_func_t probe_func;
> > +		void *tpdata;
> > +		void *kdata;
> > +		u32 datalen;
> > +
> > +		kdata = kmalloc(i->count, GFP_KERNEL);
> 
> The allocation would need to be done first, before grabbing the rcu
> lock, and all this would need to be done within the rcu lock, because
> "tp" may not even exist anymore.

The tracepoint can only get removed after all refs of an event become
zero. It cannot happen when an outstanding write is occurring. The only
way to delete the tracepoint is via the delete ioctl which explicitly
checks the ref counts.

Minus the bug you found in dyn_event free not checking busy, I don't
believe this case is possible and does not require rcu lock since the
ref count is protecting it from going away during this method.

> > +static int user_events_ref_add(struct file *file, struct user_event *user)
> > +{
> > +	struct user_event_refs *refs, *new_refs;
> > +	int i, size, count = 0;
> > +
> > +	rcu_read_lock_sched();
> 
> rcu lock is not needed, but you may want to use:
> 
>   rcu_dereference_protected()
> 
> and list the lock that protects the modifications here.
> 

Sure, I put them here to make it clear, I can change it up and add a
comment instead.

> 
> I just skimmed the rest, and didn't see anything that stuck out.
> 
> But maybe I'll get time to look deeper at it later.
> 
> -- Steve

Thank you for the review, I appreciate it!

Thanks,
-Beau


* Re: [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace
  2021-11-08 18:16       ` Steven Rostedt
@ 2021-11-08 20:25         ` Beau Belgrave
  2021-11-08 21:00           ` Steven Rostedt
  2021-11-09  2:56           ` Masami Hiramatsu
  0 siblings, 2 replies; 47+ messages in thread
From: Beau Belgrave @ 2021-11-08 20:25 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: Masami Hiramatsu, linux-trace-devel, linux-kernel

On Mon, Nov 08, 2021 at 01:16:39PM -0500, Steven Rostedt wrote:
> On Mon, 8 Nov 2021 09:13:36 -0800
> Beau Belgrave <beaub@linux.microsoft.com> wrote:
> 
> 
> > Does that mean the decoders in eprobes/histogram don't check event
> > record sizes before accessing the data? Shouldn't that get fixed
> > centrally? That would mean a loaded module could do the same thing
> > (user_events only works if the user has access to tracefs, so it's not
> > like it's open to all users).
> 
> There's checks to make sure everything fits in eprobes and kprobes. If it
> doesn't then the event is simply dropped.
> 
> For example, if you look at __eprobe_trace_func() in trace_eprobe.c, you'll
> see that it calls get_eprobe_size(), which goes through and just reads what
> it is about to accept. Then it reserves the amount of data on the ring
> buffer, and then calls store_trace_args() which also passes in the size
> that it found, in case things change. If it's too big, it only records what
> it originally intended.
> 
> -- Steve

It seems there are 2 concerns:
1. If data comes in and it's not in the size that is specified, it's
suspicious and should either be truncated or ignored. Maybe ignore if
under, truncate if over.

2. If the data is more than specified, it must be checked to see if
there are __data_loc / __rel_loc entries and they must be validated as
within range of accepted limits. If there are no __data_loc / __rel_loc
it should either be truncated or ignored.

Is there more that I may have missed?
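Concern 2 boils down to a bounds check of each entry before it is dereferenced, roughly as follows (a sketch, assuming the usual ftrace encoding of a 16-bit offset in the low half and a 16-bit length in the high half of the __data_loc u32):

```c
#include <assert.h>
#include <stdint.h>

/* __data_loc convention: low 16 bits = offset from the start of the
 * entry, high 16 bits = length of the dynamic data. */
static int data_loc_in_bounds(uint32_t data_loc, uint32_t record_size)
{
	uint32_t offset = data_loc & 0xffff;
	uint32_t len = data_loc >> 16;

	/* Reject any entry whose payload would run past the record. */
	return offset <= record_size && len <= record_size - offset;
}
```

The subtraction order matters: checking `offset <= record_size` first means `record_size - offset` cannot wrap, so a compromised offset/length pair cannot pass the check.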

I'd like to know whether, if I do fix them, features like filtering will
still be available to user_events, or if it's better to just add flags to
disable kernel filtering.

I'm still unsure this is limited to just user_events.

For example, why don't filter_pred_strloc and filter_pred_pchar in
trace_events_filter.c check the boundary it will be accessing?

It seems like tracepoints from kernel modules, while more trusted, can also
cause this kind of thing due to bugs, etc.

Thanks,
-Beau


* Re: [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace
  2021-11-08 19:56     ` Beau Belgrave
@ 2021-11-08 20:53       ` Steven Rostedt
  2021-11-08 21:15         ` Beau Belgrave
  0 siblings, 1 reply; 47+ messages in thread
From: Steven Rostedt @ 2021-11-08 20:53 UTC (permalink / raw)
  To: Beau Belgrave; +Cc: mhiramat, linux-trace-devel, linux-kernel

On Mon, 8 Nov 2021 11:56:42 -0800
Beau Belgrave <beaub@linux.microsoft.com> wrote:

> On Sun, Nov 07, 2021 at 01:18:50PM -0500, Steven Rostedt wrote:
> > On Thu,  4 Nov 2021 10:04:25 -0700
> > Beau Belgrave <beaub@linux.microsoft.com> wrote:  
> > > +static int user_field_array_size(const char *type)
> > > +{
> > > +	const char *start = strchr(type, '[');
> > > +	int size = 0;
> > > +
> > > +	if (start == NULL)
> > > +		return -EINVAL;
> > > +
> > > +	start++;
> > > +
> > > +	while (*start >= '0' && *start <= '9')  
> > 
> > The kernel has include/linux/ctype.h
> > 
> > 	while (isdigit(*start))
> >   
> > > +		size = (size * 10) + (*start++ - '0');  
> > 
> > So you only allow decimal digits? No hex?
> >   
> 
> Happy to change it, I only expected decimal to be allowed.

I'm more worried that the output of the "registered events" may have a
hex number, and that someone may use that output to recreate the event.

> 
> Is there a strong need for hex? (IE: kstrtouint(start, 0, ...))?
> 
> > Also, is there anything to check if the size overflows?
> > 
> > I'm assuming that other patches will add checking if the size is
> > greater than some amount?
> >   
> 
> I can switch to kstrtouint and use a max check, however:
> The max checks are weird here because eBPF has no limits, while ftrace 
> and perf both do (and I believe they are different max sizes?)

I'm not concerned about taking up more than ftrace or perf, but having some
kind of DOS that is caused by huge allocations. If you make a huge event
and try to record it in ftrace, then ftrace will simply drop the event. No
harm done (except you won't see the event).

> 
> If someone really wanted a large array of characters and it can fit
> within perf but not ftrace, is it correct to block it here? My thinking
> was to allow each trace buffer reserve call to either work or not based
> on what the user requested depending on what was hooked.
> 
> No strong opinion here though, just thoughts about what is a reasonable
> max given the 3 technologies.

Again, I'm more worried about just a general 'this is way too big' thing.
If anything, it could simply be to flag a bug where the event is created
via some logic that goes crazy.

> 
> > > +/*
> > > + * Parses the values of a field within the description
> > > + * Format: type name [size]
> > > + */
> > > +static int user_event_parse_field(char *field, struct user_event *user,
> > > +				  u32 *offset)
> > > +{
> > > +	char *part, *type, *name;
> > > +	u32 depth = 0, saved_offset = *offset;
> > > +	int size = -EINVAL;
> > > +	bool is_struct = false;
> > > +
> > > +	field = skip_spaces(field);
> > > +
> > > +	if (*field == 0)
> > > +		return 0;
> > > +
> > > +	/* Handle types that have a space within */
> > > +	if (strstr(field, "unsigned ") == field) {  
> > 
> > These should use str_has_prefix(field, "unsigned ") etc.
> > 
> > It also returns the length of the prefix so you don't need to add
> > sizeof() of the string you checked for.
> >   
> 
> Nice, will use that.
> 
> > > +
> > > +static struct trace_event_fields user_event_fields_array[] = {
> > > +	{}
> > > +};  
> > 
> > Isn't the above just a fancy way of writing:
> > 
> > static struct trace_event_fields user_event_fields_array[1];
> > 
> > ?
> >   
> 
> Yes, as long as it gets init'd to zero. My understanding was that {}
> would force zeroing, but I could totally be wrong.

All static and global variables that are not assigned will be initialized
to zero. No need to initialize anything that is zero by default.

> 
> > > +/*
> > > + * Writes the user supplied payload out to a trace file.
> > > + */
> > > +static void user_event_ftrace(struct user_event *user, void *data, u32 datalen,
> > > +			      void *tpdata)
> > > +{
> > > +	struct trace_event_file *file;
> > > +	struct trace_entry *entry;
> > > +	struct trace_event_buffer event_buffer;
> > > +
> > > +	file = (struct trace_event_file *)tpdata;
> > > +
> > > +	if (!file ||
> > > +	    !(file->flags & EVENT_FILE_FL_ENABLED) ||
> > > +	    trace_trigger_soft_disabled(file))
> > > +		return;
> > > +
> > > +	entry = trace_event_buffer_reserve(&event_buffer, file,
> > > +					   sizeof(*entry) + datalen);
> > > +
> > > +	if (unlikely(!entry))
> > > +		return;
> > > +  
> > 
> > Might want to add a comment here that the trace_event_buffer_reserve()
> > will fill in the struct trace_entry, which explains the "entry+1" below.
> > 
> > I also need to add comments to trace_event_buffer_reserve() that it does so :-p
> >   
> 
> Will do.
> 
> > > +/*
> > > + * Update the register page that is shared between user processes.
> > > + */
> > > +static void update_reg_page_for(struct user_event *user)
> > > +{
> > > +	struct tracepoint *tp = &user->tracepoint;
> > > +	char status = 0;
> > > +
> > > +	if (atomic_read(&tp->key.enabled) > 0) {
> > > +		struct tracepoint_func *probe_func_ptr;
> > > +		user_event_func_t probe_func;
> > > +
> > > +		rcu_read_lock_sched();
> > > +
> > > +		probe_func_ptr = rcu_dereference_sched(tp->funcs);
> > > +
> > > +		if (probe_func_ptr) {
> > > +			do {
> > > +				probe_func = probe_func_ptr->func;
> > > +
> > > +				if (probe_func == user_event_ftrace)
> > > +					status |= EVENT_STATUS_FTRACE;
> > > +				else
> > > +					status |= EVENT_STATUS_OTHER;
> > > +			} while ((++probe_func_ptr)->func);
> > > +		}
> > > +
> > > +		rcu_read_unlock_sched();
> > > +	}
> > > +
> > > +	register_page_data[user->index] = status;  
> > 
> > Should there be some kind of memory barriers here? That is, isn't this
> > the page that user space sees? The user space code should probably have
> > some kind of read memory barrier as well.
> >   
> 
> I'm glad you brought this up. I wanted to ensure a balance between
> eventual enablement of the event in the user mode process vs the cost
> of simultaneous enablement of the event (stalls, etc).
> 
> We haven't seen this become an issue for our teams in our other
> telemetry sources (with no barriers), which seems to indicate eventual
> agreement of the page data works well as a tradeoff.

Another approach: if you know what tasks these events come from (who
registered them), then you could simply send an IPI to the task if it
happens to be running. An IPI will force a memory barrier, and since you
only need to do this on changes (enable / disable event) it's not like it
will happen often.

> 
> > > +static int user_event_create(const char *raw_command)
> > > +{
> > > +	struct user_event *user;
> > > +	char *name;
> > > +	int ret;
> > > +
> > > +	if (strstr(raw_command, USER_EVENTS_PREFIX) != raw_command)
> > > +		return -ECANCELED;
> > > +
> > > +	raw_command += USER_EVENTS_PREFIX_LEN;
> > > +	raw_command = skip_spaces(raw_command);
> > > +
> > > +	name = kstrdup(raw_command, GFP_KERNEL);  
> > 
> > name is allocated here, it really needs to be freed in this function as
> > well. I see that user_event_parse() will free it, but that is extremely
> > error prone to have a dependency like that. If name needs to be saved
> > by user_event_parse_cmd() then that should be shown in the return value
> > of that function. And if it fails the freeing of name should be in this
> > function. Also, if it is saved, then there should be a comment in this
> > function stating that.
> >   
> 
> It's a bit tricky, because if the event already exists, the name is
> freed. If the function fails, the name is freed. If the event has
> never been seen before then it is saved.

I would then suggest freeing the name from the calling function only if it
succeeds and the name already exists, with a comment stating that.

But I would have the caller free it on failure. That is, you could change
user_event_parse() to have:

	struct user_event *user = find_user_event(name, &key);

	if (user) {
		*newuser = user;
		/*
		 * The name is allocated by the caller, but since it
		 * already exists in user, simply free it here.
		 */
		kfree(name);
		return 0;
	}

And remove all the "put_name" jumps.

> 
> I'll try to make this more clear, my thought is to add an explicit
> argument that gets set back to the caller if the string should be freed
> or not to make this clear while reading the code.
> 
> > > +static bool user_event_is_busy(struct dyn_event *ev)
> > > +{
> > > +	struct user_event *user = container_of(ev, struct user_event, devent);
> > > +
> > > +	return atomic_read(&user->refcnt) != 0;
> > > +}
> > > +
> > > +static int user_event_free(struct dyn_event *ev)
> > > +{
> > > +	struct user_event *user = container_of(ev, struct user_event, devent);
> > > +  
> > 
> > Shouldn't this check if the event is busy first?  
> 
> Yes, you are right. In the release all case busy is checked by
> dyn_events for me. However, in the individual case it is not. I'll fix
> this.
> 
> > > +/*
> > > + * Validates the user payload and writes via iterator.
> > > + */
> > > +static ssize_t user_events_write_core(struct file *file, struct iov_iter *i)
> > > +{
> > > +	struct user_event_refs *refs;
> > > +	struct user_event *user = NULL;
> > > +	struct tracepoint *tp;
> > > +	ssize_t ret = i->count;
> > > +	int idx;
> > > +
> > > +	if (unlikely(copy_from_iter(&idx, sizeof(idx), i) != sizeof(idx)))
> > > +		return -EFAULT;
> > > +
> > > +	rcu_read_lock_sched();
> > > +
> > > +	refs = rcu_dereference_sched(file->private_data);
> > > +
> > > +	if (likely(refs && idx < refs->count))
> > > +		user = refs->events[idx];
> > > +
> > > +	rcu_read_unlock_sched();
> > > +
> > > +	if (unlikely(user == NULL))
> > > +		return -ENOENT;
> > > +
> > > +	tp = &user->tracepoint;  
> > 
> > What protects user here? You released the rcu lock.  
> 
> user is ref counted by the file, so before the final close all events
> linked to the file via the reg ioctl are ref'd and cannot go away.


If that's the case, then why the rcu_read_lock_sched() around the assignment
of user?

-- Steve


> 
> >   
> > > +
> > > +	if (likely(atomic_read(&tp->key.enabled) > 0)) {
> > > +		struct tracepoint_func *probe_func_ptr;
> > > +		user_event_func_t probe_func;
> > > +		void *tpdata;
> > > +		void *kdata;
> > > +		u32 datalen;
> > > +
> > > +		kdata = kmalloc(i->count, GFP_KERNEL);  
> 

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace
  2021-11-08 20:25         ` Beau Belgrave
@ 2021-11-08 21:00           ` Steven Rostedt
  2021-11-08 22:09             ` Beau Belgrave
  2021-11-09  2:56           ` Masami Hiramatsu
  1 sibling, 1 reply; 47+ messages in thread
From: Steven Rostedt @ 2021-11-08 21:00 UTC (permalink / raw)
  To: Beau Belgrave; +Cc: Masami Hiramatsu, linux-trace-devel, linux-kernel

On Mon, 8 Nov 2021 12:25:27 -0800
Beau Belgrave <beaub@linux.microsoft.com> wrote:

> It seems there are 2 concerns:
> 1. If data comes in and it's not in the size that is specified, it's
> suspicious and should either be truncated or ignored. Maybe under
> ignore, over truncate.
> 
> 2. If the data is more than specified, it must be checked to see if
> there are __data_loc / __rel_loc entries and they must be validated as
> within range of accepted limits. If there are no __data_loc / __rel_loc
> it should either be truncated or ignored.
> 
> Is there more that I may have missed?
> 
> I'd like to know if I do fix them that the features like filtering will still
> be available to user_events or if it's better to just add flags to disable
> kernel filtering?

If these are "user defined" then perhaps we add a wrapper to the filtering
that is called instead of the normal filtering for these events, one that
verifies the fields of the events being filtered are located on the ring
buffer. Although, strings and such are rare or slow enough in filtering
that we could afford to make sure the content is still on the buffer that
is being filtered.

> 
> I'm still unsure this is limited to just user_events.
> 
> For example, why doesn't filter_pred_strloc and filter_pred_pchar in
> trace_events_filter.c check the boundary it will be accessing?
> 
> It seems like tracepoints from kernel modules, while more trusted, can also
> cause this kind of thing due to bugs, etc.

Yes, it's the fact that the code is created in the kernel, and the only way
that the filtering could be out of bounds is if there's a bug in the
kernel. We'd rather not check for that if it slows down the tracing. But
perhaps if we can show that the checks are only done for dynamic strings
and arrays, that it doesn't cause noticeable overhead, it may be fine to
keep for all events.

-- Steve



* Re: [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace
  2021-11-08 20:53       ` Steven Rostedt
@ 2021-11-08 21:15         ` Beau Belgrave
  0 siblings, 0 replies; 47+ messages in thread
From: Beau Belgrave @ 2021-11-08 21:15 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: mhiramat, linux-trace-devel, linux-kernel

On Mon, Nov 08, 2021 at 03:53:59PM -0500, Steven Rostedt wrote:
> On Mon, 8 Nov 2021 11:56:42 -0800
> Beau Belgrave <beaub@linux.microsoft.com> wrote:
> 
> > On Sun, Nov 07, 2021 at 01:18:50PM -0500, Steven Rostedt wrote:
> > > On Thu,  4 Nov 2021 10:04:25 -0700
> > > Beau Belgrave <beaub@linux.microsoft.com> wrote:  
> > > > +static int user_field_array_size(const char *type)
> > > > +{
> > > > +	const char *start = strchr(type, '[');
> > > > +	int size = 0;
> > > > +
> > > > +	if (start == NULL)
> > > > +		return -EINVAL;
> > > > +
> > > > +	start++;
> > > > +
> > > > +	while (*start >= '0' && *start <= '9')  
> > > 
> > > The kernel has include/linux/ctype.h
> > > 
> > > 	while (isdigit(*start))
> > >   
> > > > +		size = (size * 10) + (*start++ - '0');  
> > > 
> > > So you only allow decimal digits? No hex?
> > >   
> > 
> > Happy to change it, I only expected decimal to be allowed.
> 
> I'm more worried that if the output of the "registered events" may have a
> hex number, and someone uses that output to recreate the event.
> 

It seems there is precedent for this in synthetic events, so I will go
ahead and ensure hex works as well.

> > 
> > Is there a strong need for hex? (IE: kstrtouint(start, 0, ...)?
> > 
> > > Also, is there anything to check if the size overflows?
> > > 
> > > I'm assuming that other patches will add checking if the size is
> > > greater than some amount?
> > >   
> > 
> > I can switch to kstrtouint and use a max check, however:
> > The max checks are weird here because eBPF has no limits, while ftrace 
> > and perf both do (and I believe they are different max sizes?)
> 
> I'm not concerned about taking up more than ftrace or perf, but having some
> kind of DOS that is caused by hugh allocations. If you make a huge event
> and try to record it in ftrace, then ftrace will simply drop the event. No
> harm done (except you won't see the event).
> 

The later patches in the series avoid the alloc altogether (and pass
the iter through to the probes). eBPF still has an alloc there, but it
is size limited to a single page to handle this case.

The huge alloc case could happen if a user just put in a lot of data in
the first patch. However, with the alloc being removed in the later
patches, this won't occur.

> > 
> > If someone really wanted a large array of characters and it can fit
> > within perf but not ftrace, is it correct to block it here? My thinking
> > was to allow each trace buffer reserve call to either work or not based
> > on what the user requested depending on what was hooked.
> > 
> > No strong opinion here though, just thoughts about what is a reasonable
> > max given the 3 technologies.
> 
> Again, I'm more worried about just a general 'this is way too big' thing.
> If anything, it could be simply to flag a bug where the event in created
> via some logic that goes crazy.
> 

Got it, I believe putting in a large upper bound, like 2*PAGE_SIZE, should
cut that stuff out.
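A userspace sketch of the combined change (hex support via base-0 parsing
plus an upper bound) might look like the following; strtoul stands in for
kstrtouint, and HYPOTHETICAL_MAX_ARRAY is an assumed 2 * PAGE_SIZE cap,
not a value taken from the patch:

```c
#include <errno.h>
#include <stdlib.h>
#include <string.h>

#define HYPOTHETICAL_MAX_ARRAY (2 * 4096)	/* e.g. 2 * PAGE_SIZE */

/* Parse the size out of a type such as "char[20]" or "char[0x14]". */
static int field_array_size(const char *type)
{
	const char *start = strchr(type, '[');
	char *end;
	unsigned long size;

	if (!start)
		return -EINVAL;

	errno = 0;
	size = strtoul(start + 1, &end, 0);	/* base 0: decimal or 0x hex */

	/* Reject overflow, empty brackets, and trailing junk. */
	if (errno || end == start + 1 || *end != ']')
		return -EINVAL;

	/* Flag runaway sizes before they become huge allocations. */
	if (size == 0 || size > HYPOTHETICAL_MAX_ARRAY)
		return -EINVAL;

	return (int)size;
}
```

The base-0 parse keeps round-tripping working even if the registered-event
output ever prints a hex size.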

> > 
> > > > +/*
> > > > + * Parses the values of a field within the description
> > > > + * Format: type name [size]
> > > > + */
> > > > +static int user_event_parse_field(char *field, struct user_event *user,
> > > > +				  u32 *offset)
> > > > +{
> > > > +	char *part, *type, *name;
> > > > +	u32 depth = 0, saved_offset = *offset;
> > > > +	int size = -EINVAL;
> > > > +	bool is_struct = false;
> > > > +
> > > > +	field = skip_spaces(field);
> > > > +
> > > > +	if (*field == 0)
> > > > +		return 0;
> > > > +
> > > > +	/* Handle types that have a space within */
> > > > +	if (strstr(field, "unsigned ") == field) {  
> > > 
> > > These should use str_has_prefix(field, "unsigned ") etc.
> > > 
> > > It also returns the length of the prefix so you don't need to add
> > > sizeof() of the string you checked for.
> > >   
> > 
> > Nice, will use that.
> > 
> > > > +
> > > > +static struct trace_event_fields user_event_fields_array[] = {
> > > > +	{}
> > > > +};  
> > > 
> > > Isn't the above just a fancy way of writing:
> > > 
> > > static struct trace_event_fields user_event_fields_array[1];
> > > 
> > > ?
> > >   
> > 
> > Yes, as long as it gets init'd to zero. My understanding was that {}
> > would force zeroing, but I could totally be wrong.
> 
> All static and global variables that are not assigned will be initialized
> to zero. No need to initialize anything that is zero by default.
> 
> > 
> > > > +/*
> > > > + * Writes the user supplied payload out to a trace file.
> > > > + */
> > > > +static void user_event_ftrace(struct user_event *user, void *data, u32 datalen,
> > > > +			      void *tpdata)
> > > > +{
> > > > +	struct trace_event_file *file;
> > > > +	struct trace_entry *entry;
> > > > +	struct trace_event_buffer event_buffer;
> > > > +
> > > > +	file = (struct trace_event_file *)tpdata;
> > > > +
> > > > +	if (!file ||
> > > > +	    !(file->flags & EVENT_FILE_FL_ENABLED) ||
> > > > +	    trace_trigger_soft_disabled(file))
> > > > +		return;
> > > > +
> > > > +	entry = trace_event_buffer_reserve(&event_buffer, file,
> > > > +					   sizeof(*entry) + datalen);
> > > > +
> > > > +	if (unlikely(!entry))
> > > > +		return;
> > > > +  
> > > 
> > > Might want to add a comment here that the trace_event_buffer_reserve()
> > > will fill in the struct trace_entry, which explains the "entry+1" below.
> > > 
> > > I also need to add comments to trace_event_buffer_reserve() that it does so :-p
> > >   
> > 
> > Will do.
> > 
> > > > +/*
> > > > + * Update the register page that is shared between user processes.
> > > > + */
> > > > +static void update_reg_page_for(struct user_event *user)
> > > > +{
> > > > +	struct tracepoint *tp = &user->tracepoint;
> > > > +	char status = 0;
> > > > +
> > > > +	if (atomic_read(&tp->key.enabled) > 0) {
> > > > +		struct tracepoint_func *probe_func_ptr;
> > > > +		user_event_func_t probe_func;
> > > > +
> > > > +		rcu_read_lock_sched();
> > > > +
> > > > +		probe_func_ptr = rcu_dereference_sched(tp->funcs);
> > > > +
> > > > +		if (probe_func_ptr) {
> > > > +			do {
> > > > +				probe_func = probe_func_ptr->func;
> > > > +
> > > > +				if (probe_func == user_event_ftrace)
> > > > +					status |= EVENT_STATUS_FTRACE;
> > > > +				else
> > > > +					status |= EVENT_STATUS_OTHER;
> > > > +			} while ((++probe_func_ptr)->func);
> > > > +		}
> > > > +
> > > > +		rcu_read_unlock_sched();
> > > > +	}
> > > > +
> > > > +	register_page_data[user->index] = status;  
> > > 
> > > Should there be some kind of memory barriers here? That is, isn't this
> > > the page that user space sees? The user space code should probably have
> > > some kind of read memory barrier as well.
> > >   
> > 
> > I'm glad you brought this up. I wanted to ensure a balance between
> > eventual enablement of the event in the user mode process vs the cost
> > of simultaneous enablement of the event (stalls, etc).
> > 
> > We haven't seen this become an issue for our teams in our other
> > telemetry sources (with no barriers), which seems to indicate eventual
> > agreement of the page data works well as a tradeoff.
> 
> Another approach is if you know what tasks these events come from (who
> registered them), then you could simply send an IPI to the task if it
> happens to be running. An IPI will force a memory barrier, and since you
> only need to do this on changes (enable / disable event) it's not like it
> will happen often.
> 

This is good to know; there might be a need for this in the future when
we get around to more of the user_events features that people are
asking for.

> > 
> > > > +static int user_event_create(const char *raw_command)
> > > > +{
> > > > +	struct user_event *user;
> > > > +	char *name;
> > > > +	int ret;
> > > > +
> > > > +	if (strstr(raw_command, USER_EVENTS_PREFIX) != raw_command)
> > > > +		return -ECANCELED;
> > > > +
> > > > +	raw_command += USER_EVENTS_PREFIX_LEN;
> > > > +	raw_command = skip_spaces(raw_command);
> > > > +
> > > > +	name = kstrdup(raw_command, GFP_KERNEL);  
> > > 
> > > name is allocated here, it really needs to be freed in this function as
> > > well. I see that user_event_parse() will free it, but that is extremely
> > > error prone to have a dependency like that. If name needs to be saved
> > > by user_event_parse_cmd() then that should be shown in the return value
> > > of that function. And if it fails the freeing of name should be in this
> > > function. Also, if it is saved, then there should be a comment in this
> > > function stating that.
> > >   
> > 
> > It's a bit tricky, because if the event already exists, the name is
> > freed. If the function fails, the name is freed. If the event has
> > never been seen before then it is saved.
> 
> I would then suggest to free the name from the calling function only if it
> succeeds and the name already exists. With a comment stating that.
> 
> But I would have the caller free it on failure. That is, you could change
> user_event_parse() to have:
> 
> 	struct user_event *user = find_user_event(name, &key);
> 
> 	if (user) {
> 		*newuser = user;
> 		/*
> 		 * The name is allocated by the caller, but since it
> 		 * already exists in user, simply free it here.
> 		 */
> 		kfree(name);
> 		return 0;
> 	}
> 
> And remove all the "put_name" jumps.
> 

Sure thing.

> > 
> > I'll try to make this more clear, my thought is to add an explicit
> > argument that gets set back to the caller if the string should be freed
> > or not to make this clear while reading the code.
> > 
> > > > +static bool user_event_is_busy(struct dyn_event *ev)
> > > > +{
> > > > +	struct user_event *user = container_of(ev, struct user_event, devent);
> > > > +
> > > > +	return atomic_read(&user->refcnt) != 0;
> > > > +}
> > > > +
> > > > +static int user_event_free(struct dyn_event *ev)
> > > > +{
> > > > +	struct user_event *user = container_of(ev, struct user_event, devent);
> > > > +  
> > > 
> > > Shouldn't this check if the event is busy first?  
> > 
> > Yes, you are right. In the release all case busy is checked by
> > dyn_events for me. However, in the individual case it is not. I'll fix
> > this.
> > 
> > > > +/*
> > > > + * Validates the user payload and writes via iterator.
> > > > + */
> > > > +static ssize_t user_events_write_core(struct file *file, struct iov_iter *i)
> > > > +{
> > > > +	struct user_event_refs *refs;
> > > > +	struct user_event *user = NULL;
> > > > +	struct tracepoint *tp;
> > > > +	ssize_t ret = i->count;
> > > > +	int idx;
> > > > +
> > > > +	if (unlikely(copy_from_iter(&idx, sizeof(idx), i) != sizeof(idx)))
> > > > +		return -EFAULT;
> > > > +
> > > > +	rcu_read_lock_sched();
> > > > +
> > > > +	refs = rcu_dereference_sched(file->private_data);
> > > > +
> > > > +	if (likely(refs && idx < refs->count))
> > > > +		user = refs->events[idx];
> > > > +
> > > > +	rcu_read_unlock_sched();
> > > > +
> > > > +	if (unlikely(user == NULL))
> > > > +		return -ENOENT;
> > > > +
> > > > +	tp = &user->tracepoint;  
> > > 
> > > What protects user here? You released the rcu lock.  
> > 
> > user is ref counted by the file, so before the final close all events
> > linked to the file via the reg ioctl are ref'd and cannot go away.
> 
> 
> If that's the case, then why the rcu_read_lock_sched() around the assignment
> of user?
> 
> -- Steve
> 

Because the refs can change during the write, and we get the user
assignment from the refs events array, which is RCU protected. If a task
registered another event during the write, there would be a timing window
where we could deref a possibly freed array.

User structs are shared across files, so they are ref counted. The
user_event_refs are per-FD references that are RCU protected, since tasks
can share the same FD, which would create bad synchronization windows
without it.
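The resulting lookup discipline can be sketched in plain C with the RCU
calls reduced to comments (the structure layouts here are illustrative,
not the patch's): the RCU read section only needs to cover the refs array
access, while the user_event itself stays pinned by the reference taken
at registration time.

```c
#include <stddef.h>

struct user_event { int refcnt; };	/* pinned by the file until close */

struct user_event_refs {
	int count;
	struct user_event *events[8];	/* array is RCU-swapped on register */
};

/*
 * Sketch of the lookup in user_events_write_core(). Only the refs
 * array needs RCU protection, because a concurrent REG ioctl can swap
 * it; 'user' itself cannot go away, since registration took a
 * reference that lives until the final close of the fd.
 */
static struct user_event *lookup_event(struct user_event_refs *refs, int idx)
{
	struct user_event *user = NULL;

	/* rcu_read_lock_sched(); refs = rcu_dereference_sched(...); */
	if (refs && idx >= 0 && idx < refs->count)
		user = refs->events[idx];
	/* rcu_read_unlock_sched(); */

	return user;	/* may be NULL: caller returns -ENOENT */
}
```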

Thanks,
-Beau

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v4 03/10] user_events: Add print_fmt generation support for basic types
  2021-11-04 17:04 ` [PATCH v4 03/10] user_events: Add print_fmt generation support for basic types Beau Belgrave
@ 2021-11-08 22:03   ` Steven Rostedt
  0 siblings, 0 replies; 47+ messages in thread
From: Steven Rostedt @ 2021-11-08 22:03 UTC (permalink / raw)
  To: Beau Belgrave; +Cc: mhiramat, linux-trace-devel, linux-kernel

On Thu,  4 Nov 2021 10:04:26 -0700
Beau Belgrave <beaub@linux.microsoft.com> wrote:

> Adds print_fmt format generation for basic types that are supported for
> user processes. Only supports sizes that are the same on 32 and 64 bit.
> 
> Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
> ---
>  kernel/trace/trace_events_user.c | 108 ++++++++++++++++++++++++++++++-
>  1 file changed, 106 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
> index a68017ad7fdd..479a9ced3281 100644
> --- a/kernel/trace/trace_events_user.c
> +++ b/kernel/trace/trace_events_user.c
> @@ -329,6 +329,107 @@ static int user_event_parse_fields(struct user_event *user, char *args)
>  	return ret;
>  }
>  
> +static char *user_field_format(const char *type)

Should be const char *

> +{
> +	if (strcmp(type, "s64") == 0)
> +		return "%lld";
> +	if (strcmp(type, "u64") == 0)
> +		return "%llu";
> +	if (strcmp(type, "s32") == 0)
> +		return "%d";
> +	if (strcmp(type, "u32") == 0)
> +		return "%u";
> +	if (strcmp(type, "int") == 0)
> +		return "%d";
> +	if (strcmp(type, "unsigned int") == 0)
> +		return "%u";
> +	if (strcmp(type, "s16") == 0)
> +		return "%d";
> +	if (strcmp(type, "u16") == 0)
> +		return "%u";
> +	if (strcmp(type, "short") == 0)
> +		return "%d";
> +	if (strcmp(type, "unsigned short") == 0)
> +		return "%u";
> +	if (strcmp(type, "s8") == 0)
> +		return "%d";
> +	if (strcmp(type, "u8") == 0)
> +		return "%u";
> +	if (strcmp(type, "char") == 0)
> +		return "%d";
> +	if (strcmp(type, "unsigned char") == 0)
> +		return "%u";
> +	if (strstr(type, "char[") != 0)
> +		return "%s";
> +
> +	/* Unknown, likely struct, allowed treat as 64-bit */
> +	return "%llu";
> +}
> +
> +static bool user_field_is_dyn_string(const char *type)
> +{
> +	if (strstr(type, "__data_loc ") == type ||
> +	    strstr(type, "__rel_loc ") == type) {

Use str_has_prefix().

-- Steve


> +		if (strstr(type, "char[") != 0)
> +			return true;
> +	}
> +
> +	return false;
> +}
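For reference, str_has_prefix() returns the length of the prefix on a
match and 0 otherwise; a userspace sketch of the suggested rewrite (with
a local stand-in for the kernel helper) could look like:

```c
#include <string.h>
#include <stdbool.h>

/* Userspace stand-in for the kernel's str_has_prefix(). */
static size_t str_has_prefix(const char *str, const char *prefix)
{
	size_t len = strlen(prefix);

	return strncmp(str, prefix, len) == 0 ? len : 0;
}

static bool field_is_dyn_string(const char *type)
{
	/* Dynamic strings are __data_loc/__rel_loc entries of char[]. */
	if (str_has_prefix(type, "__data_loc ") ||
	    str_has_prefix(type, "__rel_loc "))
		return strstr(type, "char[") != NULL;

	return false;
}
```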
> +
> +#define LEN_OR_ZERO (len ? len - pos : 0)
> +static int user_event_set_print_fmt(struct user_event *user, char *buf, int len)
> +{
> +	struct ftrace_event_field *field, *next;
> +	struct list_head *head = &user->fields;
> +	int pos = 0, depth = 0;
> +
> +	pos += snprintf(buf + pos, LEN_OR_ZERO, "\"");
> +
> +	list_for_each_entry_safe_reverse(field, next, head, link) {
> +		if (depth != 0)
> +			pos += snprintf(buf + pos, LEN_OR_ZERO, " ");
> +
> +		pos += snprintf(buf + pos, LEN_OR_ZERO, "%s=%s",
> +				field->name, user_field_format(field->type));
> +
> +		depth++;
> +	}
> +
> +	pos += snprintf(buf + pos, LEN_OR_ZERO, "\"");
> +
> +	list_for_each_entry_safe_reverse(field, next, head, link) {
> +		if (user_field_is_dyn_string(field->type))
> +			pos += snprintf(buf + pos, LEN_OR_ZERO,
> +					", __get_str(%s)", field->name);
> +		else
> +			pos += snprintf(buf + pos, LEN_OR_ZERO,
> +					", REC->%s", field->name);
> +	}
> +
> +	return pos + 1;
> +}
> +#undef LEN_OR_ZERO
> +
> +static int user_event_create_print_fmt(struct user_event *user)
> +{
> +	char *print_fmt;
> +	int len;
> +
> +	len = user_event_set_print_fmt(user, NULL, 0);
> +
> +	print_fmt = kmalloc(len, GFP_KERNEL);
> +
> +	if (!print_fmt)
> +		return -ENOMEM;
> +
> +	user_event_set_print_fmt(user, print_fmt, len);
> +
> +	user->call.print_fmt = print_fmt;
> +
> +	return 0;
> +}
> +
>  static struct trace_event_fields user_event_fields_array[] = {
>  	{}
>  };
> @@ -366,6 +467,7 @@ static int destroy_user_event(struct user_event *user)
>  	clear_bit(user->index, page_bitmap);
>  	hash_del(&user->node);
>  
> +	kfree(user->call.print_fmt);
>  	kfree(EVENT_NAME(user));
>  	kfree(user);
>  
> @@ -637,8 +739,10 @@ static int user_event_parse(char *name, char *args, char *flags,
>  	if (ret)
>  		goto put_user;
>  
> -	/* Minimal print format */
> -	user->call.print_fmt = "\"\"";
> +	ret = user_event_create_print_fmt(user);
> +
> +	if (ret)
> +		goto put_user;
>  
>  	user->call.data = user;
>  	user->call.class = &user->class;



* Re: [PATCH v4 04/10] user_events: Handle matching arguments from dyn_events
  2021-11-04 17:04 ` [PATCH v4 04/10] user_events: Handle matching arguments from dyn_events Beau Belgrave
@ 2021-11-08 22:05   ` Steven Rostedt
  0 siblings, 0 replies; 47+ messages in thread
From: Steven Rostedt @ 2021-11-08 22:05 UTC (permalink / raw)
  To: Beau Belgrave; +Cc: mhiramat, linux-trace-devel, linux-kernel

On Thu,  4 Nov 2021 10:04:27 -0700
Beau Belgrave <beaub@linux.microsoft.com> wrote:

> Ensures that when dynamic events request a match with arguments, they
> match what is in the user_event.
> 
> Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
> ---
>  kernel/trace/trace_events_user.c | 67 +++++++++++++++++++++++++++++++-
>  1 file changed, 66 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
> index 479a9ced3281..cd78cc481557 100644
> --- a/kernel/trace/trace_events_user.c
> +++ b/kernel/trace/trace_events_user.c
> @@ -662,13 +662,78 @@ static int user_event_free(struct dyn_event *ev)
>  	return destroy_user_event(user);
>  }
>  
> +static int user_field_match(struct ftrace_event_field *field, int argc,
> +			    const char **argv, int *iout)
> +{
> +	char field_name[256];
> +	char arg_name[256];

I'm a bit nervous about allocating that much on the stack. This isn't a
critical function. I think it is best to just kmalloc it, and fail in the
unlikely event that the kmalloc fails. It should never fail unless there
are major memory issues with the system.

-- Steve


> +	int len, pos, i = *iout;
> +	bool colon = false;
> +
> +	if (i >= argc)
> +		return false;
> +
> +	len = sizeof(arg_name);
> +	pos = 0;
> +
> +	for (; i < argc; ++i) {
> +		if (i != *iout)
> +			pos += snprintf(arg_name + pos, len - pos, " ");
> +
> +		pos += snprintf(arg_name + pos, len - pos, argv[i]);
> +
> +		if (strchr(argv[i], ';')) {
> +			++i;
> +			colon = true;
> +			break;
> +		}
> +	}
> +
> +	len = sizeof(field_name);
> +	pos = 0;
> +
> +	pos += snprintf(field_name + pos, len - pos, field->type);
> +	pos += snprintf(field_name + pos, len - pos, " ");
> +	pos += snprintf(field_name + pos, len - pos, field->name);
> +
> +	if (colon)
> +		pos += snprintf(field_name + pos, len - pos, ";");
> +
> +	*iout = i;
> +
> +	return strcmp(arg_name, field_name) == 0;
> +}
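A heap-allocated userspace sketch of the same matching logic might look
like the following (the names and the 256-byte size are illustrative);
note it also routes argv[i] through a "%s" conversion instead of using it
as the format string, which the posted version does:

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NAME_BUF_LEN 256

/*
 * Rebuild "type name" from argv and compare it against the stored
 * field description. Buffers are heap allocated instead of living on
 * the stack, and argv[i] is printed via "%s" so user-controlled text
 * can never act as a format string.
 */
static bool field_match(const char *type, const char *name,
			int argc, const char **argv)
{
	char *want = malloc(NAME_BUF_LEN);
	char *got = malloc(NAME_BUF_LEN);
	int pos = 0, i;
	bool match = false;

	if (!want || !got)
		goto out;

	got[0] = '\0';

	for (i = 0; i < argc && pos < NAME_BUF_LEN; ++i)
		pos += snprintf(got + pos, NAME_BUF_LEN - pos,
				i ? " %s" : "%s", argv[i]);

	snprintf(want, NAME_BUF_LEN, "%s %s", type, name);

	match = strcmp(want, got) == 0;
out:
	free(want);
	free(got);
	return match;
}
```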
> +
> +static bool user_fields_match(struct user_event *user, int argc,
> +			      const char **argv)
> +{
> +	struct ftrace_event_field *field, *next;
> +	struct list_head *head = &user->fields;
> +	int i = 0;
> +
> +	list_for_each_entry_safe_reverse(field, next, head, link)
> +		if (!user_field_match(field, argc, argv, &i))
> +			return false;
> +
> +	if (i != argc)
> +		return false;
> +
> +	return true;
> +}
> +
>  static bool user_event_match(const char *system, const char *event,
>  			     int argc, const char **argv, struct dyn_event *ev)
>  {
>  	struct user_event *user = container_of(ev, struct user_event, devent);
> +	bool match;
>  
> -	return strcmp(EVENT_NAME(user), event) == 0 &&
> +	match = strcmp(EVENT_NAME(user), event) == 0 &&
>  		(!system || strcmp(system, USER_EVENTS_SYSTEM) == 0);
> +
> +	if (match && argc > 0)
> +		match = user_fields_match(user, argc, argv);
> +
> +	return match;
>  }
>  
>  static struct dyn_event_operations user_event_dops = {



* Re: [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace
  2021-11-08 21:00           ` Steven Rostedt
@ 2021-11-08 22:09             ` Beau Belgrave
  2021-11-08 22:30               ` Steven Rostedt
  2021-11-09  4:58               ` Masami Hiramatsu
  0 siblings, 2 replies; 47+ messages in thread
From: Beau Belgrave @ 2021-11-08 22:09 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: Masami Hiramatsu, linux-trace-devel, linux-kernel

On Mon, Nov 08, 2021 at 04:00:27PM -0500, Steven Rostedt wrote:
> On Mon, 8 Nov 2021 12:25:27 -0800
> Beau Belgrave <beaub@linux.microsoft.com> wrote:
> 
> > It seems there are 2 concerns:
> > 1. If data comes in and it's not in the size that is specified, it's
> > suspicious and should either be truncated or ignored. Maybe under
> > ignore, over truncate.
> > 
> > 2. If the data is more than specified, it must be checked to see if
> > there are __data_loc / __rel_loc entries and they must be validated as
> > within range of accepted limits. If there are no __data_loc / __rel_loc
> > it should either be truncated or ignored.
> > 
> > Is there more that I may have missed?
> > 
> > I'd like to know if I do fix them that the features like filtering will still
> > be available to user_events or if it's better to just add flags to disable
> > kernel filtering?
> 
> If these are "user defined" then perhaps we add a wrapper to the filtering
> that is called instead of the normal filtering for these events, one that
> verifies the fields of the events being filtered are located on the ring
> buffer. Although, strings and such are rare or slow enough in filtering
> that we could afford to make sure the content is still on the buffer that
> is being filtered.
> 

It seems like both histograms and filters reference field flags to
determine how to get the data.

How would you feel about another FILTER_* flag on fields, like:
FILTER_DYN_STRING_SAFE
FILTER_PTR_STRING_SAFE

When parsing, user_events would switch __data_loc / __rel_loc fields to
the above instead of leaving them as FILTER_OTHER.

The predicate filter method would then switch to safer versions based on
those types.

That way other parts could take advantage of this if needed beyond
user_events.

If this is addressed at the filter/histogram level, would the write
callsites then still need to check bounds per-write? Or would they only
need to care about the undersized data cases?

> > 
> > I'm still unsure this is limited to just user_events.
> > 
> > For example, why doesn't filter_pred_strloc and filter_pred_pchar in
> > trace_events_filter.c check the boundary it will be accessing?
> > 
> > It seems like tracepoints from kernel modules, while more trusted, can also
> > cause this kind of thing due to bugs, etc.
> 
> Yes, it's the fact that the code is created in the kernel, and the only way
> that the filtering could be out of bounds is if there's a bug in the
> kernel. We'd rather not check for that if it slows down the tracing. But
> perhaps if we can show that the checks are only done for dynamic strings
> and arrays, that it doesn't cause noticeable overhead, it may be fine to
> keep for all events.
> 
> -- Steve

Thanks,
-Beau


* Re: [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace
  2021-11-08 22:09             ` Beau Belgrave
@ 2021-11-08 22:30               ` Steven Rostedt
  2021-11-08 22:59                 ` Beau Belgrave
  2021-11-09  4:58               ` Masami Hiramatsu
  1 sibling, 1 reply; 47+ messages in thread
From: Steven Rostedt @ 2021-11-08 22:30 UTC (permalink / raw)
  To: Beau Belgrave; +Cc: Masami Hiramatsu, linux-trace-devel, linux-kernel

On Mon, 8 Nov 2021 14:09:45 -0800
Beau Belgrave <beaub@linux.microsoft.com> wrote:
> 
> It seems like both histograms and filters reference field flags to
> determine how to get the data.
> 
> How would you feel about another FILTER_* flag on fields, like:
> FILTER_DYN_STRING_SAFE
> FILTER_PTR_STRING_SAFE

You mean "UNSAFE" ?

> 
> When parsing, user_events would switch __data_loc / __rel_loc fields to
> the above instead of leaving them as FILTER_OTHER.
> 
> The predicate filter method would then switch to safer versions based on
> those types.
> 
> That way other parts could take advantage of this if needed beyond
> user_events.
> 
> If this is addressed at the filter/histogram level, would the write
> callsites then still need to check bounds per-write? Or would they only
> need to care about the undersized data cases?

I'd have to look at the implementation of this. There's too many variables
running around in my head right now.

-- Steve


* Re: [PATCH v4 09/10] user_events: Optimize writing events by only copying data once
  2021-11-04 17:04 ` [PATCH v4 09/10] user_events: Optimize writing events by only copying data once Beau Belgrave
@ 2021-11-08 22:45   ` Steven Rostedt
  2021-11-08 23:00     ` Beau Belgrave
  0 siblings, 1 reply; 47+ messages in thread
From: Steven Rostedt @ 2021-11-08 22:45 UTC (permalink / raw)
  To: Beau Belgrave; +Cc: mhiramat, linux-trace-devel, linux-kernel

On Thu,  4 Nov 2021 10:04:32 -0700
Beau Belgrave <beaub@linux.microsoft.com> wrote:

> Pass iterator through to probes to allow copying data directly to the
> probe buffers instead of taking multiple copies. Enables eBPF user and
> raw iterator types out to programs for no-copy scenarios.
> 
> Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
> ---
>  kernel/trace/trace_events_user.c | 97 +++++++++++++++++++++++---------
>  1 file changed, 69 insertions(+), 28 deletions(-)
> 
> diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
> index b5fe0550b489..d50118b9630a 100644
> --- a/kernel/trace/trace_events_user.c
> +++ b/kernel/trace/trace_events_user.c
> @@ -39,6 +39,10 @@
>  #define MAX_EVENT_DESC 512
>  #define EVENT_NAME(user_event) ((user_event)->tracepoint.name)
>  
> +#define MAX_BPF_COPY_SIZE PAGE_SIZE
> +#define MAX_STACK_BPF_DATA 512
> +#define copy_nofault copy_from_iter_nocache
> +
>  static char *register_page_data;
>  
>  static DEFINE_MUTEX(reg_mutex);
> @@ -63,8 +67,7 @@ struct user_event_refs {
>  	struct user_event *events[];
>  };
>  
> -typedef void (*user_event_func_t) (struct user_event *user,
> -				   void *data, u32 datalen,
> +typedef void (*user_event_func_t) (struct user_event *user, struct iov_iter *i,
>  				   void *tpdata);
>  
>  static int user_event_parse(char *name, char *args, char *flags,
> @@ -491,7 +494,7 @@ static struct user_event *find_user_event(char *name, u32 *outkey)
>  /*
>   * Writes the user supplied payload out to a trace file.
>   */
> -static void user_event_ftrace(struct user_event *user, void *data, u32 datalen,
> +static void user_event_ftrace(struct user_event *user, struct iov_iter *i,
>  			      void *tpdata)
>  {
>  	struct trace_event_file *file;
> @@ -506,41 +509,82 @@ static void user_event_ftrace(struct user_event *user, void *data, u32 datalen,
>  		return;
>  
>  	entry = trace_event_buffer_reserve(&event_buffer, file,
> -					   sizeof(*entry) + datalen);
> +					   sizeof(*entry) + i->count);
>  
>  	if (unlikely(!entry))
>  		return;
>  
> -	memcpy(entry + 1, data, datalen);
> +	if (unlikely(!copy_nofault(entry + 1, i->count, i)))

Need:
		__trace_event_discard_commit(event_buffer.buffer, event_buffer.event);

Because the trace_event_buffer_reserve() will not only allocate space on
the ring buffer, but may also disable preemption.

-- Steve


> +		return;
>  
>  	trace_event_buffer_commit(&event_buffer);
>  }
>  
>  #ifdef CONFIG_PERF_EVENTS
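[Editor's note: the pairing Steve describes can be illustrated with a small user-space mock. All names below are stand-ins, not the real kernel API: a successful reserve may disable preemption, and only a matching commit or discard re-enables it, so returning early after a failed copy without the discard leaks the disable.]

```c
#include <assert.h>
#include <stddef.h>

/* Mock state: stands in for the preempt count and the ring buffer. */
static int preempt_disabled;
static char ring[4096];

static void *mock_reserve(size_t len)
{
	if (len > sizeof(ring))
		return NULL;    /* a failed reserve changes nothing */
	preempt_disabled++;     /* a successful reserve may disable preemption */
	return ring;
}

static void mock_commit(void)
{
	preempt_disabled--;     /* commit re-enables it */
}

static void mock_discard(void)
{
	preempt_disabled--;     /* ...and so must discard on error paths */
}

/* Returns 0 on success, -1 if the (simulated) user copy faulted. */
static int mock_write_event(size_t len, int copy_faults)
{
	void *entry = mock_reserve(len);

	if (!entry)
		return -1;

	if (copy_faults) {
		mock_discard(); /* without this, preempt_disabled leaks */
		return -1;
	}

	mock_commit();
	return 0;
}
```

After either path the mock's preempt count is back to zero; deleting the `mock_discard()` call reproduces the leak being warned about.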

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace
  2021-11-08 22:30               ` Steven Rostedt
@ 2021-11-08 22:59                 ` Beau Belgrave
  0 siblings, 0 replies; 47+ messages in thread
From: Beau Belgrave @ 2021-11-08 22:59 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: Masami Hiramatsu, linux-trace-devel, linux-kernel

On Mon, Nov 08, 2021 at 05:30:53PM -0500, Steven Rostedt wrote:
> On Mon, 8 Nov 2021 14:09:45 -0800
> Beau Belgrave <beaub@linux.microsoft.com> wrote:
> > 
> > It seems like both histograms and filters reference field flags to
> > determine how to get the data.
> > 
> > How would you feel about another FILTER_* flag on fields, like:
> > FILTER_DYN_STRING_SAFE
> > FILTER_PTR_STRING_SAFE
> 
> You mean "UNSAFE" ?
> 

Yes :) Unsafe data, safe filter method.

> > 
> > When parsing, user_events would switch __data_loc / __rel_loc fields
> > to the above instead of leaving them as FILTER_OTHER.
> > 
> > The predicate filter method would then switch based on those types to
> > safer versions.
> > 
> > That way other parts could take advantage of this if needed beyond
> > user_events.
> > 
> > If this is addressed at the filter/histogram level, would then the write
> > callsites still check bounds per-write? Or maybe only care about the
> > undersized data cases?
> 
> I'd have to look at the implementation of this. There's too many variables
> running around in my head right now.
> 
> -- Steve

Understood, thanks for thinking about this.

-Beau

* Re: [PATCH v4 09/10] user_events: Optimize writing events by only copying data once
  2021-11-08 22:45   ` Steven Rostedt
@ 2021-11-08 23:00     ` Beau Belgrave
  2021-11-08 23:04       ` Steven Rostedt
  0 siblings, 1 reply; 47+ messages in thread
From: Beau Belgrave @ 2021-11-08 23:00 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: mhiramat, linux-trace-devel, linux-kernel

On Mon, Nov 08, 2021 at 05:45:42PM -0500, Steven Rostedt wrote:
> On Thu,  4 Nov 2021 10:04:32 -0700
> Beau Belgrave <beaub@linux.microsoft.com> wrote:
> 
> > Pass iterator through to probes to allow copying data directly to the
> > probe buffers instead of taking multiple copies. Enables eBPF user and
> > raw iterator types out to programs for no-copy scenarios.
> > 
> > Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
> > ---
> >  kernel/trace/trace_events_user.c | 97 +++++++++++++++++++++++---------
> >  1 file changed, 69 insertions(+), 28 deletions(-)
> > 
> > diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
> > index b5fe0550b489..d50118b9630a 100644
> > --- a/kernel/trace/trace_events_user.c
> > +++ b/kernel/trace/trace_events_user.c
> > @@ -39,6 +39,10 @@
> >  #define MAX_EVENT_DESC 512
> >  #define EVENT_NAME(user_event) ((user_event)->tracepoint.name)
> >  
> > +#define MAX_BPF_COPY_SIZE PAGE_SIZE
> > +#define MAX_STACK_BPF_DATA 512
> > +#define copy_nofault copy_from_iter_nocache
> > +
> >  static char *register_page_data;
> >  
> >  static DEFINE_MUTEX(reg_mutex);
> > @@ -63,8 +67,7 @@ struct user_event_refs {
> >  	struct user_event *events[];
> >  };
> >  
> > -typedef void (*user_event_func_t) (struct user_event *user,
> > -				   void *data, u32 datalen,
> > +typedef void (*user_event_func_t) (struct user_event *user, struct iov_iter *i,
> >  				   void *tpdata);
> >  
> >  static int user_event_parse(char *name, char *args, char *flags,
> > @@ -491,7 +494,7 @@ static struct user_event *find_user_event(char *name, u32 *outkey)
> >  /*
> >   * Writes the user supplied payload out to a trace file.
> >   */
> > -static void user_event_ftrace(struct user_event *user, void *data, u32 datalen,
> > +static void user_event_ftrace(struct user_event *user, struct iov_iter *i,
> >  			      void *tpdata)
> >  {
> >  	struct trace_event_file *file;
> > @@ -506,41 +509,82 @@ static void user_event_ftrace(struct user_event *user, void *data, u32 datalen,
> >  		return;
> >  
> >  	entry = trace_event_buffer_reserve(&event_buffer, file,
> > -					   sizeof(*entry) + datalen);
> > +					   sizeof(*entry) + i->count);
> >  
> >  	if (unlikely(!entry))
> >  		return;
> >  
> > -	memcpy(entry + 1, data, datalen);
> > +	if (unlikely(!copy_nofault(entry + 1, i->count, i)))
> 
> Need:
> 		__trace_event_discard_commit(event_buffer.buffer, event_buffer.event);
> 
> Because the trace_event_buffer_reserve() will not only allocate space on
> the ring buffer, but may also disable preemption.
> 
> -- Steve
> 

Ah, thank you!

-Beau

> 
> > +		return;
> >  
> >  	trace_event_buffer_commit(&event_buffer);
> >  }
> >  
> >  #ifdef CONFIG_PERF_EVENTS

* Re: [PATCH v4 09/10] user_events: Optimize writing events by only copying data once
  2021-11-08 23:00     ` Beau Belgrave
@ 2021-11-08 23:04       ` Steven Rostedt
  2021-11-08 23:17         ` Beau Belgrave
  0 siblings, 1 reply; 47+ messages in thread
From: Steven Rostedt @ 2021-11-08 23:04 UTC (permalink / raw)
  To: Beau Belgrave; +Cc: mhiramat, linux-trace-devel, linux-kernel

On Mon, 8 Nov 2021 15:00:34 -0800
Beau Belgrave <beaub@linux.microsoft.com> wrote:

> > > -	memcpy(entry + 1, data, datalen);
> > > +	if (unlikely(!copy_nofault(entry + 1, i->count, i)))  
> > 
> > Need:
> > 		__trace_event_discard_commit(event_buffer.buffer, event_buffer.event);
> > 
> > Because the trace_event_buffer_reserve() will not only allocate space on
> > the ring buffer, but may also disable preemption.
> > 
> > -- Steve
> >   
> 
> Ah, thank you!

Which reminds me that trace_event_buffer_reserve() expects to be called
with preemption disabled. And I'm guessing that may not be the case for you.

I'll change this so that it always disables preemption even if it uses the
filter buffer, and *always* disables preemption on return.

-- Steve

* Re: [PATCH v4 09/10] user_events: Optimize writing events by only copying data once
  2021-11-08 23:04       ` Steven Rostedt
@ 2021-11-08 23:17         ` Beau Belgrave
  2021-11-08 23:20           ` Steven Rostedt
  0 siblings, 1 reply; 47+ messages in thread
From: Beau Belgrave @ 2021-11-08 23:17 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: mhiramat, linux-trace-devel, linux-kernel

On Mon, Nov 08, 2021 at 06:04:52PM -0500, Steven Rostedt wrote:
> On Mon, 8 Nov 2021 15:00:34 -0800
> Beau Belgrave <beaub@linux.microsoft.com> wrote:
> 
> > > > -	memcpy(entry + 1, data, datalen);
> > > > +	if (unlikely(!copy_nofault(entry + 1, i->count, i)))  
> > > 
> > > Need:
> > > 		__trace_event_discard_commit(event_buffer.buffer, event_buffer.event);
> > > 
> > > Because the trace_event_buffer_reserve() will not only allocate space on
> > > the ring buffer, but may also disable preemption.
> > > 
> > > -- Steve
> > >   
> > 
> > Ah, thank you!
> 
> Which reminds me that trace_event_buffer_reserve() expects to be called
> with preemption disabled. And I'm guessing that may not be the case for you.
> 

Thanks, should be good there:
I have rcu_read_lock_sched() held, which will have preemption disabled
during the various probe calls.

> I'll change this so that it always disables preemption even if it uses the
> filter buffer, and *always* disables preemption on return.
> 
> -- Steve

Thanks,
-Beau

* Re: [PATCH v4 09/10] user_events: Optimize writing events by only copying data once
  2021-11-08 23:17         ` Beau Belgrave
@ 2021-11-08 23:20           ` Steven Rostedt
  0 siblings, 0 replies; 47+ messages in thread
From: Steven Rostedt @ 2021-11-08 23:20 UTC (permalink / raw)
  To: Beau Belgrave; +Cc: mhiramat, linux-trace-devel, linux-kernel

On Mon, 8 Nov 2021 15:17:10 -0800
Beau Belgrave <beaub@linux.microsoft.com> wrote:

> > Which reminds me that trace_event_buffer_reserve() expects to be called
> > with preemption disabled. And I'm guessing that may not be the case for you.
> >   
> 
> Thanks, should be good there:
> I have rcu_read_lock_sched() held, which will have preemption disabled
> during the various probe calls.

Ah, that's right. Thanks for the reminder.

> 
> > I'll change this so that it always disables preemption even if it uses the
> > filter buffer, and *always* disables preemption on return.

Even so, I think it's better to have it consistently disable/enable
preemption than expect the caller to do so.

-- Steve

* Re: [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace
  2021-11-08 20:25         ` Beau Belgrave
  2021-11-08 21:00           ` Steven Rostedt
@ 2021-11-09  2:56           ` Masami Hiramatsu
  2021-11-09 19:08             ` Beau Belgrave
  1 sibling, 1 reply; 47+ messages in thread
From: Masami Hiramatsu @ 2021-11-09  2:56 UTC (permalink / raw)
  To: Beau Belgrave
  Cc: Steven Rostedt, Masami Hiramatsu, linux-trace-devel, linux-kernel

On Mon, 8 Nov 2021 12:25:27 -0800
Beau Belgrave <beaub@linux.microsoft.com> wrote:

> On Mon, Nov 08, 2021 at 01:16:39PM -0500, Steven Rostedt wrote:
> > On Mon, 8 Nov 2021 09:13:36 -0800
> > Beau Belgrave <beaub@linux.microsoft.com> wrote:
> > 
> > 
> > > Does that mean the decoders in eprobes/histogram don't check event
> > > record sizes before accessing the data? Shouldn't that get fixed
> > > centrally? That would mean a loaded module could do the same thing
> > > (user_events only works if the user has access to tracefs, so it's not
> > > like it's open to all users).
> > 
> > There's checks to make sure everything fits in eprobes and kprobes. If it
> > doesn't then the event is simply dropped.
> > 
> > For example, if you look at __eprobe_trace_func() in trace_eprobe.c, you'll
> > see that it calls get_eprobe_size(), which goes through and just reads what
> > it is about to accept. Then it reserves the amount of data on the ring
> > buffer, and then calls store_trace_args() which also passes in the size
> > that it found, in case things change. If it's too big, it only records what
> > it originally intended.
> > 
> > -- Steve
> 
> It seems there are 2 concerns:
> 1. If data comes in and it's not in the size that is specified, it's
> suspicious and should either be truncated or ignored. Maybe under
> ignore, over truncate.

Yes, this is for events that are defined with fixed-size
parameters, which is what I suggested.

> 
> 2. If the data is more than specified, it must be checked to see if
> there are __data_loc / __rel_loc entries and they must be validated as
> within range of accepted limits. If there are no __data_loc / __rel_loc
> it should either be truncated or ignored.

Yes, this is for events that are defined with variable-length
parameters, like null-terminated strings. In this case, along with the
__data_loc/__rel_loc validation, a null-termination check is needed.

> 
> Is there more that I may have missed?
> 
> I'd like to know if I do fix them that the features like filtering will still
> be available to user_events or if it's better to just add flags to disable
> kernel filtering?

I would rather like the filters to be available on user_events.

My question is whether you need to log dynamic data or strings via
user_events or not. Since other user events, like SDT, don't support
tracing string variables, I guess that is not a high priority.

Moreover, since we can now use eprobes, if a user event records the address
of a user string, eprobes can fetch it.

So, my suggestion is implementing with fixed-size parameters as the first step
and keeping filters/histograms/eprobes available on user_events.
If you find any performance issues, you can expand user_events to support
dynamic (array) data and strings.

> 
> I'm still unsure this is limited to just user_events.
> 
> For example, why doesn't filter_pred_strloc and filter_pred_pchar in
> trace_events_filter.c check the boundary it will be accessing?

Because all data is written from kernel code, we can trust that the data
exists on the buffer. (If not, there is an actual BUG in the kernel.)
We can add a verifier for debugging purposes.

> 
> It seems like tracepoints from kernel modules, while more trusted, can also
> cause this kind of thing due to bugs, etc.

Of course, and that must be fixed. And the tracepoints (trace events) should
be automatically generated code, which is more trusted than events crafted
by the user.

Thank you,

-- 
Masami Hiramatsu <mhiramat@kernel.org>

* Re: [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace
  2021-11-08 22:09             ` Beau Belgrave
  2021-11-08 22:30               ` Steven Rostedt
@ 2021-11-09  4:58               ` Masami Hiramatsu
  1 sibling, 0 replies; 47+ messages in thread
From: Masami Hiramatsu @ 2021-11-09  4:58 UTC (permalink / raw)
  To: Beau Belgrave
  Cc: Steven Rostedt, Masami Hiramatsu, linux-trace-devel, linux-kernel

On Mon, 8 Nov 2021 14:09:45 -0800
Beau Belgrave <beaub@linux.microsoft.com> wrote:

> On Mon, Nov 08, 2021 at 04:00:27PM -0500, Steven Rostedt wrote:
> > On Mon, 8 Nov 2021 12:25:27 -0800
> > Beau Belgrave <beaub@linux.microsoft.com> wrote:
> > 
> > > It seems there are 2 concerns:
> > > 1. If data comes in and it's not in the size that is specified, it's
> > > suspicious and should either be truncated or ignored. Maybe under
> > > ignore, over truncate.
> > > 
> > > 2. If the data is more than specified, it must be checked to see if
> > > there are __data_loc / __rel_loc entries and they must be validated as
> > > within range of accepted limits. If there are no __data_loc / __rel_loc
> > > it should either be truncated or ignored.
> > > 
> > > Is there more that I may have missed?
> > > 
> > > I'd like to know if I do fix them that the features like filtering will still
> > > be available to user_events or if it's better to just add flags to disable
> > > kernel filtering?
> > 
> > If these are "user defined" then perhaps we add a wrapper to the filtering
> > that is called instead of the normal filtering for these events, one that
> > verifies the fields of the events being filtered are located on the ring
> > buffer. Although, strings and such are rare (or just slow) in filtering, so
> > we could make sure the content is still on the buffer that is being
> > filtered.
> > 
> 
> It seems like both histograms and filters reference field flags to
> determine how to get the data.
> 
> How would you feel about another FILTER_* flag on fields, like:
> FILTER_DYN_STRING_SAFE
> FILTER_PTR_STRING_SAFE
> 
> When parsing, user_events would switch __data_loc / __rel_loc fields
> to the above instead of leaving them as FILTER_OTHER.
> 
> The predicate filter method would then switch based on those types to
> safer versions.
> 
> That way other parts could take advantage of this if needed beyond
> user_events.
> 
> If this is addressed at the filter/histogram level, would then the write
> callsites still check bounds per-write? Or maybe only care about the
> undersized data cases?

Even with the unsafe flags, I think the call sites still need
the undersized check at least. Each event may have a maxsize and a
minsize. If the event is defined with dynamic data
(__data_loc/__rel_loc), minsize is the sum of the field sizes and
maxsize will be PAGE_SIZE (or smaller than that). If the event
has no dynamic data field, minsize == maxsize.

Thank you,

-- 
Masami Hiramatsu <mhiramat@kernel.org>
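[Editor's note: the min/max bound described above can be sketched as a write-side check. This is a user-space sketch with hypothetical names, and PAGE_SIZE is mocked; it is an illustration, not the eventual kernel implementation.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MOCK_PAGE_SIZE 4096   /* stand-in for the kernel's PAGE_SIZE */

/* Hypothetical per-event bounds, per the description above. */
struct mock_event {
	size_t min_size;      /* sum of the fixed field sizes */
	bool   has_dyn;       /* any __data_loc/__rel_loc fields? */
};

/*
 * Undersized payloads are always rejected.  With dynamic fields the
 * payload may grow up to PAGE_SIZE; without them, minsize == maxsize.
 */
static bool payload_size_ok(const struct mock_event *ev, size_t len)
{
	size_t max_size = ev->has_dyn ? MOCK_PAGE_SIZE : ev->min_size;

	return len >= ev->min_size && len <= max_size;
}
```

A fixed-size event accepts exactly its declared size; an event with dynamic fields accepts anything from the fixed-field sum up to the page bound.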

* Re: [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace
  2021-11-09  2:56           ` Masami Hiramatsu
@ 2021-11-09 19:08             ` Beau Belgrave
  2021-11-09 19:25               ` Steven Rostedt
  2021-11-10 13:56               ` Masami Hiramatsu
  0 siblings, 2 replies; 47+ messages in thread
From: Beau Belgrave @ 2021-11-09 19:08 UTC (permalink / raw)
  To: Masami Hiramatsu; +Cc: Steven Rostedt, linux-trace-devel, linux-kernel

On Tue, Nov 09, 2021 at 11:56:34AM +0900, Masami Hiramatsu wrote:
> On Mon, 8 Nov 2021 12:25:27 -0800
> Beau Belgrave <beaub@linux.microsoft.com> wrote:
> 
> > On Mon, Nov 08, 2021 at 01:16:39PM -0500, Steven Rostedt wrote:
> > > On Mon, 8 Nov 2021 09:13:36 -0800
> > > Beau Belgrave <beaub@linux.microsoft.com> wrote:
> > > 
> > > 
> > > > Does that mean the decoders in eprobes/histogram don't check event
> > > > record sizes before accessing the data? Shouldn't that get fixed
> > > > centrally? That would mean a loaded module could do the same thing
> > > > (user_events only works if the user has access to tracefs, so it's not
> > > > like it's open to all users).
> > > 
> > > There's checks to make sure everything fits in eprobes and kprobes. If it
> > > doesn't then the event is simply dropped.
> > > 
> > > For example, if you look at __eprobe_trace_func() in trace_eprobe.c, you'll
> > > see that it calls get_eprobe_size(), which goes through and just reads what
> > > it is about to accept. Then it reserves the amount of data on the ring
> > > buffer, and then calls store_trace_args() which also passes in the size
> > > that it found, in case things change. If it's too big, it only records what
> > > it originally intended.
> > > 
> > > -- Steve
> > 
> > It seems there are 2 concerns:
> > 1. If data comes in and it's not in the size that is specified, it's
> > suspicious and should either be truncated or ignored. Maybe under
> > ignore, over truncate.
> 
> Yes, this is for events that are defined with fixed-size
> parameters, which is what I suggested.
> 

Thanks.

> > 
> > 2. If the data is more than specified, it must be checked to see if
> > there are __data_loc / __rel_loc entries and they must be validated as
> > within range of accepted limits. If there are no __data_loc / __rel_loc
> > it should either be truncated or ignored.
> 
> Yes, this is for events that are defined with variable-length
> parameters, like null-terminated strings. In this case, along with the
> __data_loc/__rel_loc validation, a null-termination check is needed.
> 

Got it.

> > 
> > Is there more that I may have missed?
> > 
> > I'd like to know if I do fix them that the features like filtering will still
> > be available to user_events or if it's better to just add flags to disable
> > kernel filtering?
> 
> I would rather like the filters to be available on user_events.
> 
> My question is whether you need to log dynamic data or strings via
> user_events or not. Since other user events, like SDT, don't support
> tracing string variables, I guess that is not a high priority.
> 
> Moreover, since we can now use eprobes, if a user event records the address
> of a user string, eprobes can fetch it.
> 
> So, my suggestion is implementing with fixed-size parameters as the first step
> and keeping filters/histograms/eprobes available on user_events.
> If you find any performance issues, you can expand user_events to support
> dynamic (array) data and strings.
> 

We need strings to be able to be emitted and recorded in eBPF, perf and
ftrace. So I would rather go after a solution that lets us keep these in
the ring buffers, even if it means a perf hit.

Guess what's left is where the best place to check is; checking in
the filter with unsafe flags would let us keep most of the perf (we just
check the undersized case, 1 branch). When these unsafe types are
filtered, a perf tax is imposed to keep things safe.

It sounded like Steven wanted to think about this a bit, so I'll wait a
bit before poking again for consensus :)

Do you have any strong feelings about where it goes?

Thanks,
-Beau

* Re: [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace
  2021-11-09 19:08             ` Beau Belgrave
@ 2021-11-09 19:25               ` Steven Rostedt
  2021-11-09 20:14                 ` Beau Belgrave
  2021-11-10 13:56               ` Masami Hiramatsu
  1 sibling, 1 reply; 47+ messages in thread
From: Steven Rostedt @ 2021-11-09 19:25 UTC (permalink / raw)
  To: Beau Belgrave; +Cc: Masami Hiramatsu, linux-trace-devel, linux-kernel

On Tue, 9 Nov 2021 11:08:44 -0800
Beau Belgrave <beaub@linux.microsoft.com> wrote:

> We need strings to be able to be emitted and recorded in eBPF, perf and
> ftrace. So I would rather go after a solution that lets us keep these in
> the ring buffers, even if it means a perf hit.
> 
> Guess what's left is where the best place to check is; checking in
> the filter with unsafe flags would let us keep most of the perf (we just
> check the undersized case, 1 branch). When these unsafe types are
> filtered, a perf tax is imposed to keep things safe.
> 
> It sounded like Steven wanted to think about this a bit, so I'll wait a
> bit before poking again for consensus :)
> 
> Do you have any strong feelings about where it goes?

IIUC, the writing into the trace event is done via one big blob, correct?

That is this:

+	if (likely(atomic_read(&tp->key.enabled) > 0)) {
+		struct tracepoint_func *probe_func_ptr;
+		user_event_func_t probe_func;
+		void *tpdata;
+		void *kdata;
+		u32 datalen;
+
+		kdata = kmalloc(i->count, GFP_KERNEL);
+
+		if (unlikely(!kdata))
+			return -ENOMEM;
+
+		datalen = copy_from_iter(kdata, i->count, i);
+
+		rcu_read_lock_sched();
+
+		probe_func_ptr = rcu_dereference_sched(tp->funcs);
+
+		if (probe_func_ptr) {
+			do {
+				probe_func = probe_func_ptr->func;
+				tpdata = probe_func_ptr->data;
+				probe_func(user, kdata, datalen, tpdata);
+			} while ((++probe_func_ptr)->func);
+		}
+
+		rcu_read_unlock_sched();
+
+		kfree(kdata);

So we really are just interested in making sure that the output is correct?

That is, the reading of the trace file?

-- Steve

* Re: [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace
  2021-11-09 19:25               ` Steven Rostedt
@ 2021-11-09 20:14                 ` Beau Belgrave
  2021-11-09 20:45                   ` Steven Rostedt
  0 siblings, 1 reply; 47+ messages in thread
From: Beau Belgrave @ 2021-11-09 20:14 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: Masami Hiramatsu, linux-trace-devel, linux-kernel

On Tue, Nov 09, 2021 at 02:25:06PM -0500, Steven Rostedt wrote:
> On Tue, 9 Nov 2021 11:08:44 -0800
> Beau Belgrave <beaub@linux.microsoft.com> wrote:
> 
> > We need strings to be able to be emitted and recorded in eBPF, perf and
> > ftrace. So I would rather go after a solution that lets us keep these in
> > the ring buffers, even if it means a perf hit.
> > 
> > Guess what's left is where the best place to check is; checking in
> > the filter with unsafe flags would let us keep most of the perf (we just
> > check the undersized case, 1 branch). When these unsafe types are
> > filtered, a perf tax is imposed to keep things safe.
> > 
> > It sounded like Steven wanted to think about this a bit, so I'll wait a
> > bit before poking again for consensus :)
> > 
> > Do you have any strong feelings about where it goes?
> 
> IIUC, the writing into the trace event is done via one big blob, correct?
> 

Yes, the top 4 bytes get trimmed off as an index, then it's a big blob
to all places except eBPF (when asking for the iterator directly).

> That is this:
> 
> +	if (likely(atomic_read(&tp->key.enabled) > 0)) {
> +		struct tracepoint_func *probe_func_ptr;
> +		user_event_func_t probe_func;
> +		void *tpdata;
> +		void *kdata;
> +		u32 datalen;
> +
> +		kdata = kmalloc(i->count, GFP_KERNEL);
> +
> +		if (unlikely(!kdata))
> +			return -ENOMEM;
> +
> +		datalen = copy_from_iter(kdata, i->count, i);
> +
> +		rcu_read_lock_sched();
> +
> +		probe_func_ptr = rcu_dereference_sched(tp->funcs);
> +
> +		if (probe_func_ptr) {
> +			do {
> +				probe_func = probe_func_ptr->func;
> +				tpdata = probe_func_ptr->data;
> +				probe_func(user, kdata, datalen, tpdata);
> +			} while ((++probe_func_ptr)->func);
> +		}
> +
> +		rcu_read_unlock_sched();
> +
> +		kfree(kdata);
> 
> So we really are just interested in making sure that the output is correct?
> 

Largely, yes.

The optimization part of the patch moves the buffer copies into the
probes to remove a double copy. I believe however that output can be
checked either centrally before the probes or within each probe call if
need be.

For perf/eBPF we may not need to check things, however, for ftrace
we will due to the filters. So we may be able to isolate to just the
ftrace probe method.

The ftrace probe will have a blob even after optimization due to the copy
into the ring buffer (assuming we can discard it if it violates a policy).

> That is, the reading of the trace file?
> 

We really need to ensure that data can be analyzed on the machine
directly (eBPF, ftrace, perf) as well as outside of the machine (ftrace, perf).

The priorities to us are fast recording speed with accurate reading of trace
files and event data.

> -- Steve
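[Editor's note: the layout described above — a 4-byte write_index followed by the event payload that the kernel treats as one blob — can be sketched from the user side. The helper name is hypothetical, and the actual `write()` to the user_events_data fd is omitted.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/*
 * Build one write() payload: the first 4 bytes carry the event's
 * write_index (returned by the REG ioctl), followed by the raw
 * event data.
 */
static size_t build_payload(uint8_t *buf, uint32_t write_index,
			    const void *data, size_t len)
{
	memcpy(buf, &write_index, sizeof(write_index));
	memcpy(buf + sizeof(write_index), data, len);
	return sizeof(write_index) + len;
}
```

In use, the resulting buffer would be handed to a single `write()` on the fd that issued the REG ioctl; the kernel trims the leading index and passes the rest through to the probes.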

* Re: [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace
  2021-11-09 20:14                 ` Beau Belgrave
@ 2021-11-09 20:45                   ` Steven Rostedt
  2021-11-09 21:27                     ` Beau Belgrave
  0 siblings, 1 reply; 47+ messages in thread
From: Steven Rostedt @ 2021-11-09 20:45 UTC (permalink / raw)
  To: Beau Belgrave; +Cc: Masami Hiramatsu, linux-trace-devel, linux-kernel

On Tue, 9 Nov 2021 12:14:32 -0800
Beau Belgrave <beaub@linux.microsoft.com> wrote:

> The ftrace probe will have a blob even after optimization due to the copy
> into the ring buffer (assuming we can discard it if it violates a policy).

Yes it can be discarded. In fact, when filtering is enabled, it tries to
first use a temporary per cpu buffer to do the filtering and not write it
into the ring buffer. Only when it passes the filter does it get injected.

For user events that happen in user context, it will always use this temp
buffer. But since there's only one buffer per CPU, if an interrupt comes in and
executes a filtered event, it will use the ring buffer itself, and discard
it if it does not match.

> 
> > That is, the reading of the trace file?
> >   
> 
> We really need to ensure that data can be analyzed on the machine
> directly (eBPF, ftrace, perf) as well as outside of the machine (ftrace, perf).
> 
> The priorities to us are fast recording speed with accurate reading of trace
> files and event data.

OK, then it probably isn't an issue to add checks to the parsing of the
dynamic arrays (including strings) that make sure the string is within
bounds for the filtering.

-- Steve


* Re: [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace
  2021-11-09 20:45                   ` Steven Rostedt
@ 2021-11-09 21:27                     ` Beau Belgrave
  2021-11-09 21:39                       ` Steven Rostedt
  0 siblings, 1 reply; 47+ messages in thread
From: Beau Belgrave @ 2021-11-09 21:27 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: Masami Hiramatsu, linux-trace-devel, linux-kernel

On Tue, Nov 09, 2021 at 03:45:20PM -0500, Steven Rostedt wrote:
> On Tue, 9 Nov 2021 12:14:32 -0800
> Beau Belgrave <beaub@linux.microsoft.com> wrote:
> 
> > The ftrace probe will have a blob even after optimization due to the copy
> > into the ring buffer (assuming we can discard it if it violates a policy).
> 
> Yes it can be discarded. In fact, when filtering is enabled, it tries to
> first use a temporary per cpu buffer to do the filtering and not write it
> into the ring buffer. Only when it passes the filter does it get injected.
> 
> For user events that happen in user context, it will always use this temp
> buffer. But since there's only buffer per CPU, if an interrupt comes in and
> executes a filtered event, it will use the ring buffer itself, and discard
> it if it does not match.
> 
> > 
> > > That is, the reading of the trace file?
> > >   
> > 
> > We really need to ensure that data can be analyzed on the machine
> > directly (eBPF, ftrace, perf) as well as outside of the machine (ftrace, perf).
> > 
> > The priorities to us are fast recording speed with accurate reading of trace
> > files and event data.
> 
> OK, then it probably isn't an issue to add checks to the parsing of the
> dynamic arrays (including strings) that make sure the string is within
> bounds for the filtering.
> 
> -- Steve

Where were you thinking the filtering would occur? In the filter /
histogram predicates or in user_events directly before buffer commit?

Thanks,
-Beau

* Re: [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace
  2021-11-09 21:27                     ` Beau Belgrave
@ 2021-11-09 21:39                       ` Steven Rostedt
  0 siblings, 0 replies; 47+ messages in thread
From: Steven Rostedt @ 2021-11-09 21:39 UTC (permalink / raw)
  To: Beau Belgrave; +Cc: Masami Hiramatsu, linux-trace-devel, linux-kernel

On Tue, 9 Nov 2021 13:27:56 -0800
Beau Belgrave <beaub@linux.microsoft.com> wrote:

> Where were you thinking the filtering would occur? In the filter /
> histogram predicates or in user_events directly before buffer commit?

In the predicates. But we only care about the dynamic array ones.

-- Steve
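[Editor's note: a "safe" dynamic-array predicate along these lines could bound-check the __data_loc word before touching the data. In ftrace a __data_loc field is a u32 packing the length in the upper 16 bits and the offset (from the start of the record) in the lower 16 bits; this user-space sketch mirrors that encoding, and the string variant additionally requires a terminator inside the claimed region. Function names are hypothetical.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Check that the region a __data_loc word claims lies within the record. */
static bool data_loc_ok(uint32_t data_loc, size_t record_size)
{
	size_t offset = data_loc & 0xffff; /* low 16 bits: offset */
	size_t len    = data_loc >> 16;    /* high 16 bits: length */

	return offset <= record_size && len <= record_size - offset;
}

/* For string fields, also require a null terminator inside the
 * claimed region before filtering on it. */
static bool string_loc_ok(const char *record, uint32_t data_loc,
			  size_t record_size)
{
	size_t offset = data_loc & 0xffff;
	size_t len    = data_loc >> 16;
	size_t i;

	if (!data_loc_ok(data_loc, record_size))
		return false;

	for (i = 0; i < len; i++)
		if (record[offset + i] == '\0')
			return true;

	return false;
}
```

A predicate wrapper for user-supplied events could run these checks first and treat any violation as a non-match, leaving the fast path for kernel-generated events untouched.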

* Re: [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace
  2021-11-09 19:08             ` Beau Belgrave
  2021-11-09 19:25               ` Steven Rostedt
@ 2021-11-10 13:56               ` Masami Hiramatsu
  2021-11-11 17:33                 ` Beau Belgrave
  1 sibling, 1 reply; 47+ messages in thread
From: Masami Hiramatsu @ 2021-11-10 13:56 UTC (permalink / raw)
  To: Beau Belgrave; +Cc: Steven Rostedt, linux-trace-devel, linux-kernel

On Tue, 9 Nov 2021 11:08:44 -0800
Beau Belgrave <beaub@linux.microsoft.com> wrote:

> > > I'd like to know if I do fix them that the features like filtering will still
> > > be available to user_events or if it's better to just add flags to disable
> > > kernel filtering?
> > 
> > I would rather like that the filters will be available on the user_events.
> > 
> > My question is that you need to log the dynamic data or strings via user-
> > events or not. Since the other user-events, like SDT doesn't support the
> > string variables to trace, I guess that is not a high priority.
> > 
> > Moreover, since now we can use eprobes, if user event records the address of
> > user-string, the eprobes can fetch it.
> > 
> > So, my suggestion is implmenting with fixed-size parameters as the first step
> > and keep filter/histograms/eprobes available on the user-events.
> > If you find any performance issue, you can expand the user-events to support
> > dynamic (array) data and strings.
> > 
> 
> We need strings to be able to be emitted and recorded in eBPF, perf and
> ftrace. So I would rather go after a solution that lets us keep these in
> the ring buffers, even if it means a perf hit.

OK, my concern was based on the current implementation, so in that case
you can add some additional verification. That is good.

> Guess what's left is where the best place to check is; checking in
> the filter with unsafe flags would let us keep most of the perf (we just
> check the undersized case, 1 branch). When these unsafe types are
> filtered, a perf tax is imposed to keep things safe.

I would like to keep the verification on the writer side; then we can ensure
the data on the ring buffer (both perf's and ftrace's) is sane. If you add the
unsafe flag, you have to change all the code which accesses the ring buffer,
not only the filter but also eprobes, histograms, perf tools, and other
user-space tracing tools which read the tracing buffer directly.

> It sounded like Steven wanted to think about this a bit, so I'll wait a
> bit before poking again for consensus :)
> 
> Do you have any strong feelings about where it goes?

I recommend you start by verifying on the writer side; it should keep the
change as small as possible. The unsafe flag idea may involve many other
tools, and it is not fundamentally required for user_events.

Thank you,

-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace
  2021-11-10 13:56               ` Masami Hiramatsu
@ 2021-11-11 17:33                 ` Beau Belgrave
  2021-11-12 13:40                   ` Masami Hiramatsu
  0 siblings, 1 reply; 47+ messages in thread
From: Beau Belgrave @ 2021-11-11 17:33 UTC (permalink / raw)
  To: Masami Hiramatsu; +Cc: Steven Rostedt, linux-trace-devel, linux-kernel

On Wed, Nov 10, 2021 at 10:56:30PM +0900, Masami Hiramatsu wrote:
> On Tue, 9 Nov 2021 11:08:44 -0800
> Beau Belgrave <beaub@linux.microsoft.com> wrote:

> I would like to keep verification on the writer side so that we can ensure
> the data on the ring buffer (of perf and of ftrace) is sane. If you add the
> unsafe flag, you have to change all the code which accesses the ring buffer:
> not only the filter but also eprobes, histograms, perf tools, and other
> user-space tracing tools which read the tracing buffer directly.
> 
> > It sounded like Steven wanted to think about this a bit, so I'll wait a
> > bit before poking again for consensus :)
> > 
> > Do you have any strong feelings about where it goes?
> 
> I recommend you start by verifying on the writer side; it should keep the
> change as small as possible. The unsafe flag idea may involve many other
> tools, and it is not fundamentally required for user_events.
> 
> Thank you,
> 
> -- 
> Masami Hiramatsu <mhiramat@kernel.org>

Ok, I will start there.

Are static string buffers required as well for the null check?

Or is this only for dyn strings that require the check?

Also, I am assuming that the __rel_loc offset is based after the __rel_loc
payload, i.e., offset 0 of __rel_loc is immediately after the 4-byte
__rel_loc descriptor?

Thanks,
-Beau

* Re: [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace
  2021-11-11 17:33                 ` Beau Belgrave
@ 2021-11-12 13:40                   ` Masami Hiramatsu
  0 siblings, 0 replies; 47+ messages in thread
From: Masami Hiramatsu @ 2021-11-12 13:40 UTC (permalink / raw)
  To: Beau Belgrave; +Cc: Steven Rostedt, linux-trace-devel, linux-kernel

On Thu, 11 Nov 2021 09:33:34 -0800
Beau Belgrave <beaub@linux.microsoft.com> wrote:

> On Wed, Nov 10, 2021 at 10:56:30PM +0900, Masami Hiramatsu wrote:
> > On Tue, 9 Nov 2021 11:08:44 -0800
> > Beau Belgrave <beaub@linux.microsoft.com> wrote:
> 
> > I would like to keep verification on the writer side so that we can ensure
> > the data on the ring buffer (of perf and of ftrace) is sane. If you add the
> > unsafe flag, you have to change all the code which accesses the ring buffer:
> > not only the filter but also eprobes, histograms, perf tools, and other
> > user-space tracing tools which read the tracing buffer directly.
> > 
> > > It sounded like Steven wanted to think about this a bit, so I'll wait a
> > > bit before poking again for consensus :)
> > > 
> > > Do you have any strong feelings about where it goes?
> > 
> > I recommend you start by verifying on the writer side; it should keep the
> > change as small as possible. The unsafe flag idea may involve many other
> > tools, and it is not fundamentally required for user_events.
> > 
> > Thank you,
> > 
> > -- 
> > Masami Hiramatsu <mhiramat@kernel.org>
> 
> Ok, I will start there.
> 
> Are static string buffers required as well for the null check?
> 
> Or is this only for dyn strings that require the check?

Good question! Dynamic strings are ensured to be null-terminated, but
static strings are not, because the size is fixed (at least, the event
filter checked that).

BTW, I found that hist_trigger_elt_update() doesn't check the field size
for fixed-size strings (it only uses STR_VAR_LEN_MAX as a limit). It seems
buggy if the fixed-size char[] field is not null-terminated (e.g. when it
is used for storing array data).
Let me fix that.
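
For illustration, here is a minimal user-space sketch of the kind of
writer-side check I mean; the helper name is made up, not an existing
kernel API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical writer-side check: accept a fixed-size char[] field
 * only if it carries a terminating NUL somewhere inside its bounds,
 * so filters and histograms can safely treat it as a string later. */
static bool fixed_string_terminated(const char *field, size_t size)
{
	return memchr(field, '\0', size) != NULL;
}
```

The check is a single memchr() over the field's declared size, so the
cost on the write path stays proportional to the field, not the event.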

> Also, I am assuming that the __rel_loc offset is based after the __rel_loc
> payload, i.e., offset 0 of __rel_loc is immediately after the 4-byte
> __rel_loc descriptor?

Yes, so if the field is the last one, the offset can be 0.
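
For reference, a user-space sketch of that layout, assuming the usual
encoding where the 32-bit __rel_loc descriptor packs the length in the
upper 16 bits and the offset (relative to the byte just past the
descriptor) in the lower 16 bits:

```c
#include <assert.h>
#include <stdint.h>

/* Decode a 32-bit __rel_loc descriptor: upper 16 bits = length,
 * lower 16 bits = offset from the byte immediately after the
 * descriptor itself.  An offset of 0 therefore means the data
 * starts right after the 4-byte descriptor. */
static uint16_t rel_loc_offset(uint32_t rel_loc)
{
	return (uint16_t)(rel_loc & 0xffff);
}

static uint16_t rel_loc_len(uint32_t rel_loc)
{
	return (uint16_t)(rel_loc >> 16);
}

/* Absolute position of the data within the event payload, given the
 * offset of the descriptor field itself within that payload. */
static uint32_t rel_loc_data_pos(uint32_t field_offset, uint32_t rel_loc)
{
	return field_offset + (uint32_t)sizeof(uint32_t) +
	       rel_loc_offset(rel_loc);
}
```

So a descriptor of 0x00060000 at payload offset 8 describes 6 bytes of
data starting at offset 12, immediately after the descriptor.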

Thank you,

-- 
Masami Hiramatsu <mhiramat@kernel.org>

end of thread, other threads:[~2021-11-12 13:40 UTC | newest]

Thread overview: 47+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-11-04 17:04 [PATCH v4 00/10] user_events: Enable user processes to create and write to trace events Beau Belgrave
2021-11-04 17:04 ` [PATCH v4 01/10] user_events: Add UABI header for user access to user_events Beau Belgrave
2021-11-04 17:04 ` [PATCH v4 02/10] user_events: Add minimal support for trace_event into ftrace Beau Belgrave
2021-11-04 21:34   ` kernel test robot
2021-11-08  2:32     ` Masami Hiramatsu
2021-11-08 16:59       ` Beau Belgrave
2021-11-07 14:31   ` Masami Hiramatsu
2021-11-08 17:13     ` Beau Belgrave
2021-11-08 18:16       ` Steven Rostedt
2021-11-08 20:25         ` Beau Belgrave
2021-11-08 21:00           ` Steven Rostedt
2021-11-08 22:09             ` Beau Belgrave
2021-11-08 22:30               ` Steven Rostedt
2021-11-08 22:59                 ` Beau Belgrave
2021-11-09  4:58               ` Masami Hiramatsu
2021-11-09  2:56           ` Masami Hiramatsu
2021-11-09 19:08             ` Beau Belgrave
2021-11-09 19:25               ` Steven Rostedt
2021-11-09 20:14                 ` Beau Belgrave
2021-11-09 20:45                   ` Steven Rostedt
2021-11-09 21:27                     ` Beau Belgrave
2021-11-09 21:39                       ` Steven Rostedt
2021-11-10 13:56               ` Masami Hiramatsu
2021-11-11 17:33                 ` Beau Belgrave
2021-11-12 13:40                   ` Masami Hiramatsu
2021-11-07 18:18   ` Steven Rostedt
2021-11-08 19:56     ` Beau Belgrave
2021-11-08 20:53       ` Steven Rostedt
2021-11-08 21:15         ` Beau Belgrave
2021-11-04 17:04 ` [PATCH v4 03/10] user_events: Add print_fmt generation support for basic types Beau Belgrave
2021-11-08 22:03   ` Steven Rostedt
2021-11-04 17:04 ` [PATCH v4 04/10] user_events: Handle matching arguments from dyn_events Beau Belgrave
2021-11-08 22:05   ` Steven Rostedt
2021-11-04 17:04 ` [PATCH v4 05/10] user_events: Add basic perf and eBPF support Beau Belgrave
2021-11-04 17:04 ` [PATCH v4 06/10] user_events: Add self-test for ftrace integration Beau Belgrave
2021-11-04 17:04 ` [PATCH v4 07/10] user_events: Add self-test for dynamic_events integration Beau Belgrave
2021-11-04 17:04 ` [PATCH v4 08/10] user_events: Add self-test for perf_event integration Beau Belgrave
2021-11-04 17:04 ` [PATCH v4 09/10] user_events: Optimize writing events by only copying data once Beau Belgrave
2021-11-08 22:45   ` Steven Rostedt
2021-11-08 23:00     ` Beau Belgrave
2021-11-08 23:04       ` Steven Rostedt
2021-11-08 23:17         ` Beau Belgrave
2021-11-08 23:20           ` Steven Rostedt
2021-11-04 17:04 ` [PATCH v4 10/10] user_events: Add documentation file Beau Belgrave
2021-11-04 19:05   ` Jonathan Corbet
2021-11-04 21:08     ` Beau Belgrave
2021-11-04 21:18       ` Jonathan Corbet
