* [PATCH 0/3] tracing/user_events: Fixes and improvements for 6.4
@ 2023-04-11 21:17 Beau Belgrave
  2023-04-11 21:17 ` [PATCH 1/3] tracing/user_events: Ensure write index cannot be negative Beau Belgrave
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Beau Belgrave @ 2023-04-11 21:17 UTC (permalink / raw)
  To: rostedt, mhiramat, mathieu.desnoyers, dcook, alanau
  Cc: linux-kernel, linux-trace-kernel

Now that user_events is in for-next, we have broadened our integration of
user_events. During this integration we found a few things that, once
fixed, help prevent issues and make debugging easier when user processes
use the ABI directly.

The most important fix addresses an out-of-bounds access via the write
index: if the index is negative, an out-of-bounds access is attempted.
This bug was introduced in one of the very first user_events patches and
remained unseen for a long time. Apologies for not catching that sooner.
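
For background, the data fd ABI expects each write() payload to begin
with the 4-byte write index returned at registration, which is how a
negative value reaches the per-file array. A minimal user-space sketch
follows; the payload after the index and the helper name are
illustrative assumptions, not part of this series:

#include <stdint.h>
#include <string.h>
#include <unistd.h>

static ssize_t emit_event(int data_fd, int32_t write_index)
{
	uint32_t count = 1;	/* hypothetical payload field */
	char buf[sizeof(write_index) + sizeof(count)];

	/* First 4 bytes of every write() are the per-file write index */
	memcpy(buf, &write_index, sizeof(write_index));
	memcpy(buf + sizeof(write_index), &count, sizeof(count));

	/* A negative write_index is now rejected with -EINVAL */
	return write(data_fd, buf, sizeof(buf));
}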

We think users will expect the kernel to always clear the registered
bit when events are unregistered, even if the event is still enabled
in a kernel tracer. The user process could do this itself after
unregistering, but it seems appropriate for the kernel side to attempt
it. We also discussed whether it makes sense for the kernel to allow
user processes to tie multiple events to the same value and bit. While
this doesn't cause any issues on the kernel side, it leads to undefined
behavior for the user process: the bit's value varies depending on which
event happens to be enabled or disabled at any given time.
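
To make the ambiguity concrete, here is a rough sketch of the case being
disallowed. It assumes <stdint.h>, <sys/ioctl.h>, <linux/user_events.h>
and an already-opened user_events_data fd; error handling is omitted and
the event names are made up:

/*
 * Sketch: two different events tied to the same word and bit. A tracer
 * disabling either event clears the shared bit even while the other
 * event is still enabled, so the process cannot tell what is really on.
 * With patch 3 the second DIAG_IOCSREG fails with EADDRINUSE instead.
 */
static uint32_t enabled;

static void register_same_bit_twice(int data_fd)
{
	struct user_reg reg = {0};

	reg.size = sizeof(reg);
	reg.enable_bit = 31;
	reg.enable_size = sizeof(enabled);
	reg.enable_addr = (uint64_t)&enabled;

	reg.name_args = (uint64_t)"first_event u32 count";
	ioctl(data_fd, DIAG_IOCSREG, &reg);	/* succeeds */

	reg.name_args = (uint64_t)"second_event u32 count";
	ioctl(data_fd, DIAG_IOCSREG, &reg);	/* -1, errno == EADDRINUSE */
}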

Beau Belgrave (3):
  tracing/user_events: Ensure write index cannot be negative
  tracing/user_events: Ensure bit is cleared on unregister
  tracing/user_events: Prevent same address and bit per process

 kernel/trace/trace_events_user.c              | 77 +++++++++++++++++++
 .../testing/selftests/user_events/abi_test.c  |  9 ++-
 .../selftests/user_events/ftrace_test.c       | 14 +++-
 3 files changed, 96 insertions(+), 4 deletions(-)


base-commit: 88fe1ec75fcb296579e05eaf3807da3ee83137e4
-- 
2.25.1



* [PATCH 1/3] tracing/user_events: Ensure write index cannot be negative
  2023-04-11 21:17 [PATCH 0/3] tracing/user_events: Fixes and improvements for 6.4 Beau Belgrave
@ 2023-04-11 21:17 ` Beau Belgrave
  2023-04-25 14:43   ` Masami Hiramatsu
  2023-04-11 21:17 ` [PATCH 2/3] tracing/user_events: Ensure bit is cleared on unregister Beau Belgrave
  2023-04-11 21:17 ` [PATCH 3/3] tracing/user_events: Prevent same address and bit per process Beau Belgrave
  2 siblings, 1 reply; 9+ messages in thread
From: Beau Belgrave @ 2023-04-11 21:17 UTC (permalink / raw)
  To: rostedt, mhiramat, mathieu.desnoyers, dcook, alanau
  Cc: linux-kernel, linux-trace-kernel

The write index indicates which event the data is for and accesses a
per-file array. The index is passed by user processes during write()
calls as the first 4 bytes. Ensure that it cannot be negative by
returning -EINVAL to prevent out of bounds accesses.

Update ftrace self-test to ensure this occurs properly.

Fixes: 7f5a08c79df3 ("user_events: Add minimal support for trace_event into ftrace")
Reported-by: Doug Cook <dcook@linux.microsoft.com>
Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
---
 kernel/trace/trace_events_user.c                  | 3 +++
 tools/testing/selftests/user_events/ftrace_test.c | 5 +++++
 2 files changed, 8 insertions(+)

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index cc8c6d8b69b5..e7dff24aa724 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -1821,6 +1821,9 @@ static ssize_t user_events_write_core(struct file *file, struct iov_iter *i)
 	if (unlikely(copy_from_iter(&idx, sizeof(idx), i) != sizeof(idx)))
 		return -EFAULT;
 
+	if (idx < 0)
+		return -EINVAL;
+
 	rcu_read_lock_sched();
 
 	refs = rcu_dereference_sched(info->refs);
diff --git a/tools/testing/selftests/user_events/ftrace_test.c b/tools/testing/selftests/user_events/ftrace_test.c
index aceafacfb126..91272f9d6fce 100644
--- a/tools/testing/selftests/user_events/ftrace_test.c
+++ b/tools/testing/selftests/user_events/ftrace_test.c
@@ -296,6 +296,11 @@ TEST_F(user, write_events) {
 	ASSERT_NE(-1, writev(self->data_fd, (const struct iovec *)io, 3));
 	after = trace_bytes();
 	ASSERT_GT(after, before);
+
+	/* Negative index should fail with EINVAL */
+	reg.write_index = -1;
+	ASSERT_EQ(-1, writev(self->data_fd, (const struct iovec *)io, 3));
+	ASSERT_EQ(EINVAL, errno);
 }
 
 TEST_F(user, write_fault) {
-- 
2.25.1



* [PATCH 2/3] tracing/user_events: Ensure bit is cleared on unregister
  2023-04-11 21:17 [PATCH 0/3] tracing/user_events: Fixes and improvements for 6.4 Beau Belgrave
  2023-04-11 21:17 ` [PATCH 1/3] tracing/user_events: Ensure write index cannot be negative Beau Belgrave
@ 2023-04-11 21:17 ` Beau Belgrave
  2023-04-25  1:39   ` Steven Rostedt
  2023-04-11 21:17 ` [PATCH 3/3] tracing/user_events: Prevent same address and bit per process Beau Belgrave
  2 siblings, 1 reply; 9+ messages in thread
From: Beau Belgrave @ 2023-04-11 21:17 UTC (permalink / raw)
  To: rostedt, mhiramat, mathieu.desnoyers, dcook, alanau
  Cc: linux-kernel, linux-trace-kernel

If an event is enabled and a user process unregisters user_events, the
bit is left set. Fix this by always clearing the bit in the user process
if unregister is successful.

Update abi self-test to ensure this occurs properly.
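
From the user process' point of view the new behavior looks roughly
like this (a sketch only; it assumes <assert.h>, <stdint.h>,
<sys/ioctl.h>, <linux/user_events.h>, an open user_events_data fd, and
that a tracer currently has the event enabled):

/*
 * Sketch: with this patch, DIAG_IOCSUNREG clears the enable bit in the
 * user process even if a kernel tracer still has the event enabled.
 */
static uint32_t check;	/* bit 31 was registered as the enable bit */

static void unregister_clears_bit(int data_fd)
{
	struct user_unreg unreg = {0};

	unreg.size = sizeof(unreg);
	unreg.disable_bit = 31;
	unreg.disable_addr = (uint64_t)&check;

	/* A tracer has the event enabled, so bit 31 is currently set */

	ioctl(data_fd, DIAG_IOCSUNREG, &unreg);

	/* Previously bit 31 stayed set here; now the kernel clears it */
	assert(!(check & (1U << 31)));
}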

Suggested-by: Doug Cook <dcook@linux.microsoft.com>
Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
---
 kernel/trace/trace_events_user.c              | 34 +++++++++++++++++++
 .../testing/selftests/user_events/abi_test.c  |  9 +++--
 2 files changed, 40 insertions(+), 3 deletions(-)

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index e7dff24aa724..eb195d697177 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -2146,6 +2146,35 @@ static long user_unreg_get(struct user_unreg __user *ureg,
 	return ret;
 }
 
+static int user_event_mm_clear_bit(struct user_event_mm *user_mm,
+				   unsigned long uaddr, unsigned char bit)
+{
+	struct user_event_enabler enabler;
+	int result;
+
+	memset(&enabler, 0, sizeof(enabler));
+	enabler.addr = uaddr;
+	enabler.values = bit;
+retry:
+	/* Prevents state changes from racing with new enablers */
+	mutex_lock(&event_mutex);
+
+	/* Force the bit to be cleared, since no event is attached */
+	mmap_read_lock(user_mm->mm);
+	result = user_event_enabler_write(user_mm, &enabler, false);
+	mmap_read_unlock(user_mm->mm);
+
+	mutex_unlock(&event_mutex);
+
+	if (result) {
+		/* Attempt to fault-in and retry if it worked */
+		if (!user_event_mm_fault_in(user_mm, uaddr))
+			goto retry;
+	}
+
+	return result;
+}
+
 /*
  * Unregisters an enablement address/bit within a task/user mm.
  */
@@ -2190,6 +2219,11 @@ static long user_events_ioctl_unreg(unsigned long uarg)
 
 	mutex_unlock(&event_mutex);
 
+	/* Ensure bit is now cleared for user, regardless of event status */
+	if (!ret)
+		ret = user_event_mm_clear_bit(mm, reg.disable_addr,
+					      reg.disable_bit);
+
 	return ret;
 }
 
diff --git a/tools/testing/selftests/user_events/abi_test.c b/tools/testing/selftests/user_events/abi_test.c
index e0323d3777a7..5125c42efe65 100644
--- a/tools/testing/selftests/user_events/abi_test.c
+++ b/tools/testing/selftests/user_events/abi_test.c
@@ -109,13 +109,16 @@ TEST_F(user, enablement) {
 	ASSERT_EQ(0, change_event(false));
 	ASSERT_EQ(0, self->check);
 
-	/* Should not change after disable */
+	/* Ensure kernel clears bit after disable */
 	ASSERT_EQ(0, change_event(true));
 	ASSERT_EQ(1, self->check);
 	ASSERT_EQ(0, reg_disable(&self->check, 0));
+	ASSERT_EQ(0, self->check);
+
+	/* Ensure doesn't change after unreg */
+	ASSERT_EQ(0, change_event(true));
+	ASSERT_EQ(0, self->check);
 	ASSERT_EQ(0, change_event(false));
-	ASSERT_EQ(1, self->check);
-	self->check = 0;
 }
 
 TEST_F(user, bit_sizes) {
-- 
2.25.1



* [PATCH 3/3] tracing/user_events: Prevent same address and bit per process
  2023-04-11 21:17 [PATCH 0/3] tracing/user_events: Fixes and improvements for 6.4 Beau Belgrave
  2023-04-11 21:17 ` [PATCH 1/3] tracing/user_events: Ensure write index cannot be negative Beau Belgrave
  2023-04-11 21:17 ` [PATCH 2/3] tracing/user_events: Ensure bit is cleared on unregister Beau Belgrave
@ 2023-04-11 21:17 ` Beau Belgrave
  2023-04-25  1:41   ` Steven Rostedt
  2 siblings, 1 reply; 9+ messages in thread
From: Beau Belgrave @ 2023-04-11 21:17 UTC (permalink / raw)
  To: rostedt, mhiramat, mathieu.desnoyers, dcook, alanau
  Cc: linux-kernel, linux-trace-kernel

User processes register an address and bit pair for events. If the same
address and bit pair are registered multiple times in the same process,
it can cause undefined behavior when events are enabled/disabled: the
bit could be turned off when one of the events is disabled while another
event tied to the same pair is still enabled.

Prevent this undefined behavior by checking the current mm to see if any
event has already been registered for the address and bit pair. Return
-EADDRINUSE to the user process if the pair is already in use.

Update ftrace self-test to ensure this occurs properly.

Suggested-by: Doug Cook <dcook@linux.microsoft.com>
Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
---
 kernel/trace/trace_events_user.c              | 40 +++++++++++++++++++
 .../selftests/user_events/ftrace_test.c       |  9 ++++-
 2 files changed, 48 insertions(+), 1 deletion(-)

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index eb195d697177..ce5357019113 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -419,6 +419,20 @@ static int user_event_enabler_write(struct user_event_mm *mm,
 	return 0;
 }
 
+static bool user_event_enabler_exists(struct user_event_mm *mm,
+				      unsigned long uaddr, unsigned char bit)
+{
+	struct user_event_enabler *enabler;
+	struct user_event_enabler *next;
+
+	list_for_each_entry_safe(enabler, next, &mm->enablers, link)
+		if (enabler->addr == uaddr &&
+		    (enabler->values & ENABLE_VAL_BIT_MASK) == bit)
+			return true;
+
+	return false;
+}
+
 static void user_event_enabler_update(struct user_event *user)
 {
 	struct user_event_enabler *enabler;
@@ -657,6 +671,22 @@ void user_event_mm_dup(struct task_struct *t, struct user_event_mm *old_mm)
 	user_event_mm_remove(t);
 }
 
+static bool current_user_event_enabler_exists(unsigned long uaddr,
+					      unsigned char bit)
+{
+	struct user_event_mm *user_mm = current_user_event_mm();
+	bool exists;
+
+	if (!user_mm)
+		return false;
+
+	exists = user_event_enabler_exists(user_mm, uaddr, bit);
+
+	user_event_mm_put(user_mm);
+
+	return exists;
+}
+
 static struct user_event_enabler
 *user_event_enabler_create(struct user_reg *reg, struct user_event *user,
 			   int *write_result)
@@ -2045,6 +2075,16 @@ static long user_events_ioctl_reg(struct user_event_file_info *info,
 	if (ret)
 		return ret;
 
+	/*
+	 * Prevent users from using the same address and bit multiple times
+	 * within the same mm address space. This can cause unexpected behavior
+	 * for user processes that is far easier to debug if this is explicitly
+	 * an error upon registering.
+	 */
+	if (current_user_event_enabler_exists((unsigned long)reg.enable_addr,
+					      reg.enable_bit))
+		return -EADDRINUSE;
+
 	name = strndup_user((const char __user *)(uintptr_t)reg.name_args,
 			    MAX_EVENT_DESC);
 
diff --git a/tools/testing/selftests/user_events/ftrace_test.c b/tools/testing/selftests/user_events/ftrace_test.c
index 91272f9d6fce..7c99cef94a65 100644
--- a/tools/testing/selftests/user_events/ftrace_test.c
+++ b/tools/testing/selftests/user_events/ftrace_test.c
@@ -219,7 +219,12 @@ TEST_F(user, register_events) {
 	ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSREG, &reg));
 	ASSERT_EQ(0, reg.write_index);
 
-	/* Multiple registers should result in same index */
+	/* Multiple registers to the same addr + bit should fail */
+	ASSERT_EQ(-1, ioctl(self->data_fd, DIAG_IOCSREG, &reg));
+	ASSERT_EQ(EADDRINUSE, errno);
+
+	/* Multiple registers to same name should result in same index */
+	reg.enable_bit = 30;
 	ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSREG, &reg));
 	ASSERT_EQ(0, reg.write_index);
 
@@ -242,6 +247,8 @@ TEST_F(user, register_events) {
 
 	/* Unregister */
 	ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSUNREG, &unreg));
+	unreg.disable_bit = 30;
+	ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSUNREG, &unreg));
 
 	/* Delete should work only after close and unregister */
 	close(self->data_fd);
-- 
2.25.1



* Re: [PATCH 2/3] tracing/user_events: Ensure bit is cleared on unregister
  2023-04-11 21:17 ` [PATCH 2/3] tracing/user_events: Ensure bit is cleared on unregister Beau Belgrave
@ 2023-04-25  1:39   ` Steven Rostedt
  2023-04-25 17:06     ` Beau Belgrave
  0 siblings, 1 reply; 9+ messages in thread
From: Steven Rostedt @ 2023-04-25  1:39 UTC (permalink / raw)
  To: Beau Belgrave
  Cc: mhiramat, mathieu.desnoyers, dcook, alanau, linux-kernel,
	linux-trace-kernel

On Tue, 11 Apr 2023 14:17:08 -0700
Beau Belgrave <beaub@linux.microsoft.com> wrote:

> +static int user_event_mm_clear_bit(struct user_event_mm *user_mm,
> +				   unsigned long uaddr, unsigned char bit)
> +{
> +	struct user_event_enabler enabler;
> +	int result;
> +
> +	memset(&enabler, 0, sizeof(enabler));
> +	enabler.addr = uaddr;
> +	enabler.values = bit;
> +retry:
> +	/* Prevents state changes from racing with new enablers */
> +	mutex_lock(&event_mutex);
> +
> +	/* Force the bit to be cleared, since no event is attached */
> +	mmap_read_lock(user_mm->mm);
> +	result = user_event_enabler_write(user_mm, &enabler, false);
> +	mmap_read_unlock(user_mm->mm);
> +
> +	mutex_unlock(&event_mutex);
> +
> +	if (result) {
> +		/* Attempt to fault-in and retry if it worked */
> +		if (!user_event_mm_fault_in(user_mm, uaddr))
> +			goto retry;

Without looking into the functions of this call, I wonder if this can
get into an infinite loop?

-- Steve


> +	}
> +
> +	return result;
> +}
> +


* Re: [PATCH 3/3] tracing/user_events: Prevent same address and bit per process
  2023-04-11 21:17 ` [PATCH 3/3] tracing/user_events: Prevent same address and bit per process Beau Belgrave
@ 2023-04-25  1:41   ` Steven Rostedt
  0 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2023-04-25  1:41 UTC (permalink / raw)
  To: Beau Belgrave
  Cc: mhiramat, mathieu.desnoyers, dcook, alanau, linux-kernel,
	linux-trace-kernel

On Tue, 11 Apr 2023 14:17:09 -0700
Beau Belgrave <beaub@linux.microsoft.com> wrote:

> +static bool user_event_enabler_exists(struct user_event_mm *mm,
> +				      unsigned long uaddr, unsigned char bit)
> +{
> +	struct user_event_enabler *enabler;
> +	struct user_event_enabler *next;
> +
> +	list_for_each_entry_safe(enabler, next, &mm->enablers, link)
> +		if (enabler->addr == uaddr &&
> +		    (enabler->values & ENABLE_VAL_BIT_MASK) == bit)
> +			return true;
> +

Please add brackets around complex calls like the above. The no-brackets
rule is not about whether you can get away without them; brackets may be
omitted only when the body is a single line.

	list_for_each_entry_safe(enabler, next, &mm->enablers, link) {
		if (enabler->addr == uaddr &&
		    (enabler->values & ENABLE_VAL_BIT_MASK) == bit)
			return true;
	}

-- Steve



> +	return false;
> +}
> +


* Re: [PATCH 1/3] tracing/user_events: Ensure write index cannot be negative
  2023-04-11 21:17 ` [PATCH 1/3] tracing/user_events: Ensure write index cannot be negative Beau Belgrave
@ 2023-04-25 14:43   ` Masami Hiramatsu
  0 siblings, 0 replies; 9+ messages in thread
From: Masami Hiramatsu @ 2023-04-25 14:43 UTC (permalink / raw)
  To: Beau Belgrave
  Cc: rostedt, mathieu.desnoyers, dcook, alanau, linux-kernel,
	linux-trace-kernel

On Tue, 11 Apr 2023 14:17:07 -0700
Beau Belgrave <beaub@linux.microsoft.com> wrote:

> The write index indicates which event the data is for and accesses a
> per-file array. The index is passed by user processes during write()
> calls as the first 4 bytes. Ensure that it cannot be negative by
> returning -EINVAL to prevent out of bounds accesses.
> 
> Update ftrace self-test to ensure this occurs properly.
> 
> Fixes: 7f5a08c79df3 ("user_events: Add minimal support for trace_event into ftrace")
> Reported-by: Doug Cook <dcook@linux.microsoft.com>
> Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>

Looks good to me.

Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Thanks!

> ---
>  kernel/trace/trace_events_user.c                  | 3 +++
>  tools/testing/selftests/user_events/ftrace_test.c | 5 +++++
>  2 files changed, 8 insertions(+)
> 
> diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
> index cc8c6d8b69b5..e7dff24aa724 100644
> --- a/kernel/trace/trace_events_user.c
> +++ b/kernel/trace/trace_events_user.c
> @@ -1821,6 +1821,9 @@ static ssize_t user_events_write_core(struct file *file, struct iov_iter *i)
>  	if (unlikely(copy_from_iter(&idx, sizeof(idx), i) != sizeof(idx)))
>  		return -EFAULT;
>  
> +	if (idx < 0)
> +		return -EINVAL;
> +
>  	rcu_read_lock_sched();
>  
>  	refs = rcu_dereference_sched(info->refs);
> diff --git a/tools/testing/selftests/user_events/ftrace_test.c b/tools/testing/selftests/user_events/ftrace_test.c
> index aceafacfb126..91272f9d6fce 100644
> --- a/tools/testing/selftests/user_events/ftrace_test.c
> +++ b/tools/testing/selftests/user_events/ftrace_test.c
> @@ -296,6 +296,11 @@ TEST_F(user, write_events) {
>  	ASSERT_NE(-1, writev(self->data_fd, (const struct iovec *)io, 3));
>  	after = trace_bytes();
>  	ASSERT_GT(after, before);
> +
> +	/* Negative index should fail with EINVAL */
> +	reg.write_index = -1;
> +	ASSERT_EQ(-1, writev(self->data_fd, (const struct iovec *)io, 3));
> +	ASSERT_EQ(EINVAL, errno);
>  }
>  
>  TEST_F(user, write_fault) {
> -- 
> 2.25.1
> 


-- 
Masami Hiramatsu (Google) <mhiramat@kernel.org>


* Re: [PATCH 2/3] tracing/user_events: Ensure bit is cleared on unregister
  2023-04-25  1:39   ` Steven Rostedt
@ 2023-04-25 17:06     ` Beau Belgrave
  2023-04-25 17:56       ` Steven Rostedt
  0 siblings, 1 reply; 9+ messages in thread
From: Beau Belgrave @ 2023-04-25 17:06 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: mhiramat, mathieu.desnoyers, dcook, alanau, linux-kernel,
	linux-trace-kernel

On Mon, Apr 24, 2023 at 09:39:57PM -0400, Steven Rostedt wrote:
> On Tue, 11 Apr 2023 14:17:08 -0700
> Beau Belgrave <beaub@linux.microsoft.com> wrote:
> 
> > +static int user_event_mm_clear_bit(struct user_event_mm *user_mm,
> > +				   unsigned long uaddr, unsigned char bit)
> > +{
> > +	struct user_event_enabler enabler;
> > +	int result;
> > +
> > +	memset(&enabler, 0, sizeof(enabler));
> > +	enabler.addr = uaddr;
> > +	enabler.values = bit;
> > +retry:
> > +	/* Prevents state changes from racing with new enablers */
> > +	mutex_lock(&event_mutex);
> > +
> > +	/* Force the bit to be cleared, since no event is attached */
> > +	mmap_read_lock(user_mm->mm);
> > +	result = user_event_enabler_write(user_mm, &enabler, false);
> > +	mmap_read_unlock(user_mm->mm);
> > +
> > +	mutex_unlock(&event_mutex);
> > +
> > +	if (result) {
> > +		/* Attempt to fault-in and retry if it worked */
> > +		if (!user_event_mm_fault_in(user_mm, uaddr))
> > +			goto retry;
> 
> Without looking into the functions of this call, I wonder if this can
> get into an infinite loop?
> 

That's a good point; user_event_mm_fault_in() is a wrapper around
fixup_user_fault(). We retry if the fault-in works, so I guess if the
user could somehow make the write fail while the page-in keeps
succeeding, it could keep the loop going for that time period. I cannot
think of a way to achieve this forever, but that doesn't mean it
couldn't happen.

I can certainly add an upper bound of retries (maybe 3 or so?) if you
think it would be possible for this to occur. I think we need retries of
some amount to handle spurious faults.

Thanks,
-Beau

> -- Steve
> 
> 
> > +	}
> > +
> > +	return result;
> > +}
> > +


* Re: [PATCH 2/3] tracing/user_events: Ensure bit is cleared on unregister
  2023-04-25 17:06     ` Beau Belgrave
@ 2023-04-25 17:56       ` Steven Rostedt
  0 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2023-04-25 17:56 UTC (permalink / raw)
  To: Beau Belgrave
  Cc: mhiramat, mathieu.desnoyers, dcook, alanau, linux-kernel,
	linux-trace-kernel

On Tue, 25 Apr 2023 10:06:54 -0700
Beau Belgrave <beaub@linux.microsoft.com> wrote:

> That's a good point; user_event_mm_fault_in() is a wrapper around
> fixup_user_fault(). We retry if the fault-in works, so I guess if the
> user could somehow make the write fail while the page-in keeps
> succeeding, it could keep the loop going for that time period. I cannot
> think of a way to achieve this forever, but that doesn't mean it
> couldn't happen.
> 
> I can certainly add an upper bound of retries (maybe 3 or so?) if you
> think it would be possible for this to occur. I think we need retries of
> some amount to handle spurious faults.

Even 10 is fine, with a comment saying: "This really shouldn't loop more
than a couple of times, but we want to make sure some mischievous user
doesn't take advantage of this looping".
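
For reference, a bounded version might look something like the sketch
below, built on the user_event_mm_clear_bit() helper introduced in
patch 2. The retry cap and its value are illustrative only, not a
committed implementation:

/*
 * Sketch: cap the fault-in retries so a misbehaving mapping cannot keep
 * user_event_mm_clear_bit() looping indefinitely.
 */
#define MAX_CLEAR_RETRIES 10

static int user_event_mm_clear_bit(struct user_event_mm *user_mm,
				   unsigned long uaddr, unsigned char bit)
{
	struct user_event_enabler enabler;
	int attempt = 0;
	int result;

	memset(&enabler, 0, sizeof(enabler));
	enabler.addr = uaddr;
	enabler.values = bit;
retry:
	/* Prevents state changes from racing with new enablers */
	mutex_lock(&event_mutex);

	/* Force the bit to be cleared, since no event is attached */
	mmap_read_lock(user_mm->mm);
	result = user_event_enabler_write(user_mm, &enabler, false);
	mmap_read_unlock(user_mm->mm);

	mutex_unlock(&event_mutex);

	if (result) {
		/*
		 * This really shouldn't loop more than a couple of times,
		 * but bound it so a mischievous user cannot keep us here.
		 */
		if (attempt++ < MAX_CLEAR_RETRIES &&
		    !user_event_mm_fault_in(user_mm, uaddr))
			goto retry;
	}

	return result;
}

Spurious faults still get a handful of retries, while a user who keeps
arranging write failures followed by successful fault-ins can no longer
pin the unregister path.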

-- Steve


