* [RFC PATCH 0/2] Locking protection for the stats pointer
@ 2020-12-08 10:35 John Wood
  2020-12-08 10:35 ` [RFC PATCH 1/2] security/brute: Brute LSM John Wood
  2020-12-08 10:35 ` [RFC PATCH 2/2] security/brute.c: Protect the stats pointer John Wood
  0 siblings, 2 replies; 5+ messages in thread
From: John Wood @ 2020-12-08 10:35 UTC (permalink / raw)
  To: kernelnewbies; +Cc: John Wood, keescook

Hi,

I'm working on v3 of the "fork brute force attack mitigation" feature
(a KSPP task). I'm a kernel development newbie, and I think I have now
become a locking paranoiac :)

I'm posting to this mailing list to get feedback on my locking scheme.
Locking worries me a lot. I believe the protection of the stats
structure's internal data against concurrency is correct. But the
protection of the pointer shared among processes is not clear to me.

I have split this post into two patches. The first shows the "brute" LSM
code together with an explanation of the main idea behind it.

The second patch shows what I think is the correct method to protect the
stats pointer shared among processes.

The code in each patch does not represent the changes that will be
submitted for review. I split the changes this way to make it easier to
comment on them and to clarify my doubts.

Also, this RFC refers to a task_fatal_signal hook that is not shown.
I think narrowing the code is better, so that part is not included.

All my questions are presented in the second patch. I would like to know
your comments and opinions. This way I will be able to choose the
correct locking approach and avoid basic errors.

The questions relate to locking, not to the functionality of the "brute"
LSM. But any constructive comments are always welcome.

The previous versions can be found at:

RFC
https://lore.kernel.org/kernel-hardening/20200910202107.3799376-1-keescook@chromium.org/

v2
https://lore.kernel.org/kernel-hardening/20201025134540.3770-1-john.wood@gmx.com/

Thanks in advance.

John Wood (2):
  security/brute: Brute LSM
  security/brute.c: Protect the stats pointer

 security/brute/brute.c | 381 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 381 insertions(+)
 create mode 100644 security/brute/brute.c

--
2.25.1


_______________________________________________
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
https://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


* [RFC PATCH 1/2] security/brute: Brute LSM
  2020-12-08 10:35 [RFC PATCH 0/2] Locking protection for the stats pointer John Wood
@ 2020-12-08 10:35 ` John Wood
  2020-12-08 10:35 ` [RFC PATCH 2/2] security/brute.c: Protect the stats pointer John Wood
  1 sibling, 0 replies; 5+ messages in thread
From: John Wood @ 2020-12-08 10:35 UTC (permalink / raw)
  To: kernelnewbies; +Cc: John Wood, keescook

This is the code of the "Brute" LSM; it is a work in progress. The basic
idea behind it is the following:

When a process forks, if it already has statistics, these are shared
with the child. If the forking process doesn't have statistical data, a
new stats structure is allocated and shared with the child. This is
managed by the task_alloc hook.

When a process calls execve after a fork system call, new statistics are
allocated for it. But if execve is called immediately after another
execve, the existing statistics can be reused once they are reset. This
is managed by the bprm_committing_creds hook.

When a process is freed, we must check whether it is necessary to free
the statistical data shared among processes. This is managed by the
task_free hook.
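The fork/free lifecycle above can be modeled in plain userspace C. This
is a sketch only: a plain int stands in for the kernel's refcount_t, the
locking is omitted, and the helper names are mine, not the LSM's:

```c
#include <assert.h>
#include <stdlib.h>

struct stats {
	int refc;		/* how many tasks point at this structure */
	unsigned char faults;	/* number of crashes */
};

/* task_alloc (parent without stats): allocate with one reference */
static struct stats *stats_new(void)
{
	struct stats *s = calloc(1, sizeof(*s));

	if (s)
		s->refc = 1;
	return s;
}

/* task_alloc (parent with stats): the child shares the parent's data */
static struct stats *stats_share(struct stats *s)
{
	s->refc++;
	return s;
}

/* task_free: drop one reference and free on the last one */
static void stats_put(struct stats *s)
{
	if (--s->refc == 0)
		free(s);
}
```

The point of the model is that the structure lives exactly as long as
the fork hierarchy: the last exiting task drops the last reference.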

When a process receives a fatal signal, the crash periods (for the fork
and execve system calls) are updated and then tested to detect an
attack. This is managed by the task_fatal_signal hook (defined in a
previous patch not shown in this RFC).
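The crash-period test relies on an integer exponential moving average
with a 7/10 weight, mirroring brute_mul_by_ema_weight() from the code
below: each update keeps 30% of the old average and adds 70% of the new
crash-period sample. A minimal userspace sketch (function names here are
illustrative):

```c
#include <assert.h>
#include <stdint.h>

static const uint64_t EMA_NUM = 7;
static const uint64_t EMA_DEN = 10;

static uint64_t ema_weight(uint64_t a)
{
	/* divide first to avoid overflow on large jiffies deltas */
	return a / EMA_DEN * EMA_NUM;
}

/* avg = avg - 0.7*avg + 0.7*sample, in pure integer arithmetic */
static uint64_t ema_update(uint64_t avg, uint64_t sample)
{
	avg -= ema_weight(avg);
	avg += ema_weight(sample);
	return avg;
}
```

Starting from zero, repeated samples converge toward the sample value,
which is why the code requires a minimum number of faults
(BRUTE_EMA_MIN_DATA) before trusting the average.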

Signed-off-by: John Wood <john.wood@gmx.com>
---
 security/brute/brute.c | 358 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 358 insertions(+)
 create mode 100644 security/brute/brute.c

diff --git a/security/brute/brute.c b/security/brute/brute.c
new file mode 100644
index 000000000000..60944a0f8de8
--- /dev/null
+++ b/security/brute/brute.c
@@ -0,0 +1,358 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <asm/current.h>
+#include <linux/bug.h>
+#include <linux/compiler.h>
+#include <linux/errno.h>
+#include <linux/gfp.h>
+#include <linux/init.h>
+#include <linux/jiffies.h>
+#include <linux/kernel.h>
+#include <linux/lsm_hooks.h>
+#include <linux/printk.h>
+#include <linux/refcount.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+
+/**
+ * struct brute_stats - Fork brute force attack statistics.
+ * @lock: Lock to protect the brute_stats structure.
+ * @refc: Reference counter.
+ * @faults: Number of crashes.
+ * @jiffies: Last crash timestamp.
+ * @period: Crash period's moving average.
+ *
+ * This structure holds the statistical data shared by all the fork hierarchy
+ * processes.
+ */
+struct brute_stats {
+	spinlock_t lock;
+	refcount_t refc;
+	unsigned char faults;
+	u64 jiffies;
+	u64 period;
+};
+
+/**
+ * brute_blob_sizes - LSM blob sizes.
+ *
+ * To share statistical data among all the fork hierarchy processes, define a
+ * pointer to the brute_stats structure as a part of the task_struct's security
+ * blob.
+ */
+static struct lsm_blob_sizes brute_blob_sizes __lsm_ro_after_init = {
+	.lbs_task = sizeof(struct brute_stats *),
+};
+
+/**
+ * brute_stats_ptr() - Get the pointer to the brute_stats structure.
+ * @task: Task that holds the statistical data.
+ *
+ * Return: A pointer to a pointer to the brute_stats structure.
+ */
+static inline struct brute_stats **brute_stats_ptr(struct task_struct *task)
+{
+	return task->security + brute_blob_sizes.lbs_task;
+}
+
+/**
+ * brute_new_stats() - Allocate a new statistics structure.
+ *
+ * If the allocation is successful the reference counter is set to one to
+ * indicate that there will be one task that points to this structure. Also, the
+ * last crash timestamp is set to now. This way, it's possible to compute the
+ * application crash period at the first fault.
+ *
+ * Return: NULL if the allocation fails. A pointer to the newly allocated
+ *         statistics structure if it succeeds.
+ */
+static struct brute_stats *brute_new_stats(void)
+{
+	struct brute_stats *stats;
+
+	stats = kmalloc(sizeof(struct brute_stats), GFP_KERNEL);
+	if (!stats)
+		return NULL;
+
+	spin_lock_init(&stats->lock);
+	refcount_set(&stats->refc, 1);
+	stats->faults = 0;
+	stats->jiffies = get_jiffies_64();
+	stats->period = 0;
+
+	return stats;
+}
+
+/**
+ * brute_share_stats() - Share the statistical data between processes.
+ * @src: Source of statistics to be shared.
+ * @dst: Destination of statistics to be shared.
+ *
+ * Copy the src's pointer to the statistical data structure to the dst's pointer
+ * to the same structure. Since there is a new process that shares the same
+ * data, increase the reference counter. The src's pointer cannot be NULL.
+ *
+ * It's mandatory to disable interrupts before acquiring the lock since the
+ * task_free hook can be called from an IRQ context during the execution of the
+ * task_alloc hook.
+ */
+static void brute_share_stats(struct brute_stats **src,
+			      struct brute_stats **dst)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&(*src)->lock, flags);
+	refcount_inc(&(*src)->refc);
+	*dst = *src;
+	spin_unlock_irqrestore(&(*src)->lock, flags);
+}
+
+/**
+ * brute_task_alloc() - Target for the task_alloc hook.
+ * @task: Task being allocated.
+ * @clone_flags: Contains the flags indicating what should be shared.
+ *
+ * For a correct management of a fork brute force attack it is necessary that
+ * all the tasks hold statistical data. The same statistical data needs to be
+ * shared between all the tasks that hold the same memory contents or in other
+ * words, between all the tasks that have been forked without any execve call.
+ *
+ * To ensure this, if the current task doesn't have statistical data when forks,
+ * it is mandatory to allocate a new statistics structure and share it between
+ * this task and the new one being allocated. Otherwise, share the statistics
+ * that the current task already has.
+ *
+ * Return: -ENOMEM if the allocation of the new statistics structure fails. Zero
+ *         otherwise.
+ */
+static int brute_task_alloc(struct task_struct *task, unsigned long clone_flags)
+{
+	struct brute_stats **stats, **p_stats;
+
+	stats = brute_stats_ptr(task);
+	p_stats = brute_stats_ptr(current);
+
+	if (likely(*p_stats)) {
+		brute_share_stats(p_stats, stats);
+		return 0;
+	}
+
+	*stats = brute_new_stats();
+	if (!*stats)
+		return -ENOMEM;
+
+	brute_share_stats(stats, p_stats);
+	return 0;
+}
+
+/**
+ * brute_task_execve() - Target for the bprm_committing_creds hook.
+ * @bprm: Points to the linux_binprm structure.
+ *
+ * When a forked task calls the execve system call, the memory contents are set
+ * with new values. So, in this scenario, the parent's statistical data does not
+ * need to be shared. Instead, a new statistical data structure must be allocated to
+ * start a new hierarchy. This condition is detected when the statistics
+ * reference counter holds a value greater than or equal to two (a fork always
+ * sets the statistics reference counter to a minimum of two since the parent
+ * and the child task are sharing the same data).
+ *
+ * However, if the execve function is called immediately after another execve
+ * call, although the memory contents are reset, there is no need to allocate
+ * a new statistical data structure. This is possible because at this moment
+ * only one task (the task that calls the execve function) points to the data.
+ * In this case, the previous allocation is used but the statistics are reset.
+ *
+ * It's mandatory to disable interrupts before acquiring the lock since the
+ * task_free hook can be called from an IRQ context during the execution of the
+ * bprm_committing_creds hook.
+ */
+static void brute_task_execve(struct linux_binprm *bprm)
+{
+	struct brute_stats **stats;
+	unsigned long flags;
+
+	stats = brute_stats_ptr(current);
+	if (WARN(!*stats, "No statistical data\n"))
+		return;
+
+	spin_lock_irqsave(&(*stats)->lock, flags);
+
+	if (!refcount_dec_not_one(&(*stats)->refc)) {
+		/* execve call after an execve call */
+		(*stats)->faults = 0;
+		(*stats)->jiffies = get_jiffies_64();
+		(*stats)->period = 0;
+		spin_unlock_irqrestore(&(*stats)->lock, flags);
+		return;
+	}
+
+	/* execve call after a fork call */
+	spin_unlock_irqrestore(&(*stats)->lock, flags);
+	*stats = brute_new_stats();
+	WARN(!*stats, "Cannot allocate statistical data\n");
+}
+
+/**
+ * brute_task_free() - Target for the task_free hook.
+ * @task: Task about to be freed.
+ *
+ * The statistical data that is shared between all the fork hierarchy processes
+ * needs to be freed when this hierarchy disappears.
+ */
+static void brute_task_free(struct task_struct *task)
+{
+	struct brute_stats **stats;
+	bool refc_is_zero;
+
+	stats = brute_stats_ptr(task);
+	if (WARN(!*stats, "No statistical data\n"))
+		return;
+
+	spin_lock(&(*stats)->lock);
+	refc_is_zero = refcount_dec_and_test(&(*stats)->refc);
+	spin_unlock(&(*stats)->lock);
+
+	if (refc_is_zero)
+		kfree(*stats);
+}
+
+static const u64 BRUTE_EMA_WEIGHT_NUMERATOR = 7;
+static const u64 BRUTE_EMA_WEIGHT_DENOMINATOR = 10;
+
+static inline u64 brute_mul_by_ema_weight(u64 a)
+{
+	return a / BRUTE_EMA_WEIGHT_DENOMINATOR * BRUTE_EMA_WEIGHT_NUMERATOR;
+}
+
+static const unsigned char BRUTE_EMA_MIN_DATA = 5;
+
+static u64 brute_update_fork_crash_period(struct brute_stats *stats, u64 now)
+{
+	unsigned long flags;
+	u64 period;
+
+	spin_lock_irqsave(&stats->lock, flags);
+	period = now - stats->jiffies;
+	stats->jiffies = now;
+
+	stats->period -= brute_mul_by_ema_weight(stats->period);
+	stats->period += brute_mul_by_ema_weight(period);
+
+	stats->faults += 1;
+	period = stats->faults < BRUTE_EMA_MIN_DATA ? 0 : stats->period;
+	spin_unlock_irqrestore(&stats->lock, flags);
+
+	return jiffies64_to_msecs(period);
+}
+
+static struct brute_stats *brute_get_exec_stats(const struct brute_stats *stats)
+{
+	const struct task_struct *task = current;
+	struct brute_stats **p_stats;
+
+	read_lock(&tasklist_lock);
+	do {
+		if (!task->real_parent) {
+			read_unlock(&tasklist_lock);
+			return NULL;
+		}
+
+		p_stats = brute_stats_ptr(task->real_parent);
+		task = task->real_parent;
+	} while (stats == *p_stats);
+	read_unlock(&tasklist_lock);
+
+	return *p_stats;
+}
+
+static u64 brute_update_exec_crash_period(const struct brute_stats *stats,
+					  u64 now, u64 last_fork_crash)
+{
+	struct brute_stats *exec_stats;
+	unsigned long flags;
+
+	exec_stats = brute_get_exec_stats(stats);
+	if (WARN(!exec_stats, "No exec statistical data\n"))
+		return 0;
+
+	spin_lock_irqsave(&exec_stats->lock, flags);
+	if (!exec_stats->faults)
+		exec_stats->jiffies = last_fork_crash;
+	spin_unlock_irqrestore(&exec_stats->lock, flags);
+
+	return brute_update_fork_crash_period(exec_stats, now);
+}
+
+static void brute_get_crash_periods(struct brute_stats *stats, u64 *fork_period,
+				    u64 *exec_period)
+{
+	unsigned long flags;
+	u64 last_fork_crash;
+	u64 now = get_jiffies_64();
+
+	spin_lock_irqsave(&stats->lock, flags);
+	last_fork_crash = stats->jiffies;
+	spin_unlock_irqrestore(&stats->lock, flags);
+
+	*fork_period = brute_update_fork_crash_period(stats, now);
+	*exec_period = brute_update_exec_crash_period(stats, now,
+						      last_fork_crash);
+}
+
+static const u64 BRUTE_EMA_CRASH_PERIOD_THRESHOLD = 30000;
+
+static void brute_task_fatal_signal(const kernel_siginfo_t *siginfo)
+{
+	struct brute_stats **stats;
+	u64 fork_period;
+	u64 exec_period;
+
+	stats = brute_stats_ptr(current);
+	if (WARN(!*stats, "No statistical data\n"))
+		return;
+
+	brute_get_crash_periods(*stats, &fork_period, &exec_period);
+
+	if (fork_period && fork_period < BRUTE_EMA_CRASH_PERIOD_THRESHOLD)
+		pr_warn("Brute force attack detected through fork\n");
+
+	if (fork_period == exec_period)
+		return;
+
+	if (exec_period && exec_period < BRUTE_EMA_CRASH_PERIOD_THRESHOLD)
+		pr_warn("Brute force attack detected through execve\n");
+}
+
+/**
+ * brute_hooks - Targets for the LSM's hooks.
+ */
+static struct security_hook_list brute_hooks[] __lsm_ro_after_init = {
+	LSM_HOOK_INIT(task_alloc, brute_task_alloc),
+	LSM_HOOK_INIT(bprm_committing_creds, brute_task_execve),
+	LSM_HOOK_INIT(task_free, brute_task_free),
+	LSM_HOOK_INIT(task_fatal_signal, brute_task_fatal_signal),
+};
+
+/**
+ * brute_init() - Initialize the brute LSM.
+ *
+ * Return: Always returns zero.
+ */
+static int __init brute_init(void)
+{
+	pr_info("Brute initialized\n");
+	security_add_hooks(brute_hooks, ARRAY_SIZE(brute_hooks),
+			   KBUILD_MODNAME);
+	return 0;
+}
+
+DEFINE_LSM(brute) = {
+	.name = KBUILD_MODNAME,
+	.init = brute_init,
+	.blobs = &brute_blob_sizes,
+};
--
2.25.1



* [RFC PATCH 2/2] security/brute.c: Protect the stats pointer
  2020-12-08 10:35 [RFC PATCH 0/2] Locking protection for the stats pointer John Wood
  2020-12-08 10:35 ` [RFC PATCH 1/2] security/brute: Brute LSM John Wood
@ 2020-12-08 10:35 ` John Wood
  2020-12-08 14:42   ` Valdis Klētnieks
  1 sibling, 1 reply; 5+ messages in thread
From: John Wood @ 2020-12-08 10:35 UTC (permalink / raw)
  To: kernelnewbies; +Cc: John Wood, keescook

I think the stats pointer present in the task_struct's security blob
needs to be protected against concurrency for the following reasons.

1.- The same process forking at the same time on two different CPUs.
2.- The same process calling execve() at the same time on two different
    CPUs.
3.- A process forking or calling execve() while we traverse the fork
    hierarchy in the brute_get_exec_stats() function.
4.- Other cases that don't come to mind right now.

So, the question is: is the locking correct?

I use a read/write spinlock to allow multiple readers while the stats
pointer doesn't change. But when the pointer needs to be updated, access
must be exclusive to a single writer.

Moreover, kmalloc() now uses GFP_ATOMIC to avoid sleeping in atomic
context. Is this a problem? Would it be better to allocate the memory
for the stats structure outside of the atomic context using GFP_KERNEL
and update the stats pointer inside the atomic context? That would make
it necessary to always allocate memory in the task_alloc hook (and free
it when it turns out not to be needed).

Also, I would like to know whether finer-grained locking is necessary.
I don't think it is possible, because we need to avoid, for example, the
case where the same process calls execve() at the same time on two
different CPUs, two stats structures are allocated, and only one of them
is assigned to the shared stats pointer. I think that situation would be
a memory leak. I also believe the other cases cannot use finer-grained
locking, since these sections need to be atomic to avoid inconsistent
states like the one described above.

Another question: if I use this lock to protect the stats pointer, can
acquiring and releasing stats->lock be avoided in the brute_share_stats
and brute_task_execve functions? I think that in this scenario the
brute_stats_ptr_lock also protects access to the internal data of the
stats structure.

But I think that stats->lock cannot be removed completely, since in the
task_fatal_signal function brute_stats_ptr_lock is acquired in the read
state while we still need to write to the stats structure to compute the
crash period.

Any constructive comments are welcome. I would like to know your
opinions about my locking scheme, but remember that I'm a newbie :)

Signed-off-by: John Wood <john.wood@gmx.com>
---
 security/brute/brute.c | 35 +++++++++++++++++++++++++++++------
 1 file changed, 29 insertions(+), 6 deletions(-)

diff --git a/security/brute/brute.c b/security/brute/brute.c
index 60944a0f8de8..926bffb61a3b 100644
--- a/security/brute/brute.c
+++ b/security/brute/brute.c
@@ -18,6 +18,8 @@
 #include <linux/spinlock.h>
 #include <linux/types.h>

+static DEFINE_RWLOCK(brute_stats_ptr_lock);
+
 /**
  * struct brute_stats - Fork brute force attack statistics.
  * @lock: Lock to protect the brute_stats structure.
@@ -74,7 +76,7 @@ static struct brute_stats *brute_new_stats(void)
 {
 	struct brute_stats *stats;

-	stats = kmalloc(sizeof(struct brute_stats), GFP_KERNEL);
+	stats = kmalloc(sizeof(struct brute_stats), GFP_ATOMIC);
 	if (!stats)
 		return NULL;

@@ -135,17 +137,22 @@ static int brute_task_alloc(struct task_struct *task, unsigned long clone_flags)

 	stats = brute_stats_ptr(task);
 	p_stats = brute_stats_ptr(current);
+	write_lock(&brute_stats_ptr_lock);

 	if (likely(*p_stats)) {
 		brute_share_stats(p_stats, stats);
+		write_unlock(&brute_stats_ptr_lock);
 		return 0;
 	}

 	*stats = brute_new_stats();
-	if (!*stats)
+	if (!*stats) {
+		write_unlock(&brute_stats_ptr_lock);
 		return -ENOMEM;
+	}

 	brute_share_stats(stats, p_stats);
+	write_unlock(&brute_stats_ptr_lock);
 	return 0;
 }

@@ -177,8 +184,12 @@ static void brute_task_execve(struct linux_binprm *bprm)
 	unsigned long flags;

 	stats = brute_stats_ptr(current);
-	if (WARN(!*stats, "No statistical data\n"))
+	write_lock(&brute_stats_ptr_lock);
+
+	if (WARN(!*stats, "No statistical data\n")) {
+		write_unlock(&brute_stats_ptr_lock);
 		return;
+	}

 	spin_lock_irqsave(&(*stats)->lock, flags);

@@ -188,6 +199,7 @@ static void brute_task_execve(struct linux_binprm *bprm)
 		(*stats)->jiffies = get_jiffies_64();
 		(*stats)->period = 0;
 		spin_unlock_irqrestore(&(*stats)->lock, flags);
+		write_unlock(&brute_stats_ptr_lock);
 		return;
 	}

@@ -195,6 +207,7 @@ static void brute_task_execve(struct linux_binprm *bprm)
 	spin_unlock_irqrestore(&(*stats)->lock, flags);
 	*stats = brute_new_stats();
 	WARN(!*stats, "Cannot allocate statistical data\n");
+	write_unlock(&brute_stats_ptr_lock);
 }

 /**
@@ -210,8 +223,12 @@ static void brute_task_free(struct task_struct *task)
 	bool refc_is_zero;

 	stats = brute_stats_ptr(task);
-	if (WARN(!*stats, "No statistical data\n"))
+	read_lock(&brute_stats_ptr_lock);
+
+	if (WARN(!*stats, "No statistical data\n")) {
+		read_unlock(&brute_stats_ptr_lock);
 		return;
+	}

 	spin_lock(&(*stats)->lock);
 	refc_is_zero = refcount_dec_and_test(&(*stats)->refc);
@@ -219,6 +236,8 @@ static void brute_task_free(struct task_struct *task)

 	if (refc_is_zero)
 		kfree(*stats);
+
+	read_unlock(&brute_stats_ptr_lock);
 }

 static const u64 BRUTE_EMA_WEIGHT_NUMERATOR = 7;
@@ -313,8 +332,10 @@ static void brute_task_fatal_signal(const kernel_siginfo_t *siginfo)
 	u64 exec_period;

 	stats = brute_stats_ptr(current);
+	read_lock(&brute_stats_ptr_lock);
+
 	if (WARN(!*stats, "No statistical data\n"))
-		return;
+		goto unlock;

 	brute_get_crash_periods(*stats, &fork_period, &exec_period);

@@ -322,10 +343,12 @@ static void brute_task_fatal_signal(const kernel_siginfo_t *siginfo)
 		pr_warn("Brute force attack detected through fork\n");

 	if (fork_period == exec_period)
-		return;
+		goto unlock;

 	if (exec_period && exec_period < BRUTE_EMA_CRASH_PERIOD_THRESHOLD)
 		pr_warn("Brute force attack detected through execve\n");
+unlock:
+	read_unlock(&brute_stats_ptr_lock);
 }

 /**
--
2.25.1



* Re: [RFC PATCH 2/2] security/brute.c: Protect the stats pointer
  2020-12-08 10:35 ` [RFC PATCH 2/2] security/brute.c: Protect the stats pointer John Wood
@ 2020-12-08 14:42   ` Valdis Klētnieks
  2020-12-09  9:31     ` John Wood
  0 siblings, 1 reply; 5+ messages in thread
From: Valdis Klētnieks @ 2020-12-08 14:42 UTC (permalink / raw)
  To: John Wood; +Cc: keescook, kernelnewbies



On Tue, 08 Dec 2020 11:35:57 +0100, John Wood said:
> I think the stats pointer present in the task_struct's security blob
> needs to be protected against concurrency for the following reasons.
>
> 1.- The same process forking at the same time in two different CPUs.
> 2.- The same process execve() at the same time in two different CPUs.

OK, I'll bite.  How would these two cases even happen?

(Note that you could conceivably issue the fork()/execve() on one CPU, run
kernel code for a bit and then get rescheduled onto a different CPU to complete
the syscall, but that's a different cache coherency can-o-worms :)

(Your case 3 of a fork/exec while you traverse is an actual issue.  Note that
you missed one case - where the process evaporates for some reason while you do
the traverse and you're left with a stale pointer...)



* Re: [RFC PATCH 2/2] security/brute.c: Protect the stats pointer
  2020-12-08 14:42   ` Valdis Klētnieks
@ 2020-12-09  9:31     ` John Wood
  0 siblings, 0 replies; 5+ messages in thread
From: John Wood @ 2020-12-09  9:31 UTC (permalink / raw)
  To: Valdis Klētnieks; +Cc: John Wood, keescook, kernelnewbies

On Tue, 08 Dec 2020 09:42:59 -0500, Valdis Klētnieks wrote:
> On Tue, 08 Dec 2020 11:35:57 +0100, John Wood said:
> > I think the stats pointer present in the task_struct's security blob
> > needs to be protected against concurrency for the following reasons.
> >
> > 1.- The same process forking at the same time in two different CPUs.
> > 2.- The same process execve() at the same time in two different CPUs.
>
> OK, I'll bite.  How would these two cases even happen?
>
> (Note that you could conceivably issue the fork()/execve() on one CPU, run
> kernel code for a bit and then get rescheduled onto a different CPU to complete
> the syscall, but that's a different cache coherency can-o-worms :)

Thanks for the reply. Understood. The first and second cases can never happen.

> (Your case 3 of a fork/exec while you traverse is an actual issue.  Note that
> you missed one case - where the process evaporates for some reason while you do
> the traverse and you're left with a stale pointer...)

Ok, so I still need protection for the stats pointer.

Since cases 1 and 2 can never happen, I believe there is no need to make
all of the fork, execve and free management atomic. In other words, I
now think I can protect the read and write operations separately. Note
that the fork management remains atomic because basically all of its
operations are writes, and I believe this is better than acquiring the
lock in the read state, releasing it, acquiring it in the write state,
releasing it, acquiring it again in the read state, and so on.

Also, to deal with the case where a process evaporates while I do the
traversal, I set the pointer to NULL after freeing it. This way I can
detect that state.

This is the protection scheme now. What do you think?
I hope you are not too hard on my code :)

---
 security/brute/brute.c | 42 ++++++++++++++++++++++++++++++++++++------
 1 file changed, 36 insertions(+), 6 deletions(-)

diff --git a/security/brute/brute.c b/security/brute/brute.c
index 60944a0f8de8..0c6bebd9bf18 100644
--- a/security/brute/brute.c
+++ b/security/brute/brute.c
@@ -18,6 +18,8 @@
 #include <linux/spinlock.h>
 #include <linux/types.h>

+static DEFINE_RWLOCK(brute_stats_ptr_lock);
+
 /**
  * struct brute_stats - Fork brute force attack statistics.
  * @lock: Lock to protect the brute_stats structure.
@@ -74,7 +76,7 @@ static struct brute_stats *brute_new_stats(void)
 {
 	struct brute_stats *stats;

-	stats = kmalloc(sizeof(struct brute_stats), GFP_KERNEL);
+	stats = kmalloc(sizeof(struct brute_stats), GFP_ATOMIC);
 	if (!stats)
 		return NULL;

@@ -135,17 +137,22 @@ static int brute_task_alloc(struct task_struct *task, unsigned long clone_flags)

 	stats = brute_stats_ptr(task);
 	p_stats = brute_stats_ptr(current);
+	write_lock(&brute_stats_ptr_lock);

 	if (likely(*p_stats)) {
 		brute_share_stats(p_stats, stats);
+		write_unlock(&brute_stats_ptr_lock);
 		return 0;
 	}

 	*stats = brute_new_stats();
-	if (!*stats)
+	if (!*stats) {
+		write_unlock(&brute_stats_ptr_lock);
 		return -ENOMEM;
+	}

 	brute_share_stats(stats, p_stats);
+	write_unlock(&brute_stats_ptr_lock);
 	return 0;
 }

@@ -177,8 +184,12 @@ static void brute_task_execve(struct linux_binprm *bprm)
 	unsigned long flags;

 	stats = brute_stats_ptr(current);
-	if (WARN(!*stats, "No statistical data\n"))
+	read_lock(&brute_stats_ptr_lock);
+
+	if (WARN(!*stats, "No statistical data\n")) {
+		read_unlock(&brute_stats_ptr_lock);
 		return;
+	}

 	spin_lock_irqsave(&(*stats)->lock, flags);

@@ -188,13 +199,18 @@ static void brute_task_execve(struct linux_binprm *bprm)
 		(*stats)->jiffies = get_jiffies_64();
 		(*stats)->period = 0;
 		spin_unlock_irqrestore(&(*stats)->lock, flags);
+		read_unlock(&brute_stats_ptr_lock);
 		return;
 	}

 	/* execve call after a fork call */
 	spin_unlock_irqrestore(&(*stats)->lock, flags);
+	read_unlock(&brute_stats_ptr_lock);
+
+	write_lock(&brute_stats_ptr_lock);
 	*stats = brute_new_stats();
 	WARN(!*stats, "Cannot allocate statistical data\n");
+	write_unlock(&brute_stats_ptr_lock);
 }

 /**
@@ -210,15 +226,24 @@ static void brute_task_free(struct task_struct *task)
 	bool refc_is_zero;

 	stats = brute_stats_ptr(task);
-	if (WARN(!*stats, "No statistical data\n"))
+	read_lock(&brute_stats_ptr_lock);
+
+	if (WARN(!*stats, "No statistical data\n")) {
+		read_unlock(&brute_stats_ptr_lock);
 		return;
+	}

 	spin_lock(&(*stats)->lock);
 	refc_is_zero = refcount_dec_and_test(&(*stats)->refc);
 	spin_unlock(&(*stats)->lock);
+	read_unlock(&brute_stats_ptr_lock);

-	if (refc_is_zero)
+	if (refc_is_zero) {
+		write_lock(&brute_stats_ptr_lock);
 		kfree(*stats);
+		*stats = NULL;
+		write_unlock(&brute_stats_ptr_lock);
+	}
 }

 static const u64 BRUTE_EMA_WEIGHT_NUMERATOR = 7;
@@ -313,10 +338,15 @@ static void brute_task_fatal_signal(const kernel_siginfo_t *siginfo)
 	u64 exec_period;

 	stats = brute_stats_ptr(current);
-	if (WARN(!*stats, "No statistical data\n"))
+	read_lock(&brute_stats_ptr_lock);
+
+	if (WARN(!*stats, "No statistical data\n")) {
+		read_unlock(&brute_stats_ptr_lock);
 		return;
+	}

 	brute_get_crash_periods(*stats, &fork_period, &exec_period);
+	read_unlock(&brute_stats_ptr_lock);

 	if (fork_period && fork_period < BRUTE_EMA_CRASH_PERIOD_THRESHOLD)
 		pr_warn("Brute force attack detected through fork\n");
--
2.25.1


