From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: from thejh.net ([37.221.195.125]:54337 "EHLO thejh.net"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752366AbcJ3Vqt (ORCPT ); Sun, 30 Oct 2016 17:46:49 -0400
From: Jann Horn
To: Alexander Viro, Roland McGrath, Oleg Nesterov, John Johansen,
	James Morris, "Serge E. Hallyn", Paul Moore, Stephen Smalley,
	Eric Paris, Casey Schaufler, Kees Cook, Andrew Morton,
	Janis Danisevskis, Seth Forshee, "Eric W. Biederman",
	Thomas Gleixner, Benjamin LaHaise, Ben Hutchings,
	Andy Lutomirski, Linus Torvalds, Krister Johansen
Cc: linux-fsdevel@vger.kernel.org, linux-security-module@vger.kernel.org,
	security@kernel.org
Subject: [PATCH v3 2/8] exec: add privunit to task_struct
Date: Sun, 30 Oct 2016 22:46:32 +0100
Message-Id: <1477863998-3298-3-git-send-email-jann@thejh.net>
In-Reply-To: <1477863998-3298-1-git-send-email-jann@thejh.net>
References: <1477863998-3298-1-git-send-email-jann@thejh.net>
Sender: linux-fsdevel-owner@vger.kernel.org
List-ID:

This adds a member privunit ("privilege unit locally unique ID") to
task_struct. privunit is only shared by processes that share both the
mm_struct and the signal_struct - not just spatially, but also
temporally. In other words, if you do execve() or clone() without
CLONE_THREAD, you get a new privunit that has never been used before.

privunit is used in later patches to check, during ptrace access
checks, whether subject and object are temporally and spatially equal
for privilege checking purposes.

The implementation of locally unique IDs lives in sched.h and exec.c
for now because those are the only users so far - if anything else
wants to use them in the future, they can be moved elsewhere.

changed in v2:
 - have 2^64 IDs per CPU instead of 2^64 shared ones (luid scheme,
   suggested by Andy Lutomirski)
 - take task_lock for reading in setup_new_exec() while bumping the LUID

changed in v3:
 - Make privunit a new member of task_struct instead of reusing
   self_exec_id. This reduces locking trouble and allows self_exec_id
   to be removed at a later point. (Oleg Nesterov)
 - statically initialize luid_counters instead of using an __init
   function (Andy Lutomirski)

Signed-off-by: Jann Horn
---
 fs/exec.c             | 18 ++++++++++++++++++
 include/linux/sched.h | 18 ++++++++++++++++--
 kernel/fork.c         |  1 +
 3 files changed, 35 insertions(+), 2 deletions(-)

diff --git a/fs/exec.c b/fs/exec.c
index 67b76cb319d8..c695dcd355ac 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -1286,6 +1286,22 @@ void would_dump(struct linux_binprm *bprm, struct file *file)
 }
 EXPORT_SYMBOL(would_dump);
 
+/* value 0 is reserved for init */
+static DEFINE_PER_CPU(u64, luid_counters) = 1;
+
+/*
+ * Allocates a new LUID and writes the allocated LUID to @out.
+ * This function must not be called from IRQ context.
+ */
+void alloc_luid(struct luid *out)
+{
+	preempt_disable();
+	out->count = raw_cpu_read(luid_counters);
+	raw_cpu_add(luid_counters, 1);
+	out->cpu = smp_processor_id();
+	preempt_enable();
+}
+
 void setup_new_exec(struct linux_binprm * bprm)
 {
 	arch_pick_mmap_layout(current->mm);
@@ -1320,6 +1336,8 @@ void setup_new_exec(struct linux_binprm * bprm)
 	/* An exec changes our domain. We are no longer part of the thread
 	   group */
 	current->self_exec_id++;
+	alloc_luid(&current->privunit);
+
 	flush_signal_handlers(current, 0);
 	do_close_on_exec(current->files);
 }
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 0ccb379895b3..86146977d60c 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1482,6 +1482,19 @@ struct tlbflush_unmap_batch {
 	bool writable;
 };
 
+/* locally unique ID */
+struct luid {
+	u64 count;
+	unsigned int cpu;
+};
+
+void alloc_luid(struct luid *out);
+
+static inline bool luid_eq(const struct luid *a, const struct luid *b)
+{
+	return a->count == b->count && a->cpu == b->cpu;
+}
+
 struct task_struct {
 #ifdef CONFIG_THREAD_INFO_IN_TASK
 	/*
@@ -1713,8 +1726,9 @@ struct task_struct {
 	struct seccomp seccomp;
 
 /* Thread group tracking */
-   	u32 parent_exec_id;
-   	u32 self_exec_id;
+	u32 parent_exec_id;
+	u32 self_exec_id;
+	struct luid privunit;
 /* Protection of (de-)allocation: mm, files, fs, tty, keyrings, mems_allowed,
  * mempolicy */
 	spinlock_t alloc_lock;
diff --git a/kernel/fork.c b/kernel/fork.c
index d0e1d6fa4d00..c7a658d5a6cf 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1714,6 +1714,7 @@ static __latent_entropy struct task_struct *copy_process(
 		p->exit_signal = (clone_flags & CSIGNAL);
 		p->group_leader = p;
 		p->tgid = p->pid;
+		alloc_luid(&p->privunit);
 	}
 
 	p->nr_dirtied = 0;
-- 
2.1.4
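
For illustration of how privunit is meant to be consumed (the ptrace
access check patches come later in this series and are not shown here),
a "same privilege unit" test can be sketched roughly as below; the
helper name is made up for this example and any locking the real
patches may need is omitted:

static bool ptrace_same_privunit(const struct task_struct *tracer,
				 const struct task_struct *tracee)
{
	/*
	 * Equal LUIDs mean both tasks still carry the privunit handed
	 * out for the same execve()/clone() generation, i.e. they share
	 * mm_struct and signal_struct both spatially and temporally.
	 */
	return luid_eq(&tracer->privunit, &tracee->privunit);
}

Since a LUID is never reused (a 64-bit per-CPU counter tagged with the
CPU number), a match here cannot come from an unrelated task that later
happened to receive the same counter value.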