From: Linus Torvalds <torvalds@linux-foundation.org>
To: Oleg Nesterov <oleg@redhat.com>, Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@amacapital.net>, Andy Lutomirski <luto@kernel.org>,
	"the arch/x86 maintainers" <x86@kernel.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	"linux-arch@vger.kernel.org" <linux-arch@vger.kernel.org>,
	Borislav Petkov <bp@alien8.de>, Nadav Amit <nadav.amit@gmail.com>,
	Kees Cook <keescook@chromium.org>, Brian Gerst <brgerst@gmail.com>,
	"kernel-hardening@lists.openwall.com" <kernel-hardening@lists.openwall.com>,
	Josh Poimboeuf <jpoimboe@redhat.com>, Jann Horn <jann@thejh.net>,
	Heiko Carstens <heiko.carstens@de.ibm.com>
Subject: Re: [PATCH v3 00/13] Virtually mapped stacks with guard pages (x86, core)
Date: Thu, 23 Jun 2016 11:46:41 -0700
Message-ID: <CA+55aFyYruc_7Ax-hjh96zXPWmJJpCqm0yqq-fYpT094owSO_Q@mail.gmail.com>
In-Reply-To: <CA+55aFy54iDN56FDJAz3A8epRvKECVO+nL5LaMxdmKrFEOm05w@mail.gmail.com>

[-- Attachment #1: Type: text/plain, Size: 791 bytes --]

On Thu, Jun 23, 2016 at 10:52 AM, Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> Ugh. Looking around at this, it turns out that a great example of this
> kind of legacy issue is the debug_mutex stuff.

Interestingly, the *only* other user of ti->task for a full allmodconfig
build of x86-64 seems to be arch/x86/kernel/dumpstack.c, with the
print_context_stack() -> print_ftrace_graph_addr() -> task = tinfo->task
chain.

And that doesn't really seem to want thread_info either. The callers all
have 'task', and have to generate thread_info from that anyway.

So this attached patch (which includes the previous one) seems to build.
I didn't actually boot it, but there should be no users left unless there
is some asm code that has hardcoded offsets..
Linus

[-- Attachment #2: patch.diff --]
[-- Type: text/plain, Size: 11470 bytes --]

 arch/x86/include/asm/stacktrace.h  |  6 +++---
 arch/x86/include/asm/thread_info.h |  4 +---
 arch/x86/kernel/dumpstack.c        | 22 ++++++++++------------
 arch/x86/kernel/dumpstack_64.c     |  8 +++-----
 include/linux/sched.h              |  1 -
 kernel/locking/mutex-debug.c       | 12 ++++++------
 kernel/locking/mutex-debug.h       |  4 ++--
 kernel/locking/mutex.c             |  6 +++---
 kernel/locking/mutex.h             |  2 +-
 9 files changed, 29 insertions(+), 36 deletions(-)

diff --git a/arch/x86/include/asm/stacktrace.h b/arch/x86/include/asm/stacktrace.h
index 7c247e7404be..0944218af9e2 100644
--- a/arch/x86/include/asm/stacktrace.h
+++ b/arch/x86/include/asm/stacktrace.h
@@ -14,7 +14,7 @@ extern int kstack_depth_to_print;
 struct thread_info;
 struct stacktrace_ops;
 
-typedef unsigned long (*walk_stack_t)(struct thread_info *tinfo,
+typedef unsigned long (*walk_stack_t)(struct task_struct *task,
				      unsigned long *stack,
				      unsigned long bp,
				      const struct stacktrace_ops *ops,
@@ -23,13 +23,13 @@ typedef unsigned long (*walk_stack_t)(struct thread_info *tinfo,
				      int *graph);
 
 extern unsigned long
-print_context_stack(struct thread_info *tinfo,
+print_context_stack(struct task_struct *task,
		    unsigned long *stack, unsigned long bp,
		    const struct stacktrace_ops *ops, void *data,
		    unsigned long *end, int *graph);
 
 extern unsigned long
-print_context_stack_bp(struct thread_info *tinfo,
+print_context_stack_bp(struct task_struct *task,
		       unsigned long *stack, unsigned long bp,
		       const struct stacktrace_ops *ops, void *data,
		       unsigned long *end, int *graph);
diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index 30c133ac05cd..420acbf477ff 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -53,18 +53,16 @@ struct task_struct;
 #include <linux/atomic.h>
 
 struct thread_info {
-	struct task_struct	*task;		/* main task structure */
	__u32			flags;		/* low level flags */
	__u32			status;		/* thread synchronous flags */
	__u32			cpu;		/* current CPU */
-	mm_segment_t		addr_limit;
	unsigned int		sig_on_uaccess_error:1;
	unsigned int		uaccess_err:1;	/* uaccess failed */
+	mm_segment_t		addr_limit;
 };
 
 #define INIT_THREAD_INFO(tsk)			\
 {						\
-	.task		= &tsk,			\
	.flags		= 0,			\
	.cpu		= 0,			\
	.addr_limit	= KERNEL_DS,		\
diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
index 2bb25c3fe2e8..d6209f3a69cb 100644
--- a/arch/x86/kernel/dumpstack.c
+++ b/arch/x86/kernel/dumpstack.c
@@ -42,16 +42,14 @@ void printk_address(unsigned long address)
 static void
 print_ftrace_graph_addr(unsigned long addr, void *data,
			const struct stacktrace_ops *ops,
-			struct thread_info *tinfo, int *graph)
+			struct task_struct *task, int *graph)
 {
-	struct task_struct *task;
	unsigned long ret_addr;
	int index;
 
	if (addr != (unsigned long)return_to_handler)
		return;
 
-	task = tinfo->task;
	index = task->curr_ret_stack;
 
	if (!task->ret_stack || index < *graph)
@@ -68,7 +66,7 @@ print_ftrace_graph_addr(unsigned long addr, void *data,
 static inline void
 print_ftrace_graph_addr(unsigned long addr, void *data,
			const struct stacktrace_ops *ops,
-			struct thread_info *tinfo, int *graph)
+			struct task_struct *task, int *graph)
 { }
 #endif
 
@@ -79,10 +77,10 @@ print_ftrace_graph_addr(unsigned long addr, void *data,
  * severe exception (double fault, nmi, stack fault, debug, mce) hardware stack
  */
-static inline int valid_stack_ptr(struct thread_info *tinfo,
+static inline int valid_stack_ptr(struct task_struct *task,
			void *p, unsigned int size, void *end)
 {
-	void *t = tinfo;
+	void *t = task_thread_info(task);
	if (end) {
		if (p < end && p >= (end-THREAD_SIZE))
			return 1;
@@ -93,14 +91,14 @@ static inline int valid_stack_ptr(struct thread_info *tinfo,
 }
 
 unsigned long
-print_context_stack(struct thread_info *tinfo,
+print_context_stack(struct task_struct *task,
		unsigned long *stack, unsigned long bp,
		const struct stacktrace_ops *ops, void *data,
		unsigned long *end, int *graph)
 {
	struct stack_frame *frame = (struct stack_frame *)bp;
 
-	while (valid_stack_ptr(tinfo, stack, sizeof(*stack), end)) {
+	while (valid_stack_ptr(task, stack, sizeof(*stack), end)) {
		unsigned long addr;
 
		addr = *stack;
@@ -112,7 +110,7 @@ print_context_stack(struct thread_info *tinfo,
			} else {
				ops->address(data, addr, 0);
			}
-			print_ftrace_graph_addr(addr, data, ops, tinfo, graph);
+			print_ftrace_graph_addr(addr, data, ops, task, graph);
		}
		stack++;
	}
@@ -121,7 +119,7 @@ print_context_stack(struct thread_info *tinfo,
 EXPORT_SYMBOL_GPL(print_context_stack);
 
 unsigned long
-print_context_stack_bp(struct thread_info *tinfo,
+print_context_stack_bp(struct task_struct *task,
		       unsigned long *stack, unsigned long bp,
		       const struct stacktrace_ops *ops, void *data,
		       unsigned long *end, int *graph)
@@ -129,7 +127,7 @@ print_context_stack_bp(struct thread_info *tinfo,
	struct stack_frame *frame = (struct stack_frame *)bp;
	unsigned long *ret_addr = &frame->return_address;
 
-	while (valid_stack_ptr(tinfo, ret_addr, sizeof(*ret_addr), end)) {
+	while (valid_stack_ptr(task, ret_addr, sizeof(*ret_addr), end)) {
		unsigned long addr = *ret_addr;
 
		if (!__kernel_text_address(addr))
@@ -139,7 +137,7 @@ print_context_stack_bp(struct thread_info *tinfo,
			break;
		frame = frame->next_frame;
		ret_addr = &frame->return_address;
-		print_ftrace_graph_addr(addr, data, ops, tinfo, graph);
+		print_ftrace_graph_addr(addr, data, ops, task, graph);
	}
 
	return (unsigned long)frame;
diff --git a/arch/x86/kernel/dumpstack_64.c b/arch/x86/kernel/dumpstack_64.c
index 5f1c6266eb30..d558a8a49016 100644
--- a/arch/x86/kernel/dumpstack_64.c
+++ b/arch/x86/kernel/dumpstack_64.c
@@ -153,7 +153,6 @@ void dump_trace(struct task_struct *task, struct pt_regs *regs,
		const struct stacktrace_ops *ops, void *data)
 {
	const unsigned cpu = get_cpu();
-	struct thread_info *tinfo;
	unsigned long *irq_stack = (unsigned long *)per_cpu(irq_stack_ptr, cpu);
	unsigned long dummy;
	unsigned used = 0;
@@ -179,7 +178,6 @@ void dump_trace(struct task_struct *task, struct pt_regs *regs,
	 * current stack address. If the stacks consist of nested
	 * exceptions
	 */
-	tinfo = task_thread_info(task);
	while (!done) {
		unsigned long *stack_end;
		enum stack_type stype;
@@ -202,7 +200,7 @@ void dump_trace(struct task_struct *task, struct pt_regs *regs,
			if (ops->stack(data, id) < 0)
				break;
 
-			bp = ops->walk_stack(tinfo, stack, bp, ops,
+			bp = ops->walk_stack(task, stack, bp, ops,
					     data, stack_end, &graph);
			ops->stack(data, "<EOE>");
			/*
@@ -218,7 +216,7 @@ void dump_trace(struct task_struct *task, struct pt_regs *regs,
			if (ops->stack(data, "IRQ") < 0)
				break;
-			bp = ops->walk_stack(tinfo, stack, bp,
+			bp = ops->walk_stack(task, stack, bp,
				     ops, data, stack_end, &graph);
			/*
			 * We link to the next stack (which would be
@@ -240,7 +238,7 @@ void dump_trace(struct task_struct *task, struct pt_regs *regs,
	/*
	 * This handles the process stack:
	 */
-	bp = ops->walk_stack(tinfo, stack, bp, ops, data, NULL, &graph);
+	bp = ops->walk_stack(task, stack, bp, ops, data, NULL, &graph);
	put_cpu();
 }
 EXPORT_SYMBOL(dump_trace);
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 6e42ada26345..17be3f2507f3 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2975,7 +2975,6 @@ static inline void threadgroup_change_end(struct task_struct *tsk)
 static inline void setup_thread_stack(struct task_struct *p, struct task_struct *org)
 {
	*task_thread_info(p) = *task_thread_info(org);
-	task_thread_info(p)->task = p;
 }
 
 /*
diff --git a/kernel/locking/mutex-debug.c b/kernel/locking/mutex-debug.c
index 3ef3736002d8..9c951fade415 100644
--- a/kernel/locking/mutex-debug.c
+++ b/kernel/locking/mutex-debug.c
@@ -49,21 +49,21 @@ void debug_mutex_free_waiter(struct mutex_waiter *waiter)
 }
 
 void debug_mutex_add_waiter(struct mutex *lock, struct mutex_waiter *waiter,
-			    struct thread_info *ti)
+			    struct task_struct *task)
 {
	SMP_DEBUG_LOCKS_WARN_ON(!spin_is_locked(&lock->wait_lock));
 
	/* Mark the current thread as blocked on the lock: */
-	ti->task->blocked_on = waiter;
+	task->blocked_on = waiter;
 }
 
 void mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter,
-			 struct thread_info *ti)
+			 struct task_struct *task)
 {
	DEBUG_LOCKS_WARN_ON(list_empty(&waiter->list));
-	DEBUG_LOCKS_WARN_ON(waiter->task != ti->task);
-	DEBUG_LOCKS_WARN_ON(ti->task->blocked_on != waiter);
-	ti->task->blocked_on = NULL;
+	DEBUG_LOCKS_WARN_ON(waiter->task != task);
+	DEBUG_LOCKS_WARN_ON(task->blocked_on != waiter);
+	task->blocked_on = NULL;
 
	list_del_init(&waiter->list);
	waiter->task = NULL;
diff --git a/kernel/locking/mutex-debug.h b/kernel/locking/mutex-debug.h
index 0799fd3e4cfa..d06ae3bb46c5 100644
--- a/kernel/locking/mutex-debug.h
+++ b/kernel/locking/mutex-debug.h
@@ -20,9 +20,9 @@ extern void debug_mutex_wake_waiter(struct mutex *lock,
 extern void debug_mutex_free_waiter(struct mutex_waiter *waiter);
 extern void debug_mutex_add_waiter(struct mutex *lock, struct mutex_waiter *waiter,
-				   struct thread_info *ti);
+				   struct task_struct *task);
 extern void mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter,
-				struct thread_info *ti);
+				struct task_struct *task);
 extern void debug_mutex_unlock(struct mutex *lock);
 extern void debug_mutex_init(struct mutex *lock, const char *name,
			     struct lock_class_key *key);
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 79d2d765a75f..a70b90db3909 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -537,7 +537,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
		goto skip_wait;
 
	debug_mutex_lock_common(lock, &waiter);
-	debug_mutex_add_waiter(lock, &waiter, task_thread_info(task));
+	debug_mutex_add_waiter(lock, &waiter, task);
 
	/* add waiting tasks to the end of the waitqueue (FIFO): */
	list_add_tail(&waiter.list, &lock->wait_list);
@@ -584,7 +584,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
	}
	__set_task_state(task, TASK_RUNNING);
 
-	mutex_remove_waiter(lock, &waiter, current_thread_info());
+	mutex_remove_waiter(lock, &waiter, task);
	/* set it to 0 if there are no waiters left: */
	if (likely(list_empty(&lock->wait_list)))
		atomic_set(&lock->count, 0);
@@ -605,7 +605,7 @@ skip_wait:
	return 0;
 
 err:
-	mutex_remove_waiter(lock, &waiter, task_thread_info(task));
+	mutex_remove_waiter(lock, &waiter, task);
	spin_unlock_mutex(&lock->wait_lock, flags);
	debug_mutex_free_waiter(&waiter);
	mutex_release(&lock->dep_map, 1, ip);
diff --git a/kernel/locking/mutex.h b/kernel/locking/mutex.h
index 5cda397607f2..a68bae5e852a 100644
--- a/kernel/locking/mutex.h
+++ b/kernel/locking/mutex.h
@@ -13,7 +13,7 @@
		do { spin_lock(lock); (void)(flags); } while (0)
 #define spin_unlock_mutex(lock, flags) \
		do { spin_unlock(lock); (void)(flags); } while (0)
 
-#define mutex_remove_waiter(lock, waiter, ti) \
+#define mutex_remove_waiter(lock, waiter, task) \
	__list_del((waiter)->list.prev, (waiter)->list.next)
 
 #ifdef CONFIG_MUTEX_SPIN_ON_OWNER
Linus Torvalds 2016-06-25 2:41 ` Linus Torvalds 2016-06-25 23:19 ` Andy Lutomirski 2016-06-25 23:19 ` [kernel-hardening] " Andy Lutomirski 2016-06-25 23:19 ` Andy Lutomirski 2016-06-25 23:30 ` Andy Lutomirski 2016-06-25 23:30 ` [kernel-hardening] " Andy Lutomirski 2016-06-25 23:30 ` Andy Lutomirski 2016-06-26 1:23 ` Linus Torvalds 2016-06-26 1:23 ` [kernel-hardening] " Linus Torvalds 2016-06-26 1:23 ` Linus Torvalds 2016-06-23 18:52 ` Oleg Nesterov 2016-06-23 18:52 ` [kernel-hardening] " Oleg Nesterov 2016-06-23 18:52 ` Oleg Nesterov 2016-06-24 14:05 ` Michal Hocko 2016-06-24 14:05 ` [kernel-hardening] " Michal Hocko 2016-06-24 14:05 ` Michal Hocko 2016-06-24 15:06 ` Michal Hocko 2016-06-24 15:06 ` [kernel-hardening] " Michal Hocko 2016-06-24 15:06 ` Michal Hocko 2016-06-24 15:06 ` Michal Hocko 2016-06-24 20:22 ` Oleg Nesterov 2016-06-24 20:22 ` [kernel-hardening] " Oleg Nesterov 2016-06-24 20:22 ` Oleg Nesterov 2016-06-27 10:36 ` Michal Hocko 2016-06-27 10:36 ` [kernel-hardening] " Michal Hocko 2016-06-27 10:36 ` Michal Hocko 2016-06-23 19:11 ` Peter Zijlstra 2016-06-23 19:11 ` [kernel-hardening] " Peter Zijlstra 2016-06-23 19:11 ` Peter Zijlstra 2016-06-23 19:34 ` Linus Torvalds 2016-06-23 19:34 ` [kernel-hardening] " Linus Torvalds 2016-06-23 19:34 ` Linus Torvalds 2016-06-23 19:46 ` Peter Zijlstra 2016-06-23 19:46 ` [kernel-hardening] " Peter Zijlstra 2016-06-23 19:46 ` Peter Zijlstra 2016-06-21 9:24 ` Arnd Bergmann 2016-06-21 9:24 ` [kernel-hardening] " Arnd Bergmann 2016-06-21 9:24 ` Arnd Bergmann 2016-06-21 17:16 ` Kees Cook 2016-06-21 17:16 ` [kernel-hardening] " Kees Cook 2016-06-21 17:16 ` Kees Cook 2016-06-21 18:02 ` [kernel-hardening] " Rik van Riel 2016-06-21 18:02 ` Rik van Riel 2016-06-21 18:05 ` [kernel-hardening] " Andy Lutomirski 2016-06-21 18:05 ` Andy Lutomirski 2016-06-21 18:05 ` Andy Lutomirski 2016-06-21 19:47 ` Arnd Bergmann 2016-06-21 19:47 ` [kernel-hardening] " Arnd Bergmann 2016-06-21 19:47 ` Arnd Bergmann 2016-06-21 19:47 ` Andy 
Lutomirski 2016-06-21 19:47 ` [kernel-hardening] " Andy Lutomirski 2016-06-21 19:47 ` Andy Lutomirski 2016-06-21 20:18 ` Kees Cook 2016-06-21 20:18 ` [kernel-hardening] " Kees Cook 2016-06-21 20:18 ` Kees Cook