From: Andy Lutomirski
To: x86@kernel.org, linux-kernel@vger.kernel.org
Cc: Borislav Petkov, Oleg Nesterov, Denys Vlasenko, Andy Lutomirski
Subject: [PATCH 1/2] x86: Delay loading sp0 slightly on task switch
Date: Fri, 6 Mar 2015 17:50:18 -0800
Message-Id: <9fcaa47dd8487db59eed7a3911b6ae409476763e.1425692936.git.luto@amacapital.net>
X-Mailer: git-send-email 2.1.0

The change:

    75182b1632a8 x86/asm/entry: Switch all C consumers of kernel_stack to this_cpu_sp0()

had the unintended side effect of changing the return value of
current_thread_info() during part of the context switch process.
Change it back.

This has no effect as far as I can tell -- it's just for consistency.

Signed-off-by: Andy Lutomirski
---
 arch/x86/kernel/process_32.c | 10 +++++-----
 arch/x86/kernel/process_64.c |  6 +++---
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
index d3460af3d27a..0405cab6634d 100644
--- a/arch/x86/kernel/process_32.c
+++ b/arch/x86/kernel/process_32.c
@@ -256,11 +256,6 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 	fpu = switch_fpu_prepare(prev_p, next_p, cpu);
 
 	/*
-	 * Reload esp0.
-	 */
-	load_sp0(tss, next);
-
-	/*
 	 * Save away %gs. No need to save %fs, as it was saved on the
 	 * stack on entry. No need to save %es and %ds, as those are
 	 * always kernel segments while inside the kernel. Doing this
@@ -310,6 +305,11 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 	 */
 	arch_end_context_switch(next_p);
 
+	/*
+	 * Reload esp0. This changes current_thread_info().
+	 */
+	load_sp0(tss, next);
+
 	this_cpu_write(kernel_stack,
 		  (unsigned long)task_stack_page(next_p) +
 		  THREAD_SIZE - KERNEL_STACK_OFFSET);
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 2cd562f96c1f..1e393d27d701 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -283,9 +283,6 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 
 	fpu = switch_fpu_prepare(prev_p, next_p, cpu);
 
-	/* Reload esp0 and ss1. */
-	load_sp0(tss, next);
-
 	/* We must save %fs and %gs before load_TLS() because
 	 * %fs and %gs may be cleared by load_TLS().
 	 *
@@ -413,6 +410,9 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 	task_thread_info(prev_p)->saved_preempt_count = this_cpu_read(__preempt_count);
 	this_cpu_write(__preempt_count, task_thread_info(next_p)->saved_preempt_count);
 
+	/* Reload esp0 and ss1. This changes current_thread_info(). */
+	load_sp0(tss, next);
+
 	this_cpu_write(kernel_stack,
 		  (unsigned long)task_stack_page(next_p) +
 		  THREAD_SIZE - KERNEL_STACK_OFFSET);
-- 
2.1.0
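
Why the ordering matters: since 75182b1632a8, current_thread_info() on x86
is derived from the per-cpu sp0 value, so the write done by load_sp0() is
also what moves the kernel's notion of "current thread". Below is a minimal
sketch of that coupling, not the kernel's actual code; the THREAD_SIZE
value, the thread_info member, and the percpu_sp0 variable standing in for
the real per-cpu TSS slot are all assumptions made for illustration.

/*
 * Minimal sketch of the post-75182b1632a8 coupling between sp0 and
 * current_thread_info(). Names and values are illustrative, not the
 * kernel's exact definitions.
 */
#define THREAD_SIZE (2UL * 4096)	/* assumed; the real value is config-dependent */

struct thread_info {
	int cpu;			/* illustrative member only */
};

/* Stand-in for the per-cpu TSS sp0 slot that this_cpu_sp0() reads. */
static unsigned long percpu_sp0;

static inline unsigned long this_cpu_sp0(void)
{
	return percpu_sp0;
}

/*
 * thread_info sits THREAD_SIZE below the top of the kernel stack, and
 * the top of the stack is read back from sp0; the instant sp0 is
 * rewritten, this function starts returning the next task's thread_info.
 */
static inline struct thread_info *current_thread_info(void)
{
	return (struct thread_info *)(this_cpu_sp0() - THREAD_SIZE);
}

/*
 * Simplified stand-in for the sp0 store inside load_sp0(): after this
 * write, every caller of current_thread_info() sees the *next* task.
 */
static void load_sp0_sketch(unsigned long next_top_of_stack)
{
	percpu_sp0 = next_top_of_stack;
}

Under this model, the pre-patch ordering ran load_sp0() near the top of
__switch_to(), so the rest of the switch executed with
current_thread_info() already pointing at next_p; moving the call down
next to the kernel_stack update restores the old window in which it still
refers to prev_p.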