From: riel@redhat.com
To: oleg@redhat.com
Cc: dave.hansen@linux.intel.com, sbsiddha@gmail.com, luto@amacapital.net,
	tglx@linutronix.de, mingo@redhat.com, hpa@zytor.com,
	fenghua.yu@intel.com, x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 4/6] x86,fpu: use task_disable_lazy_fpu_restore helper
Date: Mon, 2 Feb 2015 13:00:49 -0500
Message-Id: <1422900051-10778-5-git-send-email-riel@redhat.com>
X-Mailer: git-send-email 1.9.3
In-Reply-To: <1422900051-10778-1-git-send-email-riel@redhat.com>
References: <20150129210723.GA31584@redhat.com>
 <1422900051-10778-1-git-send-email-riel@redhat.com>

From: Rik van Riel <riel@redhat.com>

Replace the magic assignments of fpu.last_cpu = ~0 with more explicit
task_disable_lazy_fpu_restore() calls.

This also fixes the lazy FPU restore disabling in drop_fpu(), which
previously only really worked when !use_eager_fpu(). That is fine for
now, because fpu_lazy_restore() is currently only used when
!use_eager_fpu(), but we may want to expand that in the future.

Signed-off-by: Rik van Riel <riel@redhat.com>
---
 arch/x86/include/asm/fpu-internal.h | 6 +++---
 arch/x86/kernel/i387.c              | 2 +-
 arch/x86/kernel/process.c           | 2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)
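
For reference, here is a sketch of the two definitions this patch leans
on; neither is part of this diff. The body of
task_disable_lazy_fpu_restore() is inferred from the open-coded
assignments it replaces (the helper itself is introduced earlier in this
series), and fpu_lazy_restore() is shown as it is assumed to read in
fpu-internal.h. Since ~0 is never a valid CPU number, poisoning last_cpu
guarantees the lazy restore check below can never match:

/* Sketch: assumed body of the helper introduced earlier in this series. */
static inline void task_disable_lazy_fpu_restore(struct task_struct *tsk)
{
	/* ~0 never equals a real CPU number, so no CPU can ever match. */
	tsk->thread.fpu.last_cpu = ~0;
}

/* Sketch: the existing check that the helper defeats. */
static inline int fpu_lazy_restore(struct task_struct *new, unsigned int cpu)
{
	return new == this_cpu_read_stable(fpu_owner_task) &&
		cpu == new->thread.fpu.last_cpu;
}
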
diff --git a/arch/x86/include/asm/fpu-internal.h b/arch/x86/include/asm/fpu-internal.h
index 04063751ac80..e2832f9dfed5 100644
--- a/arch/x86/include/asm/fpu-internal.h
+++ b/arch/x86/include/asm/fpu-internal.h
@@ -396,7 +396,7 @@ static inline void drop_fpu(struct task_struct *tsk)
 	 * Forget coprocessor state..
 	 */
 	preempt_disable();
-	tsk->thread.fpu_counter = 0;
+	task_disable_lazy_fpu_restore(tsk);
 	__drop_fpu(tsk);
 	clear_used_math();
 	preempt_enable();
@@ -440,7 +440,7 @@ static inline fpu_switch_t switch_fpu_prepare(struct task_struct *old, struct ta
 					     new->thread.fpu_counter > 5);
 	if (__thread_has_fpu(old)) {
 		if (!__save_init_fpu(old))
-			old->thread.fpu.last_cpu = ~0;
+			task_disable_lazy_fpu_restore(old);
 		else
 			old->thread.fpu.last_cpu = cpu;
 		old->thread.fpu.has_fpu = 0;	/* But leave fpu_owner_task! */
@@ -454,7 +454,7 @@ static inline fpu_switch_t switch_fpu_prepare(struct task_struct *old, struct ta
 			stts();
 	} else {
 		old->thread.fpu_counter = 0;
-		old->thread.fpu.last_cpu = ~0;
+		task_disable_lazy_fpu_restore(old);
 		if (fpu.preload) {
 			new->thread.fpu_counter++;
 			if (!use_eager_fpu() && fpu_lazy_restore(new, cpu))
diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
index 8e070a6c30e5..8416b5f85806 100644
--- a/arch/x86/kernel/i387.c
+++ b/arch/x86/kernel/i387.c
@@ -250,7 +250,7 @@ int init_fpu(struct task_struct *tsk)
 	if (tsk_used_math(tsk)) {
 		if (cpu_has_fpu && tsk == current)
 			unlazy_fpu(tsk);
-		tsk->thread.fpu.last_cpu = ~0;
+		task_disable_lazy_fpu_restore(tsk);
 		return 0;
 	}
 
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index dd9a069a5ec5..83480373a642 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -68,8 +68,8 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
 
 	dst->thread.fpu_counter = 0;
 	dst->thread.fpu.has_fpu = 0;
-	dst->thread.fpu.last_cpu = ~0;
 	dst->thread.fpu.state = NULL;
+	task_disable_lazy_fpu_restore(dst);
 	if (tsk_used_math(src)) {
 		int err = fpu_alloc(&dst->thread.fpu);
 		if (err)
-- 
1.9.3