Date: Mon, 16 Feb 2015 21:25:40 +0100
From: Borislav Petkov
To: riel@redhat.com, oleg@redhat.com
Cc: dave.hansen@linux.intel.com, sbsiddha@gmail.com, luto@amacapital.net,
    tglx@linutronix.de, mingo@kernel.org, hpa@zytor.com,
    fenghua.yu@intel.com, x86@kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/8] x86, fpu: unlazy_fpu: don't do __thread_fpu_end() if use_eager_fpu()
Message-ID: <20150216202540.GM4458@pd.tnic>
References: <1423252925-14451-1-git-send-email-riel@redhat.com>
 <1423252925-14451-3-git-send-email-riel@redhat.com>
In-Reply-To: <1423252925-14451-3-git-send-email-riel@redhat.com>

On Fri, Feb 06, 2015 at 03:01:59PM -0500, riel@redhat.com wrote:
> From: Oleg Nesterov
>
> unlazy_fpu()->__thread_fpu_end() doesn't look right if use_eager_fpu().
> Unconditional __thread_fpu_end() is only correct if we know that this
> thread can't return to user-mode and use FPU.
>
> Fortunately it has only 2 callers. fpu_copy() checks use_eager_fpu(),
> and init_fpu(current) can only be called by the coredumping thread via
> regset->get(). But it is exported to modules, and imo this should be
> fixed anyway.
>
> And if we check use_eager_fpu() we can use __save_fpu() like fpu_copy()
> and save_init_fpu() do.
>
> - It seems that even the !use_eager_fpu() case doesn't need the
>   unconditional __thread_fpu_end(), we only need it if __save_init_fpu()
>   returns 0.

I can follow so far.

> - It is still not clear to me if __save_init_fpu() can safely nest with
>   another save + restore from __kernel_fpu_begin(). If not, we can use
>   kernel_fpu_disable() to fix the race.

Well, my primitive understanding would say no, not safely, for the
simple reason that we have only one XSAVE state area per thread.
However, __kernel_fpu_begin() is called with preemption disabled, so ...
I guess I'm still not seeing the race.

Btw, what is kernel_fpu_disable()? I can't find it here.

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.
--
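
A rough sketch of the unlazy_fpu() shape the quoted commit message argues
for -- save with __save_fpu() and skip __thread_fpu_end() when
use_eager_fpu() -- using the 3.19-era fpu-internal.h helpers named in the
thread. This is reconstructed from the description, not the actual diff;
details such as the __save_init_fpu() return-value handling mentioned above
may differ in the posted patch:

	void unlazy_fpu(struct task_struct *tsk)
	{
		preempt_disable();
		if (__thread_has_fpu(tsk)) {
			if (use_eager_fpu()) {
				/*
				 * Eager mode: the FPU state stays live
				 * across context switches, so only snapshot
				 * the registers; do not drop ownership with
				 * __thread_fpu_end().
				 */
				__save_fpu(tsk);
			} else {
				/*
				 * Lazy mode: save, then drop ownership so
				 * the next user-space FPU use re-faults
				 * via #NM.
				 */
				__save_init_fpu(tsk);
				__thread_fpu_end(tsk);
			}
		}
		preempt_enable();
	}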
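
For reference, the kernel_fpu_begin()/kernel_fpu_end() wrappers in that era
looked roughly like this (paraphrased from arch/x86/kernel/i387.c, not
quoted verbatim). preempt_disable() keeps other tasks off the CPU, but it
does not stop an interrupt handler from calling kernel_fpu_begin() itself
when irq_fpu_usable() says yes -- which appears to be the nesting the
quoted text worries about and what a kernel_fpu_disable() knob would close:

	void kernel_fpu_begin(void)
	{
		/* Keeps other tasks away, but not interrupt handlers. */
		preempt_disable();
		/*
		 * An IRQ handler is expected to check irq_fpu_usable()
		 * before calling this; if that check says yes, the handler
		 * can still nest inside the interrupted task's FPU
		 * save/restore.
		 */
		WARN_ON_ONCE(!irq_fpu_usable());
		__kernel_fpu_begin();
	}

	void kernel_fpu_end(void)
	{
		__kernel_fpu_end();
		preempt_enable();
	}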