From: Tejun Heo
To: Peter Zijlstra
Cc: mingo@kernel.org, riel@redhat.com, dedekind1@gmail.com, linux-kernel@vger.kernel.org, mgorman@suse.de, rostedt@goodmis.org, juri.lelli@arm.com, Oleg Nesterov
Subject: Re: [RFC][PATCH 1/4] sched: Fix a race between __kthread_bind() and sched_setaffinity()
Date: Fri, 7 Aug 2015 12:09:45 -0400
Message-ID: <20150807160945.GF14626@mtj.duckdns.org>
In-Reply-To: <20150807155954.GP16853@twins.programming.kicks-ass.net>

Hello, Peter.

On Fri, Aug 07, 2015 at 05:59:54PM +0200, Peter Zijlstra wrote:
> > So, the problem there is that __kthread_bind() doesn't grab the same
> > lock that the syscall side grabs, but workqueue used
> > set_cpus_allowed_ptr(), which goes through the rq locking, so as long
> > as the check on the syscall side is moved inside the rq lock, it
> > should be fine.
>
> Currently neither site uses any lock, and that is what the patch fixes
> (it uses the per-task ->pi_lock instead of the rq->lock, but that is
> immaterial).
Yeap, the test on the syscall side should definitely be moved inside
rq->lock.

> What matters though is that you now must hold a scheduler lock while
> setting PF_NO_SETAFFINITY. In order to avoid spreading that knowledge
> around I've taught kthread_bind*() about this and made the workqueue
> code use that API (rather than having the workqueue code take scheduler
> locks).

So, as long as PF_NO_SETAFFINITY is set before the task sets its
affinity to its target while holding the rq lock, it should still be
safe.

> Hmm.. a better solution. Have the worker thread creation call
> kthread_bind_mask() before attach_to_pool() and have attach_to_pool()
> keep using set_cpus_allowed_ptr(). Less ugly.

Yeah, that works too. About the same effect.

Thanks.

-- 
tejun