From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 2 Jan 2018 10:01:19 -0800
From: "Paul E. McKenney"
Reply-To: paulmck@linux.vnet.ibm.com
To: Tejun Heo
Cc: Prateek Sood, Peter Zijlstra, avagin@gmail.com, mingo@kernel.org,
	linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
	sramana@codeaurora.org
Subject: Re: [PATCH] cgroup/cpuset: fix circular locking dependency
Message-Id: <20180102180119.GA1355@linux.vnet.ibm.com>
In-Reply-To: <20180102174408.GM7829@linux.vnet.ibm.com>
References: <623f214b-8b9a-f967-7a3d-ca9c06151267@codeaurora.org>
 <20171204202219.GF2421075@devbig577.frc2.facebook.com>
 <20171204225825.GP2421075@devbig577.frc2.facebook.com>
 <20171204230117.GF20227@worktop.programming.kicks-ass.net>
 <20171211152059.GH2421075@devbig577.frc2.facebook.com>
 <20171213160617.GQ3919388@devbig577.frc2.facebook.com>
 <9843d982-d201-8702-2e4e-0541a4d96b53@codeaurora.org>
 <20180102161656.GD3668920@devbig577.frc2.facebook.com>
 <20180102174408.GM7829@linux.vnet.ibm.com>

On Tue, Jan 02, 2018 at 09:44:08AM -0800, Paul E. McKenney wrote:
> On Tue, Jan 02, 2018 at 08:16:56AM -0800, Tejun Heo wrote:
> > Hello,
> >
> > On Fri, Dec 29, 2017 at 02:07:16AM +0530, Prateek Sood wrote:
> > > task T is waiting for cpuset_mutex acquired
> > > by kworker/2:1
> > >
> > > sh ==> cpuhp/2 ==> kworker/2:1 ==> sh
> > >
> > > kworker/2:3 ==> kthreadd ==> Task T ==> kworker/2:1
> > >
> > > It seems that my earlier patch set should fix this scenario:
> > > 1) Inverting the locking order of cpuset_mutex and cpu_hotplug_lock.
> > > 2) Making the cpuset hotplug work synchronous.
> > >
> > > Could you please share your feedback?
> >
> > Hmm... this can also be resolved by adding WQ_MEM_RECLAIM to the
> > synchronize_rcu() workqueue, right?  Given the widespread usage of
> > synchronize_rcu() and friends, maybe that's the right solution, or
> > at least something we also need to do, for this particular deadlock?
>
> To make WQ_MEM_RECLAIM work, I need to dynamically allocate RCU's
> workqueues, correct?  Or is there some way to mark a statically
> allocated workqueue as WQ_MEM_RECLAIM after the fact?
>
> I can dynamically allocate them, but I need to carefully investigate
> boot-time use.  So if it is possible to be lazy, I do want to take
> the easy way out.  ;-)
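(For concreteness, "dynamically allocate" would mean something like the
following -- a rough, untested sketch, where the rcu_gp_wq variable,
the "rcu_gp" workqueue name, and the rcu_init_wq() helper are all
invented for illustration:

	static struct workqueue_struct *rcu_gp_wq;

	/* Invoked once the scheduler is far enough along for workqueues. */
	static void __init rcu_init_wq(void)
	{
		/* WQ_MEM_RECLAIM gives this workqueue a rescuer thread. */
		rcu_gp_wq = alloc_workqueue("rcu_gp", WQ_MEM_RECLAIM, 0);
		WARN_ON(!rcu_gp_wq);
	}

with RCU's work items then queued onto rcu_gp_wq instead of the
statically allocated workqueue.)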
Actually, after taking a quick look, could you please supply me with a
way of marking a statically allocated workqueue as WQ_MEM_RECLAIM after
the fact?  Otherwise, I end up having to check whether the workqueue
has been allocated pretty much every time I use it, which would be an
open invitation for bugs.  Plus it looks like there are ways that RCU's
workqueue wakeups can be executed during very early boot, which can be
handled, but again in a rather messy fashion.

In contrast, given a way to mark a statically allocated workqueue as
WQ_MEM_RECLAIM after the fact, I could simply continue initializing the
workqueue at early boot, and then add the WQ_MEM_RECLAIM marking at
some arbitrarily chosen time after the scheduler has been initialized.

The required change to workqueues looks easy: just move the body of the
"if (flags & WQ_MEM_RECLAIM) {" statement in __alloc_workqueue_key() to
a separate function, right?  (Rough sketch below.)

							Thanx, Paul
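------------------------------------------------------------------------

For concreteness, a rough sketch of that separate function, following
the current body of the "if (flags & WQ_MEM_RECLAIM) {" statement in
__alloc_workqueue_key().  Untested, not even compiled, and the
init_rescuer() name is invented:

static int init_rescuer(struct workqueue_struct *wq)
{
	struct worker *rescuer;

	/* Only WQ_MEM_RECLAIM workqueues get a rescuer thread. */
	if (!(wq->flags & WQ_MEM_RECLAIM))
		return 0;

	rescuer = alloc_worker(NUMA_NO_NODE);
	if (!rescuer)
		return -ENOMEM;

	rescuer->rescue_wq = wq;
	rescuer->task = kthread_create(rescuer_thread, rescuer, "%s",
				       wq->name);
	if (IS_ERR(rescuer->task)) {
		kfree(rescuer);
		return PTR_ERR(rescuer->task);
	}

	wq->rescuer = rescuer;
	kthread_bind_mask(rescuer->task, cpu_possible_mask);
	wake_up_process(rescuer->task);

	return 0;
}

__alloc_workqueue_key() would then call init_rescuer() where the old
"if" block was, and marking a workqueue WQ_MEM_RECLAIM after the fact
would be a matter of setting the flag and calling the same function
once the scheduler is up.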