Date: Mon, 8 Jan 2007 18:56:38 +0300
From: Oleg Nesterov
To: Srivatsa Vaddagiri
Cc: Andrew Morton, David Howells, Christoph Hellwig, Ingo Molnar,
    Linus Torvalds, linux-kernel@vger.kernel.org, Gautham Shenoy
Subject: Re: [PATCH] fix-flush_workqueue-vs-cpu_dead-race-update
Message-ID: <20070108155638.GA156@tv-sign.ru>
In-Reply-To: <20070108152211.GA31263@in.ibm.com>

On 01/08, Srivatsa Vaddagiri wrote:
>
> On Mon, Jan 08, 2007 at 12:51:03AM +0300, Oleg Nesterov wrote:
> > Change flush_workqueue() to use for_each_possible_cpu(). This means that
> > flush_cpu_workqueue() may hit a CPU which is already dead. However, in that
> > case
> >
> > 	if (!list_empty(&cwq->worklist) || cwq->current_work != NULL)
> >
> > means that CPU_DEAD is in progress; it will do kthread_stop() + take_over_work(),
> > so we can proceed and insert a barrier. We hold cwq->lock, so we are safe.
> >
> > This patch replaces fix-flush_workqueue-vs-cpu_dead-race.patch, which was
> > broken by the switch to preempt_disable (now we don't need locking at all).
> > Note that migrate_sequence (was hotplug_sequence) is incremented under
> > cwq->lock. Since flush_workqueue() does lock/unlock of cwq->lock on all CPUs,
> > it must see the new value if take_over_work() happened before we checked
> > this cwq, and that is the case we should worry about: otherwise we added
> > a barrier.
> >
> > Srivatsa?
>
> This is head-spinning :)

Thank you for the review!
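To restate the idea in code: the flush path would look roughly like the sketch
below. This is only a simplified, untested illustration of the intent, not the
exact patch; wq_barrier/insert_wq_barrier() here stand for the barrier helpers
(a dummy work which does complete(&barr->done) when it runs).

static int flush_cpu_workqueue(struct cpu_workqueue_struct *cwq)
{
	int active = 0;

	if (cwq->thread == current) {
		/*
		 * A work->func() is flushing its own workqueue: run the
		 * pending works by hand rather than deadlocking.
		 */
		run_workqueue(cwq);
		active = 1;
	} else {
		struct wq_barrier barr;

		spin_lock_irq(&cwq->lock);
		if (!list_empty(&cwq->worklist) || cwq->current_work != NULL) {
			/*
			 * Either the CPU is online, or CPU_DEAD is still in
			 * progress and take_over_work() will migrate the
			 * pending works, including this barrier.
			 */
			insert_wq_barrier(cwq, &barr);
			active = 1;
		}
		spin_unlock_irq(&cwq->lock);

		if (active)
			wait_for_completion(&barr.done);
	}

	return active;
}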
> Spotted at least these problems:
>
> 1. run_workqueue()->work.func()->flush_work()->mutex_lock(workqueue_mutex)
>    deadlocks if we are blocked in cleanup_workqueue_thread()->kthread_stop()
>    waiting for the same worker thread to exit.
>
>    Looks possible in practice to me.

Yes, this is the same (old) problem as we have/had with flush_workqueue().
We can convert flush_work() to use preempt_disable (this is not straightforward,
but easy), or forbid calling flush_work() from work.func(). This patch doesn't
touch this problem.

> 2.
>
> CPU_DEAD->cleanup_workqueue_thread->(cwq->thread = NULL)->kthread_stop() ..
>                                     ^^^^^^^^^^^^^^^^^^^^
>                                     |___ Problematic

Hmm... This should not be possible? cwq->thread != NULL at the CPU_DEAD event.
Even if it was NULL, we don't call kthread_stop() in that case.

> Now while we are blocked here, if a work->func() calls
> flush_workqueue->flush_cpu_workqueue, we clearly can't identify that the event
> thread is trying to flush its own queue (the cwq->thread == current test
> fails) and hence we will deadlock.

Could you clarify? I believe the cwq->thread == current test always works;
we never "substitute" cwq->thread.

> A lock_cpu_hotplug(), or any other ability to block concurrent hotplug
> operations from happening, in run_workqueue() would have avoided both of the
> above races.

I still don't think this is a good idea. We would also need an
is_cpu_down_waits_for_lock_cpu_hotplug() helper, otherwise we have a deadlock
if work->func() sleeps and re-queues itself.

> Alternatively, for the second race, I guess we can avoid setting
> cwq->thread = NULL in cleanup_workqueue_thread() till the thread has exited,

Yes, see http://marc.theaimsgroup.com/?l=linux-kernel&m=116818097927685; I believe
we can do this later. This way the workqueue code will have almost zero
interaction with cpu-hotplug, and CPU UP/DOWN events won't be delayed by a
sleeping work.func(). take_over_work() can go away, and this also allows us to
simplify things.

Thanks!

Oleg.
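P.S. On the last point, to illustrate the direction only (an untested sketch,
not a patch, signature simplified): cleanup_workqueue_thread() would keep
cwq->thread valid until the worker has really exited, e.g.

static void cleanup_workqueue_thread(struct cpu_workqueue_struct *cwq)
{
	struct task_struct *p = cwq->thread;

	if (p == NULL)
		return;

	/*
	 * kthread_stop() waits until the worker exits; the worker is
	 * expected to drain cwq->worklist before it exits.  Only after
	 * that do we clear cwq->thread, so flush_workqueue() and the
	 * cwq->thread == current check never see a half-torn-down cwq.
	 */
	kthread_stop(p);
	cwq->thread = NULL;
}

This only shows the ordering; the real cleanup has to make sure the worker
drains its queue before it exits.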