Date: Thu, 12 Feb 2009 12:44:50 -0800
From: Mike Anderson <andmike@linux.vnet.ibm.com>
To: Hannes Reinecke, Jens Axboe
Cc: Christof Schmitt, linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org
Subject: Re: Deadlock during multipath failover
Message-ID: <20090212204450.GA3556@linux.vnet.ibm.com>
In-Reply-To: <4994234D.1000404@suse.de>

Hannes Reinecke wrote:
> Hi Christof,
>
> Christof Schmitt wrote:
>> During failover tests on a current distribution kernel, we found this
>> problem. From reading the code, the upstream kernel has the same
>> problem:
>>
>> During multipath failover tests with SCSI on System z, the kernel
>> deadlocks in this situation:
>>
>>> STACK:
>>> 0 blk_add_timer+206 [0x2981ea]
>>> 1 blk_rq_timed_out+132 [0x2982a8]
>>> 2 blk_abort_request+114 [0x29833e]
>>> 3 blk_abort_queue+92 [0x2983a8]
>>> 4 deactivate_path+74 [0x3e00009625a]
>>> 5 run_workqueue+236 [0x149e04]
>>> 6 worker_thread+294 [0x149fce]
>>> 7 kthread+110 [0x14f436]
>>> 8 kernel_thread_starter+6 [0x10941a]
>>
>> blk_abort_queue takes the queue_lock with spin_lock_irqsave and walks
>> the timeout_list with list_for_each_entry_safe. Since a path to a
>> SCSI device has just failed, the rport state is FC_PORTSTATE_BLOCKED.
>> In this state, blk_abort_request ends up in blk_add_timer, which
>> calls list_add_tail to move the request to the end of the
>> timeout_list. Thus, list_for_each_entry_safe never reaches the end of
>> the timeout_list; it continuously moves the requests to the end of
>> the list.
>>
> Hmm. That would be fixed by using list_splice() here:
>
> diff --git a/block/blk-timeout.c b/block/blk-timeout.c
> index a095353..67bcc3f 100644
> --- a/block/blk-timeout.c
> +++ b/block/blk-timeout.c
> @@ -209,12 +209,15 @@ void blk_abort_queue(struct request_queue *q)
>  {
>  	unsigned long flags;
>  	struct request *rq, *tmp;
> +	LIST_HEAD(list);
>
>  	spin_lock_irqsave(q->queue_lock, flags);
>
>  	elv_abort_queue(q);
>
> -	list_for_each_entry_safe(rq, tmp, &q->timeout_list, timeout_list)
> +	list_splice_init(&q->timeout_list, &list);
> +
> +	list_for_each_entry_safe(rq, tmp, &list, timeout_list)
>  		blk_abort_request(rq);
>
>  	spin_unlock_irqrestore(q->queue_lock, flags);
>
>> The rport state FC_PORTSTATE_BLOCKED would end when the function
>> fc_timeout_deleted_rport runs to remove the rport. But this function
>> was scheduled via queue_delayed_work. The timer has already expired,
>> but the timer function does not run, because the timer interrupt is
>> disabled by the spin_lock_irqsave call.
>>
> .. but this shouldn't happen anymore when using splice, as
> the timer will be called _after_ the irqrestore above.
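To see the looping behaviour Christof describes in isolation, here is a
quick userspace sketch (illustration only: the list helpers are minimal
stand-ins for <linux/list.h>, not the kernel's implementation, and
abort_requeues() is a hypothetical stand-in for blk_abort_request()
hitting a blocked rport):

#include <stdio.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

#define LIST_HEAD(name) struct list_head name = { &(name), &(name) }

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev; n->next = h;
	h->prev->next = n; h->prev = n;
}

static void list_del(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
}

static void list_splice_init(struct list_head *from, struct list_head *to)
{
	if (from->next != from) {
		from->next->prev = to->prev; to->prev->next = from->next;
		from->prev->next = to; to->prev = from->prev;
		from->next = from->prev = from;
	}
}

#define list_entry(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

#define list_for_each_entry_safe(pos, n, head, member)			\
	for (pos = list_entry((head)->next, typeof(*pos), member),	\
	     n = list_entry(pos->member.next, typeof(*pos), member);	\
	     &pos->member != (head);					\
	     pos = n, n = list_entry(n->member.next, typeof(*n), member))

struct request { int tag; struct list_head timeout_list; };

/* Stand-in for blk_abort_request() against a blocked rport:
 * blk_add_timer() re-adds the request at the tail of the list. */
static void abort_requeues(struct request *rq, struct list_head *timeouts)
{
	list_del(&rq->timeout_list);
	list_add_tail(&rq->timeout_list, timeouts);
}

int main(void)
{
	LIST_HEAD(timeouts);
	LIST_HEAD(local);
	struct request rqs[3], *rq, *n;
	int i, steps = 0;

	for (i = 0; i < 3; i++) {
		rqs[i].tag = i;
		list_add_tail(&rqs[i].timeout_list, &timeouts);
	}

	/* Direct walk: the body keeps feeding the tail of the list we
	 * are iterating, so the walk never reaches the head again. */
	list_for_each_entry_safe(rq, n, &timeouts, timeout_list) {
		abort_requeues(rq, &timeouts);
		if (++steps >= 20)
			break;
	}
	printf("direct walk: gave up after %d iterations\n", steps);

	/* Spliced walk, as in the patch: entries move to a private list
	 * first, so re-adds land on 'timeouts', which we do not walk. */
	steps = 0;
	list_splice_init(&timeouts, &local);
	list_for_each_entry_safe(rq, n, &local, timeout_list) {
		abort_requeues(rq, &timeouts);
		steps++;
	}
	printf("spliced walk: finished after %d iterations\n", steps);
	return 0;
}

The direct walk chases its own tail indefinitely (capped at 20
iterations here so the demo terminates), while the spliced walk finishes
after one pass no matter where the body re-adds the entries.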
If this patch does not address the deadlock, another option to look
into would be to run some testing without blk_abort_request (just using
elv_abort_queue), i.e. not trying to abort in-flight I/Os at that point;
a rough sketch of that variant is appended below. We observed reduced
I/O delays during storage failover testing (target responsive, but
timing out I/Os) with this code, but I do not have good breakdown data
on the number of I/Os handled by elv_abort_queue vs. blk_abort_request
vs. the I/O delay (it is also config dependent).

-andmike
--
Michael Anderson
andmike@linux.vnet.ibm.com
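For reference, an untested sketch (mine, not from any tree) of that
blk_abort_queue() variant, based on the unpatched function quoted above:

void blk_abort_queue(struct request_queue *q)
{
	unsigned long flags;

	spin_lock_irqsave(q->queue_lock, flags);

	/* Fail the requests still sitting in the elevator... */
	elv_abort_queue(q);

	/*
	 * ...but do not walk q->timeout_list here; leave in-flight
	 * requests to the normal timeout handling.
	 */

	spin_unlock_irqrestore(q->queue_lock, flags);
}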