From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sun, 14 Jun 2009 13:43:30 -0400
From: Bart Trojanowski
To: David Howells, linux-kernel@vger.kernel.org
Cc: linux-cachefs@redhat.com, linux-nfs@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [v2.6.30 nfs+fscache] swapper: possible circular locking dependency detected
Message-ID: <20090614174329.GA4721@jukie.net>
References: <20090613182721.GA24072@jukie.net> <20090614141459.GA5543@jukie.net>
In-Reply-To: <20090614141459.GA5543@jukie.net>
User-Agent: Mutt/1.5.18 (2008-05-17)

It's me again. I am trying to decipher the lockdep report...

* Bart Trojanowski [090614 10:15]:
> =======================================================
> [ INFO: possible circular locking dependency detected ]
> 2.6.30-kvm3-dirty #4
> -------------------------------------------------------
> swapper/0 is trying to acquire lock:
>  (&cwq->lock){..-...}, at: [] __queue_work+0x1d/0x43
>
> but task is already holding lock:
>  (&q->lock){-.-.-.}, at: [] __wake_up+0x27/0x55
>
> which lock already depends on the new lock.
>
>
> the existing dependency chain (in reverse order) is:
>
> -> #1 (&q->lock){-.-.-.}:
>        [] __lock_acquire+0x1350/0x16b4
>        [] lock_acquire+0xc7/0xf3
>        [] _spin_lock_irqsave+0x4f/0x86
>        [] __wake_up+0x27/0x55
>        [] insert_work+0x9a/0xa6
>        [] __queue_work+0x2f/0x43
>        [] queue_work_on+0x4a/0x53
>        [] queue_work+0x1f/0x21

So, here I can see that we take the cwq->lock first, when __queue_work
does:

        spin_lock_irqsave(&cwq->lock, flags);
        insert_work(cwq, work, &cwq->worklist);
        spin_unlock_irqrestore(&cwq->lock, flags);

and later take the q->lock when insert_work calls __wake_up:

        spin_lock_irqsave(&q->lock, flags);
        __wake_up_common(q, mode, nr_exclusive, 0, key);
        spin_unlock_irqrestore(&q->lock, flags);

But in the current stack trace the order is reversed:

> stack backtrace:
> Pid: 0, comm: swapper Not tainted 2.6.30-kvm3-dirty #4
> Call Trace:
>  [] print_circular_bug_tail+0xc1/0xcc
>  [] __lock_acquire+0x1085/0x16b4
>  [] ? save_trace+0x3f/0xa6
>  [] ? __lock_acquire+0x15d2/0x16b4
>  [] lock_acquire+0xc7/0xf3
>  [] ? __queue_work+0x1d/0x43
>  [] _spin_lock_irqsave+0x4f/0x86
>  [] ? __queue_work+0x1d/0x43
>  [] __queue_work+0x1d/0x43
>  [] queue_work_on+0x4a/0x53
>  [] queue_work+0x1f/0x21
>  [] schedule_work+0x1b/0x1d
>  [] fscache_enqueue_operation+0xec/0x11e [fscache]
>  [] cachefiles_read_waiter+0xee/0x102 [cachefiles]
>  [] __wake_up_common+0x4b/0x7a
>  [] __wake_up+0x3d/0x55
>  [] __wake_up_bit+0x31/0x33
>  [] unlock_page+0x27/0x2b

Here the __wake_up happens first, which takes the q->lock, and later the
__queue_work takes the cwq->lock.

I am guessing that it's not safe for fscache to call out to queue_work
from this cachefiles_read_waiter() context (more specifically,
fscache_enqueue_operation calls schedule_work).

I don't have much experience with lockdep... does that make any sense?
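For my own benefit, here is a minimal userspace sketch of the ABBA
pattern I think lockdep is complaining about. The pthread mutexes stand
in for cwq->lock and q->lock, and the function names queue_work_path()
and wake_up_path() are just my own labels for the two call chains above;
this is an illustration, not the kernel code:

        /* abba.c -- illustration only; build with: gcc abba.c -lpthread */
        #include <pthread.h>
        #include <stdio.h>

        static pthread_mutex_t cwq_lock = PTHREAD_MUTEX_INITIALIZER; /* stands in for cwq->lock */
        static pthread_mutex_t q_lock   = PTHREAD_MUTEX_INITIALIZER; /* stands in for q->lock   */

        /* order recorded in dependency chain #1:
         * __queue_work() takes cwq->lock, then insert_work() -> __wake_up() takes q->lock */
        static void queue_work_path(void)
        {
                pthread_mutex_lock(&cwq_lock);
                pthread_mutex_lock(&q_lock);    /* cwq->lock held, now taking q->lock */
                pthread_mutex_unlock(&q_lock);
                pthread_mutex_unlock(&cwq_lock);
        }

        /* order in the stack trace above:
         * __wake_up() takes q->lock, then cachefiles_read_waiter() ->
         * fscache_enqueue_operation() -> schedule_work() -> __queue_work() takes cwq->lock */
        static void wake_up_path(void)
        {
                pthread_mutex_lock(&q_lock);
                pthread_mutex_lock(&cwq_lock);  /* q->lock held, now taking cwq->lock */
                pthread_mutex_unlock(&cwq_lock);
                pthread_mutex_unlock(&q_lock);
        }

        int main(void)
        {
                /* run sequentially so this program terminates; if two CPUs ran
                 * these two paths concurrently, each could end up holding one
                 * lock while waiting for the other -- the deadlock being warned about */
                queue_work_path();
                wake_up_path();
                printf("no deadlock this time, but the two lock orders are inconsistent\n");
                return 0;
        }

Nothing deadlocks in a single run, but the two inconsistent orderings
are, I think, exactly what the report above is flagging as a potential
deadlock.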
-Bart

-- 
WebSig: http://www.jukie.net/~bart/sig/