From: Christian König
Subject: Re: VM lockdep warning
Date: Sat, 21 Apr 2012 14:35:54 +0200
Message-ID: <4F92A9AA.4030607@vodafone.de>
To: Dave Airlie
Cc: dri-devel
List-Id: dri-devel@lists.freedesktop.org

Interesting, I'm pretty sure that I haven't touched the locking order of
the cs_mutex vs. vm_mutex.

Maybe it is just some kind of side effect, going to look into it anyway.

Christian.

On 21.04.2012 13:39, Dave Airlie wrote:
> running 3.4.0-rc3 + Christian's reset patch series.
>
> The locks are definitely taken in different orders between vm_bo_add
> and cs ioctl.
>
> Dave.
>
> ======================================================
> [ INFO: possible circular locking dependency detected ]
> 3.4.0-rc3+ #33 Not tainted
> -------------------------------------------------------
> shader_runner/3090 is trying to acquire lock:
>  (&vm->mutex){+.+...}, at: [] radeon_cs_ioctl+0x438/0x5c1 [radeon]
>
> but task is already holding lock:
>  (&rdev->cs_mutex){+.+.+.}, at: [] radeon_cs_ioctl+0x33/0x5c1 [radeon]
>
> which lock already depends on the new lock.
>
> the existing dependency chain (in reverse order) is:
>
> -> #1 (&rdev->cs_mutex){+.+.+.}:
>        [] lock_acquire+0xf0/0x116
>        [] mutex_lock_nested+0x6a/0x2bb
>        [] radeon_vm_bo_add+0x118/0x1f5 [radeon]
>        [] radeon_vm_init+0x6b/0x70 [radeon]
>        [] radeon_driver_open_kms+0x68/0x9a [radeon]
>        [] drm_open+0x201/0x587 [drm]
>        [] drm_stub_open+0xec/0x14a [drm]
>        [] chrdev_open+0x11c/0x145
>        [] __dentry_open+0x17e/0x29b
>        [] nameidata_to_filp+0x5b/0x62
>        [] do_last+0x75d/0x771
>        [] path_openat+0xcb/0x380
>        [] do_filp_open+0x33/0x81
>        [] do_sys_open+0x100/0x192
>        [] sys_open+0x1c/0x1e
>        [] system_call_fastpath+0x16/0x1b
>
> -> #0 (&vm->mutex){+.+...}:
>        [] __lock_acquire+0xfcd/0x1664
>        [] lock_acquire+0xf0/0x116
>        [] mutex_lock_nested+0x6a/0x2bb
>        [] radeon_cs_ioctl+0x438/0x5c1 [radeon]
>        [] drm_ioctl+0x2d8/0x3a4 [drm]
>        [] do_vfs_ioctl+0x469/0x4aa
>        [] sys_ioctl+0x51/0x75
>        [] system_call_fastpath+0x16/0x1b
>
> other info that might help us debug this:
>
>  Possible unsafe locking scenario:
>
>        CPU0                    CPU1
>        ----                    ----
>   lock(&rdev->cs_mutex);
>                                lock(&vm->mutex);
>                                lock(&rdev->cs_mutex);
>   lock(&vm->mutex);
>
>  *** DEADLOCK ***
>
> 1 lock held by shader_runner/3090:
>  #0:  (&rdev->cs_mutex){+.+.+.}, at: [] radeon_cs_ioctl+0x33/0x5c1 [radeon]
>
> stack backtrace:
> Pid: 3090, comm: shader_runner Not tainted 3.4.0-rc3+ #33
> Call Trace:
>  [] print_circular_bug+0x28a/0x29b
>  [] __lock_acquire+0xfcd/0x1664
>  [] lock_acquire+0xf0/0x116
>  [] ? radeon_cs_ioctl+0x438/0x5c1 [radeon]
>  [] ? might_fault+0x57/0xa7
>  [] mutex_lock_nested+0x6a/0x2bb
>  [] ? radeon_cs_ioctl+0x438/0x5c1 [radeon]
>  [] ? evergreen_ib_parse+0x1b2/0x204 [radeon]
>  [] radeon_cs_ioctl+0x438/0x5c1 [radeon]
>  [] drm_ioctl+0x2d8/0x3a4 [drm]
>  [] ? radeon_cs_finish_pages+0xa3/0xa3 [radeon]
>  [] ? avc_has_perm_flags+0xd7/0x160
>  [] ? avc_has_perm_flags+0x26/0x160
>  [] ? up_read+0x1b/0x32
>  [] do_vfs_ioctl+0x469/0x4aa
>  [] sys_ioctl+0x51/0x75
>  [] ? __wake_up+0x1d/0x48
>  [] system_call_fastpath+0x16/0x1b
>
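
For illustration, the inversion lockdep is reporting boils down to a plain
AB-BA ordering between the two mutexes. The sketch below is simplified and
does not reproduce the actual radeon call sites; dev_ctx/vm_ctx, open_path()
and cs_ioctl_path() are stand-in names for the radeon_device/radeon_vm
structures and the open/vm_init vs. CS ioctl paths in the trace above.

/*
 * Simplified AB-BA sketch of the reported inversion -- not the real
 * radeon code.  One path establishes vm->mutex -> cs_mutex, the CS
 * ioctl path establishes cs_mutex -> vm->mutex, so lockdep flags a
 * possible deadlock.
 */
#include <linux/mutex.h>

struct dev_ctx { struct mutex cs_mutex; };
struct vm_ctx  { struct mutex mutex; };

static void init_ctx(struct dev_ctx *rdev, struct vm_ctx *vm)
{
	mutex_init(&rdev->cs_mutex);
	mutex_init(&vm->mutex);
}

static void open_path(struct dev_ctx *rdev, struct vm_ctx *vm)
{
	mutex_lock(&vm->mutex);		/* held around VM setup */
	mutex_lock(&rdev->cs_mutex);	/* taken while adding the BO */
	/* ... add the BO to the VM ... */
	mutex_unlock(&rdev->cs_mutex);
	mutex_unlock(&vm->mutex);
}

static void cs_ioctl_path(struct dev_ctx *rdev, struct vm_ctx *vm)
{
	mutex_lock(&rdev->cs_mutex);	/* taken at ioctl entry */
	mutex_lock(&vm->mutex);		/* taken again for VM binding */
	/* ... parse and submit the command stream ... */
	mutex_unlock(&vm->mutex);
	mutex_unlock(&rdev->cs_mutex);
}

Whatever the eventual fix looks like, both paths have to agree on a single
order for the two mutexes (or one of them has to stop taking one of the
locks) before lockdep will be happy.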