From: Alex Williamson <alex.williamson@redhat.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
peterx@redhat.com, prime.zeng@hisilicon.com, cohuck@redhat.com
Subject: Re: [PATCH] vfio/pci: Handle concurrent vma faults
Date: Fri, 12 Mar 2021 13:58:44 -0700
Message-ID: <20210312135844.5e97aac7@omen.home.shazbot.org>
In-Reply-To: <20210312130938.1e535e50@omen.home.shazbot.org>
On Fri, 12 Mar 2021 13:09:38 -0700
Alex Williamson <alex.williamson@redhat.com> wrote:
> On Fri, 12 Mar 2021 15:41:47 -0400
> Jason Gunthorpe <jgg@nvidia.com> wrote:
>
> ======================================================
> WARNING: possible circular locking dependency detected
> 5.12.0-rc1+ #18 Not tainted
> ------------------------------------------------------
> CPU 0/KVM/1406 is trying to acquire lock:
> ffffffffa5a58d60 (fs_reclaim){+.+.}-{0:0}, at: fs_reclaim_acquire+0x83/0xd0
>
> but task is already holding lock:
> ffff94c0f3e8fb08 (&mapping->i_mmap_rwsem){++++}-{3:3}, at: vfio_device_io_remap_mapping_range+0x31/0x120 [vfio]
>
> which lock already depends on the new lock.
>
>
> the existing dependency chain (in reverse order) is:
>
> -> #1 (&mapping->i_mmap_rwsem){++++}-{3:3}:
> down_write+0x3d/0x70
> dma_resv_lockdep+0x1b0/0x298
> do_one_initcall+0x5b/0x2d0
> kernel_init_freeable+0x251/0x298
> kernel_init+0xa/0x111
> ret_from_fork+0x22/0x30
>
> -> #0 (fs_reclaim){+.+.}-{0:0}:
> __lock_acquire+0x111f/0x1e10
> lock_acquire+0xb5/0x380
> fs_reclaim_acquire+0xa3/0xd0
> kmem_cache_alloc_trace+0x30/0x2c0
> memtype_reserve+0xc3/0x280
> reserve_pfn_range+0x86/0x160
> track_pfn_remap+0xa6/0xe0
> remap_pfn_range+0xa8/0x610
> vfio_device_io_remap_mapping_range+0x93/0x120 [vfio]
> vfio_pci_test_and_up_write_memory_lock+0x34/0x40 [vfio_pci]
> vfio_basic_config_write+0x12d/0x230 [vfio_pci]
> vfio_pci_config_rw+0x1b7/0x3a0 [vfio_pci]
> vfs_write+0xea/0x390
> __x64_sys_pwrite64+0x72/0xb0
> do_syscall_64+0x33/0x40
> entry_SYSCALL_64_after_hwframe+0x44/0xae
>
..
> > Does current_gfp_context()/memalloc_nofs_save()/etc solve it?
Yeah, we can indeed use memalloc_nofs_save()/memalloc_nofs_restore() here.
From the trace it looks like memtype_reserve() is allocating something for
pfnmap tracking, and that allocation triggers a bunch of lockdep-specific
checks.  Is it valid to wrap io_remap_pfn_range() in a NOFS section like
the sketch below, or am I just masking a bug?  Thanks,
Alex
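
Untested sketch of what I mean -- the helper name here is made up, the
real call site would be vfio_device_io_remap_mapping_range() from the
trace above:

/*
 * memalloc_nofs_save() marks the task so allocations made under
 * io_remap_pfn_range() (e.g. memtype_reserve() for pfnmap tracking)
 * are implicitly GFP_NOFS, which keeps fs_reclaim out of the lock
 * chain while the caller holds i_mmap_rwsem.
 */
#include <linux/mm.h>        /* io_remap_pfn_range() */
#include <linux/sched/mm.h>  /* memalloc_nofs_save/restore() */

static int vfio_io_remap_nofs(struct vm_area_struct *vma,
			      unsigned long addr, unsigned long pfn,
			      unsigned long size)
{
	unsigned int nofs_flags;
	int ret;

	nofs_flags = memalloc_nofs_save();
	ret = io_remap_pfn_range(vma, addr, pfn, size, vma->vm_page_prot);
	memalloc_nofs_restore(nofs_flags);

	return ret;
}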
Thread overview: 12+ messages
2021-03-10 17:53 [PATCH] vfio/pci: Handle concurrent vma faults Alex Williamson
2021-03-10 18:14 ` Jason Gunthorpe
2021-03-10 18:34 ` Alex Williamson
2021-03-10 18:40 ` Jason Gunthorpe
2021-03-10 20:06 ` Peter Xu
2021-03-11 11:35 ` Christoph Hellwig
2021-03-11 16:35 ` Peter Xu
2021-03-12 19:16 ` Alex Williamson
2021-03-12 19:41 ` Jason Gunthorpe
2021-03-12 20:09 ` Alex Williamson
2021-03-12 20:58 ` Alex Williamson [this message]
2021-03-13 0:03 ` Jason Gunthorpe