From: Jan Kara <jack@suse.cz>
To: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: Jan Kara <jack@suse.cz>, linux-nvdimm@lists.01.org, Christoph Hellwig <hch@infradead.org>, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH 0/21 v4] dax: Clear dirty bits after flushing caches
Date: Fri, 4 Nov 2016 19:14:59 +0100
Message-ID: <20161104181459.GB6650@quack2.suse.cz> (raw)
In-Reply-To: <20161104044648.GB3569@quack2.suse.cz>

On Fri 04-11-16 05:46:48, Jan Kara wrote:
> On Tue 01-11-16 23:17:33, Ross Zwisler wrote:
> > On Tue, Nov 01, 2016 at 11:36:06PM +0100, Jan Kara wrote:
> > > Hello,
> > >
> > > this is the fourth revision of my patches to clear dirty bits from the
> > > radix tree of DAX inodes when caches for the corresponding pfns have been
> > > flushed. This patch set is significantly larger than the previous version
> > > because I'm changing how ->fault, ->page_mkwrite, and ->pfn_mkwrite
> > > handlers may choose to handle the fault, so that we don't have to leak
> > > details about DAX locking into the generic code. In principle, these
> > > patches enable handlers to easily update PTEs and do other work necessary
> > > to finish the fault without duplicating the functionality present in the
> > > generic code. I'd really like feedback from mm folks on whether such
> > > changes to the fault handling code are fine or what they'd do differently.
> > >
> > > The patches are based on 4.9-rc1 + Ross' DAX PMD page fault series [1] +
> > > the ext4 conversion of the DAX IO path to the iomap infrastructure [2].
> > > For testing, I've pushed out a tree including all these patches and
> > > further DAX fixes to:
> > >
> > > git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs.git dax
> >
> > In my testing I hit what I believe to be a new lockdep splat. This was
> > produced with ext4+dax+generic/246, though I've tried several times to
> > reproduce it and haven't been able to.
> > This testing was done with your tree plus one patch to fix the DAX PMD
> > recursive fallback issue that you reported. This new patch is folded into
> > v9 of my PMD series that I sent out earlier today.
> >
> > I've posted the tree I was testing with here:
> >
> > https://git.kernel.org/cgit/linux/kernel/git/zwisler/linux.git/log/?h=jan_dax
> >
> > Here is the lockdep splat, passed through kasan_symbolize:
> >
> > run fstests generic/246 at 2016-11-01 21:51:34
> >
> > ======================================================
> > [ INFO: possible circular locking dependency detected ]
> > 4.9.0-rc1-00165-g13826b5 #2 Not tainted
> > -------------------------------------------------------
> > t_mmap_writev/13704 is trying to acquire lock:
> >  (&ei->i_mmap_sem){++++.+}, at: [<ffffffff8133ef06>] ext4_dax_fault+0x36/0xd0 fs/ext4/file.c:267
>
> Interesting, I didn't see this in my testing.
>
> > -> #0 (&ei->i_mmap_sem){++++.+}:
> > [<     inline     >] check_prev_add kernel/locking/lockdep.c:1829
> > [<     inline     >] check_prevs_add kernel/locking/lockdep.c:1939
> > [<     inline     >] validate_chain kernel/locking/lockdep.c:2266
> > [<ffffffff8110e89f>] __lock_acquire+0x127f/0x14d0 kernel/locking/lockdep.c:3335
> > [<ffffffff8110efa2>] lock_acquire+0xf2/0x1e0 kernel/locking/lockdep.c:3746
> > [<ffffffff81b39b7e>] down_read+0x3e/0xa0 kernel/locking/rwsem.c:22
> > [<ffffffff8133ef06>] ext4_dax_fault+0x36/0xd0 fs/ext4/file.c:267
> > [<ffffffff8122f9d1>] __do_fault+0x21/0x130 mm/memory.c:2872
> > [<     inline     >] do_read_fault mm/memory.c:3231
> > [<     inline     >] do_fault mm/memory.c:3333
> > [<     inline     >] handle_pte_fault mm/memory.c:3534
> > [<     inline     >] __handle_mm_fault mm/memory.c:3624
> > [<ffffffff812348ae>] handle_mm_fault+0x114e/0x1550 mm/memory.c:3661
> > [<ffffffff8106bc27>] __do_page_fault+0x247/0x4f0 arch/x86/mm/fault.c:1397
> > [<ffffffff8106bfad>] trace_do_page_fault+0x5d/0x290 arch/x86/mm/fault.c:1490
> > [<ffffffff81065dba>] do_async_page_fault+0x1a/0xa0 arch/x86/kernel/kvm.c:265
> > [<ffffffff81b3ea08>] async_page_fault+0x28/0x30 arch/x86/entry/entry_64.S:1015
> > [<     inline     >] arch_copy_from_iter_pmem ./arch/x86/include/asm/pmem.h:95
> > [<     inline     >] copy_from_iter_pmem ./include/linux/pmem.h:118
> > [<ffffffff812f4a27>] dax_iomap_actor+0x147/0x270 fs/dax.c:1027
> > [<ffffffff8130c513>] iomap_apply+0xb3/0x130 fs/iomap.c:78
>
> So the problem is that we were doing a write to a DAX file from a buffer
> which is an mmapped DAX file, and we took a fault when copying data from
> the buffer. That looks like a real problem. I'll have to think about what
> to do with it... Thanks for the report!

So the problem is in the ext4 iomap conversion (i.e., not in this patch
series). The culprit is that we keep a transaction handle running between
the ->iomap_begin() and ->iomap_end() calls. I was thinking about possible
solutions and I've found only two:

1) Add the inode to the orphan list when we are extending the file in
ext4_iomap_begin() and stop the current handle. Then grab a new handle in
ext4_iomap_end(), remove the inode from the orphan list, and update the
inode size. This is basically what we were using in the original direct IO
path.

2) Add a version of dax_iomap_rw() (or a flag for it) that prefaults pages
before calling ->iomap_begin() and then uses an atomic copy for the data.
In ->iomap_end() we'd have to truncate the file if we didn't copy data for
the whole extent. This is more like how the standard write path works.

Doing 1) is easier; doing 2) may perform better unless there is high memory
pressure, which would evict buffers from memory before we actually allocate
the extent and copy data into it. I guess for now I'll go with 1) just to
have the conversion to the iomap code done, and look into doing 2) later
while measuring what performance benefits we get from it.
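A rough kernel-style sketch of option 1) might look like the following. This is illustrative pseudocode, not an actual patch: helper names such as ext4_journal_start(), ext4_orphan_add(), and ext4_orphan_del() mirror existing ext4 helpers, but the signatures, locking, and error handling are heavily simplified.

```
/* Pseudocode sketch of option 1): no handle is held across the copy. */
static int ext4_iomap_begin(struct inode *inode, loff_t offset,
			    loff_t length, ...)
{
	handle_t *handle = ext4_journal_start(inode, EXT4_HT_MAP_BLOCKS,
					      nblocks);
	ret = ext4_map_blocks(handle, inode, &map, flags);
	if (extending_beyond_i_size)
		ext4_orphan_add(handle, inode);	/* crash-safe while blocks
						   sit beyond i_size */
	ext4_journal_stop(handle);		/* do NOT keep the handle
						   across dax_iomap_actor() */
	return ret;
}

static int ext4_iomap_end(struct inode *inode, loff_t offset,
			  loff_t length, ssize_t written, ...)
{
	handle_t *handle = ext4_journal_start(inode, EXT4_HT_INODE, 2);

	if (offset + written > i_size_read(inode)) {
		/* extend i_size/i_disksize only over data actually copied */
		i_size_write(inode, offset + written);
		ext4_update_i_disksize(inode, offset + written);
	}
	ext4_orphan_del(handle, inode);
	ext4_mark_inode_dirty(handle, inode);
	ext4_journal_stop(handle);
	return 0;
}
```

The orphan-list entry is what keeps the file consistent if we crash between the allocation and the size update, just as in the old direct IO path.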
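And option 2) in sketch form, again as illustrative pseudocode rather than real kernel code: iov_iter_fault_in_readable() and pagefault_disable()/pagefault_enable() are existing helpers, but the surrounding control flow is simplified and the function name is made up.

```
/* Pseudocode sketch of option 2): prefault the source pages, then forbid
 * page faults during the copy so no fs locks can be re-entered. */
ssize_t dax_iomap_rw_prefault(struct kiocb *iocb, struct iov_iter *iter, ...)
{
	while (iov_iter_count(iter)) {
		/* touch the source pages before taking any fs locks */
		if (iov_iter_fault_in_readable(iter, chunk))
			break;			/* EFAULT on what remains */

		->iomap_begin(...);		/* may hold a transaction */
		pagefault_disable();		/* copy must not fault now */
		copied = copy_from_iter(dax_addr, chunk, iter);
		pagefault_enable();
		->iomap_end(..., copied);	/* truncate allocation beyond
						   'copied' on a short copy */

		if (copied != chunk)
			continue;		/* source page was evicted:
						   re-fault and retry */
	}
}
```

The retry loop is needed because the prefaulted page can be reclaimed before the atomic copy runs, which is exactly the memory-pressure case where option 1) would behave better.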
								Honza

> > [<ffffffff812f46d6>] dax_iomap_rw+0x76/0xa0 fs/dax.c:1067
> > [<     inline     >] ext4_dax_write_iter fs/ext4/file.c:196
> > [<ffffffff8133fb13>] ext4_file_write_iter+0x243/0x340 fs/ext4/file.c:217
> > [<ffffffff812957d1>] do_iter_readv_writev+0xb1/0x130 fs/read_write.c:695
> > [<ffffffff81296384>] do_readv_writev+0x1a4/0x250 fs/read_write.c:872
> > [<ffffffff8129668f>] vfs_writev+0x3f/0x50 fs/read_write.c:911
> > [<ffffffff81296704>] do_writev+0x64/0x100 fs/read_write.c:944
> > [<     inline     >] SYSC_writev fs/read_write.c:1017
> > [<ffffffff81297900>] SyS_writev+0x10/0x20 fs/read_write.c:1014
> > [<ffffffff81b3d501>] entry_SYSCALL_64_fastpath+0x1f/0xc2 arch/x86/entry/entry_64.S:209

-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR
_______________________________________________
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm