From: Jan Kara <jack@suse.cz>
To: linux-fsdevel@vger.kernel.org
Cc: linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org, Christoph Hellwig <hch@infradead.org>, Dan Williams <dan.j.williams@intel.com>, Ross Zwisler <ross.zwisler@linux.intel.com>, Ted Tso <tytso@mit.edu>, "Darrick J. Wong" <darrick.wong@oracle.com>, Jan Kara <jack@suse.cz>
Subject: [PATCH 11/19] dax: Allow tuning whether dax_insert_mapping_entry() dirties entry
Date: Wed, 11 Oct 2017 22:05:55 +0200
Message-ID: <20171011200603.27442-12-jack@suse.cz>
In-Reply-To: <20171011200603.27442-1-jack@suse.cz>

Currently we dirty the radix tree entry whenever dax_insert_mapping_entry()
gets called for a write fault. With synchronous page faults we would like to
insert a clean radix tree entry and dirty it only once we call fdatasync()
and update page tables, to save some unnecessary cache flushing. Add a
'dirty' argument to dax_insert_mapping_entry() for that.

Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/dax.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 5ddf15161390..efc210ff6665 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -526,13 +526,13 @@ static int copy_user_dax(struct block_device *bdev, struct dax_device *dax_dev,
 
 static void *dax_insert_mapping_entry(struct address_space *mapping,
 				      struct vm_fault *vmf,
 				      void *entry, sector_t sector,
-				      unsigned long flags)
+				      unsigned long flags, bool dirty)
 {
 	struct radix_tree_root *page_tree = &mapping->page_tree;
 	void *new_entry;
 	pgoff_t index = vmf->pgoff;
 
-	if (vmf->flags & FAULT_FLAG_WRITE)
+	if (dirty)
 		__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
 
 	if (dax_is_zero_entry(entry) && !(flags & RADIX_DAX_ZERO_PAGE)) {
@@ -569,7 +569,7 @@ static void *dax_insert_mapping_entry(struct address_space *mapping,
 		entry = new_entry;
 	}
 
-	if (vmf->flags & FAULT_FLAG_WRITE)
+	if (dirty)
 		radix_tree_tag_set(page_tree, index, PAGECACHE_TAG_DIRTY);
 
 	spin_unlock_irq(&mapping->tree_lock);
@@ -881,7 +881,7 @@ static int dax_load_hole(struct address_space *mapping, void *entry,
 	}
 
 	entry2 = dax_insert_mapping_entry(mapping, vmf, entry, 0,
-					  RADIX_DAX_ZERO_PAGE);
+					  RADIX_DAX_ZERO_PAGE, false);
 	if (IS_ERR(entry2)) {
 		ret = VM_FAULT_SIGBUS;
 		goto out;
@@ -1182,7 +1182,7 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp,
 
 		entry = dax_insert_mapping_entry(mapping, vmf, entry,
 						 dax_iomap_sector(&iomap, pos),
-						 0);
+						 0, write);
 		if (IS_ERR(entry)) {
 			error = PTR_ERR(entry);
 			goto error_finish_iomap;
@@ -1258,7 +1258,7 @@ static int dax_pmd_load_hole(struct vm_fault *vmf, struct iomap *iomap,
 		goto fallback;
 
 	ret = dax_insert_mapping_entry(mapping, vmf, entry, 0,
-			RADIX_DAX_PMD | RADIX_DAX_ZERO_PAGE);
+			RADIX_DAX_PMD | RADIX_DAX_ZERO_PAGE, false);
 	if (IS_ERR(ret))
 		goto fallback;
 
@@ -1379,7 +1379,7 @@ static int dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
 
 		entry = dax_insert_mapping_entry(mapping, vmf, entry,
 				dax_iomap_sector(&iomap, pos),
-				RADIX_DAX_PMD);
+				RADIX_DAX_PMD, write);
 		if (IS_ERR(entry))
 			goto finish_iomap;
-- 
2.12.3