From: Matthew Wilcox <willy@infradead.org>
To: Andreas Gruenbacher <agruenba@redhat.com>
Cc: Gao Xiang <hsiangkao@linux.alibaba.com>,
Christoph Hellwig <hch@lst.de>,
"Darrick J . Wong" <djwong@kernel.org>,
Huang Jianan <huangjianan@oppo.com>,
linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
linux-kernel@vger.kernel.org,
Andreas Gruenbacher <andreas.gruenbacher@gmail.com>
Subject: Re: [PATCH v7] iomap: make inline data support more flexible
Date: Mon, 26 Jul 2021 13:32:14 +0100
Message-ID: <YP6rTi/I3Vd+pbeT@casper.infradead.org>
In-Reply-To: <20210726110611.459173-1-agruenba@redhat.com>
On Mon, Jul 26, 2021 at 01:06:11PM +0200, Andreas Gruenbacher wrote:
> @@ -671,11 +683,11 @@ static size_t iomap_write_end_inline(struct inode *inode, struct page *page,
>  	void *addr;
>
>  	WARN_ON_ONCE(!PageUptodate(page));
> -	BUG_ON(pos + copied > PAGE_SIZE - offset_in_page(iomap->inline_data));
> +	BUG_ON(!iomap_inline_data_size_valid(iomap));
>
>  	flush_dcache_page(page);
>  	addr = kmap_atomic(page);
> -	memcpy(iomap->inline_data + pos, addr + pos, copied);
> +	memcpy(iomap_inline_data(iomap, pos), addr + pos, copied);
>  	kunmap_atomic(addr);
>
>  	mark_inode_dirty(inode);
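
(Aside for readers following along: iomap_inline_data() and
iomap_inline_data_size_valid() are new helpers added by this patch.
Judging by their names and the BUG_ON they replace, they presumably
look something like the below; this is a sketch inferred from the
quoted hunk, not text quoted from the patch.)

static inline void *iomap_inline_data(struct iomap *iomap, loff_t pos)
{
	/* Address of the inline data backing file offset @pos. */
	return iomap->inline_data + pos - iomap->offset;
}

static inline bool iomap_inline_data_size_valid(struct iomap *iomap)
{
	/* The inline data must not extend past the page it lives in. */
	return iomap->length <= PAGE_SIZE - offset_in_page(iomap->inline_data);
}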
Only tangentially related ... why do we memcpy the data into the tail
at write_end() time instead of at writepage() time? I see there's a
workaround for that in gfs2's page_mkwrite():
	if (gfs2_is_stuffed(ip)) {
		err = gfs2_unstuff_dinode(ip);
(an mmap store cannot change the size of the file, so this would be
unnecessary)
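
(Purely illustrative, not a tested change: with the copy deferred to
writepage time, I'd expect that whole block in gfs2_page_mkwrite() to
simply go away, i.e.

-	if (gfs2_is_stuffed(ip)) {
-		err = gfs2_unstuff_dinode(ip);
-		...
-	}

and stuffed inodes would stay stuffed across mmap writes.)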
Something like this ...
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 87ccb3438bec..3aeebe899fc5 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -665,9 +665,10 @@ static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len,
 	return copied;
 }
 
-static size_t iomap_write_end_inline(struct inode *inode, struct page *page,
-		struct iomap *iomap, loff_t pos, size_t copied)
+static int iomap_write_inline_data(struct inode *inode, struct page *page,
+		struct iomap *iomap)
 {
+	size_t size = i_size_read(inode) - page_offset(page);
 	void *addr;
 
 	WARN_ON_ONCE(!PageUptodate(page));
@@ -675,11 +676,10 @@ static size_t iomap_write_end_inline(struct inode *inode, struct page *page,
 
 	flush_dcache_page(page);
 	addr = kmap_atomic(page);
-	memcpy(iomap->inline_data + pos, addr + pos, copied);
+	memcpy(iomap->inline_data, addr, size);
 	kunmap_atomic(addr);
 
-	mark_inode_dirty(inode);
-	return copied;
+	return 0;
 }
 
 /* Returns the number of bytes copied.  May be 0.  Cannot be an errno. */
@@ -691,9 +691,7 @@ static size_t iomap_write_end(struct inode *inode, loff_t pos, size_t len,
 	loff_t old_size = inode->i_size;
 	size_t ret;
 
-	if (srcmap->type == IOMAP_INLINE) {
-		ret = iomap_write_end_inline(inode, page, iomap, pos, copied);
-	} else if (srcmap->flags & IOMAP_F_BUFFER_HEAD) {
+	if (srcmap->flags & IOMAP_F_BUFFER_HEAD) {
 		ret = block_write_end(NULL, inode->i_mapping, pos, len, copied,
 				page, NULL);
 	} else {
@@ -1314,6 +1312,9 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 
 	WARN_ON_ONCE(iop && atomic_read(&iop->write_bytes_pending) != 0);
 
+	if (wpc->iomap.type == IOMAP_INLINE)
+		return iomap_write_inline_data(inode, page, &wpc->iomap);
+
 	/*
 	 * Walk through the page to find areas to write back. If we run off the
 	 * end of the current map or find the current map invalid, grab a new
@@ -1328,8 +1329,6 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 		error = wpc->ops->map_blocks(wpc, inode, file_offset);
 		if (error)
 			break;
-		if (WARN_ON_ONCE(wpc->iomap.type == IOMAP_INLINE))
-			continue;
 		if (wpc->iomap.type == IOMAP_HOLE)
 			continue;
 		iomap_add_to_ioend(inode, file_offset, page, iop, wpc, wbc,
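
(To make the end state easier to read, here is the copy-back helper
assembled from the hunks above; a listing for readability, not a
separate proposal. The BUG_ON sits between the two hunks and is left
untouched by them.)

static int iomap_write_inline_data(struct inode *inode, struct page *page,
		struct iomap *iomap)
{
	/* Everything from the start of the page up to i_size is inline data. */
	size_t size = i_size_read(inode) - page_offset(page);
	void *addr;

	WARN_ON_ONCE(!PageUptodate(page));
	BUG_ON(!iomap_inline_data_size_valid(iomap));

	flush_dcache_page(page);
	addr = kmap_atomic(page);
	memcpy(iomap->inline_data, addr, size);
	kunmap_atomic(addr);

	return 0;
}

With that, writepage copies the page contents back into the inode
whenever the mapping is IOMAP_INLINE, and write_end falls through to
the normal buffered path.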