* [V9fs-developer] [bug report] fs/9p: inode blocks show error in fscache mode
From: jiangyiwen @ 2017-11-22  9:31 UTC
  To: Eric Van Hensbergen, Ron Minnich, Latchesar Ionkov, v9fs-developer
  Cc: kernel-janitors, linux-kernel, sochin.jiang, Eduard Shishkin,
	Xulei (Stone)

Hi all,

I tested a scenario that causes the inode block count to differ
between the client and the host. The scenario is as follows:

Preconditions:
1) Use VirtFS (virtio-9p) to connect the guest and the host.
2) The 9p mount point in the guest is /mnt/9p; the exported directory
   on the host is /9p-host.
3) The server filesystem is ext4 with a block size of 4096.

Test steps:
1) On the client (guest):
# touch /mnt/9p/test/file
# dd if=/dev/zero of=/mnt/9p/test/file bs=1 count=1043456 seek=1302528 conv=notrunc

2) On the client (guest):
# stat /mnt/9p/test/file
The file shows 4582 blocks (block size 512).

3) On the server (host):
# stat /9p-host/test/file
The file shows 2040 blocks (block size 512).
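
For what it's worth, the numbers above line up with the guest deriving
the block count from the apparent file size, while the host reports
only the blocks ext4 actually allocated for the written range (my own
arithmetic, for illustration):

  guest: 1302528 + 1043456 = 2345984 bytes / 512           = 4582 blocks
  host:  1043456 bytes -> 255 * 4096 = 1044480 bytes / 512 = 2040 blocks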

Cause analysis:
The file is a sparse file. v9fs_write_end() updates the inode's block
count from the difference between last_pos and the inode size: whenever
last_pos is larger than i_size, it adds (last_pos - i_size) to the
inode's bytes/blocks and updates i_size. This accounting does not fit a
sparse file, because the server only allocates blocks for the range
that was actually written, so the client ends up reporting more blocks
than the host.
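
For reference, the update path I am describing looks roughly like the
sketch below. This is a simplified reconstruction for discussion, not a
verbatim copy of fs/9p/vfs_addr.c:

static int v9fs_write_end(struct file *filp, struct address_space *mapping,
			  loff_t pos, unsigned len, unsigned copied,
			  struct page *page, void *fsdata)
{
	/* rough reconstruction for discussion, not verbatim kernel code */
	struct inode *inode = page->mapping->host;
	loff_t last_pos = pos + copied;

	/* ... partial-copy and PageUptodate handling elided ... */

	if (last_pos > inode->i_size) {
		/*
		 * i_blocks grows by the whole gap between the old i_size
		 * and last_pos, but a sparse write only allocates blocks
		 * on the server for the range that was actually written.
		 */
		inode_add_bytes(inode, last_pos - inode->i_size);
		i_size_write(inode, last_pos);
	}

	set_page_dirty(page);
	unlock_page(page);
	put_page(page);
	return copied;
}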

My current idea is to call v9fs_invalidate_inode_attr() to invalidate
the cached inode attributes, but that would hurt performance, so I do
not have a good solution yet.
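
To make the idea concrete, the change I have in mind would look roughly
like the following in the branch above (illustration only, not a tested
patch; the extra getattr round trip on the next stat is exactly the
performance cost I am worried about):

	if (last_pos > inode->i_size) {
		/*
		 * Hypothetical: do not guess i_blocks locally; mark the
		 * cached attributes stale so the next stat refetches
		 * them from the server.
		 */
		v9fs_invalidate_inode_attr(inode);
		i_size_write(inode, last_pos);
	}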

Please advise. Thanks in advance!

Best regards,
Yiwen
