* [f2fs-dev] [PATCH v5] f2fs: fix compressed file start atomic write may cause data corruption
@ 2022-03-18  1:23 Fengnan Chang via Linux-f2fs-devel
  2022-03-18  9:06 ` Chao Yu
  0 siblings, 1 reply; 2+ messages in thread
From: Fengnan Chang via Linux-f2fs-devel @ 2022-03-18  1:23 UTC (permalink / raw)
  To: jaegeuk, chao
  Cc: Dan Carpenter, kernel test robot, Fengnan Chang, linux-f2fs-devel

When a compressed file has blocks, f2fs_ioc_start_atomic_write() will
succeed, but the compressed flag will remain in the inode. Writing a
partial compressed cluster and then committing the atomic write will
cause data corruption.

This is the reproduction process:
Step 1:
Create a compressed file, write 64K of data, then call fsync(); the
blocks are written out as a compressed cluster.
Step 2:
ioctl(F2FS_IOC_START_ATOMIC_WRITE)  -- this should fail, but it does not.
Write page 0 and page 3.
ioctl(F2FS_IOC_COMMIT_ATOMIC_WRITE) -- pages 0 and 3 are written as in a
normal (non-compressed) file.
Step 3:
Drop caches.
Read pages 0-4 -- since page 0 has a valid block address, the cluster is
read as non-compressed, so pages 1 and 2 are filled with compressed data
or zeroes.

The root cause is that, after commit 7eab7a696827 ("f2fs: compress: remove
unneeded read when rewrite whole cluster"), in step 2 f2fs_write_begin()
only sets the target page dirty, and f2fs_commit_inmem_pages() then writes
the partial raw pages into the compressed cluster, corrupting the
compressed cluster layout.

Fixes: 4c8ff7095bef ("f2fs: support data compression")
Fixes: 7eab7a696827 ("f2fs: compress: remove unneeded read when rewrite whole cluster")
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Fengnan Chang <changfengnan@vivo.com>
---
 fs/f2fs/data.c | 2 +-
 fs/f2fs/file.c | 5 ++++-
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index b09f401f8960..5675af1b6916 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -3363,7 +3363,7 @@ static int f2fs_write_begin(struct file *file, struct address_space *mapping,
 
 		*fsdata = NULL;
 
-		if (len == PAGE_SIZE)
+		if (len == PAGE_SIZE && !(f2fs_is_atomic_file(inode)))
 			goto repeat;
 
 		ret = f2fs_prepare_compress_overwrite(inode, pagep,
diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index 7049be29bc2e..68ddf4c7ca64 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -2009,7 +2009,10 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
 
 	inode_lock(inode);
 
-	f2fs_disable_compressed_file(inode);
+	if (!f2fs_disable_compressed_file(inode)) {
+		ret = -EINVAL;
+		goto out;
+	}
 
 	if (f2fs_is_atomic_file(inode)) {
 		if (is_inode_flag_set(inode, FI_ATOMIC_REVOKE_REQUEST))
-- 
2.32.0



_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel


* Re: [f2fs-dev] [PATCH v5] f2fs: fix compressed file start atomic write may cause data corruption
  2022-03-18  1:23 [f2fs-dev] [PATCH v5] f2fs: fix compressed file start atomic write may cause data corruption Fengnan Chang via Linux-f2fs-devel
@ 2022-03-18  9:06 ` Chao Yu
  0 siblings, 0 replies; 2+ messages in thread
From: Chao Yu @ 2022-03-18  9:06 UTC (permalink / raw)
  To: Fengnan Chang, jaegeuk; +Cc: kernel test robot, Dan Carpenter, linux-f2fs-devel

On 2022/3/18 9:23, Fengnan Chang wrote:
> When a compressed file has blocks, f2fs_ioc_start_atomic_write() will
> succeed, but the compressed flag will remain in the inode. Writing a
> partial compressed cluster and then committing the atomic write will
> cause data corruption.
> 
> This is the reproduction process:
> Step 1:
> Create a compressed file, write 64K of data, then call fsync(); the
> blocks are written out as a compressed cluster.
> Step 2:
> ioctl(F2FS_IOC_START_ATOMIC_WRITE)  -- this should fail, but it does not.
> Write page 0 and page 3.
> ioctl(F2FS_IOC_COMMIT_ATOMIC_WRITE) -- pages 0 and 3 are written as in a
> normal (non-compressed) file.
> Step 3:
> Drop caches.
> Read pages 0-4 -- since page 0 has a valid block address, the cluster is
> read as non-compressed, so pages 1 and 2 are filled with compressed data
> or zeroes.
> 
> The root cause is that, after commit 7eab7a696827 ("f2fs: compress: remove
> unneeded read when rewrite whole cluster"), in step 2 f2fs_write_begin()
> only sets the target page dirty, and f2fs_commit_inmem_pages() then writes
> the partial raw pages into the compressed cluster, corrupting the
> compressed cluster layout.
> 
> Fixes: 4c8ff7095bef ("f2fs: support data compression")
> Fixes: 7eab7a696827 ("f2fs: compress: remove unneeded read when rewrite whole cluster")
> Reported-by: kernel test robot <lkp@intel.com>
> Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
> Signed-off-by: Fengnan Chang <changfengnan@vivo.com>

Reviewed-by: Chao Yu <chao@kernel.org>

Thanks,


