linux-kernel.vger.kernel.org archive mirror
* [PATCH 1/2] f2fs: add compress_mode mount option
@ 2020-11-23  3:17 Daeho Jeong
  2020-11-23  3:17 ` [PATCH 2/2] f2fs: add F2FS_IOC_DECOMPRESS_FILE and F2FS_IOC_COMPRESS_FILE Daeho Jeong
                   ` (3 more replies)
  0 siblings, 4 replies; 18+ messages in thread
From: Daeho Jeong @ 2020-11-23  3:17 UTC (permalink / raw)
  To: linux-kernel, linux-f2fs-devel, kernel-team; +Cc: Daeho Jeong

From: Daeho Jeong <daehojeong@google.com>

Add a new "compress_mode" mount option to control the file
compression mode. It supports "fs-based" and "user-based".
In "fs-based" mode (default), f2fs does automatic compression on
compression-enabled files. In "user-based" mode, f2fs disables
automatic compression and gives the user discretion in choosing
the target files and the timing, meaning the user can do manual
compression/decompression on compression-enabled files using ioctls.
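
As a usage sketch (the device name and the mkfs invocation below are
illustrative, not part of this patch):

```shell
# Assumes a device formatted with the compression feature, e.g.:
#   mkfs.f2fs -f -O extra_attr,compression /dev/vdb

# Mount with user-controlled compression; "fs-based" is the default:
mount -t f2fs -o compress_algorithm=lz4,compress_mode=user-based /dev/vdb /mnt

# The effective mode is visible in the mount options:
grep f2fs /proc/mounts
```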

Signed-off-by: Daeho Jeong <daehojeong@google.com>
---
 Documentation/filesystems/f2fs.rst |  7 +++++++
 fs/f2fs/data.c                     | 10 +++++-----
 fs/f2fs/f2fs.h                     | 30 ++++++++++++++++++++++++++++++
 fs/f2fs/segment.c                  |  2 +-
 fs/f2fs/super.c                    | 23 +++++++++++++++++++++++
 5 files changed, 66 insertions(+), 6 deletions(-)

diff --git a/Documentation/filesystems/f2fs.rst b/Documentation/filesystems/f2fs.rst
index b8ee761c9922..0679c53d5012 100644
--- a/Documentation/filesystems/f2fs.rst
+++ b/Documentation/filesystems/f2fs.rst
@@ -260,6 +260,13 @@ compress_extension=%s	 Support adding specified extension, so that f2fs can enab
 			 For other files, we can still enable compression via ioctl.
 			 Note that, there is one reserved special extension '*', it
 			 can be set to enable compression for all files.
+compress_mode=%s	 Control file compression mode. This supports "fs-based" and
+			 "user-based". In "fs-based" mode (default), f2fs does
+			 automatic compression on the compression enabled files.
+			 In "user-based" mode, f2fs disables the automatic compression
+			 and gives the user discretion of choosing the target file and
+			 the timing. The user can do manual compression/decompression
+			 on the compression enabled files using ioctls.
 inlinecrypt		 When possible, encrypt/decrypt the contents of encrypted
 			 files using the blk-crypto framework rather than
 			 filesystem-layer encryption. This allows the use of
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index be4da52604ed..69370f0073dd 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -2896,7 +2896,7 @@ static int f2fs_write_data_page(struct page *page,
 	if (unlikely(f2fs_cp_error(F2FS_I_SB(inode))))
 		goto out;
 
-	if (f2fs_compressed_file(inode)) {
+	if (f2fs_need_compress_write(inode)) {
 		if (f2fs_is_compressed_cluster(inode, page->index)) {
 			redirty_page_for_writepage(wbc, page);
 			return AOP_WRITEPAGE_ACTIVATE;
@@ -2988,7 +2988,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 readd:
 			need_readd = false;
 #ifdef CONFIG_F2FS_FS_COMPRESSION
-			if (f2fs_compressed_file(inode)) {
+			if (f2fs_need_compress_write(inode)) {
 				ret = f2fs_init_compress_ctx(&cc);
 				if (ret) {
 					done = 1;
@@ -3067,7 +3067,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 				goto continue_unlock;
 
 #ifdef CONFIG_F2FS_FS_COMPRESSION
-			if (f2fs_compressed_file(inode)) {
+			if (f2fs_need_compress_write(inode)) {
 				get_page(page);
 				f2fs_compress_ctx_add_page(&cc, page);
 				continue;
@@ -3120,7 +3120,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 	}
 #ifdef CONFIG_F2FS_FS_COMPRESSION
 	/* flush remained pages in compress cluster */
-	if (f2fs_compressed_file(inode) && !f2fs_cluster_is_empty(&cc)) {
+	if (f2fs_need_compress_write(inode) && !f2fs_cluster_is_empty(&cc)) {
 		ret = f2fs_write_multi_pages(&cc, &submitted, wbc, io_type);
 		nwritten += submitted;
 		wbc->nr_to_write -= submitted;
@@ -3164,7 +3164,7 @@ static inline bool __should_serialize_io(struct inode *inode,
 	if (IS_NOQUOTA(inode))
 		return false;
 
-	if (f2fs_compressed_file(inode))
+	if (f2fs_need_compress_write(inode))
 		return true;
 	if (wbc->sync_mode != WB_SYNC_ALL)
 		return true;
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index e0826779a101..88e012d07ad5 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -149,6 +149,7 @@ struct f2fs_mount_info {
 	unsigned char compress_algorithm;	/* algorithm type */
 	unsigned compress_log_size;		/* cluster log size */
 	unsigned char compress_ext_cnt;		/* extension count */
+	int compress_mode;			/* compression mode */
 	unsigned char extensions[COMPRESS_EXT_NUM][F2FS_EXTENSION_LEN];	/* extensions */
 };
 
@@ -677,6 +678,7 @@ enum {
 	FI_VERITY_IN_PROGRESS,	/* building fs-verity Merkle tree */
 	FI_COMPRESSED_FILE,	/* indicate file's data can be compressed */
 	FI_MMAP_FILE,		/* indicate file was mmapped */
+	FI_ENABLE_COMPRESS,	/* enable compression in user-based compression mode */
 	FI_MAX,			/* max flag, never be used */
 };
 
@@ -1243,6 +1245,18 @@ enum fsync_mode {
 	FSYNC_MODE_NOBARRIER,	/* fsync behaves nobarrier based on posix */
 };
 
+enum {
+	COMPR_MODE_FS,		/*
+				 * automatically compress compression
+				 * enabled files
+				 */
+	COMPR_MODE_USER,	/*
+				 * automatic compression is disabled;
+				 * user can control the file compression
+				 * using ioctls
+				 */
+};
+
 /*
  * this value is set in page as a private data which indicate that
  * the page is atomically written, and it is in inmem_pages list.
@@ -2752,6 +2766,22 @@ static inline int f2fs_compressed_file(struct inode *inode)
 		is_inode_flag_set(inode, FI_COMPRESSED_FILE);
 }
 
+static inline int f2fs_need_compress_write(struct inode *inode)
+{
+	int compress_mode = F2FS_OPTION(F2FS_I_SB(inode)).compress_mode;
+
+	if (!f2fs_compressed_file(inode))
+		return 0;
+
+	if (compress_mode == COMPR_MODE_FS)
+		return 1;
+	else if (compress_mode == COMPR_MODE_USER &&
+			is_inode_flag_set(inode, FI_ENABLE_COMPRESS))
+		return 1;
+
+	return 0;
+}
+
 static inline unsigned int addrs_per_inode(struct inode *inode)
 {
 	unsigned int addrs = CUR_ADDRS_PER_INODE(inode) -
diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
index 1596502f7375..652ca049bb7e 100644
--- a/fs/f2fs/segment.c
+++ b/fs/f2fs/segment.c
@@ -3254,7 +3254,7 @@ static int __get_segment_type_6(struct f2fs_io_info *fio)
 			else
 				return CURSEG_COLD_DATA;
 		}
-		if (file_is_cold(inode) || f2fs_compressed_file(inode))
+		if (file_is_cold(inode) || f2fs_need_compress_write(inode))
 			return CURSEG_COLD_DATA;
 		if (file_is_hot(inode) ||
 				is_inode_flag_set(inode, FI_HOT_DATA) ||
diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index 87f7a6e86370..ea2385aa7f48 100644
--- a/fs/f2fs/super.c
+++ b/fs/f2fs/super.c
@@ -146,6 +146,7 @@ enum {
 	Opt_compress_algorithm,
 	Opt_compress_log_size,
 	Opt_compress_extension,
+	Opt_compress_mode,
 	Opt_atgc,
 	Opt_err,
 };
@@ -214,6 +215,7 @@ static match_table_t f2fs_tokens = {
 	{Opt_compress_algorithm, "compress_algorithm=%s"},
 	{Opt_compress_log_size, "compress_log_size=%u"},
 	{Opt_compress_extension, "compress_extension=%s"},
+	{Opt_compress_mode, "compress_mode=%s"},
 	{Opt_atgc, "atgc"},
 	{Opt_err, NULL},
 };
@@ -934,10 +936,25 @@ static int parse_options(struct super_block *sb, char *options, bool is_remount)
 			F2FS_OPTION(sbi).compress_ext_cnt++;
 			kfree(name);
 			break;
+		case Opt_compress_mode:
+			name = match_strdup(&args[0]);
+			if (!name)
+				return -ENOMEM;
+			if (!strcmp(name, "fs-based")) {
+				F2FS_OPTION(sbi).compress_mode = COMPR_MODE_FS;
+			} else if (!strcmp(name, "user-based")) {
+				F2FS_OPTION(sbi).compress_mode = COMPR_MODE_USER;
+			} else {
+				kfree(name);
+				return -EINVAL;
+			}
+			kfree(name);
+			break;
 #else
 		case Opt_compress_algorithm:
 		case Opt_compress_log_size:
 		case Opt_compress_extension:
+		case Opt_compress_mode:
 			f2fs_info(sbi, "compression options not supported");
 			break;
 #endif
@@ -1523,6 +1540,11 @@ static inline void f2fs_show_compress_options(struct seq_file *seq,
 		seq_printf(seq, ",compress_extension=%s",
 			F2FS_OPTION(sbi).extensions[i]);
 	}
+
+	if (F2FS_OPTION(sbi).compress_mode == COMPR_MODE_FS)
+		seq_printf(seq, ",compress_mode=%s", "fs-based");
+	else if (F2FS_OPTION(sbi).compress_mode == COMPR_MODE_USER)
+		seq_printf(seq, ",compress_mode=%s", "user-based");
 }
 
 static int f2fs_show_options(struct seq_file *seq, struct dentry *root)
@@ -1672,6 +1694,7 @@ static void default_options(struct f2fs_sb_info *sbi)
 	F2FS_OPTION(sbi).compress_algorithm = COMPRESS_LZ4;
 	F2FS_OPTION(sbi).compress_log_size = MIN_COMPRESS_LOG_SIZE;
 	F2FS_OPTION(sbi).compress_ext_cnt = 0;
+	F2FS_OPTION(sbi).compress_mode = COMPR_MODE_FS;
 	F2FS_OPTION(sbi).bggc_mode = BGGC_MODE_ON;
 
 	sbi->sb->s_flags &= ~SB_INLINECRYPT;
-- 
2.29.2.454.gaff20da3a2-goog



* [PATCH 2/2] f2fs: add F2FS_IOC_DECOMPRESS_FILE and F2FS_IOC_COMPRESS_FILE
  2020-11-23  3:17 [PATCH 1/2] f2fs: add compress_mode mount option Daeho Jeong
@ 2020-11-23  3:17 ` Daeho Jeong
  2020-11-23 17:19   ` [f2fs-dev] " Jaegeuk Kim
                     ` (2 more replies)
  2020-11-23 17:18 ` [f2fs-dev] [PATCH 1/2] f2fs: add compress_mode mount option Jaegeuk Kim
                   ` (2 subsequent siblings)
  3 siblings, 3 replies; 18+ messages in thread
From: Daeho Jeong @ 2020-11-23  3:17 UTC (permalink / raw)
  To: linux-kernel, linux-f2fs-devel, kernel-team; +Cc: Daeho Jeong

From: Daeho Jeong <daehojeong@google.com>

Add two ioctls to explicitly decompress/compress compression-enabled
files when the "compress_mode=user-based" mount option is set.

Using these two ioctls, users can control the compression and
decompression of their files.

Signed-off-by: Daeho Jeong <daehojeong@google.com>
---
 fs/f2fs/file.c            | 181 +++++++++++++++++++++++++++++++++++++-
 include/uapi/linux/f2fs.h |   2 +
 2 files changed, 182 insertions(+), 1 deletion(-)

diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index be8db06aca27..e8f142470e87 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -4026,6 +4026,180 @@ static int f2fs_ioc_set_compress_option(struct file *filp, unsigned long arg)
 	return ret;
 }
 
+static int redirty_blocks(struct inode *inode, pgoff_t page_idx, int len)
+{
+	DEFINE_READAHEAD(ractl, NULL, inode->i_mapping, page_idx);
+	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+	struct address_space *mapping = inode->i_mapping;
+	struct page *page;
+	pgoff_t redirty_idx = page_idx;
+	int i, page_len = 0, ret = 0;
+
+	page_cache_ra_unbounded(&ractl, len, 0);
+
+	for (i = 0; i < len; i++, page_idx++) {
+		page = read_cache_page(mapping, page_idx, NULL, NULL);
+		if (IS_ERR(page)) {
+			ret = PTR_ERR(page);
+			f2fs_warn(sbi, "%s: inode (%lu) : page_index (%lu) "
+				"couldn't be read (errno:%d).\n",
+				__func__, inode->i_ino, page_idx, ret);
+			break;
+		}
+		page_len++;
+	}
+
+	for (i = 0; i < page_len; i++, redirty_idx++) {
+		page = find_lock_page(mapping, redirty_idx);
+		if (!page) {
+			ret = -ENOENT;
+			f2fs_warn(sbi, "%s: inode (%lu) : page_index (%lu) "
+				"couldn't be found (errno:%d).\n",
+				__func__, inode->i_ino, redirty_idx, ret);
+		}
+		set_page_dirty(page);
+		f2fs_put_page(page, 1);
+		f2fs_put_page(page, 0);
+	}
+
+	return ret;
+}
+
+static int f2fs_ioc_decompress_file(struct file *filp, unsigned long arg)
+{
+	struct inode *inode = file_inode(filp);
+	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+	struct f2fs_inode_info *fi = F2FS_I(inode);
+	pgoff_t page_idx = 0, last_idx;
+	int cluster_size = F2FS_I(inode)->i_cluster_size;
+	int count, ret;
+
+	if (!f2fs_sb_has_compression(sbi))
+		return -EOPNOTSUPP;
+
+	if (!(filp->f_mode & FMODE_WRITE))
+		return -EBADF;
+
+	if (!f2fs_compressed_file(inode))
+		return -EINVAL;
+
+	f2fs_balance_fs(F2FS_I_SB(inode), true);
+
+	file_start_write(filp);
+	inode_lock(inode);
+
+	if (f2fs_is_mmap_file(inode)) {
+		ret = -EBUSY;
+		goto out;
+	}
+
+	ret = filemap_write_and_wait_range(inode->i_mapping, 0, LLONG_MAX);
+	if (ret)
+		goto out;
+
+	if (!atomic_read(&fi->i_compr_blocks))
+		goto out;
+
+	last_idx = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
+
+	count = last_idx - page_idx;
+	while (count) {
+		int len = min(cluster_size, count);
+
+		ret = redirty_blocks(inode, page_idx, len);
+
+		if (ret < 0)
+			break;
+
+		page_idx += len;
+		count -= len;
+	}
+
+	if (!ret)
+		ret = filemap_write_and_wait_range(inode->i_mapping, 0,
+							LLONG_MAX);
+
+	if (!ret) {
+		stat_sub_compr_blocks(inode, atomic_read(&fi->i_compr_blocks));
+		atomic_set(&fi->i_compr_blocks, 0);
+		f2fs_mark_inode_dirty_sync(inode, true);
+	} else {
+		f2fs_warn(sbi, "%s: The file might be partially decompressed "
+				"(errno=%d). Please delete the file.\n",
+				__func__, ret);
+	}
+out:
+	inode_unlock(inode);
+	file_end_write(filp);
+
+	return ret;
+}
+
+static int f2fs_ioc_compress_file(struct file *filp, unsigned long arg)
+{
+	struct inode *inode = file_inode(filp);
+	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+	pgoff_t page_idx = 0, last_idx;
+	int cluster_size = F2FS_I(inode)->i_cluster_size;
+	int count, ret;
+
+	if (!f2fs_sb_has_compression(sbi))
+		return -EOPNOTSUPP;
+
+	if (!(filp->f_mode & FMODE_WRITE))
+		return -EBADF;
+
+	if (!f2fs_compressed_file(inode))
+		return -EINVAL;
+
+	f2fs_balance_fs(F2FS_I_SB(inode), true);
+
+	file_start_write(filp);
+	inode_lock(inode);
+
+	if (f2fs_is_mmap_file(inode)) {
+		ret = -EBUSY;
+		goto out;
+	}
+
+	ret = filemap_write_and_wait_range(inode->i_mapping, 0, LLONG_MAX);
+	if (ret)
+		goto out;
+
+	set_inode_flag(inode, FI_ENABLE_COMPRESS);
+
+	last_idx = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
+
+	count = last_idx - page_idx;
+	while (count) {
+		int len = min(cluster_size, count);
+
+		ret = redirty_blocks(inode, page_idx, len);
+
+		if (ret < 0)
+			break;
+
+		page_idx += len;
+		count -= len;
+	}
+
+	if (!ret)
+		ret = filemap_write_and_wait_range(inode->i_mapping, 0,
+							LLONG_MAX);
+
+	clear_inode_flag(inode, FI_ENABLE_COMPRESS);
+
+	if (ret)
+		f2fs_warn(sbi, "%s: The file might be partially compressed "
+				"(errno=%d). Please delete the file.\n",
+				__func__, ret);
+out:
+	inode_unlock(inode);
+	file_end_write(filp);
+
+	return ret;
+}
+
 static long __f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 {
 	switch (cmd) {
@@ -4113,6 +4287,10 @@ static long __f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 		return f2fs_ioc_get_compress_option(filp, arg);
 	case F2FS_IOC_SET_COMPRESS_OPTION:
 		return f2fs_ioc_set_compress_option(filp, arg);
+	case F2FS_IOC_DECOMPRESS_FILE:
+		return f2fs_ioc_decompress_file(filp, arg);
+	case F2FS_IOC_COMPRESS_FILE:
+		return f2fs_ioc_compress_file(filp, arg);
 	default:
 		return -ENOTTY;
 	}
@@ -4352,7 +4530,8 @@ long f2fs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 	case F2FS_IOC_SEC_TRIM_FILE:
 	case F2FS_IOC_GET_COMPRESS_OPTION:
 	case F2FS_IOC_SET_COMPRESS_OPTION:
-		break;
+	case F2FS_IOC_DECOMPRESS_FILE:
+	case F2FS_IOC_COMPRESS_FILE:
 	default:
 		return -ENOIOCTLCMD;
 	}
diff --git a/include/uapi/linux/f2fs.h b/include/uapi/linux/f2fs.h
index f00199a2e38b..352a822d4370 100644
--- a/include/uapi/linux/f2fs.h
+++ b/include/uapi/linux/f2fs.h
@@ -40,6 +40,8 @@
 						struct f2fs_comp_option)
 #define F2FS_IOC_SET_COMPRESS_OPTION	_IOW(F2FS_IOCTL_MAGIC, 22,	\
 						struct f2fs_comp_option)
+#define F2FS_IOC_DECOMPRESS_FILE	_IO(F2FS_IOCTL_MAGIC, 23)
+#define F2FS_IOC_COMPRESS_FILE		_IO(F2FS_IOCTL_MAGIC, 24)
 
 /*
  * should be same as XFS_IOC_GOINGDOWN.
-- 
2.29.2.454.gaff20da3a2-goog



* Re: [f2fs-dev] [PATCH 1/2] f2fs: add compress_mode mount option
  2020-11-23  3:17 [PATCH 1/2] f2fs: add compress_mode mount option Daeho Jeong
  2020-11-23  3:17 ` [PATCH 2/2] f2fs: add F2FS_IOC_DECOMPRESS_FILE and F2FS_IOC_COMPRESS_FILE Daeho Jeong
@ 2020-11-23 17:18 ` Jaegeuk Kim
  2020-11-23 18:46 ` Eric Biggers
  2020-11-24  2:16 ` Chao Yu
  3 siblings, 0 replies; 18+ messages in thread
From: Jaegeuk Kim @ 2020-11-23 17:18 UTC (permalink / raw)
  To: Daeho Jeong; +Cc: linux-kernel, linux-f2fs-devel, kernel-team, Daeho Jeong

On 11/23, Daeho Jeong wrote:
> From: Daeho Jeong <daehojeong@google.com>
> 
> Add a new "compress_mode" mount option to control the file
> compression mode. It supports "fs-based" and "user-based".
> In "fs-based" mode (default), f2fs does automatic compression on
> compression-enabled files. In "user-based" mode, f2fs disables
> automatic compression and gives the user discretion in choosing
> the target files and the timing, meaning the user can do manual
> compression/decompression on compression-enabled files using ioctls.
> 
> Signed-off-by: Daeho Jeong <daehojeong@google.com>
> ---
>  Documentation/filesystems/f2fs.rst |  7 +++++++
>  fs/f2fs/data.c                     | 10 +++++-----
>  fs/f2fs/f2fs.h                     | 30 ++++++++++++++++++++++++++++++
>  fs/f2fs/segment.c                  |  2 +-
>  fs/f2fs/super.c                    | 23 +++++++++++++++++++++++
>  5 files changed, 66 insertions(+), 6 deletions(-)
> 
> diff --git a/Documentation/filesystems/f2fs.rst b/Documentation/filesystems/f2fs.rst
> index b8ee761c9922..0679c53d5012 100644
> --- a/Documentation/filesystems/f2fs.rst
> +++ b/Documentation/filesystems/f2fs.rst
> @@ -260,6 +260,13 @@ compress_extension=%s	 Support adding specified extension, so that f2fs can enab
>  			 For other files, we can still enable compression via ioctl.
>  			 Note that, there is one reserved special extension '*', it
>  			 can be set to enable compression for all files.
> +compress_mode=%s	 Control file compression mode. This supports "fs-based" and
> +			 "user-based". In "fs-based" mode (default), f2fs does

I think "fs" and "user" would be enough.

> +			 automatic compression on the compression enabled files.
> +			 In "user-based" mode, f2fs disables the automatic compression
> +			 and gives the user discretion of choosing the target file and
> +			 the timing. The user can do manual compression/decompression
> +			 on the compression enabled files using ioctls.
>  inlinecrypt		 When possible, encrypt/decrypt the contents of encrypted
>  			 files using the blk-crypto framework rather than
>  			 filesystem-layer encryption. This allows the use of
> _______________________________________________
> Linux-f2fs-devel mailing list
> Linux-f2fs-devel@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel


* Re: [f2fs-dev] [PATCH 2/2] f2fs: add F2FS_IOC_DECOMPRESS_FILE and F2FS_IOC_COMPRESS_FILE
  2020-11-23  3:17 ` [PATCH 2/2] f2fs: add F2FS_IOC_DECOMPRESS_FILE and F2FS_IOC_COMPRESS_FILE Daeho Jeong
@ 2020-11-23 17:19   ` Jaegeuk Kim
  2020-11-23 18:48   ` Eric Biggers
  2020-11-24  3:05   ` Chao Yu
  2 siblings, 0 replies; 18+ messages in thread
From: Jaegeuk Kim @ 2020-11-23 17:19 UTC (permalink / raw)
  To: Daeho Jeong; +Cc: linux-kernel, linux-f2fs-devel, kernel-team, Daeho Jeong

On 11/23, Daeho Jeong wrote:
> From: Daeho Jeong <daehojeong@google.com>
> 
> Add two ioctls to explicitly decompress/compress compression-enabled
> files when the "compress_mode=user-based" mount option is set.
> 
> Using these two ioctls, users can control the compression and
> decompression of their files.
> 
> Signed-off-by: Daeho Jeong <daehojeong@google.com>
> @@ -4352,7 +4530,8 @@ long f2fs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
>  	case F2FS_IOC_SEC_TRIM_FILE:
>  	case F2FS_IOC_GET_COMPRESS_OPTION:
>  	case F2FS_IOC_SET_COMPRESS_OPTION:
> -		break;
> +	case F2FS_IOC_DECOMPRESS_FILE:
> +	case F2FS_IOC_COMPRESS_FILE:

		break; ?

>  	default:
>  		return -ENOIOCTLCMD;
>  	}
> diff --git a/include/uapi/linux/f2fs.h b/include/uapi/linux/f2fs.h
> index f00199a2e38b..352a822d4370 100644
> --- a/include/uapi/linux/f2fs.h
> +++ b/include/uapi/linux/f2fs.h
> @@ -40,6 +40,8 @@
>  						struct f2fs_comp_option)
>  #define F2FS_IOC_SET_COMPRESS_OPTION	_IOW(F2FS_IOCTL_MAGIC, 22,	\
>  						struct f2fs_comp_option)
> +#define F2FS_IOC_DECOMPRESS_FILE	_IO(F2FS_IOCTL_MAGIC, 23)
> +#define F2FS_IOC_COMPRESS_FILE		_IO(F2FS_IOCTL_MAGIC, 24)
>  
>  /*
>   * should be same as XFS_IOC_GOINGDOWN.
> -- 
> 2.29.2.454.gaff20da3a2-goog
> 
> 
> 
> _______________________________________________
> Linux-f2fs-devel mailing list
> Linux-f2fs-devel@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [f2fs-dev] [PATCH 1/2] f2fs: add compress_mode mount option
  2020-11-23  3:17 [PATCH 1/2] f2fs: add compress_mode mount option Daeho Jeong
  2020-11-23  3:17 ` [PATCH 2/2] f2fs: add F2FS_IOC_DECOMPRESS_FILE and F2FS_IOC_COMPRESS_FILE Daeho Jeong
  2020-11-23 17:18 ` [f2fs-dev] [PATCH 1/2] f2fs: add compress_mode mount option Jaegeuk Kim
@ 2020-11-23 18:46 ` Eric Biggers
  2020-11-23 23:03   ` Daeho Jeong
  2020-11-24  2:16 ` Chao Yu
  3 siblings, 1 reply; 18+ messages in thread
From: Eric Biggers @ 2020-11-23 18:46 UTC (permalink / raw)
  To: Daeho Jeong; +Cc: linux-kernel, linux-f2fs-devel, kernel-team, Daeho Jeong

On Mon, Nov 23, 2020 at 12:17:50PM +0900, Daeho Jeong wrote:
> From: Daeho Jeong <daehojeong@google.com>
> 
> We will add a new "compress_mode" mount option to control file
> compression mode. This supports "fs-based" and "user-based".
> In "fs-based" mode (default), f2fs does automatic compression on
> the compression enabled files. In "user-based" mode, f2fs disables
> the automatic compression and gives the user discretion of choosing
> the target file and the timing. It means the user can do manual
> compression/decompression on the compression enabled files using ioctls.
> 
> Signed-off-by: Daeho Jeong <daehojeong@google.com>
> ---
>  Documentation/filesystems/f2fs.rst |  7 +++++++
>  fs/f2fs/data.c                     | 10 +++++-----
>  fs/f2fs/f2fs.h                     | 30 ++++++++++++++++++++++++++++++
>  fs/f2fs/segment.c                  |  2 +-
>  fs/f2fs/super.c                    | 23 +++++++++++++++++++++++
>  5 files changed, 66 insertions(+), 6 deletions(-)
> 
> diff --git a/Documentation/filesystems/f2fs.rst b/Documentation/filesystems/f2fs.rst
> index b8ee761c9922..0679c53d5012 100644
> --- a/Documentation/filesystems/f2fs.rst
> +++ b/Documentation/filesystems/f2fs.rst
> @@ -260,6 +260,13 @@ compress_extension=%s	 Support adding specified extension, so that f2fs can enab
>  			 For other files, we can still enable compression via ioctl.
>  			 Note that, there is one reserved special extension '*', it
>  			 can be set to enable compression for all files.
> +compress_mode=%s	 Control file compression mode. This supports "fs-based" and
> +			 "user-based". In "fs-based" mode (default), f2fs does
> +			 automatic compression on the compression enabled files.
> +			 In "user-based" mode, f2fs disables the automatic compression
> +			 and gives the user discretion of choosing the target file and
> +			 the timing. The user can do manual compression/decompression
> +			 on the compression enabled files using ioctls.

Please clarify in the documentation what it means for compression-enabled files
to not be compressed.  It is not obvious.

- Eric

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [f2fs-dev] [PATCH 2/2] f2fs: add F2FS_IOC_DECOMPRESS_FILE and F2FS_IOC_COMPRESS_FILE
  2020-11-23  3:17 ` [PATCH 2/2] f2fs: add F2FS_IOC_DECOMPRESS_FILE and F2FS_IOC_COMPRESS_FILE Daeho Jeong
  2020-11-23 17:19   ` [f2fs-dev] " Jaegeuk Kim
@ 2020-11-23 18:48   ` Eric Biggers
  2020-11-23 23:02     ` Daeho Jeong
  2020-11-24  3:05   ` Chao Yu
  2 siblings, 1 reply; 18+ messages in thread
From: Eric Biggers @ 2020-11-23 18:48 UTC (permalink / raw)
  To: Daeho Jeong; +Cc: linux-kernel, linux-f2fs-devel, kernel-team, Daeho Jeong

On Mon, Nov 23, 2020 at 12:17:51PM +0900, Daeho Jeong wrote:
> From: Daeho Jeong <daehojeong@google.com>
> 
> Added two ioctls to explicitly decompress/compress the compression
> enabled files under the "compress_mode=user-based" mount option.
> 
> Using these two ioctls, users can control the compression and
> decompression of their files.
> 
> Signed-off-by: Daeho Jeong <daehojeong@google.com>
> ---
>  fs/f2fs/file.c            | 181 +++++++++++++++++++++++++++++++++++++-
>  include/uapi/linux/f2fs.h |   2 +
>  2 files changed, 182 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
> index be8db06aca27..e8f142470e87 100644
> --- a/fs/f2fs/file.c
> +++ b/fs/f2fs/file.c
> @@ -4026,6 +4026,180 @@ static int f2fs_ioc_set_compress_option(struct file *filp, unsigned long arg)
>  	return ret;
>  }
>  
> +static int redirty_blocks(struct inode *inode, pgoff_t page_idx, int len)
> +{
> +	DEFINE_READAHEAD(ractl, NULL, inode->i_mapping, page_idx);
> +	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
> +	struct address_space *mapping = inode->i_mapping;
> +	struct page *page;
> +	pgoff_t redirty_idx = page_idx;
> +	int i, page_len = 0, ret = 0;
> +
> +	page_cache_ra_unbounded(&ractl, len, 0);

Using page_cache_ra_unbounded() here looks wrong.  See the comment above
page_cache_ra_unbounded().

>  static long __f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
>  {
>  	switch (cmd) {
> @@ -4113,6 +4287,10 @@ static long __f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
>  		return f2fs_ioc_get_compress_option(filp, arg);
>  	case F2FS_IOC_SET_COMPRESS_OPTION:
>  		return f2fs_ioc_set_compress_option(filp, arg);
> +	case F2FS_IOC_DECOMPRESS_FILE:
> +		return f2fs_ioc_decompress_file(filp, arg);
> +	case F2FS_IOC_COMPRESS_FILE:
> +		return f2fs_ioc_compress_file(filp, arg);
>  	default:
>  		return -ENOTTY;
>  	}

Where is the documentation and tests for these new ioctls?

- Eric

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [f2fs-dev] [PATCH 2/2] f2fs: add F2FS_IOC_DECOMPRESS_FILE and F2FS_IOC_COMPRESS_FILE
  2020-11-23 18:48   ` Eric Biggers
@ 2020-11-23 23:02     ` Daeho Jeong
  2020-11-23 23:29       ` Eric Biggers
  0 siblings, 1 reply; 18+ messages in thread
From: Daeho Jeong @ 2020-11-23 23:02 UTC (permalink / raw)
  To: Eric Biggers; +Cc: linux-kernel, linux-f2fs-devel, kernel-team, Daeho Jeong

Jaegeuk,

My mistake~

Eric,

What I want is something like do_page_cache_ra(), but I used
page_cache_ra_unbounded() directly, because we already checked that
the read is within i_size.
Or we could use do_page_cache_ra(), but it would do the same check again.
What do you think?

I could add some description about these in
Documentation/filesystems/f2fs.rst and I implemented tests internally.

On Tue, Nov 24, 2020 at 3:48 AM, Eric Biggers <ebiggers@kernel.org> wrote:
>
> On Mon, Nov 23, 2020 at 12:17:51PM +0900, Daeho Jeong wrote:
> > From: Daeho Jeong <daehojeong@google.com>
> >
> > Added two ioctls to explicitly decompress/compress the compression
> > enabled files under the "compress_mode=user-based" mount option.
> >
> > Using these two ioctls, users can control the compression and
> > decompression of their files.
> >
> > Signed-off-by: Daeho Jeong <daehojeong@google.com>
> > ---
> >  fs/f2fs/file.c            | 181 +++++++++++++++++++++++++++++++++++++-
> >  include/uapi/linux/f2fs.h |   2 +
> >  2 files changed, 182 insertions(+), 1 deletion(-)
> >
> > diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
> > index be8db06aca27..e8f142470e87 100644
> > --- a/fs/f2fs/file.c
> > +++ b/fs/f2fs/file.c
> > @@ -4026,6 +4026,180 @@ static int f2fs_ioc_set_compress_option(struct file *filp, unsigned long arg)
> >       return ret;
> >  }
> >
> > +static int redirty_blocks(struct inode *inode, pgoff_t page_idx, int len)
> > +{
> > +     DEFINE_READAHEAD(ractl, NULL, inode->i_mapping, page_idx);
> > +     struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
> > +     struct address_space *mapping = inode->i_mapping;
> > +     struct page *page;
> > +     pgoff_t redirty_idx = page_idx;
> > +     int i, page_len = 0, ret = 0;
> > +
> > +     page_cache_ra_unbounded(&ractl, len, 0);
>
> Using page_cache_ra_unbounded() here looks wrong.  See the comment above
> page_cache_ra_unbounded().
>
> >  static long __f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
> >  {
> >       switch (cmd) {
> > @@ -4113,6 +4287,10 @@ static long __f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
> >               return f2fs_ioc_get_compress_option(filp, arg);
> >       case F2FS_IOC_SET_COMPRESS_OPTION:
> >               return f2fs_ioc_set_compress_option(filp, arg);
> > +     case F2FS_IOC_DECOMPRESS_FILE:
> > +             return f2fs_ioc_decompress_file(filp, arg);
> > +     case F2FS_IOC_COMPRESS_FILE:
> > +             return f2fs_ioc_compress_file(filp, arg);
> >       default:
> >               return -ENOTTY;
> >       }
>
> Where is the documentation and tests for these new ioctls?
>
> - Eric

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [f2fs-dev] [PATCH 1/2] f2fs: add compress_mode mount option
  2020-11-23 18:46 ` Eric Biggers
@ 2020-11-23 23:03   ` Daeho Jeong
  0 siblings, 0 replies; 18+ messages in thread
From: Daeho Jeong @ 2020-11-23 23:03 UTC (permalink / raw)
  To: Eric Biggers; +Cc: linux-kernel, linux-f2fs-devel, kernel-team, Daeho Jeong

Jaegeuk,

Got it.

Eric,

Yep.

On Tue, Nov 24, 2020 at 3:46 AM, Eric Biggers <ebiggers@kernel.org> wrote:
>
> On Mon, Nov 23, 2020 at 12:17:50PM +0900, Daeho Jeong wrote:
> > From: Daeho Jeong <daehojeong@google.com>
> >
> > We will add a new "compress_mode" mount option to control file
> > compression mode. This supports "fs-based" and "user-based".
> > In "fs-based" mode (default), f2fs does automatic compression on
> > the compression enabled files. In "user-based" mode, f2fs disables
> > the automatic compression and gives the user discretion of choosing
> > the target file and the timing. It means the user can do manual
> > compression/decompression on the compression enabled files using ioctls.
> >
> > Signed-off-by: Daeho Jeong <daehojeong@google.com>
> > ---
> >  Documentation/filesystems/f2fs.rst |  7 +++++++
> >  fs/f2fs/data.c                     | 10 +++++-----
> >  fs/f2fs/f2fs.h                     | 30 ++++++++++++++++++++++++++++++
> >  fs/f2fs/segment.c                  |  2 +-
> >  fs/f2fs/super.c                    | 23 +++++++++++++++++++++++
> >  5 files changed, 66 insertions(+), 6 deletions(-)
> >
> > diff --git a/Documentation/filesystems/f2fs.rst b/Documentation/filesystems/f2fs.rst
> > index b8ee761c9922..0679c53d5012 100644
> > --- a/Documentation/filesystems/f2fs.rst
> > +++ b/Documentation/filesystems/f2fs.rst
> > @@ -260,6 +260,13 @@ compress_extension=%s     Support adding specified extension, so that f2fs can enab
> >                        For other files, we can still enable compression via ioctl.
> >                        Note that, there is one reserved special extension '*', it
> >                        can be set to enable compression for all files.
> > +compress_mode=%s      Control file compression mode. This supports "fs-based" and
> > +                      "user-based". In "fs-based" mode (default), f2fs does
> > +                      automatic compression on the compression enabled files.
> > +                      In "user-based" mode, f2fs disables the automatic compression
> > +                      and gives the user discretion of choosing the target file and
> > +                      the timing. The user can do manual compression/decompression
> > +                      on the compression enabled files using ioctls.
>
> Please clarify in the documentation what it means for compression-enabled files
> to not be compressed.  It is not obvious.
>
> - Eric

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [f2fs-dev] [PATCH 2/2] f2fs: add F2FS_IOC_DECOMPRESS_FILE and F2FS_IOC_COMPRESS_FILE
  2020-11-23 23:02     ` Daeho Jeong
@ 2020-11-23 23:29       ` Eric Biggers
  2020-11-24  1:03         ` Daeho Jeong
  0 siblings, 1 reply; 18+ messages in thread
From: Eric Biggers @ 2020-11-23 23:29 UTC (permalink / raw)
  To: Daeho Jeong; +Cc: linux-kernel, linux-f2fs-devel, kernel-team, Daeho Jeong

On Tue, Nov 24, 2020 at 08:02:21AM +0900, Daeho Jeong wrote:
> Jaegeuk,
> 
> My mistake~
> 
> Eric,
> 
> What I want is something like do_page_cache_ra(), but I used
> page_cache_ra_unbounded() directly, because we already checked that
> the read is within i_size.
>
> Or we could use do_page_cache_ra(), but it would do the same check again.
> What do you think?

page_cache_ra_unbounded() is basically a quirk for how fs-verity is implemented
in ext4 and f2fs.  I don't think people would be happy if it's used in other
cases, where it's not needed.  Checking against i_size multiple times is fine.

> 
> I could add some description about these in
> Documentation/filesystems/f2fs.rst and I implemented tests internally.

Documentation in f2fs.rst sounds good.  All the f2fs ioctls should be
documented there.

The tests should be runnable by any kernel developer; "internal" tests aren't
very useful.  Could you add tests to xfstests?

- Eric

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [f2fs-dev] [PATCH 2/2] f2fs: add F2FS_IOC_DECOMPRESS_FILE and F2FS_IOC_COMPRESS_FILE
  2020-11-23 23:29       ` Eric Biggers
@ 2020-11-24  1:03         ` Daeho Jeong
  0 siblings, 0 replies; 18+ messages in thread
From: Daeho Jeong @ 2020-11-24  1:03 UTC (permalink / raw)
  To: Eric Biggers; +Cc: linux-kernel, linux-f2fs-devel, kernel-team, Daeho Jeong

On Tue, Nov 24, 2020 at 8:29 AM, Eric Biggers <ebiggers@kernel.org> wrote:
>
> On Tue, Nov 24, 2020 at 08:02:21AM +0900, Daeho Jeong wrote:
> > Jaegeuk,
> >
> > My mistake~
> >
> > Eric,
> >
> > What I want is something like do_page_cache_ra(), but I used
> > page_cache_ra_unbounded() directly, because we already checked that
> > the read is within i_size.
> >
> > Or we could use do_page_cache_ra(), but it would do the same check again.
> > What do you think?
>
> page_cache_ra_unbounded() is basically a quirk for how fs-verity is implemented
> in ext4 and f2fs.  I don't think people would be happy if it's used in other
> cases, where it's not needed.  Checking against i_size multiple times is fine.
>

Got your point. Thanks.

> >
> > I could add some description about these in
> > Documentation/filesystems/f2fs.rst and I implemented tests internally.
>
> Documentation in f2fs.rst sounds good.  All the f2fs ioctls should be
> documented there.
>
> The tests should be runnable by any kernel developer; "internal" tests aren't
> very useful.  Could you add tests to xfstests?
>

Yes, I'll add all the internal test cases to xfstests soon~

> - Eric

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [f2fs-dev] [PATCH 1/2] f2fs: add compress_mode mount option
  2020-11-23  3:17 [PATCH 1/2] f2fs: add compress_mode mount option Daeho Jeong
                   ` (2 preceding siblings ...)
  2020-11-23 18:46 ` Eric Biggers
@ 2020-11-24  2:16 ` Chao Yu
  3 siblings, 0 replies; 18+ messages in thread
From: Chao Yu @ 2020-11-24  2:16 UTC (permalink / raw)
  To: Daeho Jeong, linux-kernel, linux-f2fs-devel, kernel-team; +Cc: Daeho Jeong

On 2020/11/23 11:17, Daeho Jeong wrote:
> From: Daeho Jeong <daehojeong@google.com>
> 
> We will add a new "compress_mode" mount option to control file
> compression mode. This supports "fs-based" and "user-based".
> In "fs-based" mode (default), f2fs does automatic compression on
> the compression enabled files. In "user-based" mode, f2fs disables
> > the automatic compression and gives the user discretion of choosing
> the target file and the timing. It means the user can do manual
> compression/decompression on the compression enabled files using ioctls.
> 
> Signed-off-by: Daeho Jeong <daehojeong@google.com>
> ---
>   Documentation/filesystems/f2fs.rst |  7 +++++++
>   fs/f2fs/data.c                     | 10 +++++-----
>   fs/f2fs/f2fs.h                     | 30 ++++++++++++++++++++++++++++++
>   fs/f2fs/segment.c                  |  2 +-
>   fs/f2fs/super.c                    | 23 +++++++++++++++++++++++
>   5 files changed, 66 insertions(+), 6 deletions(-)
> 
> diff --git a/Documentation/filesystems/f2fs.rst b/Documentation/filesystems/f2fs.rst
> index b8ee761c9922..0679c53d5012 100644
> --- a/Documentation/filesystems/f2fs.rst
> +++ b/Documentation/filesystems/f2fs.rst
> @@ -260,6 +260,13 @@ compress_extension=%s	 Support adding specified extension, so that f2fs can enab
>   			 For other files, we can still enable compression via ioctl.
>   			 Note that, there is one reserved special extension '*', it
>   			 can be set to enable compression for all files.
> +compress_mode=%s	 Control file compression mode. This supports "fs-based" and
> +			 "user-based". In "fs-based" mode (default), f2fs does
> +			 automatic compression on the compression enabled files.
> +			 In "user-based" mode, f2fs disables the automatic compression
> +			 and gives the user discretion of choosing the target file and
> +			 the timing. The user can do manual compression/decompression
> +			 on the compression enabled files using ioctls.
>   inlinecrypt		 When possible, encrypt/decrypt the contents of encrypted
>   			 files using the blk-crypto framework rather than
>   			 filesystem-layer encryption. This allows the use of
> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> index be4da52604ed..69370f0073dd 100644
> --- a/fs/f2fs/data.c
> +++ b/fs/f2fs/data.c
> @@ -2896,7 +2896,7 @@ static int f2fs_write_data_page(struct page *page,
>   	if (unlikely(f2fs_cp_error(F2FS_I_SB(inode))))
>   		goto out;
>   
> -	if (f2fs_compressed_file(inode)) {
> +	if (f2fs_need_compress_write(inode)) {
>   		if (f2fs_is_compressed_cluster(inode, page->index)) {
>   			redirty_page_for_writepage(wbc, page);
>   			return AOP_WRITEPAGE_ACTIVATE;
> @@ -2988,7 +2988,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>   readd:
>   			need_readd = false;
>   #ifdef CONFIG_F2FS_FS_COMPRESSION
> -			if (f2fs_compressed_file(inode)) {
> +			if (f2fs_need_compress_write(inode)) {
>   				ret = f2fs_init_compress_ctx(&cc);
>   				if (ret) {
>   					done = 1;
> @@ -3067,7 +3067,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>   				goto continue_unlock;
>   
>   #ifdef CONFIG_F2FS_FS_COMPRESSION
> -			if (f2fs_compressed_file(inode)) {
> +			if (f2fs_need_compress_write(inode)) {
>   				get_page(page);
>   				f2fs_compress_ctx_add_page(&cc, page);
>   				continue;
> @@ -3120,7 +3120,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>   	}
>   #ifdef CONFIG_F2FS_FS_COMPRESSION
>   	/* flush remained pages in compress cluster */
> -	if (f2fs_compressed_file(inode) && !f2fs_cluster_is_empty(&cc)) {
> +	if (f2fs_need_compress_write(inode) && !f2fs_cluster_is_empty(&cc)) {
>   		ret = f2fs_write_multi_pages(&cc, &submitted, wbc, io_type);
>   		nwritten += submitted;
>   		wbc->nr_to_write -= submitted;
> @@ -3164,7 +3164,7 @@ static inline bool __should_serialize_io(struct inode *inode,
>   	if (IS_NOQUOTA(inode))
>   		return false;
>   
> -	if (f2fs_compressed_file(inode))
> +	if (f2fs_need_compress_write(inode))
>   		return true;
>   	if (wbc->sync_mode != WB_SYNC_ALL)
>   		return true;
> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> index e0826779a101..88e012d07ad5 100644
> --- a/fs/f2fs/f2fs.h
> +++ b/fs/f2fs/f2fs.h
> @@ -149,6 +149,7 @@ struct f2fs_mount_info {
>   	unsigned char compress_algorithm;	/* algorithm type */
>   	unsigned compress_log_size;		/* cluster log size */
>   	unsigned char compress_ext_cnt;		/* extension count */
> +	int compress_mode;			/* compression mode */
>   	unsigned char extensions[COMPRESS_EXT_NUM][F2FS_EXTENSION_LEN];	/* extensions */
>   };
>   
> @@ -677,6 +678,7 @@ enum {
>   	FI_VERITY_IN_PROGRESS,	/* building fs-verity Merkle tree */
>   	FI_COMPRESSED_FILE,	/* indicate file's data can be compressed */
>   	FI_MMAP_FILE,		/* indicate file was mmapped */
> +	FI_ENABLE_COMPRESS,	/* enable compression in user-based compression mode */
>   	FI_MAX,			/* max flag, never be used */
>   };
>   
> @@ -1243,6 +1245,18 @@ enum fsync_mode {
>   	FSYNC_MODE_NOBARRIER,	/* fsync behaves nobarrier based on posix */
>   };
>   
> +enum {
> +	COMPR_MODE_FS,		/*
> +				 * automatically compress compression
> +				 * enabled files
> +				 */
> +	COMPR_MODE_USER,	/*
> +				 * automatic compression is disabled.
> +				 * user can control the file compression
> +				 * using ioctls
> +				 */
> +};
> +
>   /*
>    * this value is set in page as a private data which indicate that
>    * the page is atomically written, and it is in inmem_pages list.
> @@ -2752,6 +2766,22 @@ static inline int f2fs_compressed_file(struct inode *inode)
>   		is_inode_flag_set(inode, FI_COMPRESSED_FILE);
>   }
>   
> +static inline int f2fs_need_compress_write(struct inode *inode)

Would f2fs_need_compress_data() be a more suitable name?

> +{
> +	int compress_mode = F2FS_OPTION(F2FS_I_SB(inode)).compress_mode;
> +
> +	if (!f2fs_compressed_file(inode))
> +		return 0;
> +
> +	if (compress_mode == COMPR_MODE_FS)
> +		return 1;
> +	else if (compress_mode == COMPR_MODE_USER &&
> +			is_inode_flag_set(inode, FI_ENABLE_COMPRESS))
> +		return 1;
> +
> +	return 0;

Can we use bool type for return value?

Thanks,

> +}
> +
>   static inline unsigned int addrs_per_inode(struct inode *inode)
>   {
>   	unsigned int addrs = CUR_ADDRS_PER_INODE(inode) -
> diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
> index 1596502f7375..652ca049bb7e 100644
> --- a/fs/f2fs/segment.c
> +++ b/fs/f2fs/segment.c
> @@ -3254,7 +3254,7 @@ static int __get_segment_type_6(struct f2fs_io_info *fio)
>   			else
>   				return CURSEG_COLD_DATA;
>   		}
> -		if (file_is_cold(inode) || f2fs_compressed_file(inode))
> +		if (file_is_cold(inode) || f2fs_need_compress_write(inode))
>   			return CURSEG_COLD_DATA;
>   		if (file_is_hot(inode) ||
>   				is_inode_flag_set(inode, FI_HOT_DATA) ||
> diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
> index 87f7a6e86370..ea2385aa7f48 100644
> --- a/fs/f2fs/super.c
> +++ b/fs/f2fs/super.c
> @@ -146,6 +146,7 @@ enum {
>   	Opt_compress_algorithm,
>   	Opt_compress_log_size,
>   	Opt_compress_extension,
> +	Opt_compress_mode,
>   	Opt_atgc,
>   	Opt_err,
>   };
> @@ -214,6 +215,7 @@ static match_table_t f2fs_tokens = {
>   	{Opt_compress_algorithm, "compress_algorithm=%s"},
>   	{Opt_compress_log_size, "compress_log_size=%u"},
>   	{Opt_compress_extension, "compress_extension=%s"},
> +	{Opt_compress_mode, "compress_mode=%s"},
>   	{Opt_atgc, "atgc"},
>   	{Opt_err, NULL},
>   };
> @@ -934,10 +936,25 @@ static int parse_options(struct super_block *sb, char *options, bool is_remount)
>   			F2FS_OPTION(sbi).compress_ext_cnt++;
>   			kfree(name);
>   			break;
> +		case Opt_compress_mode:
> +			name = match_strdup(&args[0]);
> +			if (!name)
> +				return -ENOMEM;
> +			if (!strcmp(name, "fs-based")) {
> +				F2FS_OPTION(sbi).compress_mode = COMPR_MODE_FS;
> +			} else if (!strcmp(name, "user-based")) {
> +				F2FS_OPTION(sbi).compress_mode = COMPR_MODE_USER;
> +			} else {
> +				kfree(name);
> +				return -EINVAL;
> +			}
> +			kfree(name);
> +			break;
>   #else
>   		case Opt_compress_algorithm:
>   		case Opt_compress_log_size:
>   		case Opt_compress_extension:
> +		case Opt_compress_mode:
>   			f2fs_info(sbi, "compression options not supported");
>   			break;
>   #endif
> @@ -1523,6 +1540,11 @@ static inline void f2fs_show_compress_options(struct seq_file *seq,
>   		seq_printf(seq, ",compress_extension=%s",
>   			F2FS_OPTION(sbi).extensions[i]);
>   	}
> +
> +	if (F2FS_OPTION(sbi).compress_mode == COMPR_MODE_FS)
> +		seq_printf(seq, ",compress_mode=%s", "fs-based");
> +	else if (F2FS_OPTION(sbi).compress_mode == COMPR_MODE_USER)
> +		seq_printf(seq, ",compress_mode=%s", "user-based");
>   }
>   
>   static int f2fs_show_options(struct seq_file *seq, struct dentry *root)
> @@ -1672,6 +1694,7 @@ static void default_options(struct f2fs_sb_info *sbi)
>   	F2FS_OPTION(sbi).compress_algorithm = COMPRESS_LZ4;
>   	F2FS_OPTION(sbi).compress_log_size = MIN_COMPRESS_LOG_SIZE;
>   	F2FS_OPTION(sbi).compress_ext_cnt = 0;
> +	F2FS_OPTION(sbi).compress_mode = COMPR_MODE_FS;
>   	F2FS_OPTION(sbi).bggc_mode = BGGC_MODE_ON;
>   
>   	sbi->sb->s_flags &= ~SB_INLINECRYPT;
> 

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [f2fs-dev] [PATCH 2/2] f2fs: add F2FS_IOC_DECOMPRESS_FILE and F2FS_IOC_COMPRESS_FILE
  2020-11-23  3:17 ` [PATCH 2/2] f2fs: add F2FS_IOC_DECOMPRESS_FILE and F2FS_IOC_COMPRESS_FILE Daeho Jeong
  2020-11-23 17:19   ` [f2fs-dev] " Jaegeuk Kim
  2020-11-23 18:48   ` Eric Biggers
@ 2020-11-24  3:05   ` Chao Yu
  2020-11-26  5:04     ` Daeho Jeong
  2 siblings, 1 reply; 18+ messages in thread
From: Chao Yu @ 2020-11-24  3:05 UTC (permalink / raw)
  To: Daeho Jeong, linux-kernel, linux-f2fs-devel, kernel-team; +Cc: Daeho Jeong

On 2020/11/23 11:17, Daeho Jeong wrote:
> From: Daeho Jeong <daehojeong@google.com>
> 
> Added two ioctls to explicitly decompress/compress the compression
> enabled files under the "compress_mode=user-based" mount option.
> 
> Using these two ioctls, users can control the compression and
> decompression of their files.
> 
> Signed-off-by: Daeho Jeong <daehojeong@google.com>
> ---
>   fs/f2fs/file.c            | 181 +++++++++++++++++++++++++++++++++++++-
>   include/uapi/linux/f2fs.h |   2 +
>   2 files changed, 182 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
> index be8db06aca27..e8f142470e87 100644
> --- a/fs/f2fs/file.c
> +++ b/fs/f2fs/file.c
> @@ -4026,6 +4026,180 @@ static int f2fs_ioc_set_compress_option(struct file *filp, unsigned long arg)
>   	return ret;
>   }
>   
> +static int redirty_blocks(struct inode *inode, pgoff_t page_idx, int len)
> +{
> +	DEFINE_READAHEAD(ractl, NULL, inode->i_mapping, page_idx);
> +	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
> +	struct address_space *mapping = inode->i_mapping;
> +	struct page *page;
> +	pgoff_t redirty_idx = page_idx;
> +	int i, page_len = 0, ret = 0;
> +
> +	page_cache_ra_unbounded(&ractl, len, 0);
> +
> +	for (i = 0; i < len; i++, page_idx++) {
> +		page = read_cache_page(mapping, page_idx, NULL, NULL);
> +		if (IS_ERR(page)) {
> +			ret = PTR_ERR(page);
> +			f2fs_warn(sbi, "%s: inode (%lu) : page_index (%lu) "
> +				"couldn't be read (errno:%d).\n",
> +				__func__, inode->i_ino, page_idx, ret);

This is a common error case when calling read_cache_page(). IMO, this looks
more like a debug log, so I would prefer to print nothing here, or at least
use f2fs_debug() instead.

> +			break;
> +		}
> +		page_len++;
> +	}
> +
> +	for (i = 0; i < page_len; i++, redirty_idx++) {
> +		page = find_lock_page(mapping, redirty_idx);
> +		if (!page) {
> +			ret = -ENOENT;
> +			f2fs_warn(sbi, "%s: inode (%lu) : page_index (%lu) "
> +				"couldn't be found (errno:%d).\n",
> +				__func__, inode->i_ino, redirty_idx, ret);

Ditto.

> +		}
> +		set_page_dirty(page);
> +		f2fs_put_page(page, 1);
> +		f2fs_put_page(page, 0);
> +	}
> +
> +	return ret;
> +}
> +
> +static int f2fs_ioc_decompress_file(struct file *filp, unsigned long arg)
> +{
> +	struct inode *inode = file_inode(filp);
> +	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
> +	struct f2fs_inode_info *fi = F2FS_I(inode);
> +	pgoff_t page_idx = 0, last_idx;
> +	int cluster_size = F2FS_I(inode)->i_cluster_size;
> +	int count, ret;
> +
> +	if (!f2fs_sb_has_compression(sbi))
> +		return -EOPNOTSUPP;
> +
> +	if (!(filp->f_mode & FMODE_WRITE))
> +		return -EBADF;
> +
> +	if (!f2fs_compressed_file(inode))
> +		return -EINVAL;

Before compressing/decompressing the file, should we check whether the
current inode's compression algorithm backend is available in the f2fs module?

> +
> +	f2fs_balance_fs(F2FS_I_SB(inode), true);
> +
> +	file_start_write(filp);
> +	inode_lock(inode);
> +
> +	if (f2fs_is_mmap_file(inode)) {
> +		ret = -EBUSY;
> +		goto out;
> +	}
> +
> +	ret = filemap_write_and_wait_range(inode->i_mapping, 0, LLONG_MAX);
> +	if (ret)
> +		goto out;
> +
> +	if (!atomic_read(&fi->i_compr_blocks))
> +		goto out;
> +
> +	last_idx = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
> +
> +	count = last_idx - page_idx;
> +	while (count) {
> +		int len = min(cluster_size, count);
> +
> +		ret = redirty_blocks(inode, page_idx, len);
> +

unneeded blank line..

> +		if (ret < 0)
> +			break;
> +
> +		page_idx += len;
> +		count -= len;

Considering there isn't much memory on low-end devices, how about calling
filemap_fdatawrite() to write back clusters after redirtying several
clusters or xxMB?

> +	}
> +
> +	if (!ret)
> +		ret = filemap_write_and_wait_range(inode->i_mapping, 0,
> +							LLONG_MAX);
> +
> +	if (!ret) {
> +		stat_sub_compr_blocks(inode, atomic_read(&fi->i_compr_blocks));
> +		atomic_set(&fi->i_compr_blocks, 0);
> +		f2fs_mark_inode_dirty_sync(inode, true);

A little bit weird; why not fail cluster_may_compress() for user mode and
let writepages write the cluster as raw blocks, in which case we can update
i_compr_blocks and the global compressed block stats correctly?

> +	} else {
> +		f2fs_warn(sbi, "%s: The file might be partially decompressed "
> +				"(errno=%d). Please delete the file.\n",
> +				__func__, ret);
> +	}
> +out:
> +	inode_unlock(inode);
> +	file_end_write(filp);
> +
> +	return ret;
> +}
> +
> +static int f2fs_ioc_compress_file(struct file *filp, unsigned long arg)
> +{
> +	struct inode *inode = file_inode(filp);
> +	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
> +	pgoff_t page_idx = 0, last_idx;
> +	int cluster_size = F2FS_I(inode)->i_cluster_size;
> +	int count, ret;
> +
> +	if (!f2fs_sb_has_compression(sbi))
> +		return -EOPNOTSUPP;
> +
> +	if (!(filp->f_mode & FMODE_WRITE))
> +		return -EBADF;
> +
> +	if (!f2fs_compressed_file(inode))
> +		return -EINVAL;

algorithm backend check?

> +
> +	f2fs_balance_fs(F2FS_I_SB(inode), true);
> +
> +	file_start_write(filp);
> +	inode_lock(inode);
> +
> +	if (f2fs_is_mmap_file(inode)) {
> +		ret = -EBUSY;
> +		goto out;
> +	}
> +
> +	ret = filemap_write_and_wait_range(inode->i_mapping, 0, LLONG_MAX);
> +	if (ret)
> +		goto out;
> +
> +	set_inode_flag(inode, FI_ENABLE_COMPRESS);
> +
> +	last_idx = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
> +
> +	count = last_idx - page_idx;
> +	while (count) {
> +		int len = min(cluster_size, count);
> +
> +		ret = redirty_blocks(inode, page_idx, len);
> +

Ditto.

Thanks,

> +		if (ret < 0)
> +			break;
> +
> +		page_idx += len;
> +		count -= len;
> +	}
> +
> +	if (!ret)
> +		ret = filemap_write_and_wait_range(inode->i_mapping, 0,
> +							LLONG_MAX);
> +
> +	clear_inode_flag(inode, FI_ENABLE_COMPRESS);
> +
> +	if (ret)
> +		f2fs_warn(sbi, "%s: The file might be partially compressed "
> +				"(errno=%d). Please delete the file.\n",
> +				__func__, ret);
> +out:
> +	inode_unlock(inode);
> +	file_end_write(filp);
> +
> +	return ret;
> +}
> +
>   static long __f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
>   {
>   	switch (cmd) {
> @@ -4113,6 +4287,10 @@ static long __f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
>   		return f2fs_ioc_get_compress_option(filp, arg);
>   	case F2FS_IOC_SET_COMPRESS_OPTION:
>   		return f2fs_ioc_set_compress_option(filp, arg);
> +	case F2FS_IOC_DECOMPRESS_FILE:
> +		return f2fs_ioc_decompress_file(filp, arg);
> +	case F2FS_IOC_COMPRESS_FILE:
> +		return f2fs_ioc_compress_file(filp, arg);
>   	default:
>   		return -ENOTTY;
>   	}
> @@ -4352,7 +4530,8 @@ long f2fs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
>   	case F2FS_IOC_SEC_TRIM_FILE:
>   	case F2FS_IOC_GET_COMPRESS_OPTION:
>   	case F2FS_IOC_SET_COMPRESS_OPTION:
> -		break;
> +	case F2FS_IOC_DECOMPRESS_FILE:
> +	case F2FS_IOC_COMPRESS_FILE:
>   	default:
>   		return -ENOIOCTLCMD;
>   	}
> diff --git a/include/uapi/linux/f2fs.h b/include/uapi/linux/f2fs.h
> index f00199a2e38b..352a822d4370 100644
> --- a/include/uapi/linux/f2fs.h
> +++ b/include/uapi/linux/f2fs.h
> @@ -40,6 +40,8 @@
>   						struct f2fs_comp_option)
>   #define F2FS_IOC_SET_COMPRESS_OPTION	_IOW(F2FS_IOCTL_MAGIC, 22,	\
>   						struct f2fs_comp_option)
> +#define F2FS_IOC_DECOMPRESS_FILE	_IO(F2FS_IOCTL_MAGIC, 23)
> +#define F2FS_IOC_COMPRESS_FILE		_IO(F2FS_IOCTL_MAGIC, 24)
>   
>   /*
>    * should be same as XFS_IOC_GOINGDOWN.
> 

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [f2fs-dev] [PATCH 2/2] f2fs: add F2FS_IOC_DECOMPRESS_FILE and F2FS_IOC_COMPRESS_FILE
  2020-11-24  3:05   ` Chao Yu
@ 2020-11-26  5:04     ` Daeho Jeong
  2020-11-26  6:35       ` Daeho Jeong
  2020-11-26 17:49       ` Eric Biggers
  0 siblings, 2 replies; 18+ messages in thread
From: Daeho Jeong @ 2020-11-26  5:04 UTC (permalink / raw)
  To: Chao Yu; +Cc: linux-kernel, linux-f2fs-devel, kernel-team, Daeho Jeong

Eric,

do_page_cache_ra() is defined in mm/internal.h for internal use
within mm, so we cannot use it right now.
So I think we could use page_cache_ra_unbounded(), since we already
check the i_size boundary on our own.
What do you think?

On Tue, Nov 24, 2020 at 12:05 PM, Chao Yu <yuchao0@huawei.com> wrote:
>
> On 2020/11/23 11:17, Daeho Jeong wrote:
> > From: Daeho Jeong <daehojeong@google.com>
> >
> > Added two ioctls to explicitly decompress/compress
> > compression-enabled files under the "compress_mode=user-based"
> > mount option.
> >
> > Using these two ioctls, users can control the compression and
> > decompression of their files.
> >
> > Signed-off-by: Daeho Jeong <daehojeong@google.com>
> > ---
> >   fs/f2fs/file.c            | 181 +++++++++++++++++++++++++++++++++++++-
> >   include/uapi/linux/f2fs.h |   2 +
> >   2 files changed, 182 insertions(+), 1 deletion(-)
> >
> > diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
> > index be8db06aca27..e8f142470e87 100644
> > --- a/fs/f2fs/file.c
> > +++ b/fs/f2fs/file.c
> > @@ -4026,6 +4026,180 @@ static int f2fs_ioc_set_compress_option(struct file *filp, unsigned long arg)
> >       return ret;
> >   }
> >
> > +static int redirty_blocks(struct inode *inode, pgoff_t page_idx, int len)
> > +{
> > +     DEFINE_READAHEAD(ractl, NULL, inode->i_mapping, page_idx);
> > +     struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
> > +     struct address_space *mapping = inode->i_mapping;
> > +     struct page *page;
> > +     pgoff_t redirty_idx = page_idx;
> > +     int i, page_len = 0, ret = 0;
> > +
> > +     page_cache_ra_unbounded(&ractl, len, 0);
> > +
> > +     for (i = 0; i < len; i++, page_idx++) {
> > +             page = read_cache_page(mapping, page_idx, NULL, NULL);
> > +             if (IS_ERR(page)) {
> > +                     ret = PTR_ERR(page);
> > +                     f2fs_warn(sbi, "%s: inode (%lu) : page_index (%lu) "
> > +                             "couldn't be read (errno:%d).\n",
> > +                             __func__, inode->i_ino, page_idx, ret);
>
> This is a common error case when calling read_cache_page(); IMO this
> looks more like a debug log, so I'd prefer to print nothing here, or
> at least use f2fs_debug() instead.
>
> > +                     break;
> > +             }
> > +             page_len++;
> > +     }
> > +
> > +     for (i = 0; i < page_len; i++, redirty_idx++) {
> > +             page = find_lock_page(mapping, redirty_idx);
> > +             if (!page) {
> > +                     ret = -ENOENT;
> > +                     f2fs_warn(sbi, "%s: inode (%lu) : page_index (%lu) "
> > +                             "couldn't be found (errno:%d).\n",
> > +                             __func__, inode->i_ino, redirty_idx, ret);
>
> Ditto.
>
> > +             }
> > +             set_page_dirty(page);
> > +             f2fs_put_page(page, 1);
> > +             f2fs_put_page(page, 0);
> > +     }
> > +
> > +     return ret;
> > +}
> > +
> > +static int f2fs_ioc_decompress_file(struct file *filp, unsigned long arg)
> > +{
> > +     struct inode *inode = file_inode(filp);
> > +     struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
> > +     struct f2fs_inode_info *fi = F2FS_I(inode);
> > +     pgoff_t page_idx = 0, last_idx;
> > +     int cluster_size = F2FS_I(inode)->i_cluster_size;
> > +     int count, ret;
> > +
> > +     if (!f2fs_sb_has_compression(sbi))
> > +             return -EOPNOTSUPP;
> > +
> > +     if (!(filp->f_mode & FMODE_WRITE))
> > +             return -EBADF;
> > +
> > +     if (!f2fs_compressed_file(inode))
> > +             return -EINVAL;
>
> > Before compressing/decompressing the file, should we check whether the
> > current inode's compression algorithm backend is available in the f2fs module?
>
> > +
> > +     f2fs_balance_fs(F2FS_I_SB(inode), true);
> > +
> > +     file_start_write(filp);
> > +     inode_lock(inode);
> > +
> > +     if (f2fs_is_mmap_file(inode)) {
> > +             ret = -EBUSY;
> > +             goto out;
> > +     }
> > +
> > +     ret = filemap_write_and_wait_range(inode->i_mapping, 0, LLONG_MAX);
> > +     if (ret)
> > +             goto out;
> > +
> > +     if (!atomic_read(&fi->i_compr_blocks))
> > +             goto out;
> > +
> > +     last_idx = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
> > +
> > +     count = last_idx - page_idx;
> > +     while (count) {
> > +             int len = min(cluster_size, count);
> > +
> > +             ret = redirty_blocks(inode, page_idx, len);
> > +
>
> unneeded blank line..
>
> > +             if (ret < 0)
> > +                     break;
> > +
> > +             page_idx += len;
> > +             count -= len;
>
> Considering there isn't much memory on a low-end device, how about
> calling filemap_fdatawrite() to write back clusters after redirtying
> several clusters or xx MB?
>
> > +     }
> > +
> > +     if (!ret)
> > +             ret = filemap_write_and_wait_range(inode->i_mapping, 0,
> > +                                                     LLONG_MAX);
> > +
> > +     if (!ret) {
> > +             stat_sub_compr_blocks(inode, atomic_read(&fi->i_compr_blocks));
> > +             atomic_set(&fi->i_compr_blocks, 0);
> > +             f2fs_mark_inode_dirty_sync(inode, true);
>
> A little bit weird; why not fail cluster_may_compress() for user mode and
> let writepages write the cluster as raw blocks, in which case we can update
> i_compr_blocks and the global compressed-block stats correctly.
>
> > +     } else {
> > +             f2fs_warn(sbi, "%s: The file might be partially decompressed "
> > +                             "(errno=%d). Please delete the file.\n",
> > +                             __func__, ret);
> > +     }
> > +out:
> > +     inode_unlock(inode);
> > +     file_end_write(filp);
> > +
> > +     return ret;
> > +}
> > +
> > +static int f2fs_ioc_compress_file(struct file *filp, unsigned long arg)
> > +{
> > +     struct inode *inode = file_inode(filp);
> > +     struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
> > +     pgoff_t page_idx = 0, last_idx;
> > +     int cluster_size = F2FS_I(inode)->i_cluster_size;
> > +     int count, ret;
> > +
> > +     if (!f2fs_sb_has_compression(sbi))
> > +             return -EOPNOTSUPP;
> > +
> > +     if (!(filp->f_mode & FMODE_WRITE))
> > +             return -EBADF;
> > +
> > +     if (!f2fs_compressed_file(inode))
> > +             return -EINVAL;
>
> algorithm backend check?
>
> > +
> > +     f2fs_balance_fs(F2FS_I_SB(inode), true);
> > +
> > +     file_start_write(filp);
> > +     inode_lock(inode);
> > +
> > +     if (f2fs_is_mmap_file(inode)) {
> > +             ret = -EBUSY;
> > +             goto out;
> > +     }
> > +
> > +     ret = filemap_write_and_wait_range(inode->i_mapping, 0, LLONG_MAX);
> > +     if (ret)
> > +             goto out;
> > +
> > +     set_inode_flag(inode, FI_ENABLE_COMPRESS);
> > +
> > +     last_idx = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
> > +
> > +     count = last_idx - page_idx;
> > +     while (count) {
> > +             int len = min(cluster_size, count);
> > +
> > +             ret = redirty_blocks(inode, page_idx, len);
> > +
>
> Ditto.
>
> Thanks,
>
> > +             if (ret < 0)
> > +                     break;
> > +
> > +             page_idx += len;
> > +             count -= len;
> > +     }
> > +
> > +     if (!ret)
> > +             ret = filemap_write_and_wait_range(inode->i_mapping, 0,
> > +                                                     LLONG_MAX);
> > +
> > +     clear_inode_flag(inode, FI_ENABLE_COMPRESS);
> > +
> > +     if (ret)
> > +             f2fs_warn(sbi, "%s: The file might be partially compressed "
> > +                             "(errno=%d). Please delete the file.\n",
> > +                             __func__, ret);
> > +out:
> > +     inode_unlock(inode);
> > +     file_end_write(filp);
> > +
> > +     return ret;
> > +}
> > +
> >   static long __f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
> >   {
> >       switch (cmd) {
> > @@ -4113,6 +4287,10 @@ static long __f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
> >               return f2fs_ioc_get_compress_option(filp, arg);
> >       case F2FS_IOC_SET_COMPRESS_OPTION:
> >               return f2fs_ioc_set_compress_option(filp, arg);
> > +     case F2FS_IOC_DECOMPRESS_FILE:
> > +             return f2fs_ioc_decompress_file(filp, arg);
> > +     case F2FS_IOC_COMPRESS_FILE:
> > +             return f2fs_ioc_compress_file(filp, arg);
> >       default:
> >               return -ENOTTY;
> >       }
> > @@ -4352,7 +4530,8 @@ long f2fs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
> >       case F2FS_IOC_SEC_TRIM_FILE:
> >       case F2FS_IOC_GET_COMPRESS_OPTION:
> >       case F2FS_IOC_SET_COMPRESS_OPTION:
> > -             break;
> > +     case F2FS_IOC_DECOMPRESS_FILE:
> > +     case F2FS_IOC_COMPRESS_FILE:
> >       default:
> >               return -ENOIOCTLCMD;
> >       }
> > diff --git a/include/uapi/linux/f2fs.h b/include/uapi/linux/f2fs.h
> > index f00199a2e38b..352a822d4370 100644
> > --- a/include/uapi/linux/f2fs.h
> > +++ b/include/uapi/linux/f2fs.h
> > @@ -40,6 +40,8 @@
> >                                               struct f2fs_comp_option)
> >   #define F2FS_IOC_SET_COMPRESS_OPTION        _IOW(F2FS_IOCTL_MAGIC, 22,      \
> >                                               struct f2fs_comp_option)
> > +#define F2FS_IOC_DECOMPRESS_FILE     _IO(F2FS_IOCTL_MAGIC, 23)
> > +#define F2FS_IOC_COMPRESS_FILE               _IO(F2FS_IOCTL_MAGIC, 24)
> >
> >   /*
> >    * should be same as XFS_IOC_GOINGDOWN.
> >

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [f2fs-dev] [PATCH 2/2] f2fs: add F2FS_IOC_DECOMPRESS_FILE and F2FS_IOC_COMPRESS_FILE
  2020-11-26  5:04     ` Daeho Jeong
@ 2020-11-26  6:35       ` Daeho Jeong
  2020-11-26  6:54         ` Chao Yu
  2020-11-26 17:49       ` Eric Biggers
  1 sibling, 1 reply; 18+ messages in thread
From: Daeho Jeong @ 2020-11-26  6:35 UTC (permalink / raw)
  To: Chao Yu; +Cc: linux-kernel, linux-f2fs-devel, kernel-team, Daeho Jeong

Chao,

> A little bit weird; why not fail cluster_may_compress() for user mode and
> let writepages write the cluster as raw blocks, in which case we can update
> i_compr_blocks and the global compressed-block stats correctly.

For the decompression ioctl, I've made f2fs_need_compress_data() return
"false" to prevent compressed writes, so we don't use
f2fs_write_compressed_pages() anymore in this case.
Because of this, I manually updated i_compr_blocks. Do you have any
suggestions on this?

On Thu, Nov 26, 2020 at 2:04 PM, Daeho Jeong <daeho43@gmail.com> wrote:
>
> Eric,
>
> do_page_cache_ra() is defined in mm/internal.h for internal use
> within mm, so we cannot use it right now.
> So I think we could use page_cache_ra_unbounded(), since we already
> check the i_size boundary on our own.
> What do you think?
>
> On Tue, Nov 24, 2020 at 12:05 PM, Chao Yu <yuchao0@huawei.com> wrote:
> >
> > On 2020/11/23 11:17, Daeho Jeong wrote:
> > > From: Daeho Jeong <daehojeong@google.com>
> > >
> > > Added two ioctls to explicitly decompress/compress
> > > compression-enabled files under the "compress_mode=user-based"
> > > mount option.
> > >
> > > Using these two ioctls, users can control the compression and
> > > decompression of their files.
> > >
> > > Signed-off-by: Daeho Jeong <daehojeong@google.com>
> > > ---
> > >   fs/f2fs/file.c            | 181 +++++++++++++++++++++++++++++++++++++-
> > >   include/uapi/linux/f2fs.h |   2 +
> > >   2 files changed, 182 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
> > > index be8db06aca27..e8f142470e87 100644
> > > --- a/fs/f2fs/file.c
> > > +++ b/fs/f2fs/file.c
> > > @@ -4026,6 +4026,180 @@ static int f2fs_ioc_set_compress_option(struct file *filp, unsigned long arg)
> > >       return ret;
> > >   }
> > >
> > > +static int redirty_blocks(struct inode *inode, pgoff_t page_idx, int len)
> > > +{
> > > +     DEFINE_READAHEAD(ractl, NULL, inode->i_mapping, page_idx);
> > > +     struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
> > > +     struct address_space *mapping = inode->i_mapping;
> > > +     struct page *page;
> > > +     pgoff_t redirty_idx = page_idx;
> > > +     int i, page_len = 0, ret = 0;
> > > +
> > > +     page_cache_ra_unbounded(&ractl, len, 0);
> > > +
> > > +     for (i = 0; i < len; i++, page_idx++) {
> > > +             page = read_cache_page(mapping, page_idx, NULL, NULL);
> > > +             if (IS_ERR(page)) {
> > > +                     ret = PTR_ERR(page);
> > > +                     f2fs_warn(sbi, "%s: inode (%lu) : page_index (%lu) "
> > > +                             "couldn't be read (errno:%d).\n",
> > > +                             __func__, inode->i_ino, page_idx, ret);
> >
> > This is a common error case when calling read_cache_page(); IMO this
> > looks more like a debug log, so I'd prefer to print nothing here, or
> > at least use f2fs_debug() instead.
> >
> > > +                     break;
> > > +             }
> > > +             page_len++;
> > > +     }
> > > +
> > > +     for (i = 0; i < page_len; i++, redirty_idx++) {
> > > +             page = find_lock_page(mapping, redirty_idx);
> > > +             if (!page) {
> > > +                     ret = -ENOENT;
> > > +                     f2fs_warn(sbi, "%s: inode (%lu) : page_index (%lu) "
> > > +                             "couldn't be found (errno:%d).\n",
> > > +                             __func__, inode->i_ino, redirty_idx, ret);
> >
> > Ditto.
> >
> > > +             }
> > > +             set_page_dirty(page);
> > > +             f2fs_put_page(page, 1);
> > > +             f2fs_put_page(page, 0);
> > > +     }
> > > +
> > > +     return ret;
> > > +}
> > > +
> > > +static int f2fs_ioc_decompress_file(struct file *filp, unsigned long arg)
> > > +{
> > > +     struct inode *inode = file_inode(filp);
> > > +     struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
> > > +     struct f2fs_inode_info *fi = F2FS_I(inode);
> > > +     pgoff_t page_idx = 0, last_idx;
> > > +     int cluster_size = F2FS_I(inode)->i_cluster_size;
> > > +     int count, ret;
> > > +
> > > +     if (!f2fs_sb_has_compression(sbi))
> > > +             return -EOPNOTSUPP;
> > > +
> > > +     if (!(filp->f_mode & FMODE_WRITE))
> > > +             return -EBADF;
> > > +
> > > +     if (!f2fs_compressed_file(inode))
> > > +             return -EINVAL;
> >
> > Before compressing/decompressing the file, should we check whether the
> > current inode's compression algorithm backend is available in the f2fs module?
> >
> > > +
> > > +     f2fs_balance_fs(F2FS_I_SB(inode), true);
> > > +
> > > +     file_start_write(filp);
> > > +     inode_lock(inode);
> > > +
> > > +     if (f2fs_is_mmap_file(inode)) {
> > > +             ret = -EBUSY;
> > > +             goto out;
> > > +     }
> > > +
> > > +     ret = filemap_write_and_wait_range(inode->i_mapping, 0, LLONG_MAX);
> > > +     if (ret)
> > > +             goto out;
> > > +
> > > +     if (!atomic_read(&fi->i_compr_blocks))
> > > +             goto out;
> > > +
> > > +     last_idx = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
> > > +
> > > +     count = last_idx - page_idx;
> > > +     while (count) {
> > > +             int len = min(cluster_size, count);
> > > +
> > > +             ret = redirty_blocks(inode, page_idx, len);
> > > +
> >
> > unneeded blank line..
> >
> > > +             if (ret < 0)
> > > +                     break;
> > > +
> > > +             page_idx += len;
> > > +             count -= len;
> >
> > Considering there isn't much memory on a low-end device, how about
> > calling filemap_fdatawrite() to write back clusters after redirtying
> > several clusters or xx MB?
> >
> > > +     }
> > > +
> > > +     if (!ret)
> > > +             ret = filemap_write_and_wait_range(inode->i_mapping, 0,
> > > +                                                     LLONG_MAX);
> > > +
> > > +     if (!ret) {
> > > +             stat_sub_compr_blocks(inode, atomic_read(&fi->i_compr_blocks));
> > > +             atomic_set(&fi->i_compr_blocks, 0);
> > > +             f2fs_mark_inode_dirty_sync(inode, true);
> >
> > A little bit weird; why not fail cluster_may_compress() for user mode and
> > let writepages write the cluster as raw blocks, in which case we can update
> > i_compr_blocks and the global compressed-block stats correctly.
> >
> > > +     } else {
> > > +             f2fs_warn(sbi, "%s: The file might be partially decompressed "
> > > +                             "(errno=%d). Please delete the file.\n",
> > > +                             __func__, ret);
> > > +     }
> > > +out:
> > > +     inode_unlock(inode);
> > > +     file_end_write(filp);
> > > +
> > > +     return ret;
> > > +}
> > > +
> > > +static int f2fs_ioc_compress_file(struct file *filp, unsigned long arg)
> > > +{
> > > +     struct inode *inode = file_inode(filp);
> > > +     struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
> > > +     pgoff_t page_idx = 0, last_idx;
> > > +     int cluster_size = F2FS_I(inode)->i_cluster_size;
> > > +     int count, ret;
> > > +
> > > +     if (!f2fs_sb_has_compression(sbi))
> > > +             return -EOPNOTSUPP;
> > > +
> > > +     if (!(filp->f_mode & FMODE_WRITE))
> > > +             return -EBADF;
> > > +
> > > +     if (!f2fs_compressed_file(inode))
> > > +             return -EINVAL;
> >
> > algorithm backend check?
> >
> > > +
> > > +     f2fs_balance_fs(F2FS_I_SB(inode), true);
> > > +
> > > +     file_start_write(filp);
> > > +     inode_lock(inode);
> > > +
> > > +     if (f2fs_is_mmap_file(inode)) {
> > > +             ret = -EBUSY;
> > > +             goto out;
> > > +     }
> > > +
> > > +     ret = filemap_write_and_wait_range(inode->i_mapping, 0, LLONG_MAX);
> > > +     if (ret)
> > > +             goto out;
> > > +
> > > +     set_inode_flag(inode, FI_ENABLE_COMPRESS);
> > > +
> > > +     last_idx = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
> > > +
> > > +     count = last_idx - page_idx;
> > > +     while (count) {
> > > +             int len = min(cluster_size, count);
> > > +
> > > +             ret = redirty_blocks(inode, page_idx, len);
> > > +
> >
> > Ditto.
> >
> > Thanks,
> >
> > > +             if (ret < 0)
> > > +                     break;
> > > +
> > > +             page_idx += len;
> > > +             count -= len;
> > > +     }
> > > +
> > > +     if (!ret)
> > > +             ret = filemap_write_and_wait_range(inode->i_mapping, 0,
> > > +                                                     LLONG_MAX);
> > > +
> > > +     clear_inode_flag(inode, FI_ENABLE_COMPRESS);
> > > +
> > > +     if (ret)
> > > +             f2fs_warn(sbi, "%s: The file might be partially compressed "
> > > +                             "(errno=%d). Please delete the file.\n",
> > > +                             __func__, ret);
> > > +out:
> > > +     inode_unlock(inode);
> > > +     file_end_write(filp);
> > > +
> > > +     return ret;
> > > +}
> > > +
> > >   static long __f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
> > >   {
> > >       switch (cmd) {
> > > @@ -4113,6 +4287,10 @@ static long __f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
> > >               return f2fs_ioc_get_compress_option(filp, arg);
> > >       case F2FS_IOC_SET_COMPRESS_OPTION:
> > >               return f2fs_ioc_set_compress_option(filp, arg);
> > > +     case F2FS_IOC_DECOMPRESS_FILE:
> > > +             return f2fs_ioc_decompress_file(filp, arg);
> > > +     case F2FS_IOC_COMPRESS_FILE:
> > > +             return f2fs_ioc_compress_file(filp, arg);
> > >       default:
> > >               return -ENOTTY;
> > >       }
> > > @@ -4352,7 +4530,8 @@ long f2fs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
> > >       case F2FS_IOC_SEC_TRIM_FILE:
> > >       case F2FS_IOC_GET_COMPRESS_OPTION:
> > >       case F2FS_IOC_SET_COMPRESS_OPTION:
> > > -             break;
> > > +     case F2FS_IOC_DECOMPRESS_FILE:
> > > +     case F2FS_IOC_COMPRESS_FILE:
> > >       default:
> > >               return -ENOIOCTLCMD;
> > >       }
> > > diff --git a/include/uapi/linux/f2fs.h b/include/uapi/linux/f2fs.h
> > > index f00199a2e38b..352a822d4370 100644
> > > --- a/include/uapi/linux/f2fs.h
> > > +++ b/include/uapi/linux/f2fs.h
> > > @@ -40,6 +40,8 @@
> > >                                               struct f2fs_comp_option)
> > >   #define F2FS_IOC_SET_COMPRESS_OPTION        _IOW(F2FS_IOCTL_MAGIC, 22,      \
> > >                                               struct f2fs_comp_option)
> > > +#define F2FS_IOC_DECOMPRESS_FILE     _IO(F2FS_IOCTL_MAGIC, 23)
> > > +#define F2FS_IOC_COMPRESS_FILE               _IO(F2FS_IOCTL_MAGIC, 24)
> > >
> > >   /*
> > >    * should be same as XFS_IOC_GOINGDOWN.
> > >

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [f2fs-dev] [PATCH 2/2] f2fs: add F2FS_IOC_DECOMPRESS_FILE and F2FS_IOC_COMPRESS_FILE
  2020-11-26  6:35       ` Daeho Jeong
@ 2020-11-26  6:54         ` Chao Yu
  0 siblings, 0 replies; 18+ messages in thread
From: Chao Yu @ 2020-11-26  6:54 UTC (permalink / raw)
  To: Daeho Jeong; +Cc: linux-kernel, linux-f2fs-devel, kernel-team, Daeho Jeong

Daeho,

On 2020/11/26 14:35, Daeho Jeong wrote:
> Chao,
> 
>> A little bit weird; why not fail cluster_may_compress() for user mode and
>> let writepages write the cluster as raw blocks, in which case we can update
>> i_compr_blocks and the global compressed-block stats correctly.
> 
> For the decompression ioctl, I've made f2fs_need_compress_data() return
> "false" to prevent compressed writes, so we don't use
> f2fs_write_compressed_pages() anymore in this case.
> Because of this, I manually updated i_compr_blocks. Do you have any
> suggestions on this?

I meant that we can control the condition under which writeback calls into
the stack below, so that i_compr_blocks can be updated correctly.

- f2fs_ioc_decompress_file
  - writepages
   - f2fs_write_multi_pages
    - cluster_may_compress   return false when ioc_decompress is in-process.
     - f2fs_write_raw_pages
      - f2fs_do_write_data_page
       - f2fs_i_compr_blocks_update

Thanks,

> 
> On Thu, Nov 26, 2020 at 2:04 PM, Daeho Jeong <daeho43@gmail.com> wrote:
>>
>> Eric,
>>
>> do_page_cache_ra() is defined in mm/internal.h for internal use
>> within mm, so we cannot use it right now.
>> So I think we could use page_cache_ra_unbounded(), since we already
>> check the i_size boundary on our own.
>> What do you think?
>>
>> On Tue, Nov 24, 2020 at 12:05 PM, Chao Yu <yuchao0@huawei.com> wrote:
>>>
>>> On 2020/11/23 11:17, Daeho Jeong wrote:
>>>> From: Daeho Jeong <daehojeong@google.com>
>>>>
>>>> Added two ioctls to explicitly decompress/compress
>>>> compression-enabled files under the "compress_mode=user-based"
>>>> mount option.
>>>>
>>>> Using these two ioctls, users can control the compression and
>>>> decompression of their files.
>>>>
>>>> Signed-off-by: Daeho Jeong <daehojeong@google.com>
>>>> ---
>>>>    fs/f2fs/file.c            | 181 +++++++++++++++++++++++++++++++++++++-
>>>>    include/uapi/linux/f2fs.h |   2 +
>>>>    2 files changed, 182 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
>>>> index be8db06aca27..e8f142470e87 100644
>>>> --- a/fs/f2fs/file.c
>>>> +++ b/fs/f2fs/file.c
>>>> @@ -4026,6 +4026,180 @@ static int f2fs_ioc_set_compress_option(struct file *filp, unsigned long arg)
>>>>        return ret;
>>>>    }
>>>>
>>>> +static int redirty_blocks(struct inode *inode, pgoff_t page_idx, int len)
>>>> +{
>>>> +     DEFINE_READAHEAD(ractl, NULL, inode->i_mapping, page_idx);
>>>> +     struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
>>>> +     struct address_space *mapping = inode->i_mapping;
>>>> +     struct page *page;
>>>> +     pgoff_t redirty_idx = page_idx;
>>>> +     int i, page_len = 0, ret = 0;
>>>> +
>>>> +     page_cache_ra_unbounded(&ractl, len, 0);
>>>> +
>>>> +     for (i = 0; i < len; i++, page_idx++) {
>>>> +             page = read_cache_page(mapping, page_idx, NULL, NULL);
>>>> +             if (IS_ERR(page)) {
>>>> +                     ret = PTR_ERR(page);
>>>> +                     f2fs_warn(sbi, "%s: inode (%lu) : page_index (%lu) "
>>>> +                             "couldn't be read (errno:%d).\n",
>>>> +                             __func__, inode->i_ino, page_idx, ret);
>>>
>>> This is a common error case when calling read_cache_page(); IMO this
>>> looks more like a debug log, so I'd prefer to print nothing here, or
>>> at least use f2fs_debug() instead.
>>>
>>>> +                     break;
>>>> +             }
>>>> +             page_len++;
>>>> +     }
>>>> +
>>>> +     for (i = 0; i < page_len; i++, redirty_idx++) {
>>>> +             page = find_lock_page(mapping, redirty_idx);
>>>> +             if (!page) {
>>>> +                     ret = -ENOENT;
>>>> +                     f2fs_warn(sbi, "%s: inode (%lu) : page_index (%lu) "
>>>> +                             "couldn't be found (errno:%d).\n",
>>>> +                             __func__, inode->i_ino, redirty_idx, ret);
>>>
>>> Ditto.
>>>
>>>> +             }
>>>> +             set_page_dirty(page);
>>>> +             f2fs_put_page(page, 1);
>>>> +             f2fs_put_page(page, 0);
>>>> +     }
>>>> +
>>>> +     return ret;
>>>> +}
>>>> +
>>>> +static int f2fs_ioc_decompress_file(struct file *filp, unsigned long arg)
>>>> +{
>>>> +     struct inode *inode = file_inode(filp);
>>>> +     struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
>>>> +     struct f2fs_inode_info *fi = F2FS_I(inode);
>>>> +     pgoff_t page_idx = 0, last_idx;
>>>> +     int cluster_size = F2FS_I(inode)->i_cluster_size;
>>>> +     int count, ret;
>>>> +
>>>> +     if (!f2fs_sb_has_compression(sbi))
>>>> +             return -EOPNOTSUPP;
>>>> +
>>>> +     if (!(filp->f_mode & FMODE_WRITE))
>>>> +             return -EBADF;
>>>> +
>>>> +     if (!f2fs_compressed_file(inode))
>>>> +             return -EINVAL;
>>>
>>> Before compressing/decompressing the file, should we check whether the
>>> current inode's compression algorithm backend is available in the f2fs module?
>>>
>>>> +
>>>> +     f2fs_balance_fs(F2FS_I_SB(inode), true);
>>>> +
>>>> +     file_start_write(filp);
>>>> +     inode_lock(inode);
>>>> +
>>>> +     if (f2fs_is_mmap_file(inode)) {
>>>> +             ret = -EBUSY;
>>>> +             goto out;
>>>> +     }
>>>> +
>>>> +     ret = filemap_write_and_wait_range(inode->i_mapping, 0, LLONG_MAX);
>>>> +     if (ret)
>>>> +             goto out;
>>>> +
>>>> +     if (!atomic_read(&fi->i_compr_blocks))
>>>> +             goto out;
>>>> +
>>>> +     last_idx = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
>>>> +
>>>> +     count = last_idx - page_idx;
>>>> +     while (count) {
>>>> +             int len = min(cluster_size, count);
>>>> +
>>>> +             ret = redirty_blocks(inode, page_idx, len);
>>>> +
>>>
>>> unneeded blank line..
>>>
>>>> +             if (ret < 0)
>>>> +                     break;
>>>> +
>>>> +             page_idx += len;
>>>> +             count -= len;
>>>
>>> Considering there isn't much memory on a low-end device, how about
>>> calling filemap_fdatawrite() to write back clusters after redirtying
>>> several clusters or xx MB?
>>>
>>>> +     }
>>>> +
>>>> +     if (!ret)
>>>> +             ret = filemap_write_and_wait_range(inode->i_mapping, 0,
>>>> +                                                     LLONG_MAX);
>>>> +
>>>> +     if (!ret) {
>>>> +             stat_sub_compr_blocks(inode, atomic_read(&fi->i_compr_blocks));
>>>> +             atomic_set(&fi->i_compr_blocks, 0);
>>>> +             f2fs_mark_inode_dirty_sync(inode, true);
>>>
>>> A little bit weird; why not fail cluster_may_compress() for user mode and
>>> let writepages write the cluster as raw blocks, in which case we can update
>>> i_compr_blocks and the global compressed-block stats correctly.
>>>
>>>> +     } else {
>>>> +             f2fs_warn(sbi, "%s: The file might be partially decompressed "
>>>> +                             "(errno=%d). Please delete the file.\n",
>>>> +                             __func__, ret);
>>>> +     }
>>>> +out:
>>>> +     inode_unlock(inode);
>>>> +     file_end_write(filp);
>>>> +
>>>> +     return ret;
>>>> +}
>>>> +
>>>> +static int f2fs_ioc_compress_file(struct file *filp, unsigned long arg)
>>>> +{
>>>> +     struct inode *inode = file_inode(filp);
>>>> +     struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
>>>> +     pgoff_t page_idx = 0, last_idx;
>>>> +     int cluster_size = F2FS_I(inode)->i_cluster_size;
>>>> +     int count, ret;
>>>> +
>>>> +     if (!f2fs_sb_has_compression(sbi))
>>>> +             return -EOPNOTSUPP;
>>>> +
>>>> +     if (!(filp->f_mode & FMODE_WRITE))
>>>> +             return -EBADF;
>>>> +
>>>> +     if (!f2fs_compressed_file(inode))
>>>> +             return -EINVAL;
>>>
>>> algorithm backend check?
>>>
>>>> +
>>>> +     f2fs_balance_fs(F2FS_I_SB(inode), true);
>>>> +
>>>> +     file_start_write(filp);
>>>> +     inode_lock(inode);
>>>> +
>>>> +     if (f2fs_is_mmap_file(inode)) {
>>>> +             ret = -EBUSY;
>>>> +             goto out;
>>>> +     }
>>>> +
>>>> +     ret = filemap_write_and_wait_range(inode->i_mapping, 0, LLONG_MAX);
>>>> +     if (ret)
>>>> +             goto out;
>>>> +
>>>> +     set_inode_flag(inode, FI_ENABLE_COMPRESS);
>>>> +
>>>> +     last_idx = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
>>>> +
>>>> +     count = last_idx - page_idx;
>>>> +     while (count) {
>>>> +             int len = min(cluster_size, count);
>>>> +
>>>> +             ret = redirty_blocks(inode, page_idx, len);
>>>> +
>>>
>>> Ditto.
>>>
>>> Thanks,
>>>
>>>> +             if (ret < 0)
>>>> +                     break;
>>>> +
>>>> +             page_idx += len;
>>>> +             count -= len;
>>>> +     }
>>>> +
>>>> +     if (!ret)
>>>> +             ret = filemap_write_and_wait_range(inode->i_mapping, 0,
>>>> +                                                     LLONG_MAX);
>>>> +
>>>> +     clear_inode_flag(inode, FI_ENABLE_COMPRESS);
>>>> +
>>>> +     if (ret)
>>>> +             f2fs_warn(sbi, "%s: The file might be partially compressed "
>>>> +                             "(errno=%d). Please delete the file.\n",
>>>> +                             __func__, ret);
>>>> +out:
>>>> +     inode_unlock(inode);
>>>> +     file_end_write(filp);
>>>> +
>>>> +     return ret;
>>>> +}
>>>> +
>>>>    static long __f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
>>>>    {
>>>>        switch (cmd) {
>>>> @@ -4113,6 +4287,10 @@ static long __f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
>>>>                return f2fs_ioc_get_compress_option(filp, arg);
>>>>        case F2FS_IOC_SET_COMPRESS_OPTION:
>>>>                return f2fs_ioc_set_compress_option(filp, arg);
>>>> +     case F2FS_IOC_DECOMPRESS_FILE:
>>>> +             return f2fs_ioc_decompress_file(filp, arg);
>>>> +     case F2FS_IOC_COMPRESS_FILE:
>>>> +             return f2fs_ioc_compress_file(filp, arg);
>>>>        default:
>>>>                return -ENOTTY;
>>>>        }
>>>> @@ -4352,7 +4530,8 @@ long f2fs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
>>>>        case F2FS_IOC_SEC_TRIM_FILE:
>>>>        case F2FS_IOC_GET_COMPRESS_OPTION:
>>>>        case F2FS_IOC_SET_COMPRESS_OPTION:
>>>> -             break;
>>>> +     case F2FS_IOC_DECOMPRESS_FILE:
>>>> +     case F2FS_IOC_COMPRESS_FILE:
>>>>        default:
>>>>                return -ENOIOCTLCMD;
>>>>        }
>>>> diff --git a/include/uapi/linux/f2fs.h b/include/uapi/linux/f2fs.h
>>>> index f00199a2e38b..352a822d4370 100644
>>>> --- a/include/uapi/linux/f2fs.h
>>>> +++ b/include/uapi/linux/f2fs.h
>>>> @@ -40,6 +40,8 @@
>>>>                                                struct f2fs_comp_option)
>>>>    #define F2FS_IOC_SET_COMPRESS_OPTION        _IOW(F2FS_IOCTL_MAGIC, 22,      \
>>>>                                                struct f2fs_comp_option)
>>>> +#define F2FS_IOC_DECOMPRESS_FILE     _IO(F2FS_IOCTL_MAGIC, 23)
>>>> +#define F2FS_IOC_COMPRESS_FILE               _IO(F2FS_IOCTL_MAGIC, 24)
>>>>
>>>>    /*
>>>>     * should be same as XFS_IOC_GOINGDOWN.
>>>>
> .
> 

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [f2fs-dev] [PATCH 2/2] f2fs: add F2FS_IOC_DECOMPRESS_FILE and F2FS_IOC_COMPRESS_FILE
  2020-11-26  5:04     ` Daeho Jeong
  2020-11-26  6:35       ` Daeho Jeong
@ 2020-11-26 17:49       ` Eric Biggers
  2020-11-26 23:46         ` Daeho Jeong
  1 sibling, 1 reply; 18+ messages in thread
From: Eric Biggers @ 2020-11-26 17:49 UTC (permalink / raw)
  To: Daeho Jeong
  Cc: Chao Yu, Daeho Jeong, kernel-team, linux-kernel, linux-f2fs-devel

On Thu, Nov 26, 2020 at 02:04:41PM +0900, Daeho Jeong wrote:
> Eric,
> 
> do_page_cache_ra() is declared in mm/internal.h for internal use
> within mm, so we cannot use it right now.
> So I think we could use page_cache_ra_unbounded(), since we already
> check the i_size boundary on our own.
> What do you think?

What about page_cache_async_readahead() or page_cache_sync_readahead()?

- Eric


* Re: [f2fs-dev] [PATCH 2/2] f2fs: add F2FS_IOC_DECOMPRESS_FILE and F2FS_IOC_COMPRESS_FILE
  2020-11-26 17:49       ` Eric Biggers
@ 2020-11-26 23:46         ` Daeho Jeong
  2020-11-27  0:30           ` Daeho Jeong
  0 siblings, 1 reply; 18+ messages in thread
From: Daeho Jeong @ 2020-11-26 23:46 UTC (permalink / raw)
  To: Eric Biggers
  Cc: Chao Yu, Daeho Jeong, kernel-team, linux-kernel, linux-f2fs-devel

Chao,

Got it~

Eric,

Actually, I wanted to bypass the internal readahead mechanism by using
page_cache_ra_unbounded() to generate cluster-size-aligned read
requests.
But page_cache_async_readahead() or page_cache_sync_readahead() could
also be good enough, since those can compensate for misaligned reads
by reading more pages in advance.

Thanks,

On Fri, Nov 27, 2020 at 2:49 AM, Eric Biggers <ebiggers@kernel.org> wrote:
>
> On Thu, Nov 26, 2020 at 02:04:41PM +0900, Daeho Jeong wrote:
> > Eric,
> >
> > do_page_cache_ra() is declared in mm/internal.h for internal use
> > within mm, so we cannot use it right now.
> > So I think we could use page_cache_ra_unbounded(), since we already
> > check the i_size boundary on our own.
> > What do you think?
>
> What about page_cache_async_readahead() or page_cache_sync_readahead()?
>
> - Eric


* Re: [f2fs-dev] [PATCH 2/2] f2fs: add F2FS_IOC_DECOMPRESS_FILE and F2FS_IOC_COMPRESS_FILE
  2020-11-26 23:46         ` Daeho Jeong
@ 2020-11-27  0:30           ` Daeho Jeong
  0 siblings, 0 replies; 18+ messages in thread
From: Daeho Jeong @ 2020-11-27  0:30 UTC (permalink / raw)
  To: Eric Biggers
  Cc: Chao Yu, Daeho Jeong, kernel-team, linux-kernel, linux-f2fs-devel

Rethinking this, page_cache_sync_readahead() is not a good fit for our
situation; it might end up issuing cluster-misaligned reads, which
trigger duplicated internal cluster reads.

On Fri, Nov 27, 2020 at 8:46 AM, Daeho Jeong <daeho43@gmail.com> wrote:
>
> Chao,
>
> Got it~
>
> Eric,
>
> Actually, I wanted to bypass the internal readahead mechanism by using
> page_cache_ra_unbounded() to generate cluster-size-aligned read
> requests.
> But page_cache_async_readahead() or page_cache_sync_readahead() could
> also be good enough, since those can compensate for misaligned reads
> by reading more pages in advance.
>
> Thanks,
>
> On Fri, Nov 27, 2020 at 2:49 AM, Eric Biggers <ebiggers@kernel.org> wrote:
> >
> > On Thu, Nov 26, 2020 at 02:04:41PM +0900, Daeho Jeong wrote:
> > > Eric,
> > >
> > > do_page_cache_ra() is declared in mm/internal.h for internal use
> > > within mm, so we cannot use it right now.
> > > So I think we could use page_cache_ra_unbounded(), since we already
> > > check the i_size boundary on our own.
> > > What do you think?
> >
> > What about page_cache_async_readahead() or page_cache_sync_readahead()?
> >
> > - Eric


end of thread, other threads:[~2020-11-27  0:31 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-11-23  3:17 [PATCH 1/2] f2fs: add compress_mode mount option Daeho Jeong
2020-11-23  3:17 ` [PATCH 2/2] f2fs: add F2FS_IOC_DECOMPRESS_FILE and F2FS_IOC_COMPRESS_FILE Daeho Jeong
2020-11-23 17:19   ` [f2fs-dev] " Jaegeuk Kim
2020-11-23 18:48   ` Eric Biggers
2020-11-23 23:02     ` Daeho Jeong
2020-11-23 23:29       ` Eric Biggers
2020-11-24  1:03         ` Daeho Jeong
2020-11-24  3:05   ` Chao Yu
2020-11-26  5:04     ` Daeho Jeong
2020-11-26  6:35       ` Daeho Jeong
2020-11-26  6:54         ` Chao Yu
2020-11-26 17:49       ` Eric Biggers
2020-11-26 23:46         ` Daeho Jeong
2020-11-27  0:30           ` Daeho Jeong
2020-11-23 17:18 ` [f2fs-dev] [PATCH 1/2] f2fs: add compress_mode mount option Jaegeuk Kim
2020-11-23 18:46 ` Eric Biggers
2020-11-23 23:03   ` Daeho Jeong
2020-11-24  2:16 ` Chao Yu
