* [PATCH] f2fs: add a way to limit roll forward recovery time
@ 2022-01-27 21:41 ` Jaegeuk Kim
  0 siblings, 0 replies; 22+ messages in thread
From: Jaegeuk Kim @ 2022-01-27 21:41 UTC (permalink / raw)
  To: linux-kernel, linux-f2fs-devel; +Cc: Jaegeuk Kim

This adds a sysfs entry to trigger a checkpoint during fsync() in order to
avoid a long roll-forward recovery time when booting the device.

Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
---
 Documentation/ABI/testing/sysfs-fs-f2fs | 6 ++++++
 fs/f2fs/checkpoint.c                    | 1 +
 fs/f2fs/f2fs.h                          | 3 +++
 fs/f2fs/node.c                          | 2 ++
 fs/f2fs/node.h                          | 3 +++
 fs/f2fs/recovery.c                      | 3 +++
 fs/f2fs/sysfs.c                         | 2 ++
 7 files changed, 20 insertions(+)

diff --git a/Documentation/ABI/testing/sysfs-fs-f2fs b/Documentation/ABI/testing/sysfs-fs-f2fs
index 87d3884c90ea..ce8103f522cb 100644
--- a/Documentation/ABI/testing/sysfs-fs-f2fs
+++ b/Documentation/ABI/testing/sysfs-fs-f2fs
@@ -567,3 +567,9 @@ Contact:	"Daeho Jeong" <daehojeong@google.com>
 Description:	You can set the trial count limit for GC urgent high mode with this value.
 		If GC thread gets to the limit, the mode will turn back to GC normal mode.
 		By default, the value is zero, which means there is no limit like before.
+
+What:		/sys/fs/f2fs/<disk>/max_roll_forward_node_blocks
+Date:		January 2022
+Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
+Description:	Controls max # of node block writes to be used for roll forward
+		recovery. This can limit the roll forward recovery time.
diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
index deeda95688f0..57a2d9164bee 100644
--- a/fs/f2fs/checkpoint.c
+++ b/fs/f2fs/checkpoint.c
@@ -1543,6 +1543,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 	/* update user_block_counts */
 	sbi->last_valid_block_count = sbi->total_valid_block_count;
 	percpu_counter_set(&sbi->alloc_valid_block_count, 0);
+	percpu_counter_set(&sbi->rf_node_block_count, 0);
 
 	/* Here, we have one bio having CP pack except cp pack 2 page */
 	f2fs_sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO);
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 63c90416364b..6ddb98ff0b7c 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -913,6 +913,7 @@ struct f2fs_nm_info {
 	nid_t max_nid;			/* maximum possible node ids */
 	nid_t available_nids;		/* # of available node ids */
 	nid_t next_scan_nid;		/* the next nid to be scanned */
+	nid_t max_rf_node_blocks;	/* max # of nodes for recovery */
 	unsigned int ram_thresh;	/* control the memory footprint */
 	unsigned int ra_nid_pages;	/* # of nid pages to be readaheaded */
 	unsigned int dirty_nats_ratio;	/* control dirty nats ratio threshold */
@@ -1684,6 +1685,8 @@ struct f2fs_sb_info {
 	atomic_t nr_pages[NR_COUNT_TYPE];
 	/* # of allocated blocks */
 	struct percpu_counter alloc_valid_block_count;
+	/* # of node block writes as roll forward recovery */
+	struct percpu_counter rf_node_block_count;
 
 	/* writeback control */
 	atomic_t wb_sync_req[META];	/* count # of WB_SYNC threads */
diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index 93512f8859d5..0d9883457579 100644
--- a/fs/f2fs/node.c
+++ b/fs/f2fs/node.c
@@ -1782,6 +1782,7 @@ int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
 
 			if (!atomic || page == last_page) {
 				set_fsync_mark(page, 1);
+				percpu_counter_inc(&sbi->rf_node_block_count);
 				if (IS_INODE(page)) {
 					if (is_inode_flag_set(inode,
 								FI_DIRTY_INODE))
@@ -3218,6 +3219,7 @@ static int init_node_manager(struct f2fs_sb_info *sbi)
 	nm_i->ram_thresh = DEF_RAM_THRESHOLD;
 	nm_i->ra_nid_pages = DEF_RA_NID_PAGES;
 	nm_i->dirty_nats_ratio = DEF_DIRTY_NAT_RATIO_THRESHOLD;
+	nm_i->max_rf_node_blocks = DEF_RF_NODE_BLOCKS;
 
 	INIT_RADIX_TREE(&nm_i->free_nid_root, GFP_ATOMIC);
 	INIT_LIST_HEAD(&nm_i->free_nid_list);
diff --git a/fs/f2fs/node.h b/fs/f2fs/node.h
index 18b98cf0465b..fe56fd29c0d3 100644
--- a/fs/f2fs/node.h
+++ b/fs/f2fs/node.h
@@ -31,6 +31,9 @@
 /* control total # of nats */
 #define DEF_NAT_CACHE_THRESHOLD			100000
 
 +/* control total # of node writes used for roll-forward recovery */
+#define DEF_RF_NODE_BLOCKS			100
+
 /* vector size for gang look-up from nat cache that consists of radix tree */
 #define NATVEC_SIZE	64
 #define SETVEC_SIZE	32
diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
index 10d152cfa58d..f69b685fb2b2 100644
--- a/fs/f2fs/recovery.c
+++ b/fs/f2fs/recovery.c
@@ -53,9 +53,12 @@ extern struct kmem_cache *f2fs_cf_name_slab;
 bool f2fs_space_for_roll_forward(struct f2fs_sb_info *sbi)
 {
 	s64 nalloc = percpu_counter_sum_positive(&sbi->alloc_valid_block_count);
+	u32 rf_node = percpu_counter_sum_positive(&sbi->rf_node_block_count);
 
 	if (sbi->last_valid_block_count + nalloc > sbi->user_block_count)
 		return false;
+	if (rf_node >= NM_I(sbi)->max_rf_node_blocks)
+		return false;
 	return true;
 }
 
diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
index 281bc0133ee6..47efcf233afd 100644
--- a/fs/f2fs/sysfs.c
+++ b/fs/f2fs/sysfs.c
@@ -732,6 +732,7 @@ F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, min_ssr_sections, min_ssr_sections);
 F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, ram_thresh, ram_thresh);
 F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, ra_nid_pages, ra_nid_pages);
 F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, dirty_nats_ratio, dirty_nats_ratio);
+F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, max_roll_forward_node_blocks, max_rf_node_blocks);
 F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, max_victim_search, max_victim_search);
 F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, migration_granularity, migration_granularity);
 F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, dir_level, dir_level);
@@ -855,6 +856,7 @@ static struct attribute *f2fs_attrs[] = {
 	ATTR_LIST(ram_thresh),
 	ATTR_LIST(ra_nid_pages),
 	ATTR_LIST(dirty_nats_ratio),
+	ATTR_LIST(max_roll_forward_node_blocks),
 	ATTR_LIST(cp_interval),
 	ATTR_LIST(idle_interval),
 	ATTR_LIST(discard_idle_interval),
-- 
2.35.0.rc0.227.g00780c9af4-goog


^ permalink raw reply related	[flat|nested] 22+ messages in thread
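
A note on the mechanism: f2fs_space_for_roll_forward() is consulted when
deciding whether an fsync() may rely on roll-forward recovery. Once the new
counter of fsync-marked node block writes reaches the limit, that check
fails, fsync() falls back to a full checkpoint, and do_checkpoint() resets
the counter. A minimal user-space sketch of this control loop, with plain
integers standing in for the kernel's percpu counters and simplified
function names (not the real f2fs call chain):

#include <stdbool.h>
#include <stdio.h>

struct sbi_model {
	unsigned long rf_node_block_count;	/* node writes since last CP */
	unsigned long max_rf_node_blocks;	/* sysfs-tunable budget */
};

/* Mirrors the check added to f2fs_space_for_roll_forward(). */
static bool space_for_roll_forward(const struct sbi_model *sbi)
{
	return sbi->rf_node_block_count < sbi->max_rf_node_blocks;
}

static void fsync_node_block(struct sbi_model *sbi)
{
	if (!space_for_roll_forward(sbi)) {
		sbi->rf_node_block_count = 0;	/* do_checkpoint() resets it */
		printf("checkpoint\n");
	}
	sbi->rf_node_block_count++;	/* percpu_counter_inc() in the patch */
}

int main(void)
{
	struct sbi_model sbi = { 0, 100 };	/* DEF_RF_NODE_BLOCKS = 100 */

	for (int i = 0; i < 250; i++)
		fsync_node_block(&sbi);
	return 0;	/* prints "checkpoint" twice across 250 fsyncs */
}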

* Re: [f2fs-dev] [PATCH] f2fs: add a way to limit roll forward recovery time
  2022-01-27 21:41 ` [f2fs-dev] " Jaegeuk Kim
@ 2022-01-29  8:20   ` Chao Yu
  -1 siblings, 0 replies; 22+ messages in thread
From: Chao Yu @ 2022-01-29  8:20 UTC (permalink / raw)
  To: Jaegeuk Kim, linux-kernel, linux-f2fs-devel

On 2022/1/28 5:41, Jaegeuk Kim wrote:
> This adds a sysfs entry to trigger a checkpoint during fsync() in order to
> avoid a long roll-forward recovery time when booting the device.
> 
> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
> ---
>   Documentation/ABI/testing/sysfs-fs-f2fs | 6 ++++++
>   fs/f2fs/checkpoint.c                    | 1 +
>   fs/f2fs/f2fs.h                          | 3 +++
>   fs/f2fs/node.c                          | 2 ++
>   fs/f2fs/node.h                          | 3 +++
>   fs/f2fs/recovery.c                      | 3 +++
>   fs/f2fs/sysfs.c                         | 2 ++
>   7 files changed, 20 insertions(+)
> 
> diff --git a/Documentation/ABI/testing/sysfs-fs-f2fs b/Documentation/ABI/testing/sysfs-fs-f2fs
> index 87d3884c90ea..ce8103f522cb 100644
> --- a/Documentation/ABI/testing/sysfs-fs-f2fs
> +++ b/Documentation/ABI/testing/sysfs-fs-f2fs
> @@ -567,3 +567,9 @@ Contact:	"Daeho Jeong" <daehojeong@google.com>
>   Description:	You can set the trial count limit for GC urgent high mode with this value.
>   		If GC thread gets to the limit, the mode will turn back to GC normal mode.
>   		By default, the value is zero, which means there is no limit like before.
> +
> +What:		/sys/fs/f2fs/<disk>/max_roll_forward_node_blocks
> +Date:		January 2022
> +Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
> +Description:	Controls max # of node block writes to be used for roll forward
> +		recovery. This can limit the roll forward recovery time.
> diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
> index deeda95688f0..57a2d9164bee 100644
> --- a/fs/f2fs/checkpoint.c
> +++ b/fs/f2fs/checkpoint.c
> @@ -1543,6 +1543,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
>   	/* update user_block_counts */
>   	sbi->last_valid_block_count = sbi->total_valid_block_count;
>   	percpu_counter_set(&sbi->alloc_valid_block_count, 0);
> +	percpu_counter_set(&sbi->rf_node_block_count, 0);
>   
>   	/* Here, we have one bio having CP pack except cp pack 2 page */
>   	f2fs_sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO);
> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> index 63c90416364b..6ddb98ff0b7c 100644
> --- a/fs/f2fs/f2fs.h
> +++ b/fs/f2fs/f2fs.h
> @@ -913,6 +913,7 @@ struct f2fs_nm_info {
>   	nid_t max_nid;			/* maximum possible node ids */
>   	nid_t available_nids;		/* # of available node ids */
>   	nid_t next_scan_nid;		/* the next nid to be scanned */
> +	nid_t max_rf_node_blocks;	/* max # of nodes for recovery */
>   	unsigned int ram_thresh;	/* control the memory footprint */
>   	unsigned int ra_nid_pages;	/* # of nid pages to be readaheaded */
>   	unsigned int dirty_nats_ratio;	/* control dirty nats ratio threshold */
> @@ -1684,6 +1685,8 @@ struct f2fs_sb_info {
>   	atomic_t nr_pages[NR_COUNT_TYPE];
>   	/* # of allocated blocks */
>   	struct percpu_counter alloc_valid_block_count;
> +	/* # of node block writes as roll forward recovery */
> +	struct percpu_counter rf_node_block_count;
>   
>   	/* writeback control */
>   	atomic_t wb_sync_req[META];	/* count # of WB_SYNC threads */
> diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
> index 93512f8859d5..0d9883457579 100644
> --- a/fs/f2fs/node.c
> +++ b/fs/f2fs/node.c
> @@ -1782,6 +1782,7 @@ int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
>   
>   			if (!atomic || page == last_page) {
>   				set_fsync_mark(page, 1);
> +				percpu_counter_inc(&sbi->rf_node_block_count);
>   				if (IS_INODE(page)) {
>   					if (is_inode_flag_set(inode,
>   								FI_DIRTY_INODE))
> @@ -3218,6 +3219,7 @@ static int init_node_manager(struct f2fs_sb_info *sbi)
>   	nm_i->ram_thresh = DEF_RAM_THRESHOLD;
>   	nm_i->ra_nid_pages = DEF_RA_NID_PAGES;
>   	nm_i->dirty_nats_ratio = DEF_DIRTY_NAT_RATIO_THRESHOLD;
> +	nm_i->max_rf_node_blocks = DEF_RF_NODE_BLOCKS;
>   
>   	INIT_RADIX_TREE(&nm_i->free_nid_root, GFP_ATOMIC);
>   	INIT_LIST_HEAD(&nm_i->free_nid_list);
> diff --git a/fs/f2fs/node.h b/fs/f2fs/node.h
> index 18b98cf0465b..fe56fd29c0d3 100644
> --- a/fs/f2fs/node.h
> +++ b/fs/f2fs/node.h
> @@ -31,6 +31,9 @@
>   /* control total # of nats */
>   #define DEF_NAT_CACHE_THRESHOLD			100000
>   
> > +/* control total # of node writes used for roll-forward recovery */
> +#define DEF_RF_NODE_BLOCKS			100

Will we suffer a performance regression in scenarios where the user triggers
fsync/fdatasync frequently, e.g. a performance test?

If this issue only hits a corner case, wouldn't it be better to increase
DEF_RF_NODE_BLOCKS so that common cases such as AMSP are unaffected?

Thanks,

> +
>   /* vector size for gang look-up from nat cache that consists of radix tree */
>   #define NATVEC_SIZE	64
>   #define SETVEC_SIZE	32
> diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
> index 10d152cfa58d..f69b685fb2b2 100644
> --- a/fs/f2fs/recovery.c
> +++ b/fs/f2fs/recovery.c
> @@ -53,9 +53,12 @@ extern struct kmem_cache *f2fs_cf_name_slab;
>   bool f2fs_space_for_roll_forward(struct f2fs_sb_info *sbi)
>   {
>   	s64 nalloc = percpu_counter_sum_positive(&sbi->alloc_valid_block_count);
> +	u32 rf_node = percpu_counter_sum_positive(&sbi->rf_node_block_count);
>   
>   	if (sbi->last_valid_block_count + nalloc > sbi->user_block_count)
>   		return false;
> +	if (rf_node >= NM_I(sbi)->max_rf_node_blocks)
> +		return false;
>   	return true;
>   }
>   
> diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
> index 281bc0133ee6..47efcf233afd 100644
> --- a/fs/f2fs/sysfs.c
> +++ b/fs/f2fs/sysfs.c
> @@ -732,6 +732,7 @@ F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, min_ssr_sections, min_ssr_sections);
>   F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, ram_thresh, ram_thresh);
>   F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, ra_nid_pages, ra_nid_pages);
>   F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, dirty_nats_ratio, dirty_nats_ratio);
> +F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, max_roll_forward_node_blocks, max_rf_node_blocks);
>   F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, max_victim_search, max_victim_search);
>   F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, migration_granularity, migration_granularity);
>   F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, dir_level, dir_level);
> @@ -855,6 +856,7 @@ static struct attribute *f2fs_attrs[] = {
>   	ATTR_LIST(ram_thresh),
>   	ATTR_LIST(ra_nid_pages),
>   	ATTR_LIST(dirty_nats_ratio),
> +	ATTR_LIST(max_roll_forward_node_blocks),
>   	ATTR_LIST(cp_interval),
>   	ATTR_LIST(idle_interval),
>   	ATTR_LIST(discard_idle_interval),

^ permalink raw reply	[flat|nested] 22+ messages in thread
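
The steady-state cost behind this question is one percpu_counter_inc() per
fsync-marked node block, plus a percpu_counter_sum_positive() whenever the
checkpoint decision consults f2fs_space_for_roll_forward(). A rough
user-space model of why the increment stays cheap while the sum scales with
CPU count (per-CPU slots folded only on read; the real kernel counter also
batches deltas into a shared count, which this sketch omits):

#include <stdio.h>

#define NR_CPUS 8

/* Toy percpu counter: writers touch only their own CPU's slot,
 * so there is no shared cache line to bounce on the fsync path. */
struct pcpu_counter {
	long cpu[NR_CPUS];
};

static void pcpu_inc(struct pcpu_counter *c, int this_cpu)
{
	c->cpu[this_cpu]++;		/* O(1), cache-local */
}

static long pcpu_sum_positive(const struct pcpu_counter *c)
{
	long sum = 0;

	for (int i = 0; i < NR_CPUS; i++)	/* O(NR_CPUS) fold on read */
		sum += c->cpu[i];
	return sum > 0 ? sum : 0;
}

int main(void)
{
	struct pcpu_counter rf = { { 0 } };

	pcpu_inc(&rf, 0);	/* fsync on CPU 0 */
	pcpu_inc(&rf, 3);	/* fsync on CPU 3 */
	printf("fsync-marked node blocks: %ld\n", pcpu_sum_positive(&rf));
	return 0;
}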

* Re: [f2fs-dev] [PATCH] f2fs: add a way to limit roll forward recovery time
  2022-01-29  8:20   ` Chao Yu
@ 2022-02-03  0:33     ` Jaegeuk Kim
  -1 siblings, 0 replies; 22+ messages in thread
From: Jaegeuk Kim @ 2022-02-03  0:33 UTC (permalink / raw)
  To: Chao Yu; +Cc: linux-kernel, linux-f2fs-devel

On 01/29, Chao Yu wrote:
> On 2022/1/28 5:41, Jaegeuk Kim wrote:
> > This adds a sysfs entry to trigger a checkpoint during fsync() in order to
> > avoid a long roll-forward recovery time when booting the device.
> > 
> > Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
> > ---
> >   Documentation/ABI/testing/sysfs-fs-f2fs | 6 ++++++
> >   fs/f2fs/checkpoint.c                    | 1 +
> >   fs/f2fs/f2fs.h                          | 3 +++
> >   fs/f2fs/node.c                          | 2 ++
> >   fs/f2fs/node.h                          | 3 +++
> >   fs/f2fs/recovery.c                      | 3 +++
> >   fs/f2fs/sysfs.c                         | 2 ++
> >   7 files changed, 20 insertions(+)
> > 
> > diff --git a/Documentation/ABI/testing/sysfs-fs-f2fs b/Documentation/ABI/testing/sysfs-fs-f2fs
> > index 87d3884c90ea..ce8103f522cb 100644
> > --- a/Documentation/ABI/testing/sysfs-fs-f2fs
> > +++ b/Documentation/ABI/testing/sysfs-fs-f2fs
> > @@ -567,3 +567,9 @@ Contact:	"Daeho Jeong" <daehojeong@google.com>
> >   Description:	You can set the trial count limit for GC urgent high mode with this value.
> >   		If GC thread gets to the limit, the mode will turn back to GC normal mode.
> >   		By default, the value is zero, which means there is no limit like before.
> > +
> > +What:		/sys/fs/f2fs/<disk>/max_roll_forward_node_blocks
> > +Date:		January 2022
> > +Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
> > +Description:	Controls max # of node block writes to be used for roll forward
> > +		recovery. This can limit the roll forward recovery time.
> > diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
> > index deeda95688f0..57a2d9164bee 100644
> > --- a/fs/f2fs/checkpoint.c
> > +++ b/fs/f2fs/checkpoint.c
> > @@ -1543,6 +1543,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
> >   	/* update user_block_counts */
> >   	sbi->last_valid_block_count = sbi->total_valid_block_count;
> >   	percpu_counter_set(&sbi->alloc_valid_block_count, 0);
> > +	percpu_counter_set(&sbi->rf_node_block_count, 0);
> >   	/* Here, we have one bio having CP pack except cp pack 2 page */
> >   	f2fs_sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO);
> > diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> > index 63c90416364b..6ddb98ff0b7c 100644
> > --- a/fs/f2fs/f2fs.h
> > +++ b/fs/f2fs/f2fs.h
> > @@ -913,6 +913,7 @@ struct f2fs_nm_info {
> >   	nid_t max_nid;			/* maximum possible node ids */
> >   	nid_t available_nids;		/* # of available node ids */
> >   	nid_t next_scan_nid;		/* the next nid to be scanned */
> > +	nid_t max_rf_node_blocks;	/* max # of nodes for recovery */
> >   	unsigned int ram_thresh;	/* control the memory footprint */
> >   	unsigned int ra_nid_pages;	/* # of nid pages to be readaheaded */
> >   	unsigned int dirty_nats_ratio;	/* control dirty nats ratio threshold */
> > @@ -1684,6 +1685,8 @@ struct f2fs_sb_info {
> >   	atomic_t nr_pages[NR_COUNT_TYPE];
> >   	/* # of allocated blocks */
> >   	struct percpu_counter alloc_valid_block_count;
> > +	/* # of node block writes as roll forward recovery */
> > +	struct percpu_counter rf_node_block_count;
> >   	/* writeback control */
> >   	atomic_t wb_sync_req[META];	/* count # of WB_SYNC threads */
> > diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
> > index 93512f8859d5..0d9883457579 100644
> > --- a/fs/f2fs/node.c
> > +++ b/fs/f2fs/node.c
> > @@ -1782,6 +1782,7 @@ int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
> >   			if (!atomic || page == last_page) {
> >   				set_fsync_mark(page, 1);
> > +				percpu_counter_inc(&sbi->rf_node_block_count);
> >   				if (IS_INODE(page)) {
> >   					if (is_inode_flag_set(inode,
> >   								FI_DIRTY_INODE))
> > @@ -3218,6 +3219,7 @@ static int init_node_manager(struct f2fs_sb_info *sbi)
> >   	nm_i->ram_thresh = DEF_RAM_THRESHOLD;
> >   	nm_i->ra_nid_pages = DEF_RA_NID_PAGES;
> >   	nm_i->dirty_nats_ratio = DEF_DIRTY_NAT_RATIO_THRESHOLD;
> > +	nm_i->max_rf_node_blocks = DEF_RF_NODE_BLOCKS;
> >   	INIT_RADIX_TREE(&nm_i->free_nid_root, GFP_ATOMIC);
> >   	INIT_LIST_HEAD(&nm_i->free_nid_list);
> > diff --git a/fs/f2fs/node.h b/fs/f2fs/node.h
> > index 18b98cf0465b..fe56fd29c0d3 100644
> > --- a/fs/f2fs/node.h
> > +++ b/fs/f2fs/node.h
> > @@ -31,6 +31,9 @@
> >   /* control total # of nats */
> >   #define DEF_NAT_CACHE_THRESHOLD			100000
> > +/* control total # of node writes used for roll-forward recovery */
> > +#define DEF_RF_NODE_BLOCKS			100
> 
> Will we suffer a performance regression in scenarios where the user triggers
> fsync/fdatasync frequently, e.g. a performance test?
> 
> If this issue only hits a corner case, wouldn't it be better to increase
> DEF_RF_NODE_BLOCKS so that common cases such as AMSP are unaffected?

I got only one report, so let me keep the behavior as-is by default. (ref. v2)

> 
> Thanks,
> 
> > +
> >   /* vector size for gang look-up from nat cache that consists of radix tree */
> >   #define NATVEC_SIZE	64
> >   #define SETVEC_SIZE	32
> > diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
> > index 10d152cfa58d..f69b685fb2b2 100644
> > --- a/fs/f2fs/recovery.c
> > +++ b/fs/f2fs/recovery.c
> > @@ -53,9 +53,12 @@ extern struct kmem_cache *f2fs_cf_name_slab;
> >   bool f2fs_space_for_roll_forward(struct f2fs_sb_info *sbi)
> >   {
> >   	s64 nalloc = percpu_counter_sum_positive(&sbi->alloc_valid_block_count);
> > +	u32 rf_node = percpu_counter_sum_positive(&sbi->rf_node_block_count);
> >   	if (sbi->last_valid_block_count + nalloc > sbi->user_block_count)
> >   		return false;
> > +	if (rf_node >= NM_I(sbi)->max_rf_node_blocks)
> > +		return false;
> >   	return true;
> >   }
> > diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
> > index 281bc0133ee6..47efcf233afd 100644
> > --- a/fs/f2fs/sysfs.c
> > +++ b/fs/f2fs/sysfs.c
> > @@ -732,6 +732,7 @@ F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, min_ssr_sections, min_ssr_sections);
> >   F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, ram_thresh, ram_thresh);
> >   F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, ra_nid_pages, ra_nid_pages);
> >   F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, dirty_nats_ratio, dirty_nats_ratio);
> > +F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, max_roll_forward_node_blocks, max_rf_node_blocks);
> >   F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, max_victim_search, max_victim_search);
> >   F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, migration_granularity, migration_granularity);
> >   F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, dir_level, dir_level);
> > @@ -855,6 +856,7 @@ static struct attribute *f2fs_attrs[] = {
> >   	ATTR_LIST(ram_thresh),
> >   	ATTR_LIST(ra_nid_pages),
> >   	ATTR_LIST(dirty_nats_ratio),
> > +	ATTR_LIST(max_roll_forward_node_blocks),
> >   	ATTR_LIST(cp_interval),
> >   	ATTR_LIST(idle_interval),
> >   	ATTR_LIST(discard_idle_interval),

^ permalink raw reply	[flat|nested] 22+ messages in thread
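
Since v2 leaves the limit unenforced by default, the budget only takes
effect after a non-zero value is written to the new sysfs attribute. A
minimal user-space sketch of opting in ("sda1" below is a hypothetical disk
name; substitute the actual <disk> entry under /sys/fs/f2fs/):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* "sda1" is a placeholder; use the real <disk> name. */
	const char *path = "/sys/fs/f2fs/sda1/max_roll_forward_node_blocks";
	const char *val = "100\n";	/* allow 100 node block writes per CP */
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, val, strlen(val)) != (ssize_t)strlen(val))
		perror("write");
	close(fd);
	return 0;
}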

* Re: [PATCH v2] f2fs: add a way to limit roll forward recovery time
  2022-01-27 21:41 ` [f2fs-dev] " Jaegeuk Kim
@ 2022-02-03  0:34   ` Jaegeuk Kim
  -1 siblings, 0 replies; 22+ messages in thread
From: Jaegeuk Kim @ 2022-02-03  0:34 UTC (permalink / raw)
  To: linux-kernel, linux-f2fs-devel

This adds a sysfs entry to trigger a checkpoint during fsync() in order to
avoid a long roll-forward recovery time when booting the device. The default
value doesn't enforce the limit, which keeps the behavior the same as before.

Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
---
v2 from v1:
 - make the default w/o enforcement

 Documentation/ABI/testing/sysfs-fs-f2fs | 6 ++++++
 fs/f2fs/checkpoint.c                    | 1 +
 fs/f2fs/f2fs.h                          | 3 +++
 fs/f2fs/node.c                          | 2 ++
 fs/f2fs/node.h                          | 3 +++
 fs/f2fs/recovery.c                      | 4 ++++
 fs/f2fs/sysfs.c                         | 2 ++
 7 files changed, 21 insertions(+)

diff --git a/Documentation/ABI/testing/sysfs-fs-f2fs b/Documentation/ABI/testing/sysfs-fs-f2fs
index 87d3884c90ea..ce8103f522cb 100644
--- a/Documentation/ABI/testing/sysfs-fs-f2fs
+++ b/Documentation/ABI/testing/sysfs-fs-f2fs
@@ -567,3 +567,9 @@ Contact:	"Daeho Jeong" <daehojeong@google.com>
 Description:	You can set the trial count limit for GC urgent high mode with this value.
 		If GC thread gets to the limit, the mode will turn back to GC normal mode.
 		By default, the value is zero, which means there is no limit like before.
+
+What:		/sys/fs/f2fs/<disk>/max_roll_forward_node_blocks
+Date:		January 2022
+Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
+Description:	Controls max # of node block writes to be used for roll forward
+		recovery. This can limit the roll forward recovery time.
diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
index deeda95688f0..57a2d9164bee 100644
--- a/fs/f2fs/checkpoint.c
+++ b/fs/f2fs/checkpoint.c
@@ -1543,6 +1543,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 	/* update user_block_counts */
 	sbi->last_valid_block_count = sbi->total_valid_block_count;
 	percpu_counter_set(&sbi->alloc_valid_block_count, 0);
+	percpu_counter_set(&sbi->rf_node_block_count, 0);
 
 	/* Here, we have one bio having CP pack except cp pack 2 page */
 	f2fs_sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO);
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 63c90416364b..6ddb98ff0b7c 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -913,6 +913,7 @@ struct f2fs_nm_info {
 	nid_t max_nid;			/* maximum possible node ids */
 	nid_t available_nids;		/* # of available node ids */
 	nid_t next_scan_nid;		/* the next nid to be scanned */
+	nid_t max_rf_node_blocks;	/* max # of nodes for recovery */
 	unsigned int ram_thresh;	/* control the memory footprint */
 	unsigned int ra_nid_pages;	/* # of nid pages to be readaheaded */
 	unsigned int dirty_nats_ratio;	/* control dirty nats ratio threshold */
@@ -1684,6 +1685,8 @@ struct f2fs_sb_info {
 	atomic_t nr_pages[NR_COUNT_TYPE];
 	/* # of allocated blocks */
 	struct percpu_counter alloc_valid_block_count;
+	/* # of node block writes as roll forward recovery */
+	struct percpu_counter rf_node_block_count;
 
 	/* writeback control */
 	atomic_t wb_sync_req[META];	/* count # of WB_SYNC threads */
diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index 93512f8859d5..0d9883457579 100644
--- a/fs/f2fs/node.c
+++ b/fs/f2fs/node.c
@@ -1782,6 +1782,7 @@ int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
 
 			if (!atomic || page == last_page) {
 				set_fsync_mark(page, 1);
+				percpu_counter_inc(&sbi->rf_node_block_count);
 				if (IS_INODE(page)) {
 					if (is_inode_flag_set(inode,
 								FI_DIRTY_INODE))
@@ -3218,6 +3219,7 @@ static int init_node_manager(struct f2fs_sb_info *sbi)
 	nm_i->ram_thresh = DEF_RAM_THRESHOLD;
 	nm_i->ra_nid_pages = DEF_RA_NID_PAGES;
 	nm_i->dirty_nats_ratio = DEF_DIRTY_NAT_RATIO_THRESHOLD;
+	nm_i->max_rf_node_blocks = DEF_RF_NODE_BLOCKS;
 
 	INIT_RADIX_TREE(&nm_i->free_nid_root, GFP_ATOMIC);
 	INIT_LIST_HEAD(&nm_i->free_nid_list);
diff --git a/fs/f2fs/node.h b/fs/f2fs/node.h
index 18b98cf0465b..4c1d34bfea78 100644
--- a/fs/f2fs/node.h
+++ b/fs/f2fs/node.h
@@ -31,6 +31,9 @@
 /* control total # of nats */
 #define DEF_NAT_CACHE_THRESHOLD			100000
 
 +/* control total # of node writes used for roll-forward recovery */
+#define DEF_RF_NODE_BLOCKS			0
+
 /* vector size for gang look-up from nat cache that consists of radix tree */
 #define NATVEC_SIZE	64
 #define SETVEC_SIZE	32
diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
index 10d152cfa58d..1c8041fd854e 100644
--- a/fs/f2fs/recovery.c
+++ b/fs/f2fs/recovery.c
@@ -53,9 +53,13 @@ extern struct kmem_cache *f2fs_cf_name_slab;
 bool f2fs_space_for_roll_forward(struct f2fs_sb_info *sbi)
 {
 	s64 nalloc = percpu_counter_sum_positive(&sbi->alloc_valid_block_count);
+	u32 rf_node = percpu_counter_sum_positive(&sbi->rf_node_block_count);
 
 	if (sbi->last_valid_block_count + nalloc > sbi->user_block_count)
 		return false;
+	if (NM_I(sbi)->max_rf_node_blocks &&
+			rf_node >= NM_I(sbi)->max_rf_node_blocks)
+		return false;
 	return true;
 }
 
diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
index 281bc0133ee6..47efcf233afd 100644
--- a/fs/f2fs/sysfs.c
+++ b/fs/f2fs/sysfs.c
@@ -732,6 +732,7 @@ F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, min_ssr_sections, min_ssr_sections);
 F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, ram_thresh, ram_thresh);
 F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, ra_nid_pages, ra_nid_pages);
 F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, dirty_nats_ratio, dirty_nats_ratio);
+F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, max_roll_forward_node_blocks, max_rf_node_blocks);
 F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, max_victim_search, max_victim_search);
 F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, migration_granularity, migration_granularity);
 F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, dir_level, dir_level);
@@ -855,6 +856,7 @@ static struct attribute *f2fs_attrs[] = {
 	ATTR_LIST(ram_thresh),
 	ATTR_LIST(ra_nid_pages),
 	ATTR_LIST(dirty_nats_ratio),
+	ATTR_LIST(max_roll_forward_node_blocks),
 	ATTR_LIST(cp_interval),
 	ATTR_LIST(idle_interval),
 	ATTR_LIST(discard_idle_interval),
-- 
2.35.0.rc2.247.g8bbb082509-goog


^ permalink raw reply related	[flat|nested] 22+ messages in thread
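
Distilling the only functional change from v1: with DEF_RF_NODE_BLOCKS now
0, f2fs_space_for_roll_forward() treats zero as "no limit". A compilable
sketch of the two predicates side by side (plain integers in place of the
percpu counter sums):

#include <assert.h>
#include <stdbool.h>

/* v1: the default of 100 always enforced a budget. */
static bool space_v1(unsigned int rf_node, unsigned int max)
{
	return rf_node < max;
}

/* v2: max == 0 disables enforcement, so the pre-patch behavior is
 * preserved unless an admin opts in via sysfs. */
static bool space_v2(unsigned int rf_node, unsigned int max)
{
	return max == 0 || rf_node < max;
}

int main(void)
{
	assert(!space_v1(100, 100));	/* v1 default: budget exhausted */
	assert(space_v2(100, 0));	/* v2 default: never exhausted */
	assert(!space_v2(100, 100));	/* opted in: matches v1 */
	return 0;
}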

* Re: [f2fs-dev] [PATCH v2] f2fs: add a way to limit roll forward recovery time
  2022-02-03  0:34   ` [f2fs-dev] " Jaegeuk Kim
@ 2022-02-03 14:46     ` Chao Yu
  -1 siblings, 0 replies; 22+ messages in thread
From: Chao Yu @ 2022-02-03 14:46 UTC (permalink / raw)
  To: Jaegeuk Kim, linux-kernel, linux-f2fs-devel

On 2022/2/3 8:34, Jaegeuk Kim wrote:
> This adds a sysfs entry to trigger a checkpoint during fsync() in order to
> avoid a long roll-forward recovery time when booting the device. The default
> value doesn't enforce the limit, which keeps the behavior the same as before.
> 
> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
> ---
> v2 from v1:
>   - make the default w/o enforcement
> 
>   Documentation/ABI/testing/sysfs-fs-f2fs | 6 ++++++
>   fs/f2fs/checkpoint.c                    | 1 +
>   fs/f2fs/f2fs.h                          | 3 +++
>   fs/f2fs/node.c                          | 2 ++
>   fs/f2fs/node.h                          | 3 +++
>   fs/f2fs/recovery.c                      | 4 ++++
>   fs/f2fs/sysfs.c                         | 2 ++
>   7 files changed, 21 insertions(+)
> 
> diff --git a/Documentation/ABI/testing/sysfs-fs-f2fs b/Documentation/ABI/testing/sysfs-fs-f2fs
> index 87d3884c90ea..ce8103f522cb 100644
> --- a/Documentation/ABI/testing/sysfs-fs-f2fs
> +++ b/Documentation/ABI/testing/sysfs-fs-f2fs
> @@ -567,3 +567,9 @@ Contact:	"Daeho Jeong" <daehojeong@google.com>
>   Description:	You can set the trial count limit for GC urgent high mode with this value.
>   		If GC thread gets to the limit, the mode will turn back to GC normal mode.
>   		By default, the value is zero, which means there is no limit like before.
> +
> +What:		/sys/fs/f2fs/<disk>/max_roll_forward_node_blocks
> +Date:		January 2022
> +Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
> +Description:	Controls max # of node block writes to be used for roll forward
> +		recovery. This can limit the roll forward recovery time.
> diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
> index deeda95688f0..57a2d9164bee 100644
> --- a/fs/f2fs/checkpoint.c
> +++ b/fs/f2fs/checkpoint.c
> @@ -1543,6 +1543,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
>   	/* update user_block_counts */
>   	sbi->last_valid_block_count = sbi->total_valid_block_count;
>   	percpu_counter_set(&sbi->alloc_valid_block_count, 0);
> +	percpu_counter_set(&sbi->rf_node_block_count, 0);
>   
>   	/* Here, we have one bio having CP pack except cp pack 2 page */
>   	f2fs_sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO);
> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> index 63c90416364b..6ddb98ff0b7c 100644
> --- a/fs/f2fs/f2fs.h
> +++ b/fs/f2fs/f2fs.h
> @@ -913,6 +913,7 @@ struct f2fs_nm_info {
>   	nid_t max_nid;			/* maximum possible node ids */
>   	nid_t available_nids;		/* # of available node ids */
>   	nid_t next_scan_nid;		/* the next nid to be scanned */
> +	nid_t max_rf_node_blocks;	/* max # of nodes for recovery */
>   	unsigned int ram_thresh;	/* control the memory footprint */
>   	unsigned int ra_nid_pages;	/* # of nid pages to be readaheaded */
>   	unsigned int dirty_nats_ratio;	/* control dirty nats ratio threshold */
> @@ -1684,6 +1685,8 @@ struct f2fs_sb_info {
>   	atomic_t nr_pages[NR_COUNT_TYPE];
>   	/* # of allocated blocks */
>   	struct percpu_counter alloc_valid_block_count;
> +	/* # of node block writes as roll forward recovery */
> +	struct percpu_counter rf_node_block_count;
>   
>   	/* writeback control */
>   	atomic_t wb_sync_req[META];	/* count # of WB_SYNC threads */
> diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
> index 93512f8859d5..0d9883457579 100644
> --- a/fs/f2fs/node.c
> +++ b/fs/f2fs/node.c
> @@ -1782,6 +1782,7 @@ int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
>   
>   			if (!atomic || page == last_page) {
>   				set_fsync_mark(page, 1);
> +				percpu_counter_inc(&sbi->rf_node_block_count);

if (NM_I(sbi)->max_rf_node_blocks)
	percpu_counter_inc(&sbi->rf_node_block_count);

Thanks,

>   				if (IS_INODE(page)) {
>   					if (is_inode_flag_set(inode,
>   								FI_DIRTY_INODE))
> @@ -3218,6 +3219,7 @@ static int init_node_manager(struct f2fs_sb_info *sbi)
>   	nm_i->ram_thresh = DEF_RAM_THRESHOLD;
>   	nm_i->ra_nid_pages = DEF_RA_NID_PAGES;
>   	nm_i->dirty_nats_ratio = DEF_DIRTY_NAT_RATIO_THRESHOLD;
> +	nm_i->max_rf_node_blocks = DEF_RF_NODE_BLOCKS;
>   
>   	INIT_RADIX_TREE(&nm_i->free_nid_root, GFP_ATOMIC);
>   	INIT_LIST_HEAD(&nm_i->free_nid_list);
> diff --git a/fs/f2fs/node.h b/fs/f2fs/node.h
> index 18b98cf0465b..4c1d34bfea78 100644
> --- a/fs/f2fs/node.h
> +++ b/fs/f2fs/node.h
> @@ -31,6 +31,9 @@
>   /* control total # of nats */
>   #define DEF_NAT_CACHE_THRESHOLD			100000
>   
> +/* control total # of node writes used for roll-forward recovery */
> +#define DEF_RF_NODE_BLOCKS			0
> +
>   /* vector size for gang look-up from nat cache that consists of radix tree */
>   #define NATVEC_SIZE	64
>   #define SETVEC_SIZE	32
> diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
> index 10d152cfa58d..1c8041fd854e 100644
> --- a/fs/f2fs/recovery.c
> +++ b/fs/f2fs/recovery.c
> @@ -53,9 +53,13 @@ extern struct kmem_cache *f2fs_cf_name_slab;
>   bool f2fs_space_for_roll_forward(struct f2fs_sb_info *sbi)
>   {
>   	s64 nalloc = percpu_counter_sum_positive(&sbi->alloc_valid_block_count);
> +	u32 rf_node = percpu_counter_sum_positive(&sbi->rf_node_block_count);
>   
>   	if (sbi->last_valid_block_count + nalloc > sbi->user_block_count)
>   		return false;
> +	if (NM_I(sbi)->max_rf_node_blocks &&
> +			rf_node >= NM_I(sbi)->max_rf_node_blocks)
> +		return false;
>   	return true;
>   }
>   
> diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
> index 281bc0133ee6..47efcf233afd 100644
> --- a/fs/f2fs/sysfs.c
> +++ b/fs/f2fs/sysfs.c
> @@ -732,6 +732,7 @@ F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, min_ssr_sections, min_ssr_sections);
>   F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, ram_thresh, ram_thresh);
>   F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, ra_nid_pages, ra_nid_pages);
>   F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, dirty_nats_ratio, dirty_nats_ratio);
> +F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, max_roll_forward_node_blocks, max_rf_node_blocks);
>   F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, max_victim_search, max_victim_search);
>   F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, migration_granularity, migration_granularity);
>   F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, dir_level, dir_level);
> @@ -855,6 +856,7 @@ static struct attribute *f2fs_attrs[] = {
>   	ATTR_LIST(ram_thresh),
>   	ATTR_LIST(ra_nid_pages),
>   	ATTR_LIST(dirty_nats_ratio),
> +	ATTR_LIST(max_roll_forward_node_blocks),
>   	ATTR_LIST(cp_interval),
>   	ATTR_LIST(idle_interval),
>   	ATTR_LIST(discard_idle_interval),

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [f2fs-dev] [PATCH v2] f2fs: add a way to limit roll forward recovery time
  2022-02-03 14:46     ` Chao Yu
@ 2022-02-03 17:42       ` Jaegeuk Kim
  0 siblings, 0 replies; 22+ messages in thread
From: Jaegeuk Kim @ 2022-02-03 17:42 UTC (permalink / raw)
  To: Chao Yu; +Cc: linux-kernel, linux-f2fs-devel

On 02/03, Chao Yu wrote:
> On 2022/2/3 8:34, Jaegeuk Kim wrote:
> > This adds a sysfs entry to call checkpoint during fsync() in order to avoid
> > long elapsed time to run roll-forward recovery when booting the device.
> > The default value doesn't enforce the limitation, which keeps the previous behavior.
> > 
> > Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
> > ---
> > v2 from v1:
> >   - make the default w/o enforcement
> > 
> >   Documentation/ABI/testing/sysfs-fs-f2fs | 6 ++++++
> >   fs/f2fs/checkpoint.c                    | 1 +
> >   fs/f2fs/f2fs.h                          | 3 +++
> >   fs/f2fs/node.c                          | 2 ++
> >   fs/f2fs/node.h                          | 3 +++
> >   fs/f2fs/recovery.c                      | 4 ++++
> >   fs/f2fs/sysfs.c                         | 2 ++
> >   7 files changed, 21 insertions(+)
> > 
> > diff --git a/Documentation/ABI/testing/sysfs-fs-f2fs b/Documentation/ABI/testing/sysfs-fs-f2fs
> > index 87d3884c90ea..ce8103f522cb 100644
> > --- a/Documentation/ABI/testing/sysfs-fs-f2fs
> > +++ b/Documentation/ABI/testing/sysfs-fs-f2fs
> > @@ -567,3 +567,9 @@ Contact:	"Daeho Jeong" <daehojeong@google.com>
> >   Description:	You can set the trial count limit for GC urgent high mode with this value.
> >   		If GC thread gets to the limit, the mode will turn back to GC normal mode.
> >   		By default, the value is zero, which means there is no limit like before.
> > +
> > +What:		/sys/fs/f2fs/<disk>/max_roll_forward_node_blocks
> > +Date:		January 2022
> > +Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
> > +Description:	Controls max # of node block writes to be used for roll forward
> > +		recovery. This can limit the roll forward recovery time.
> > diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
> > index deeda95688f0..57a2d9164bee 100644
> > --- a/fs/f2fs/checkpoint.c
> > +++ b/fs/f2fs/checkpoint.c
> > @@ -1543,6 +1543,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
> >   	/* update user_block_counts */
> >   	sbi->last_valid_block_count = sbi->total_valid_block_count;
> >   	percpu_counter_set(&sbi->alloc_valid_block_count, 0);
> > +	percpu_counter_set(&sbi->rf_node_block_count, 0);
> >   	/* Here, we have one bio having CP pack except cp pack 2 page */
> >   	f2fs_sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO);
> > diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> > index 63c90416364b..6ddb98ff0b7c 100644
> > --- a/fs/f2fs/f2fs.h
> > +++ b/fs/f2fs/f2fs.h
> > @@ -913,6 +913,7 @@ struct f2fs_nm_info {
> >   	nid_t max_nid;			/* maximum possible node ids */
> >   	nid_t available_nids;		/* # of available node ids */
> >   	nid_t next_scan_nid;		/* the next nid to be scanned */
> > +	nid_t max_rf_node_blocks;	/* max # of nodes for recovery */
> >   	unsigned int ram_thresh;	/* control the memory footprint */
> >   	unsigned int ra_nid_pages;	/* # of nid pages to be readaheaded */
> >   	unsigned int dirty_nats_ratio;	/* control dirty nats ratio threshold */
> > @@ -1684,6 +1685,8 @@ struct f2fs_sb_info {
> >   	atomic_t nr_pages[NR_COUNT_TYPE];
> >   	/* # of allocated blocks */
> >   	struct percpu_counter alloc_valid_block_count;
> > +	/* # of node block writes as roll forward recovery */
> > +	struct percpu_counter rf_node_block_count;
> >   	/* writeback control */
> >   	atomic_t wb_sync_req[META];	/* count # of WB_SYNC threads */
> > diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
> > index 93512f8859d5..0d9883457579 100644
> > --- a/fs/f2fs/node.c
> > +++ b/fs/f2fs/node.c
> > @@ -1782,6 +1782,7 @@ int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
> >   			if (!atomic || page == last_page) {
> >   				set_fsync_mark(page, 1);
> > +				percpu_counter_inc(&sbi->rf_node_block_count);
> 
> if (NM_I(sbi)->max_rf_node_blocks)
> 	percpu_counter_inc(&sbi->rf_node_block_count);

I think we can just count this unconditionally and adjust it right away once the sysfs value is changed.
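
For example, a rough sketch (hypothetical; the exact hook depends on how
__sbi_store() in fs/f2fs/sysfs.c dispatches this attribute) that restarts
the accounting whenever the knob is rewritten:

	/* sketch: reset roll-forward accounting when a new limit is set */
	if (!strcmp(a->attr.name, "max_roll_forward_node_blocks")) {
		NM_I(sbi)->max_rf_node_blocks = t;
		percpu_counter_set(&sbi->rf_node_block_count, 0);
		return count;
	}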

> 
> Thanks,
> 
> >   				if (IS_INODE(page)) {
> >   					if (is_inode_flag_set(inode,
> >   								FI_DIRTY_INODE))
> > @@ -3218,6 +3219,7 @@ static int init_node_manager(struct f2fs_sb_info *sbi)
> >   	nm_i->ram_thresh = DEF_RAM_THRESHOLD;
> >   	nm_i->ra_nid_pages = DEF_RA_NID_PAGES;
> >   	nm_i->dirty_nats_ratio = DEF_DIRTY_NAT_RATIO_THRESHOLD;
> > +	nm_i->max_rf_node_blocks = DEF_RF_NODE_BLOCKS;
> >   	INIT_RADIX_TREE(&nm_i->free_nid_root, GFP_ATOMIC);
> >   	INIT_LIST_HEAD(&nm_i->free_nid_list);
> > diff --git a/fs/f2fs/node.h b/fs/f2fs/node.h
> > index 18b98cf0465b..4c1d34bfea78 100644
> > --- a/fs/f2fs/node.h
> > +++ b/fs/f2fs/node.h
> > @@ -31,6 +31,9 @@
> >   /* control total # of nats */
> >   #define DEF_NAT_CACHE_THRESHOLD			100000
> > +/* control total # of node writes used for roll-forward recovery */
> > +#define DEF_RF_NODE_BLOCKS			0
> > +
> >   /* vector size for gang look-up from nat cache that consists of radix tree */
> >   #define NATVEC_SIZE	64
> >   #define SETVEC_SIZE	32
> > diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
> > index 10d152cfa58d..1c8041fd854e 100644
> > --- a/fs/f2fs/recovery.c
> > +++ b/fs/f2fs/recovery.c
> > @@ -53,9 +53,13 @@ extern struct kmem_cache *f2fs_cf_name_slab;
> >   bool f2fs_space_for_roll_forward(struct f2fs_sb_info *sbi)
> >   {
> >   	s64 nalloc = percpu_counter_sum_positive(&sbi->alloc_valid_block_count);
> > +	u32 rf_node = percpu_counter_sum_positive(&sbi->rf_node_block_count);
> >   	if (sbi->last_valid_block_count + nalloc > sbi->user_block_count)
> >   		return false;
> > +	if (NM_I(sbi)->max_rf_node_blocks &&
> > +			rf_node >= NM_I(sbi)->max_rf_node_blocks)
> > +		return false;
> >   	return true;
> >   }
> > diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
> > index 281bc0133ee6..47efcf233afd 100644
> > --- a/fs/f2fs/sysfs.c
> > +++ b/fs/f2fs/sysfs.c
> > @@ -732,6 +732,7 @@ F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, min_ssr_sections, min_ssr_sections);
> >   F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, ram_thresh, ram_thresh);
> >   F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, ra_nid_pages, ra_nid_pages);
> >   F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, dirty_nats_ratio, dirty_nats_ratio);
> > +F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, max_roll_forward_node_blocks, max_rf_node_blocks);
> >   F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, max_victim_search, max_victim_search);
> >   F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, migration_granularity, migration_granularity);
> >   F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, dir_level, dir_level);
> > @@ -855,6 +856,7 @@ static struct attribute *f2fs_attrs[] = {
> >   	ATTR_LIST(ram_thresh),
> >   	ATTR_LIST(ra_nid_pages),
> >   	ATTR_LIST(dirty_nats_ratio),
> > +	ATTR_LIST(max_roll_forward_node_blocks),
> >   	ATTR_LIST(cp_interval),
> >   	ATTR_LIST(idle_interval),
> >   	ATTR_LIST(discard_idle_interval),

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [f2fs-dev] [PATCH v2] f2fs: add a way to limit roll forward recovery time
  2022-02-03 17:42       ` Jaegeuk Kim
@ 2022-02-04  0:20         ` Chao Yu
  0 siblings, 0 replies; 22+ messages in thread
From: Chao Yu @ 2022-02-04  0:20 UTC (permalink / raw)
  To: Jaegeuk Kim; +Cc: linux-kernel, linux-f2fs-devel

On 2022/2/4 1:42, Jaegeuk Kim wrote:
> On 02/03, Chao Yu wrote:
>> On 2022/2/3 8:34, Jaegeuk Kim wrote:
>>> This adds a sysfs entry to call checkpoint during fsync() in order to avoid
>>> long elapsed time to run roll-forward recovery when booting the device.
>>> The default value doesn't enforce the limitation, which keeps the previous behavior.
>>>
>>> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
>>> ---
>>> v2 from v1:
>>>    - make the default w/o enforcement
>>>
>>>    Documentation/ABI/testing/sysfs-fs-f2fs | 6 ++++++
>>>    fs/f2fs/checkpoint.c                    | 1 +
>>>    fs/f2fs/f2fs.h                          | 3 +++
>>>    fs/f2fs/node.c                          | 2 ++
>>>    fs/f2fs/node.h                          | 3 +++
>>>    fs/f2fs/recovery.c                      | 4 ++++
>>>    fs/f2fs/sysfs.c                         | 2 ++
>>>    7 files changed, 21 insertions(+)
>>>
>>> diff --git a/Documentation/ABI/testing/sysfs-fs-f2fs b/Documentation/ABI/testing/sysfs-fs-f2fs
>>> index 87d3884c90ea..ce8103f522cb 100644
>>> --- a/Documentation/ABI/testing/sysfs-fs-f2fs
>>> +++ b/Documentation/ABI/testing/sysfs-fs-f2fs
>>> @@ -567,3 +567,9 @@ Contact:	"Daeho Jeong" <daehojeong@google.com>
>>>    Description:	You can set the trial count limit for GC urgent high mode with this value.
>>>    		If GC thread gets to the limit, the mode will turn back to GC normal mode.
>>>    		By default, the value is zero, which means there is no limit like before.
>>> +
>>> +What:		/sys/fs/f2fs/<disk>/max_roll_forward_node_blocks
>>> +Date:		January 2022
>>> +Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
>>> +Description:	Controls max # of node block writes to be used for roll forward
>>> +		recovery. This can limit the roll forward recovery time.
>>> diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
>>> index deeda95688f0..57a2d9164bee 100644
>>> --- a/fs/f2fs/checkpoint.c
>>> +++ b/fs/f2fs/checkpoint.c
>>> @@ -1543,6 +1543,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
>>>    	/* update user_block_counts */
>>>    	sbi->last_valid_block_count = sbi->total_valid_block_count;
>>>    	percpu_counter_set(&sbi->alloc_valid_block_count, 0);
>>> +	percpu_counter_set(&sbi->rf_node_block_count, 0);
>>>    	/* Here, we have one bio having CP pack except cp pack 2 page */
>>>    	f2fs_sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO);
>>> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
>>> index 63c90416364b..6ddb98ff0b7c 100644
>>> --- a/fs/f2fs/f2fs.h
>>> +++ b/fs/f2fs/f2fs.h
>>> @@ -913,6 +913,7 @@ struct f2fs_nm_info {
>>>    	nid_t max_nid;			/* maximum possible node ids */
>>>    	nid_t available_nids;		/* # of available node ids */
>>>    	nid_t next_scan_nid;		/* the next nid to be scanned */
>>> +	nid_t max_rf_node_blocks;	/* max # of nodes for recovery */
>>>    	unsigned int ram_thresh;	/* control the memory footprint */
>>>    	unsigned int ra_nid_pages;	/* # of nid pages to be readaheaded */
>>>    	unsigned int dirty_nats_ratio;	/* control dirty nats ratio threshold */
>>> @@ -1684,6 +1685,8 @@ struct f2fs_sb_info {
>>>    	atomic_t nr_pages[NR_COUNT_TYPE];
>>>    	/* # of allocated blocks */
>>>    	struct percpu_counter alloc_valid_block_count;
>>> +	/* # of node block writes as roll forward recovery */
>>> +	struct percpu_counter rf_node_block_count;
>>>    	/* writeback control */
>>>    	atomic_t wb_sync_req[META];	/* count # of WB_SYNC threads */
>>> diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
>>> index 93512f8859d5..0d9883457579 100644
>>> --- a/fs/f2fs/node.c
>>> +++ b/fs/f2fs/node.c
>>> @@ -1782,6 +1782,7 @@ int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
>>>    			if (!atomic || page == last_page) {
>>>    				set_fsync_mark(page, 1);
>>> +				percpu_counter_inc(&sbi->rf_node_block_count);
>>
>> if (NM_I(sbi)->max_rf_node_blocks)
>> 	percpu_counter_inc(&sbi->rf_node_block_count);
> 
> I think we can just count this unconditionally and adjust it right away once the sysfs value is changed.

Since this long recovery latency issue is a corner case, I guess we can avoid this
to save cpu time...

BTW, shouldn't we account for all warm dnode blocks, since we will traverse all
the blocks in the warm node chain?

Thanks,

> 
>>
>> Thanks,
>>
>>>    				if (IS_INODE(page)) {
>>>    					if (is_inode_flag_set(inode,
>>>    								FI_DIRTY_INODE))
>>> @@ -3218,6 +3219,7 @@ static int init_node_manager(struct f2fs_sb_info *sbi)
>>>    	nm_i->ram_thresh = DEF_RAM_THRESHOLD;
>>>    	nm_i->ra_nid_pages = DEF_RA_NID_PAGES;
>>>    	nm_i->dirty_nats_ratio = DEF_DIRTY_NAT_RATIO_THRESHOLD;
>>> +	nm_i->max_rf_node_blocks = DEF_RF_NODE_BLOCKS;
>>>    	INIT_RADIX_TREE(&nm_i->free_nid_root, GFP_ATOMIC);
>>>    	INIT_LIST_HEAD(&nm_i->free_nid_list);
>>> diff --git a/fs/f2fs/node.h b/fs/f2fs/node.h
>>> index 18b98cf0465b..4c1d34bfea78 100644
>>> --- a/fs/f2fs/node.h
>>> +++ b/fs/f2fs/node.h
>>> @@ -31,6 +31,9 @@
>>>    /* control total # of nats */
>>>    #define DEF_NAT_CACHE_THRESHOLD			100000
>>> +/* control total # of node writes used for roll-forward recovery */
>>> +#define DEF_RF_NODE_BLOCKS			0
>>> +
>>>    /* vector size for gang look-up from nat cache that consists of radix tree */
>>>    #define NATVEC_SIZE	64
>>>    #define SETVEC_SIZE	32
>>> diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
>>> index 10d152cfa58d..1c8041fd854e 100644
>>> --- a/fs/f2fs/recovery.c
>>> +++ b/fs/f2fs/recovery.c
>>> @@ -53,9 +53,13 @@ extern struct kmem_cache *f2fs_cf_name_slab;
>>>    bool f2fs_space_for_roll_forward(struct f2fs_sb_info *sbi)
>>>    {
>>>    	s64 nalloc = percpu_counter_sum_positive(&sbi->alloc_valid_block_count);
>>> +	u32 rf_node = percpu_counter_sum_positive(&sbi->rf_node_block_count);
>>>    	if (sbi->last_valid_block_count + nalloc > sbi->user_block_count)
>>>    		return false;
>>> +	if (NM_I(sbi)->max_rf_node_blocks &&
>>> +			rf_node >= NM_I(sbi)->max_rf_node_blocks)
>>> +		return false;
>>>    	return true;
>>>    }
>>> diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
>>> index 281bc0133ee6..47efcf233afd 100644
>>> --- a/fs/f2fs/sysfs.c
>>> +++ b/fs/f2fs/sysfs.c
>>> @@ -732,6 +732,7 @@ F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, min_ssr_sections, min_ssr_sections);
>>>    F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, ram_thresh, ram_thresh);
>>>    F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, ra_nid_pages, ra_nid_pages);
>>>    F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, dirty_nats_ratio, dirty_nats_ratio);
>>> +F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, max_roll_forward_node_blocks, max_rf_node_blocks);
>>>    F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, max_victim_search, max_victim_search);
>>>    F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, migration_granularity, migration_granularity);
>>>    F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, dir_level, dir_level);
>>> @@ -855,6 +856,7 @@ static struct attribute *f2fs_attrs[] = {
>>>    	ATTR_LIST(ram_thresh),
>>>    	ATTR_LIST(ra_nid_pages),
>>>    	ATTR_LIST(dirty_nats_ratio),
>>> +	ATTR_LIST(max_roll_forward_node_blocks),
>>>    	ATTR_LIST(cp_interval),
>>>    	ATTR_LIST(idle_interval),
>>>    	ATTR_LIST(discard_idle_interval),

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [f2fs-dev] [PATCH v2] f2fs: add a way to limit roll forward recovery time
  2022-02-04  0:20         ` Chao Yu
@ 2022-02-04  6:16           ` Jaegeuk Kim
  0 siblings, 0 replies; 22+ messages in thread
From: Jaegeuk Kim @ 2022-02-04  6:16 UTC (permalink / raw)
  To: Chao Yu; +Cc: linux-kernel, linux-f2fs-devel

On 02/04, Chao Yu wrote:
> On 2022/2/4 1:42, Jaegeuk Kim wrote:
> > On 02/03, Chao Yu wrote:
> > > On 2022/2/3 8:34, Jaegeuk Kim wrote:
> > > > This adds a sysfs entry to call checkpoint during fsync() in order to avoid
> > > > long elapsed time to run roll-forward recovery when booting the device.
> > > > The default value doesn't enforce the limitation, which keeps the previous behavior.
> > > > 
> > > > Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
> > > > ---
> > > > v2 from v1:
> > > >    - make the default w/o enforcement
> > > > 
> > > >    Documentation/ABI/testing/sysfs-fs-f2fs | 6 ++++++
> > > >    fs/f2fs/checkpoint.c                    | 1 +
> > > >    fs/f2fs/f2fs.h                          | 3 +++
> > > >    fs/f2fs/node.c                          | 2 ++
> > > >    fs/f2fs/node.h                          | 3 +++
> > > >    fs/f2fs/recovery.c                      | 4 ++++
> > > >    fs/f2fs/sysfs.c                         | 2 ++
> > > >    7 files changed, 21 insertions(+)
> > > > 
> > > > diff --git a/Documentation/ABI/testing/sysfs-fs-f2fs b/Documentation/ABI/testing/sysfs-fs-f2fs
> > > > index 87d3884c90ea..ce8103f522cb 100644
> > > > --- a/Documentation/ABI/testing/sysfs-fs-f2fs
> > > > +++ b/Documentation/ABI/testing/sysfs-fs-f2fs
> > > > @@ -567,3 +567,9 @@ Contact:	"Daeho Jeong" <daehojeong@google.com>
> > > >    Description:	You can set the trial count limit for GC urgent high mode with this value.
> > > >    		If GC thread gets to the limit, the mode will turn back to GC normal mode.
> > > >    		By default, the value is zero, which means there is no limit like before.
> > > > +
> > > > +What:		/sys/fs/f2fs/<disk>/max_roll_forward_node_blocks
> > > > +Date:		January 2022
> > > > +Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
> > > > +Description:	Controls max # of node block writes to be used for roll forward
> > > > +		recovery. This can limit the roll forward recovery time.
> > > > diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
> > > > index deeda95688f0..57a2d9164bee 100644
> > > > --- a/fs/f2fs/checkpoint.c
> > > > +++ b/fs/f2fs/checkpoint.c
> > > > @@ -1543,6 +1543,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
> > > >    	/* update user_block_counts */
> > > >    	sbi->last_valid_block_count = sbi->total_valid_block_count;
> > > >    	percpu_counter_set(&sbi->alloc_valid_block_count, 0);
> > > > +	percpu_counter_set(&sbi->rf_node_block_count, 0);
> > > >    	/* Here, we have one bio having CP pack except cp pack 2 page */
> > > >    	f2fs_sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO);
> > > > diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> > > > index 63c90416364b..6ddb98ff0b7c 100644
> > > > --- a/fs/f2fs/f2fs.h
> > > > +++ b/fs/f2fs/f2fs.h
> > > > @@ -913,6 +913,7 @@ struct f2fs_nm_info {
> > > >    	nid_t max_nid;			/* maximum possible node ids */
> > > >    	nid_t available_nids;		/* # of available node ids */
> > > >    	nid_t next_scan_nid;		/* the next nid to be scanned */
> > > > +	nid_t max_rf_node_blocks;	/* max # of nodes for recovery */
> > > >    	unsigned int ram_thresh;	/* control the memory footprint */
> > > >    	unsigned int ra_nid_pages;	/* # of nid pages to be readaheaded */
> > > >    	unsigned int dirty_nats_ratio;	/* control dirty nats ratio threshold */
> > > > @@ -1684,6 +1685,8 @@ struct f2fs_sb_info {
> > > >    	atomic_t nr_pages[NR_COUNT_TYPE];
> > > >    	/* # of allocated blocks */
> > > >    	struct percpu_counter alloc_valid_block_count;
> > > > +	/* # of node block writes as roll forward recovery */
> > > > +	struct percpu_counter rf_node_block_count;
> > > >    	/* writeback control */
> > > >    	atomic_t wb_sync_req[META];	/* count # of WB_SYNC threads */
> > > > diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
> > > > index 93512f8859d5..0d9883457579 100644
> > > > --- a/fs/f2fs/node.c
> > > > +++ b/fs/f2fs/node.c
> > > > @@ -1782,6 +1782,7 @@ int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
> > > >    			if (!atomic || page == last_page) {
> > > >    				set_fsync_mark(page, 1);
> > > > +				percpu_counter_inc(&sbi->rf_node_block_count);
> > > 
> > > if (NM_I(sbi)->max_rf_node_blocks)
> > > 	percpu_counter_inc(&sbi->rf_node_block_count);
> > 
> > I think we can just count this unconditionally and adjust it right away once the sysfs value is changed.
> 
> Since this long recovery latency issue is a corner case, I guess we can avoid this
> to save cpu time...

I think we can show this in debugfs, as it won't add huge overhead.
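
Roughly like this (just a sketch; assuming the per-sb f2fs_stat_info used in
stat_show() of fs/f2fs/debug.c, with an illustrative field name):

	seq_printf(s, "  - fsync node blocks since last cp: %lld\n",
		   percpu_counter_sum_positive(&si->sbi->rf_node_block_count));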

> 
> BTW, shouldn't we account for all warm dnode blocks, since we will traverse all
> the blocks in the warm node chain?

I thought we wouldn't need to track the whole bunch of chains; roughly recording
the # of fsync calls would be enough.
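
With that, tuning is just a sysfs write, e.g. (the value is only an example):

	echo 65536 > /sys/fs/f2fs/<disk>/max_roll_forward_node_blocks

which would make fsync() fall back to a checkpoint once that many node blocks
have been written for fsync since the last checkpoint.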

> 
> Thanks,
> 
> > 
> > > 
> > > Thanks,
> > > 
> > > >    				if (IS_INODE(page)) {
> > > >    					if (is_inode_flag_set(inode,
> > > >    								FI_DIRTY_INODE))
> > > > @@ -3218,6 +3219,7 @@ static int init_node_manager(struct f2fs_sb_info *sbi)
> > > >    	nm_i->ram_thresh = DEF_RAM_THRESHOLD;
> > > >    	nm_i->ra_nid_pages = DEF_RA_NID_PAGES;
> > > >    	nm_i->dirty_nats_ratio = DEF_DIRTY_NAT_RATIO_THRESHOLD;
> > > > +	nm_i->max_rf_node_blocks = DEF_RF_NODE_BLOCKS;
> > > >    	INIT_RADIX_TREE(&nm_i->free_nid_root, GFP_ATOMIC);
> > > >    	INIT_LIST_HEAD(&nm_i->free_nid_list);
> > > > diff --git a/fs/f2fs/node.h b/fs/f2fs/node.h
> > > > index 18b98cf0465b..4c1d34bfea78 100644
> > > > --- a/fs/f2fs/node.h
> > > > +++ b/fs/f2fs/node.h
> > > > @@ -31,6 +31,9 @@
> > > >    /* control total # of nats */
> > > >    #define DEF_NAT_CACHE_THRESHOLD			100000
> > > > +/* control total # of node writes used for roll-forward recovery */
> > > > +#define DEF_RF_NODE_BLOCKS			0
> > > > +
> > > >    /* vector size for gang look-up from nat cache that consists of radix tree */
> > > >    #define NATVEC_SIZE	64
> > > >    #define SETVEC_SIZE	32
> > > > diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
> > > > index 10d152cfa58d..1c8041fd854e 100644
> > > > --- a/fs/f2fs/recovery.c
> > > > +++ b/fs/f2fs/recovery.c
> > > > @@ -53,9 +53,13 @@ extern struct kmem_cache *f2fs_cf_name_slab;
> > > >    bool f2fs_space_for_roll_forward(struct f2fs_sb_info *sbi)
> > > >    {
> > > >    	s64 nalloc = percpu_counter_sum_positive(&sbi->alloc_valid_block_count);
> > > > +	u32 rf_node = percpu_counter_sum_positive(&sbi->rf_node_block_count);
> > > >    	if (sbi->last_valid_block_count + nalloc > sbi->user_block_count)
> > > >    		return false;
> > > > +	if (NM_I(sbi)->max_rf_node_blocks &&
> > > > +			rf_node >= NM_I(sbi)->max_rf_node_blocks)
> > > > +		return false;
> > > >    	return true;
> > > >    }
> > > > diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
> > > > index 281bc0133ee6..47efcf233afd 100644
> > > > --- a/fs/f2fs/sysfs.c
> > > > +++ b/fs/f2fs/sysfs.c
> > > > @@ -732,6 +732,7 @@ F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, min_ssr_sections, min_ssr_sections);
> > > >    F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, ram_thresh, ram_thresh);
> > > >    F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, ra_nid_pages, ra_nid_pages);
> > > >    F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, dirty_nats_ratio, dirty_nats_ratio);
> > > > +F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, max_roll_forward_node_blocks, max_rf_node_blocks);
> > > >    F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, max_victim_search, max_victim_search);
> > > >    F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, migration_granularity, migration_granularity);
> > > >    F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, dir_level, dir_level);
> > > > @@ -855,6 +856,7 @@ static struct attribute *f2fs_attrs[] = {
> > > >    	ATTR_LIST(ram_thresh),
> > > >    	ATTR_LIST(ra_nid_pages),
> > > >    	ATTR_LIST(dirty_nats_ratio),
> > > > +	ATTR_LIST(max_roll_forward_node_blocks),
> > > >    	ATTR_LIST(cp_interval),
> > > >    	ATTR_LIST(idle_interval),
> > > >    	ATTR_LIST(discard_idle_interval),

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [f2fs-dev] [PATCH v2] f2fs: add a way to limit roll forward recovery time
  2022-02-04  6:16           ` Jaegeuk Kim
@ 2022-02-04 12:03             ` Chao Yu
  0 siblings, 0 replies; 22+ messages in thread
From: Chao Yu @ 2022-02-04 12:03 UTC (permalink / raw)
  To: Jaegeuk Kim; +Cc: linux-kernel, linux-f2fs-devel

On 2022/2/4 14:16, Jaegeuk Kim wrote:
> On 02/04, Chao Yu wrote:
>> On 2022/2/4 1:42, Jaegeuk Kim wrote:
>>> On 02/03, Chao Yu wrote:
>>>> On 2022/2/3 8:34, Jaegeuk Kim wrote:
>>>>> This adds a sysfs entry to call checkpoint during fsync() in order to avoid
>>>>> long elapsed time to run roll-forward recovery when booting the device.
>>>>> The default value doesn't enforce the limitation, which keeps the previous behavior.
>>>>>
>>>>> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
>>>>> ---
>>>>> v2 from v1:
>>>>>     - make the default w/o enforcement
>>>>>
>>>>>     Documentation/ABI/testing/sysfs-fs-f2fs | 6 ++++++
>>>>>     fs/f2fs/checkpoint.c                    | 1 +
>>>>>     fs/f2fs/f2fs.h                          | 3 +++
>>>>>     fs/f2fs/node.c                          | 2 ++
>>>>>     fs/f2fs/node.h                          | 3 +++
>>>>>     fs/f2fs/recovery.c                      | 4 ++++
>>>>>     fs/f2fs/sysfs.c                         | 2 ++
>>>>>     7 files changed, 21 insertions(+)
>>>>>
>>>>> diff --git a/Documentation/ABI/testing/sysfs-fs-f2fs b/Documentation/ABI/testing/sysfs-fs-f2fs
>>>>> index 87d3884c90ea..ce8103f522cb 100644
>>>>> --- a/Documentation/ABI/testing/sysfs-fs-f2fs
>>>>> +++ b/Documentation/ABI/testing/sysfs-fs-f2fs
>>>>> @@ -567,3 +567,9 @@ Contact:	"Daeho Jeong" <daehojeong@google.com>
>>>>>     Description:	You can set the trial count limit for GC urgent high mode with this value.
>>>>>     		If GC thread gets to the limit, the mode will turn back to GC normal mode.
>>>>>     		By default, the value is zero, which means there is no limit like before.
>>>>> +
>>>>> +What:		/sys/fs/f2fs/<disk>/max_roll_forward_node_blocks
>>>>> +Date:		January 2022
>>>>> +Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
>>>>> +Description:	Controls max # of node block writes to be used for roll forward
>>>>> +		recovery. This can limit the roll forward recovery time.
>>>>> diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
>>>>> index deeda95688f0..57a2d9164bee 100644
>>>>> --- a/fs/f2fs/checkpoint.c
>>>>> +++ b/fs/f2fs/checkpoint.c
>>>>> @@ -1543,6 +1543,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
>>>>>     	/* update user_block_counts */
>>>>>     	sbi->last_valid_block_count = sbi->total_valid_block_count;
>>>>>     	percpu_counter_set(&sbi->alloc_valid_block_count, 0);
>>>>> +	percpu_counter_set(&sbi->rf_node_block_count, 0);

Should rf_node_block_count be initialized before use? I don't see a
matching percpu_counter_init() in this patch.

Thanks,
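
(A minimal sketch of the init that seems to be missing, mirroring how
alloc_valid_block_count is brought up in init_percpu_info(); the error
label here is hypothetical, and the real fix lands in v3 below:)

	err = percpu_counter_init(&sbi->rf_node_block_count, 0, GFP_KERNEL);
	if (err)
		goto free_alloc_valid_block;	/* hypothetical label */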

>>>>>     	/* Here, we have one bio having CP pack except cp pack 2 page */
>>>>>     	f2fs_sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO);
>>>>> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
>>>>> index 63c90416364b..6ddb98ff0b7c 100644
>>>>> --- a/fs/f2fs/f2fs.h
>>>>> +++ b/fs/f2fs/f2fs.h
>>>>> @@ -913,6 +913,7 @@ struct f2fs_nm_info {
>>>>>     	nid_t max_nid;			/* maximum possible node ids */
>>>>>     	nid_t available_nids;		/* # of available node ids */
>>>>>     	nid_t next_scan_nid;		/* the next nid to be scanned */
>>>>> +	nid_t max_rf_node_blocks;	/* max # of nodes for recovery */
>>>>>     	unsigned int ram_thresh;	/* control the memory footprint */
>>>>>     	unsigned int ra_nid_pages;	/* # of nid pages to be readaheaded */
>>>>>     	unsigned int dirty_nats_ratio;	/* control dirty nats ratio threshold */
>>>>> @@ -1684,6 +1685,8 @@ struct f2fs_sb_info {
>>>>>     	atomic_t nr_pages[NR_COUNT_TYPE];
>>>>>     	/* # of allocated blocks */
>>>>>     	struct percpu_counter alloc_valid_block_count;
>>>>> +	/* # of node block writes as roll forward recovery */
>>>>> +	struct percpu_counter rf_node_block_count;
>>>>>     	/* writeback control */
>>>>>     	atomic_t wb_sync_req[META];	/* count # of WB_SYNC threads */
>>>>> diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
>>>>> index 93512f8859d5..0d9883457579 100644
>>>>> --- a/fs/f2fs/node.c
>>>>> +++ b/fs/f2fs/node.c
>>>>> @@ -1782,6 +1782,7 @@ int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
>>>>>     			if (!atomic || page == last_page) {
>>>>>     				set_fsync_mark(page, 1);
>>>>> +				percpu_counter_inc(&sbi->rf_node_block_count);
>>>>
>>>> if (NM_I(sbi)->max_rf_node_blocks)
>>>> 	percpu_counter_inc(&sbi->rf_node_block_count);
>>>
>>> I think we can just keep counting and adjust the behavior right away
>>> once the sysfs value changes.
>>
>> Since this long recovery latency issue is a corner case, I guess we can avoid this
>> to save cpu time...
> 
> I think we can show this in debugfs, as it won't add much overhead.
> 
>>
>> BTW, shouldn't we account all warm dnode blocks, since we will traverse
>> all the blocks in the warm node list?
> 
> I thought we don't need to track the whole chain of node blocks; it would
> be enough to roughly record the # of fsync calls.
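
(To make the mechanism concrete: once the counter crosses the limit,
f2fs_space_for_roll_forward() returns false, so the next fsync() falls
back to a full checkpoint, which also resets the counter. A simplified
sketch of that decision, loosely based on need_do_checkpoint() in
fs/f2fs/file.c rather than a verbatim copy:)

	static bool need_do_checkpoint(struct inode *inode)
	{
		struct f2fs_sb_info *sbi = F2FS_I_SB(inode);

		/* fall back to a checkpoint once the rf_node cap is hit */
		if (!f2fs_space_for_roll_forward(sbi))
			return true;	/* CP_SPC_NEED_CP in the real code */
		/* ... other checkpoint triggers elided ... */
		return false;
	}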
> 
>>
>> Thanks,
>>
>>>
>>>>
>>>> Thanks,
>>>>
>>>>>     				if (IS_INODE(page)) {
>>>>>     					if (is_inode_flag_set(inode,
>>>>>     								FI_DIRTY_INODE))
>>>>> @@ -3218,6 +3219,7 @@ static int init_node_manager(struct f2fs_sb_info *sbi)
>>>>>     	nm_i->ram_thresh = DEF_RAM_THRESHOLD;
>>>>>     	nm_i->ra_nid_pages = DEF_RA_NID_PAGES;
>>>>>     	nm_i->dirty_nats_ratio = DEF_DIRTY_NAT_RATIO_THRESHOLD;
>>>>> +	nm_i->max_rf_node_blocks = DEF_RF_NODE_BLOCKS;
>>>>>     	INIT_RADIX_TREE(&nm_i->free_nid_root, GFP_ATOMIC);
>>>>>     	INIT_LIST_HEAD(&nm_i->free_nid_list);
>>>>> diff --git a/fs/f2fs/node.h b/fs/f2fs/node.h
>>>>> index 18b98cf0465b..4c1d34bfea78 100644
>>>>> --- a/fs/f2fs/node.h
>>>>> +++ b/fs/f2fs/node.h
>>>>> @@ -31,6 +31,9 @@
>>>>>     /* control total # of nats */
>>>>>     #define DEF_NAT_CACHE_THRESHOLD			100000
>>>>> +/* control total # of node writes used for roll-forward recovery */
>>>>> +#define DEF_RF_NODE_BLOCKS			0
>>>>> +
>>>>>     /* vector size for gang look-up from nat cache that consists of radix tree */
>>>>>     #define NATVEC_SIZE	64
>>>>>     #define SETVEC_SIZE	32
>>>>> diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
>>>>> index 10d152cfa58d..1c8041fd854e 100644
>>>>> --- a/fs/f2fs/recovery.c
>>>>> +++ b/fs/f2fs/recovery.c
>>>>> @@ -53,9 +53,13 @@ extern struct kmem_cache *f2fs_cf_name_slab;
>>>>>     bool f2fs_space_for_roll_forward(struct f2fs_sb_info *sbi)
>>>>>     {
>>>>>     	s64 nalloc = percpu_counter_sum_positive(&sbi->alloc_valid_block_count);
>>>>> +	u32 rf_node = percpu_counter_sum_positive(&sbi->rf_node_block_count);
>>>>>     	if (sbi->last_valid_block_count + nalloc > sbi->user_block_count)
>>>>>     		return false;
>>>>> +	if (NM_I(sbi)->max_rf_node_blocks &&
>>>>> +			rf_node >= NM_I(sbi)->max_rf_node_blocks)
>>>>> +		return false;
>>>>>     	return true;
>>>>>     }
>>>>> diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
>>>>> index 281bc0133ee6..47efcf233afd 100644
>>>>> --- a/fs/f2fs/sysfs.c
>>>>> +++ b/fs/f2fs/sysfs.c
>>>>> @@ -732,6 +732,7 @@ F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, min_ssr_sections, min_ssr_sections);
>>>>>     F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, ram_thresh, ram_thresh);
>>>>>     F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, ra_nid_pages, ra_nid_pages);
>>>>>     F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, dirty_nats_ratio, dirty_nats_ratio);
>>>>> +F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, max_roll_forward_node_blocks, max_rf_node_blocks);
>>>>>     F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, max_victim_search, max_victim_search);
>>>>>     F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, migration_granularity, migration_granularity);
>>>>>     F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, dir_level, dir_level);
>>>>> @@ -855,6 +856,7 @@ static struct attribute *f2fs_attrs[] = {
>>>>>     	ATTR_LIST(ram_thresh),
>>>>>     	ATTR_LIST(ra_nid_pages),
>>>>>     	ATTR_LIST(dirty_nats_ratio),
>>>>> +	ATTR_LIST(max_roll_forward_node_blocks),
>>>>>     	ATTR_LIST(cp_interval),
>>>>>     	ATTR_LIST(idle_interval),
>>>>>     	ATTR_LIST(discard_idle_interval),

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [f2fs-dev] [PATCH v3] f2fs: add a way to limit roll forward recovery time
  2022-02-03  0:34   ` [f2fs-dev] " Jaegeuk Kim
@ 2022-02-07 19:01     ` Jaegeuk Kim
  -1 siblings, 0 replies; 22+ messages in thread
From: Jaegeuk Kim @ 2022-02-07 19:01 UTC (permalink / raw)
  To: linux-kernel, linux-f2fs-devel

This adds a sysfs entry to call checkpoint during fsync() in order to avoid
long elapsed time to run roll-forward recovery when booting the device.
The default value doesn't enforce the limitation, which is the same
behavior as before.

Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
---
 v3 from v2:
  - add missing percpu init
  - percpu_sum only when it's used

 v2 from v1:
  - make the default w/o enforcement

 Documentation/ABI/testing/sysfs-fs-f2fs |  6 ++++++
 fs/f2fs/checkpoint.c                    |  1 +
 fs/f2fs/debug.c                         |  3 +++
 fs/f2fs/f2fs.h                          |  3 +++
 fs/f2fs/node.c                          |  2 ++
 fs/f2fs/node.h                          |  3 +++
 fs/f2fs/recovery.c                      |  4 ++++
 fs/f2fs/super.c                         | 14 ++++++++++++--
 fs/f2fs/sysfs.c                         |  2 ++
 9 files changed, 36 insertions(+), 2 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-fs-f2fs b/Documentation/ABI/testing/sysfs-fs-f2fs
index 87d3884c90ea..ce8103f522cb 100644
--- a/Documentation/ABI/testing/sysfs-fs-f2fs
+++ b/Documentation/ABI/testing/sysfs-fs-f2fs
@@ -567,3 +567,9 @@ Contact:	"Daeho Jeong" <daehojeong@google.com>
 Description:	You can set the trial count limit for GC urgent high mode with this value.
 		If GC thread gets to the limit, the mode will turn back to GC normal mode.
 		By default, the value is zero, which means there is no limit like before.
+
+What:		/sys/fs/f2fs/<disk>/max_roll_forward_node_blocks
+Date:		January 2022
+Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
+Description:	Controls max # of node block writes to be used for roll forward
+		recovery. This can limit the roll forward recovery time.
diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
index a13b6b4af220..203a1577942d 100644
--- a/fs/f2fs/checkpoint.c
+++ b/fs/f2fs/checkpoint.c
@@ -1547,6 +1547,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 	/* update user_block_counts */
 	sbi->last_valid_block_count = sbi->total_valid_block_count;
 	percpu_counter_set(&sbi->alloc_valid_block_count, 0);
+	percpu_counter_set(&sbi->rf_node_block_count, 0);
 
 	/* Here, we have one bio having CP pack except cp pack 2 page */
 	f2fs_sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO);
diff --git a/fs/f2fs/debug.c b/fs/f2fs/debug.c
index 8c50518475a9..9a13902c7702 100644
--- a/fs/f2fs/debug.c
+++ b/fs/f2fs/debug.c
@@ -532,6 +532,9 @@ static int stat_show(struct seq_file *s, void *v)
 			   si->ndirty_meta, si->meta_pages);
 		seq_printf(s, "  - imeta: %4d\n",
 			   si->ndirty_imeta);
+		seq_printf(s, "  - fsync mark: %4lld\n",
+			   percpu_counter_sum_positive(
+					&si->sbi->rf_node_block_count));
 		seq_printf(s, "  - NATs: %9d/%9d\n  - SITs: %9d/%9d\n",
 			   si->dirty_nats, si->nats, si->dirty_sits, si->sits);
 		seq_printf(s, "  - free_nids: %9d/%9d\n  - alloc_nids: %9d\n",
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 51c1392708e6..d220ab613cf1 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -916,6 +916,7 @@ struct f2fs_nm_info {
 	nid_t max_nid;			/* maximum possible node ids */
 	nid_t available_nids;		/* # of available node ids */
 	nid_t next_scan_nid;		/* the next nid to be scanned */
+	nid_t max_rf_node_blocks;	/* max # of nodes for recovery */
 	unsigned int ram_thresh;	/* control the memory footprint */
 	unsigned int ra_nid_pages;	/* # of nid pages to be readaheaded */
 	unsigned int dirty_nats_ratio;	/* control dirty nats ratio threshold */
@@ -1687,6 +1688,8 @@ struct f2fs_sb_info {
 	atomic_t nr_pages[NR_COUNT_TYPE];
 	/* # of allocated blocks */
 	struct percpu_counter alloc_valid_block_count;
+	/* # of node block writes as roll forward recovery */
+	struct percpu_counter rf_node_block_count;
 
 	/* writeback control */
 	atomic_t wb_sync_req[META];	/* count # of WB_SYNC threads */
diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index 93512f8859d5..0d9883457579 100644
--- a/fs/f2fs/node.c
+++ b/fs/f2fs/node.c
@@ -1782,6 +1782,7 @@ int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
 
 			if (!atomic || page == last_page) {
 				set_fsync_mark(page, 1);
+				percpu_counter_inc(&sbi->rf_node_block_count);
 				if (IS_INODE(page)) {
 					if (is_inode_flag_set(inode,
 								FI_DIRTY_INODE))
@@ -3218,6 +3219,7 @@ static int init_node_manager(struct f2fs_sb_info *sbi)
 	nm_i->ram_thresh = DEF_RAM_THRESHOLD;
 	nm_i->ra_nid_pages = DEF_RA_NID_PAGES;
 	nm_i->dirty_nats_ratio = DEF_DIRTY_NAT_RATIO_THRESHOLD;
+	nm_i->max_rf_node_blocks = DEF_RF_NODE_BLOCKS;
 
 	INIT_RADIX_TREE(&nm_i->free_nid_root, GFP_ATOMIC);
 	INIT_LIST_HEAD(&nm_i->free_nid_list);
diff --git a/fs/f2fs/node.h b/fs/f2fs/node.h
index 18b98cf0465b..4c1d34bfea78 100644
--- a/fs/f2fs/node.h
+++ b/fs/f2fs/node.h
@@ -31,6 +31,9 @@
 /* control total # of nats */
 #define DEF_NAT_CACHE_THRESHOLD			100000
 
+/* control total # of node writes used for roll-forward recovery */
+#define DEF_RF_NODE_BLOCKS			0
+
 /* vector size for gang look-up from nat cache that consists of radix tree */
 #define NATVEC_SIZE	64
 #define SETVEC_SIZE	32
diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
index 2af503f75b4f..ab33e474af07 100644
--- a/fs/f2fs/recovery.c
+++ b/fs/f2fs/recovery.c
@@ -56,6 +56,10 @@ bool f2fs_space_for_roll_forward(struct f2fs_sb_info *sbi)
 
 	if (sbi->last_valid_block_count + nalloc > sbi->user_block_count)
 		return false;
+	if (NM_I(sbi)->max_rf_node_blocks &&
+		percpu_counter_sum_positive(&sbi->rf_node_block_count) >=
+						NM_I(sbi)->max_rf_node_blocks)
+		return false;
 	return true;
 }
 
diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index 9af6c20532ec..f9d627dbed58 100644
--- a/fs/f2fs/super.c
+++ b/fs/f2fs/super.c
@@ -1501,8 +1501,9 @@ static void f2fs_free_inode(struct inode *inode)
 
 static void destroy_percpu_info(struct f2fs_sb_info *sbi)
 {
-	percpu_counter_destroy(&sbi->alloc_valid_block_count);
 	percpu_counter_destroy(&sbi->total_valid_inode_count);
+	percpu_counter_destroy(&sbi->rf_node_block_count);
+	percpu_counter_destroy(&sbi->alloc_valid_block_count);
 }
 
 static void destroy_device_list(struct f2fs_sb_info *sbi)
@@ -3619,11 +3620,20 @@ static int init_percpu_info(struct f2fs_sb_info *sbi)
 	if (err)
 		return err;
 
+	err = percpu_counter_init(&sbi->rf_node_block_count, 0, GFP_KERNEL);
+	if (err)
+		goto err_valid_block;
+
 	err = percpu_counter_init(&sbi->total_valid_inode_count, 0,
 								GFP_KERNEL);
 	if (err)
-		percpu_counter_destroy(&sbi->alloc_valid_block_count);
+		goto err_node_block;
+	return 0;
 
+err_node_block:
+	percpu_counter_destroy(&sbi->rf_node_block_count);
+err_valid_block:
+	percpu_counter_destroy(&sbi->alloc_valid_block_count);
 	return err;
 }
 
diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
index 281bc0133ee6..47efcf233afd 100644
--- a/fs/f2fs/sysfs.c
+++ b/fs/f2fs/sysfs.c
@@ -732,6 +732,7 @@ F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, min_ssr_sections, min_ssr_sections);
 F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, ram_thresh, ram_thresh);
 F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, ra_nid_pages, ra_nid_pages);
 F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, dirty_nats_ratio, dirty_nats_ratio);
+F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, max_roll_forward_node_blocks, max_rf_node_blocks);
 F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, max_victim_search, max_victim_search);
 F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, migration_granularity, migration_granularity);
 F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, dir_level, dir_level);
@@ -855,6 +856,7 @@ static struct attribute *f2fs_attrs[] = {
 	ATTR_LIST(ram_thresh),
 	ATTR_LIST(ra_nid_pages),
 	ATTR_LIST(dirty_nats_ratio),
+	ATTR_LIST(max_roll_forward_node_blocks),
 	ATTR_LIST(cp_interval),
 	ATTR_LIST(idle_interval),
 	ATTR_LIST(discard_idle_interval),
-- 
2.35.0.263.gb82422642f-goog
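
(For completeness, a self-contained userspace sketch of setting the new
knob; the device name "sda1" and the 65536 cap are illustrative values,
not recommendations. Writing 0 keeps the old no-limit behavior:)

	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	int main(void)
	{
		/* cap roll-forward recovery work to ~64k node block writes */
		const char *path =
			"/sys/fs/f2fs/sda1/max_roll_forward_node_blocks";
		int fd = open(path, O_WRONLY);

		if (fd < 0) {
			perror("open");
			return 1;
		}
		if (write(fd, "65536", strlen("65536")) < 0)
			perror("write");
		close(fd);
		return 0;
	}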




^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [f2fs-dev] [PATCH v3] f2fs: add a way to limit roll forward recovery time
  2022-02-07 19:01     ` Jaegeuk Kim
@ 2022-02-08  1:43       ` Chao Yu
  -1 siblings, 0 replies; 22+ messages in thread
From: Chao Yu @ 2022-02-08  1:43 UTC (permalink / raw)
  To: Jaegeuk Kim, linux-kernel, linux-f2fs-devel

On 2022/2/8 3:01, Jaegeuk Kim wrote:
> This adds a sysfs entry to call checkpoint during fsync() in order to avoid
> long elapsed time to run roll-forward recovery when booting the device.
> The default value doesn't enforce the limitation, which is the same
> behavior as before.
> 
> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>

Reviewed-by: Chao Yu <chao@kernel.org>

Thanks,



^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2022-02-08  1:48 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-01-27 21:41 [PATCH] f2fs: add a way to limit roll forward recovery time Jaegeuk Kim
2022-01-27 21:41 ` [f2fs-dev] " Jaegeuk Kim
2022-01-29  8:20 ` Chao Yu
2022-01-29  8:20   ` Chao Yu
2022-02-03  0:33   ` Jaegeuk Kim
2022-02-03  0:33     ` Jaegeuk Kim
2022-02-03  0:34 ` [PATCH v2] " Jaegeuk Kim
2022-02-03  0:34   ` [f2fs-dev] " Jaegeuk Kim
2022-02-03 14:46   ` Chao Yu
2022-02-03 14:46     ` Chao Yu
2022-02-03 17:42     ` Jaegeuk Kim
2022-02-03 17:42       ` Jaegeuk Kim
2022-02-04  0:20       ` Chao Yu
2022-02-04  0:20         ` Chao Yu
2022-02-04  6:16         ` Jaegeuk Kim
2022-02-04  6:16           ` Jaegeuk Kim
2022-02-04 12:03           ` Chao Yu
2022-02-04 12:03             ` Chao Yu
2022-02-07 19:01   ` [f2fs-dev] [PATCH v3] " Jaegeuk Kim
2022-02-07 19:01     ` Jaegeuk Kim
2022-02-08  1:43     ` Chao Yu
2022-02-08  1:43       ` Chao Yu
