* [PATCH v4] f2fs: compress: add compress_inode to cache compressed blocks
@ 2021-02-02  8:00 ` Chao Yu
  0 siblings, 0 replies; 26+ messages in thread
From: Chao Yu @ 2021-02-02  8:00 UTC (permalink / raw)
  To: jaegeuk; +Cc: linux-f2fs-devel, linux-kernel, chao, Chao Yu

Support using the address space of an internal inode to cache compressed
blocks, in order to improve the cache hit ratio of random reads.

Signed-off-by: Chao Yu <yuchao0@huawei.com>
---
v4:
- detect truncation during f2fs_cache_compressed_page()
- don't set PageUptodate for temporary page in f2fs_load_compressed_page()
- avoid changing compress_cache option from remount
- call f2fs_invalidate_compress_pages() from releasepage()/invalidatepage()
- fix use-after-free on @dic->nr_cpages
- fix to call f2fs_wait_on_block_writeback() before f2fs_load_compressed_page()

 Documentation/filesystems/f2fs.rst |   3 +
 fs/f2fs/compress.c                 | 177 ++++++++++++++++++++++++++++-
 fs/f2fs/data.c                     |  32 +++++-
 fs/f2fs/debug.c                    |  13 +++
 fs/f2fs/f2fs.h                     |  39 ++++++-
 fs/f2fs/gc.c                       |   1 +
 fs/f2fs/inode.c                    |  21 +++-
 fs/f2fs/segment.c                  |   6 +-
 fs/f2fs/super.c                    |  26 ++++-
 include/linux/f2fs_fs.h            |   1 +
 10 files changed, 305 insertions(+), 14 deletions(-)

diff --git a/Documentation/filesystems/f2fs.rst b/Documentation/filesystems/f2fs.rst
index 475994ed8b15..61b4c2999170 100644
--- a/Documentation/filesystems/f2fs.rst
+++ b/Documentation/filesystems/f2fs.rst
@@ -283,6 +283,9 @@ compress_mode=%s	 Control file compression mode. This supports "fs" and "user"
			 choosing the target file and the timing. The user can do
			 manual compression/decompression on the compression enabled
			 files using ioctls.
+compress_cache		 Support to use address space of a filesystem managed inode to
+			 cache compressed block, in order to improve cache hit ratio of
+			 random read.
 inlinecrypt		 When possible, encrypt/decrypt the contents of encrypted
			 files using the blk-crypto framework rather than
			 filesystem-layer encryption.
This allows the use of diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c index 77fa342de38f..ad38c05e3f14 100644 --- a/fs/f2fs/compress.c +++ b/fs/f2fs/compress.c @@ -12,9 +12,11 @@ #include <linux/lzo.h> #include <linux/lz4.h> #include <linux/zstd.h> +#include <linux/pagevec.h> #include "f2fs.h" #include "node.h" +#include "segment.h" #include <trace/events/f2fs.h> static struct kmem_cache *cic_entry_slab; @@ -756,7 +758,7 @@ static int f2fs_compress_pages(struct compress_ctx *cc) return ret; } -static void f2fs_decompress_cluster(struct decompress_io_ctx *dic) +void f2fs_decompress_cluster(struct decompress_io_ctx *dic) { struct f2fs_sb_info *sbi = F2FS_I_SB(dic->inode); struct f2fs_inode_info *fi = F2FS_I(dic->inode); @@ -855,7 +857,8 @@ static void f2fs_decompress_cluster(struct decompress_io_ctx *dic) * page being waited on in the cluster, and if so, it decompresses the cluster * (or in the case of a failure, cleans up without actually decompressing). */ -void f2fs_end_read_compressed_page(struct page *page, bool failed) +void f2fs_end_read_compressed_page(struct page *page, bool failed, + block_t blkaddr) { struct decompress_io_ctx *dic = (struct decompress_io_ctx *)page_private(page); @@ -865,6 +868,9 @@ void f2fs_end_read_compressed_page(struct page *page, bool failed) if (failed) WRITE_ONCE(dic->failed, true); + else if (blkaddr) + f2fs_cache_compressed_page(sbi, page, + dic->inode->i_ino, blkaddr); if (atomic_dec_and_test(&dic->remaining_pages)) f2fs_decompress_cluster(dic); @@ -1705,6 +1711,173 @@ void f2fs_put_page_dic(struct page *page) f2fs_put_dic(dic); } +const struct address_space_operations f2fs_compress_aops = { + .releasepage = f2fs_release_page, + .invalidatepage = f2fs_invalidate_page, +}; + +struct address_space *COMPRESS_MAPPING(struct f2fs_sb_info *sbi) +{ + return sbi->compress_inode->i_mapping; +} + +void f2fs_invalidate_compress_page(struct f2fs_sb_info *sbi, block_t blkaddr) +{ + if (!sbi->compress_inode) + return; + 
invalidate_mapping_pages(COMPRESS_MAPPING(sbi), blkaddr, blkaddr); +} + +void f2fs_cache_compressed_page(struct f2fs_sb_info *sbi, struct page *page, + nid_t ino, block_t blkaddr) +{ + struct page *cpage; + int ret; + struct sysinfo si; + unsigned long free_ram, avail_ram; + + if (!test_opt(sbi, COMPRESS_CACHE)) + return; + + if (!f2fs_is_valid_blkaddr(sbi, blkaddr, DATA_GENERIC_ENHANCE_READ)) + return; + + si_meminfo(&si); + free_ram = si.freeram; + avail_ram = si.totalram - si.totalhigh; + + /* free memory is lower than watermark, deny caching compress page */ + if (free_ram <= sbi->compress_watermark / 100 * avail_ram) + return; + + /* cached page count exceed threshold, deny caching compress page */ + if (COMPRESS_MAPPING(sbi)->nrpages >= + free_ram / 100 * sbi->compress_percent) + return; + + cpage = find_get_page(COMPRESS_MAPPING(sbi), blkaddr); + if (cpage) { + f2fs_put_page(cpage, 0); + return; + } + + cpage = alloc_page(__GFP_IO); + if (!cpage) + return; + + ret = add_to_page_cache_lru(cpage, COMPRESS_MAPPING(sbi), + blkaddr, GFP_NOFS); + if (ret) { + f2fs_put_page(cpage, 0); + return; + } + + f2fs_set_page_private(cpage, ino); + + if (!f2fs_is_valid_blkaddr(sbi, blkaddr, DATA_GENERIC_ENHANCE_READ)) + goto out; + + memcpy(page_address(cpage), page_address(page), PAGE_SIZE); + SetPageUptodate(cpage); +out: + f2fs_put_page(cpage, 1); +} + +bool f2fs_load_compressed_page(struct f2fs_sb_info *sbi, struct page *page, + block_t blkaddr) +{ + struct page *cpage; + bool hitted = false; + + if (!test_opt(sbi, COMPRESS_CACHE)) + return false; + + cpage = f2fs_pagecache_get_page(COMPRESS_MAPPING(sbi), + blkaddr, FGP_LOCK | FGP_NOWAIT, GFP_NOFS); + if (cpage) { + if (PageUptodate(cpage)) { + atomic_inc(&sbi->compress_page_hit); + memcpy(page_address(page), + page_address(cpage), PAGE_SIZE); + hitted = true; + } + f2fs_put_page(cpage, 1); + } + + return hitted; +} + +void f2fs_invalidate_compress_pages(struct f2fs_sb_info *sbi, nid_t ino) +{ + struct address_space 
*mapping = sbi->compress_inode->i_mapping; + struct pagevec pvec; + pgoff_t index = 0; + pgoff_t end = MAX_BLKADDR(sbi); + + pagevec_init(&pvec); + + do { + unsigned int nr_pages; + int i; + + nr_pages = pagevec_lookup_range(&pvec, mapping, + &index, end - 1); + if (!nr_pages) + break; + + for (i = 0; i < nr_pages; i++) { + struct page *page = pvec.pages[i]; + + if (page->index > end) + break; + + lock_page(page); + if (page->mapping != mapping) { + unlock_page(page); + continue; + } + + if (ino != page_private(page)) { + unlock_page(page); + continue; + } + + generic_error_remove_page(mapping, page); + unlock_page(page); + } + pagevec_release(&pvec); + cond_resched(); + } while (index < end); +} + +int f2fs_init_compress_inode(struct f2fs_sb_info *sbi) +{ + struct inode *inode; + + if (!test_opt(sbi, COMPRESS_CACHE)) + return 0; + + inode = f2fs_iget(sbi->sb, F2FS_COMPRESS_INO(sbi)); + if (IS_ERR(inode)) + return PTR_ERR(inode); + sbi->compress_inode = inode; + + sbi->compress_percent = COMPRESS_PERCENT; + sbi->compress_watermark = COMPRESS_WATERMARK; + + atomic_set(&sbi->compress_page_hit, 0); + + return 0; +} + +void f2fs_destroy_compress_inode(struct f2fs_sb_info *sbi) +{ + if (!sbi->compress_inode) + return; + iput(sbi->compress_inode); + sbi->compress_inode = NULL; +} + int f2fs_init_page_array_cache(struct f2fs_sb_info *sbi) { dev_t dev = sbi->sb->s_bdev->bd_dev; diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c index 38476d0d3916..5a58a16fe679 100644 --- a/fs/f2fs/data.c +++ b/fs/f2fs/data.c @@ -152,7 +152,7 @@ static void f2fs_finish_read_bio(struct bio *bio) if (f2fs_is_compressed_page(page)) { if (bio->bi_status) - f2fs_end_read_compressed_page(page, true); + f2fs_end_read_compressed_page(page, true, 0); f2fs_put_page_dic(page); continue; } @@ -248,15 +248,19 @@ static void f2fs_handle_step_decompress(struct bio_post_read_ctx *ctx) struct bio_vec *bv; struct bvec_iter_all iter_all; bool all_compressed = true; + block_t blkaddr = 
SECTOR_TO_BLOCK(ctx->bio->bi_iter.bi_sector); bio_for_each_segment_all(bv, ctx->bio, iter_all) { struct page *page = bv->bv_page; /* PG_error was set if decryption failed. */ if (f2fs_is_compressed_page(page)) - f2fs_end_read_compressed_page(page, PageError(page)); + f2fs_end_read_compressed_page(page, PageError(page), + blkaddr); else all_compressed = false; + + blkaddr++; } /* @@ -1381,9 +1385,11 @@ static int __allocate_data_block(struct dnode_of_data *dn, int seg_type) old_blkaddr = dn->data_blkaddr; f2fs_allocate_data_block(sbi, NULL, old_blkaddr, &dn->data_blkaddr, &sum, seg_type, NULL); - if (GET_SEGNO(sbi, old_blkaddr) != NULL_SEGNO) + if (GET_SEGNO(sbi, old_blkaddr) != NULL_SEGNO) { invalidate_mapping_pages(META_MAPPING(sbi), old_blkaddr, old_blkaddr); + f2fs_invalidate_compress_page(sbi, old_blkaddr); + } f2fs_update_data_blkaddr(dn, dn->data_blkaddr); /* @@ -2193,7 +2199,7 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret, goto out_put_dnode; } - for (i = 0; i < dic->nr_cpages; i++) { + for (i = 0; i < cc->nr_cpages; i++) { struct page *page = dic->cpages[i]; block_t blkaddr; struct bio_post_read_ctx *ctx; @@ -2201,6 +2207,14 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret, blkaddr = data_blkaddr(dn.inode, dn.node_page, dn.ofs_in_node + i + 1); + f2fs_wait_on_block_writeback(inode, blkaddr); + + if (f2fs_load_compressed_page(sbi, page, blkaddr)) { + if (atomic_dec_and_test(&dic->remaining_pages)) + f2fs_decompress_cluster(dic); + continue; + } + if (bio && (!page_is_mergeable(sbi, bio, *last_block_in_bio, blkaddr) || !f2fs_crypt_mergeable_bio(bio, inode, page->index, NULL))) { @@ -2222,8 +2236,6 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret, } } - f2fs_wait_on_block_writeback(inode, blkaddr); - if (bio_add_page(bio, page, blocksize, 0) < blocksize) goto submit_and_realloc; @@ -3637,6 +3649,9 @@ void f2fs_invalidate_page(struct page *page, unsigned int offset, 
clear_cold_data(page); + if (test_opt(sbi, COMPRESS_CACHE) && f2fs_compressed_file(inode)) + f2fs_invalidate_compress_pages(sbi, inode->i_ino); + if (IS_ATOMIC_WRITTEN_PAGE(page)) return f2fs_drop_inmem_page(inode, page); @@ -3653,6 +3668,11 @@ int f2fs_release_page(struct page *page, gfp_t wait) if (IS_ATOMIC_WRITTEN_PAGE(page)) return 0; + if (test_opt(F2FS_P_SB(page), COMPRESS_CACHE) && + f2fs_compressed_file(page->mapping->host)) + f2fs_invalidate_compress_pages(F2FS_P_SB(page), + page->mapping->host->i_ino); + clear_cold_data(page); f2fs_clear_page_private(page); return 1; diff --git a/fs/f2fs/debug.c b/fs/f2fs/debug.c index 91855d5721cd..3560ce89464a 100644 --- a/fs/f2fs/debug.c +++ b/fs/f2fs/debug.c @@ -152,6 +152,12 @@ static void update_general_status(struct f2fs_sb_info *sbi) si->node_pages = NODE_MAPPING(sbi)->nrpages; if (sbi->meta_inode) si->meta_pages = META_MAPPING(sbi)->nrpages; +#ifdef CONFIG_F2FS_FS_COMPRESSION + if (sbi->compress_inode) { + si->compress_pages = COMPRESS_MAPPING(sbi)->nrpages; + si->compress_page_hit = atomic_read(&sbi->compress_page_hit); + } +#endif si->nats = NM_I(sbi)->nat_cnt[TOTAL_NAT]; si->dirty_nats = NM_I(sbi)->nat_cnt[DIRTY_NAT]; si->sits = MAIN_SEGS(sbi); @@ -306,6 +312,12 @@ static void update_mem_info(struct f2fs_sb_info *sbi) unsigned npages = META_MAPPING(sbi)->nrpages; si->page_mem += (unsigned long long)npages << PAGE_SHIFT; } +#ifdef CONFIG_F2FS_FS_COMPRESSION + if (sbi->compress_inode) { + unsigned npages = COMPRESS_MAPPING(sbi)->nrpages; + si->page_mem += (unsigned long long)npages << PAGE_SHIFT; + } +#endif } static int stat_show(struct seq_file *s, void *v) @@ -473,6 +485,7 @@ static int stat_show(struct seq_file *s, void *v) "volatile IO: %4d (Max. 
%4d)\n", si->inmem_pages, si->aw_cnt, si->max_aw_cnt, si->vw_cnt, si->max_vw_cnt); + seq_printf(s, " - compress: %4d, hit:%8d\n", si->compress_pages, si->compress_page_hit); seq_printf(s, " - nodes: %4d in %4d\n", si->ndirty_node, si->node_pages); seq_printf(s, " - dents: %4d in dirs:%4d (%4d)\n", diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index 2860003a09ed..57f62761c6fa 100644 --- a/fs/f2fs/f2fs.h +++ b/fs/f2fs/f2fs.h @@ -98,6 +98,7 @@ extern const char *f2fs_fault_name[FAULT_MAX]; #define F2FS_MOUNT_NORECOVERY 0x04000000 #define F2FS_MOUNT_ATGC 0x08000000 #define F2FS_MOUNT_MERGE_CHECKPOINT 0x10000000 +#define F2FS_MOUNT_COMPRESS_CACHE 0x20000000 #define F2FS_OPTION(sbi) ((sbi)->mount_opt) #define clear_opt(sbi, option) (F2FS_OPTION(sbi).opt &= ~F2FS_MOUNT_##option) @@ -1320,6 +1321,9 @@ enum compress_flag { COMPRESS_MAX_FLAG, }; +#define COMPRESS_WATERMARK 20 +#define COMPRESS_PERCENT 20 + #define COMPRESS_DATA_RESERVED_SIZE 4 struct compress_data { __le32 clen; /* compressed data size */ @@ -1624,6 +1628,11 @@ struct f2fs_sb_info { #ifdef CONFIG_F2FS_FS_COMPRESSION struct kmem_cache *page_array_slab; /* page array entry */ unsigned int page_array_slab_size; /* default page array slab size */ + + struct inode *compress_inode; /* cache compressed blocks */ + unsigned int compress_percent; /* cache page percentage */ + unsigned int compress_watermark; /* cache page watermark */ + atomic_t compress_page_hit; /* cache hit count */ #endif }; @@ -3598,7 +3607,8 @@ struct f2fs_stat_info { unsigned int bimodal, avg_vblocks; int util_free, util_valid, util_invalid; int rsvd_segs, overp_segs; - int dirty_count, node_pages, meta_pages; + int dirty_count, node_pages, meta_pages, compress_pages; + int compress_page_hit; int prefree_count, call_count, cp_count, bg_cp_count; int tot_segs, node_segs, data_segs, free_segs, free_secs; int bg_node_segs, bg_data_segs; @@ -3934,7 +3944,9 @@ void f2fs_compress_write_end_io(struct bio *bio, struct page *page); bool 
f2fs_is_compress_backend_ready(struct inode *inode); int f2fs_init_compress_mempool(void); void f2fs_destroy_compress_mempool(void); -void f2fs_end_read_compressed_page(struct page *page, bool failed); +void f2fs_decompress_cluster(struct decompress_io_ctx *dic); +void f2fs_end_read_compressed_page(struct page *page, bool failed, + block_t blkaddr); bool f2fs_cluster_is_empty(struct compress_ctx *cc); bool f2fs_cluster_can_merge_page(struct compress_ctx *cc, pgoff_t index); void f2fs_compress_ctx_add_page(struct compress_ctx *cc, struct page *page); @@ -3952,10 +3964,19 @@ void f2fs_put_page_dic(struct page *page); int f2fs_init_compress_ctx(struct compress_ctx *cc); void f2fs_destroy_compress_ctx(struct compress_ctx *cc); void f2fs_init_compress_info(struct f2fs_sb_info *sbi); +int f2fs_init_compress_inode(struct f2fs_sb_info *sbi); +void f2fs_destroy_compress_inode(struct f2fs_sb_info *sbi); int f2fs_init_page_array_cache(struct f2fs_sb_info *sbi); void f2fs_destroy_page_array_cache(struct f2fs_sb_info *sbi); int __init f2fs_init_compress_cache(void); void f2fs_destroy_compress_cache(void); +struct address_space *COMPRESS_MAPPING(struct f2fs_sb_info *sbi); +void f2fs_invalidate_compress_page(struct f2fs_sb_info *sbi, block_t blkaddr); +void f2fs_cache_compressed_page(struct f2fs_sb_info *sbi, struct page *page, + nid_t ino, block_t blkaddr); +bool f2fs_load_compressed_page(struct f2fs_sb_info *sbi, struct page *page, + block_t blkaddr); +void f2fs_invalidate_compress_pages(struct f2fs_sb_info *sbi, nid_t ino); #else static inline bool f2fs_is_compressed_page(struct page *page) { return false; } static inline bool f2fs_is_compress_backend_ready(struct inode *inode) @@ -3972,7 +3993,9 @@ static inline struct page *f2fs_compress_control_page(struct page *page) } static inline int f2fs_init_compress_mempool(void) { return 0; } static inline void f2fs_destroy_compress_mempool(void) { } -static inline void f2fs_end_read_compressed_page(struct page *page, bool failed) 
+static inline void f2fs_decompress_cluster(struct decompress_io_ctx *dic) { } +static inline void f2fs_end_read_compressed_page(struct page *page, + bool failed, block_t blkaddr) { WARN_ON_ONCE(1); } @@ -3980,10 +4003,20 @@ static inline void f2fs_put_page_dic(struct page *page) { WARN_ON_ONCE(1); } +static inline int f2fs_init_compress_inode(struct f2fs_sb_info *sbi) { return 0; } +static inline void f2fs_destroy_compress_inode(struct f2fs_sb_info *sbi) { } static inline int f2fs_init_page_array_cache(struct f2fs_sb_info *sbi) { return 0; } static inline void f2fs_destroy_page_array_cache(struct f2fs_sb_info *sbi) { } static inline int __init f2fs_init_compress_cache(void) { return 0; } static inline void f2fs_destroy_compress_cache(void) { } +static inline void f2fs_invalidate_compress_page(struct f2fs_sb_info *sbi, + block_t blkaddr) { } +static inline void f2fs_cache_compressed_page(struct f2fs_sb_info *sbi, + struct page *page, nid_t ino, block_t blkaddr) { } +static inline bool f2fs_load_compressed_page(struct f2fs_sb_info *sbi, + struct page *page, block_t blkaddr) { return false; } +static inline void f2fs_invalidate_compress_pages(struct f2fs_sb_info *sbi, + nid_t ino) { } #endif static inline void set_compress_context(struct inode *inode) diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c index 39330ad3c44e..4561c44dfa8f 100644 --- a/fs/f2fs/gc.c +++ b/fs/f2fs/gc.c @@ -1226,6 +1226,7 @@ static int move_data_block(struct inode *inode, block_t bidx, f2fs_put_page(mpage, 1); invalidate_mapping_pages(META_MAPPING(fio.sbi), fio.old_blkaddr, fio.old_blkaddr); + f2fs_invalidate_compress_page(fio.sbi, fio.old_blkaddr); set_page_dirty(fio.encrypted_page); if (clear_page_dirty_for_io(fio.encrypted_page)) diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c index 349d9cb933ee..f030b9b79202 100644 --- a/fs/f2fs/inode.c +++ b/fs/f2fs/inode.c @@ -18,6 +18,10 @@ #include <trace/events/f2fs.h> +#ifdef CONFIG_F2FS_FS_COMPRESSION +extern const struct address_space_operations 
f2fs_compress_aops; +#endif + void f2fs_mark_inode_dirty_sync(struct inode *inode, bool sync) { if (is_inode_flag_set(inode, FI_NEW_INODE)) @@ -494,6 +498,11 @@ struct inode *f2fs_iget(struct super_block *sb, unsigned long ino) if (ino == F2FS_NODE_INO(sbi) || ino == F2FS_META_INO(sbi)) goto make_now; +#ifdef CONFIG_F2FS_FS_COMPRESSION + if (ino == F2FS_COMPRESS_INO(sbi)) + goto make_now; +#endif + ret = do_read_inode(inode); if (ret) goto bad_inode; @@ -504,6 +513,12 @@ struct inode *f2fs_iget(struct super_block *sb, unsigned long ino) } else if (ino == F2FS_META_INO(sbi)) { inode->i_mapping->a_ops = &f2fs_meta_aops; mapping_set_gfp_mask(inode->i_mapping, GFP_NOFS); + } else if (ino == F2FS_COMPRESS_INO(sbi)) { +#ifdef CONFIG_F2FS_FS_COMPRESSION + inode->i_mapping->a_ops = &f2fs_compress_aops; +#endif + mapping_set_gfp_mask(inode->i_mapping, + GFP_NOFS | __GFP_HIGHMEM | __GFP_MOVABLE); } else if (S_ISREG(inode->i_mode)) { inode->i_op = &f2fs_file_inode_operations; inode->i_fop = &f2fs_file_operations; @@ -722,8 +737,12 @@ void f2fs_evict_inode(struct inode *inode) trace_f2fs_evict_inode(inode); truncate_inode_pages_final(&inode->i_data); + if (test_opt(sbi, COMPRESS_CACHE) && f2fs_compressed_file(inode)) + f2fs_invalidate_compress_pages(sbi, inode->i_ino); + if (inode->i_ino == F2FS_NODE_INO(sbi) || - inode->i_ino == F2FS_META_INO(sbi)) + inode->i_ino == F2FS_META_INO(sbi) || + inode->i_ino == F2FS_COMPRESS_INO(sbi)) goto out_clear; f2fs_bug_on(sbi, get_dirty_pages(inode)); diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c index 7d34f1cacdee..2ee9b7ba46bd 100644 --- a/fs/f2fs/segment.c +++ b/fs/f2fs/segment.c @@ -2302,6 +2302,7 @@ void f2fs_invalidate_blocks(struct f2fs_sb_info *sbi, block_t addr) return; invalidate_mapping_pages(META_MAPPING(sbi), addr, addr); + f2fs_invalidate_compress_page(sbi, addr); /* add it into sit main buffer */ down_write(&sit_i->sentry_lock); @@ -3429,9 +3430,11 @@ static void do_write_page(struct f2fs_summary *sum, struct f2fs_io_info 
*fio) reallocate: f2fs_allocate_data_block(fio->sbi, fio->page, fio->old_blkaddr, &fio->new_blkaddr, sum, type, fio); - if (GET_SEGNO(fio->sbi, fio->old_blkaddr) != NULL_SEGNO) + if (GET_SEGNO(fio->sbi, fio->old_blkaddr) != NULL_SEGNO) { invalidate_mapping_pages(META_MAPPING(fio->sbi), fio->old_blkaddr, fio->old_blkaddr); + f2fs_invalidate_compress_page(fio->sbi, fio->old_blkaddr); + } /* writeout dirty page into bdev */ f2fs_submit_page_write(fio); @@ -3604,6 +3607,7 @@ void f2fs_do_replace_block(struct f2fs_sb_info *sbi, struct f2fs_summary *sum, if (GET_SEGNO(sbi, old_blkaddr) != NULL_SEGNO) { invalidate_mapping_pages(META_MAPPING(sbi), old_blkaddr, old_blkaddr); + f2fs_invalidate_compress_page(sbi, old_blkaddr); if (!from_gc) update_segment_mtime(sbi, old_blkaddr, 0); update_sit_entry(sbi, old_blkaddr, -1); diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c index d8603e6c4916..d7beb8d511a5 100644 --- a/fs/f2fs/super.c +++ b/fs/f2fs/super.c @@ -151,6 +151,7 @@ enum { Opt_compress_extension, Opt_compress_chksum, Opt_compress_mode, + Opt_compress_cache, Opt_atgc, Opt_err, }; @@ -223,6 +224,7 @@ static match_table_t f2fs_tokens = { {Opt_compress_extension, "compress_extension=%s"}, {Opt_compress_chksum, "compress_chksum"}, {Opt_compress_mode, "compress_mode=%s"}, + {Opt_compress_cache, "compress_cache"}, {Opt_atgc, "atgc"}, {Opt_err, NULL}, }; @@ -1062,12 +1064,16 @@ static int parse_options(struct super_block *sb, char *options, bool is_remount) } kfree(name); break; + case Opt_compress_cache: + set_opt(sbi, COMPRESS_CACHE); + break; #else case Opt_compress_algorithm: case Opt_compress_log_size: case Opt_compress_extension: case Opt_compress_chksum: case Opt_compress_mode: + case Opt_compress_cache: f2fs_info(sbi, "compression options not supported"); break; #endif @@ -1396,6 +1402,8 @@ static void f2fs_put_super(struct super_block *sb) f2fs_bug_on(sbi, sbi->fsync_node_num); + f2fs_destroy_compress_inode(sbi); + iput(sbi->node_inode); sbi->node_inode = NULL; @@ 
-1660,6 +1668,9 @@ static inline void f2fs_show_compress_options(struct seq_file *seq, seq_printf(seq, ",compress_mode=%s", "fs"); else if (F2FS_OPTION(sbi).compress_mode == COMPR_MODE_USER) seq_printf(seq, ",compress_mode=%s", "user"); + + if (test_opt(sbi, COMPRESS_CACHE)) + seq_puts(seq, ",compress_cache"); } static int f2fs_show_options(struct seq_file *seq, struct dentry *root) @@ -1930,6 +1941,7 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data) bool disable_checkpoint = test_opt(sbi, DISABLE_CHECKPOINT); bool no_io_align = !F2FS_IO_ALIGNED(sbi); bool no_atgc = !test_opt(sbi, ATGC); + bool no_compress_cache = !test_opt(sbi, COMPRESS_CACHE); bool checkpoint_changed; #ifdef CONFIG_QUOTA int i, j; @@ -2022,6 +2034,12 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data) goto restore_opts; } + if (no_compress_cache == !!test_opt(sbi, COMPRESS_CACHE)) { + err = -EINVAL; + f2fs_warn(sbi, "switch compress_cache option is not allowed"); + goto restore_opts; + } + if ((*flags & SB_RDONLY) && test_opt(sbi, DISABLE_CHECKPOINT)) { err = -EINVAL; f2fs_warn(sbi, "disabling checkpoint not compatible with read-only"); @@ -3900,10 +3918,14 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent) goto free_node_inode; } - err = f2fs_register_sysfs(sbi); + err = f2fs_init_compress_inode(sbi); if (err) goto free_root_inode; + err = f2fs_register_sysfs(sbi); + if (err) + goto free_compress_inode; + #ifdef CONFIG_QUOTA /* Enable quota usage during mount */ if (f2fs_sb_has_quota_ino(sbi) && !f2fs_readonly(sb)) { @@ -4037,6 +4059,8 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent) /* evict some inodes being cached by GC */ evict_inodes(sb); f2fs_unregister_sysfs(sbi); +free_compress_inode: + f2fs_destroy_compress_inode(sbi); free_root_inode: dput(sb->s_root); sb->s_root = NULL; diff --git a/include/linux/f2fs_fs.h b/include/linux/f2fs_fs.h index c6cc0a566ef5..2dcc63fe8494 100644 --- 
a/include/linux/f2fs_fs.h
+++ b/include/linux/f2fs_fs.h
@@ -34,6 +34,7 @@
 #define F2FS_ROOT_INO(sbi)	((sbi)->root_ino_num)
 #define F2FS_NODE_INO(sbi)	((sbi)->node_ino_num)
 #define F2FS_META_INO(sbi)	((sbi)->meta_ino_num)
+#define F2FS_COMPRESS_INO(sbi)	(NM_I(sbi)->max_nid)

 #define F2FS_MAX_QUOTAS		3
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 26+ messages in thread
f2fs_is_compress_backend_ready(struct inode *inode); int f2fs_init_compress_mempool(void); void f2fs_destroy_compress_mempool(void); -void f2fs_end_read_compressed_page(struct page *page, bool failed); +void f2fs_decompress_cluster(struct decompress_io_ctx *dic); +void f2fs_end_read_compressed_page(struct page *page, bool failed, + block_t blkaddr); bool f2fs_cluster_is_empty(struct compress_ctx *cc); bool f2fs_cluster_can_merge_page(struct compress_ctx *cc, pgoff_t index); void f2fs_compress_ctx_add_page(struct compress_ctx *cc, struct page *page); @@ -3952,10 +3964,19 @@ void f2fs_put_page_dic(struct page *page); int f2fs_init_compress_ctx(struct compress_ctx *cc); void f2fs_destroy_compress_ctx(struct compress_ctx *cc); void f2fs_init_compress_info(struct f2fs_sb_info *sbi); +int f2fs_init_compress_inode(struct f2fs_sb_info *sbi); +void f2fs_destroy_compress_inode(struct f2fs_sb_info *sbi); int f2fs_init_page_array_cache(struct f2fs_sb_info *sbi); void f2fs_destroy_page_array_cache(struct f2fs_sb_info *sbi); int __init f2fs_init_compress_cache(void); void f2fs_destroy_compress_cache(void); +struct address_space *COMPRESS_MAPPING(struct f2fs_sb_info *sbi); +void f2fs_invalidate_compress_page(struct f2fs_sb_info *sbi, block_t blkaddr); +void f2fs_cache_compressed_page(struct f2fs_sb_info *sbi, struct page *page, + nid_t ino, block_t blkaddr); +bool f2fs_load_compressed_page(struct f2fs_sb_info *sbi, struct page *page, + block_t blkaddr); +void f2fs_invalidate_compress_pages(struct f2fs_sb_info *sbi, nid_t ino); #else static inline bool f2fs_is_compressed_page(struct page *page) { return false; } static inline bool f2fs_is_compress_backend_ready(struct inode *inode) @@ -3972,7 +3993,9 @@ static inline struct page *f2fs_compress_control_page(struct page *page) } static inline int f2fs_init_compress_mempool(void) { return 0; } static inline void f2fs_destroy_compress_mempool(void) { } -static inline void f2fs_end_read_compressed_page(struct page *page, bool failed) 
+static inline void f2fs_decompress_cluster(struct decompress_io_ctx *dic) { } +static inline void f2fs_end_read_compressed_page(struct page *page, + bool failed, block_t blkaddr) { WARN_ON_ONCE(1); } @@ -3980,10 +4003,20 @@ static inline void f2fs_put_page_dic(struct page *page) { WARN_ON_ONCE(1); } +static inline int f2fs_init_compress_inode(struct f2fs_sb_info *sbi) { return 0; } +static inline void f2fs_destroy_compress_inode(struct f2fs_sb_info *sbi) { } static inline int f2fs_init_page_array_cache(struct f2fs_sb_info *sbi) { return 0; } static inline void f2fs_destroy_page_array_cache(struct f2fs_sb_info *sbi) { } static inline int __init f2fs_init_compress_cache(void) { return 0; } static inline void f2fs_destroy_compress_cache(void) { } +static inline void f2fs_invalidate_compress_page(struct f2fs_sb_info *sbi, + block_t blkaddr) { } +static inline void f2fs_cache_compressed_page(struct f2fs_sb_info *sbi, + struct page *page, nid_t ino, block_t blkaddr) { } +static inline bool f2fs_load_compressed_page(struct f2fs_sb_info *sbi, + struct page *page, block_t blkaddr) { return false; } +static inline void f2fs_invalidate_compress_pages(struct f2fs_sb_info *sbi, + nid_t ino) { } #endif static inline void set_compress_context(struct inode *inode) diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c index 39330ad3c44e..4561c44dfa8f 100644 --- a/fs/f2fs/gc.c +++ b/fs/f2fs/gc.c @@ -1226,6 +1226,7 @@ static int move_data_block(struct inode *inode, block_t bidx, f2fs_put_page(mpage, 1); invalidate_mapping_pages(META_MAPPING(fio.sbi), fio.old_blkaddr, fio.old_blkaddr); + f2fs_invalidate_compress_page(fio.sbi, fio.old_blkaddr); set_page_dirty(fio.encrypted_page); if (clear_page_dirty_for_io(fio.encrypted_page)) diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c index 349d9cb933ee..f030b9b79202 100644 --- a/fs/f2fs/inode.c +++ b/fs/f2fs/inode.c @@ -18,6 +18,10 @@ #include <trace/events/f2fs.h> +#ifdef CONFIG_F2FS_FS_COMPRESSION +extern const struct address_space_operations 
f2fs_compress_aops; +#endif + void f2fs_mark_inode_dirty_sync(struct inode *inode, bool sync) { if (is_inode_flag_set(inode, FI_NEW_INODE)) @@ -494,6 +498,11 @@ struct inode *f2fs_iget(struct super_block *sb, unsigned long ino) if (ino == F2FS_NODE_INO(sbi) || ino == F2FS_META_INO(sbi)) goto make_now; +#ifdef CONFIG_F2FS_FS_COMPRESSION + if (ino == F2FS_COMPRESS_INO(sbi)) + goto make_now; +#endif + ret = do_read_inode(inode); if (ret) goto bad_inode; @@ -504,6 +513,12 @@ struct inode *f2fs_iget(struct super_block *sb, unsigned long ino) } else if (ino == F2FS_META_INO(sbi)) { inode->i_mapping->a_ops = &f2fs_meta_aops; mapping_set_gfp_mask(inode->i_mapping, GFP_NOFS); + } else if (ino == F2FS_COMPRESS_INO(sbi)) { +#ifdef CONFIG_F2FS_FS_COMPRESSION + inode->i_mapping->a_ops = &f2fs_compress_aops; +#endif + mapping_set_gfp_mask(inode->i_mapping, + GFP_NOFS | __GFP_HIGHMEM | __GFP_MOVABLE); } else if (S_ISREG(inode->i_mode)) { inode->i_op = &f2fs_file_inode_operations; inode->i_fop = &f2fs_file_operations; @@ -722,8 +737,12 @@ void f2fs_evict_inode(struct inode *inode) trace_f2fs_evict_inode(inode); truncate_inode_pages_final(&inode->i_data); + if (test_opt(sbi, COMPRESS_CACHE) && f2fs_compressed_file(inode)) + f2fs_invalidate_compress_pages(sbi, inode->i_ino); + if (inode->i_ino == F2FS_NODE_INO(sbi) || - inode->i_ino == F2FS_META_INO(sbi)) + inode->i_ino == F2FS_META_INO(sbi) || + inode->i_ino == F2FS_COMPRESS_INO(sbi)) goto out_clear; f2fs_bug_on(sbi, get_dirty_pages(inode)); diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c index 7d34f1cacdee..2ee9b7ba46bd 100644 --- a/fs/f2fs/segment.c +++ b/fs/f2fs/segment.c @@ -2302,6 +2302,7 @@ void f2fs_invalidate_blocks(struct f2fs_sb_info *sbi, block_t addr) return; invalidate_mapping_pages(META_MAPPING(sbi), addr, addr); + f2fs_invalidate_compress_page(sbi, addr); /* add it into sit main buffer */ down_write(&sit_i->sentry_lock); @@ -3429,9 +3430,11 @@ static void do_write_page(struct f2fs_summary *sum, struct f2fs_io_info 
*fio) reallocate: f2fs_allocate_data_block(fio->sbi, fio->page, fio->old_blkaddr, &fio->new_blkaddr, sum, type, fio); - if (GET_SEGNO(fio->sbi, fio->old_blkaddr) != NULL_SEGNO) + if (GET_SEGNO(fio->sbi, fio->old_blkaddr) != NULL_SEGNO) { invalidate_mapping_pages(META_MAPPING(fio->sbi), fio->old_blkaddr, fio->old_blkaddr); + f2fs_invalidate_compress_page(fio->sbi, fio->old_blkaddr); + } /* writeout dirty page into bdev */ f2fs_submit_page_write(fio); @@ -3604,6 +3607,7 @@ void f2fs_do_replace_block(struct f2fs_sb_info *sbi, struct f2fs_summary *sum, if (GET_SEGNO(sbi, old_blkaddr) != NULL_SEGNO) { invalidate_mapping_pages(META_MAPPING(sbi), old_blkaddr, old_blkaddr); + f2fs_invalidate_compress_page(sbi, old_blkaddr); if (!from_gc) update_segment_mtime(sbi, old_blkaddr, 0); update_sit_entry(sbi, old_blkaddr, -1); diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c index d8603e6c4916..d7beb8d511a5 100644 --- a/fs/f2fs/super.c +++ b/fs/f2fs/super.c @@ -151,6 +151,7 @@ enum { Opt_compress_extension, Opt_compress_chksum, Opt_compress_mode, + Opt_compress_cache, Opt_atgc, Opt_err, }; @@ -223,6 +224,7 @@ static match_table_t f2fs_tokens = { {Opt_compress_extension, "compress_extension=%s"}, {Opt_compress_chksum, "compress_chksum"}, {Opt_compress_mode, "compress_mode=%s"}, + {Opt_compress_cache, "compress_cache"}, {Opt_atgc, "atgc"}, {Opt_err, NULL}, }; @@ -1062,12 +1064,16 @@ static int parse_options(struct super_block *sb, char *options, bool is_remount) } kfree(name); break; + case Opt_compress_cache: + set_opt(sbi, COMPRESS_CACHE); + break; #else case Opt_compress_algorithm: case Opt_compress_log_size: case Opt_compress_extension: case Opt_compress_chksum: case Opt_compress_mode: + case Opt_compress_cache: f2fs_info(sbi, "compression options not supported"); break; #endif @@ -1396,6 +1402,8 @@ static void f2fs_put_super(struct super_block *sb) f2fs_bug_on(sbi, sbi->fsync_node_num); + f2fs_destroy_compress_inode(sbi); + iput(sbi->node_inode); sbi->node_inode = NULL; @@ 
-1660,6 +1668,9 @@ static inline void f2fs_show_compress_options(struct seq_file *seq, seq_printf(seq, ",compress_mode=%s", "fs"); else if (F2FS_OPTION(sbi).compress_mode == COMPR_MODE_USER) seq_printf(seq, ",compress_mode=%s", "user"); + + if (test_opt(sbi, COMPRESS_CACHE)) + seq_puts(seq, ",compress_cache"); } static int f2fs_show_options(struct seq_file *seq, struct dentry *root) @@ -1930,6 +1941,7 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data) bool disable_checkpoint = test_opt(sbi, DISABLE_CHECKPOINT); bool no_io_align = !F2FS_IO_ALIGNED(sbi); bool no_atgc = !test_opt(sbi, ATGC); + bool no_compress_cache = !test_opt(sbi, COMPRESS_CACHE); bool checkpoint_changed; #ifdef CONFIG_QUOTA int i, j; @@ -2022,6 +2034,12 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data) goto restore_opts; } + if (no_compress_cache == !!test_opt(sbi, COMPRESS_CACHE)) { + err = -EINVAL; + f2fs_warn(sbi, "switch compress_cache option is not allowed"); + goto restore_opts; + } + if ((*flags & SB_RDONLY) && test_opt(sbi, DISABLE_CHECKPOINT)) { err = -EINVAL; f2fs_warn(sbi, "disabling checkpoint not compatible with read-only"); @@ -3900,10 +3918,14 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent) goto free_node_inode; } - err = f2fs_register_sysfs(sbi); + err = f2fs_init_compress_inode(sbi); if (err) goto free_root_inode; + err = f2fs_register_sysfs(sbi); + if (err) + goto free_compress_inode; + #ifdef CONFIG_QUOTA /* Enable quota usage during mount */ if (f2fs_sb_has_quota_ino(sbi) && !f2fs_readonly(sb)) { @@ -4037,6 +4059,8 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent) /* evict some inodes being cached by GC */ evict_inodes(sb); f2fs_unregister_sysfs(sbi); +free_compress_inode: + f2fs_destroy_compress_inode(sbi); free_root_inode: dput(sb->s_root); sb->s_root = NULL; diff --git a/include/linux/f2fs_fs.h b/include/linux/f2fs_fs.h index c6cc0a566ef5..2dcc63fe8494 100644 --- 
a/include/linux/f2fs_fs.h +++ b/include/linux/f2fs_fs.h @@ -34,6 +34,7 @@ #define F2FS_ROOT_INO(sbi) ((sbi)->root_ino_num) #define F2FS_NODE_INO(sbi) ((sbi)->node_ino_num) #define F2FS_META_INO(sbi) ((sbi)->meta_ino_num) +#define F2FS_COMPRESS_INO(sbi) (NM_I(sbi)->max_nid) #define F2FS_MAX_QUOTAS 3 -- 2.29.2 _______________________________________________ Linux-f2fs-devel mailing list Linux-f2fs-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel ^ permalink raw reply related [flat|nested] 26+ messages in thread
* Re: [PATCH v4] f2fs: compress: add compress_inode to cache compressed blockst 2021-02-02 8:00 ` [f2fs-dev] " Chao Yu @ 2021-02-04 3:25 ` Chao Yu -1 siblings, 0 replies; 26+ messages in thread From: Chao Yu @ 2021-02-04 3:25 UTC (permalink / raw) To: jaegeuk; +Cc: linux-f2fs-devel, linux-kernel, chao Jaegeuk, On 2021/2/2 16:00, Chao Yu wrote: > - for (i = 0; i < dic->nr_cpages; i++) { > + for (i = 0; i < cc->nr_cpages; i++) { > struct page *page = dic->cpages[i]; por_fsstress still hang in this line? Thanks, > block_t blkaddr; > struct bio_post_read_ctx *ctx; > @@ -2201,6 +2207,14 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret, > blkaddr = data_blkaddr(dn.inode, dn.node_page, > dn.ofs_in_node + i + 1); > > + f2fs_wait_on_block_writeback(inode, blkaddr); > + > + if (f2fs_load_compressed_page(sbi, page, blkaddr)) { > + if (atomic_dec_and_test(&dic->remaining_pages)) > + f2fs_decompress_cluster(dic); > + continue; > + } > + ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH v4] f2fs: compress: add compress_inode to cache compressed blockst 2021-02-04 3:25 ` [f2fs-dev] " Chao Yu @ 2021-02-28 5:09 ` Jaegeuk Kim -1 siblings, 0 replies; 26+ messages in thread From: Jaegeuk Kim @ 2021-02-28 5:09 UTC (permalink / raw) To: Chao Yu; +Cc: linux-f2fs-devel, linux-kernel, chao On 02/04, Chao Yu wrote: > Jaegeuk, > > On 2021/2/2 16:00, Chao Yu wrote: > > - for (i = 0; i < dic->nr_cpages; i++) { > > + for (i = 0; i < cc->nr_cpages; i++) { > > struct page *page = dic->cpages[i]; > > por_fsstress still hang in this line? I'm stuck on testing the patches, since the latest kernel is panicking somehow. Let me update later, once I can test a bit. :( > > Thanks, > > > block_t blkaddr; > > struct bio_post_read_ctx *ctx; > > @@ -2201,6 +2207,14 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret, > > blkaddr = data_blkaddr(dn.inode, dn.node_page, > > dn.ofs_in_node + i + 1); > > + f2fs_wait_on_block_writeback(inode, blkaddr); > > + > > + if (f2fs_load_compressed_page(sbi, page, blkaddr)) { > > + if (atomic_dec_and_test(&dic->remaining_pages)) > > + f2fs_decompress_cluster(dic); > > + continue; > > + } > > + ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [f2fs-dev] [PATCH v4] f2fs: compress: add compress_inode to cache compressed blockst 2021-02-28 5:09 ` [f2fs-dev] " Jaegeuk Kim @ 2021-03-04 20:20 ` Jaegeuk Kim -1 siblings, 0 replies; 26+ messages in thread From: Jaegeuk Kim @ 2021-03-04 20:20 UTC (permalink / raw) To: Chao Yu; +Cc: linux-kernel, linux-f2fs-devel On 02/27, Jaegeuk Kim wrote: > On 02/04, Chao Yu wrote: > > Jaegeuk, > > > > On 2021/2/2 16:00, Chao Yu wrote: > > > - for (i = 0; i < dic->nr_cpages; i++) { > > > + for (i = 0; i < cc->nr_cpages; i++) { > > > struct page *page = dic->cpages[i]; > > > > por_fsstress still hang in this line? > > I'm stuck on testing the patches, since the latest kernel is panicking somehow. > Let me update later, once I can test a bit. :( It seems this works without error. https://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs.git/commit/?h=dev&id=4e6e1364dccba80ed44925870b97fbcf989b96c9 > > > > > Thanks, > > > > > block_t blkaddr; > > > struct bio_post_read_ctx *ctx; > > > @@ -2201,6 +2207,14 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret, > > > blkaddr = data_blkaddr(dn.inode, dn.node_page, > > > dn.ofs_in_node + i + 1); > > > + f2fs_wait_on_block_writeback(inode, blkaddr); > > > + > > > + if (f2fs_load_compressed_page(sbi, page, blkaddr)) { > > > + if (atomic_dec_and_test(&dic->remaining_pages)) > > > + f2fs_decompress_cluster(dic); > > > + continue; > > > + } > > > + > > > _______________________________________________ > Linux-f2fs-devel mailing list > Linux-f2fs-devel@lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [f2fs-dev] [PATCH v4] f2fs: compress: add compress_inode to cache compressed blockst 2021-03-04 20:20 ` Jaegeuk Kim @ 2021-03-05 3:07 ` Chao Yu -1 siblings, 0 replies; 26+ messages in thread From: Chao Yu @ 2021-03-05 3:07 UTC (permalink / raw) To: Jaegeuk Kim; +Cc: linux-kernel, linux-f2fs-devel On 2021/3/5 4:20, Jaegeuk Kim wrote: > On 02/27, Jaegeuk Kim wrote: >> On 02/04, Chao Yu wrote: >>> Jaegeuk, >>> >>> On 2021/2/2 16:00, Chao Yu wrote: >>>> - for (i = 0; i < dic->nr_cpages; i++) { >>>> + for (i = 0; i < cc->nr_cpages; i++) { >>>> struct page *page = dic->cpages[i]; >>> >>> por_fsstress still hang in this line? >> >> I'm stuck on testing the patches, since the latest kernel is panicking somehow. >> Let me update later, once I can test a bit. :( > > It seems this works without error. > https://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs.git/commit/?h=dev&id=4e6e1364dccba80ed44925870b97fbcf989b96c9 Ah, good news. Thanks for helping to test the patch. :) Thanks, > >> >>> >>> Thanks, >>> >>>> block_t blkaddr; >>>> struct bio_post_read_ctx *ctx; >>>> @@ -2201,6 +2207,14 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret, >>>> blkaddr = data_blkaddr(dn.inode, dn.node_page, >>>> dn.ofs_in_node + i + 1); >>>> + f2fs_wait_on_block_writeback(inode, blkaddr); >>>> + >>>> + if (f2fs_load_compressed_page(sbi, page, blkaddr)) { >>>> + if (atomic_dec_and_test(&dic->remaining_pages)) >>>> + f2fs_decompress_cluster(dic); >>>> + continue; >>>> + } >>>> + >> >> >> _______________________________________________ >> Linux-f2fs-devel mailing list >> Linux-f2fs-devel@lists.sourceforge.net >> https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel > . > ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [f2fs-dev] [PATCH v4] f2fs: compress: add compress_inode to cache compressed blockst 2021-03-05 3:07 ` Chao Yu @ 2021-03-09 0:01 ` Jaegeuk Kim -1 siblings, 0 replies; 26+ messages in thread From: Jaegeuk Kim @ 2021-03-09 0:01 UTC (permalink / raw) To: Chao Yu; +Cc: linux-kernel, linux-f2fs-devel On 03/05, Chao Yu wrote: > On 2021/3/5 4:20, Jaegeuk Kim wrote: > > On 02/27, Jaegeuk Kim wrote: > > > On 02/04, Chao Yu wrote: > > > > Jaegeuk, > > > > > > > > On 2021/2/2 16:00, Chao Yu wrote: > > > > > - for (i = 0; i < dic->nr_cpages; i++) { > > > > > + for (i = 0; i < cc->nr_cpages; i++) { > > > > > struct page *page = dic->cpages[i]; > > > > > > > > por_fsstress still hang in this line? > > > > > > I'm stuck on testing the patches, since the latest kernel is panicking somehow. > > > Let me update later, once I can test a bit. :( > > > > It seems this works without error. > > https://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs.git/commit/?h=dev&id=4e6e1364dccba80ed44925870b97fbcf989b96c9 > > Ah, good news. > > Thanks for helping to test the patch. :) Hmm, I hit this again. Let me check w/o compress_cache back. :( [159210.201131] ------------[ cut here ]------------ [159210.204241] kernel BUG at fs/f2fs/compress.c:1082! 
[159210.207321] invalid opcode: 0000 [#1] SMP PTI [159210.209407] CPU: 4 PID: 2753477 Comm: kworker/u16:2 Tainted: G OE 5.12.0-rc1-custom #1 [159210.212737] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014 [159210.224800] Workqueue: writeback wb_workfn (flush-252:16) [159210.226851] RIP: 0010:prepare_compress_overwrite+0x4c0/0x760 [f2fs] [159210.229506] Code: 8b bf 90 0a 00 00 be 40 0d 00 00 e8 4a 92 4f c4 49 89 44 24 18 48 85 c0 0f 84 85 02 00 00 41 8b 54 24 10 e9 c5 fb ff ff 0f 0b <0f> 0b 41 8b 44 24 20 85 c0 0f 84 2a ff ff ff 48 8 [159210.236311] RSP: 0018:ffff9fa782177858 EFLAGS: 00010246 [159210.238517] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000 [159210.240734] RDX: 000000000000001c RSI: 0000000000000000 RDI: 0000000000000000 [159210.242941] RBP: ffff9fa7821778f0 R08: ffff93b9c89cb232 R09: 0000000000000003 [159210.245107] R10: ffffffff86873420 R11: 0000000000000001 R12: ffff9fa782177900 [159210.247319] R13: ffff93b906dca578 R14: 000000000000031c R15: 0000000000000000 [159210.249492] FS: 0000000000000000(0000) GS:ffff93b9fbd00000(0000) knlGS:0000000000000000 [159210.254724] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [159210.258709] CR2: 00007f0367d33738 CR3: 000000012bc0c004 CR4: 0000000000370ee0 [159210.261608] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [159210.264614] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [159210.267476] Call Trace: [159210.269075] ? f2fs_compress_write_end+0xa2/0x100 [f2fs] [159210.271165] f2fs_prepare_compress_overwrite+0x5f/0x80 [f2fs] [159210.273017] f2fs_write_cache_pages+0x468/0x8a0 [f2fs] [159210.274848] f2fs_write_data_pages+0x2a4/0x2f0 [f2fs] [159210.276612] ? from_kgid+0x12/0x20 [159210.277994] ? f2fs_update_inode+0x3cb/0x510 [f2fs] [159210.279748] do_writepages+0x38/0xc0 [159210.281183] ? 
f2fs_write_inode+0x11c/0x300 [f2fs] [159210.282877] __writeback_single_inode+0x44/0x2a0 [159210.284526] writeback_sb_inodes+0x223/0x4d0 [159210.286105] __writeback_inodes_wb+0x56/0xf0 [159210.287740] wb_writeback+0x1dd/0x290 [159210.289182] wb_workfn+0x309/0x500 [159210.290553] process_one_work+0x220/0x3c0 [159210.292048] worker_thread+0x53/0x420 [159210.293403] kthread+0x12f/0x150 [159210.294716] ? process_one_work+0x3c0/0x3c0 [159210.296204] ? __kthread_bind_mask+0x70/0x70 [159210.297702] ret_from_fork+0x22/0x30 > > Thanks, > > > > > > > > > > > > > > Thanks, > > > > > > > > > block_t blkaddr; > > > > > struct bio_post_read_ctx *ctx; > > > > > @@ -2201,6 +2207,14 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret, > > > > > blkaddr = data_blkaddr(dn.inode, dn.node_page, > > > > > dn.ofs_in_node + i + 1); > > > > > + f2fs_wait_on_block_writeback(inode, blkaddr); > > > > > + > > > > > + if (f2fs_load_compressed_page(sbi, page, blkaddr)) { > > > > > + if (atomic_dec_and_test(&dic->remaining_pages)) > > > > > + f2fs_decompress_cluster(dic); > > > > > + continue; > > > > > + } > > > > > + > > > > > > > > > _______________________________________________ > > > Linux-f2fs-devel mailing list > > > Linux-f2fs-devel@lists.sourceforge.net > > > https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel > > . > > ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [f2fs-dev] [PATCH v4] f2fs: compress: add compress_inode to cache compressed blockst
  2021-03-09  0:01 ` Jaegeuk Kim
@ 2021-03-09  2:49 ` Chao Yu
  -1 siblings, 0 replies; 26+ messages in thread
From: Chao Yu @ 2021-03-09 2:49 UTC (permalink / raw)
To: Jaegeuk Kim; +Cc: linux-kernel, linux-f2fs-devel

On 2021/3/9 8:01, Jaegeuk Kim wrote:
> On 03/05, Chao Yu wrote:
>> On 2021/3/5 4:20, Jaegeuk Kim wrote:
>>> On 02/27, Jaegeuk Kim wrote:
>>>> On 02/04, Chao Yu wrote:
>>>>> Jaegeuk,
>>>>>
>>>>> On 2021/2/2 16:00, Chao Yu wrote:
>>>>>> -	for (i = 0; i < dic->nr_cpages; i++) {
>>>>>> +	for (i = 0; i < cc->nr_cpages; i++) {
>>>>>> 		struct page *page = dic->cpages[i];
>>>>>
>>>>> por_fsstress still hang in this line?
>>>>
>>>> I'm stuck on testing the patches, since the latest kernel is panicking somehow.
>>>> Let me update later, once I can test a bit. :(
>>>
>>> It seems this works without error.
>>> https://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs.git/commit/?h=dev&id=4e6e1364dccba80ed44925870b97fbcf989b96c9
>>
>> Ah, good news.
>>
>> Thanks for helping to test the patch. :)
>
> Hmm, I hit this again. Let me check w/o compress_cache back. :(

Oops :(
* Re: [f2fs-dev] [PATCH v4] f2fs: compress: add compress_inode to cache compressed blockst
  2021-03-09  2:49 ` Chao Yu
@ 2021-03-10 20:52 ` Jaegeuk Kim
  -1 siblings, 0 replies; 26+ messages in thread
From: Jaegeuk Kim @ 2021-03-10 20:52 UTC (permalink / raw)
To: Chao Yu; +Cc: linux-kernel, linux-f2fs-devel

On 03/09, Chao Yu wrote:
> On 2021/3/9 8:01, Jaegeuk Kim wrote:
> > On 03/05, Chao Yu wrote:
> > > On 2021/3/5 4:20, Jaegeuk Kim wrote:
> > > > On 02/27, Jaegeuk Kim wrote:
> > > > > On 02/04, Chao Yu wrote:
> > > > > > Jaegeuk,
> > > > > >
> > > > > > On 2021/2/2 16:00, Chao Yu wrote:
> > > > > > > -	for (i = 0; i < dic->nr_cpages; i++) {
> > > > > > > +	for (i = 0; i < cc->nr_cpages; i++) {
> > > > > > > 		struct page *page = dic->cpages[i];
> > > > > >
> > > > > > por_fsstress still hang in this line?
> > > > >
> > > > > I'm stuck on testing the patches, since the latest kernel is panicking somehow.
> > > > > Let me update later, once I can test a bit. :(
> > > >
> > > > It seems this works without error.
> > > > https://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs.git/commit/?h=dev&id=4e6e1364dccba80ed44925870b97fbcf989b96c9
> > >
> > > Ah, good news.
> > >
> > > Thanks for helping to test the patch. :)
> >
> > Hmm, I hit this again. Let me check w/o compress_cache back. :(
>
> Oops :(

Ok, apparently that panic is caused by compress_cache. The test is running over
24 hours w/o it.
* Re: [f2fs-dev] [PATCH v4] f2fs: compress: add compress_inode to cache compressed blockst
  2021-03-10 20:52 ` Jaegeuk Kim
@ 2021-04-21  9:08 ` Chao Yu
  -1 siblings, 0 replies; 26+ messages in thread
From: Chao Yu @ 2021-04-21 9:08 UTC (permalink / raw)
To: Jaegeuk Kim; +Cc: linux-kernel, linux-f2fs-devel

On 2021/3/11 4:52, Jaegeuk Kim wrote:
> On 03/09, Chao Yu wrote:
>> On 2021/3/9 8:01, Jaegeuk Kim wrote:
>>> On 03/05, Chao Yu wrote:
>>>> On 2021/3/5 4:20, Jaegeuk Kim wrote:
>>>>> On 02/27, Jaegeuk Kim wrote:
>>>>>> On 02/04, Chao Yu wrote:
>>>>>>> Jaegeuk,
>>>>>>>
>>>>>>> On 2021/2/2 16:00, Chao Yu wrote:
>>>>>>>> -	for (i = 0; i < dic->nr_cpages; i++) {
>>>>>>>> +	for (i = 0; i < cc->nr_cpages; i++) {
>>>>>>>> 		struct page *page = dic->cpages[i];
>>>>>>>
>>>>>>> por_fsstress still hang in this line?
>>>>>>
>>>>>> I'm stuck on testing the patches, since the latest kernel is panicking somehow.
>>>>>> Let me update later, once I can test a bit. :(
>>>>>
>>>>> It seems this works without error.
>>>>> https://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs.git/commit/?h=dev&id=4e6e1364dccba80ed44925870b97fbcf989b96c9
>>>>
>>>> Ah, good news.
>>>>
>>>> Thanks for helping to test the patch. :)
>>>
>>> Hmm, I hit this again. Let me check w/o compress_cache back. :(
>>
>> Oops :(
>
> Ok, apparently that panic is caused by compress_cache. The test is running over
> 24 hours w/o it.

Jaegeuk,

I'm still struggling to troubleshoot this issue.

However, I failed again to reproduce this bug; I suspect the reason may be
that my test script and environment (device type/size) are different from yours.
(btw, I used pmem as the back-end device, and tested w/ all fault injection
points and w/o write_io/checkpoint fault injection points)

Could you please share your run.sh script and test command?

And I'd like to ask: what are your device type and size?

Thanks,

> .
>
* Re: [f2fs-dev] [PATCH v4] f2fs: compress: add compress_inode to cache compressed blockst
  2021-04-21  9:08 ` Chao Yu
@ 2021-04-22  3:59 ` Jaegeuk Kim
  -1 siblings, 0 replies; 26+ messages in thread
From: Jaegeuk Kim @ 2021-04-22 3:59 UTC (permalink / raw)
To: Chao Yu; +Cc: linux-kernel, linux-f2fs-devel

On 04/21, Chao Yu wrote:
> On 2021/3/11 4:52, Jaegeuk Kim wrote:
> > On 03/09, Chao Yu wrote:
> > > On 2021/3/9 8:01, Jaegeuk Kim wrote:
> > > > On 03/05, Chao Yu wrote:
> > > > > On 2021/3/5 4:20, Jaegeuk Kim wrote:
> > > > > > On 02/27, Jaegeuk Kim wrote:
> > > > > > > On 02/04, Chao Yu wrote:
> > > > > > > > Jaegeuk,
> > > > > > > >
> > > > > > > > On 2021/2/2 16:00, Chao Yu wrote:
> > > > > > > > > -	for (i = 0; i < dic->nr_cpages; i++) {
> > > > > > > > > +	for (i = 0; i < cc->nr_cpages; i++) {
> > > > > > > > > 		struct page *page = dic->cpages[i];
> > > > > > > >
> > > > > > > > por_fsstress still hang in this line?
> > > > > > >
> > > > > > > I'm stuck on testing the patches, since the latest kernel is panicking somehow.
> > > > > > > Let me update later, once I can test a bit. :(
> > > > > >
> > > > > > It seems this works without error.
> > > > > > https://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs.git/commit/?h=dev&id=4e6e1364dccba80ed44925870b97fbcf989b96c9
> > > > >
> > > > > Ah, good news.
> > > > >
> > > > > Thanks for helping to test the patch. :)
> > > >
> > > > Hmm, I hit this again. Let me check w/o compress_cache back. :(
> > >
> > > Oops :(
> >
> > Ok, apparently that panic is caused by compress_cache. The test is running over
> > 24 hours w/o it.
>
> Jaegeuk,
>
> I'm still struggling to troubleshoot this issue.
>
> However, I failed again to reproduce this bug; I suspect the reason may be
> that my test script and environment (device type/size) are different from yours.
> (btw, I used pmem as the back-end device, and tested w/ all fault injection
> points and w/o write_io/checkpoint fault injection points)
>
> Could you please share your run.sh script and test command?
>
> And I'd like to ask: what are your device type and size?

I'm using qemu with 16GB with this script.
https://github.com/jaegeuk/xfstests-f2fs/blob/f2fs/run.sh

./run.sh por_fsstress

>
> Thanks,
>
> > .
> >
* Re: [f2fs-dev] [PATCH v4] f2fs: compress: add compress_inode to cache compressed blockst
  2021-04-22  3:59 ` Jaegeuk Kim
@ 2021-04-22  6:07 ` Chao Yu
  -1 siblings, 0 replies; 26+ messages in thread
From: Chao Yu @ 2021-04-22 6:07 UTC (permalink / raw)
To: Jaegeuk Kim; +Cc: linux-kernel, linux-f2fs-devel

On 2021/4/22 11:59, Jaegeuk Kim wrote:
> On 04/21, Chao Yu wrote:
>> On 2021/3/11 4:52, Jaegeuk Kim wrote:
>>> On 03/09, Chao Yu wrote:
>>>> On 2021/3/9 8:01, Jaegeuk Kim wrote:
>>>>> On 03/05, Chao Yu wrote:
>>>>>> On 2021/3/5 4:20, Jaegeuk Kim wrote:
>>>>>>> On 02/27, Jaegeuk Kim wrote:
>>>>>>>> On 02/04, Chao Yu wrote:
>>>>>>>>> Jaegeuk,
>>>>>>>>>
>>>>>>>>> On 2021/2/2 16:00, Chao Yu wrote:
>>>>>>>>>> -	for (i = 0; i < dic->nr_cpages; i++) {
>>>>>>>>>> +	for (i = 0; i < cc->nr_cpages; i++) {
>>>>>>>>>> 		struct page *page = dic->cpages[i];
>>>>>>>>>
>>>>>>>>> por_fsstress still hang in this line?
>>>>>>>>
>>>>>>>> I'm stuck on testing the patches, since the latest kernel is panicking somehow.
>>>>>>>> Let me update later, once I can test a bit. :(
>>>>>>>
>>>>>>> It seems this works without error.
>>>>>>> https://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs.git/commit/?h=dev&id=4e6e1364dccba80ed44925870b97fbcf989b96c9
>>>>>>
>>>>>> Ah, good news.
>>>>>>
>>>>>> Thanks for helping to test the patch. :)
>>>>>
>>>>> Hmm, I hit this again. Let me check w/o compress_cache back. :(
>>>>
>>>> Oops :(
>>>
>>> Ok, apparently that panic is caused by compress_cache. The test is running over
>>> 24 hours w/o it.
>>
>> Jaegeuk,
>>
>> I'm still struggling to troubleshoot this issue.
>>
>> However, I failed again to reproduce this bug; I suspect the reason may be
>> that my test script and environment (device type/size) are different from yours.
>> (btw, I used pmem as the back-end device, and tested w/ all fault injection
>> points and w/o write_io/checkpoint fault injection points)
>>
>> Could you please share your run.sh script and test command?
>>
>> And I'd like to ask: what are your device type and size?
>
> I'm using qemu with 16GB with this script.
> https://github.com/jaegeuk/xfstests-f2fs/blob/f2fs/run.sh
>
> ./run.sh por_fsstress

Thanks, let me check the difference and try again.

Thanks,

>
>>
>> Thanks,
>>
>>> .
>>>
> .
>
* Re: [f2fs-dev] [PATCH v4] f2fs: compress: add compress_inode to cache compressed blockst
  2021-04-22  6:07 ` Chao Yu
@ 2021-05-10  9:05 ` Chao Yu
  -1 siblings, 0 replies; 26+ messages in thread
From: Chao Yu @ 2021-05-10 9:05 UTC (permalink / raw)
To: Jaegeuk Kim; +Cc: linux-kernel, linux-f2fs-devel

On 2021/4/22 14:07, Chao Yu wrote:
> On 2021/4/22 11:59, Jaegeuk Kim wrote:
>> On 04/21, Chao Yu wrote:
>>> On 2021/3/11 4:52, Jaegeuk Kim wrote:
>>>> On 03/09, Chao Yu wrote:
>>>>> On 2021/3/9 8:01, Jaegeuk Kim wrote:
>>>>>> On 03/05, Chao Yu wrote:
>>>>>>> On 2021/3/5 4:20, Jaegeuk Kim wrote:
>>>>>>>> On 02/27, Jaegeuk Kim wrote:
>>>>>>>>> On 02/04, Chao Yu wrote:
>>>>>>>>>> Jaegeuk,
>>>>>>>>>>
>>>>>>>>>> On 2021/2/2 16:00, Chao Yu wrote:
>>>>>>>>>>> -	for (i = 0; i < dic->nr_cpages; i++) {
>>>>>>>>>>> +	for (i = 0; i < cc->nr_cpages; i++) {
>>>>>>>>>>> 		struct page *page = dic->cpages[i];
>>>>>>>>>>
>>>>>>>>>> por_fsstress still hang in this line?
>>>>>>>>>
>>>>>>>>> I'm stuck on testing the patches, since the latest kernel is panicking somehow.
>>>>>>>>> Let me update later, once I can test a bit. :(
>>>>>>>>
>>>>>>>> It seems this works without error.
>>>>>>>> https://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs.git/commit/?h=dev&id=4e6e1364dccba80ed44925870b97fbcf989b96c9
>>>>>>>
>>>>>>> Ah, good news.
>>>>>>>
>>>>>>> Thanks for helping to test the patch. :)
>>>>>>
>>>>>> Hmm, I hit this again. Let me check w/o compress_cache back. :(
>>>>>
>>>>> Oops :(
>>>>
>>>> Ok, apparently that panic is caused by compress_cache. The test is running over
>>>> 24 hours w/o it.
>>>
>>> Jaegeuk,
>>>
>>> I'm still struggling to troubleshoot this issue.
>>>
>>> However, I failed again to reproduce this bug; I suspect the reason may be
>>> that my test script and environment (device type/size) are different from yours.
>>> (btw, I used pmem as the back-end device, and tested w/ all fault injection
>>> points and w/o write_io/checkpoint fault injection points)
>>>
>>> Could you please share your run.sh script and test command?
>>>
>>> And I'd like to ask: what are your device type and size?
>>
>> I'm using qemu with 16GB with this script.
>> https://github.com/jaegeuk/xfstests-f2fs/blob/f2fs/run.sh
>>
>> ./run.sh por_fsstress
>
> Thanks, let me check the difference and try again.

Finally, I can reproduce this bug, and after troubleshooting this
issue, I guess the root cause is not related to this patch. Could you
please test the patch "f2fs: compress: fix race condition of overwrite
vs truncate" with compress_cache enabled? I've run the por_fsstress case
for 6 hours w/o any problems.

Thanks,

>
> Thanks,
>
>>
>>>
>>> Thanks,
>>>
>>>> .
>>>>
>> .
>>
>
> _______________________________________________
> Linux-f2fs-devel mailing list
> Linux-f2fs-devel@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel
> .
>
* Re: [f2fs-dev] [PATCH v4] f2fs: compress: add compress_inode to cache compressed blockst
  2021-05-10  9:05 ` Chao Yu
@ 2021-05-10 14:35 ` Jaegeuk Kim
  -1 siblings, 0 replies; 26+ messages in thread
From: Jaegeuk Kim @ 2021-05-10 14:35 UTC (permalink / raw)
To: Chao Yu; +Cc: linux-kernel, linux-f2fs-devel

On 05/10, Chao Yu wrote:
> On 2021/4/22 14:07, Chao Yu wrote:
> > On 2021/4/22 11:59, Jaegeuk Kim wrote:
> > > On 04/21, Chao Yu wrote:
> > > > On 2021/3/11 4:52, Jaegeuk Kim wrote:
> > > > > On 03/09, Chao Yu wrote:
> > > > > > On 2021/3/9 8:01, Jaegeuk Kim wrote:
> > > > > > > On 03/05, Chao Yu wrote:
> > > > > > > > On 2021/3/5 4:20, Jaegeuk Kim wrote:
> > > > > > > > > On 02/27, Jaegeuk Kim wrote:
> > > > > > > > > > On 02/04, Chao Yu wrote:
> > > > > > > > > > > Jaegeuk,
> > > > > > > > > > >
> > > > > > > > > > > On 2021/2/2 16:00, Chao Yu wrote:
> > > > > > > > > > > > -	for (i = 0; i < dic->nr_cpages; i++) {
> > > > > > > > > > > > +	for (i = 0; i < cc->nr_cpages; i++) {
> > > > > > > > > > > > 		struct page *page = dic->cpages[i];
> > > > > > > > > > >
> > > > > > > > > > > por_fsstress still hang in this line?
> > > > > > > > > >
> > > > > > > > > > I'm stuck on testing the patches, since the latest kernel is panicking somehow.
> > > > > > > > > > Let me update later, once I can test a bit. :(
> > > > > > > > >
> > > > > > > > > It seems this works without error.
> > > > > > > > > https://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs.git/commit/?h=dev&id=4e6e1364dccba80ed44925870b97fbcf989b96c9
> > > > > > > >
> > > > > > > > Ah, good news.
> > > > > > > >
> > > > > > > > Thanks for helping to test the patch. :)
> > > > > > >
> > > > > > > Hmm, I hit this again. Let me check w/o compress_cache back. :(
> > > > > >
> > > > > > Oops :(
> > > > >
> > > > > Ok, apparently that panic is caused by compress_cache. The test is running over
> > > > > 24 hours w/o it.
> > > >
> > > > Jaegeuk,
> > > >
> > > > I'm still struggling to troubleshoot this issue.
> > > >
> > > > However, I failed again to reproduce this bug; I suspect the reason may be
> > > > that my test script and environment (device type/size) are different from yours.
> > > > (btw, I used pmem as the back-end device, and tested w/ all fault injection
> > > > points and w/o write_io/checkpoint fault injection points)
> > > >
> > > > Could you please share your run.sh script and test command?
> > > >
> > > > And I'd like to ask: what are your device type and size?
> > >
> > > I'm using qemu with 16GB with this script.
> > > https://github.com/jaegeuk/xfstests-f2fs/blob/f2fs/run.sh
> > >
> > > ./run.sh por_fsstress
> >
> > Thanks, let me check the difference and try again.
>
> Finally, I can reproduce this bug, and after troubleshooting this
> issue, I guess the root cause is not related to this patch. Could you
> please test the patch "f2fs: compress: fix race condition of overwrite
> vs truncate" with compress_cache enabled? I've run the por_fsstress case
> for 6 hours w/o any problems.

Good, sure. :)

>
> Thanks,
> >
> > Thanks,
> >
> > > >
> > > > Thanks,
> > > >
> > > > > .
> > > >
> > > .
> > >
> > _______________________________________________
> > Linux-f2fs-devel mailing list
> > Linux-f2fs-devel@lists.sourceforge.net
> > https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel
> > .
> >
* Re: [f2fs-dev] [PATCH v4] f2fs: compress: add compress_inode to cache compressed blockst
  2021-05-10 14:35 UTC
From: Jaegeuk Kim
To: Chao Yu
Cc: linux-kernel, linux-f2fs-devel

On 05/10, Chao Yu wrote:
> On 2021/4/22 14:07, Chao Yu wrote:
> > On 2021/4/22 11:59, Jaegeuk Kim wrote:
> > > On 04/21, Chao Yu wrote:
> > > > On 2021/3/11 4:52, Jaegeuk Kim wrote:
> > > > > On 03/09, Chao Yu wrote:
> > > > > > On 2021/3/9 8:01, Jaegeuk Kim wrote:
> > > > > > > On 03/05, Chao Yu wrote:
> > > > > > > > On 2021/3/5 4:20, Jaegeuk Kim wrote:
> > > > > > > > > On 02/27, Jaegeuk Kim wrote:
> > > > > > > > > > On 02/04, Chao Yu wrote:
> > > > > > > > > > > Jaegeuk,
> > > > > > > > > > >
> > > > > > > > > > > On 2021/2/2 16:00, Chao Yu wrote:
> > > > > > > > > > > > -	for (i = 0; i < dic->nr_cpages; i++) {
> > > > > > > > > > > > +	for (i = 0; i < cc->nr_cpages; i++) {
> > > > > > > > > > > > 		struct page *page = dic->cpages[i];
> > > > > > > > > > >
> > > > > > > > > > > Does por_fsstress still hang at this line?
> > > > > > > > > >
> > > > > > > > > > I'm stuck on testing the patches, since the latest kernel is panicking somehow.
> > > > > > > > > > Let me update later, once I can test a bit. :(
> > > > > > > > >
> > > > > > > > > It seems this works without error.
> > > > > > > > > https://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs.git/commit/?h=dev&id=4e6e1364dccba80ed44925870b97fbcf989b96c9
> > > > > > > >
> > > > > > > > Ah, good news.
> > > > > > > >
> > > > > > > > Thanks for helping to test the patch. :)
> > > > > > >
> > > > > > > Hmm, I hit this again. Let me check w/o compress_cache back. :(
> > > > > >
> > > > > > Oops :(
> > > > >
> > > > > Ok, apparently that panic is caused by compress_cache. The test has been
> > > > > running over 24 hours w/o it.
> > > >
> > > > Jaegeuk,
> > > >
> > > > I'm still struggling troubleshooting this issue.
> > > >
> > > > However, I failed again to reproduce this bug; I suspect the reason may be
> > > > that my test script and environment (device type/size) are different from
> > > > yours. (btw, I used pmem as the back-end device, and tested w/ all fault
> > > > injection points and w/o write_io/checkpoint fault injection points)
> > > >
> > > > Could you please share your run.sh script and test command?
> > > >
> > > > And I'd like to ask: what are your device type and size?
> > >
> > > I'm using qemu with 16GB with this script.
> > > https://github.com/jaegeuk/xfstests-f2fs/blob/f2fs/run.sh
> > >
> > > ./run.sh por_fsstress
> >
> > Thanks, let me check the difference, and try again.
>
> Finally, I can reproduce this bug, and after troubleshooting this
> issue, I guess the root cause is not related to this patch. Could
> you please test the patch "f2fs: compress: fix race condition of
> overwrite vs truncate" with compress_cache enabled? I've run the
> por_fsstress case for 6 hours w/o any problems.

Good, sure. :)

> Thanks,

_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel
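For reference, the reproduction setup discussed in the thread (a 16GB qemu disk, compress_cache enabled, then `./run.sh por_fsstress`) can be sketched as a small shell fragment. This is only a hedged sketch: the device path, mount point, and mkfs/mount flags below are assumptions, not taken from Jaegeuk's run.sh (linked above), which is the actual harness. It prints each step instead of executing it, so it is safe to run as-is.

```shell
#!/bin/sh
# Dry-run sketch of the por_fsstress reproduction discussed in the thread.
# DEV/MNT and the mkfs/mount flags are assumptions; the real test driver is
# run.sh from the xfstests-f2fs repository linked in the thread.
DEV=/dev/vdb      # 16GB qemu virtual disk, per the thread
MNT=/mnt/f2fs

reproduce() {
	# Drop the leading 'echo' on each line to actually run the commands
	# against a scratch device.
	echo mkfs.f2fs -f -O extra_attr,compression "$DEV"
	echo mount -t f2fs -o compress_algorithm=lz4,compress_cache "$DEV" "$MNT"
	echo ./run.sh por_fsstress
}

reproduce
```

Note that compress_cache is the mount option introduced by the patch under discussion, so this only mounts on a kernel carrying it; compress_algorithm selects the compressor for files with compression enabled.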
end of thread, other threads:[~2021-05-10 14:39 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-02-02  8:00 [PATCH v4] f2fs: compress: add compress_inode to cache compressed blockst Chao Yu
2021-02-02  8:00 ` [f2fs-dev] " Chao Yu
2021-02-04  3:25 ` Chao Yu
2021-02-04  3:25 ` [f2fs-dev] " Chao Yu
2021-02-28  5:09 ` Jaegeuk Kim
2021-02-28  5:09 ` [f2fs-dev] " Jaegeuk Kim
2021-03-04 20:20 ` Jaegeuk Kim
2021-03-04 20:20 ` Jaegeuk Kim
2021-03-05  3:07 ` Chao Yu
2021-03-05  3:07 ` Chao Yu
2021-03-09  0:01 ` Jaegeuk Kim
2021-03-09  0:01 ` Jaegeuk Kim
2021-03-09  2:49 ` Chao Yu
2021-03-09  2:49 ` Chao Yu
2021-03-10 20:52 ` Jaegeuk Kim
2021-03-10 20:52 ` Jaegeuk Kim
2021-04-21  9:08 ` Chao Yu
2021-04-21  9:08 ` Chao Yu
2021-04-22  3:59 ` Jaegeuk Kim
2021-04-22  3:59 ` Jaegeuk Kim
2021-04-22  6:07 ` Chao Yu
2021-04-22  6:07 ` Chao Yu
2021-05-10  9:05 ` Chao Yu
2021-05-10  9:05 ` Chao Yu
2021-05-10 14:35 ` Jaegeuk Kim
2021-05-10 14:35 ` Jaegeuk Kim