* [f2fs-dev] Do we need serial io for compress file?
@ 2021-11-08  3:54 Fengnan Chang
  2021-11-08  8:56 ` 常凤楠
                   ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Fengnan Chang @ 2021-11-08  3:54 UTC (permalink / raw)
  To: jaegeuk, chao; +Cc: linux-f2fs-devel

In my test, serialized IO for compressed files makes multithreaded small-write
performance drop a lot.

I'm trying to figure out why we need __should_serialize_io. IMO, we use
__should_serialize_io to avoid deadlock or to improve sequential performance,
but I don't understand why we should do this for compressed files. In my test,
if we just remove this check, writing the same file from multiple threads has
problems, but writing different files in parallel from multiple threads is
fine. So I think maybe we should use another lock to allow writing different
files from multiple threads.
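
For reference, the "serial IO" here is the global sbi->writepages mutex that
__f2fs_write_data_pages() takes whenever __should_serialize_io() returns true.
A simplified sketch of that path (matching the diff context shown later in the
thread, not a verbatim copy of fs/f2fs/data.c):

static inline bool __should_serialize_io(struct inode *inode,
					struct writeback_control *wbc)
{
	if (IS_NOQUOTA(inode))
		return false;
	/* any compression-enabled inode forces the global serialization */
	if (f2fs_need_compress_data(inode))
		return true;
	if (wbc->sync_mode != WB_SYNC_ALL)
		return true;
	if (get_dirty_pages(inode) >= SM_I(F2FS_I_SB(inode))->min_seq_blocks)
		return true;
	return false;
}

/* in __f2fs_write_data_pages(): */
	if (__should_serialize_io(inode, wbc)) {
		mutex_lock(&sbi->writepages);	/* one mutex per superblock */
		locked = true;
	}

	blk_start_plug(&plug);
	ret = f2fs_write_cache_pages(mapping, wbc, io_type);
	blk_finish_plug(&plug);

	if (locked)
		mutex_unlock(&sbi->writepages);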


_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel


* Re: [f2fs-dev] Do we need serial io for compress file?
  2021-11-08  3:54 [f2fs-dev] Do we need serial io for compress file? Fengnan Chang
@ 2021-11-08  8:56 ` 常凤楠
  2021-11-08 14:30   ` Chao Yu
       [not found]   ` <AI6AmQANEzwDyLqc-ild4qqN.9.1636381829406.Hmail.changfengnan@vivo.com>
  2021-11-08 14:21 ` Chao Yu
       [not found] ` <AOQAuQAvE4gDy5nrp7t7Q4pj.9.1636381271838.Hmail.changfengnan@vivo.com>
  2 siblings, 2 replies; 16+ messages in thread
From: 常凤楠 @ 2021-11-08  8:56 UTC (permalink / raw)
  To: jaegeuk, chao; +Cc: linux-f2fs-devel

Anyway, I made a modification to verify my idea and ran some tests; no problems
found so far.

The modification is as follows:

diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index f4fd6c246c9a..0ed677efe820 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -3165,8 +3165,6 @@ static inline bool __should_serialize_io(struct inode *inode,
 	if (IS_NOQUOTA(inode))
 		return false;
 
-	if (f2fs_need_compress_data(inode))
-		return true;
 	if (wbc->sync_mode != WB_SYNC_ALL)
 		return true;
 	if (get_dirty_pages(inode) >= SM_I(F2FS_I_SB(inode))->min_seq_blocks)
@@ -3218,11 +3216,16 @@ static int __f2fs_write_data_pages(struct address_space *mapping,
 		mutex_lock(&sbi->writepages);
 		locked = true;
 	}
+	if (f2fs_need_compress_data(inode))
+		mutex_lock(&(F2FS_I(inode)->compress_lock));
 
 	blk_start_plug(&plug);
 	ret = f2fs_write_cache_pages(mapping, wbc, io_type);
 	blk_finish_plug(&plug);
 
+	if (f2fs_need_compress_data(inode))
+		mutex_unlock(&(F2FS_I(inode)->compress_lock));
+
 	if (locked)
 		mutex_unlock(&sbi->writepages);
 
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 039a229e11c9..3a6587f13d2f 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -763,6 +763,7 @@ struct f2fs_inode_info {
 	struct list_head inmem_pages;	/* inmemory pages managed by f2fs */
 	struct task_struct *inmem_task;	/* store inmemory task */
 	struct mutex inmem_lock;	/* lock for inmemory pages */
+	struct mutex compress_lock;	/* lock for compress file */
 	struct extent_tree *extent_tree;	/* cached extent_tree entry */
 
 	/* avoid racing between foreground op and gc */
diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index a133932333c5..8566e9c34540 100644
--- a/fs/f2fs/super.c
+++ b/fs/f2fs/super.c
@@ -1323,6 +1323,7 @@ static struct inode *f2fs_alloc_inode(struct super_block *sb)
 	INIT_LIST_HEAD(&fi->inmem_ilist);
 	INIT_LIST_HEAD(&fi->inmem_pages);
 	mutex_init(&fi->inmem_lock);
+	mutex_init(&fi->compress_lock);
 	init_rwsem(&fi->i_gc_rwsem[READ]);
 	init_rwsem(&fi->i_gc_rwsem[WRITE]);
 	init_rwsem(&fi->i_xattr_sem);
--
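
For clarity on what this changes: a compressed inode no longer takes the global
sbi->writepages mutex just because it is compressed (the other
__should_serialize_io() conditions still apply); it serializes only on its own
fi->compress_lock, so writeback of different compressed files can proceed in
parallel while writeback of a single file stays serialized. A small userspace
analogue of that per-file locking, purely illustrative and not kernel code:

#include <pthread.h>
#include <stdio.h>

struct file_ctx {
	const char *name;
	pthread_mutex_t compress_lock;	/* per-file, mirrors fi->compress_lock */
};

static void *writeback(void *arg)
{
	struct file_ctx *f = arg;

	pthread_mutex_lock(&f->compress_lock);	/* only this file is serialized */
	printf("flushing %s\n", f->name);
	pthread_mutex_unlock(&f->compress_lock);
	return NULL;
}

int main(void)
{
	struct file_ctx a = { "fileA", PTHREAD_MUTEX_INITIALIZER };
	struct file_ctx b = { "fileB", PTHREAD_MUTEX_INITIALIZER };
	pthread_t t1, t2;

	pthread_create(&t1, NULL, writeback, &a);	/* the two flushes can overlap */
	pthread_create(&t2, NULL, writeback, &b);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}

Note the resulting lock order in the patch: when sbi->writepages is still taken
(e.g. for background writeback), it is acquired before fi->compress_lock and
released after it.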

> -----Original Message-----
> From: 常凤楠
> Sent: Monday, November 8, 2021 11:55 AM
> To: jaegeuk@kernel.org; chao@kernel.org
> Cc: linux-f2fs-devel@lists.sourceforge.net
> Subject: Do we need serial io for compress file?
> 
> In my test, serial io for compress file will make multithread small write
> performance drop a lot.
> 
> I'm try to fingure out why we need __should_serialize_io, IMO, we use
> __should_serialize_io to avoid deadlock or try to improve sequential
> performance, but I don't understand why we should do this for compressed
> file. In my test, if we just remove this, write same file in multithread will have
> problem, but parallel write different files in multithread is ok. So I think
> maybe we should use another lock to allow write different files in
> multithread.


* Re: [f2fs-dev] Do we need serial io for compress file?
  2021-11-08  3:54 [f2fs-dev] Do we need serial io for compress file? Fengnan Chang
  2021-11-08  8:56 ` 常凤楠
@ 2021-11-08 14:21 ` Chao Yu
       [not found] ` <AOQAuQAvE4gDy5nrp7t7Q4pj.9.1636381271838.Hmail.changfengnan@vivo.com>
  2 siblings, 0 replies; 16+ messages in thread
From: Chao Yu @ 2021-11-08 14:21 UTC (permalink / raw)
  To: Fengnan Chang, jaegeuk; +Cc: linux-f2fs-devel

On 2021/11/8 11:54, Fengnan Chang wrote:
> In my test, serial io for compress file will make multithread small write
> performance drop a lot.
> 
> I'm try to fingure out why we need __should_serialize_io, IMO, we use __should_serialize_io to avoid deadlock or try to
> improve sequential performance, but I don't understand why we should do this for

It was introduced to avoid fragmentation of file blocks.

> compressed file. In my test, if we just remove this, write same file in multithread will have problem, but parallel write different files in multithread

What do you mean by "write same file in multithread will have problem"?

Thanks,

> is ok. So I think maybe we should use another lock to allow write different files in multithread.
> 



* Re: [f2fs-dev] Do we need serial io for compress file?
  2021-11-08  8:56 ` 常凤楠
@ 2021-11-08 14:30   ` Chao Yu
       [not found]   ` <AI6AmQANEzwDyLqc-ild4qqN.9.1636381829406.Hmail.changfengnan@vivo.com>
  1 sibling, 0 replies; 16+ messages in thread
From: Chao Yu @ 2021-11-08 14:30 UTC (permalink / raw)
  To: 常凤楠, jaegeuk; +Cc: linux-f2fs-devel

On 2021/11/8 16:56, 常凤楠 wrote:
> Anyway, I did some modify to verify my idea, and did some test, not found problem for now.

Could you please consider:
1. pin file
2. fallocate file w/ filesize kept
  - it will preallocate physical blocks aligned to segments
3. unpin file
4. overwrite compressed file

Thanks,

> 
> The modify as follows:
> 
> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> index f4fd6c246c9a..0ed677efe820 100644
> --- a/fs/f2fs/data.c
> +++ b/fs/f2fs/data.c
> @@ -3165,8 +3165,6 @@ static inline bool __should_serialize_io(struct inode *inode,
>   	if (IS_NOQUOTA(inode))
>   		return false;
>   
> -	if (f2fs_need_compress_data(inode))
> -		return true;
>   	if (wbc->sync_mode != WB_SYNC_ALL)
>   		return true;
>   	if (get_dirty_pages(inode) >= SM_I(F2FS_I_SB(inode))->min_seq_blocks)
> @@ -3218,11 +3216,16 @@ static int __f2fs_write_data_pages(struct address_space *mapping,
>   		mutex_lock(&sbi->writepages);
>   		locked = true;
>   	}
> +	if (f2fs_need_compress_data(inode))
> +		mutex_lock(&(F2FS_I(inode)->compress_lock));
>   
>   	blk_start_plug(&plug);
>   	ret = f2fs_write_cache_pages(mapping, wbc, io_type);
>   	blk_finish_plug(&plug);
>   
> +	if (f2fs_need_compress_data(inode))
> +		mutex_unlock(&(F2FS_I(inode)->compress_lock));
> +
>   	if (locked)
>   		mutex_unlock(&sbi->writepages);
>   
> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> index 039a229e11c9..3a6587f13d2f 100644
> --- a/fs/f2fs/f2fs.h
> +++ b/fs/f2fs/f2fs.h
> @@ -763,6 +763,7 @@ struct f2fs_inode_info {
>   	struct list_head inmem_pages;	/* inmemory pages managed by f2fs */
>   	struct task_struct *inmem_task;	/* store inmemory task */
>   	struct mutex inmem_lock;	/* lock for inmemory pages */
> +	struct mutex compress_lock;	/* lock for compress file */
>   	struct extent_tree *extent_tree;	/* cached extent_tree entry */
>   
>   	/* avoid racing between foreground op and gc */
> diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
> index a133932333c5..8566e9c34540 100644
> --- a/fs/f2fs/super.c
> +++ b/fs/f2fs/super.c
> @@ -1323,6 +1323,7 @@ static struct inode *f2fs_alloc_inode(struct super_block *sb)
>   	INIT_LIST_HEAD(&fi->inmem_ilist);
>   	INIT_LIST_HEAD(&fi->inmem_pages);
>   	mutex_init(&fi->inmem_lock);
> +	mutex_init(&fi->compress_lock);
>   	init_rwsem(&fi->i_gc_rwsem[READ]);
>   	init_rwsem(&fi->i_gc_rwsem[WRITE]);
>   	init_rwsem(&fi->i_xattr_sem);
> --
> 
>> -----Original Message-----
>> From: 常凤楠
>> Sent: Monday, November 8, 2021 11:55 AM
>> To: jaegeuk@kernel.org; chao@kernel.org
>> Cc: linux-f2fs-devel@lists.sourceforge.net
>> Subject: Do we need serial io for compress file?
>>
>> In my test, serial io for compress file will make multithread small write
>> performance drop a lot.
>>
>> I'm try to fingure out why we need __should_serialize_io, IMO, we use
>> __should_serialize_io to avoid deadlock or try to improve sequential
>> performance, but I don't understand why we should do this for compressed
>> file. In my test, if we just remove this, write same file in multithread will have
>> problem, but parallel write different files in multithread is ok. So I think
>> maybe we should use another lock to allow write different files in
>> multithread.



* Re: [f2fs-dev] Do we need serial io for compress file?
       [not found] ` <AOQAuQAvE4gDy5nrp7t7Q4pj.9.1636381271838.Hmail.changfengnan@vivo.com>
@ 2021-11-09  1:59   ` 常凤楠
  2021-11-09 13:41     ` Chao Yu
       [not found]     ` <ACQA0AB6E*gFLLLxbhwrcKo0.9.1636465280667.Hmail.changfengnan@vivo.com>
  0 siblings, 2 replies; 16+ messages in thread
From: 常凤楠 @ 2021-11-09  1:59 UTC (permalink / raw)
  To: Chao Yu, jaegeuk; +Cc: linux-f2fs-devel



> -----Original Message-----
> From: changfengnan@vivo.com <changfengnan@vivo.com> On Behalf Of
> Chao Yu
> Sent: Monday, November 8, 2021 10:21 PM
> To: 常凤楠 <changfengnan@vivo.com>; jaegeuk@kernel.org
> Cc: linux-f2fs-devel@lists.sourceforge.net
> Subject: Re: Do we need serial io for compress file?
> 
> On 2021/11/8 11:54, Fengnan Chang wrote:
> > In my test, serial io for compress file will make multithread small
> > write performance drop a lot.
> >
> > I'm try to fingure out why we need __should_serialize_io, IMO, we use
> > __should_serialize_io to avoid deadlock or try to improve sequential
> > performance, but I don't understand why we should do this for
> 
> It was introduced to avoid fragmentation of file blocks.

So, for small writes on a compressed file, is this still necessary? I think we should treat compressed files like regular files.
> 
> > compressed file. In my test, if we just remove this, write same file
> > in multithread will have problem, but parallel write different files
> > in multithread
> 
> What do you mean by "write same file in multithread will have problem"?

If we just remove the compressed-file check in __should_serialize_io():

diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index f4fd6c246c9a..7bd429b46429 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -3165,8 +3165,8 @@ static inline bool __should_serialize_io(struct inode *inode,
        if (IS_NOQUOTA(inode))
                return false;
 
-       if (f2fs_need_compress_data(inode))
-               return true;
+       //if (f2fs_need_compress_data(inode))
+       //      return true;
        if (wbc->sync_mode != WB_SYNC_ALL)
                return true;
        if (get_dirty_pages(inode) >= SM_I(F2FS_I_SB(inode))->min_seq_blocks)

and use fio to start multiple threads writing the same file, fio will hang.
fio.conf:
[global]
direct=1
numjobs=8
time_based
runtime=30
ioengine=sync
iodepth=16
buffer_pattern="ZZZZ"
fsync=1

[file0]
name=fio-rand-RW
filename=fio-rand-RW
rw=rw
rwmixread=60
rwmixwrite=40
bs=1M
size=64M

[file1]
name=fio-rand-RW
filename=fio-rand-RW
rw=randrw
rwmixread=60
rwmixwrite=40
bs=4K
size=64M

> 
> Thanks,
> 
> > is ok. So I think maybe we should use another lock to allow write
> different files in multithread.
> >


* Re: [f2fs-dev] Do we need serial io for compress file?
       [not found]   ` <AI6AmQANEzwDyLqc-ild4qqN.9.1636381829406.Hmail.changfengnan@vivo.com>
@ 2021-11-09  3:18     ` 常凤楠
  2021-11-09 13:46       ` Chao Yu
       [not found]       ` <AFUAIwC1E3YFUbOLaOBVbqp6.9.1636465594624.Hmail.changfengnan@vivo.com>
  0 siblings, 2 replies; 16+ messages in thread
From: 常凤楠 @ 2021-11-09  3:18 UTC (permalink / raw)
  To: Chao Yu, jaegeuk; +Cc: linux-f2fs-devel



> -----Original Message-----
> From: changfengnan@vivo.com <changfengnan@vivo.com> On Behalf Of
> Chao Yu
> Sent: Monday, November 8, 2021 10:30 PM
> To: 常凤楠 <changfengnan@vivo.com>; jaegeuk@kernel.org
> Cc: linux-f2fs-devel@lists.sourceforge.net
> Subject: Re: Do we need serial io for compress file?
> 
> On 2021/11/8 16:56, 常凤楠 wrote:
> > Anyway, I did some modify to verify my idea, and did some test, not
> found problem for now.
> 
> Could you please consider:
> 1. pin file
> 2. fallocate file w/ filesize kept
>    - it will preallocate physical blocks aligned to segments
> 3. unpin file
> 4. overwrite compressed file

So you mean that after steps 1-3, it will make compressed file fragmentation worse?

Thanks.
> 
> Thanks,
> 
> >
> > The modify as follows:
> >
> > diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c index
> > f4fd6c246c9a..0ed677efe820 100644
> > --- a/fs/f2fs/data.c
> > +++ b/fs/f2fs/data.c
> > @@ -3165,8 +3165,6 @@ static inline bool __should_serialize_io(struct
> inode *inode,
> >   	if (IS_NOQUOTA(inode))
> >   		return false;
> >
> > -	if (f2fs_need_compress_data(inode))
> > -		return true;
> >   	if (wbc->sync_mode != WB_SYNC_ALL)
> >   		return true;
> >   	if (get_dirty_pages(inode) >=
> > SM_I(F2FS_I_SB(inode))->min_seq_blocks)
> > @@ -3218,11 +3216,16 @@ static int __f2fs_write_data_pages(struct
> address_space *mapping,
> >   		mutex_lock(&sbi->writepages);
> >   		locked = true;
> >   	}
> > +	if (f2fs_need_compress_data(inode))
> > +		mutex_lock(&(F2FS_I(inode)->compress_lock));
> >
> >   	blk_start_plug(&plug);
> >   	ret = f2fs_write_cache_pages(mapping, wbc, io_type);
> >   	blk_finish_plug(&plug);
> >
> > +	if (f2fs_need_compress_data(inode))
> > +		mutex_unlock(&(F2FS_I(inode)->compress_lock));
> > +
> >   	if (locked)
> >   		mutex_unlock(&sbi->writepages);
> >
> > diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index
> > 039a229e11c9..3a6587f13d2f 100644
> > --- a/fs/f2fs/f2fs.h
> > +++ b/fs/f2fs/f2fs.h
> > @@ -763,6 +763,7 @@ struct f2fs_inode_info {
> >   	struct list_head inmem_pages;	/* inmemory pages managed by
> f2fs */
> >   	struct task_struct *inmem_task;	/* store inmemory task */
> >   	struct mutex inmem_lock;	/* lock for inmemory pages */
> > +	struct mutex compress_lock;	/* lock for compress file */
> >   	struct extent_tree *extent_tree;	/* cached extent_tree entry */
> >
> >   	/* avoid racing between foreground op and gc */ diff --git
> > a/fs/f2fs/super.c b/fs/f2fs/super.c index a133932333c5..8566e9c34540
> > 100644
> > --- a/fs/f2fs/super.c
> > +++ b/fs/f2fs/super.c
> > @@ -1323,6 +1323,7 @@ static struct inode *f2fs_alloc_inode(struct
> super_block *sb)
> >   	INIT_LIST_HEAD(&fi->inmem_ilist);
> >   	INIT_LIST_HEAD(&fi->inmem_pages);
> >   	mutex_init(&fi->inmem_lock);
> > +	mutex_init(&fi->compress_lock);
> >   	init_rwsem(&fi->i_gc_rwsem[READ]);
> >   	init_rwsem(&fi->i_gc_rwsem[WRITE]);
> >   	init_rwsem(&fi->i_xattr_sem);
> > --
> >
> >> -----Original Message-----
> >> From: 常凤楠
> >> Sent: Monday, November 8, 2021 11:55 AM
> >> To: jaegeuk@kernel.org; chao@kernel.org
> >> Cc: linux-f2fs-devel@lists.sourceforge.net
> >> Subject: Do we need serial io for compress file?
> >>
> >> In my test, serial io for compress file will make multithread small
> >> write performance drop a lot.
> >>
> >> I'm try to fingure out why we need __should_serialize_io, IMO, we use
> >> __should_serialize_io to avoid deadlock or try to improve sequential
> >> performance, but I don't understand why we should do this for
> >> compressed file. In my test, if we just remove this, write same file
> >> in multithread will have problem, but parallel write different files
> >> in multithread is ok. So I think maybe we should use another lock to
> >> allow write different files in multithread.


* Re: [f2fs-dev] Do we need serial io for compress file?
  2021-11-09  1:59   ` 常凤楠
@ 2021-11-09 13:41     ` Chao Yu
       [not found]     ` <ACQA0AB6E*gFLLLxbhwrcKo0.9.1636465280667.Hmail.changfengnan@vivo.com>
  1 sibling, 0 replies; 16+ messages in thread
From: Chao Yu @ 2021-11-09 13:41 UTC (permalink / raw)
  To: 常凤楠, jaegeuk; +Cc: linux-f2fs-devel

On 2021/11/9 9:59, 常凤楠 wrote:
> 
> 
>> -----Original Message-----
>> From: changfengnan@vivo.com <changfengnan@vivo.com> On Behalf Of
>> Chao Yu
>> Sent: Monday, November 8, 2021 10:21 PM
>> To: 常凤楠 <changfengnan@vivo.com>; jaegeuk@kernel.org
>> Cc: linux-f2fs-devel@lists.sourceforge.net
>> Subject: Re: Do we need serial io for compress file?
>>
>> On 2021/11/8 11:54, Fengnan Chang wrote:
>>> In my test, serial io for compress file will make multithread small
>>> write performance drop a lot.
>>>
>>> I'm try to fingure out why we need __should_serialize_io, IMO, we use
>>> __should_serialize_io to avoid deadlock or try to improve sequential
>>> performance, but I don't understand why we should do this for
>>
>> It was introduced to avoid fragmentation of file blocks.
> 
> So, for small write on compress file, is this still necessary? I think we should treat compress file as regular file.

Is there any real scenario for this? Let me know if I missed any cases; as far as I
can see, most compressible files are not small...

>>
>>> compressed file. In my test, if we just remove this, write same file
>>> in multithread will have problem, but parallel write different files
>>> in multithread
>>
>> What do you mean by "write same file in multithread will have problem"?
> 
> If just remove compress file in __should_serialize_io()
> 
> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> index f4fd6c246c9a..7bd429b46429 100644
> --- a/fs/f2fs/data.c
> +++ b/fs/f2fs/data.c
> @@ -3165,8 +3165,8 @@ static inline bool __should_serialize_io(struct inode *inode,
>          if (IS_NOQUOTA(inode))
>                  return false;
>   
> -       if (f2fs_need_compress_data(inode))
> -               return true;
> +       //if (f2fs_need_compress_data(inode))
> +       //      return true;
>          if (wbc->sync_mode != WB_SYNC_ALL)
>                  return true;
>          if (get_dirty_pages(inode) >= SM_I(F2FS_I_SB(inode))->min_seq_blocks)
> 
> and use fio to start multi thread to write same file, fio will hung.

Is there a potential hangtask issue there? Did you get any stack backtrace log?

If there is, we need to figure out the root cause.

Thanks,

> fio.conf:
> [global]
> direct=1
> numjobs=8
> time_based
> runtime=30
> ioengine=sync
> iodepth=16
> buffer_pattern="ZZZZ"
> fsync=1
> 
> [file0]
> name=fio-rand-RW
> filename=fio-rand-RW
> rw=rw
> rwmixread=60
> rwmixwrite=40
> bs=1M
> size=64M
> 
> [file1]
> name=fio-rand-RW
> filename=fio-rand-RW
> rw=randrw
> rwmixread=60
> rwmixwrite=40
> bs=4K
> size=64M
> 
>>
>> Thanks,
>>
>>> is ok. So I think maybe we should use another lock to allow write
>> different files in multithread.
>>>



* Re: [f2fs-dev] Do we need serial io for compress file?
  2021-11-09  3:18     ` 常凤楠
@ 2021-11-09 13:46       ` Chao Yu
       [not found]       ` <AFUAIwC1E3YFUbOLaOBVbqp6.9.1636465594624.Hmail.changfengnan@vivo.com>
  1 sibling, 0 replies; 16+ messages in thread
From: Chao Yu @ 2021-11-09 13:46 UTC (permalink / raw)
  To: 常凤楠, jaegeuk; +Cc: linux-f2fs-devel

On 2021/11/9 11:18, 常凤楠 wrote:
> 
> 
>> -----Original Message-----
>> From: changfengnan@vivo.com <changfengnan@vivo.com> On Behalf Of
>> Chao Yu
>> Sent: Monday, November 8, 2021 10:30 PM
>> To: 常凤楠 <changfengnan@vivo.com>; jaegeuk@kernel.org
>> Cc: linux-f2fs-devel@lists.sourceforge.net
>> Subject: Re: Do we need serial io for compress file?
>>
>> On 2021/11/8 16:56, 常凤楠 wrote:
>>> Anyway, I did some modify to verify my idea, and did some test, not
>> found problem for now.
>>
>> Could you please consider:
>> 1. pin file
>> 2. fallocate file w/ filesize keeped
>>    - it will preallocate physical blocks aligned to segments 3. unpin file 4.
>> overwrite compressed file
> 
> So you means after step 1-3, it will make compressed file fragmentation worse ?

Oh, I was trying to find a way to avoid the write performance regression caused by
contention on the writepages lock while still avoiding fragmentation of compressed
files. But I'm out of ideas; the case above wouldn't help, so please ignore it.

Thanks,

> 
> Thanks.
>>
>> Thanks,
>>
>>>
>>> The modify as follows:
>>>
>>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c index
>>> f4fd6c246c9a..0ed677efe820 100644
>>> --- a/fs/f2fs/data.c
>>> +++ b/fs/f2fs/data.c
>>> @@ -3165,8 +3165,6 @@ static inline bool __should_serialize_io(struct
>> inode *inode,
>>>    	if (IS_NOQUOTA(inode))
>>>    		return false;
>>>
>>> -	if (f2fs_need_compress_data(inode))
>>> -		return true;
>>>    	if (wbc->sync_mode != WB_SYNC_ALL)
>>>    		return true;
>>>    	if (get_dirty_pages(inode) >=
>>> SM_I(F2FS_I_SB(inode))->min_seq_blocks)
>>> @@ -3218,11 +3216,16 @@ static int __f2fs_write_data_pages(struct
>> address_space *mapping,
>>>    		mutex_lock(&sbi->writepages);
>>>    		locked = true;
>>>    	}
>>> +	if (f2fs_need_compress_data(inode))
>>> +		mutex_lock(&(F2FS_I(inode)->compress_lock));
>>>
>>>    	blk_start_plug(&plug);
>>>    	ret = f2fs_write_cache_pages(mapping, wbc, io_type);
>>>    	blk_finish_plug(&plug);
>>>
>>> +	if (f2fs_need_compress_data(inode))
>>> +		mutex_unlock(&(F2FS_I(inode)->compress_lock));
>>> +
>>>    	if (locked)
>>>    		mutex_unlock(&sbi->writepages);
>>>
>>> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index
>>> 039a229e11c9..3a6587f13d2f 100644
>>> --- a/fs/f2fs/f2fs.h
>>> +++ b/fs/f2fs/f2fs.h
>>> @@ -763,6 +763,7 @@ struct f2fs_inode_info {
>>>    	struct list_head inmem_pages;	/* inmemory pages managed by
>> f2fs */
>>>    	struct task_struct *inmem_task;	/* store inmemory task */
>>>    	struct mutex inmem_lock;	/* lock for inmemory pages */
>>> +	struct mutex compress_lock;	/* lock for compress file */
>>>    	struct extent_tree *extent_tree;	/* cached extent_tree entry */
>>>
>>>    	/* avoid racing between foreground op and gc */ diff --git
>>> a/fs/f2fs/super.c b/fs/f2fs/super.c index a133932333c5..8566e9c34540
>>> 100644
>>> --- a/fs/f2fs/super.c
>>> +++ b/fs/f2fs/super.c
>>> @@ -1323,6 +1323,7 @@ static struct inode *f2fs_alloc_inode(struct
>> super_block *sb)
>>>    	INIT_LIST_HEAD(&fi->inmem_ilist);
>>>    	INIT_LIST_HEAD(&fi->inmem_pages);
>>>    	mutex_init(&fi->inmem_lock);
>>> +	mutex_init(&fi->compress_lock);
>>>    	init_rwsem(&fi->i_gc_rwsem[READ]);
>>>    	init_rwsem(&fi->i_gc_rwsem[WRITE]);
>>>    	init_rwsem(&fi->i_xattr_sem);
>>> --
>>>
>>>> -----Original Message-----
>>>> From: 常凤楠
>>>> Sent: Monday, November 8, 2021 11:55 AM
>>>> To: jaegeuk@kernel.org; chao@kernel.org
>>>> Cc: linux-f2fs-devel@lists.sourceforge.net
>>>> Subject: Do we need serial io for compress file?
>>>>
>>>> In my test, serial io for compress file will make multithread small
>>>> write performance drop a lot.
>>>>
>>>> I'm try to fingure out why we need __should_serialize_io, IMO, we use
>>>> __should_serialize_io to avoid deadlock or try to improve sequential
>>>> performance, but I don't understand why we should do this for
>>>> compressed file. In my test, if we just remove this, write same file
>>>> in multithread will have problem, but parallel write different files
>>>> in multithread is ok. So I think maybe we should use another lock to
>>>> allow write different files in multithread.



* Re: [f2fs-dev] Do we need serial io for compress file?
       [not found]       ` <AFUAIwC1E3YFUbOLaOBVbqp6.9.1636465594624.Hmail.changfengnan@vivo.com>
@ 2021-11-10  1:41         ` 常凤楠
  2021-11-13  6:15           ` Chao Yu
  0 siblings, 1 reply; 16+ messages in thread
From: 常凤楠 @ 2021-11-10  1:41 UTC (permalink / raw)
  To: Chao Yu, jaegeuk; +Cc: linux-f2fs-devel

> -----Original Message-----
> From: changfengnan@vivo.com <changfengnan@vivo.com> On Behalf Of
> Chao Yu
> Sent: Tuesday, November 9, 2021 9:46 PM
> To: 常凤楠 <changfengnan@vivo.com>; jaegeuk@kernel.org
> Cc: linux-f2fs-devel@lists.sourceforge.net
> Subject: Re: Do we need serial io for compress file?
> 
> On 2021/11/9 11:18, 常凤楠 wrote:
> >
> >
> >> -----Original Message-----
> >> From: changfengnan@vivo.com <changfengnan@vivo.com> On Behalf
> Of Chao
> >> Yu
> >> Sent: Monday, November 8, 2021 10:30 PM
> >> To: 常凤楠 <changfengnan@vivo.com>; jaegeuk@kernel.org
> >> Cc: linux-f2fs-devel@lists.sourceforge.net
> >> Subject: Re: Do we need serial io for compress file?
> >>
> >> On 2021/11/8 16:56, 常凤楠 wrote:
> >>> Anyway, I did some modify to verify my idea, and did some test, not
> >> found problem for now.
> >>
> >> Could you please consider:
> >> 1. pin file
> >> 2. fallocate file w/ filesize keeped
> >>    - it will preallocate physical blocks aligned to segments 3. unpin file
> 4.
> >> overwrite compressed file
> >
> > So you means after step 1-3, it will make compressed file fragmentation
> worse ?
> 
> Oh, I'm trying to find a way to avoid write performance regression due to
> race condition on writepages lock meanwhile avoiding fragmentation of
> compressed file.

Yep, that's what I want to do; I'm looking forward to your idea.
And what about the modification below?

Thanks.
> But I'm out of my mind, above case wouldn't help, please ignore this.
> 
> Thanks,
> 
> >
> > Thanks.
> >>
> >> Thanks,
> >>
> >>>
> >>> The modify as follows:
> >>>
> >>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c index
> >>> f4fd6c246c9a..0ed677efe820 100644
> >>> --- a/fs/f2fs/data.c
> >>> +++ b/fs/f2fs/data.c
> >>> @@ -3165,8 +3165,6 @@ static inline bool
> >>> __should_serialize_io(struct
> >> inode *inode,
> >>>    	if (IS_NOQUOTA(inode))
> >>>    		return false;
> >>>
> >>> -	if (f2fs_need_compress_data(inode))
> >>> -		return true;
> >>>    	if (wbc->sync_mode != WB_SYNC_ALL)
> >>>    		return true;
> >>>    	if (get_dirty_pages(inode) >=
> >>> SM_I(F2FS_I_SB(inode))->min_seq_blocks)
> >>> @@ -3218,11 +3216,16 @@ static int __f2fs_write_data_pages(struct
> >> address_space *mapping,
> >>>    		mutex_lock(&sbi->writepages);
> >>>    		locked = true;
> >>>    	}
> >>> +	if (f2fs_need_compress_data(inode))
> >>> +		mutex_lock(&(F2FS_I(inode)->compress_lock));
> >>>
> >>>    	blk_start_plug(&plug);
> >>>    	ret = f2fs_write_cache_pages(mapping, wbc, io_type);
> >>>    	blk_finish_plug(&plug);
> >>>
> >>> +	if (f2fs_need_compress_data(inode))
> >>> +		mutex_unlock(&(F2FS_I(inode)->compress_lock));
> >>> +
> >>>    	if (locked)
> >>>    		mutex_unlock(&sbi->writepages);
> >>>
> >>> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index
> >>> 039a229e11c9..3a6587f13d2f 100644
> >>> --- a/fs/f2fs/f2fs.h
> >>> +++ b/fs/f2fs/f2fs.h
> >>> @@ -763,6 +763,7 @@ struct f2fs_inode_info {
> >>>    	struct list_head inmem_pages;	/* inmemory pages managed by
> >> f2fs */
> >>>    	struct task_struct *inmem_task;	/* store inmemory task */
> >>>    	struct mutex inmem_lock;	/* lock for inmemory pages */
> >>> +	struct mutex compress_lock;	/* lock for compress file */
> >>>    	struct extent_tree *extent_tree;	/* cached extent_tree entry */
> >>>
> >>>    	/* avoid racing between foreground op and gc */ diff --git
> >>> a/fs/f2fs/super.c b/fs/f2fs/super.c index a133932333c5..8566e9c34540
> >>> 100644
> >>> --- a/fs/f2fs/super.c
> >>> +++ b/fs/f2fs/super.c
> >>> @@ -1323,6 +1323,7 @@ static struct inode *f2fs_alloc_inode(struct
> >> super_block *sb)
> >>>    	INIT_LIST_HEAD(&fi->inmem_ilist);
> >>>    	INIT_LIST_HEAD(&fi->inmem_pages);
> >>>    	mutex_init(&fi->inmem_lock);
> >>> +	mutex_init(&fi->compress_lock);
> >>>    	init_rwsem(&fi->i_gc_rwsem[READ]);
> >>>    	init_rwsem(&fi->i_gc_rwsem[WRITE]);
> >>>    	init_rwsem(&fi->i_xattr_sem);
> >>> --
> >>>
> >>>> -----Original Message-----
> >>>> From: 常凤楠
> >>>> Sent: Monday, November 8, 2021 11:55 AM
> >>>> To: jaegeuk@kernel.org; chao@kernel.org
> >>>> Cc: linux-f2fs-devel@lists.sourceforge.net
> >>>> Subject: Do we need serial io for compress file?
> >>>>
> >>>> In my test, serial io for compress file will make multithread small
> >>>> write performance drop a lot.
> >>>>
> >>>> I'm try to fingure out why we need __should_serialize_io, IMO, we
> >>>> use __should_serialize_io to avoid deadlock or try to improve
> >>>> sequential performance, but I don't understand why we should do
> >>>> this for compressed file. In my test, if we just remove this, write
> >>>> same file in multithread will have problem, but parallel write
> >>>> different files in multithread is ok. So I think maybe we should
> >>>> use another lock to allow write different files in multithread.


* Re: [f2fs-dev] Do we need serial io for compress file?
       [not found]     ` <ACQA0AB6E*gFLLLxbhwrcKo0.9.1636465280667.Hmail.changfengnan@vivo.com>
@ 2021-11-10  1:49       ` 常凤楠
  2021-11-13  6:21         ` Chao Yu
       [not found]         ` <AIIAVAAPE7EN6BQe5FH43Kp3.9.1636784464708.Hmail.changfengnan@vivo.com>
  0 siblings, 2 replies; 16+ messages in thread
From: 常凤楠 @ 2021-11-10  1:49 UTC (permalink / raw)
  To: Chao Yu, jaegeuk; +Cc: linux-f2fs-devel



> -----Original Message-----
> From: changfengnan@vivo.com <changfengnan@vivo.com> On Behalf Of
> Chao Yu
> Sent: Tuesday, November 9, 2021 9:41 PM
> To: 常凤楠 <changfengnan@vivo.com>; jaegeuk@kernel.org
> Cc: linux-f2fs-devel@lists.sourceforge.net
> Subject: Re: Do we need serial io for compress file?
> 
> On 2021/11/9 9:59, 常凤楠 wrote:
> >
> >
> >> -----Original Message-----
> >> From: changfengnan@vivo.com <changfengnan@vivo.com> On Behalf
> Of Chao
> >> Yu
> >> Sent: Monday, November 8, 2021 10:21 PM
> >> To: 常凤楠 <changfengnan@vivo.com>; jaegeuk@kernel.org
> >> Cc: linux-f2fs-devel@lists.sourceforge.net
> >> Subject: Re: Do we need serial io for compress file?
> >>
> >> On 2021/11/8 11:54, Fengnan Chang wrote:
> >>> In my test, serial io for compress file will make multithread small
> >>> write performance drop a lot.
> >>>
> >>> I'm try to fingure out why we need __should_serialize_io, IMO, we
> >>> use __should_serialize_io to avoid deadlock or try to improve
> >>> sequential performance, but I don't understand why we should do this
> >>> for
> >>
> >> It was introduced to avoid fragmentation of file blocks.
> >
> > So, for small write on compress file, is this still necessary? I think we
> should treat compress file as regular file.
> 
> Any real scenario there? let me know if I missed any cases, as I saw, most
> compressible files are not small...

Maybe my description was unclear: by "small write" I mean writes with a small block size, for example 4K.
When multiple threads write multiple compressed files with bs=4k, the serialized IO makes performance drop a lot.
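
A minimal C sketch of that workload, just to make the pattern concrete (paths
and sizes are hypothetical; each thread does 4 KiB writes plus fsync to its own
compression-enabled file):

#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define NR_THREADS	8
#define FILE_SIZE	(64 << 20)	/* 64 MiB per file */

static void *writer(void *arg)
{
	long id = (long)arg;
	char path[64], buf[4096];
	off_t off;
	int fd;

	memset(buf, 'Z', sizeof(buf));
	/* hypothetical path: one compression-enabled file per thread */
	snprintf(path, sizeof(path), "/mnt/f2fs/comp/file%ld", id);
	fd = open(path, O_RDWR | O_CREAT, 0644);
	if (fd < 0)
		return NULL;

	for (off = 0; off < FILE_SIZE; off += sizeof(buf)) {
		pwrite(fd, buf, sizeof(buf), off);
		/* fsync pushes writeback through __f2fs_write_data_pages() */
		fsync(fd);
	}
	close(fd);
	return NULL;
}

int main(void)
{
	pthread_t tid[NR_THREADS];
	long i;

	for (i = 0; i < NR_THREADS; i++)
		pthread_create(&tid[i], NULL, writer, (void *)i);
	for (i = 0; i < NR_THREADS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}

With the compressed-file check in __should_serialize_io(), all eight threads
funnel through the single sbi->writepages mutex even though they write
different files.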

> 
> >>
> >>> compressed file. In my test, if we just remove this, write same file
> >>> in multithread will have problem, but parallel write different files
> >>> in multithread
> >>
> >> What do you mean by "write same file in multithread will have
> problem"?
> >
> > If just remove compress file in __should_serialize_io()
> >
> > diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c index
> > f4fd6c246c9a..7bd429b46429 100644
> > --- a/fs/f2fs/data.c
> > +++ b/fs/f2fs/data.c
> > @@ -3165,8 +3165,8 @@ static inline bool __should_serialize_io(struct
> inode *inode,
> >          if (IS_NOQUOTA(inode))
> >                  return false;
> >
> > -       if (f2fs_need_compress_data(inode))
> > -               return true;
> > +       //if (f2fs_need_compress_data(inode))
> > +       //      return true;
> >          if (wbc->sync_mode != WB_SYNC_ALL)
> >                  return true;
> >          if (get_dirty_pages(inode) >=
> > SM_I(F2FS_I_SB(inode))->min_seq_blocks)
> >
> > and use fio to start multi thread to write same file, fio will hung.
> 
> Any potential hangtask issue there? did you get any stack backtrace log?
> 
Yes, it's quite easy to reproduce in my test.
Backtrace:
[  493.915408] INFO: task fio:479 blocked for more than 122 seconds.
[  493.915729]       Not tainted 5.15.0-rc1+ #516
[  493.915845] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  493.916265] task:fio             state:D stack:13008 pid:  479 ppid:   474 flags:0x00004000
[  493.916686] Call Trace:
[  493.917310]  __schedule+0x317/0x920
[  493.917560]  ? rcu_read_lock_sched_held+0x23/0x80
[  493.917705]  schedule+0x59/0xc0
[  493.917798]  io_schedule+0x12/0x40
[  493.917895]  __lock_page+0x141/0x230
[  493.924703]  ? filemap_invalidate_unlock_two+0x40/0x40
[  493.926404]  f2fs_write_multi_pages+0x1a7/0x9c0
[  493.926579]  f2fs_write_cache_pages+0x711/0x840
[  493.926771]  f2fs_write_data_pages+0x20e/0x3f0
[  493.926906]  ? rcu_read_lock_held_common+0xe/0x40
[  493.928305]  ? do_writepages+0xd1/0x190
[  493.928439]  do_writepages+0xd1/0x190
[  493.928551]  file_write_and_wait_range+0xa3/0xe0
[  493.928690]  f2fs_do_sync_file+0x10f/0x910
[  493.928823]  do_fsync+0x38/0x60
[  493.928924]  __x64_sys_fsync+0x10/0x20
[  493.929208]  do_syscall_64+0x3a/0x90
[  493.929327]  entry_SYSCALL_64_after_hwframe+0x44/0xae
[  493.929486] RIP: 0033:0x7f64cb26ca97
[  493.929636] RSP: 002b:00007ffe3863d180 EFLAGS: 00000293 ORIG_RAX: 000000000000004a
[  493.929816] RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 00007f64cb26ca97
[  493.929975] RDX: 0000000000000000 RSI: 000056378a8ad540 RDI: 0000000000000005
[  493.931164] RBP: 00007f647e23f000 R08: 0000000000000000 R09: 00007ffe387e6170
[  493.931347] R10: 00007ffe387e61b0 R11: 0000000000000293 R12: 0000000000000003
[  493.931517] R13: 00007f647e2836d8 R14: 0000000000000000 R15: 000056378a8ad568
[  493.931802] INFO: task fio:480 blocked for more than 122 seconds.
[  493.931946]       Not tainted 5.15.0-rc1+ #516
[  493.932202] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  493.932382] task:fio             state:D stack:13216 pid:  480 ppid:   474 flags:0x00000000
[  493.932596] Call Trace:
[  493.932681]  __schedule+0x317/0x920
[  493.932796]  ? rcu_read_lock_sched_held+0x23/0x80
[  493.932939]  schedule+0x59/0xc0
[  493.933180]  io_schedule+0x12/0x40
[  493.933285]  __lock_page+0x141/0x230
[  493.933391]  ? filemap_invalidate_unlock_two+0x40/0x40
[  493.933539]  f2fs_write_cache_pages+0x38a/0x840
[  493.933708]  ? do_writepages+0xd1/0x190
[  493.933821]  ? do_writepages+0xd1/0x190
[  493.933939]  f2fs_write_data_pages+0x20e/0x3f0
[  493.935516]  ? rcu_read_lock_held_common+0xe/0x40
[  493.935682]  ? do_writepages+0xd1/0x190
[  493.935790]  do_writepages+0xd1/0x190
[  493.935895]  file_write_and_wait_range+0xa3/0xe0
[  493.938074]  f2fs_do_sync_file+0x10f/0x910
[  493.938225]  do_fsync+0x38/0x60
[  493.938328]  __x64_sys_fsync+0x10/0x20
[  493.938430]  do_syscall_64+0x3a/0x90
[  493.938548]  entry_SYSCALL_64_after_hwframe+0x44/0xae
[  493.938683] RIP: 0033:0x7f64cb26ca97
[  493.938783] RSP: 002b:00007ffe3863d180 EFLAGS: 00000293 ORIG_RAX: 000000000000004a
[  493.938969] RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 00007f64cb26ca97
[  493.940413] RDX: 0000000000000000 RSI: 000056378a8ad540 RDI: 0000000000000005
[  493.940591] RBP: 00007f647e283a10 R08: 0000000000000000 R09: 00007ffe387e6170
[  493.940759] R10: 00007ffe387e61b0 R11: 0000000000000293 R12: 0000000000000003
[  493.940924] R13: 00007f647e2c80e8 R14: 0000000000000000 R15: 000056378a8ad568
[  493.941341] INFO: lockdep is turned off.
[  616.796413] INFO: task fio:479 blocked for more than 245 seconds.
[  616.796706]       Not tainted 5.15.0-rc1+ #516
[  616.796824] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  616.827081] task:fio             state:D stack:13008 pid:  479 ppid:   474 flags:0x00004000
[  616.841373] Call Trace:
[  616.841492]  __schedule+0x317/0x920
[  616.841635]  ? rcu_read_lock_sched_held+0x23/0x80
[  616.841767]  schedule+0x59/0xc0
[  616.841858]  io_schedule+0x12/0x40
[  616.841952]  __lock_page+0x141/0x230
[  616.852135]  ? filemap_invalidate_unlock_two+0x40/0x40
[  616.852305]  f2fs_write_multi_pages+0x1a7/0x9c0
[  616.852455]  f2fs_write_cache_pages+0x711/0x840
[  616.852661]  f2fs_write_data_pages+0x20e/0x3f0
[  616.852785]  ? rcu_read_lock_held_common+0xe/0x40
[  616.852924]  ? do_writepages+0xd1/0x190
[  616.853281]  do_writepages+0xd1/0x190
[  616.853409]  file_write_and_wait_range+0xa3/0xe0
[  616.853537]  f2fs_do_sync_file+0x10f/0x910
[  616.853670]  do_fsync+0x38/0x60
[  616.853773]  __x64_sys_fsync+0x10/0x20
[  616.853869]  do_syscall_64+0x3a/0x90
[  616.853971]  entry_SYSCALL_64_after_hwframe+0x44/0xae
[  616.863145] RIP: 0033:0x7f64cb26ca97
[  616.863273] RSP: 002b:00007ffe3863d180 EFLAGS: 00000293 ORIG_RAX: 000000000000004a
[  616.863468] RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 00007f64cb26ca97
[  616.863635] RDX: 0000000000000000 RSI: 000056378a8ad540 RDI: 0000000000000005
[  616.863803] RBP: 00007f647e23f000 R08: 0000000000000000 R09: 00007ffe387e6170
[  616.863975] R10: 00007ffe387e61b0 R11: 0000000000000293 R12: 0000000000000003
[  616.870147] R13: 00007f647e2836d8 R14: 0000000000000000 R15: 000056378a8ad568
[  616.870388] INFO: task fio:480 blocked for more than 245 seconds.
[  616.870537]       Not tainted 5.15.0-rc1+ #516
[  616.870650] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  616.870833] task:fio             state:D stack:13216 pid:  480 ppid:   474 flags:0x00000000
[  616.872161] Call Trace:
[  616.872257]  __schedule+0x317/0x920
[  616.872366]  ? rcu_read_lock_sched_held+0x23/0x80
[  616.872503]  schedule+0x59/0xc0
[  616.872596]  io_schedule+0x12/0x40
[  616.872694]  __lock_page+0x141/0x230
[  616.872803]  ? filemap_invalidate_unlock_two+0x40/0x40
[  616.872945]  f2fs_write_cache_pages+0x38a/0x840
[  616.873331]  ? do_writepages+0xd1/0x190
[  616.873448]  ? do_writepages+0xd1/0x190
[  616.873569]  f2fs_write_data_pages+0x20e/0x3f0
[  616.873694]  ? rcu_read_lock_held_common+0xe/0x40
[  616.873833]  ? do_writepages+0xd1/0x190
[  616.873936]  do_writepages+0xd1/0x190
[  616.874272]  file_write_and_wait_range+0xa3/0xe0
[  616.874424]  f2fs_do_sync_file+0x10f/0x910
[  616.874560]  do_fsync+0x38/0x60
[  616.874660]  __x64_sys_fsync+0x10/0x20
[  616.874759]  do_syscall_64+0x3a/0x90
[  616.874863]  entry_SYSCALL_64_after_hwframe+0x44/0xae
[  616.875792] RIP: 0033:0x7f64cb26ca97
[  616.875905] RSP: 002b:00007ffe3863d180 EFLAGS: 00000293 ORIG_RAX: 000000000000004a
[  616.876345] RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 00007f64cb26ca97
[  616.876515] RDX: 0000000000000000 RSI: 000056378a8ad540 RDI: 0000000000000005
[  616.876692] RBP: 00007f647e283a10 R08: 0000000000000000 R09: 00007ffe387e6170
[  616.876859] R10: 00007ffe387e61b0 R11: 0000000000000293 R12: 0000000000000003
[  616.877231] R13: 00007f647e2c80e8 R14: 0000000000000000 R15: 000056378a8ad568
[  616.877456] INFO: lockdep is turned off.
> If there is, it needs to figure out the root cause.
> 
> Thanks,
> 
> > fio.conf:
> > [global]
> > direct=1
> > numjobs=8
> > time_based
> > runtime=30
> > ioengine=sync
> > iodepth=16
> > buffer_pattern="ZZZZ"
> > fsync=1
> >
> > [file0]
> > name=fio-rand-RW
> > filename=fio-rand-RW
> > rw=rw
> > rwmixread=60
> > rwmixwrite=40
> > bs=1M
> > size=64M
> >
> > [file1]
> > name=fio-rand-RW
> > filename=fio-rand-RW
> > rw=randrw
> > rwmixread=60
> > rwmixwrite=40
> > bs=4K
> > size=64M
> >
> >>
> >> Thanks,
> >>
> >>> is ok. So I think maybe we should use another lock to allow write
> >> different files in multithread.
> >>>


* Re: [f2fs-dev] Do we need serial io for compress file?
  2021-11-10  1:41         ` 常凤楠
@ 2021-11-13  6:15           ` Chao Yu
  0 siblings, 0 replies; 16+ messages in thread
From: Chao Yu @ 2021-11-13  6:15 UTC (permalink / raw)
  To: 常凤楠, jaegeuk; +Cc: linux-f2fs-devel

On 2021/11/10 9:41, 常凤楠 wrote:
>> -----Original Message-----
>> From: changfengnan@vivo.com <changfengnan@vivo.com> On Behalf Of
>> Chao Yu
>> Sent: Tuesday, November 9, 2021 9:46 PM
>> To: 常凤楠 <changfengnan@vivo.com>; jaegeuk@kernel.org
>> Cc: linux-f2fs-devel@lists.sourceforge.net
>> Subject: Re: Do we need serial io for compress file?
>>
>> On 2021/11/9 11:18, 常凤楠 wrote:
>>>
>>>
>>>> -----Original Message-----
>>>> From: changfengnan@vivo.com <changfengnan@vivo.com> On Behalf
>> Of Chao
>>>> Yu
>>>> Sent: Monday, November 8, 2021 10:30 PM
>>>> To: 常凤楠 <changfengnan@vivo.com>; jaegeuk@kernel.org
>>>> Cc: linux-f2fs-devel@lists.sourceforge.net
>>>> Subject: Re: Do we need serial io for compress file?
>>>>
>>>> On 2021/11/8 16:56, 常凤楠 wrote:
>>>>> Anyway, I did some modify to verify my idea, and did some test, not
>>>> found problem for now.
>>>>
>>>> Could you please consider:
>>>> 1. pin file
>>>> 2. fallocate file w/ filesize keeped
>>>>     - it will preallocate physical blocks aligned to segments 3. unpin file
>> 4.
>>>> overwrite compressed file
>>>
>>> So you means after step 1-3, it will make compressed file fragmentation
>> worse ?
>>
>> Oh, I'm trying to find a way to avoid write performance regression due to
>> race condition on writepages lock meanwhile avoiding fragmentation of
>> compressed file.
> 
> Yep, that's what I want to do, looking forward your idea.
> And how about the modify as below ?

Won't we still suffer fragmentation due to races between compressed file writes and
normal file writes?

Thanks,

> 
> Thanks.
>> But I'm out of my mind, above case wouldn't help, please ignore this.
>>
>> Thanks,
>>
>>>
>>> Thanks.
>>>>
>>>> Thanks,
>>>>
>>>>>
>>>>> The modify as follows:
>>>>>
>>>>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c index
>>>>> f4fd6c246c9a..0ed677efe820 100644
>>>>> --- a/fs/f2fs/data.c
>>>>> +++ b/fs/f2fs/data.c
>>>>> @@ -3165,8 +3165,6 @@ static inline bool
>>>>> __should_serialize_io(struct
>>>> inode *inode,
>>>>>     	if (IS_NOQUOTA(inode))
>>>>>     		return false;
>>>>>
>>>>> -	if (f2fs_need_compress_data(inode))
>>>>> -		return true;
>>>>>     	if (wbc->sync_mode != WB_SYNC_ALL)
>>>>>     		return true;
>>>>>     	if (get_dirty_pages(inode) >=
>>>>> SM_I(F2FS_I_SB(inode))->min_seq_blocks)
>>>>> @@ -3218,11 +3216,16 @@ static int __f2fs_write_data_pages(struct
>>>> address_space *mapping,
>>>>>     		mutex_lock(&sbi->writepages);
>>>>>     		locked = true;
>>>>>     	}
>>>>> +	if (f2fs_need_compress_data(inode))
>>>>> +		mutex_lock(&(F2FS_I(inode)->compress_lock));
>>>>>
>>>>>     	blk_start_plug(&plug);
>>>>>     	ret = f2fs_write_cache_pages(mapping, wbc, io_type);
>>>>>     	blk_finish_plug(&plug);
>>>>>
>>>>> +	if (f2fs_need_compress_data(inode))
>>>>> +		mutex_unlock(&(F2FS_I(inode)->compress_lock));
>>>>> +
>>>>>     	if (locked)
>>>>>     		mutex_unlock(&sbi->writepages);
>>>>>
>>>>> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index
>>>>> 039a229e11c9..3a6587f13d2f 100644
>>>>> --- a/fs/f2fs/f2fs.h
>>>>> +++ b/fs/f2fs/f2fs.h
>>>>> @@ -763,6 +763,7 @@ struct f2fs_inode_info {
>>>>>     	struct list_head inmem_pages;	/* inmemory pages managed by
>>>> f2fs */
>>>>>     	struct task_struct *inmem_task;	/* store inmemory task */
>>>>>     	struct mutex inmem_lock;	/* lock for inmemory pages */
>>>>> +	struct mutex compress_lock;	/* lock for compress file */
>>>>>     	struct extent_tree *extent_tree;	/* cached extent_tree entry */
>>>>>
>>>>>     	/* avoid racing between foreground op and gc */ diff --git
>>>>> a/fs/f2fs/super.c b/fs/f2fs/super.c index a133932333c5..8566e9c34540
>>>>> 100644
>>>>> --- a/fs/f2fs/super.c
>>>>> +++ b/fs/f2fs/super.c
>>>>> @@ -1323,6 +1323,7 @@ static struct inode *f2fs_alloc_inode(struct
>>>> super_block *sb)
>>>>>     	INIT_LIST_HEAD(&fi->inmem_ilist);
>>>>>     	INIT_LIST_HEAD(&fi->inmem_pages);
>>>>>     	mutex_init(&fi->inmem_lock);
>>>>> +	mutex_init(&fi->compress_lock);
>>>>>     	init_rwsem(&fi->i_gc_rwsem[READ]);
>>>>>     	init_rwsem(&fi->i_gc_rwsem[WRITE]);
>>>>>     	init_rwsem(&fi->i_xattr_sem);
>>>>> --
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: 常凤楠
>>>>>> Sent: Monday, November 8, 2021 11:55 AM
>>>>>> To: jaegeuk@kernel.org; chao@kernel.org
>>>>>> Cc: linux-f2fs-devel@lists.sourceforge.net
>>>>>> Subject: Do we need serial io for compress file?
>>>>>>
>>>>>> In my test, serial io for compress file will make multithread small
>>>>>> write performance drop a lot.
>>>>>>
>>>>>> I'm try to fingure out why we need __should_serialize_io, IMO, we
>>>>>> use __should_serialize_io to avoid deadlock or try to improve
>>>>>> sequential performance, but I don't understand why we should do
>>>>>> this for compressed file. In my test, if we just remove this, write
>>>>>> same file in multithread will have problem, but parallel write
>>>>>> different files in multithread is ok. So I think maybe we should
>>>>>> use another lock to allow write different files in multithread.



* Re: [f2fs-dev] Do we need serial io for compress file?
  2021-11-10  1:49       ` 常凤楠
@ 2021-11-13  6:21         ` Chao Yu
       [not found]         ` <AIIAVAAPE7EN6BQe5FH43Kp3.9.1636784464708.Hmail.changfengnan@vivo.com>
  1 sibling, 0 replies; 16+ messages in thread
From: Chao Yu @ 2021-11-13  6:21 UTC (permalink / raw)
  To: 常凤楠, jaegeuk; +Cc: linux-f2fs-devel

On 2021/11/10 9:49, 常凤楠 wrote:
> 
> 
>> -----Original Message-----
>> From: changfengnan@vivo.com <changfengnan@vivo.com> On Behalf Of
>> Chao Yu
>> Sent: Tuesday, November 9, 2021 9:41 PM
>> To: 常凤楠 <changfengnan@vivo.com>; jaegeuk@kernel.org
>> Cc: linux-f2fs-devel@lists.sourceforge.net
>> Subject: Re: Do we need serial io for compress file?
>>
>> On 2021/11/9 9:59, 常凤楠 wrote:
>>>
>>>
>>>> -----Original Message-----
>>>> From: changfengnan@vivo.com <changfengnan@vivo.com> On Behalf
>> Of Chao
>>>> Yu
>>>> Sent: Monday, November 8, 2021 10:21 PM
>>>> To: 常凤楠 <changfengnan@vivo.com>; jaegeuk@kernel.org
>>>> Cc: linux-f2fs-devel@lists.sourceforge.net
>>>> Subject: Re: Do we need serial io for compress file?
>>>>
>>>> On 2021/11/8 11:54, Fengnan Chang wrote:
>>>>> In my test, serial io for compress file will make multithread small
>>>>> write performance drop a lot.
>>>>>
>>>>> I'm try to fingure out why we need __should_serialize_io, IMO, we
>>>>> use __should_serialize_io to avoid deadlock or try to improve
>>>>> sequential performance, but I don't understand why we should do this
>>>>> for
>>>>
>>>> It was introduced to avoid fragmentation of file blocks.
>>>
>>> So, for small write on compress file, is this still necessary? I think we
>> should treat compress file as regular file.
>>
>> Any real scenario there? let me know if I missed any cases, as I saw, most
>> compressible files are not small...
> 
> Maybe my description is incorrect, small write means write with small block size, for example 4K.
> When write multi compress file with bs=4k in multithread,  the serialize io make performance drop a lot.

Got it, but I mean: what is the real use case... rather than a benchmark scenario?

> 
>>
>>>>
>>>>> compressed file. In my test, if we just remove this, write same file
>>>>> in multithread will have problem, but parallel write different files
>>>>> in multithread
>>>>
>>>> What do you mean by "write same file in multithread will have
>> problem"?
>>>
>>> If just remove compress file in __should_serialize_io()
>>>
>>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c index
>>> f4fd6c246c9a..7bd429b46429 100644
>>> --- a/fs/f2fs/data.c
>>> +++ b/fs/f2fs/data.c
>>> @@ -3165,8 +3165,8 @@ static inline bool __should_serialize_io(struct
>> inode *inode,
>>>           if (IS_NOQUOTA(inode))
>>>                   return false;
>>>
>>> -       if (f2fs_need_compress_data(inode))
>>> -               return true;
>>> +       //if (f2fs_need_compress_data(inode))
>>> +       //      return true;
>>>           if (wbc->sync_mode != WB_SYNC_ALL)
>>>                   return true;
>>>           if (get_dirty_pages(inode) >=
>>> SM_I(F2FS_I_SB(inode))->min_seq_blocks)
>>>
>>> and use fio to start multi thread to write same file, fio will hung.
>>
>> Any potential hangtask issue there? did you get any stack backtrace log?
>>
> Yes, it's quite easy to reproduce in my test.

What's your test case and filesystem configuration?

Thanks,

> Backtrace:
> [  493.915408] INFO: task fio:479 blocked for more than 122 seconds.
> [  493.915729]       Not tainted 5.15.0-rc1+ #516
> [  493.915845] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [  493.916265] task:fio             state:D stack:13008 pid:  479 ppid:   474 flags:0x00004000
> [  493.916686] Call Trace:
> [  493.917310]  __schedule+0x317/0x920
> [  493.917560]  ? rcu_read_lock_sched_held+0x23/0x80
> [  493.917705]  schedule+0x59/0xc0
> [  493.917798]  io_schedule+0x12/0x40
> [  493.917895]  __lock_page+0x141/0x230
> [  493.924703]  ? filemap_invalidate_unlock_two+0x40/0x40
> [  493.926404]  f2fs_write_multi_pages+0x1a7/0x9c0
> [  493.926579]  f2fs_write_cache_pages+0x711/0x840
> [  493.926771]  f2fs_write_data_pages+0x20e/0x3f0
> [  493.926906]  ? rcu_read_lock_held_common+0xe/0x40
> [  493.928305]  ? do_writepages+0xd1/0x190
> [  493.928439]  do_writepages+0xd1/0x190
> [  493.928551]  file_write_and_wait_range+0xa3/0xe0
> [  493.928690]  f2fs_do_sync_file+0x10f/0x910
> [  493.928823]  do_fsync+0x38/0x60
> [  493.928924]  __x64_sys_fsync+0x10/0x20
> [  493.929208]  do_syscall_64+0x3a/0x90
> [  493.929327]  entry_SYSCALL_64_after_hwframe+0x44/0xae
> [  493.929486] RIP: 0033:0x7f64cb26ca97
> [  493.929636] RSP: 002b:00007ffe3863d180 EFLAGS: 00000293 ORIG_RAX: 000000000000004a
> [  493.929816] RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 00007f64cb26ca97
> [  493.929975] RDX: 0000000000000000 RSI: 000056378a8ad540 RDI: 0000000000000005
> [  493.931164] RBP: 00007f647e23f000 R08: 0000000000000000 R09: 00007ffe387e6170
> [  493.931347] R10: 00007ffe387e61b0 R11: 0000000000000293 R12: 0000000000000003
> [  493.931517] R13: 00007f647e2836d8 R14: 0000000000000000 R15: 000056378a8ad568
> [  493.931802] INFO: task fio:480 blocked for more than 122 seconds.
> [  493.931946]       Not tainted 5.15.0-rc1+ #516
> [  493.932202] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [  493.932382] task:fio             state:D stack:13216 pid:  480 ppid:   474 flags:0x00000000
> [  493.932596] Call Trace:
> [  493.932681]  __schedule+0x317/0x920
> [  493.932796]  ? rcu_read_lock_sched_held+0x23/0x80
> [  493.932939]  schedule+0x59/0xc0
> [  493.933180]  io_schedule+0x12/0x40
> [  493.933285]  __lock_page+0x141/0x230
> [  493.933391]  ? filemap_invalidate_unlock_two+0x40/0x40
> [  493.933539]  f2fs_write_cache_pages+0x38a/0x840
> [  493.933708]  ? do_writepages+0xd1/0x190
> [  493.933821]  ? do_writepages+0xd1/0x190
> [  493.933939]  f2fs_write_data_pages+0x20e/0x3f0
> [  493.935516]  ? rcu_read_lock_held_common+0xe/0x40
> [  493.935682]  ? do_writepages+0xd1/0x190
> [  493.935790]  do_writepages+0xd1/0x190
> [  493.935895]  file_write_and_wait_range+0xa3/0xe0
> [  493.938074]  f2fs_do_sync_file+0x10f/0x910
> [  493.938225]  do_fsync+0x38/0x60
> [  493.938328]  __x64_sys_fsync+0x10/0x20
> [  493.938430]  do_syscall_64+0x3a/0x90
> [  493.938548]  entry_SYSCALL_64_after_hwframe+0x44/0xae
> [  493.938683] RIP: 0033:0x7f64cb26ca97
> [  493.938783] RSP: 002b:00007ffe3863d180 EFLAGS: 00000293 ORIG_RAX: 000000000000004a
> [  493.938969] RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 00007f64cb26ca97
> [  493.940413] RDX: 0000000000000000 RSI: 000056378a8ad540 RDI: 0000000000000005
> [  493.940591] RBP: 00007f647e283a10 R08: 0000000000000000 R09: 00007ffe387e6170
> [  493.940759] R10: 00007ffe387e61b0 R11: 0000000000000293 R12: 0000000000000003
> [  493.940924] R13: 00007f647e2c80e8 R14: 0000000000000000 R15: 000056378a8ad568
> [  493.941341] INFO: lockdep is turned off.
> [  616.796413] INFO: task fio:479 blocked for more than 245 seconds.
> [  616.796706]       Not tainted 5.15.0-rc1+ #516
> [  616.796824] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [  616.827081] task:fio             state:D stack:13008 pid:  479 ppid:   474 flags:0x00004000
> [  616.841373] Call Trace:
> [  616.841492]  __schedule+0x317/0x920
> [  616.841635]  ? rcu_read_lock_sched_held+0x23/0x80
> [  616.841767]  schedule+0x59/0xc0
> [  616.841858]  io_schedule+0x12/0x40
> [  616.841952]  __lock_page+0x141/0x230
> [  616.852135]  ? filemap_invalidate_unlock_two+0x40/0x40
> [  616.852305]  f2fs_write_multi_pages+0x1a7/0x9c0
> [  616.852455]  f2fs_write_cache_pages+0x711/0x840
> [  616.852661]  f2fs_write_data_pages+0x20e/0x3f0
> [  616.852785]  ? rcu_read_lock_held_common+0xe/0x40
> [  616.852924]  ? do_writepages+0xd1/0x190
> [  616.853281]  do_writepages+0xd1/0x190
> [  616.853409]  file_write_and_wait_range+0xa3/0xe0
> [  616.853537]  f2fs_do_sync_file+0x10f/0x910
> [  616.853670]  do_fsync+0x38/0x60
> [  616.853773]  __x64_sys_fsync+0x10/0x20
> [  616.853869]  do_syscall_64+0x3a/0x90
> [  616.853971]  entry_SYSCALL_64_after_hwframe+0x44/0xae
> [  616.863145] RIP: 0033:0x7f64cb26ca97
> [  616.863273] RSP: 002b:00007ffe3863d180 EFLAGS: 00000293 ORIG_RAX: 000000000000004a
> [  616.863468] RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 00007f64cb26ca97
> [  616.863635] RDX: 0000000000000000 RSI: 000056378a8ad540 RDI: 0000000000000005
> [  616.863803] RBP: 00007f647e23f000 R08: 0000000000000000 R09: 00007ffe387e6170
> [  616.863975] R10: 00007ffe387e61b0 R11: 0000000000000293 R12: 0000000000000003
> [  616.870147] R13: 00007f647e2836d8 R14: 0000000000000000 R15: 000056378a8ad568
> [  616.870388] INFO: task fio:480 blocked for more than 245 seconds.
> [  616.870537]       Not tainted 5.15.0-rc1+ #516
> [  616.870650] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [  616.870833] task:fio             state:D stack:13216 pid:  480 ppid:   474 flags:0x00000000
> [  616.872161] Call Trace:
> [  616.872257]  __schedule+0x317/0x920
> [  616.872366]  ? rcu_read_lock_sched_held+0x23/0x80
> [  616.872503]  schedule+0x59/0xc0
> [  616.872596]  io_schedule+0x12/0x40
> [  616.872694]  __lock_page+0x141/0x230
> [  616.872803]  ? filemap_invalidate_unlock_two+0x40/0x40
> [  616.872945]  f2fs_write_cache_pages+0x38a/0x840
> [  616.873331]  ? do_writepages+0xd1/0x190
> [  616.873448]  ? do_writepages+0xd1/0x190
> [  616.873569]  f2fs_write_data_pages+0x20e/0x3f0
> [  616.873694]  ? rcu_read_lock_held_common+0xe/0x40
> [  616.873833]  ? do_writepages+0xd1/0x190
> [  616.873936]  do_writepages+0xd1/0x190
> [  616.874272]  file_write_and_wait_range+0xa3/0xe0
> [  616.874424]  f2fs_do_sync_file+0x10f/0x910
> [  616.874560]  do_fsync+0x38/0x60
> [  616.874660]  __x64_sys_fsync+0x10/0x20
> [  616.874759]  do_syscall_64+0x3a/0x90
> [  616.874863]  entry_SYSCALL_64_after_hwframe+0x44/0xae
> [  616.875792] RIP: 0033:0x7f64cb26ca97
> [  616.875905] RSP: 002b:00007ffe3863d180 EFLAGS: 00000293 ORIG_RAX: 000000000000004a
> [  616.876345] RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 00007f64cb26ca97
> [  616.876515] RDX: 0000000000000000 RSI: 000056378a8ad540 RDI: 0000000000000005
> [  616.876692] RBP: 00007f647e283a10 R08: 0000000000000000 R09: 00007ffe387e6170
> [  616.876859] R10: 00007ffe387e61b0 R11: 0000000000000293 R12: 0000000000000003
> [  616.877231] R13: 00007f647e2c80e8 R14: 0000000000000000 R15: 000056378a8ad568
> [  616.877456] INFO: lockdep is turned off.
>> If there is, it needs to figure out the root cause.
>>
>> Thanks,
>>
>>> fio.conf:
>>> [global]
>>> direct=1
>>> numjobs=8
>>> time_based
>>> runtime=30
>>> ioengine=sync
>>> iodepth=16
>>> buffer_pattern="ZZZZ"
>>> fsync=1
>>>
>>> [file0]
>>> name=fio-rand-RW
>>> filename=fio-rand-RW
>>> rw=rw
>>> rwmixread=60
>>> rwmixwrite=40
>>> bs=1M
>>> size=64M
>>>
>>> [file1]
>>> name=fio-rand-RW
>>> filename=fio-rand-RW
>>> rw=randrw
>>> rwmixread=60
>>> rwmixwrite=40
>>> bs=4K
>>> size=64M
>>>
>>>>
>>>> Thanks,
>>>>
>>>>> is ok. So I think maybe we should use another lock to allow write
>>>> different files in multithread.
>>>>>


_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [f2fs-dev] Do we need serial io for compress file?
       [not found]           ` <KL1PR0601MB40038EBE3EE038926365F40DBB989@KL1PR0601MB4003.apcprd06.prod.outlook.com>
@ 2021-11-15  2:46             ` Chao Yu
       [not found]             ` <AOEA7wCAE-cPtwt6leizvqr0.9.1636944422386.Hmail.changfengnan@vivo.com>
  1 sibling, 0 replies; 16+ messages in thread
From: Chao Yu @ 2021-11-15  2:46 UTC (permalink / raw)
  To: 常凤楠, jaegeuk; +Cc: linux-f2fs-devel

On 2021/11/15 10:25, 常凤楠 wrote:
>> -----Original Message-----
>> From: changfengnan@vivo.com <changfengnan@vivo.com> On Behalf Of
>> Chao Yu
>> Sent: Saturday, November 13, 2021 2:21 PM
>> To: 常凤楠 <changfengnan@vivo.com>; jaegeuk@kernel.org
>> Cc: linux-f2fs-devel@lists.sourceforge.net
>> Subject: Re: Do we need serial io for compress file?
>>
>> On 2021/11/10 9:49, 常凤楠 wrote:
>>>
>>>
>>>> -----Original Message-----
>>>> From: changfengnan@vivo.com <changfengnan@vivo.com> On Behalf
>> Of Chao
>>>> Yu
>>>> Sent: Tuesday, November 9, 2021 9:41 PM
>>>> To: 常凤楠 <changfengnan@vivo.com>; jaegeuk@kernel.org
>>>> Cc: linux-f2fs-devel@lists.sourceforge.net
>>>> Subject: Re: Do we need serial io for compress file?
>>>>
>>>> On 2021/11/9 9:59, 常凤楠 wrote:
>>>>>
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: changfengnan@vivo.com <changfengnan@vivo.com> On
>> Behalf
>>>> Of Chao
>>>>>> Yu
>>>>>> Sent: Monday, November 8, 2021 10:21 PM
>>>>>> To: 常凤楠 <changfengnan@vivo.com>; jaegeuk@kernel.org
>>>>>> Cc: linux-f2fs-devel@lists.sourceforge.net
>>>>>> Subject: Re: Do we need serial io for compress file?
>>>>>>
>>>>>> On 2021/11/8 11:54, Fengnan Chang wrote:
>>>>>>> In my test, serial io for compress file will make multithread
>>>>>>> small write performance drop a lot.
>>>>>>>
>>>>>>> I'm try to fingure out why we need __should_serialize_io, IMO, we
>>>>>>> use __should_serialize_io to avoid deadlock or try to improve
>>>>>>> sequential performance, but I don't understand why we should do
>>>>>>> this for
>>>>>>
>>>>>> It was introduced to avoid fragmentation of file blocks.
>>>>>
>>>>> So, for small write on compress file, is this still necessary? I
>>>>> think we
>>>> should treat compress file as regular file.
>>>>
>>>> Any real scenario there? let me know if I missed any cases, as I saw,
>>>> most compressible files are not small...
>>>
>>> Maybe my description is incorrect, small write means write with small
>> block size, for example 4K.
>>> When write multi compress file with bs=4k in multithread,  the serialize
>> io make performance drop a lot.
>>
>> Got it, but I mean what the real usercase is... rather than benchmark
>> scenario.
> 
> I think it’s quite common user case if we take compress file as normal file to use,

Well, I mean: what type of files do you want to compress, which apps will use
them, and what IO model runs on the compressed files?

Thanks,

> write multi file with bs < cluster size at same time will be bothered by this issue.
> 
>>
>>>
>>>>
>>>>>>
>>>>>>> compressed file. In my test, if we just remove this, write same
>>>>>>> file in multithread will have problem, but parallel write
>>>>>>> different files in multithread
>>>>>>
>>>>>> What do you mean by "write same file in multithread will have
>>>> problem"?
>>>>>
>>>>> If just remove compress file in __should_serialize_io()
>>>>>
>>>>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c index
>>>>> f4fd6c246c9a..7bd429b46429 100644
>>>>> --- a/fs/f2fs/data.c
>>>>> +++ b/fs/f2fs/data.c
>>>>> @@ -3165,8 +3165,8 @@ static inline bool
>>>>> __should_serialize_io(struct
>>>> inode *inode,
>>>>>            if (IS_NOQUOTA(inode))
>>>>>                    return false;
>>>>>
>>>>> -       if (f2fs_need_compress_data(inode))
>>>>> -               return true;
>>>>> +       //if (f2fs_need_compress_data(inode))
>>>>> +       //      return true;
>>>>>            if (wbc->sync_mode != WB_SYNC_ALL)
>>>>>                    return true;
>>>>>            if (get_dirty_pages(inode) >=
>>>>> SM_I(F2FS_I_SB(inode))->min_seq_blocks)
>>>>>
>>>>> and use fio to start multi thread to write same file, fio will hung.
>>>>
>>>> Any potential hangtask issue there? did you get any stack backtrace log?
>>>>
>>> Yes, it's quite easy to reproduce in my test.
>>
>> What's your testcase? and filesystem configuration?
> 
> The test case as below fio conf. filesystem configure, the whole kernel config as attached config.rar file.
> CONFIG_F2FS_FS=y
> CONFIG_F2FS_STAT_FS=y
> CONFIG_F2FS_FS_XATTR=y
> CONFIG_F2FS_FS_POSIX_ACL=y
> CONFIG_F2FS_FS_SECURITY=y
> CONFIG_F2FS_CHECK_FS=y
> CONFIG_F2FS_FAULT_INJECTION=y
> CONFIG_F2FS_FS_COMPRESSION=y
> CONFIG_F2FS_FS_LZO=y
> CONFIG_F2FS_FS_LZORLE=y
> CONFIG_F2FS_FS_LZ4=y
> CONFIG_F2FS_FS_LZ4HC=y
> CONFIG_F2FS_FS_ZSTD=y
> CONFIG_F2FS_IOSTAT=y
>>
>> Thanks,
>>
>>> Backtrace:
>>> [  493.915408] INFO: task fio:479 blocked for more than 122 seconds.
>>> [  493.915729]       Not tainted 5.15.0-rc1+ #516
>>> [  493.915845] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
>> disables this message.
>>> [  493.916265] task:fio             state:D stack:13008 pid:  479 ppid:
>> 474 flags:0x00004000
>>> [  493.916686] Call Trace:
>>> [  493.917310]  __schedule+0x317/0x920 [  493.917560]  ?
>>> rcu_read_lock_sched_held+0x23/0x80
>>> [  493.917705]  schedule+0x59/0xc0
>>> [  493.917798]  io_schedule+0x12/0x40
>>> [  493.917895]  __lock_page+0x141/0x230 [  493.924703]  ?
>>> filemap_invalidate_unlock_two+0x40/0x40
>>> [  493.926404]  f2fs_write_multi_pages+0x1a7/0x9c0
>>> [  493.926579]  f2fs_write_cache_pages+0x711/0x840
>>> [  493.926771]  f2fs_write_data_pages+0x20e/0x3f0 [  493.926906]  ?
>>> rcu_read_lock_held_common+0xe/0x40
>>> [  493.928305]  ? do_writepages+0xd1/0x190 [  493.928439]
>>> do_writepages+0xd1/0x190 [  493.928551]
>>> file_write_and_wait_range+0xa3/0xe0
>>> [  493.928690]  f2fs_do_sync_file+0x10f/0x910 [  493.928823]
>>> do_fsync+0x38/0x60 [  493.928924]  __x64_sys_fsync+0x10/0x20 [
>>> 493.929208]  do_syscall_64+0x3a/0x90 [  493.929327]
>>> entry_SYSCALL_64_after_hwframe+0x44/0xae
>>> [  493.929486] RIP: 0033:0x7f64cb26ca97 [  493.929636] RSP:
>>> 002b:00007ffe3863d180 EFLAGS: 00000293 ORIG_RAX:
>> 000000000000004a [
>>> 493.929816] RAX: ffffffffffffffda RBX: 0000000000000005 RCX:
>>> 00007f64cb26ca97 [  493.929975] RDX: 0000000000000000 RSI:
>>> 000056378a8ad540 RDI: 0000000000000005 [  493.931164] RBP:
>>> 00007f647e23f000 R08: 0000000000000000 R09: 00007ffe387e6170 [
>>> 493.931347] R10: 00007ffe387e61b0 R11: 0000000000000293 R12:
>>> 0000000000000003 [  493.931517] R13: 00007f647e2836d8 R14:
>> 0000000000000000 R15: 000056378a8ad568 [  493.931802] INFO: task
>> fio:480 blocked for more than 122 seconds.
>>> [  493.931946]       Not tainted 5.15.0-rc1+ #516
>>> [  493.932202] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
>> disables this message.
>>> [  493.932382] task:fio             state:D stack:13216 pid:  480 ppid:
>> 474 flags:0x00000000
>>> [  493.932596] Call Trace:
>>> [  493.932681]  __schedule+0x317/0x920 [  493.932796]  ?
>>> rcu_read_lock_sched_held+0x23/0x80
>>> [  493.932939]  schedule+0x59/0xc0
>>> [  493.933180]  io_schedule+0x12/0x40
>>> [  493.933285]  __lock_page+0x141/0x230 [  493.933391]  ?
>>> filemap_invalidate_unlock_two+0x40/0x40
>>> [  493.933539]  f2fs_write_cache_pages+0x38a/0x840
>>> [  493.933708]  ? do_writepages+0xd1/0x190 [  493.933821]  ?
>>> do_writepages+0xd1/0x190 [  493.933939]
>>> f2fs_write_data_pages+0x20e/0x3f0 [  493.935516]  ?
>>> rcu_read_lock_held_common+0xe/0x40
>>> [  493.935682]  ? do_writepages+0xd1/0x190 [  493.935790]
>>> do_writepages+0xd1/0x190 [  493.935895]
>>> file_write_and_wait_range+0xa3/0xe0
>>> [  493.938074]  f2fs_do_sync_file+0x10f/0x910 [  493.938225]
>>> do_fsync+0x38/0x60 [  493.938328]  __x64_sys_fsync+0x10/0x20 [
>>> 493.938430]  do_syscall_64+0x3a/0x90 [  493.938548]
>>> entry_SYSCALL_64_after_hwframe+0x44/0xae
>>> [  493.938683] RIP: 0033:0x7f64cb26ca97 [  493.938783] RSP:
>>> 002b:00007ffe3863d180 EFLAGS: 00000293 ORIG_RAX:
>> 000000000000004a [
>>> 493.938969] RAX: ffffffffffffffda RBX: 0000000000000005 RCX:
>>> 00007f64cb26ca97 [  493.940413] RDX: 0000000000000000 RSI:
>>> 000056378a8ad540 RDI: 0000000000000005 [  493.940591] RBP:
>>> 00007f647e283a10 R08: 0000000000000000 R09: 00007ffe387e6170 [
>>> 493.940759] R10: 00007ffe387e61b0 R11: 0000000000000293 R12:
>>> 0000000000000003 [  493.940924] R13: 00007f647e2c80e8 R14:
>> 0000000000000000 R15: 000056378a8ad568 [  493.941341] INFO: lockdep
>> is turned off.
>>> [  616.796413] INFO: task fio:479 blocked for more than 245 seconds.
>>> [  616.796706]       Not tainted 5.15.0-rc1+ #516
>>> [  616.796824] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
>> disables this message.
>>> [  616.827081] task:fio             state:D stack:13008 pid:  479 ppid:
>> 474 flags:0x00004000
>>> [  616.841373] Call Trace:
>>> [  616.841492]  __schedule+0x317/0x920 [  616.841635]  ?
>>> rcu_read_lock_sched_held+0x23/0x80
>>> [  616.841767]  schedule+0x59/0xc0
>>> [  616.841858]  io_schedule+0x12/0x40
>>> [  616.841952]  __lock_page+0x141/0x230 [  616.852135]  ?
>>> filemap_invalidate_unlock_two+0x40/0x40
>>> [  616.852305]  f2fs_write_multi_pages+0x1a7/0x9c0
>>> [  616.852455]  f2fs_write_cache_pages+0x711/0x840
>>> [  616.852661]  f2fs_write_data_pages+0x20e/0x3f0 [  616.852785]  ?
>>> rcu_read_lock_held_common+0xe/0x40
>>> [  616.852924]  ? do_writepages+0xd1/0x190 [  616.853281]
>>> do_writepages+0xd1/0x190 [  616.853409]
>>> file_write_and_wait_range+0xa3/0xe0
>>> [  616.853537]  f2fs_do_sync_file+0x10f/0x910 [  616.853670]
>>> do_fsync+0x38/0x60 [  616.853773]  __x64_sys_fsync+0x10/0x20 [
>>> 616.853869]  do_syscall_64+0x3a/0x90 [  616.853971]
>>> entry_SYSCALL_64_after_hwframe+0x44/0xae
>>> [  616.863145] RIP: 0033:0x7f64cb26ca97 [  616.863273] RSP:
>>> 002b:00007ffe3863d180 EFLAGS: 00000293 ORIG_RAX:
>> 000000000000004a [
>>> 616.863468] RAX: ffffffffffffffda RBX: 0000000000000005 RCX:
>>> 00007f64cb26ca97 [  616.863635] RDX: 0000000000000000 RSI:
>>> 000056378a8ad540 RDI: 0000000000000005 [  616.863803] RBP:
>>> 00007f647e23f000 R08: 0000000000000000 R09: 00007ffe387e6170 [
>>> 616.863975] R10: 00007ffe387e61b0 R11: 0000000000000293 R12:
>>> 0000000000000003 [  616.870147] R13: 00007f647e2836d8 R14:
>> 0000000000000000 R15: 000056378a8ad568 [  616.870388] INFO: task
>> fio:480 blocked for more than 245 seconds.
>>> [  616.870537]       Not tainted 5.15.0-rc1+ #516
>>> [  616.870650] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
>> disables this message.
>>> [  616.870833] task:fio             state:D stack:13216 pid:  480 ppid:
>> 474 flags:0x00000000
>>> [  616.872161] Call Trace:
>>> [  616.872257]  __schedule+0x317/0x920 [  616.872366]  ?
>>> rcu_read_lock_sched_held+0x23/0x80
>>> [  616.872503]  schedule+0x59/0xc0
>>> [  616.872596]  io_schedule+0x12/0x40
>>> [  616.872694]  __lock_page+0x141/0x230 [  616.872803]  ?
>>> filemap_invalidate_unlock_two+0x40/0x40
>>> [  616.872945]  f2fs_write_cache_pages+0x38a/0x840
>>> [  616.873331]  ? do_writepages+0xd1/0x190 [  616.873448]  ?
>>> do_writepages+0xd1/0x190 [  616.873569]
>>> f2fs_write_data_pages+0x20e/0x3f0 [  616.873694]  ?
>>> rcu_read_lock_held_common+0xe/0x40
>>> [  616.873833]  ? do_writepages+0xd1/0x190 [  616.873936]
>>> do_writepages+0xd1/0x190 [  616.874272]
>>> file_write_and_wait_range+0xa3/0xe0
>>> [  616.874424]  f2fs_do_sync_file+0x10f/0x910 [  616.874560]
>>> do_fsync+0x38/0x60 [  616.874660]  __x64_sys_fsync+0x10/0x20 [
>>> 616.874759]  do_syscall_64+0x3a/0x90 [  616.874863]
>>> entry_SYSCALL_64_after_hwframe+0x44/0xae
>>> [  616.875792] RIP: 0033:0x7f64cb26ca97 [  616.875905] RSP:
>>> 002b:00007ffe3863d180 EFLAGS: 00000293 ORIG_RAX:
>> 000000000000004a [
>>> 616.876345] RAX: ffffffffffffffda RBX: 0000000000000005 RCX:
>>> 00007f64cb26ca97 [  616.876515] RDX: 0000000000000000 RSI:
>>> 000056378a8ad540 RDI: 0000000000000005 [  616.876692] RBP:
>>> 00007f647e283a10 R08: 0000000000000000 R09: 00007ffe387e6170 [
>>> 616.876859] R10: 00007ffe387e61b0 R11: 0000000000000293 R12:
>>> 0000000000000003 [  616.877231] R13: 00007f647e2c80e8 R14:
>> 0000000000000000 R15: 000056378a8ad568 [  616.877456] INFO: lockdep
>> is turned off.
>>>> If there is, it needs to figure out the root cause.
>>>>
>>>> Thanks,
>>>>
>>>>> fio.conf:
>>>>> [global]
>>>>> direct=1
>>>>> numjobs=8
>>>>> time_based
>>>>> runtime=30
>>>>> ioengine=sync
>>>>> iodepth=16
>>>>> buffer_pattern="ZZZZ"
>>>>> fsync=1
>>>>>
>>>>> [file0]
>>>>> name=fio-rand-RW
>>>>> filename=fio-rand-RW
>>>>> rw=rw
>>>>> rwmixread=60
>>>>> rwmixwrite=40
>>>>> bs=1M
>>>>> size=64M
>>>>>
>>>>> [file1]
>>>>> name=fio-rand-RW
>>>>> filename=fio-rand-RW
>>>>> rw=randrw
>>>>> rwmixread=60
>>>>> rwmixwrite=40
>>>>> bs=4K
>>>>> size=64M
>>>>>
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>>> is ok. So I think maybe we should use another lock to allow write
>>>>>> different files in multithread.
>>>>>>>


_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [f2fs-dev] Do we need serial io for compress file?
       [not found]             ` <AOEA7wCAE-cPtwt6leizvqr0.9.1636944422386.Hmail.changfengnan@vivo.com>
@ 2021-11-15  2:56               ` 常凤楠
  2021-11-15  3:28                 ` Chao Yu
       [not found]                 ` <AHYAMQC7E1oPQCsov290cqpf.9.1636946920105.Hmail.changfengnan@vivo.com>
  0 siblings, 2 replies; 16+ messages in thread
From: 常凤楠 @ 2021-11-15  2:56 UTC (permalink / raw)
  To: Chao Yu, jaegeuk; +Cc: linux-f2fs-devel



> -----Original Message-----
> From: changfengnan@vivo.com <changfengnan@vivo.com> On Behalf Of
> Chao Yu
> Sent: Monday, November 15, 2021 10:47 AM
> To: 常凤楠 <changfengnan@vivo.com>; jaegeuk@kernel.org
> Cc: linux-f2fs-devel@lists.sourceforge.net
> Subject: Re: Do we need serial io for compress file?
> 
> On 2021/11/15 10:25, 常凤楠 wrote:
> >> -----Original Message-----
> >> From: changfengnan@vivo.com <changfengnan@vivo.com> On Behalf
> Of Chao
> >> Yu
> >> Sent: Saturday, November 13, 2021 2:21 PM
> >> To: 常凤楠 <changfengnan@vivo.com>; jaegeuk@kernel.org
> >> Cc: linux-f2fs-devel@lists.sourceforge.net
> >> Subject: Re: Do we need serial io for compress file?
> >>
> >> On 2021/11/10 9:49, 常凤楠 wrote:
> >>>
> >>>
> >>>> -----Original Message-----
> >>>> From: changfengnan@vivo.com <changfengnan@vivo.com> On
> Behalf
> >> Of Chao
> >>>> Yu
> >>>> Sent: Tuesday, November 9, 2021 9:41 PM
> >>>> To: 常凤楠 <changfengnan@vivo.com>; jaegeuk@kernel.org
> >>>> Cc: linux-f2fs-devel@lists.sourceforge.net
> >>>> Subject: Re: Do we need serial io for compress file?
> >>>>
> >>>> On 2021/11/9 9:59, 常凤楠 wrote:
> >>>>>
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: changfengnan@vivo.com <changfengnan@vivo.com> On
> >> Behalf
> >>>> Of Chao
> >>>>>> Yu
> >>>>>> Sent: Monday, November 8, 2021 10:21 PM
> >>>>>> To: 常凤楠 <changfengnan@vivo.com>; jaegeuk@kernel.org
> >>>>>> Cc: linux-f2fs-devel@lists.sourceforge.net
> >>>>>> Subject: Re: Do we need serial io for compress file?
> >>>>>>
> >>>>>> On 2021/11/8 11:54, Fengnan Chang wrote:
> >>>>>>> In my test, serial io for compress file will make multithread
> >>>>>>> small write performance drop a lot.
> >>>>>>>
> >>>>>>> I'm try to fingure out why we need __should_serialize_io, IMO,
> >>>>>>> we use __should_serialize_io to avoid deadlock or try to improve
> >>>>>>> sequential performance, but I don't understand why we should
> do
> >>>>>>> this for
> >>>>>>
> >>>>>> It was introduced to avoid fragmentation of file blocks.
> >>>>>
> >>>>> So, for small write on compress file, is this still necessary? I
> >>>>> think we
> >>>> should treat compress file as regular file.
> >>>>
> >>>> Any real scenario there? let me know if I missed any cases, as I
> >>>> saw, most compressible files are not small...
> >>>
> >>> Maybe my description is incorrect, small write means write with
> >>> small
> >> block size, for example 4K.
> >>> When write multi compress file with bs=4k in multithread,  the
> >>> serialize
> >> io make performance drop a lot.
> >>
> >> Got it, but I mean what the real usercase is... rather than benchmark
> >> scenario.
> >
> > I think it’s quite common user case if we take compress file as normal
> > file to use,
> 
> Well, I mean what's the type of file that you want to compress, and which
> apps will use them? and what's the IO model runs on compressed file?
For now, I haven't tested this with a normal app; I tested compressed files with a test app that tries to simulate the IO pattern of an actual user case.
This case simulates multithreaded SQL performance and random writes; the test files are db files and files with no extension name.
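Roughly, the write pattern looks like the sketch below: several threads doing random 4KB writes plus fsync into one file in a compression-enabled directory. This is only a simplified sketch of that pattern, not the actual test app; the mount path, file size and thread count are illustrative, and the direct=1 part of the fio job is omitted.

/*
 * Simplified sketch of the write pattern described above: several threads
 * doing random 4KB writes plus fsync into one file in a compression-enabled
 * f2fs directory.  Not the actual test app; path, sizes and thread count
 * are illustrative only (the fio job additionally sets direct=1).
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#define NR_THREADS	8
#define FILE_SIZE	(64UL << 20)	/* 64MB, as in the fio job */
#define BLK_SIZE	4096
#define NR_WRITES	4096		/* per-thread write count */

static const char *path = "/mnt/f2fs/compr/testfile.db";

static void *writer(void *arg)
{
	unsigned int seed = (unsigned int)(unsigned long)arg;
	char buf[BLK_SIZE];
	int fd, i;

	/* compressible pattern, similar to buffer_pattern="ZZZZ" */
	memset(buf, 'Z', sizeof(buf));

	fd = open(path, O_WRONLY | O_CREAT, 0644);
	if (fd < 0) {
		perror("open");
		return NULL;
	}

	for (i = 0; i < NR_WRITES; i++) {
		off_t off = (rand_r(&seed) % (FILE_SIZE / BLK_SIZE)) * BLK_SIZE;

		if (pwrite(fd, buf, BLK_SIZE, off) != BLK_SIZE)
			perror("pwrite");
		fsync(fd);		/* fsync=1 in the fio job */
	}

	close(fd);
	return NULL;
}

int main(void)
{
	pthread_t tid[NR_THREADS];
	unsigned long i;

	for (i = 0; i < NR_THREADS; i++)
		pthread_create(&tid[i], NULL, writer, (void *)i);
	for (i = 0; i < NR_THREADS; i++)
		pthread_join(tid[i], NULL);

	return 0;
}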
> 
> Thanks,
> 
> > write multi file with bs < cluster size at same time will be bothered by
> this issue.
> >
> >>
> >>>
> >>>>
> >>>>>>
> >>>>>>> compressed file. In my test, if we just remove this, write same
> >>>>>>> file in multithread will have problem, but parallel write
> >>>>>>> different files in multithread
> >>>>>>
> >>>>>> What do you mean by "write same file in multithread will have
> >>>> problem"?
> >>>>>
> >>>>> If just remove compress file in __should_serialize_io()
> >>>>>
> >>>>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c index
> >>>>> f4fd6c246c9a..7bd429b46429 100644
> >>>>> --- a/fs/f2fs/data.c
> >>>>> +++ b/fs/f2fs/data.c
> >>>>> @@ -3165,8 +3165,8 @@ static inline bool
> >>>>> __should_serialize_io(struct
> >>>> inode *inode,
> >>>>>            if (IS_NOQUOTA(inode))
> >>>>>                    return false;
> >>>>>
> >>>>> -       if (f2fs_need_compress_data(inode))
> >>>>> -               return true;
> >>>>> +       //if (f2fs_need_compress_data(inode))
> >>>>> +       //      return true;
> >>>>>            if (wbc->sync_mode != WB_SYNC_ALL)
> >>>>>                    return true;
> >>>>>            if (get_dirty_pages(inode) >=
> >>>>> SM_I(F2FS_I_SB(inode))->min_seq_blocks)
> >>>>>
> >>>>> and use fio to start multi thread to write same file, fio will hung.
> >>>>
> >>>> Any potential hangtask issue there? did you get any stack backtrace
> log?
> >>>>
> >>> Yes, it's quite easy to reproduce in my test.
> >>
> >> What's your testcase? and filesystem configuration?
> >
> > The test case as below fio conf. filesystem configure, the whole kernel
> config as attached config.rar file.
> > CONFIG_F2FS_FS=y
> > CONFIG_F2FS_STAT_FS=y
> > CONFIG_F2FS_FS_XATTR=y
> > CONFIG_F2FS_FS_POSIX_ACL=y
> > CONFIG_F2FS_FS_SECURITY=y
> > CONFIG_F2FS_CHECK_FS=y
> > CONFIG_F2FS_FAULT_INJECTION=y
> > CONFIG_F2FS_FS_COMPRESSION=y
> > CONFIG_F2FS_FS_LZO=y
> > CONFIG_F2FS_FS_LZORLE=y
> > CONFIG_F2FS_FS_LZ4=y
> > CONFIG_F2FS_FS_LZ4HC=y
> > CONFIG_F2FS_FS_ZSTD=y
> > CONFIG_F2FS_IOSTAT=y
> >>
> >> Thanks,
> >>
> >>> Backtrace:
> >>> [  493.915408] INFO: task fio:479 blocked for more than 122 seconds.
> >>> [  493.915729]       Not tainted 5.15.0-rc1+ #516
> >>> [  493.915845] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> >> disables this message.
> >>> [  493.916265] task:fio             state:D stack:13008 pid:  479
> ppid:
> >> 474 flags:0x00004000
> >>> [  493.916686] Call Trace:
> >>> [  493.917310]  __schedule+0x317/0x920 [  493.917560]  ?
> >>> rcu_read_lock_sched_held+0x23/0x80
> >>> [  493.917705]  schedule+0x59/0xc0
> >>> [  493.917798]  io_schedule+0x12/0x40 [  493.917895]
> >>> __lock_page+0x141/0x230 [  493.924703]  ?
> >>> filemap_invalidate_unlock_two+0x40/0x40
> >>> [  493.926404]  f2fs_write_multi_pages+0x1a7/0x9c0
> >>> [  493.926579]  f2fs_write_cache_pages+0x711/0x840
> >>> [  493.926771]  f2fs_write_data_pages+0x20e/0x3f0
> [  493.926906]  ?
> >>> rcu_read_lock_held_common+0xe/0x40
> >>> [  493.928305]  ? do_writepages+0xd1/0x190 [  493.928439]
> >>> do_writepages+0xd1/0x190 [  493.928551]
> >>> file_write_and_wait_range+0xa3/0xe0
> >>> [  493.928690]  f2fs_do_sync_file+0x10f/0x910 [  493.928823]
> >>> do_fsync+0x38/0x60 [  493.928924]  __x64_sys_fsync+0x10/0x20 [
> >>> 493.929208]  do_syscall_64+0x3a/0x90 [  493.929327]
> >>> entry_SYSCALL_64_after_hwframe+0x44/0xae
> >>> [  493.929486] RIP: 0033:0x7f64cb26ca97 [  493.929636] RSP:
> >>> 002b:00007ffe3863d180 EFLAGS: 00000293 ORIG_RAX:
> >> 000000000000004a [
> >>> 493.929816] RAX: ffffffffffffffda RBX: 0000000000000005 RCX:
> >>> 00007f64cb26ca97 [  493.929975] RDX: 0000000000000000 RSI:
> >>> 000056378a8ad540 RDI: 0000000000000005 [  493.931164] RBP:
> >>> 00007f647e23f000 R08: 0000000000000000 R09: 00007ffe387e6170 [
> >>> 493.931347] R10: 00007ffe387e61b0 R11: 0000000000000293 R12:
> >>> 0000000000000003 [  493.931517] R13: 00007f647e2836d8 R14:
> >> 0000000000000000 R15: 000056378a8ad568 [  493.931802] INFO: task
> >> fio:480 blocked for more than 122 seconds.
> >>> [  493.931946]       Not tainted 5.15.0-rc1+ #516
> >>> [  493.932202] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> >> disables this message.
> >>> [  493.932382] task:fio             state:D stack:13216 pid:  480
> ppid:
> >> 474 flags:0x00000000
> >>> [  493.932596] Call Trace:
> >>> [  493.932681]  __schedule+0x317/0x920 [  493.932796]  ?
> >>> rcu_read_lock_sched_held+0x23/0x80
> >>> [  493.932939]  schedule+0x59/0xc0
> >>> [  493.933180]  io_schedule+0x12/0x40 [  493.933285]
> >>> __lock_page+0x141/0x230 [  493.933391]  ?
> >>> filemap_invalidate_unlock_two+0x40/0x40
> >>> [  493.933539]  f2fs_write_cache_pages+0x38a/0x840
> >>> [  493.933708]  ? do_writepages+0xd1/0x190 [  493.933821]  ?
> >>> do_writepages+0xd1/0x190 [  493.933939]
> >>> f2fs_write_data_pages+0x20e/0x3f0 [  493.935516]  ?
> >>> rcu_read_lock_held_common+0xe/0x40
> >>> [  493.935682]  ? do_writepages+0xd1/0x190 [  493.935790]
> >>> do_writepages+0xd1/0x190 [  493.935895]
> >>> file_write_and_wait_range+0xa3/0xe0
> >>> [  493.938074]  f2fs_do_sync_file+0x10f/0x910 [  493.938225]
> >>> do_fsync+0x38/0x60 [  493.938328]  __x64_sys_fsync+0x10/0x20 [
> >>> 493.938430]  do_syscall_64+0x3a/0x90 [  493.938548]
> >>> entry_SYSCALL_64_after_hwframe+0x44/0xae
> >>> [  493.938683] RIP: 0033:0x7f64cb26ca97 [  493.938783] RSP:
> >>> 002b:00007ffe3863d180 EFLAGS: 00000293 ORIG_RAX:
> >> 000000000000004a [
> >>> 493.938969] RAX: ffffffffffffffda RBX: 0000000000000005 RCX:
> >>> 00007f64cb26ca97 [  493.940413] RDX: 0000000000000000 RSI:
> >>> 000056378a8ad540 RDI: 0000000000000005 [  493.940591] RBP:
> >>> 00007f647e283a10 R08: 0000000000000000 R09: 00007ffe387e6170 [
> >>> 493.940759] R10: 00007ffe387e61b0 R11: 0000000000000293 R12:
> >>> 0000000000000003 [  493.940924] R13: 00007f647e2c80e8 R14:
> >> 0000000000000000 R15: 000056378a8ad568 [  493.941341] INFO:
> lockdep
> >> is turned off.
> >>> [  616.796413] INFO: task fio:479 blocked for more than 245 seconds.
> >>> [  616.796706]       Not tainted 5.15.0-rc1+ #516
> >>> [  616.796824] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> >> disables this message.
> >>> [  616.827081] task:fio             state:D stack:13008 pid:  479
> ppid:
> >> 474 flags:0x00004000
> >>> [  616.841373] Call Trace:
> >>> [  616.841492]  __schedule+0x317/0x920 [  616.841635]  ?
> >>> rcu_read_lock_sched_held+0x23/0x80
> >>> [  616.841767]  schedule+0x59/0xc0
> >>> [  616.841858]  io_schedule+0x12/0x40 [  616.841952]
> >>> __lock_page+0x141/0x230 [  616.852135]  ?
> >>> filemap_invalidate_unlock_two+0x40/0x40
> >>> [  616.852305]  f2fs_write_multi_pages+0x1a7/0x9c0
> >>> [  616.852455]  f2fs_write_cache_pages+0x711/0x840
> >>> [  616.852661]  f2fs_write_data_pages+0x20e/0x3f0
> [  616.852785]  ?
> >>> rcu_read_lock_held_common+0xe/0x40
> >>> [  616.852924]  ? do_writepages+0xd1/0x190 [  616.853281]
> >>> do_writepages+0xd1/0x190 [  616.853409]
> >>> file_write_and_wait_range+0xa3/0xe0
> >>> [  616.853537]  f2fs_do_sync_file+0x10f/0x910 [  616.853670]
> >>> do_fsync+0x38/0x60 [  616.853773]  __x64_sys_fsync+0x10/0x20 [
> >>> 616.853869]  do_syscall_64+0x3a/0x90 [  616.853971]
> >>> entry_SYSCALL_64_after_hwframe+0x44/0xae
> >>> [  616.863145] RIP: 0033:0x7f64cb26ca97 [  616.863273] RSP:
> >>> 002b:00007ffe3863d180 EFLAGS: 00000293 ORIG_RAX:
> >> 000000000000004a [
> >>> 616.863468] RAX: ffffffffffffffda RBX: 0000000000000005 RCX:
> >>> 00007f64cb26ca97 [  616.863635] RDX: 0000000000000000 RSI:
> >>> 000056378a8ad540 RDI: 0000000000000005 [  616.863803] RBP:
> >>> 00007f647e23f000 R08: 0000000000000000 R09: 00007ffe387e6170 [
> >>> 616.863975] R10: 00007ffe387e61b0 R11: 0000000000000293 R12:
> >>> 0000000000000003 [  616.870147] R13: 00007f647e2836d8 R14:
> >> 0000000000000000 R15: 000056378a8ad568 [  616.870388] INFO: task
> >> fio:480 blocked for more than 245 seconds.
> >>> [  616.870537]       Not tainted 5.15.0-rc1+ #516
> >>> [  616.870650] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> >> disables this message.
> >>> [  616.870833] task:fio             state:D stack:13216 pid:  480
> ppid:
> >> 474 flags:0x00000000
> >>> [  616.872161] Call Trace:
> >>> [  616.872257]  __schedule+0x317/0x920 [  616.872366]  ?
> >>> rcu_read_lock_sched_held+0x23/0x80
> >>> [  616.872503]  schedule+0x59/0xc0
> >>> [  616.872596]  io_schedule+0x12/0x40 [  616.872694]
> >>> __lock_page+0x141/0x230 [  616.872803]  ?
> >>> filemap_invalidate_unlock_two+0x40/0x40
> >>> [  616.872945]  f2fs_write_cache_pages+0x38a/0x840
> >>> [  616.873331]  ? do_writepages+0xd1/0x190 [  616.873448]  ?
> >>> do_writepages+0xd1/0x190 [  616.873569]
> >>> f2fs_write_data_pages+0x20e/0x3f0 [  616.873694]  ?
> >>> rcu_read_lock_held_common+0xe/0x40
> >>> [  616.873833]  ? do_writepages+0xd1/0x190 [  616.873936]
> >>> do_writepages+0xd1/0x190 [  616.874272]
> >>> file_write_and_wait_range+0xa3/0xe0
> >>> [  616.874424]  f2fs_do_sync_file+0x10f/0x910 [  616.874560]
> >>> do_fsync+0x38/0x60 [  616.874660]  __x64_sys_fsync+0x10/0x20 [
> >>> 616.874759]  do_syscall_64+0x3a/0x90 [  616.874863]
> >>> entry_SYSCALL_64_after_hwframe+0x44/0xae
> >>> [  616.875792] RIP: 0033:0x7f64cb26ca97 [  616.875905] RSP:
> >>> 002b:00007ffe3863d180 EFLAGS: 00000293 ORIG_RAX:
> >> 000000000000004a [
> >>> 616.876345] RAX: ffffffffffffffda RBX: 0000000000000005 RCX:
> >>> 00007f64cb26ca97 [  616.876515] RDX: 0000000000000000 RSI:
> >>> 000056378a8ad540 RDI: 0000000000000005 [  616.876692] RBP:
> >>> 00007f647e283a10 R08: 0000000000000000 R09: 00007ffe387e6170 [
> >>> 616.876859] R10: 00007ffe387e61b0 R11: 0000000000000293 R12:
> >>> 0000000000000003 [  616.877231] R13: 00007f647e2c80e8 R14:
> >> 0000000000000000 R15: 000056378a8ad568 [  616.877456] INFO:
> lockdep
> >> is turned off.
> >>>> If there is, it needs to figure out the root cause.
> >>>>
> >>>> Thanks,
> >>>>
> >>>>> fio.conf:
> >>>>> [global]
> >>>>> direct=1
> >>>>> numjobs=8
> >>>>> time_based
> >>>>> runtime=30
> >>>>> ioengine=sync
> >>>>> iodepth=16
> >>>>> buffer_pattern="ZZZZ"
> >>>>> fsync=1
> >>>>>
> >>>>> [file0]
> >>>>> name=fio-rand-RW
> >>>>> filename=fio-rand-RW
> >>>>> rw=rw
> >>>>> rwmixread=60
> >>>>> rwmixwrite=40
> >>>>> bs=1M
> >>>>> size=64M
> >>>>>
> >>>>> [file1]
> >>>>> name=fio-rand-RW
> >>>>> filename=fio-rand-RW
> >>>>> rw=randrw
> >>>>> rwmixread=60
> >>>>> rwmixwrite=40
> >>>>> bs=4K
> >>>>> size=64M
> >>>>>
> >>>>>>
> >>>>>> Thanks,
> >>>>>>
> >>>>>>> is ok. So I think maybe we should use another lock to allow
> >>>>>>> write
> >>>>>> different files in multithread.
> >>>>>>>

_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [f2fs-dev] Do we need serial io for compress file?
  2021-11-15  2:56               ` 常凤楠
@ 2021-11-15  3:28                 ` Chao Yu
       [not found]                 ` <AHYAMQC7E1oPQCsov290cqpf.9.1636946920105.Hmail.changfengnan@vivo.com>
  1 sibling, 0 replies; 16+ messages in thread
From: Chao Yu @ 2021-11-15  3:28 UTC (permalink / raw)
  To: 常凤楠, jaegeuk; +Cc: linux-f2fs-devel

On 2021/11/15 10:56, 常凤楠 wrote:
> 
> 
>> -----Original Message-----
>> From: changfengnan@vivo.com <changfengnan@vivo.com> On Behalf Of
>> Chao Yu
>> Sent: Monday, November 15, 2021 10:47 AM
>> To: 常凤楠 <changfengnan@vivo.com>; jaegeuk@kernel.org
>> Cc: linux-f2fs-devel@lists.sourceforge.net
>> Subject: Re: Do we need serial io for compress file?
>>
>> On 2021/11/15 10:25, 常凤楠 wrote:
>>>> -----Original Message-----
>>>> From: changfengnan@vivo.com <changfengnan@vivo.com> On Behalf
>> Of Chao
>>>> Yu
>>>> Sent: Saturday, November 13, 2021 2:21 PM
>>>> To: 常凤楠 <changfengnan@vivo.com>; jaegeuk@kernel.org
>>>> Cc: linux-f2fs-devel@lists.sourceforge.net
>>>> Subject: Re: Do we need serial io for compress file?
>>>>
>>>> On 2021/11/10 9:49, 常凤楠 wrote:
>>>>>
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: changfengnan@vivo.com <changfengnan@vivo.com> On
>> Behalf
>>>> Of Chao
>>>>>> Yu
>>>>>> Sent: Tuesday, November 9, 2021 9:41 PM
>>>>>> To: 常凤楠 <changfengnan@vivo.com>; jaegeuk@kernel.org
>>>>>> Cc: linux-f2fs-devel@lists.sourceforge.net
>>>>>> Subject: Re: Do we need serial io for compress file?
>>>>>>
>>>>>> On 2021/11/9 9:59, 常凤楠 wrote:
>>>>>>>
>>>>>>>
>>>>>>>> -----Original Message-----
>>>>>>>> From: changfengnan@vivo.com <changfengnan@vivo.com> On
>>>> Behalf
>>>>>> Of Chao
>>>>>>>> Yu
>>>>>>>> Sent: Monday, November 8, 2021 10:21 PM
>>>>>>>> To: 常凤楠 <changfengnan@vivo.com>; jaegeuk@kernel.org
>>>>>>>> Cc: linux-f2fs-devel@lists.sourceforge.net
>>>>>>>> Subject: Re: Do we need serial io for compress file?
>>>>>>>>
>>>>>>>> On 2021/11/8 11:54, Fengnan Chang wrote:
>>>>>>>>> In my test, serial io for compress file will make multithread
>>>>>>>>> small write performance drop a lot.
>>>>>>>>>
>>>>>>>>> I'm try to fingure out why we need __should_serialize_io, IMO,
>>>>>>>>> we use __should_serialize_io to avoid deadlock or try to improve
>>>>>>>>> sequential performance, but I don't understand why we should
>> do
>>>>>>>>> this for
>>>>>>>>
>>>>>>>> It was introduced to avoid fragmentation of file blocks.
>>>>>>>
>>>>>>> So, for small write on compress file, is this still necessary? I
>>>>>>> think we
>>>>>> should treat compress file as regular file.
>>>>>>
>>>>>> Any real scenario there? let me know if I missed any cases, as I
>>>>>> saw, most compressible files are not small...
>>>>>
>>>>> Maybe my description is incorrect, small write means write with
>>>>> small
>>>> block size, for example 4K.
>>>>> When write multi compress file with bs=4k in multithread,  the
>>>>> serialize
>>>> io make performance drop a lot.
>>>>
>>>> Got it, but I mean what the real usercase is... rather than benchmark
>>>> scenario.
>>>
>>> I think it’s quite common user case if we take compress file as normal
>>> file to use,
>>
>> Well, I mean what's the type of file that you want to compress, and which
>> apps will use them? and what's the IO model runs on compressed file?
> For now, I haven't test this on normal app, I test compress file on a test app, it's try to simulate IO type of actual user case.
> This case is try to simulate multithread SQL performance and random write, the test file is db and no extension name file.

Oh, IMO, SQLite is not a target application scenario for f2fs's compression functionality.

It may suffer a performance regression due to write amplification during SQLite's
random writes.
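As a rough illustration (assuming the default cluster size of 4 blocks, i.e. 16KB): a single 4KB random overwrite that lands inside a compressed cluster dirties the whole cluster, so on the order of 16KB may need to be recompressed and written back for 4KB of user data, roughly 4x amplification before any other log-structured overhead.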

Thanks,

>>
>> Thanks,
>>
>>> write multi file with bs < cluster size at same time will be bothered by
>> this issue.
>>>
>>>>
>>>>>
>>>>>>
>>>>>>>>
>>>>>>>>> compressed file. In my test, if we just remove this, write same
>>>>>>>>> file in multithread will have problem, but parallel write
>>>>>>>>> different files in multithread
>>>>>>>>
>>>>>>>> What do you mean by "write same file in multithread will have
>>>>>> problem"?
>>>>>>>
>>>>>>> If just remove compress file in __should_serialize_io()
>>>>>>>
>>>>>>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c index
>>>>>>> f4fd6c246c9a..7bd429b46429 100644
>>>>>>> --- a/fs/f2fs/data.c
>>>>>>> +++ b/fs/f2fs/data.c
>>>>>>> @@ -3165,8 +3165,8 @@ static inline bool
>>>>>>> __should_serialize_io(struct
>>>>>> inode *inode,
>>>>>>>             if (IS_NOQUOTA(inode))
>>>>>>>                     return false;
>>>>>>>
>>>>>>> -       if (f2fs_need_compress_data(inode))
>>>>>>> -               return true;
>>>>>>> +       //if (f2fs_need_compress_data(inode))
>>>>>>> +       //      return true;
>>>>>>>             if (wbc->sync_mode != WB_SYNC_ALL)
>>>>>>>                     return true;
>>>>>>>             if (get_dirty_pages(inode) >=
>>>>>>> SM_I(F2FS_I_SB(inode))->min_seq_blocks)
>>>>>>>
>>>>>>> and use fio to start multi thread to write same file, fio will hung.
>>>>>>
>>>>>> Any potential hangtask issue there? did you get any stack backtrace
>> log?
>>>>>>
>>>>> Yes, it's quite easy to reproduce in my test.
>>>>
>>>> What's your testcase? and filesystem configuration?
>>>
>>> The test case as below fio conf. filesystem configure, the whole kernel
>> config as attached config.rar file.
>>> CONFIG_F2FS_FS=y
>>> CONFIG_F2FS_STAT_FS=y
>>> CONFIG_F2FS_FS_XATTR=y
>>> CONFIG_F2FS_FS_POSIX_ACL=y
>>> CONFIG_F2FS_FS_SECURITY=y
>>> CONFIG_F2FS_CHECK_FS=y
>>> CONFIG_F2FS_FAULT_INJECTION=y
>>> CONFIG_F2FS_FS_COMPRESSION=y
>>> CONFIG_F2FS_FS_LZO=y
>>> CONFIG_F2FS_FS_LZORLE=y
>>> CONFIG_F2FS_FS_LZ4=y
>>> CONFIG_F2FS_FS_LZ4HC=y
>>> CONFIG_F2FS_FS_ZSTD=y
>>> CONFIG_F2FS_IOSTAT=y
>>>>
>>>> Thanks,
>>>>
>>>>> Backtrace:
>>>>> [  493.915408] INFO: task fio:479 blocked for more than 122 seconds.
>>>>> [  493.915729]       Not tainted 5.15.0-rc1+ #516
>>>>> [  493.915845] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
>>>> disables this message.
>>>>> [  493.916265] task:fio             state:D stack:13008 pid:  479
>> ppid:
>>>> 474 flags:0x00004000
>>>>> [  493.916686] Call Trace:
>>>>> [  493.917310]  __schedule+0x317/0x920 [  493.917560]  ?
>>>>> rcu_read_lock_sched_held+0x23/0x80
>>>>> [  493.917705]  schedule+0x59/0xc0
>>>>> [  493.917798]  io_schedule+0x12/0x40 [  493.917895]
>>>>> __lock_page+0x141/0x230 [  493.924703]  ?
>>>>> filemap_invalidate_unlock_two+0x40/0x40
>>>>> [  493.926404]  f2fs_write_multi_pages+0x1a7/0x9c0
>>>>> [  493.926579]  f2fs_write_cache_pages+0x711/0x840
>>>>> [  493.926771]  f2fs_write_data_pages+0x20e/0x3f0
>> [  493.926906]  ?
>>>>> rcu_read_lock_held_common+0xe/0x40
>>>>> [  493.928305]  ? do_writepages+0xd1/0x190 [  493.928439]
>>>>> do_writepages+0xd1/0x190 [  493.928551]
>>>>> file_write_and_wait_range+0xa3/0xe0
>>>>> [  493.928690]  f2fs_do_sync_file+0x10f/0x910 [  493.928823]
>>>>> do_fsync+0x38/0x60 [  493.928924]  __x64_sys_fsync+0x10/0x20 [
>>>>> 493.929208]  do_syscall_64+0x3a/0x90 [  493.929327]
>>>>> entry_SYSCALL_64_after_hwframe+0x44/0xae
>>>>> [  493.929486] RIP: 0033:0x7f64cb26ca97 [  493.929636] RSP:
>>>>> 002b:00007ffe3863d180 EFLAGS: 00000293 ORIG_RAX:
>>>> 000000000000004a [
>>>>> 493.929816] RAX: ffffffffffffffda RBX: 0000000000000005 RCX:
>>>>> 00007f64cb26ca97 [  493.929975] RDX: 0000000000000000 RSI:
>>>>> 000056378a8ad540 RDI: 0000000000000005 [  493.931164] RBP:
>>>>> 00007f647e23f000 R08: 0000000000000000 R09: 00007ffe387e6170 [
>>>>> 493.931347] R10: 00007ffe387e61b0 R11: 0000000000000293 R12:
>>>>> 0000000000000003 [  493.931517] R13: 00007f647e2836d8 R14:
>>>> 0000000000000000 R15: 000056378a8ad568 [  493.931802] INFO: task
>>>> fio:480 blocked for more than 122 seconds.
>>>>> [  493.931946]       Not tainted 5.15.0-rc1+ #516
>>>>> [  493.932202] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
>>>> disables this message.
>>>>> [  493.932382] task:fio             state:D stack:13216 pid:  480
>> ppid:
>>>> 474 flags:0x00000000
>>>>> [  493.932596] Call Trace:
>>>>> [  493.932681]  __schedule+0x317/0x920 [  493.932796]  ?
>>>>> rcu_read_lock_sched_held+0x23/0x80
>>>>> [  493.932939]  schedule+0x59/0xc0
>>>>> [  493.933180]  io_schedule+0x12/0x40 [  493.933285]
>>>>> __lock_page+0x141/0x230 [  493.933391]  ?
>>>>> filemap_invalidate_unlock_two+0x40/0x40
>>>>> [  493.933539]  f2fs_write_cache_pages+0x38a/0x840
>>>>> [  493.933708]  ? do_writepages+0xd1/0x190 [  493.933821]  ?
>>>>> do_writepages+0xd1/0x190 [  493.933939]
>>>>> f2fs_write_data_pages+0x20e/0x3f0 [  493.935516]  ?
>>>>> rcu_read_lock_held_common+0xe/0x40
>>>>> [  493.935682]  ? do_writepages+0xd1/0x190 [  493.935790]
>>>>> do_writepages+0xd1/0x190 [  493.935895]
>>>>> file_write_and_wait_range+0xa3/0xe0
>>>>> [  493.938074]  f2fs_do_sync_file+0x10f/0x910 [  493.938225]
>>>>> do_fsync+0x38/0x60 [  493.938328]  __x64_sys_fsync+0x10/0x20 [
>>>>> 493.938430]  do_syscall_64+0x3a/0x90 [  493.938548]
>>>>> entry_SYSCALL_64_after_hwframe+0x44/0xae
>>>>> [  493.938683] RIP: 0033:0x7f64cb26ca97 [  493.938783] RSP:
>>>>> 002b:00007ffe3863d180 EFLAGS: 00000293 ORIG_RAX:
>>>> 000000000000004a [
>>>>> 493.938969] RAX: ffffffffffffffda RBX: 0000000000000005 RCX:
>>>>> 00007f64cb26ca97 [  493.940413] RDX: 0000000000000000 RSI:
>>>>> 000056378a8ad540 RDI: 0000000000000005 [  493.940591] RBP:
>>>>> 00007f647e283a10 R08: 0000000000000000 R09: 00007ffe387e6170 [
>>>>> 493.940759] R10: 00007ffe387e61b0 R11: 0000000000000293 R12:
>>>>> 0000000000000003 [  493.940924] R13: 00007f647e2c80e8 R14:
>>>> 0000000000000000 R15: 000056378a8ad568 [  493.941341] INFO:
>> lockdep
>>>> is turned off.
>>>>> [  616.796413] INFO: task fio:479 blocked for more than 245 seconds.
>>>>> [  616.796706]       Not tainted 5.15.0-rc1+ #516
>>>>> [  616.796824] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
>>>> disables this message.
>>>>> [  616.827081] task:fio             state:D stack:13008 pid:  479
>> ppid:
>>>> 474 flags:0x00004000
>>>>> [  616.841373] Call Trace:
>>>>> [  616.841492]  __schedule+0x317/0x920 [  616.841635]  ?
>>>>> rcu_read_lock_sched_held+0x23/0x80
>>>>> [  616.841767]  schedule+0x59/0xc0
>>>>> [  616.841858]  io_schedule+0x12/0x40 [  616.841952]
>>>>> __lock_page+0x141/0x230 [  616.852135]  ?
>>>>> filemap_invalidate_unlock_two+0x40/0x40
>>>>> [  616.852305]  f2fs_write_multi_pages+0x1a7/0x9c0
>>>>> [  616.852455]  f2fs_write_cache_pages+0x711/0x840
>>>>> [  616.852661]  f2fs_write_data_pages+0x20e/0x3f0
>> [  616.852785]  ?
>>>>> rcu_read_lock_held_common+0xe/0x40
>>>>> [  616.852924]  ? do_writepages+0xd1/0x190 [  616.853281]
>>>>> do_writepages+0xd1/0x190 [  616.853409]
>>>>> file_write_and_wait_range+0xa3/0xe0
>>>>> [  616.853537]  f2fs_do_sync_file+0x10f/0x910 [  616.853670]
>>>>> do_fsync+0x38/0x60 [  616.853773]  __x64_sys_fsync+0x10/0x20 [
>>>>> 616.853869]  do_syscall_64+0x3a/0x90 [  616.853971]
>>>>> entry_SYSCALL_64_after_hwframe+0x44/0xae
>>>>> [  616.863145] RIP: 0033:0x7f64cb26ca97 [  616.863273] RSP:
>>>>> 002b:00007ffe3863d180 EFLAGS: 00000293 ORIG_RAX:
>>>> 000000000000004a [
>>>>> 616.863468] RAX: ffffffffffffffda RBX: 0000000000000005 RCX:
>>>>> 00007f64cb26ca97 [  616.863635] RDX: 0000000000000000 RSI:
>>>>> 000056378a8ad540 RDI: 0000000000000005 [  616.863803] RBP:
>>>>> 00007f647e23f000 R08: 0000000000000000 R09: 00007ffe387e6170 [
>>>>> 616.863975] R10: 00007ffe387e61b0 R11: 0000000000000293 R12:
>>>>> 0000000000000003 [  616.870147] R13: 00007f647e2836d8 R14:
>>>> 0000000000000000 R15: 000056378a8ad568 [  616.870388] INFO: task
>>>> fio:480 blocked for more than 245 seconds.
>>>>> [  616.870537]       Not tainted 5.15.0-rc1+ #516
>>>>> [  616.870650] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
>>>> disables this message.
>>>>> [  616.870833] task:fio             state:D stack:13216 pid:  480
>> ppid:
>>>> 474 flags:0x00000000
>>>>> [  616.872161] Call Trace:
>>>>> [  616.872257]  __schedule+0x317/0x920 [  616.872366]  ?
>>>>> rcu_read_lock_sched_held+0x23/0x80
>>>>> [  616.872503]  schedule+0x59/0xc0
>>>>> [  616.872596]  io_schedule+0x12/0x40 [  616.872694]
>>>>> __lock_page+0x141/0x230 [  616.872803]  ?
>>>>> filemap_invalidate_unlock_two+0x40/0x40
>>>>> [  616.872945]  f2fs_write_cache_pages+0x38a/0x840
>>>>> [  616.873331]  ? do_writepages+0xd1/0x190 [  616.873448]  ?
>>>>> do_writepages+0xd1/0x190 [  616.873569]
>>>>> f2fs_write_data_pages+0x20e/0x3f0 [  616.873694]  ?
>>>>> rcu_read_lock_held_common+0xe/0x40
>>>>> [  616.873833]  ? do_writepages+0xd1/0x190 [  616.873936]
>>>>> do_writepages+0xd1/0x190 [  616.874272]
>>>>> file_write_and_wait_range+0xa3/0xe0
>>>>> [  616.874424]  f2fs_do_sync_file+0x10f/0x910 [  616.874560]
>>>>> do_fsync+0x38/0x60 [  616.874660]  __x64_sys_fsync+0x10/0x20 [
>>>>> 616.874759]  do_syscall_64+0x3a/0x90 [  616.874863]
>>>>> entry_SYSCALL_64_after_hwframe+0x44/0xae
>>>>> [  616.875792] RIP: 0033:0x7f64cb26ca97 [  616.875905] RSP:
>>>>> 002b:00007ffe3863d180 EFLAGS: 00000293 ORIG_RAX:
>>>> 000000000000004a [
>>>>> 616.876345] RAX: ffffffffffffffda RBX: 0000000000000005 RCX:
>>>>> 00007f64cb26ca97 [  616.876515] RDX: 0000000000000000 RSI:
>>>>> 000056378a8ad540 RDI: 0000000000000005 [  616.876692] RBP:
>>>>> 00007f647e283a10 R08: 0000000000000000 R09: 00007ffe387e6170 [
>>>>> 616.876859] R10: 00007ffe387e61b0 R11: 0000000000000293 R12:
>>>>> 0000000000000003 [  616.877231] R13: 00007f647e2c80e8 R14:
>>>> 0000000000000000 R15: 000056378a8ad568 [  616.877456] INFO:
>> lockdep
>>>> is turned off.
>>>>>> If there is, it needs to figure out the root cause.
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>>> fio.conf:
>>>>>>> [global]
>>>>>>> direct=1
>>>>>>> numjobs=8
>>>>>>> time_based
>>>>>>> runtime=30
>>>>>>> ioengine=sync
>>>>>>> iodepth=16
>>>>>>> buffer_pattern="ZZZZ"
>>>>>>> fsync=1
>>>>>>>
>>>>>>> [file0]
>>>>>>> name=fio-rand-RW
>>>>>>> filename=fio-rand-RW
>>>>>>> rw=rw
>>>>>>> rwmixread=60
>>>>>>> rwmixwrite=40
>>>>>>> bs=1M
>>>>>>> size=64M
>>>>>>>
>>>>>>> [file1]
>>>>>>> name=fio-rand-RW
>>>>>>> filename=fio-rand-RW
>>>>>>> rw=randrw
>>>>>>> rwmixread=60
>>>>>>> rwmixwrite=40
>>>>>>> bs=4K
>>>>>>> size=64M
>>>>>>>
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>>
>>>>>>>>> is ok. So I think maybe we should use another lock to allow
>>>>>>>>> write
>>>>>>>> different files in multithread.
>>>>>>>>>


_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [f2fs-dev] Do we need serial io for compress file?
       [not found]                 ` <AHYAMQC7E1oPQCsov290cqpf.9.1636946920105.Hmail.changfengnan@vivo.com>
@ 2021-11-15  3:51                   ` 常凤楠
  0 siblings, 0 replies; 16+ messages in thread
From: 常凤楠 @ 2021-11-15  3:51 UTC (permalink / raw)
  To: Chao Yu, jaegeuk; +Cc: linux-f2fs-devel



> -----Original Message-----
> From: changfengnan@vivo.com <changfengnan@vivo.com> On Behalf Of
> Chao Yu
> Sent: Monday, November 15, 2021 11:29 AM
> To: 常凤楠 <changfengnan@vivo.com>; jaegeuk@kernel.org
> Cc: linux-f2fs-devel@lists.sourceforge.net
> Subject: Re: Do we need serial io for compress file?
> 
> On 2021/11/15 10:56, 常凤楠 wrote:
> >
> >
> >> -----Original Message-----
> >> From: changfengnan@vivo.com <changfengnan@vivo.com> On Behalf
> Of Chao
> >> Yu
> >> Sent: Monday, November 15, 2021 10:47 AM
> >> To: 常凤楠 <changfengnan@vivo.com>; jaegeuk@kernel.org
> >> Cc: linux-f2fs-devel@lists.sourceforge.net
> >> Subject: Re: Do we need serial io for compress file?
> >>
> >> On 2021/11/15 10:25, 常凤楠 wrote:
> >>>> -----Original Message-----
> >>>> From: changfengnan@vivo.com <changfengnan@vivo.com> On
> Behalf
> >> Of Chao
> >>>> Yu
> >>>> Sent: Saturday, November 13, 2021 2:21 PM
> >>>> To: 常凤楠 <changfengnan@vivo.com>; jaegeuk@kernel.org
> >>>> Cc: linux-f2fs-devel@lists.sourceforge.net
> >>>> Subject: Re: Do we need serial io for compress file?
> >>>>
> >>>> On 2021/11/10 9:49, 常凤楠 wrote:
> >>>>>
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: changfengnan@vivo.com <changfengnan@vivo.com> On
> >> Behalf
> >>>> Of Chao
> >>>>>> Yu
> >>>>>> Sent: Tuesday, November 9, 2021 9:41 PM
> >>>>>> To: 常凤楠 <changfengnan@vivo.com>; jaegeuk@kernel.org
> >>>>>> Cc: linux-f2fs-devel@lists.sourceforge.net
> >>>>>> Subject: Re: Do we need serial io for compress file?
> >>>>>>
> >>>>>> On 2021/11/9 9:59, 常凤楠 wrote:
> >>>>>>>
> >>>>>>>
> >>>>>>>> -----Original Message-----
> >>>>>>>> From: changfengnan@vivo.com <changfengnan@vivo.com> On
> >>>> Behalf
> >>>>>> Of Chao
> >>>>>>>> Yu
> >>>>>>>> Sent: Monday, November 8, 2021 10:21 PM
> >>>>>>>> To: 常凤楠 <changfengnan@vivo.com>; jaegeuk@kernel.org
> >>>>>>>> Cc: linux-f2fs-devel@lists.sourceforge.net
> >>>>>>>> Subject: Re: Do we need serial io for compress file?
> >>>>>>>>
> >>>>>>>> On 2021/11/8 11:54, Fengnan Chang wrote:
> >>>>>>>>> In my test, serial io for compress file will make multithread
> >>>>>>>>> small write performance drop a lot.
> >>>>>>>>>
> >>>>>>>>> I'm try to fingure out why we need __should_serialize_io, IMO,
> >>>>>>>>> we use __should_serialize_io to avoid deadlock or try to
> >>>>>>>>> improve sequential performance, but I don't understand why
> we
> >>>>>>>>> should
> >> do
> >>>>>>>>> this for
> >>>>>>>>
> >>>>>>>> It was introduced to avoid fragmentation of file blocks.
> >>>>>>>
> >>>>>>> So, for small write on compress file, is this still necessary? I
> >>>>>>> think we
> >>>>>> should treat compress file as regular file.
> >>>>>>
> >>>>>> Any real scenario there? let me know if I missed any cases, as I
> >>>>>> saw, most compressible files are not small...
> >>>>>
> >>>>> Maybe my description is incorrect, small write means write with
> >>>>> small
> >>>> block size, for example 4K.
> >>>>> When write multi compress file with bs=4k in multithread,  the
> >>>>> serialize
> >>>> io make performance drop a lot.
> >>>>
> >>>> Got it, but I mean what the real usercase is... rather than
> >>>> benchmark scenario.
> >>>
> >>> I think it’s quite common user case if we take compress file as
> >>> normal file to use,
> >>
> >> Well, I mean what's the type of file that you want to compress, and
> >> which apps will use them? and what's the IO model runs on
> compressed file?
> > For now, I haven't test this on normal app, I test compress file on a test
> app, it's try to simulate IO type of actual user case.
> > This case is try to simulate multithread SQL performance and random
> write, the test file is db and no extension name file.
> 
> Oh, IMO, SQLite is not target application scenario of f2fs compression
> functionality.
> 
> It may suffer performance regression due to write amplification during
> SQLite's random write.

Um, for now it may not be worth it, but it may become a bottleneck in the future. I still think fixing this is necessary if we can avoid fragmentation at the same time.
Maybe we should discuss this later.

Thanks.
> 
> Thanks,
> 
> >>
> >> Thanks,
> >>
> >>> write multi file with bs < cluster size at same time will be
> >>> bothered by
> >> this issue.
> >>>
> >>>>
> >>>>>
> >>>>>>
> >>>>>>>>
> >>>>>>>>> compressed file. In my test, if we just remove this, write
> >>>>>>>>> same file in multithread will have problem, but parallel write
> >>>>>>>>> different files in multithread
> >>>>>>>>
> >>>>>>>> What do you mean by "write same file in multithread will have
> >>>>>> problem"?
> >>>>>>>
> >>>>>>> If just remove compress file in __should_serialize_io()
> >>>>>>>
> >>>>>>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c index
> >>>>>>> f4fd6c246c9a..7bd429b46429 100644
> >>>>>>> --- a/fs/f2fs/data.c
> >>>>>>> +++ b/fs/f2fs/data.c
> >>>>>>> @@ -3165,8 +3165,8 @@ static inline bool
> >>>>>>> __should_serialize_io(struct
> >>>>>> inode *inode,
> >>>>>>>             if (IS_NOQUOTA(inode))
> >>>>>>>                     return false;
> >>>>>>>
> >>>>>>> -       if (f2fs_need_compress_data(inode))
> >>>>>>> -               return true;
> >>>>>>> +       //if (f2fs_need_compress_data(inode))
> >>>>>>> +       //      return true;
> >>>>>>>             if (wbc->sync_mode != WB_SYNC_ALL)
> >>>>>>>                     return true;
> >>>>>>>             if (get_dirty_pages(inode) >=
> >>>>>>> SM_I(F2FS_I_SB(inode))->min_seq_blocks)
> >>>>>>>
> >>>>>>> and use fio to start multi thread to write same file, fio will hung.
> >>>>>>
> >>>>>> Any potential hangtask issue there? did you get any stack
> >>>>>> backtrace
> >> log?
> >>>>>>
> >>>>> Yes, it's quite easy to reproduce in my test.
> >>>>
> >>>> What's your testcase? and filesystem configuration?
> >>>
> >>> The test case as below fio conf. filesystem configure, the whole
> >>> kernel
> >> config as attached config.rar file.
> >>> CONFIG_F2FS_FS=y
> >>> CONFIG_F2FS_STAT_FS=y
> >>> CONFIG_F2FS_FS_XATTR=y
> >>> CONFIG_F2FS_FS_POSIX_ACL=y
> >>> CONFIG_F2FS_FS_SECURITY=y
> >>> CONFIG_F2FS_CHECK_FS=y
> >>> CONFIG_F2FS_FAULT_INJECTION=y
> >>> CONFIG_F2FS_FS_COMPRESSION=y
> >>> CONFIG_F2FS_FS_LZO=y
> >>> CONFIG_F2FS_FS_LZORLE=y
> >>> CONFIG_F2FS_FS_LZ4=y
> >>> CONFIG_F2FS_FS_LZ4HC=y
> >>> CONFIG_F2FS_FS_ZSTD=y
> >>> CONFIG_F2FS_IOSTAT=y
> >>>>
> >>>> Thanks,
> >>>>
> >>>>> Backtrace:
> >>>>> [  493.915408] INFO: task fio:479 blocked for more than 122
> seconds.
> >>>>> [  493.915729]       Not tainted 5.15.0-rc1+ #516
> >>>>> [  493.915845] "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs"
> >>>> disables this message.
> >>>>> [  493.916265] task:fio             state:D stack:13008 pid:  479
> >> ppid:
> >>>> 474 flags:0x00004000
> >>>>> [  493.916686] Call Trace:
> >>>>> [  493.917310]  __schedule+0x317/0x920 [  493.917560]  ?
> >>>>> rcu_read_lock_sched_held+0x23/0x80
> >>>>> [  493.917705]  schedule+0x59/0xc0 [  493.917798]
> >>>>> io_schedule+0x12/0x40 [  493.917895]
> >>>>> __lock_page+0x141/0x230 [  493.924703]  ?
> >>>>> filemap_invalidate_unlock_two+0x40/0x40
> >>>>> [  493.926404]  f2fs_write_multi_pages+0x1a7/0x9c0
> >>>>> [  493.926579]  f2fs_write_cache_pages+0x711/0x840
> >>>>> [  493.926771]  f2fs_write_data_pages+0x20e/0x3f0
> >> [  493.926906]  ?
> >>>>> rcu_read_lock_held_common+0xe/0x40
> >>>>> [  493.928305]  ? do_writepages+0xd1/0x190 [  493.928439]
> >>>>> do_writepages+0xd1/0x190 [  493.928551]
> >>>>> file_write_and_wait_range+0xa3/0xe0
> >>>>> [  493.928690]  f2fs_do_sync_file+0x10f/0x910 [  493.928823]
> >>>>> do_fsync+0x38/0x60 [  493.928924]  __x64_sys_fsync+0x10/0x20 [
> >>>>> 493.929208]  do_syscall_64+0x3a/0x90 [  493.929327]
> >>>>> entry_SYSCALL_64_after_hwframe+0x44/0xae
> >>>>> [  493.929486] RIP: 0033:0x7f64cb26ca97 [  493.929636] RSP:
> >>>>> 002b:00007ffe3863d180 EFLAGS: 00000293 ORIG_RAX:
> >>>> 000000000000004a [
> >>>>> 493.929816] RAX: ffffffffffffffda RBX: 0000000000000005 RCX:
> >>>>> 00007f64cb26ca97 [  493.929975] RDX: 0000000000000000 RSI:
> >>>>> 000056378a8ad540 RDI: 0000000000000005 [  493.931164] RBP:
> >>>>> 00007f647e23f000 R08: 0000000000000000 R09: 00007ffe387e6170 [
> >>>>> 493.931347] R10: 00007ffe387e61b0 R11: 0000000000000293 R12:
> >>>>> 0000000000000003 [  493.931517] R13: 00007f647e2836d8 R14:
> >>>> 0000000000000000 R15: 000056378a8ad568 [  493.931802] INFO:
> task
> >>>> fio:480 blocked for more than 122 seconds.
> >>>>> [  493.931946]       Not tainted 5.15.0-rc1+ #516
> >>>>> [  493.932202] "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs"
> >>>> disables this message.
> >>>>> [  493.932382] task:fio             state:D stack:13216 pid:  480
> >> ppid:
> >>>> 474 flags:0x00000000
> >>>>> [  493.932596] Call Trace:
> >>>>> [  493.932681]  __schedule+0x317/0x920 [  493.932796]  ?
> >>>>> rcu_read_lock_sched_held+0x23/0x80
> >>>>> [  493.932939]  schedule+0x59/0xc0 [  493.933180]
> >>>>> io_schedule+0x12/0x40 [  493.933285]
> >>>>> __lock_page+0x141/0x230 [  493.933391]  ?
> >>>>> filemap_invalidate_unlock_two+0x40/0x40
> >>>>> [  493.933539]  f2fs_write_cache_pages+0x38a/0x840
> >>>>> [  493.933708]  ? do_writepages+0xd1/0x190 [  493.933821]  ?
> >>>>> do_writepages+0xd1/0x190 [  493.933939]
> >>>>> f2fs_write_data_pages+0x20e/0x3f0
> >>>>> [  493.935516]  ? rcu_read_lock_held_common+0xe/0x40
> >>>>> [  493.935682]  ? do_writepages+0xd1/0x190
> >>>>> [  493.935790]  do_writepages+0xd1/0x190
> >>>>> [  493.935895]  file_write_and_wait_range+0xa3/0xe0
> >>>>> [  493.938074]  f2fs_do_sync_file+0x10f/0x910
> >>>>> [  493.938225]  do_fsync+0x38/0x60
> >>>>> [  493.938328]  __x64_sys_fsync+0x10/0x20
> >>>>> [  493.938430]  do_syscall_64+0x3a/0x90
> >>>>> [  493.938548]  entry_SYSCALL_64_after_hwframe+0x44/0xae
> >>>>> [  493.938683] RIP: 0033:0x7f64cb26ca97
> >>>>> [  493.938783] RSP: 002b:00007ffe3863d180 EFLAGS: 00000293 ORIG_RAX: 000000000000004a
> >>>>> [  493.938969] RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 00007f64cb26ca97
> >>>>> [  493.940413] RDX: 0000000000000000 RSI: 000056378a8ad540 RDI: 0000000000000005
> >>>>> [  493.940591] RBP: 00007f647e283a10 R08: 0000000000000000 R09: 00007ffe387e6170
> >>>>> [  493.940759] R10: 00007ffe387e61b0 R11: 0000000000000293 R12: 0000000000000003
> >>>>> [  493.940924] R13: 00007f647e2c80e8 R14: 0000000000000000 R15: 000056378a8ad568
> >>>>> [  493.941341] INFO: lockdep is turned off.
> >>>>> [  616.796413] INFO: task fio:479 blocked for more than 245 seconds.
> >>>>> [  616.796706]       Not tainted 5.15.0-rc1+ #516
> >>>>> [  616.796824] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> >>>>> [  616.827081] task:fio             state:D stack:13008 pid:  479 ppid:   474 flags:0x00004000
> >>>>> [  616.841373] Call Trace:
> >>>>> [  616.841492]  __schedule+0x317/0x920
> >>>>> [  616.841635]  ? rcu_read_lock_sched_held+0x23/0x80
> >>>>> [  616.841767]  schedule+0x59/0xc0
> >>>>> [  616.841858]  io_schedule+0x12/0x40
> >>>>> [  616.841952]  __lock_page+0x141/0x230
> >>>>> [  616.852135]  ? filemap_invalidate_unlock_two+0x40/0x40
> >>>>> [  616.852305]  f2fs_write_multi_pages+0x1a7/0x9c0
> >>>>> [  616.852455]  f2fs_write_cache_pages+0x711/0x840
> >>>>> [  616.852661]  f2fs_write_data_pages+0x20e/0x3f0
> >>>>> [  616.852785]  ? rcu_read_lock_held_common+0xe/0x40
> >>>>> [  616.852924]  ? do_writepages+0xd1/0x190
> >>>>> [  616.853281]  do_writepages+0xd1/0x190
> >>>>> [  616.853409]  file_write_and_wait_range+0xa3/0xe0
> >>>>> [  616.853537]  f2fs_do_sync_file+0x10f/0x910
> >>>>> [  616.853670]  do_fsync+0x38/0x60
> >>>>> [  616.853773]  __x64_sys_fsync+0x10/0x20
> >>>>> [  616.853869]  do_syscall_64+0x3a/0x90
> >>>>> [  616.853971]  entry_SYSCALL_64_after_hwframe+0x44/0xae
> >>>>> [  616.863145] RIP: 0033:0x7f64cb26ca97
> >>>>> [  616.863273] RSP: 002b:00007ffe3863d180 EFLAGS: 00000293 ORIG_RAX: 000000000000004a
> >>>>> [  616.863468] RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 00007f64cb26ca97
> >>>>> [  616.863635] RDX: 0000000000000000 RSI: 000056378a8ad540 RDI: 0000000000000005
> >>>>> [  616.863803] RBP: 00007f647e23f000 R08: 0000000000000000 R09: 00007ffe387e6170
> >>>>> [  616.863975] R10: 00007ffe387e61b0 R11: 0000000000000293 R12: 0000000000000003
> >>>>> [  616.870147] R13: 00007f647e2836d8 R14: 0000000000000000 R15: 000056378a8ad568
> >>>>> [  616.870388] INFO: task fio:480 blocked for more than 245 seconds.
> >>>>> [  616.870537]       Not tainted 5.15.0-rc1+ #516
> >>>>> [  616.870650] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> >>>>> [  616.870833] task:fio             state:D stack:13216 pid:  480 ppid:   474 flags:0x00000000
> >>>>> [  616.872161] Call Trace:
> >>>>> [  616.872257]  __schedule+0x317/0x920
> >>>>> [  616.872366]  ? rcu_read_lock_sched_held+0x23/0x80
> >>>>> [  616.872503]  schedule+0x59/0xc0
> >>>>> [  616.872596]  io_schedule+0x12/0x40
> >>>>> [  616.872694]  __lock_page+0x141/0x230
> >>>>> [  616.872803]  ? filemap_invalidate_unlock_two+0x40/0x40
> >>>>> [  616.872945]  f2fs_write_cache_pages+0x38a/0x840
> >>>>> [  616.873331]  ? do_writepages+0xd1/0x190
> >>>>> [  616.873448]  ? do_writepages+0xd1/0x190
> >>>>> [  616.873569]  f2fs_write_data_pages+0x20e/0x3f0
> >>>>> [  616.873694]  ? rcu_read_lock_held_common+0xe/0x40
> >>>>> [  616.873833]  ? do_writepages+0xd1/0x190
> >>>>> [  616.873936]  do_writepages+0xd1/0x190
> >>>>> [  616.874272]  file_write_and_wait_range+0xa3/0xe0
> >>>>> [  616.874424]  f2fs_do_sync_file+0x10f/0x910
> >>>>> [  616.874560]  do_fsync+0x38/0x60
> >>>>> [  616.874660]  __x64_sys_fsync+0x10/0x20
> >>>>> [  616.874759]  do_syscall_64+0x3a/0x90
> >>>>> [  616.874863]  entry_SYSCALL_64_after_hwframe+0x44/0xae
> >>>>> [  616.875792] RIP: 0033:0x7f64cb26ca97
> >>>>> [  616.875905] RSP: 002b:00007ffe3863d180 EFLAGS: 00000293 ORIG_RAX: 000000000000004a
> >>>>> [  616.876345] RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 00007f64cb26ca97
> >>>>> [  616.876515] RDX: 0000000000000000 RSI: 000056378a8ad540 RDI: 0000000000000005
> >>>>> [  616.876692] RBP: 00007f647e283a10 R08: 0000000000000000 R09: 00007ffe387e6170
> >>>>> [  616.876859] R10: 00007ffe387e61b0 R11: 0000000000000293 R12: 0000000000000003
> >>>>> [  616.877231] R13: 00007f647e2c80e8 R14: 0000000000000000 R15: 000056378a8ad568
> >>>>> [  616.877456] INFO: lockdep is turned off.
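
Both blocked fio tasks are sleeping in __lock_page() while writing back the same inode, one of them via f2fs_write_multi_pages(). One plausible reading, stated here as an assumption rather than a confirmed root cause, is that compressed-cluster writeback needs all pages of a cluster locked, so two unserialized writeback threads on one file can each lock part of a cluster and then wait forever for pages the other already holds. Below is a purely illustrative userspace model of that interleaving; none of it is f2fs code, and "writer A", "writer B", "page0" and "page1" are made-up names standing in for two writeback threads and two page locks of one cluster.

/*
 * Purely illustrative userspace model; this is NOT f2fs code.
 * Two mutexes stand in for the page locks of two pages in one
 * compressed cluster.  Writer A locks "page 0" then wants "page 1";
 * writer B does the opposite.  With no serializing lock above them,
 * each writer ends up holding one page and waiting for the other,
 * the same shape as the two fio tasks stuck in __lock_page above.
 */
#define _POSIX_C_SOURCE 200809L
#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <time.h>

static pthread_mutex_t page0 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t page1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_barrier_t start;		/* forces the bad interleaving */

static void lock_pair(const char *who, pthread_mutex_t *first,
		      pthread_mutex_t *second)
{
	struct timespec deadline;

	pthread_mutex_lock(first);
	pthread_barrier_wait(&start);	/* both writers now hold one "page" */

	/* time out instead of hanging forever, so the model stays runnable */
	clock_gettime(CLOCK_REALTIME, &deadline);
	deadline.tv_sec += 2;
	if (pthread_mutex_timedlock(second, &deadline) == ETIMEDOUT)
		printf("%s: stuck waiting for a page the other writer holds\n", who);
	else
		pthread_mutex_unlock(second);

	pthread_mutex_unlock(first);
}

static void *writer_a(void *arg) { (void)arg; lock_pair("writer A", &page0, &page1); return NULL; }
static void *writer_b(void *arg) { (void)arg; lock_pair("writer B", &page1, &page0); return NULL; }

int main(void)
{
	pthread_t a, b;

	pthread_barrier_init(&start, NULL, 2);
	pthread_create(&a, NULL, writer_a, NULL);
	pthread_create(&b, NULL, writer_b, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	pthread_barrier_destroy(&start);
	return 0;
}

Built with gcc -pthread, both writers report that they are stuck after the two-second timeout instead of hanging, but the wait-for cycle is the same; whether the kernel hang above really is this cycle still needs to be confirmed against the f2fs code.
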
> >>>>>> If there is, it needs to figure out the root cause.
> >>>>>>
> >>>>>> Thanks,
> >>>>>>
> >>>>>>> fio.conf:
> >>>>>>> [global]
> >>>>>>> direct=1
> >>>>>>> numjobs=8
> >>>>>>> time_based
> >>>>>>> runtime=30
> >>>>>>> ioengine=sync
> >>>>>>> iodepth=16
> >>>>>>> buffer_pattern="ZZZZ"
> >>>>>>> fsync=1
> >>>>>>>
> >>>>>>> [file0]
> >>>>>>> name=fio-rand-RW
> >>>>>>> filename=fio-rand-RW
> >>>>>>> rw=rw
> >>>>>>> rwmixread=60
> >>>>>>> rwmixwrite=40
> >>>>>>> bs=1M
> >>>>>>> size=64M
> >>>>>>>
> >>>>>>> [file1]
> >>>>>>> name=fio-rand-RW
> >>>>>>> filename=fio-rand-RW
> >>>>>>> rw=randrw
> >>>>>>> rwmixread=60
> >>>>>>> rwmixwrite=40
> >>>>>>> bs=4K
> >>>>>>> size=64M
> >>>>>>>
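
For reference, the same write-and-fsync pattern can also be driven without fio. The sketch below only approximates the job file above: it uses buffered I/O rather than direct=1, a fixed iteration count instead of time_based, and the path is a placeholder for a compression-enabled file on the f2fs mount under test.

/*
 * Hypothetical stand-alone reproducer sketch, an approximation of the
 * fio job above rather than a drop-in replacement.  Eight threads do
 * random 4K writes to one shared file and fsync() after every write
 * (like numjobs=8, bs=4K, fsync=1).
 */
#define _POSIX_C_SOURCE 200809L
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define NR_WRITERS	8
#define FILE_SIZE	(64UL << 20)	/* 64M, as in the fio job */
#define BLOCK		4096UL		/* bs=4K */
#define NR_WRITES	1000

static const char *path = "/mnt/f2fs/fio-rand-RW";	/* placeholder path */

static void *writer(void *arg)
{
	unsigned long seed = (unsigned long)(long)arg * 2654435761UL + 1;
	char buf[BLOCK];
	int fd, i;

	fd = open(path, O_RDWR | O_CREAT, 0644);
	if (fd < 0) {
		perror("open");
		return NULL;
	}
	memset(buf, 'Z', sizeof(buf));	/* buffer_pattern="ZZZZ" */

	for (i = 0; i < NR_WRITES; i++) {
		/* cheap LCG instead of rand() keeps the sketch thread-safe */
		seed = seed * 6364136223846793005UL + 1;
		off_t off = (off_t)(seed % (FILE_SIZE / BLOCK)) * BLOCK;

		if (pwrite(fd, buf, sizeof(buf), off) != (ssize_t)sizeof(buf))
			perror("pwrite");
		fsync(fd);	/* fsync=1: sync after every write */
	}
	close(fd);
	return NULL;
}

int main(void)
{
	pthread_t tid[NR_WRITERS];
	long i;

	for (i = 0; i < NR_WRITERS; i++)
		pthread_create(&tid[i], NULL, writer, (void *)i);
	for (i = 0; i < NR_WRITERS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}

Build with gcc -pthread. Whether it actually trips the hang depends on the file being created with compression enabled and on the serialization change under discussion, so treat it as a starting point rather than a proven reproducer.
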
> >>>>>>>>
> >>>>>>>> Thanks,
> >>>>>>>>
> >>>>>>>>> is ok. So I think maybe we should use another lock to allow
> >>>>>>>>> write
> >>>>>>>> different files in multithread.
> >>>>>>>>>

Thread overview: 16+ messages
2021-11-08  3:54 [f2fs-dev] Do we need serial io for compress file? Fengnan Chang
2021-11-08  8:56 ` 常凤楠
2021-11-08 14:30   ` Chao Yu
     [not found]   ` <AI6AmQANEzwDyLqc-ild4qqN.9.1636381829406.Hmail.changfengnan@vivo.com>
2021-11-09  3:18     ` 常凤楠
2021-11-09 13:46       ` Chao Yu
     [not found]       ` <AFUAIwC1E3YFUbOLaOBVbqp6.9.1636465594624.Hmail.changfengnan@vivo.com>
2021-11-10  1:41         ` 常凤楠
2021-11-13  6:15           ` Chao Yu
2021-11-08 14:21 ` Chao Yu
     [not found] ` <AOQAuQAvE4gDy5nrp7t7Q4pj.9.1636381271838.Hmail.changfengnan@vivo.com>
2021-11-09  1:59   ` 常凤楠
2021-11-09 13:41     ` Chao Yu
     [not found]     ` <ACQA0AB6E*gFLLLxbhwrcKo0.9.1636465280667.Hmail.changfengnan@vivo.com>
2021-11-10  1:49       ` 常凤楠
2021-11-13  6:21         ` Chao Yu
     [not found]         ` <AIIAVAAPE7EN6BQe5FH43Kp3.9.1636784464708.Hmail.changfengnan@vivo.com>
     [not found]           ` <KL1PR0601MB40038EBE3EE038926365F40DBB989@KL1PR0601MB4003.apcprd06.prod.outlook.com>
2021-11-15  2:46             ` Chao Yu
     [not found]             ` <AOEA7wCAE-cPtwt6leizvqr0.9.1636944422386.Hmail.changfengnan@vivo.com>
2021-11-15  2:56               ` 常凤楠
2021-11-15  3:28                 ` Chao Yu
     [not found]                 ` <AHYAMQC7E1oPQCsov290cqpf.9.1636946920105.Hmail.changfengnan@vivo.com>
2021-11-15  3:51                   ` 常凤楠
