* [f2fs-dev] Discard issue
@ 2020-05-26  1:59 Jaegeuk Kim
  2020-05-26  2:20 ` Chao Yu
  0 siblings, 1 reply; 7+ messages in thread
From: Jaegeuk Kim @ 2020-05-26  1:59 UTC (permalink / raw)
  To: yuchao0; +Cc: Linux F2FS Dev Mailing List

Hi Chao,

I'm hitting segment.c:1065 when running fsstress for longer (1000s) with error
injection. Have you seen any issue on your side?
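
For context, a minimal sketch of enabling f2fs fault injection for such a run is below;
the device path, mount point and injection rate are assumptions for illustration, not
details taken from this thread, and fault_injection needs a kernel built with
CONFIG_F2FS_FAULT_INJECTION:

# Hypothetical setup: format, mount with fault injection, run fsstress for ~1000s.
mkfs.f2fs -f /dev/sdb1
mount -t f2fs -o fault_injection=1 /dev/sdb1 /mnt/f2fs
ltp/fsstress -p 20 -n 200000 -d /mnt/f2fs/test &
sleep 1000
killall fsstress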

Thanks,



* Re: [f2fs-dev] Discard issue
  2020-05-26  1:59 [f2fs-dev] Discard issue Jaegeuk Kim
@ 2020-05-26  2:20 ` Chao Yu
  2020-05-26  2:26   ` Jaegeuk Kim
  0 siblings, 1 reply; 7+ messages in thread
From: Chao Yu @ 2020-05-26  2:20 UTC (permalink / raw)
  To: Jaegeuk Kim; +Cc: Linux F2FS Dev Mailing List

Hi Jaegeuk,

On 2020/5/26 9:59, Jaegeuk Kim wrote:
> Hi Chao,
> 
> I'm hitting segment.c:1065 when running longer fsstress (1000s) with error

By (1000s), do you mean the time of a single round or the total time across multiple rounds?

> injection. Do you have any issue from your side?

I haven't hit that before; in my test, a single fsstress round doesn't run for long
(normally around 10s+ per round).

Below is the por_fsstress() implementation in my code base:

por_fsstress()
{
        _fs_opts

        while true; do
                # Run fsstress in the background while periodically dropping caches.
                ltp/fsstress -x "echo 3 > /proc/sys/vm/drop_caches" -X 10 -r -f fsync=8 -f sync=0 -f write=4 -f dwrite=2 -f truncate=6 -f allocsp=0 -f bulkstat=0 -f bulkstat1=0 -f freesp=0 -f zero=1 -f collapse=1 -f insert=1 -f resvsp=0 -f unresvsp=0 -S t -p 20 -n 200000 -d $TESTDIR/test &
                sleep 10
                src/godown $TESTDIR          # emulate sudden power-off
                killall fsstress
                sleep 5
                # The filesystem may still be busy right after godown, so retry umount.
                umount $TESTDIR
                if [ $? -ne 0 ]; then
                        for i in `seq 1 50`
                        do
                                umount $TESTDIR
                                if [ $? -eq 0 ]; then
                                        break
                                fi
                                sleep 5
                        done
                fi
                echo 3 > /proc/sys/vm/drop_caches
                _fsck
                _mount f2fs
                rm $TESTDIR/testfile
                touch $TESTDIR/testfile
                umount $TESTDIR
                _fsck
                _mount f2fs
                _rm_50
        done
}
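
(The helpers used above, such as _fsck, _mount and _rm_50, are not shown in the thread;
the sketch below is only a guess at their shape, so every flag and path in it is an
assumption rather than the real harness code.)

# Hypothetical stand-ins for the unshared harness helpers; the names follow the
# script above, the bodies are assumptions only.
_fsck()  { fsck.f2fs -f /dev/$DEV; }                  # full check/repair pass on the image
_mount() { mount -t "$1" /dev/$DEV $TESTDIR; }        # remount the freshly checked image
_rm_50() { ls $TESTDIR/test | head -n 50 | xargs -r -I{} rm -rf $TESTDIR/test/{}; }  # reclaim space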

Did you update this code?

Could you share more of the test configuration, such as the mkfs options, device size,
mount options, and the new por_fsstress() implementation if one exists? I can try to
reproduce this issue in my environment.

Thanks,

> 
> Thanks,
> .
> 



* Re: [f2fs-dev] Discard issue
  2020-05-26  2:20 ` Chao Yu
@ 2020-05-26  2:26   ` Jaegeuk Kim
  2020-05-26  7:44     ` Chao Yu
  0 siblings, 1 reply; 7+ messages in thread
From: Jaegeuk Kim @ 2020-05-26  2:26 UTC (permalink / raw)
  To: Chao Yu; +Cc: Linux F2FS Dev Mailing List

On 05/26, Chao Yu wrote:
> Hi Jaegeuk,
> 
> On 2020/5/26 9:59, Jaegeuk Kim wrote:
> > Hi Chao,
> > 
> > I'm hitting segment.c:1065 when running longer fsstress (1000s) with error
> 
> (1000s) do you mean time in single round or total time of multi rounds?
> 
> > injection. Do you have any issue from your side?
> 
> I haven't hit that before, in my test, in single round, fsstress won't last long
> time (normally about 10s+ for each round).
> 
> Below is por_fsstress() implementation in my code base:
> 
> por_fsstress()
> {
>         _fs_opts
> 
>         while true; do
>                 ltp/fsstress -x "echo 3 > /proc/sys/vm/drop_caches" -X 10 -r -f fsync=8 -f sync=0 -f write=4 -f dwrite=2 -f truncate=6 -f allocsp=0 -f bulkstat=0 -f bulkstat1=0 -f freesp=0 -f zero=1 -f collapse=1 -f insert=1 -f resvsp=0 -f unresvsp=0 -S t -p 20 -n 200000 -d $TESTDIR/test &
>                 sleep 10
>                 src/godown $TESTDIR
>                 killall fsstress
>                 sleep 5
>                 umount $TESTDIR
>                 if [ $? -ne 0 ]; then
>                         for i in `seq 1 50`
>                         do
>                                 umount $TESTDIR
>                                 if [ $? -eq 0 ]; then
>                                         break
>                                 fi
>                                 sleep 5
>                         done
>                 fi
>                 echo 3 > /proc/sys/vm/drop_caches
>                 _fsck
>                 _mount f2fs
>                 rm $TESTDIR/testfile
>                 touch $TESTDIR/testfile
>                 umount $TESTDIR
>                 _fsck
>                 _mount f2fs
>                 _rm_50
>         done
> }
> 
> Did you update this code?
> 
> Could you share more test configuration, like mkfs option, device size, mount option,
> new por_fsstress() implementation if it exists? I can try to reproduce this issue
> in my env.

I just changed the sleep in __run_godown_fsstress() from 10 to 1000.

https://github.com/jaegeuk/xfstests-f2fs/blob/f2fs/run.sh#L249

./run.sh por_fsstress
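
For readers without the repository handy, a sketch of the kind of change described is
below; the function body is reconstructed from the por_fsstress() loop quoted earlier,
not copied from run.sh:

# Sketch only: lengthen the fsstress run before the simulated power cut.
run_fsstress_then_godown() {
        ltp/fsstress -r -p 20 -n 200000 -d $TESTDIR/test &
        sleep 1000                      # was: sleep 10
        src/godown $TESTDIR             # emulate sudden power-off
        killall fsstress
}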

> 
> Thanks,
> 
> > 
> > Thanks,
> > .
> > 



* Re: [f2fs-dev] Discard issue
  2020-05-26  2:26   ` Jaegeuk Kim
@ 2020-05-26  7:44     ` Chao Yu
  2020-05-27  1:44       ` Chao Yu
  0 siblings, 1 reply; 7+ messages in thread
From: Chao Yu @ 2020-05-26  7:44 UTC (permalink / raw)
  To: Jaegeuk Kim; +Cc: Linux F2FS Dev Mailing List

On 2020/5/26 10:26, Jaegeuk Kim wrote:
> On 05/26, Chao Yu wrote:
>> Hi Jaegeuk,
>>
>> On 2020/5/26 9:59, Jaegeuk Kim wrote:
>>> Hi Chao,
>>>
>>> I'm hitting segment.c:1065 when running longer fsstress (1000s) with error
>>
>> (1000s) do you mean time in single round or total time of multi rounds?
>>
>>> injection. Do you have any issue from your side?
>>
>> I haven't hit that before, in my test, in single round, fsstress won't last long
>> time (normally about 10s+ for each round).
>>
>> Below is por_fsstress() implementation in my code base:
>>
>> por_fsstress()
>> {
>>         _fs_opts
>>
>>         while true; do
>>                 ltp/fsstress -x "echo 3 > /proc/sys/vm/drop_caches" -X 10 -r -f fsync=8 -f sync=0 -f write=4 -f dwrite=2 -f truncate=6 -f allocsp=0 -f bulkstat=0 -f bulkstat1=0 -f freesp=0 -f zero=1 -f collapse=1 -f insert=1 -f resvsp=0 -f unresvsp=0 -S t -p 20 -n 200000 -d $TESTDIR/test &
>>                 sleep 10
>>                 src/godown $TESTDIR
>>                 killall fsstress
>>                 sleep 5
>>                 umount $TESTDIR
>>                 if [ $? -ne 0 ]; then
>>                         for i in `seq 1 50`
>>                         do
>>                                 umount $TESTDIR
>>                                 if [ $? -eq 0 ]; then
>>                                         break
>>                                 fi
>>                                 sleep 5
>>                         done
>>                 fi
>>                 echo 3 > /proc/sys/vm/drop_caches
>>                 _fsck
>>                 _mount f2fs
>>                 rm $TESTDIR/testfile
>>                 touch $TESTDIR/testfile
>>                 umount $TESTDIR
>>                 _fsck
>>                 _mount f2fs
>>                 _rm_50
>>         done
>> }
>>
>> Did you update this code?
>>
>> Could you share more test configuration, like mkfs option, device size, mount option,
>> new por_fsstress() implementation if it exists? I can try to reproduce this issue
>> in my env.
> 
> I just changed, in __run_godown_fsstress(), sleep 1000 instead of 10.
> 
> https://github.com/jaegeuk/xfstests-f2fs/blob/f2fs/run.sh#L249
> 
> ./run.sh por_fsstress

Reproducing...

Thanks,

> 
>>
>> Thanks,
>>
>>>
>>> Thanks,
>>> .
>>>
> .
> 



* Re: [f2fs-dev] Discard issue
  2020-05-26  7:44     ` Chao Yu
@ 2020-05-27  1:44       ` Chao Yu
  2020-05-27  1:58         ` Jaegeuk Kim
  0 siblings, 1 reply; 7+ messages in thread
From: Chao Yu @ 2020-05-27  1:44 UTC (permalink / raw)
  To: Jaegeuk Kim; +Cc: Linux F2FS Dev Mailing List

On 2020/5/26 15:44, Chao Yu wrote:
> On 2020/5/26 10:26, Jaegeuk Kim wrote:
>> On 05/26, Chao Yu wrote:
>>> Hi Jaegeuk,
>>>
>>> On 2020/5/26 9:59, Jaegeuk Kim wrote:
>>>> Hi Chao,
>>>>
>>>> I'm hitting segment.c:1065 when running longer fsstress (1000s) with error
>>>
>>> (1000s) do you mean time in single round or total time of multi rounds?
>>>
>>>> injection. Do you have any issue from your side?
>>>
>>> I haven't hit that before, in my test, in single round, fsstress won't last long
>>> time (normally about 10s+ for each round).
>>>
>>> Below is por_fsstress() implementation in my code base:
>>>
>>> por_fsstress()
>>> {
>>>         _fs_opts
>>>
>>>         while true; do
>>>                 ltp/fsstress -x "echo 3 > /proc/sys/vm/drop_caches" -X 10 -r -f fsync=8 -f sync=0 -f write=4 -f dwrite=2 -f truncate=6 -f allocsp=0 -f bulkstat=0 -f bulkstat1=0 -f freesp=0 -f zero=1 -f collapse=1 -f insert=1 -f resvsp=0 -f unresvsp=0 -S t -p 20 -n 200000 -d $TESTDIR/test &
>>>                 sleep 10
>>>                 src/godown $TESTDIR
>>>                 killall fsstress
>>>                 sleep 5
>>>                 umount $TESTDIR
>>>                 if [ $? -ne 0 ]; then
>>>                         for i in `seq 1 50`
>>>                         do
>>>                                 umount $TESTDIR
>>>                                 if [ $? -eq 0 ]; then
>>>                                         break
>>>                                 fi
>>>                                 sleep 5
>>>                         done
>>>                 fi
>>>                 echo 3 > /proc/sys/vm/drop_caches
>>>                 _fsck
>>>                 _mount f2fs
>>>                 rm $TESTDIR/testfile
>>>                 touch $TESTDIR/testfile
>>>                 umount $TESTDIR
>>>                 _fsck
>>>                 _mount f2fs
>>>                 _rm_50
>>>         done
>>> }
>>>
>>> Did you update this code?
>>>
>>> Could you share more test configuration, like mkfs option, device size, mount option,
>>> new por_fsstress() implementation if it exists? I can try to reproduce this issue
>>> in my env.
>>
>> I just changed, in __run_godown_fsstress(), sleep 1000 instead of 10.
>>
>> https://github.com/jaegeuk/xfstests-f2fs/blob/f2fs/run.sh#L249
>>
>> ./run.sh por_fsstress
> 
> Reproducing...

After reproducing overnight, the issue still has not occurred.

BTW, I enabled the features below in the image:

extra_attr project_quota inode_checksum flexible_inline_xattr inode_crtime compression

and tagged the compression flag on the root inode.
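
As a rough equivalent of that setup (the mkfs.f2fs and mount options here are assumptions,
and chattr +c is only one possible way to set the per-inode compression flag):

# Hypothetical commands: image with the listed features, root directory marked compressed.
mkfs.f2fs -f -O extra_attr,project_quota,inode_checksum,flexible_inline_xattr,inode_crtime,compression /dev/$DEV
mount -t f2fs -o compress_algorithm=lz4 /dev/$DEV $TESTDIR
chattr +c $TESTDIR      # assumed equivalent of "tagged compression flag on root inode"
umount $TESTDIR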

> 
> Thanks,
> 
>>
>>>
>>> Thanks,
>>>
>>>>
>>>> Thanks,
>>>> .
>>>>
>> .
>>
> 
> 
> 



* Re: [f2fs-dev] Discard issue
  2020-05-27  1:44       ` Chao Yu
@ 2020-05-27  1:58         ` Jaegeuk Kim
  2020-05-27  2:06           ` Chao Yu
  0 siblings, 1 reply; 7+ messages in thread
From: Jaegeuk Kim @ 2020-05-27  1:58 UTC (permalink / raw)
  To: Chao Yu; +Cc: Linux F2FS Dev Mailing List

On 05/27, Chao Yu wrote:
> On 2020/5/26 15:44, Chao Yu wrote:
> > On 2020/5/26 10:26, Jaegeuk Kim wrote:
> >> On 05/26, Chao Yu wrote:
> >>> Hi Jaegeuk,
> >>>
> >>> On 2020/5/26 9:59, Jaegeuk Kim wrote:
> >>>> Hi Chao,
> >>>>
> >>>> I'm hitting segment.c:1065 when running longer fsstress (1000s) with error
> >>>
> >>> (1000s) do you mean time in single round or total time of multi rounds?
> >>>
> >>>> injection. Do you have any issue from your side?
> >>>
> >>> I haven't hit that before, in my test, in single round, fsstress won't last long
> >>> time (normally about 10s+ for each round).
> >>>
> >>> Below is por_fsstress() implementation in my code base:
> >>>
> >>> por_fsstress()
> >>> {
> >>>         _fs_opts
> >>>
> >>>         while true; do
> >>>                 ltp/fsstress -x "echo 3 > /proc/sys/vm/drop_caches" -X 10 -r -f fsync=8 -f sync=0 -f write=4 -f dwrite=2 -f truncate=6 -f allocsp=0 -f bulkstat=0 -f bulkstat1=0 -f freesp=0 -f zero=1 -f collapse=1 -f insert=1 -f resvsp=0 -f unresvsp=0 -S t -p 20 -n 200000 -d $TESTDIR/test &
> >>>                 sleep 10
> >>>                 src/godown $TESTDIR
> >>>                 killall fsstress
> >>>                 sleep 5
> >>>                 umount $TESTDIR
> >>>                 if [ $? -ne 0 ]; then
> >>>                         for i in `seq 1 50`
> >>>                         do
> >>>                                 umount $TESTDIR
> >>>                                 if [ $? -eq 0 ]; then
> >>>                                         break
> >>>                                 fi
> >>>                                 sleep 5
> >>>                         done
> >>>                 fi
> >>>                 echo 3 > /proc/sys/vm/drop_caches
> >>>                 _fsck
> >>>                 _mount f2fs
> >>>                 rm $TESTDIR/testfile
> >>>                 touch $TESTDIR/testfile
> >>>                 umount $TESTDIR
> >>>                 _fsck
> >>>                 _mount f2fs
> >>>                 _rm_50
> >>>         done
> >>> }
> >>>
> >>> Did you update this code?
> >>>
> >>> Could you share more test configuration, like mkfs option, device size, mount option,
> >>> new por_fsstress() implementation if it exists? I can try to reproduce this issue
> >>> in my env.
> >>
> >> I just changed, in __run_godown_fsstress(), sleep 1000 instead of 10.
> >>
> >> https://github.com/jaegeuk/xfstests-f2fs/blob/f2fs/run.sh#L249
> >>
> >> ./run.sh por_fsstress
> > 
> > Reproducing...
> 
> After one night reproducing, the issue still not occur..
> 
> BTW, I enabled below features in image:
> 
> extra_attr project_quota inode_checksum flexible_inline_xattr inode_crtime compression
> 
> and tagged compression flag on root inode.

Could you check whether the disk supports discard? I didn't set compression on the root
inode.
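
(One quick way to verify that, assuming an ordinary block device named $DEV, is to inspect
the discard columns from lsblk or the queue limits in sysfs:)

# Nonzero DISC-GRAN/DISC-MAX (or discard_max_bytes > 0) means the device accepts discard.
lsblk --discard /dev/$DEV
cat /sys/block/$DEV/queue/discard_max_bytes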

I set _mkfs for the "f2fs" case to:
mkfs.f2fs -f -O encrypt -O extra_attr -O quota -O inode_checksum /dev/$DEV;;

# run.sh reload
# run.sh por_fsstress

> 
> > 
> > Thanks,
> > 
> >>
> >>>
> >>> Thanks,
> >>>
> >>>>
> >>>> Thanks,
> >>>> .
> >>>>
> >> .
> >>
> > 
> > 



* Re: [f2fs-dev] Discard issue
  2020-05-27  1:58         ` Jaegeuk Kim
@ 2020-05-27  2:06           ` Chao Yu
  0 siblings, 0 replies; 7+ messages in thread
From: Chao Yu @ 2020-05-27  2:06 UTC (permalink / raw)
  To: Jaegeuk Kim; +Cc: Linux F2FS Dev Mailing List

On 2020/5/27 9:58, Jaegeuk Kim wrote:
> On 05/27, Chao Yu wrote:
>> On 2020/5/26 15:44, Chao Yu wrote:
>>> On 2020/5/26 10:26, Jaegeuk Kim wrote:
>>>> On 05/26, Chao Yu wrote:
>>>>> Hi Jaegeuk,
>>>>>
>>>>> On 2020/5/26 9:59, Jaegeuk Kim wrote:
>>>>>> Hi Chao,
>>>>>>
>>>>>> I'm hitting segment.c:1065 when running longer fsstress (1000s) with error
>>>>>
>>>>> (1000s) do you mean time in single round or total time of multi rounds?
>>>>>
>>>>>> injection. Do you have any issue from your side?
>>>>>
>>>>> I haven't hit that before, in my test, in single round, fsstress won't last long
>>>>> time (normally about 10s+ for each round).
>>>>>
>>>>> Below is por_fsstress() implementation in my code base:
>>>>>
>>>>> por_fsstress()
>>>>> {
>>>>>         _fs_opts
>>>>>
>>>>>         while true; do
>>>>>                 ltp/fsstress -x "echo 3 > /proc/sys/vm/drop_caches" -X 10 -r -f fsync=8 -f sync=0 -f write=4 -f dwrite=2 -f truncate=6 -f allocsp=0 -f bulkstat=0 -f bulkstat1=0 -f freesp=0 -f zero=1 -f collapse=1 -f insert=1 -f resvsp=0 -f unresvsp=0 -S t -p 20 -n 200000 -d $TESTDIR/test &
>>>>>                 sleep 10
>>>>>                 src/godown $TESTDIR
>>>>>                 killall fsstress
>>>>>                 sleep 5
>>>>>                 umount $TESTDIR
>>>>>                 if [ $? -ne 0 ]; then
>>>>>                         for i in `seq 1 50`
>>>>>                         do
>>>>>                                 umount $TESTDIR
>>>>>                                 if [ $? -eq 0 ]; then
>>>>>                                         break
>>>>>                                 fi
>>>>>                                 sleep 5
>>>>>                         done
>>>>>                 fi
>>>>>                 echo 3 > /proc/sys/vm/drop_caches
>>>>>                 _fsck
>>>>>                 _mount f2fs
>>>>>                 rm $TESTDIR/testfile
>>>>>                 touch $TESTDIR/testfile
>>>>>                 umount $TESTDIR
>>>>>                 _fsck
>>>>>                 _mount f2fs
>>>>>                 _rm_50
>>>>>         done
>>>>> }
>>>>>
>>>>> Did you update this code?
>>>>>
>>>>> Could you share more test configuration, like mkfs option, device size, mount option,
>>>>> new por_fsstress() implementation if it exists? I can try to reproduce this issue
>>>>> in my env.
>>>>
>>>> I just changed, in __run_godown_fsstress(), sleep 1000 instead of 10.
>>>>
>>>> https://github.com/jaegeuk/xfstests-f2fs/blob/f2fs/run.sh#L249
>>>>
>>>> ./run.sh por_fsstress
>>>
>>> Reproducing...
>>
>> After one night reproducing, the issue still not occur..
>>
>> BTW, I enabled below features in image:
>>
>> extra_attr project_quota inode_checksum flexible_inline_xattr inode_crtime compression
>>
>> and tagged compression flag on root inode.
> 
> Could you check disk supports discard? I didn't set compression to the root
> inode.

I started reviewing the discard support code yesterday; however, I haven't found
anything suspicious yet.

> 
> I set _mkfs with "f2fs":
> mkfs.f2fs -f -O encrypt -O extra_attr -O quota -O inode_checksum /dev/$DEV;;

Let me update test configs.

Thanks,

> 
> # run.sh reload
> # run.sh por_fsstress
> 
>>
>>>
>>> Thanks,
>>>
>>>>
>>>>>
>>>>> Thanks,
>>>>>
>>>>>>
>>>>>> Thanks,
>>>>>> .
>>>>>>
>>>> .
>>>>
>>>
>>>
>>>
> .
> 


