linux-f2fs-devel.lists.sourceforge.net archive mirror
* [f2fs-dev] Can I know if now is gc-ing or checkpointing?
@ 2020-07-06  7:10 lampahome
  2020-07-06  7:29 ` Chao Yu
  0 siblings, 1 reply; 8+ messages in thread
From: lampahome @ 2020-07-06  7:10 UTC (permalink / raw)
  To: linux-f2fs-devel

I tried to test f2fs performance by running many fio jobs concurrently.

I found that once the number of fio jobs reaches a certain point (e.g. 25
jobs), the bandwidth degrades out of proportion to the job count.

For example:
5 fio jobs: bandwidth 300MB/s
10 fio jobs: bandwidth 150MB/s
25 fio jobs: bandwidth 30MB/s

I suspect that running many jobs triggers GC or checkpointing. How can I
tell whether GC/checkpointing is running in the foreground or background?


_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel


* Re: [f2fs-dev] Can I know if now is gc-ing or checkpointing?
  2020-07-06  7:10 [f2fs-dev] Can I know if now is gc-ing or checkpointing? lampahome
@ 2020-07-06  7:29 ` Chao Yu
  2020-07-06  7:34   ` lampahome
  0 siblings, 1 reply; 8+ messages in thread
From: Chao Yu @ 2020-07-06  7:29 UTC (permalink / raw)
  To: lampahome, linux-f2fs-devel

On 2020/7/6 15:10, lampahome wrote:
> I tried to test f2fs performance by running many fio jobs concurrently.
> 
> I found that once the number of fio jobs reaches a certain point (e.g. 25
> jobs), the bandwidth degrades out of proportion to the job count.
> 
> For example:
> 5 fio jobs: bandwidth 300MB/s
> 10 fio jobs: bandwidth 150MB/s
> 25 fio jobs: bandwidth 30MB/s

What's your buffer size for each flush?

> 
> I suspect that running many jobs triggers GC or checkpointing. How can I
> tell whether GC/checkpointing is running in the foreground or background?

cat /sys/kernel/debug/f2fs/status | grep CP

and

cat /sys/kernel/debug/f2fs/status | grep GC
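For example, a small wrapper can diff those counters around a test run. This is only a sketch: the `count_calls` helper name is ours, and it assumes the `CP calls:` / `GC calls:` line format shown in the status dumps later in this thread.

```shell
# Extract the total call count from a "CP calls: N (BG: M)" or
# "GC calls: N (BG: M)" line of an f2fs status dump read on stdin.
count_calls() {                 # usage: count_calls <CP|GC>  < status-dump
    grep "^$1 calls:" | awk '{print $3}'
}

STATUS_FILE=/sys/kernel/debug/f2fs/status   # needs debugfs mounted, root

# Against a live system (not run here):
#   cp_before=$(count_calls CP < "$STATUS_FILE")
#   ... run the fio workload ...
#   cp_after=$(count_calls CP < "$STATUS_FILE")
#   echo "checkpoints during test: $((cp_after - cp_before))"
```

Comparing the counter before and after the workload separates checkpoints triggered by the test from ones that already happened since mount.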


* Re: [f2fs-dev] Can I know if now is gc-ing or checkpointing?
  2020-07-06  7:29 ` Chao Yu
@ 2020-07-06  7:34   ` lampahome
  2020-07-06  7:46     ` Chao Yu
  0 siblings, 1 reply; 8+ messages in thread
From: lampahome @ 2020-07-06  7:34 UTC (permalink / raw)
  To: Chao Yu; +Cc: linux-f2fs-devel

Chao Yu <yuchao0@huawei.com> wrote on Mon, Jul 6, 2020 at 3:29 PM:
>
> On 2020/7/6 15:10, lampahome wrote:
> > [snip]
>
> What's your buffer size for each flush?
>
Each fio job submits writes with blocksize=4k, direct=0, to a 1GB file. So
the buffer size is 4k?

When I grep for GC and CP in the f2fs status, it shows that GC and CP ran
a few times. But my disk is 128GB and each fio job only writes a 1GB file.
Why does this workload trigger GC and CP?



* Re: [f2fs-dev] Can I know if now is gc-ing or checkpointing?
  2020-07-06  7:34   ` lampahome
@ 2020-07-06  7:46     ` Chao Yu
  2020-07-06  8:11       ` lampahome
  0 siblings, 1 reply; 8+ messages in thread
From: Chao Yu @ 2020-07-06  7:46 UTC (permalink / raw)
  To: lampahome; +Cc: linux-f2fs-devel

On 2020/7/6 15:34, lampahome wrote:
> Chao Yu <yuchao0@huawei.com> wrote on Mon, Jul 6, 2020 at 3:29 PM:
>>
>> [snip]
>>
>> What's your buffer size for each flush?

Could you share the whole command?

>>
> Each fio job submits writes with blocksize=4k, direct=0, to a 1GB file.
> So the buffer size is 4k?

I meant: how much data will fio write before triggering fsync?

I suspect that __should_serialize_io() may serialize all fio threads if your
buffer size is larger than the size of one section (2MB by default).

> 
> When I grep for GC and CP in the f2fs status, it shows that GC and CP ran
> a few times. But my disk is 128GB and each fio job only writes a 1GB file.
> Why does this workload trigger GC and CP?

Can you share the status output before and after the test?

There are two kinds of GC: BGGC and FGGC. BGGC runs periodically, while
FGGC runs when there are almost no free segments. The CP trigger conditions
are complicated; commonly, CP is triggered via syncfs.


* Re: [f2fs-dev] Can I know if now is gc-ing or checkpointing?
  2020-07-06  7:46     ` Chao Yu
@ 2020-07-06  8:11       ` lampahome
  2020-07-06  8:51         ` Chao Yu
  0 siblings, 1 reply; 8+ messages in thread
From: lampahome @ 2020-07-06  8:11 UTC (permalink / raw)
  To: Chao Yu; +Cc: linux-f2fs-devel

Brief procedure:
partition /dev/sdd with parted to create a 160GB partition (/dev/sdd1)
mkfs -t f2fs /dev/sdd1
mount /dev/sdd1 /mnt/f2fsdir

I use a shell script to launch 20 fio jobs concurrently and wait for them
to finish. The fio command:
fio -bs=4k -iodepth=4 -rw=write -ioengine=libaio -name=my -direct=0
-size=1G -runtime=6000 -filename /mnt/f2fsdir/ggg$id
// $id corresponds to the fio job number, so $id ranges over 1~20
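The wrapper script described above could look roughly like this. It is a sketch, not the exact script used: `NJOBS`, `TARGET_DIR`, and the per-job `-name` suffix are our assumptions, while the fio flags mirror the command quoted in this message.

```shell
NJOBS=20                      # number of concurrent fio jobs (assumption)
TARGET_DIR=/mnt/f2fsdir       # f2fs mount point from the procedure above

# Launch NJOBS fio jobs in the background, one file per job, then block
# until every job has finished.
run_jobs() {
    for id in $(seq 1 "$NJOBS"); do
        fio -bs=4k -iodepth=4 -rw=write -ioengine=libaio -name="my$id" \
            -direct=0 -size=1G -runtime=6000 \
            -filename="$TARGET_DIR/ggg$id" &
    done
    wait    # wait for all backgrounded fio processes
}
```

Timing `run_jobs` as a whole gives the aggregate bandwidth figure being compared across job counts.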



Status before fio:
=====[ partition info(sdd1). #0, RW, CP: Good]=====
[SB: 1] [CP: 2] [SIT: 6] [NAT: 114] [SSA: 153] [MAIN: 77849(OverProv:794 Resv:400)]

Utilization: 0% (2 valid blocks, 39858686 discard blocks)
  - Node: 1 (Inode: 1, Other: 0)
  - Data: 1
  - Inline_xattr Inode: 0
  - Inline_data Inode: 0
  - Inline_dentry Inode: 0
  - Orphan/Append/Update Inode: 0, 0, 0

Main area: 77849 segs, 77849 secs 77849 zones
  - COLD  data: 0, 0, 0
  - WARM  data: 1, 1, 1
  - HOT   data: 77845, 77845, 77845
  - Dir   dnode: 77848, 77848, 77848
  - File   dnode: 77847, 77847, 77847
  - Indir nodes: 77846, 77846, 77846

  - Valid: 6
  - Dirty: 0
  - Prefree: 0
  - Free: 77843 (77843)

CP calls: 1 (BG: 0)
  - cp blocks : 3
  - sit blocks : 0
  - nat blocks : 0
  - ssa blocks : 0
GC calls: 0 (BG: 0)
  - data segments : 0 (0)
  - node segments : 0 (0)
Try to move 0 blocks (BG: 0)
  - data blocks : 0 (0)
  - node blocks : 0 (0)
Skipped : atomic write 0 (0)
BG skip : IO: 0, Other: 0

Extent Cache:
  - Hit Count: L1-1:0 L1-2:0 L2:0
  - Hit Ratio: 0% (0 / 0)
  - Inner Struct Count: tree: 0(0), node: 0

Balancing F2FS Async:
  - DIO (R:    0, W:    0)
  - IO_R (Data:    0, Node:    0, Meta:    0)
  - IO_W (CP:    0, Data:    0, Flush: (   0    0    1), Discard: (   0    0)) cmd:    0 undiscard:   0
  - inmem:    0, atomic IO:    0 (Max.    0), volatile IO:    0 (Max.    0)
  - nodes:    0 in    0
  - dents:    0 in dirs:   0 (   0)
  - datas:    0 in files:   0
  - quota datas:    0 in quota files:   0
  - meta:    0 in    0
  - imeta:    0
  - NATs:         0/        0
  - SITs:         0/    77849
  - free_nids:      3636/ 13278716
  - alloc_nids:         0

Distribution of User Blocks: [ valid | invalid | free ]
  [|-|-------------------------------------------------]

IPU: 0 blocks
SSR: 0 blocks in 0 segments
LFS: 1 blocks in 0 segments

BDF: 99, avg. vblocks: 0

Memory: 19767 KB
  - static: 19674 KB
  - cached: 93 KB
  - paged : 0 KB

Status after fio:
=====[ partition info(sdd1). #0, RW, CP: Good]=====
[SB: 1] [CP: 2] [SIT: 6] [NAT: 114] [SSA: 153] [MAIN: 77849(OverProv:794 Resv:400)]

Utilization: 13% (5248062 valid blocks, 34610626 discard blocks)
  - Node: 5181 (Inode: 21, Other: 5160)
  - Data: 5242881
  - Inline_xattr Inode: 20
  - Inline_data Inode: 0
  - Inline_dentry Inode: 0
  - Orphan/Append/Update Inode: 0, 0, 0

Main area: 77849 segs, 77849 secs 77849 zones
  - COLD  data: 0, 0, 0
  - WARM  data: 10260, 10260, 10260
  - HOT   data: 10023, 10023, 10023
  - Dir   dnode: 77848, 77848, 77848
  - File   dnode: 10184, 10184, 10184
  - Indir nodes: 77846, 77846, 77846

  - Valid: 10244
  - Dirty: 10
  - Prefree: 0
  - Free: 67595 (67595)

CP calls: 7 (BG: 6)
  - cp blocks : 27
  - sit blocks : 195
  - nat blocks : 42
  - ssa blocks : 10259
GC calls: 1 (BG: 2)
  - data segments : 1 (1)
  - node segments : 0 (0)
Try to move 511 blocks (BG: 511)
  - data blocks : 511 (511)
  - node blocks : 0 (0)
Skipped : atomic write 0 (0)
BG skip : IO: 4, Other: 0

Extent Cache:
  - Hit Count: L1-1:0 L1-2:0 L2:0
  - Hit Ratio: 0% (0 / 1020)
  - Inner Struct Count: tree: 20(0), node: 1003

Balancing F2FS Async:
  - DIO (R:    0, W:    0)
  - IO_R (Data:    0, Node:    0, Meta:    0)
  - IO_W (CP:    0, Data:    0, Flush: (   0    0    1), Discard: (   0    0)) cmd:    0 undiscard:   0
  - inmem:    0, atomic IO:    0 (Max.    0), volatile IO:    0 (Max.    0)
  - nodes:    0 in 1980
  - dents:    0 in dirs:   0 (   0)
  - datas:    0 in files:   0
  - quota datas:    0 in quota files:   0
  - meta:    0 in  670
  - imeta:    0
  - NATs:         0/     1154
  - SITs:         0/    77849
  - free_nids:      2096/ 13273536
  - alloc_nids:         0

Distribution of User Blocks: [ valid | invalid | free ]
  [------|-|-------------------------------------------]

IPU: 0 blocks
SSR: 0 blocks in 0 segments
LFS: 5253829 blocks in 10259 segments

BDF: 99, avg. vblocks: 387

Memory: 30432 KB
  - static: 19674 KB
  - cached: 157 KB
  - paged : 10600 KB



* Re: [f2fs-dev] Can I know if now is gc-ing or checkpointing?
  2020-07-06  8:11       ` lampahome
@ 2020-07-06  8:51         ` Chao Yu
  2020-07-06  8:58           ` lampahome
  0 siblings, 1 reply; 8+ messages in thread
From: Chao Yu @ 2020-07-06  8:51 UTC (permalink / raw)
  To: lampahome; +Cc: linux-f2fs-devel

On 2020/7/6 16:11, lampahome wrote:
> Brief procedure:
> partition /dev/sdd with parted to create a 160GB partition (/dev/sdd1)
> mkfs -t f2fs /dev/sdd1
> mount /dev/sdd1 /mnt/f2fsdir
> 
> I use a shell script to launch 20 fio jobs concurrently and wait for them
> to finish. The fio command:
> fio -bs=4k -iodepth=4 -rw=write -ioengine=libaio -name=my -direct=0

Why not use direct=1 in combination with libaio? Otherwise the data is only
written to the page cache.

> -size=1G -runtime=6000 -filename /mnt/f2fsdir/ggg$id
> // $id corresponds to the fio job number, so $id ranges over 1~20
> 
> 
> 

The info below shows that GC and CP didn't affect the test result.

> Status before fio:
> [snip]
> CP calls: 1 (BG: 0)
> GC calls: 0 (BG: 0)
> [snip]
> 
> Status after fio:
> [snip]
> CP calls: 7 (BG: 6)
> GC calls: 1 (BG: 2)
> Try to move 511 blocks (BG: 511)
> [snip]



* Re: [f2fs-dev] Can I know if now is gc-ing or checkpointing?
  2020-07-06  8:51         ` Chao Yu
@ 2020-07-06  8:58           ` lampahome
  2020-07-06 10:27             ` Chao Yu
  0 siblings, 1 reply; 8+ messages in thread
From: lampahome @ 2020-07-06  8:58 UTC (permalink / raw)
  To: Chao Yu; +Cc: linux-f2fs-devel

Chao Yu <yuchao0@huawei.com> wrote on Mon, Jul 6, 2020 at 4:51 PM:
>
> [snip]
>
> Why not use direct=1 in combination with libaio? Otherwise the data is
> only written to the page cache.
>
So does direct I/O help performance?

> The info below shows that GC and CP didn't affect the test result.
Why? So GC and CP are working normally?
Could you explain in detail? Thanks.



* Re: [f2fs-dev] Can I know if now is gc-ing or checkpointing?
  2020-07-06  8:58           ` lampahome
@ 2020-07-06 10:27             ` Chao Yu
  0 siblings, 0 replies; 8+ messages in thread
From: Chao Yu @ 2020-07-06 10:27 UTC (permalink / raw)
  To: lampahome; +Cc: linux-f2fs-devel

On 2020/7/6 16:58, lampahome wrote:
> Chao Yu <yuchao0@huawei.com> wrote on Mon, Jul 6, 2020 at 4:51 PM:
>> [snip]
>>
>> Why not use direct=1 in combination with libaio? Otherwise the data is
>> only written to the page cache.
>>
> So does direct I/O help performance?

I guess libaio + direct I/O shows the real device performance; with
libaio + buffered I/O you mostly measure memory performance, if your
memory is large enough...
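To see the difference, the same workload can be run twice, toggling only the direct flag. A sketch; the job names, file path, and wrapper function are our assumptions, not part of the original test:

```shell
# Run one buffered (direct=0) and one O_DIRECT (direct=1) pass of the
# same 4k sequential-write workload, so the two bandwidths can be compared.
run_compare() {
    for direct in 0 1; do
        fio --bs=4k --iodepth=4 --rw=write --ioengine=libaio \
            --name="direct$direct" --direct="$direct" \
            --size=1G --filename=/mnt/f2fsdir/cmp_test
    done
}
```

With direct=0 the reported bandwidth largely reflects how fast pages enter the page cache; with direct=1 every write goes to the device, so the number reflects the storage itself.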

> 
>> The info below shows that GC and CP didn't affect the test result.
> Why? So GC and CP are working normally?
> Could you explain in detail? Thanks.

The GC and CP counts are very small; I don't think they could affect performance.


end of thread, other threads:[~2020-07-06 10:27 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-07-06  7:10 [f2fs-dev] Can I know if now is gc-ing or checkpointing? lampahome
2020-07-06  7:29 ` Chao Yu
2020-07-06  7:34   ` lampahome
2020-07-06  7:46     ` Chao Yu
2020-07-06  8:11       ` lampahome
2020-07-06  8:51         ` Chao Yu
2020-07-06  8:58           ` lampahome
2020-07-06 10:27             ` Chao Yu
