linux-f2fs-devel.lists.sourceforge.net archive mirror
* [f2fs-dev] f2fs: dirty memory increasing during gc_urgent
@ 2019-08-14 14:15 Ju Hyung Park
  2019-08-15  6:48 ` Chao Yu
  0 siblings, 1 reply; 6+ messages in thread
From: Ju Hyung Park @ 2019-08-14 14:15 UTC (permalink / raw)
  To: linux-f2fs-devel

Hi.

I'm reporting some strangeness with gc_urgent.

When gc_urgent is running, I can see the dirty memory reported in
/proc/meminfo continuously increasing until GC cannot find any more
segments to clean.

I thought FG_GC writes were flushed.

And after GC ends, if I do `sync` and run gc_urgent again, it easily
runs thousands more times.

Is this expected behavior?

I would much prefer gc_urgent cleaning everything up on the first run,
without having to sync at the end and run gc_urgent again.

Thanks.


_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel


* Re: [f2fs-dev] f2fs: dirty memory increasing during gc_urgent
  2019-08-14 14:15 [f2fs-dev] f2fs: dirty memory increasing during gc_urgent Ju Hyung Park
@ 2019-08-15  6:48 ` Chao Yu
  2019-08-16 15:37   ` Ju Hyung Park
  0 siblings, 1 reply; 6+ messages in thread
From: Chao Yu @ 2019-08-15  6:48 UTC (permalink / raw)
  To: Ju Hyung Park, linux-f2fs-devel

Hi Ju Hyung,

On 2019/8/14 22:15, Ju Hyung Park wrote:
> Hi.
> 
> I'm reporting some strangeness with gc_urgent.
> 
> When gc_urgent is running, I can see the dirty memory reported in
> /proc/meminfo continuously increasing until GC cannot find any more
> segments to clean.
> 
> I thought FG_GC writes were flushed.
> 
> And after GC ends, if I do `sync` and run gc_urgent again, it easily
> runs thousands more times.
> 
> Is this expected behavior?

I suspect that before triggering urgent GC, the system has dirty data in
memory; then when you trigger `sync`, the GCed data and the dirty data are
flushed to the device together. Since we write dirty data with an
out-of-place-update model, this may cause fragmentation.

So we can try:
- sync
- trigger urgent GC
- sync
- cat /sys/kernel/debug/f2fs/status and check the 'Dirty' field; the value
should be close to zero
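The sequence above could be scripted roughly like this (just a sketch; the
device name "sda1" and the 60-second wait are assumptions, adjust them to
your setup):

```shell
#!/bin/sh
# Sketch of the suggested check sequence. The device name ("sda1")
# and the fixed wait are assumptions; adjust for your setup.
DEV=sda1

sync                                        # flush pre-existing dirty data
echo 1 > /sys/fs/f2fs/$DEV/gc_urgent        # start urgent GC
sleep 60                                    # give GC time to settle
echo 0 > /sys/fs/f2fs/$DEV/gc_urgent        # stop urgent GC
sync                                        # flush the GCed data
grep Dirty /sys/kernel/debug/f2fs/status    # 'Dirty' should be close to 0
```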

Thanks,


> 
> I would much prefer gc_urgent cleaning everything up on the first run,
> without having to sync at the end and run gc_urgent again.
> 
> Thanks.
> 
> 




* Re: [f2fs-dev] f2fs: dirty memory increasing during gc_urgent
  2019-08-15  6:48 ` Chao Yu
@ 2019-08-16 15:37   ` Ju Hyung Park
  2019-08-23 15:52     ` Chao Yu
  0 siblings, 1 reply; 6+ messages in thread
From: Ju Hyung Park @ 2019-08-16 15:37 UTC (permalink / raw)
  To: Chao Yu; +Cc: linux-f2fs-devel

Hi Chao,

On Thu, Aug 15, 2019 at 3:49 PM Chao Yu <yuchao0@huawei.com> wrote:
> I suspect that before triggering urgent GC, the system has dirty data in
> memory; then when you trigger `sync`, the GCed data and the dirty data are
> flushed to the device together. Since we write dirty data with an
> out-of-place-update model, this may cause fragmentation.
>
> So we can try:
> - sync
> - trigger urgent GC
> - sync
> - cat /sys/kernel/debug/f2fs/status and check the 'Dirty' field; the value
> should be close to zero

It's actually not zero.

Before triggering gc_urgent: 601
After gc_urgent ends and doing a `sync`: 400

And after another 2nd gc_urgent run, it finally becomes 0.

So I'm guessing this wasn't intentional? :P

Thanks,




* Re: [f2fs-dev] f2fs: dirty memory increasing during gc_urgent
  2019-08-16 15:37   ` Ju Hyung Park
@ 2019-08-23 15:52     ` Chao Yu
  2019-08-25 11:06       ` Ju Hyung Park
  0 siblings, 1 reply; 6+ messages in thread
From: Chao Yu @ 2019-08-23 15:52 UTC (permalink / raw)
  To: Ju Hyung Park, Chao Yu; +Cc: linux-f2fs-devel

Hi Ju Hyung,

Sorry for the delay.

On 2019-8-16 23:37, Ju Hyung Park wrote:
> Hi Chao,
> 
> On Thu, Aug 15, 2019 at 3:49 PM Chao Yu <yuchao0@huawei.com> wrote:
>> I suspect that before triggering urgent GC, the system has dirty data in
>> memory; then when you trigger `sync`, the GCed data and the dirty data are
>> flushed to the device together. Since we write dirty data with an
>> out-of-place-update model, this may cause fragmentation.
>>
>> So we can try:
>> - sync
>> - trigger urgent GC
>> - sync
>> - cat /sys/kernel/debug/f2fs/status and check the 'Dirty' field; the value
>> should be close to zero
> 
> It's actually not zero.
> 
> Before triggering gc_urgent: 601
> After gc_urgent ends and doing a `sync`: 400
> 
> And after another 2nd gc_urgent run, it finally becomes 0.
> 
> So I'm guessing this wasn't intentional? :P

It's not intentional. I failed to reproduce this issue; could you add some
logs to track why we stop urgent GC even though there are still dirty
segments?

Thanks,

> 
> Thanks,
> 
> 




* Re: [f2fs-dev] f2fs: dirty memory increasing during gc_urgent
  2019-08-23 15:52     ` Chao Yu
@ 2019-08-25 11:06       ` Ju Hyung Park
  2019-08-26  7:17         ` Chao Yu
  0 siblings, 1 reply; 6+ messages in thread
From: Ju Hyung Park @ 2019-08-25 11:06 UTC (permalink / raw)
  To: Chao Yu; +Cc: linux-f2fs-devel

Hi Chao,

On Sat, Aug 24, 2019 at 12:52 AM Chao Yu <chao@kernel.org> wrote:
> It's not intentional. I failed to reproduce this issue; could you add some
> logs to track why we stop urgent GC even though there are still dirty
> segments?

I'm pretty sure you can reproduce this issue quite easily.

I can see this happening on multiple devices including my workstation,
laptop and my Android phone.

Here's a simple reproduction step:
1. Do `rm -rf * && git reset --hard` a few times in a Linux kernel Git tree
2. Do a `sync`
3. echo 1 > /sys/fs/f2fs/dev/gc_urgent_sleep_time
4. echo 1 > /sys/fs/f2fs/dev/gc_urgent
5. Once the number of "GC calls" stops changing, look at "Dirty" under
/sys/kernel/debug/f2fs/status. It's close to 0.
6. After doing a `sync`, "Dirty" increases a lot.
7. Remember the number of "GC calls" and run steps 3 and 4 again.
8. The number of "GC calls" increases by a few hundred.
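For what it's worth, the steps above can be sketched as a script (the mount
point, kernel-tree path, and device name below are made-up placeholders for
illustration, not my actual setup):

```shell
#!/bin/sh
# Sketch of the reproduction steps above. The paths and device name
# ("/mnt/f2fs/linux", "sda1") are placeholders; adjust for your setup.
DEV=sda1
cd /mnt/f2fs/linux || exit 1        # a Linux kernel git tree on f2fs

# 1. churn the tree a few times to dirty and fragment segments
for i in 1 2 3; do
    rm -rf ./*
    git reset --hard
done

# 2-4. flush, then start urgent GC with a minimal sleep time
sync
echo 1 > /sys/fs/f2fs/$DEV/gc_urgent_sleep_time
echo 1 > /sys/fs/f2fs/$DEV/gc_urgent

# 5-6. once "GC calls" stops changing, compare "Dirty" before/after sync
grep -E 'GC calls|Dirty' /sys/kernel/debug/f2fs/status
sync
grep -E 'GC calls|Dirty' /sys/kernel/debug/f2fs/status
```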

Thanks.




* Re: [f2fs-dev] f2fs: dirty memory increasing during gc_urgent
  2019-08-25 11:06       ` Ju Hyung Park
@ 2019-08-26  7:17         ` Chao Yu
  0 siblings, 0 replies; 6+ messages in thread
From: Chao Yu @ 2019-08-26  7:17 UTC (permalink / raw)
  To: Ju Hyung Park, Chao Yu; +Cc: linux-f2fs-devel

Hi Ju Hyung,

On 2019/8/25 19:06, Ju Hyung Park wrote:
> Hi Chao,
> 
> On Sat, Aug 24, 2019 at 12:52 AM Chao Yu <chao@kernel.org> wrote:
>> It's not intentional. I failed to reproduce this issue; could you add some
>> logs to track why we stop urgent GC even though there are still dirty
>> segments?
> 
> I'm pretty sure you can reproduce this issue quite easily.

Oh, I just noticed that my data sample was too small.

> 
> I can see this happening on multiple devices including my workstation,
> laptop and my Android phone.
> 
> Here's a simple reproduction step:
> 1. Do `rm -rf * && git reset --hard` a few times in a Linux kernel Git tree
> 2. Do a `sync`
> 3. echo 1 > /sys/fs/f2fs/dev/gc_urgent_sleep_time
> 4. echo 1 > /sys/fs/f2fs/dev/gc_urgent
> 5. Once the number of "GC calls" stops changing, look at "Dirty" under
> /sys/kernel/debug/f2fs/status. It's close to 0.
> 6. After doing a `sync`, "Dirty" increases a lot.
> 7. Remember the number of "GC calls" and run steps 3 and 4 again.
> 8. The number of "GC calls" increases by a few hundred.

Thanks for the provided test script.

I found out that after data block migration, the parent dnodes of the
migrated blocks become dirty, so once we execute step 6), some node
segments become dirty...

So after step 6), we can run 3), 4), and 6) again, and "Dirty" will be
close to zero; that's because node block migration does not dirty the
parent (indirect/double indirect) nodes.

Thanks,

> 
> Thanks.




end of thread, other threads:[~2019-08-26  7:17 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-08-14 14:15 [f2fs-dev] f2fs: dirty memory increasing during gc_urgent Ju Hyung Park
2019-08-15  6:48 ` Chao Yu
2019-08-16 15:37   ` Ju Hyung Park
2019-08-23 15:52     ` Chao Yu
2019-08-25 11:06       ` Ju Hyung Park
2019-08-26  7:17         ` Chao Yu
