linux-kernel.vger.kernel.org archive mirror

* [PATCH 1/3] f2fs: check return value of write_checkpoint during fstrim
@ 2016-08-21 15:21 Chao Yu
  2016-08-21 15:21 ` [PATCH 2/3] f2fs: schedule in between two continuous batch discards Chao Yu
  2016-08-21 15:21 ` [PATCH 3/3] f2fs: remove redundant judgement condition in available_free_memory Chao Yu
  0 siblings, 2 replies; 8+ messages in thread
From: Chao Yu @ 2016-08-21 15:21 UTC
  To: jaegeuk; +Cc: linux-f2fs-devel, linux-kernel, Chao Yu

From: Chao Yu <yuchao0@huawei.com>

During fstrim, if one of the multiple write_checkpoint calls fails, break
out of the loop and return the error number to the caller.
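
For context, a sketch of the resulting flow in f2fs_trim_fs; the loop
condition, surrounding context, and the final return are assumed from the
description above, not copied verbatim from the tree:

	/* "more_batches" is a stand-in for the real loop condition */
	while (more_batches) {
		mutex_lock(&sbi->gc_mutex);
		err = write_checkpoint(sbi, &cpc);
		mutex_unlock(&sbi->gc_mutex);
		if (err)	/* new: stop batching on checkpoint failure */
			break;
	}
out:
	range->len = F2FS_BLK_TO_BYTES(cpc.trimmed);
	return err;	/* assumed: the error number reaches the caller */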

Signed-off-by: Chao Yu <yuchao0@huawei.com>
---
 fs/f2fs/segment.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
index a394012..020767c 100644
--- a/fs/f2fs/segment.c
+++ b/fs/f2fs/segment.c
@@ -1303,6 +1303,8 @@ int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range)
 		mutex_lock(&sbi->gc_mutex);
 		err = write_checkpoint(sbi, &cpc);
 		mutex_unlock(&sbi->gc_mutex);
+		if (err)
+			break;
 	}
 out:
 	range->len = F2FS_BLK_TO_BYTES(cpc.trimmed);
-- 
2.7.2

* [PATCH 2/3] f2fs: schedule in between two continuous batch discards
  2016-08-21 15:21 [PATCH 1/3] f2fs: check return value of write_checkpoint during fstrim Chao Yu
@ 2016-08-21 15:21 ` Chao Yu
  2016-08-23 16:53   ` Jaegeuk Kim
  2016-08-21 15:21 ` [PATCH 3/3] f2fs: remove redundant judgement condition in available_free_memory Chao Yu
  1 sibling, 1 reply; 8+ messages in thread
From: Chao Yu @ 2016-08-21 15:21 UTC
  To: jaegeuk; +Cc: linux-f2fs-devel, linux-kernel, Chao Yu

From: Chao Yu <yuchao0@huawei.com>

The batch discard approach of fstrim grabs/releases the gc_mutex lock
repeatedly, which makes contention on the lock more intensive.

So after one batch of discards is issued in checkpoint and the lock is
released, it's better to call schedule() to give other competitors more
opportunity to grab the gc_mutex lock.

Signed-off-by: Chao Yu <yuchao0@huawei.com>
---
 fs/f2fs/segment.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
index 020767c..d0f74eb 100644
--- a/fs/f2fs/segment.c
+++ b/fs/f2fs/segment.c
@@ -1305,6 +1305,8 @@ int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range)
 		mutex_unlock(&sbi->gc_mutex);
 		if (err)
 			break;
+
+		schedule();
 	}
 out:
 	range->len = F2FS_BLK_TO_BYTES(cpc.trimmed);
-- 
2.7.2

* [PATCH 3/3] f2fs: remove redundant judgement condition in available_free_memory
  2016-08-21 15:21 [PATCH 1/3] f2fs: check return value of write_checkpoint during fstrim Chao Yu
  2016-08-21 15:21 ` [PATCH 2/3] f2fs: schedule in between two continuous batch discards Chao Yu
@ 2016-08-21 15:21 ` Chao Yu
  1 sibling, 0 replies; 8+ messages in thread
From: Chao Yu @ 2016-08-21 15:21 UTC
  To: jaegeuk; +Cc: linux-f2fs-devel, linux-kernel, Chao Yu

From: Chao Yu <yuchao0@huawei.com>

In available_free_memory, there are two identical judgement conditions
used for checking NAT excess; remove one of them.
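
For reference, the removed check duplicates what excess_cached_nats()
already does; a sketch of the assumed helper, inferred from the removed
condition (its exact definition is not shown in this thread):

	/* assumed definition, mirroring the removed check */
	static inline bool excess_cached_nats(struct f2fs_sb_info *sbi)
	{
		return NM_I(sbi)->nat_cnt > DEF_NAT_CACHE_THRESHOLD;
	}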

Signed-off-by: Chao Yu <yuchao0@huawei.com>
---
 fs/f2fs/node.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index f75d197..8a28800 100644
--- a/fs/f2fs/node.c
+++ b/fs/f2fs/node.c
@@ -54,8 +54,6 @@ bool available_free_memory(struct f2fs_sb_info *sbi, int type)
 		res = mem_size < ((avail_ram * nm_i->ram_thresh / 100) >> 2);
 		if (excess_cached_nats(sbi))
 			res = false;
-		if (nm_i->nat_cnt > DEF_NAT_CACHE_THRESHOLD)
-			res = false;
 	} else if (type == DIRTY_DENTS) {
 		if (sbi->sb->s_bdi->wb.dirty_exceeded)
 			return false;
-- 
2.7.2

* Re: [PATCH 2/3] f2fs: schedule in between two continuous batch discards
  2016-08-21 15:21 ` [PATCH 2/3] f2fs: schedule in between two continuous batch discards Chao Yu
@ 2016-08-23 16:53   ` Jaegeuk Kim
  2016-08-25  9:22     ` Chao Yu
  0 siblings, 1 reply; 8+ messages in thread
From: Jaegeuk Kim @ 2016-08-23 16:53 UTC
  To: Chao Yu; +Cc: linux-f2fs-devel, linux-kernel, Chao Yu

Hi Chao,

On Sun, Aug 21, 2016 at 11:21:30PM +0800, Chao Yu wrote:
> From: Chao Yu <yuchao0@huawei.com>
> 
> The batch discard approach of fstrim grabs/releases the gc_mutex lock
> repeatedly, which makes contention on the lock more intensive.
>
> So after one batch of discards is issued in checkpoint and the lock is
> released, it's better to call schedule() to give other competitors more
> opportunity to grab the gc_mutex lock.
> 
> Signed-off-by: Chao Yu <yuchao0@huawei.com>
> ---
>  fs/f2fs/segment.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
> index 020767c..d0f74eb 100644
> --- a/fs/f2fs/segment.c
> +++ b/fs/f2fs/segment.c
> @@ -1305,6 +1305,8 @@ int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range)
>  		mutex_unlock(&sbi->gc_mutex);
>  		if (err)
>  			break;
> +
> +		schedule();

Hmm, if another thread is already waiting for gc_mutex, we don't need this here.
In order to avoid long latency, wouldn't it be enough to reduce the batch size?
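
(For reference, the batch size here presumably maps to the number of
segments trimmed per checkpoint round in f2fs_trim_fs; a sketch of the
assumed knob, not copied verbatim from the tree:)

	/* assumed: per-batch segment count, e.g. from a trim_sections knob */
	cpc.trim_end = min_t(__u64, cpc.trim_start +
				BATCHED_TRIM_SEGMENTS(sbi) - 1, end_segno);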

Thanks,

>  	}
>  out:
>  	range->len = F2FS_BLK_TO_BYTES(cpc.trimmed);
> -- 
> 2.7.2

* Re: [PATCH 2/3] f2fs: schedule in between two continuous batch discards
  2016-08-23 16:53   ` Jaegeuk Kim
@ 2016-08-25  9:22     ` Chao Yu
  2016-08-25 16:57       ` Jaegeuk Kim
  0 siblings, 1 reply; 8+ messages in thread
From: Chao Yu @ 2016-08-25  9:22 UTC
  To: Jaegeuk Kim, Chao Yu; +Cc: linux-f2fs-devel, linux-kernel

Hi Jaegeuk,

On 2016/8/24 0:53, Jaegeuk Kim wrote:
> Hi Chao,
> 
> On Sun, Aug 21, 2016 at 11:21:30PM +0800, Chao Yu wrote:
>> From: Chao Yu <yuchao0@huawei.com>
>>
>> The batch discard approach of fstrim grabs/releases the gc_mutex lock
>> repeatedly, which makes contention on the lock more intensive.
>>
>> So after one batch of discards is issued in checkpoint and the lock is
>> released, it's better to call schedule() to give other competitors more
>> opportunity to grab the gc_mutex lock.
>>
>> Signed-off-by: Chao Yu <yuchao0@huawei.com>
>> ---
>>  fs/f2fs/segment.c | 2 ++
>>  1 file changed, 2 insertions(+)
>>
>> diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
>> index 020767c..d0f74eb 100644
>> --- a/fs/f2fs/segment.c
>> +++ b/fs/f2fs/segment.c
>> @@ -1305,6 +1305,8 @@ int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range)
>>  		mutex_unlock(&sbi->gc_mutex);
>>  		if (err)
>>  			break;
>> +
>> +		schedule();
> 
> Hmm, if another thread is already waiting for gc_mutex, we don't need this here.
> In order to avoid long latency, wouldn't it be enough to reduce the batch size?

Hmm, when fstrim calls mutex_unlock we pop one blocked locker from the
FIFO list of the mutex lock and wake it up, then the fstrim thread will
try to lock gc_mutex for the next batch trim, so the popped locker and
the fstrim thread enter a new competition for gc_mutex. If the fstrim
thread is running on a big core and the popped locker is running on a
small core, we can't guarantee the popped locker will win the race; most
of the time, the fstrim thread will win. So in order to reduce starvation
of other gc_mutex lockers, it's better to call schedule() here.
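
To illustrate, an annotated version of the patched loop (the loop
condition is paraphrased):

	while (more_batches) {	/* stand-in for the real loop condition */
		mutex_lock(&sbi->gc_mutex);
		err = write_checkpoint(sbi, &cpc);
		mutex_unlock(&sbi->gc_mutex);	/* wakes one FIFO waiter */
		if (err)
			break;
		/*
		 * Without this, the fstrim thread could re-acquire gc_mutex
		 * before the woken waiter is scheduled in, starving it.
		 */
		schedule();
	}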

Thanks,

> 
> Thanks,
> 
>>  	}
>>  out:
>>  	range->len = F2FS_BLK_TO_BYTES(cpc.trimmed);
>> -- 
>> 2.7.2
> 
> .
> 

* Re: [PATCH 2/3] f2fs: schedule in between two continuous batch discards
  2016-08-25  9:22     ` Chao Yu
@ 2016-08-25 16:57       ` Jaegeuk Kim
  2016-08-26  0:50         ` Chao Yu
  0 siblings, 1 reply; 8+ messages in thread
From: Jaegeuk Kim @ 2016-08-25 16:57 UTC
  To: Chao Yu; +Cc: Chao Yu, linux-f2fs-devel, linux-kernel

Hi Chao,

On Thu, Aug 25, 2016 at 05:22:29PM +0800, Chao Yu wrote:
> Hi Jaegeuk,
> 
> On 2016/8/24 0:53, Jaegeuk Kim wrote:
> > Hi Chao,
> > 
> > On Sun, Aug 21, 2016 at 11:21:30PM +0800, Chao Yu wrote:
> >> From: Chao Yu <yuchao0@huawei.com>
> >>
> >> The batch discard approach of fstrim grabs/releases the gc_mutex lock
> >> repeatedly, which makes contention on the lock more intensive.
> >>
> >> So after one batch of discards is issued in checkpoint and the lock is
> >> released, it's better to call schedule() to give other competitors more
> >> opportunity to grab the gc_mutex lock.
> >>
> >> Signed-off-by: Chao Yu <yuchao0@huawei.com>
> >> ---
> >>  fs/f2fs/segment.c | 2 ++
> >>  1 file changed, 2 insertions(+)
> >>
> >> diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
> >> index 020767c..d0f74eb 100644
> >> --- a/fs/f2fs/segment.c
> >> +++ b/fs/f2fs/segment.c
> >> @@ -1305,6 +1305,8 @@ int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range)
> >>  		mutex_unlock(&sbi->gc_mutex);
> >>  		if (err)
> >>  			break;
> >> +
> >> +		schedule();
> > 
> > Hmm, if another thread is already waiting for gc_mutex, we don't need this here.
> > In order to avoid long latency, wouldn't it be enough to reduce the batch size?
> 
> Hmm, when fstrim calls mutex_unlock we pop one blocked locker from the
> FIFO list of the mutex lock and wake it up, then the fstrim thread will
> try to lock gc_mutex for the next batch trim, so the popped locker and
> the fstrim thread enter a new competition for gc_mutex.

Before fstrim tries to grab gc_mutex again, there are already blocked tasks
waiting for gc_mutex. Hence the next one should be selected by FIFO, no?

Thanks,

> If the fstrim thread is running on a big core and the popped locker is
> running on a small core, we can't guarantee the popped locker will win
> the race; most of the time, the fstrim thread will win. So in order to
> reduce starvation of other gc_mutex lockers, it's better to call
> schedule() here.
> 
> Thanks,
> 
> > 
> > Thanks,
> > 
> >>  	}
> >>  out:
> >>  	range->len = F2FS_BLK_TO_BYTES(cpc.trimmed);
> >> -- 
> >> 2.7.2
> > 
> > .
> > 

* Re: [PATCH 2/3] f2fs: schedule in between two continuous batch discards
  2016-08-25 16:57       ` Jaegeuk Kim
@ 2016-08-26  0:50         ` Chao Yu
  2016-08-26  2:50           ` Jaegeuk Kim
  0 siblings, 1 reply; 8+ messages in thread
From: Chao Yu @ 2016-08-26  0:50 UTC
  To: Jaegeuk Kim; +Cc: Chao Yu, linux-f2fs-devel, linux-kernel

Hi Jaegeuk,

On 2016/8/26 0:57, Jaegeuk Kim wrote:
> Hi Chao,
> 
> On Thu, Aug 25, 2016 at 05:22:29PM +0800, Chao Yu wrote:
>> Hi Jaegeuk,
>>
>> On 2016/8/24 0:53, Jaegeuk Kim wrote:
>>> Hi Chao,
>>>
>>> On Sun, Aug 21, 2016 at 11:21:30PM +0800, Chao Yu wrote:
>>>> From: Chao Yu <yuchao0@huawei.com>
>>>>
>>>> The batch discard approach of fstrim grabs/releases the gc_mutex lock
>>>> repeatedly, which makes contention on the lock more intensive.
>>>>
>>>> So after one batch of discards is issued in checkpoint and the lock is
>>>> released, it's better to call schedule() to give other competitors more
>>>> opportunity to grab the gc_mutex lock.
>>>>
>>>> Signed-off-by: Chao Yu <yuchao0@huawei.com>
>>>> ---
>>>>  fs/f2fs/segment.c | 2 ++
>>>>  1 file changed, 2 insertions(+)
>>>>
>>>> diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
>>>> index 020767c..d0f74eb 100644
>>>> --- a/fs/f2fs/segment.c
>>>> +++ b/fs/f2fs/segment.c
>>>> @@ -1305,6 +1305,8 @@ int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range)
>>>>  		mutex_unlock(&sbi->gc_mutex);
>>>>  		if (err)
>>>>  			break;
>>>> +
>>>> +		schedule();
>>>
>>> Hmm, if another thread is already waiting for gc_mutex, we don't need this here.
>>> In order to avoid long latency, wouldn't it be enough to reduce the batch size?
>>
>> Hmm, when fstrim calls mutex_unlock we pop one blocked locker from the
>> FIFO list of the mutex lock and wake it up, then the fstrim thread will
>> try to lock gc_mutex for the next batch trim, so the popped locker and
>> the fstrim thread enter a new competition for gc_mutex.
> 
> Before fstrim tries to grab gc_mutex again, there are already blocked tasks
> waiting for gc_mutex. Hence the next one should be selected by FIFO, no?

The next one to be woken up is selected by FIFO, but the woken one
still needs to race with other mutex lock grabbers.

So there is no guarantee that the woken one will get the lock.

Thanks,

> 
> Thanks,
> 
>> If the fstrim thread is running on a big core and the popped locker is
>> running on a small core, we can't guarantee the popped locker will win
>> the race; most of the time, the fstrim thread will win. So in order to
>> reduce starvation of other gc_mutex lockers, it's better to call
>> schedule() here.
>>
>> Thanks,
>>
>>>
>>> Thanks,
>>>
>>>>  	}
>>>>  out:
>>>>  	range->len = F2FS_BLK_TO_BYTES(cpc.trimmed);
>>>> -- 
>>>> 2.7.2
>>>
>>> .
>>>
> 
> .
> 

* Re: [PATCH 2/3] f2fs: schedule in between two continuous batch discards
  2016-08-26  0:50         ` Chao Yu
@ 2016-08-26  2:50           ` Jaegeuk Kim
  0 siblings, 0 replies; 8+ messages in thread
From: Jaegeuk Kim @ 2016-08-26  2:50 UTC
  To: Chao Yu; +Cc: Chao Yu, linux-f2fs-devel, linux-kernel

On Fri, Aug 26, 2016 at 08:50:50AM +0800, Chao Yu wrote:
> Hi Jaegeuk,
> 
> On 2016/8/26 0:57, Jaegeuk Kim wrote:
> > Hi Chao,
> > 
> > On Thu, Aug 25, 2016 at 05:22:29PM +0800, Chao Yu wrote:
> >> Hi Jaegeuk,
> >>
> >> On 2016/8/24 0:53, Jaegeuk Kim wrote:
> >>> Hi Chao,
> >>>
> >>> On Sun, Aug 21, 2016 at 11:21:30PM +0800, Chao Yu wrote:
> >>>> From: Chao Yu <yuchao0@huawei.com>
> >>>>
> >>>> The batch discard approach of fstrim grabs/releases the gc_mutex lock
> >>>> repeatedly, which makes contention on the lock more intensive.
> >>>>
> >>>> So after one batch of discards is issued in checkpoint and the lock is
> >>>> released, it's better to call schedule() to give other competitors more
> >>>> opportunity to grab the gc_mutex lock.
> >>>>
> >>>> Signed-off-by: Chao Yu <yuchao0@huawei.com>
> >>>> ---
> >>>>  fs/f2fs/segment.c | 2 ++
> >>>>  1 file changed, 2 insertions(+)
> >>>>
> >>>> diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
> >>>> index 020767c..d0f74eb 100644
> >>>> --- a/fs/f2fs/segment.c
> >>>> +++ b/fs/f2fs/segment.c
> >>>> @@ -1305,6 +1305,8 @@ int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range)
> >>>>  		mutex_unlock(&sbi->gc_mutex);
> >>>>  		if (err)
> >>>>  			break;
> >>>> +
> >>>> +		schedule();
> >>>
> >>> Hmm, if another thread is already waiting for gc_mutex, we don't need this here.
> >>> In order to avoid long latency, wouldn't it be enough to reduce the batch size?
> >>
> >> Hmm, when fstrim calls mutex_unlock we pop one blocked locker from the
> >> FIFO list of the mutex lock and wake it up, then the fstrim thread will
> >> try to lock gc_mutex for the next batch trim, so the popped locker and
> >> the fstrim thread enter a new competition for gc_mutex.
> > 
> > Before fstrim tries to grab gc_mutex again, there are already blocked tasks
> > waiting for gc_mutex. Hence the next one should be selected by FIFO, no?
> 
> The next one to be woken up is selected by FIFO, but the woken one
> still needs to race with other mutex lock grabbers.
>
> So there is no guarantee that the woken one will get the lock.

Okay, I'll merge this. :)

Thanks,

> 
> Thanks,
> 
> > 
> > Thanks,
> > 
> >> If the fstrim thread is running on a big core and the popped locker is
> >> running on a small core, we can't guarantee the popped locker will win
> >> the race; most of the time, the fstrim thread will win. So in order to
> >> reduce starvation of other gc_mutex lockers, it's better to call
> >> schedule() here.
> >>
> >> Thanks,
> >>
> >>>
> >>> Thanks,
> >>>
> >>>>  	}
> >>>>  out:
> >>>>  	range->len = F2FS_BLK_TO_BYTES(cpc.trimmed);
> >>>> -- 
> >>>> 2.7.2
> >>>
> >>> .
> >>>
> > 
> > .
> > 

Thread overview: 8+ messages
2016-08-21 15:21 [PATCH 1/3] f2fs: check return value of write_checkpoint during fstrim Chao Yu
2016-08-21 15:21 ` [PATCH 2/3] f2fs: schedule in between two continuous batch discards Chao Yu
2016-08-23 16:53   ` Jaegeuk Kim
2016-08-25  9:22     ` Chao Yu
2016-08-25 16:57       ` Jaegeuk Kim
2016-08-26  0:50         ` Chao Yu
2016-08-26  2:50           ` Jaegeuk Kim
2016-08-21 15:21 ` [PATCH 3/3] f2fs: remove redundant judgement condition in available_free_memory Chao Yu
