linux-kernel.vger.kernel.org archive mirror
* [PATCH] btrfs: zstd: use spin_lock in timer function
@ 2022-04-08 18:15 Schspa Shi
  2022-04-08 18:44 ` David Sterba
  0 siblings, 1 reply; 11+ messages in thread
From: Schspa Shi @ 2022-04-08 18:15 UTC (permalink / raw)
  To: clm, josef, dsterba, terrelln; +Cc: schspa, linux-btrfs, linux-kernel

The timer callback runs in softirq (bh) context, so there is no need to disable bh again.

Signed-off-by: Schspa Shi <schspa@gmail.com>
---
 fs/btrfs/zstd.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
index fc42dd0badd7..faa74306f0b7 100644
--- a/fs/btrfs/zstd.c
+++ b/fs/btrfs/zstd.c
@@ -105,10 +105,10 @@ static void zstd_reclaim_timer_fn(struct timer_list *timer)
 	unsigned long reclaim_threshold = jiffies - ZSTD_BTRFS_RECLAIM_JIFFIES;
 	struct list_head *pos, *next;
 
-	spin_lock_bh(&wsm.lock);
+	spin_lock(&wsm.lock);
 
 	if (list_empty(&wsm.lru_list)) {
-		spin_unlock_bh(&wsm.lock);
+		spin_unlock(&wsm.lock);
 		return;
 	}
 
@@ -137,7 +137,7 @@ static void zstd_reclaim_timer_fn(struct timer_list *timer)
 	if (!list_empty(&wsm.lru_list))
 		mod_timer(&wsm.timer, jiffies + ZSTD_BTRFS_RECLAIM_JIFFIES);
 
-	spin_unlock_bh(&wsm.lock);
+	spin_unlock(&wsm.lock);
 }
 
 /*
-- 
2.24.3 (Apple Git-128)


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH] btrfs: zstd: use spin_lock in timer function
  2022-04-08 18:15 [PATCH] btrfs: zstd: use spin_lock in timer function Schspa Shi
@ 2022-04-08 18:44 ` David Sterba
  2022-04-09  7:36   ` Schspa Shi
  0 siblings, 1 reply; 11+ messages in thread
From: David Sterba @ 2022-04-08 18:44 UTC (permalink / raw)
  To: Schspa Shi; +Cc: clm, josef, dsterba, terrelln, linux-btrfs, linux-kernel

On Sat, Apr 09, 2022 at 02:15:23AM +0800, Schspa Shi wrote:
> timer callback was running on bh, and there is no need to disable bh again.

Why do you think so? There was a specific fix fee13fe96529 ("btrfs:
correct zstd workspace manager lock to use spin_lock_bh()") that
actually added the _bh, so you need to explain why exactly it's
not needed anymore and verify that the lockdep warning reported in
that fix does not happen.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH] btrfs: zstd: use spin_lock in timer function
  2022-04-08 18:44 ` David Sterba
@ 2022-04-09  7:36   ` Schspa Shi
  2022-04-11 13:51     ` David Sterba
  0 siblings, 1 reply; 11+ messages in thread
From: Schspa Shi @ 2022-04-09  7:36 UTC (permalink / raw)
  To: dsterba; +Cc: clm, Josef Bacik, dsterba, terrelln, linux-btrfs, linux-kernel

David Sterba <dsterba@suse.cz> writes:

> On Sat, Apr 09, 2022 at 02:15:23AM +0800, Schspa Shi wrote:
>> timer callback was running on bh, and there is no need to disable bh again.
>
> Why do you think so? There was a specific fix fee13fe96529 ("btrfs:
> correct zstd workspace manager lock to use spin_lock_bh()") that
> actually added the _bh, so either you need to explain why exactly it's
> not needed anymore and verify that the reported lockdep warning from the
> fix does not happen.

Yes, I've seen that fix, and wsm.lru_list is protected by wsm.lock.
This patch does not undo what that fix addressed; it is just a small
improvement that removes the unnecessary bh disabling, like
static inline void red_adaptative_timer(struct timer_list *t)
in net/sched/sch_red.c.

Because the critical section is only used by process context and
softirq context, it is safe to remove the bh disabling in softirq
context, since code running there cannot be preempted by another
softirq.

BRs
Schspa Shi

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH] btrfs: zstd: use spin_lock in timer function
  2022-04-09  7:36   ` Schspa Shi
@ 2022-04-11 13:51     ` David Sterba
  2022-04-11 15:55       ` [PATCH v2] btrfs: zstd: use spin_lock in timer callback Schspa Shi
  0 siblings, 1 reply; 11+ messages in thread
From: David Sterba @ 2022-04-11 13:51 UTC (permalink / raw)
  To: Schspa Shi; +Cc: clm, Josef Bacik, dsterba, terrelln, linux-btrfs, linux-kernel

On Sat, Apr 09, 2022 at 03:36:54PM +0800, Schspa Shi wrote:
> David Sterba <dsterba@suse.cz> writes:
> 
> > On Sat, Apr 09, 2022 at 02:15:23AM +0800, Schspa Shi wrote:
> >> timer callback was running on bh, and there is no need to disable bh again.
> >
> > Why do you think so? There was a specific fix fee13fe96529 ("btrfs:
> > correct zstd workspace manager lock to use spin_lock_bh()") that
> > actually added the _bh, so either you need to explain why exactly it's
> > not needed anymore and verify that the reported lockdep warning from the
> > fix does not happen.
> 
> Yes, I've seen this fix, and wsm.lru_list is protected by wsm.lock.
> This patch will not remove all changes that were fixed. Just a little
> improvement
> to remove the unnecessary bh disabling. Like
> static inline void red_adaptative_timer(struct timer_list *t)
> in net/sched/sch_red.c.
> 
> Because the critical section is only used by the process context and
> the softirq context,
> it is safe to remove bh_disable in the softirq context since it will
> not be preempted by the softirq.

So why haven't you written that as a proper explanation the first time?
You apparently analyzed the correctness. Please update the changelog and
also try to rephrase it so it's more readable; I roughly understand
what you mean, but some parts are still hard to read. Thanks.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v2] btrfs: zstd: use spin_lock in timer callback
  2022-04-11 13:51     ` David Sterba
@ 2022-04-11 15:55       ` Schspa Shi
  2022-04-13 14:58         ` Nikolay Borisov
  2022-04-14 19:26         ` David Sterba
  0 siblings, 2 replies; 11+ messages in thread
From: Schspa Shi @ 2022-04-11 15:55 UTC (permalink / raw)
  To: dsterba; +Cc: clm, dsterba, josef, linux-btrfs, linux-kernel, schspa, terrelln

This is an optimization of commit fee13fe96529 ("btrfs: correct zstd
workspace manager lock to use spin_lock_bh()").

The critical region protected by wsm.lock is only accessed from process
context and softirq context.

Because code running in softirq context cannot be preempted by another
softirq, there is no need to call spin_lock_bh(&wsm.lock) to disable
softirqs there; spin_lock(&wsm.lock) is enough in this situation.

Changelog:
v1 -> v2:
	- Change the commit message to make it more readable.

[1] https://lore.kernel.org/all/20220408181523.92322-1-schspa@gmail.com/

Signed-off-by: Schspa Shi <schspa@gmail.com>
---
 fs/btrfs/zstd.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
index fc42dd0badd7..faa74306f0b7 100644
--- a/fs/btrfs/zstd.c
+++ b/fs/btrfs/zstd.c
@@ -105,10 +105,10 @@ static void zstd_reclaim_timer_fn(struct timer_list *timer)
 	unsigned long reclaim_threshold = jiffies - ZSTD_BTRFS_RECLAIM_JIFFIES;
 	struct list_head *pos, *next;
 
-	spin_lock_bh(&wsm.lock);
+	spin_lock(&wsm.lock);
 
 	if (list_empty(&wsm.lru_list)) {
-		spin_unlock_bh(&wsm.lock);
+		spin_unlock(&wsm.lock);
 		return;
 	}
 
@@ -137,7 +137,7 @@ static void zstd_reclaim_timer_fn(struct timer_list *timer)
 	if (!list_empty(&wsm.lru_list))
 		mod_timer(&wsm.timer, jiffies + ZSTD_BTRFS_RECLAIM_JIFFIES);
 
-	spin_unlock_bh(&wsm.lock);
+	spin_unlock(&wsm.lock);
 }
 
 /*
-- 
2.24.3 (Apple Git-128)


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH v2] btrfs: zstd: use spin_lock in timer callback
  2022-04-11 15:55       ` [PATCH v2] btrfs: zstd: use spin_lock in timer callback Schspa Shi
@ 2022-04-13 14:58         ` Nikolay Borisov
  2022-04-13 16:00           ` David Sterba
  2022-04-13 16:03           ` Schspa Shi
  2022-04-14 19:26         ` David Sterba
  1 sibling, 2 replies; 11+ messages in thread
From: Nikolay Borisov @ 2022-04-13 14:58 UTC (permalink / raw)
  To: Schspa Shi, dsterba
  Cc: clm, dsterba, josef, linux-btrfs, linux-kernel, terrelln



On 11.04.22 г. 18:55 ч., Schspa Shi wrote:
> This is an optimization for fix fee13fe96529 ("btrfs:
> correct zstd workspace manager lock to use spin_lock_bh()")
> 
> The critical region for wsm.lock is only accessed by the process context and
> the softirq context.
> 
> Because in the soft interrupt, the critical section will not be preempted by the
> soft interrupt again, there is no need to call spin_lock_bh(&wsm.lock) to turn
> off the soft interrupt, spin_lock(&wsm.lock) is enough for this situation.
> 
> Changelog:
> v1 -> v2:
> 	- Change the commit message to make it more readable.
> 
> [1] https://lore.kernel.org/all/20220408181523.92322-1-schspa@gmail.com/
> 
> Signed-off-by: Schspa Shi <schspa@gmail.com>

Has there been any measurable impact from this change? While it's
correct, it does mean that someone looking at the code would see that in
one call site we use a plain spinlock and in another a _bh version,
which is somewhat inconsistent.

What's more I believe this is a noop since when softirqs are executing preemptible() would
be false due to preempt_count() being non-0 and in the bh-disabling code
in the spinlock we have:

 /* First entry of a task into a BH disabled section? */
 if (!current->softirq_disable_cnt) {
 	if (preemptible()) {
 		local_lock(&softirq_ctrl.lock);
 		/* Required to meet the RCU bottomhalf requirements. */
 		rcu_read_lock();
 	} else {
 		DEBUG_LOCKS_WARN_ON(this_cpu_read(softirq_ctrl.cnt));
 	}
 }


In this case we'd hit the else branch.
> ---
>   fs/btrfs/zstd.c | 6 +++---
>   1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
> index fc42dd0badd7..faa74306f0b7 100644
> --- a/fs/btrfs/zstd.c
> +++ b/fs/btrfs/zstd.c
> @@ -105,10 +105,10 @@ static void zstd_reclaim_timer_fn(struct timer_list *timer)
>   	unsigned long reclaim_threshold = jiffies - ZSTD_BTRFS_RECLAIM_JIFFIES;
>   	struct list_head *pos, *next;
>   
> -	spin_lock_bh(&wsm.lock);
> +	spin_lock(&wsm.lock);
>   
>   	if (list_empty(&wsm.lru_list)) {
> -		spin_unlock_bh(&wsm.lock);
> +		spin_unlock(&wsm.lock);
>   		return;
>   	}
>   
> @@ -137,7 +137,7 @@ static void zstd_reclaim_timer_fn(struct timer_list *timer)
>   	if (!list_empty(&wsm.lru_list))
>   		mod_timer(&wsm.timer, jiffies + ZSTD_BTRFS_RECLAIM_JIFFIES);
>   
> -	spin_unlock_bh(&wsm.lock);
> +	spin_unlock(&wsm.lock);
>   }
>   
>   /*

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v2] btrfs: zstd: use spin_lock in timer callback
  2022-04-13 14:58         ` Nikolay Borisov
@ 2022-04-13 16:00           ` David Sterba
  2022-04-13 16:03           ` Schspa Shi
  1 sibling, 0 replies; 11+ messages in thread
From: David Sterba @ 2022-04-13 16:00 UTC (permalink / raw)
  To: Nikolay Borisov
  Cc: Schspa Shi, clm, dsterba, josef, linux-btrfs, linux-kernel, terrelln

On Wed, Apr 13, 2022 at 05:58:41PM +0300, Nikolay Borisov wrote:
> 
> 
> On 11.04.22 г. 18:55 ч., Schspa Shi wrote:
> > This is an optimization for fix fee13fe96529 ("btrfs:
> > correct zstd workspace manager lock to use spin_lock_bh()")
> > 
> > The critical region for wsm.lock is only accessed by the process context and
> > the softirq context.
> > 
> > Because in the soft interrupt, the critical section will not be preempted by the
> > soft interrupt again, there is no need to call spin_lock_bh(&wsm.lock) to turn
> > off the soft interrupt, spin_lock(&wsm.lock) is enough for this situation.
> > 
> > Changelog:
> > v1 -> v2:
> > 	- Change the commit message to make it more readable.
> > 
> > [1] https://lore.kernel.org/all/20220408181523.92322-1-schspa@gmail.com/
> > 
> > Signed-off-by: Schspa Shi <schspa@gmail.com>
> 
> Has there been any measurable impact by this change? While it's correct it does mean that
>   someone looking at the code would see that in one call site we use plain spinlock and in
> another a _bh version and this is somewhat inconsistent.

I think it would be hard to measure the impact, maybe in some kind of
load the _bh version would be unnecessarily blocking some other threads.

Regarding the used locking primitives, I'll add a comment about that to
the function, it is indeed inconsistent and not obvious from the
context.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v2] btrfs: zstd: use spin_lock in timer callback
  2022-04-13 14:58         ` Nikolay Borisov
  2022-04-13 16:00           ` David Sterba
@ 2022-04-13 16:03           ` Schspa Shi
  2022-04-13 19:08             ` Nikolay Borisov
  1 sibling, 1 reply; 11+ messages in thread
From: Schspa Shi @ 2022-04-13 16:03 UTC (permalink / raw)
  To: Nikolay Borisov
  Cc: dsterba, clm, dsterba, josef, linux-btrfs, linux-kernel, terrelln

Nikolay Borisov <nborisov@suse.com> writes:

> On 11.04.22 г. 18:55 ч., Schspa Shi wrote:
>> This is an optimization for fix fee13fe96529 ("btrfs:
>> correct zstd workspace manager lock to use spin_lock_bh()")
>> The critical region for wsm.lock is only accessed by the process context and
>> the softirq context.
>> Because in the soft interrupt, the critical section will not be preempted by
>> the
>> soft interrupt again, there is no need to call spin_lock_bh(&wsm.lock) to turn
>> off the soft interrupt, spin_lock(&wsm.lock) is enough for this situation.
>> Changelog:
>> v1 -> v2:
>>      - Change the commit message to make it more readable.
>> [1] https://lore.kernel.org/all/20220408181523.92322-1-schspa@gmail.com/
>> Signed-off-by: Schspa Shi <schspa@gmail.com>
>
> Has there been any measurable impact by this change? While it's correct it does mean that
>  someone looking at the code would see that in one call site we use plain spinlock and in
> another a _bh version and this is somewhat inconsistent.
>
Yes, it may look a little confusing, but it saves a small amount of
CPU time, and static inline void red_adaptative_timer(struct timer_list *t)
in net/sched/sch_red.c has a similar usage.

> What's more I believe this is a noop since when softirqs are executing preemptible() would
> be false due to preempt_count() being non-0 and in the bh-disabling code
> in the spinlock we have:
>
>  /* First entry of a task into a BH disabled section? */
>  if (!current->softirq_disable_cnt) {
>  	if (preemptible()) {
>  		local_lock(&softirq_ctrl.lock);
>  		/* Required to meet the RCU bottomhalf requirements. */
>  		rcu_read_lock();
>  	} else {
>  		DEBUG_LOCKS_WARN_ON(this_cpu_read(softirq_ctrl.cnt));
>  	}
>  }
>
>
> In this case we'd hit the else branch.

We won't hit the else branch, because current->softirq_disable_cnt
won't be zero in the original case:

__do_softirq(void)
	softirq_handle_begin(void)
		__local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
		/* current->softirq_disable_cnt will be > 0 from here on */
	......
	zstd_reclaim_timer_fn(struct timer_list *timer)
		spin_lock_bh(&wsm.lock);
			__local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
			if (!current->softirq_disable_cnt) {
				/* this branch is not taken */
			}
	softirq_handle_end();

In this case, the __local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET) call
does nothing useful: it only increases the softirq disable depth, which
__local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET) then decreases again.

So it's safe to replace spin_lock_bh with spin_lock in a timer
callback function.


For ksoftirqd, it's all the same.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v2] btrfs: zstd: use spin_lock in timer callback
  2022-04-13 16:03           ` Schspa Shi
@ 2022-04-13 19:08             ` Nikolay Borisov
  2022-04-14 16:39               ` Schspa Shi
  0 siblings, 1 reply; 11+ messages in thread
From: Nikolay Borisov @ 2022-04-13 19:08 UTC (permalink / raw)
  To: Schspa Shi
  Cc: dsterba, clm, dsterba, josef, linux-btrfs, linux-kernel, terrelln



On 13.04.22 г. 19:03 ч., Schspa Shi wrote:
> Nikolay Borisov <nborisov@suse.com> writes:
> 
>> On 11.04.22 г. 18:55 ч., Schspa Shi wrote:
>>> This is an optimization for fix fee13fe96529 ("btrfs:
>>> correct zstd workspace manager lock to use spin_lock_bh()")
>>> The critical region for wsm.lock is only accessed by the process context and
>>> the softirq context.
>>> Because in the soft interrupt, the critical section will not be preempted by
>>> the
>>> soft interrupt again, there is no need to call spin_lock_bh(&wsm.lock) to turn
>>> off the soft interrupt, spin_lock(&wsm.lock) is enough for this situation.
>>> Changelog:
>>> v1 -> v2:
>>>       - Change the commit message to make it more readable.
>>> [1] https://lore.kernel.org/all/20220408181523.92322-1-schspa@gmail.com/
>>> Signed-off-by: Schspa Shi <schspa@gmail.com>
>>
>> Has there been any measurable impact by this change? While it's correct it does mean that
>>   someone looking at the code would see that in one call site we use plain spinlock and in
>> another a _bh version and this is somewhat inconsistent.
>>
> Yes, it may seem a little confused. but it's allowed to save some
> little peace of CPU times.
> and "static inline void red_adaptative_timer(struct timer_list *t) in
> net/sched/sch_red.c"
> have similar usage.
> 
>> What's more I believe this is a noop since when softirqs are executing preemptible() would
>> be false due to preempt_count() being non-0 and in the bh-disabling code
>> in the spinlock we have:
>>
>>   /* First entry of a task into a BH disabled section? */
>>   if (!current->softirq_disable_cnt) {
>>   	if (preemptible()) {
>>   		local_lock(&softirq_ctrl.lock);
>>   		/* Required to meet the RCU bottomhalf requirements. */
>>   		rcu_read_lock();
>>   	} else {
>>   		DEBUG_LOCKS_WARN_ON(this_cpu_read(softirq_ctrl.cnt));
>>   	}
>>   }
>>
>>
>> In this case we'd hit the else branch.
> 
> We won't hit the else branch. because current->softirq_disable_cnt
> won't be zero in the origin case.
> 
> __do_softirq(void)
>          softirq_handle_begin(void)
>          __local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
>                          current->softirq_disable_cnt will be > 0 at this time.

That's only relevant on CONFIG_PREEMPT_RT though; on usual kernels
softirq_handle_begin is empty. Furthermore, in the non-PREEMPT_RT case
preemptible() always returns false, which means that even in the
__do_softirq path we'll never increment softirq_disable_cnt. So if
anything this change is only beneficial (and theoretically at that, in
PREEMPT_RT scenarios).

>      ......
>          zstd_reclaim_timer_fn(struct timer_list *timer)
>                          spin_lock_bh(&wsm.lock);
>                          __local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
>                          if (!current->softirq_disable_cnt) {
>                                                  // this if branch won't hit
>                                          }
> 
>          softirq_handle_end();
> 
> In this case, the "__local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);"
> won't do anything useful it only
> increase softirq disable depth and decrease it in
> "__local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);".
> 
> So it's safe to replace spin_lock_bh with spin_lock in a timer
> callback function.
> 
> 
> For the ksoftirqd, it's all the same.
> 

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v2] btrfs: zstd: use spin_lock in timer callback
  2022-04-13 19:08             ` Nikolay Borisov
@ 2022-04-14 16:39               ` Schspa Shi
  0 siblings, 0 replies; 11+ messages in thread
From: Schspa Shi @ 2022-04-14 16:39 UTC (permalink / raw)
  To: Nikolay Borisov
  Cc: dsterba, clm, dsterba, Josef Bacik, linux-btrfs, linux-kernel, terrelln

Nikolay Borisov <nborisov@suse.com> writes:

> On 13.04.22 г. 19:03 ч., Schspa Shi wrote:
>> Nikolay Borisov <nborisov@suse.com> writes:
>>
>>> On 11.04.22 г. 18:55 ч., Schspa Shi wrote:
>>>> This is an optimization for fix fee13fe96529 ("btrfs:
>>>> correct zstd workspace manager lock to use spin_lock_bh()")
>>>> The critical region for wsm.lock is only accessed by the process context and
>>>> the softirq context.
>>>> Because in the soft interrupt, the critical section will not be preempted by
>>>> the
>>>> soft interrupt again, there is no need to call spin_lock_bh(&wsm.lock) to turn
>>>> off the soft interrupt, spin_lock(&wsm.lock) is enough for this situation.
>>>> Changelog:
>>>> v1 -> v2:
>>>>       - Change the commit message to make it more readable.
>>>> [1] https://lore.kernel.org/all/20220408181523.92322-1-schspa@gmail.com/
>>>> Signed-off-by: Schspa Shi <schspa@gmail.com>
>>>
>>> Has there been any measurable impact by this change? While it's correct it does mean that
>>>   someone looking at the code would see that in one call site we use plain spinlock and in
>>> another a _bh version and this is somewhat inconsistent.
>>>
>> Yes, it may seem a little confused. but it's allowed to save some
>> little peace of CPU times.
>> and "static inline void red_adaptative_timer(struct timer_list *t) in
>> net/sched/sch_red.c"
>> have similar usage.
>>
>>> What's more I believe this is a noop since when softirqs are executing preemptible() would
>>> be false due to preempt_count() being non-0 and in the bh-disabling code
>>> in the spinlock we have:
>>>
>>>   /* First entry of a task into a BH disabled section? */
>>>   if (!current->softirq_disable_cnt) {
>>>   	if (preemptible()) {
>>>   		local_lock(&softirq_ctrl.lock);
>>>   		/* Required to meet the RCU bottomhalf requirements. */
>>>   		rcu_read_lock();
>>>   	} else {
>>>   		DEBUG_LOCKS_WARN_ON(this_cpu_read(softirq_ctrl.cnt));
>>>   	}
>>>   }
>>>
>>>
>>> In this case we'd hit the else branch.
>> We won't hit the else branch. because current->softirq_disable_cnt
>> won't be zero in the origin case.
>> __do_softirq(void)
>>          softirq_handle_begin(void)
>>          __local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
>>                          current->softirq_disable_cnt will be > 0 at this time.
>
> That's only relevant on CONFIG_PREEMPT_RT though, on usual kernels
> softirq_handle_being is empty. Furthermore, in case of the non-preempt
> rt if preemptible() always returns false this means that even in the
> __do_softirq path we'll never increment softirq_disable_cnt. So if
> anything this change is only beneficial (theoretically at that in preempt_rt
> scenarios).
>
In either case, __local_bh_disable_ip adds to the preempt count or does
other extra work. For CONFIG_PREEMPT_RT, as we have discussed, the
change is OK and brings some benefit.

In the case of CONFIG_TRACE_IRQFLAGS:

#ifdef CONFIG_TRACE_IRQFLAGS
void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
{
        unsigned long flags;

        WARN_ON_ONCE(in_hardirq());

        raw_local_irq_save(flags);
        /*
         * The preempt tracer hooks into preempt_count_add and will break
         * lockdep because it calls back into lockdep after SOFTIRQ_OFFSET
         * is set and before current->softirq_enabled is cleared.
         * We must manually increment preempt_count here and manually
         * call the trace_preempt_off later.
         */
        __preempt_count_add(cnt);
        /*
         * Were softirqs turned off above:
         */
        if (softirq_count() == (cnt & SOFTIRQ_MASK))
                lockdep_softirqs_off(ip);
        raw_local_irq_restore(flags);

        if (preempt_count() == cnt) {
#ifdef CONFIG_DEBUG_PREEMPT
                current->preempt_disable_ip = get_lock_parent_ip();
#endif
                trace_preempt_off(CALLER_ADDR0, get_lock_parent_ip());
        }
}
EXPORT_SYMBOL(__local_bh_disable_ip);
#endif /* CONFIG_TRACE_IRQFLAGS */

There are also __preempt_count_add(cnt) and local IRQ disabling, which
slow the system down accordingly.

In another case (usual kernels):

#if defined(CONFIG_PREEMPT_RT) || defined(CONFIG_TRACE_IRQFLAGS)
extern void __local_bh_disable_ip(unsigned long ip, unsigned int cnt);
#else
static __always_inline void __local_bh_disable_ip(unsigned long ip,
unsigned int cnt)
{
        preempt_count_add(cnt);
        barrier();
}
#endif

There is a preempt_count_add(cnt), which is useless in the timer's
callback.

In summary:
In all of these cases there is a benefit to replacing spin_lock_bh with
spin_lock in the timer's callback.

>>      ......
>>          zstd_reclaim_timer_fn(struct timer_list *timer)
>>                          spin_lock_bh(&wsm.lock);
>>                          __local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
>>                          if (!current->softirq_disable_cnt) {
>>                                                  // this if branch won't hit
>>                                          }
>>          softirq_handle_end();
>> In this case, the "__local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);"
>> won't do anything useful it only
>> increase softirq disable depth and decrease it in
>> "__local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);".
>> So it's safe to replace spin_lock_bh with spin_lock in a timer
>> callback function.
>>
>> For the ksoftirqd, it's all the same.
>>

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v2] btrfs: zstd: use spin_lock in timer callback
  2022-04-11 15:55       ` [PATCH v2] btrfs: zstd: use spin_lock in timer callback Schspa Shi
  2022-04-13 14:58         ` Nikolay Borisov
@ 2022-04-14 19:26         ` David Sterba
  1 sibling, 0 replies; 11+ messages in thread
From: David Sterba @ 2022-04-14 19:26 UTC (permalink / raw)
  To: Schspa Shi
  Cc: dsterba, clm, dsterba, josef, linux-btrfs, linux-kernel, terrelln

On Mon, Apr 11, 2022 at 11:55:41PM +0800, Schspa Shi wrote:
> This is an optimization for fix fee13fe96529 ("btrfs:
> correct zstd workspace manager lock to use spin_lock_bh()")
> 
> The critical region for wsm.lock is only accessed by the process context and
> the softirq context.
> 
> Because in the soft interrupt, the critical section will not be preempted by the
> soft interrupt again, there is no need to call spin_lock_bh(&wsm.lock) to turn
> off the soft interrupt, spin_lock(&wsm.lock) is enough for this situation.
> 
> Changelog:
> v1 -> v2:
> 	- Change the commit message to make it more readable.
> 
> [1] https://lore.kernel.org/all/20220408181523.92322-1-schspa@gmail.com/
> 
> Signed-off-by: Schspa Shi <schspa@gmail.com>

Added to misc-next, thanks.

^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2022-04-14 19:31 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-04-08 18:15 [PATCH] btrfs: zstd: use spin_lock in timer function Schspa Shi
2022-04-08 18:44 ` David Sterba
2022-04-09  7:36   ` Schspa Shi
2022-04-11 13:51     ` David Sterba
2022-04-11 15:55       ` [PATCH v2] btrfs: zstd: use spin_lock in timer callback Schspa Shi
2022-04-13 14:58         ` Nikolay Borisov
2022-04-13 16:00           ` David Sterba
2022-04-13 16:03           ` Schspa Shi
2022-04-13 19:08             ` Nikolay Borisov
2022-04-14 16:39               ` Schspa Shi
2022-04-14 19:26         ` David Sterba
