* [PATCH] x86/hpet: add a lock when cpu clear cpumask in hpet_broadcast_exit();
@ 2018-04-16  8:00 Davidwang
  2018-04-16 11:48 ` Jan Beulich
  0 siblings, 1 reply; 4+ messages in thread
From: Davidwang @ 2018-04-16  8:00 UTC (permalink / raw)
  To: xen-devel; +Cc: fionali, jbeulich

From: David Wang <davidwang@zhaoxin.com>

Because of hpet_get_channel(), CPUs may share an in-use channel. A
CPU therefore must not clear its bit in the channel's cpumask while
another CPU is between its cpumask_empty() and cpumask_first() calls.
For example, if core 0 and core 1 share a channel, the cpumask is
0x3. If core 0 clears its bit after core 1 has executed
cpumask_empty() but before cpumask_first(), cpumask_first() returns
nr_cpu_ids, which triggers ASSERT(cpu < nr_cpu_ids).
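
An illustrative interleaving (the hpet_detach_channel() internals
shown here are assumed for illustration, not quoted from the source;
core 1 is taken to own the channel, i.e. ch->cpu == 1):

/*
 * Initially ch->cpumask = 0x3 (core 0 and core 1).
 *
 * core 1: hpet_broadcast_exit()
 *           cpumask_clear_cpu(1, ch->cpumask)   -> mask = 0x1
 *           hpet_detach_channel(1, ch)
 *             cpumask_empty(ch->cpumask)        -> false (bit 0 still set)
 * core 0: hpet_broadcast_exit()
 *           cpumask_clear_cpu(0, ch->cpumask)   -> mask = 0x0
 * core 1:     cpumask_first(ch->cpumask)        -> nr_cpu_ids (mask empty)
 *             ch->cpu = nr_cpu_ids
 *             set_channel_irq_affinity(ch)
 *               cpumask_of(ch->cpu)             -> ASSERT(cpu < nr_cpu_ids)
 */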

Signed-off-by: David Wang <davidwang@zhaoxin.com>
---
 xen/arch/x86/hpet.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/arch/x86/hpet.c b/xen/arch/x86/hpet.c
index bc7a851..69a7b3a 100644
--- a/xen/arch/x86/hpet.c
+++ b/xen/arch/x86/hpet.c
@@ -740,7 +740,9 @@ void hpet_broadcast_exit(void)
     if ( !reprogram_timer(deadline) )
         raise_softirq(TIMER_SOFTIRQ);
 
+    spin_lock_irq(&ch->lock);
     cpumask_clear_cpu(cpu, ch->cpumask);
+    spin_unlock_irq(&ch->lock);
 
     if ( !(ch->flags & HPET_EVT_LEGACY) )
         hpet_detach_channel(cpu, ch);
-- 
2.7.4



* Re: [PATCH] x86/hpet: add a lock when cpu clear cpumask in hpet_broadcast_exit();
  2018-04-16  8:00 [PATCH] x86/hpet: add a lock when cpu clear cpumask in hpet_broadcast_exit(); Davidwang
@ 2018-04-16 11:48 ` Jan Beulich
  2018-04-17  5:44   ` Reply: " David Wang
  0 siblings, 1 reply; 4+ messages in thread
From: Jan Beulich @ 2018-04-16 11:48 UTC (permalink / raw)
  To: Davidwang; +Cc: xen-devel, Fiona Li(BJ-RD)

>>> On 16.04.18 at 10:00, <Davidwang@zhaoxin.com> wrote:
> Because of hpet_get_channel(), CPUs may share an in-use channel. A
> CPU therefore must not clear its bit in the channel's cpumask while
> another CPU is between its cpumask_empty() and cpumask_first() calls.
> For example, if core 0 and core 1 share a channel, the cpumask is
> 0x3. If core 0 clears its bit after core 1 has executed
> cpumask_empty() but before cpumask_first(), cpumask_first() returns
> nr_cpu_ids, which triggers ASSERT(cpu < nr_cpu_ids).

I can see your point, but that's in hpet_detach_channel() afaics,
which your description doesn't mention at all. And the assertion
would - afaict - happen through hpet_detach_channel() ->
set_channel_irq_affinity() -> cpumask_of() (as of e8bf5addc9).

Please realize that it helps review quite a bit if you write concise
descriptions for your changes.

> --- a/xen/arch/x86/hpet.c
> +++ b/xen/arch/x86/hpet.c
> @@ -740,7 +740,9 @@ void hpet_broadcast_exit(void)
>      if ( !reprogram_timer(deadline) )
>          raise_softirq(TIMER_SOFTIRQ);
>  
> +    spin_lock_irq(&ch->lock);
>      cpumask_clear_cpu(cpu, ch->cpumask);
> +    spin_unlock_irq(&ch->lock);

Rather than this, how about eliminating the cpumask_empty() call
in favor of just the cpumask_first() one in hpet_detach_channel()
(with a local variable storing the intermediate result)? Or, if
acquiring the lock can't be avoided here, you would perhaps better
not drop it before calling hpet_detach_channel() (which has only this
single call site and hence would be straightforward to adjust).
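
A minimal sketch of the local-variable variant, for illustration only
(the shape of hpet_detach_channel() and its helpers is assumed from
context, not quoted from the tree):

    static void hpet_detach_channel(unsigned int cpu,
                                    struct hpet_event_channel *ch)
    {
        spin_lock_irq(&ch->lock);

        per_cpu(cpu_bc_channel, cpu) = NULL;

        if ( cpu != ch->cpu )
            spin_unlock_irq(&ch->lock);
        else
        {
            /* One read of the mask instead of cpumask_empty() plus
             * cpumask_first(), so a concurrent cpumask_clear_cpu()
             * can no longer strike between the two. */
            unsigned int next = cpumask_first(ch->cpumask);

            if ( next >= nr_cpu_ids )        /* mask was empty */
            {
                ch->cpu = -1;
                clear_bit(HPET_EVT_USED_BIT, &ch->flags);
                spin_unlock_irq(&ch->lock);
            }
            else
            {
                ch->cpu = next;
                set_channel_irq_affinity(ch); /* assumed to drop ch->lock */
            }
        }
    }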

Jan




* Reply: [PATCH] x86/hpet: add a lock when cpu clear cpumask in hpet_broadcast_exit();
  2018-04-16 11:48 ` Jan Beulich
@ 2018-04-17  5:44   ` David Wang
  2018-04-17  9:32     ` Jan Beulich
  0 siblings, 1 reply; 4+ messages in thread
From: David Wang @ 2018-04-17  5:44 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Fiona Li(BJ-RD)



Hi Jan,

Thank you for pointing out the problem. I will revise it. Answers are
inline below.

From: Jan Beulich <JBeulich@suse.com>
Sent: April 16, 2018 19:48
To: David Wang
Cc: xen-devel; Fiona Li(BJ-RD)
Subject: Re: [PATCH] x86/hpet: add a lock when cpu clear cpumask in hpet_broadcast_exit();

>>> On 16.04.18 at 10:00, <Davidwang@zhaoxin.com> wrote:
> Because of hpet_get_channel(), CPUs may share an in-use channel. A
> CPU therefore must not clear its bit in the channel's cpumask while
> another CPU is between its cpumask_empty() and cpumask_first() calls.
> For example, if core 0 and core 1 share a channel, the cpumask is
> 0x3. If core 0 clears its bit after core 1 has executed
> cpumask_empty() but before cpumask_first(), cpumask_first() returns
> nr_cpu_ids, which triggers ASSERT(cpu < nr_cpu_ids).

I can see your point, but that's in hpet_detach_channel() afaics,
which your description doesn't mention at all. And the assertion
would - afaict - happen through hpet_detach_channel() ->
set_channel_irq_affinity() -> cpumask_of() (as of e8bf5addc9).

Please realize that it helps review quite a bit if you write concise
descriptions for your changes.

[DavidWang]: OK, I will revise it. Thanks.

> --- a/xen/arch/x86/hpet.c
> +++ b/xen/arch/x86/hpet.c
> @@ -740,7 +740,9 @@ void hpet_broadcast_exit(void)
>      if ( !reprogram_timer(deadline) )
>          raise_softirq(TIMER_SOFTIRQ);
>  
> +    spin_lock_irq(&ch->lock);
>      cpumask_clear_cpu(cpu, ch->cpumask);
> +    spin_unlock_irq(&ch->lock);

Rather than this, how about eliminating the cpumask_empty() call
in favor of just the cpumask_first() one in hpet_detach_channel()
(with a local variable storing the intermediate result)? Or, if
acquiring the lock can't be avoided here, you would perhaps better
not drop it before calling hpet_detach_channel() (which has only this
single call site and hence would be straightforward to adjust).

[DavidWang]: Experiments confirmed that storing the intermediate
result in a local variable solves the problem. On the one hand, a
local variable is more efficient than adding a lock; on the other
hand, it is less clear to the reader. In fact, hpet_detach_channel()
already acquires ch->lock. Could we instead move that lock
acquisition backward, to before the cpumask_clear_cpu() call, and
release it inside hpet_detach_channel()?
Looking forward to your suggestion.
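
A sketch of the alternative being asked about, for illustration only
(the unchanged parts of hpet_broadcast_exit() are elided; whether
hpet_detach_channel() can simply take over releasing the lock is
exactly the open question):

    void hpet_broadcast_exit(void)
    {
        /* ... earlier part of the function unchanged ... */

        spin_lock_irq(&ch->lock);
        cpumask_clear_cpu(cpu, ch->cpumask);

        if ( !(ch->flags & HPET_EVT_LEGACY) )
            hpet_detach_channel(cpu, ch); /* entered locked; drops ch->lock */
        else
            spin_unlock_irq(&ch->lock);
    }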

* Re: Reply: [PATCH] x86/hpet: add a lock when cpu clear cpumask in hpet_broadcast_exit();
  2018-04-17  5:44   ` Reply: " David Wang
@ 2018-04-17  9:32     ` Jan Beulich
  0 siblings, 0 replies; 4+ messages in thread
From: Jan Beulich @ 2018-04-17  9:32 UTC (permalink / raw)
  To: Davidwang; +Cc: xen-devel, Fiona Li(BJ-RD)

>>> On 17.04.18 at 07:44, <DavidWang@zhaoxin.com> wrote:
> From: Jan Beulich <JBeulich@suse.com>
> Sent: April 16, 2018 19:48
>>>> On 16.04.18 at 10:00, <Davidwang@zhaoxin.com> wrote:
>> --- a/xen/arch/x86/hpet.c
>> +++ b/xen/arch/x86/hpet.c
>> @@ -740,7 +740,9 @@ void hpet_broadcast_exit(void)
>>      if ( !reprogram_timer(deadline) )
>>          raise_softirq(TIMER_SOFTIRQ);
>>  
>> +    spin_lock_irq(&ch->lock);
>>      cpumask_clear_cpu(cpu, ch->cpumask);
>> +    spin_unlock_irq(&ch->lock);
> 
> Rather than this, how about eliminating the cpumask_empty() call
> in favor of just the cpumask_first() one in hpet_detach_channel()
> (with a local variable storing the intermediate result)? Or, if
> acquiring the lock can't be avoided here, you would perhaps better
> not drop it before calling hpet_detach_channel() (which has only this
> single call site and hence would be straightforward to adjust).
> 
> [DavidWang]: Experiments confirmed that storing the intermediate
> result in a local variable solves the problem. On the one hand, a
> local variable is more efficient than adding a lock; on the other
> hand, it is less clear to the reader. In fact, hpet_detach_channel()
> already acquires ch->lock. Could we instead move that lock
> acquisition backward, to before the cpumask_clear_cpu() call, and
> release it inside hpet_detach_channel()?
> Looking forward to your suggestion.

I'd certainly prefer the local-variable approach - I'm not sure why
you think it introduces a readability issue. Quite the inverse, I
would say: it eliminates the problematic interdependency between the
cpumask_empty() and cpumask_first() calls. Only if there is a
_technical_ reason speaking against this approach would I view the
revised locking as a viable alternative.

Jan


