* [PATCH] x86/hpet: Improve handling of timer_deadline
@ 2017-05-31 14:14 Andrew Cooper
  2017-06-01 13:34 ` Jan Beulich
  0 siblings, 1 reply; 2+ messages in thread
From: Andrew Cooper @ 2017-05-31 14:14 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Jan Beulich

timer_deadline is only ever updated via this_cpu() in timer_softirq_action(),
so it is not going to change behind the back of the currently running cpu.

Update hpet_broadcast_{enter,exit}() to use this_cpu() rather than per_cpu()
for clarity, and cache the value in a local variable to avoid the repeated
RELOC_HIDE() penalty.

handle_hpet_broadcast() reads the timer_deadlines of remote cpus, but there is
no need to force the read for cpus which are not present in the mask.  One
requirement is that we only sample the value once (which happens as a side
effect of RELOC_HIDE()), but this is made more explicit with ACCESS_ONCE().

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
---
 xen/arch/x86/hpet.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/hpet.c b/xen/arch/x86/hpet.c
index 7e8b438..fdad818 100644
--- a/xen/arch/x86/hpet.c
+++ b/xen/arch/x86/hpet.c
@@ -189,12 +189,11 @@ static void handle_hpet_broadcast(struct hpet_event_channel *ch)
     {
         s_time_t deadline;
 
-        rmb();
-        deadline = per_cpu(timer_deadline, cpu);
-        rmb();
         if ( !cpumask_test_cpu(cpu, ch->cpumask) )
             continue;
 
+        deadline = ACCESS_ONCE(per_cpu(timer_deadline, cpu));
+
         if ( deadline <= now )
             __cpumask_set_cpu(cpu, &mask);
         else if ( deadline < next_event )
@@ -697,8 +696,9 @@ void hpet_broadcast_enter(void)
 {
     unsigned int cpu = smp_processor_id();
     struct hpet_event_channel *ch = per_cpu(cpu_bc_channel, cpu);
+    s_time_t deadline = this_cpu(timer_deadline);
 
-    if ( per_cpu(timer_deadline, cpu) == 0 )
+    if ( deadline == 0 )
         return;
 
     if ( !ch )
@@ -715,8 +715,8 @@ void hpet_broadcast_enter(void)
 
     spin_lock(&ch->lock);
     /* reprogram if current cpu expire time is nearer */
-    if ( per_cpu(timer_deadline, cpu) < ch->next_event )
-        reprogram_hpet_evt_channel(ch, per_cpu(timer_deadline, cpu), NOW(), 1);
+    if ( deadline < ch->next_event )
+        reprogram_hpet_evt_channel(ch, deadline, NOW(), 1);
     spin_unlock(&ch->lock);
 }
 
@@ -724,8 +724,9 @@ void hpet_broadcast_exit(void)
 {
     unsigned int cpu = smp_processor_id();
     struct hpet_event_channel *ch = per_cpu(cpu_bc_channel, cpu);
+    s_time_t deadline = this_cpu(timer_deadline);
 
-    if ( per_cpu(timer_deadline, cpu) == 0 )
+    if ( deadline == 0 )
         return;
 
     if ( !ch )
@@ -733,7 +734,7 @@ void hpet_broadcast_exit(void)
 
     /* Reprogram the deadline; trigger timer work now if it has passed. */
     enable_APIC_timer();
-    if ( !reprogram_timer(per_cpu(timer_deadline, cpu)) )
+    if ( !reprogram_timer(deadline) )
         raise_softirq(TIMER_SOFTIRQ);
 
     cpumask_clear_cpu(cpu, ch->cpumask);
-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: [PATCH] x86/hpet: Improve handling of timer_deadline
  2017-05-31 14:14 [PATCH] x86/hpet: Improve handling of timer_deadline Andrew Cooper
@ 2017-06-01 13:34 ` Jan Beulich
  0 siblings, 0 replies; 2+ messages in thread
From: Jan Beulich @ 2017-06-01 13:34 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Xen-devel

>>> On 31.05.17 at 16:14, <andrew.cooper3@citrix.com> wrote:
> Update hpet_broadcast_{enter,exit}() to use this_cpu() rather than per_cpu()
> for clarity,

I'm afraid this makes things worse in other respects (see below).

> @@ -697,8 +696,9 @@ void hpet_broadcast_enter(void)
>  {
>      unsigned int cpu = smp_processor_id();
>      struct hpet_event_channel *ch = per_cpu(cpu_bc_channel, cpu);
> +    s_time_t deadline = this_cpu(timer_deadline);

There's a this_cpu() now side by side with a per_cpu(, cpu). Please
let's not do this, and since per_cpu() produces better code when
there are multiple in a single function, I'd prefer if we stayed with
that.

> -    if ( per_cpu(timer_deadline, cpu) == 0 )
> +    if ( deadline == 0 )
>          return;
>  
>      if ( !ch )
> @@ -715,8 +715,8 @@ void hpet_broadcast_enter(void)
>  
>      spin_lock(&ch->lock);
>      /* reprogram if current cpu expire time is nearer */
> -    if ( per_cpu(timer_deadline, cpu) < ch->next_event )
> -        reprogram_hpet_evt_channel(ch, per_cpu(timer_deadline, cpu), NOW(), 1);
> +    if ( deadline < ch->next_event )
> +        reprogram_hpet_evt_channel(ch, deadline, NOW(), 1);
>      spin_unlock(&ch->lock);
>  }

So you re-use a previously read value after potentially waiting
quite a while for the lock to become available. The fact that the
variable is being updated on the local CPU only is not immediately
visible here, so I think the comment wants expanding.

Jan

