On Fri, 2020-06-12 at 14:41 +0200, Jürgen Groß wrote:
> On 12.06.20 14:29, Julien Grall wrote:
> > On 12/06/2020 05:57, Jürgen Groß wrote:
> > > On 12.06.20 02:22, Volodymyr Babchuk wrote:
> > > >
> > > > @@ -994,9 +998,22 @@ s_time_t sched_get_time_correction(struct sched_unit *u)
> > > >              break;
> > > >      }
> > > >
> > > > +    spin_lock_irqsave(&sched_stat_lock, flags);
> > > > +    sched_stat_irq_time += irq;
> > > > +    sched_stat_hyp_time += hyp;
> > > > +    spin_unlock_irqrestore(&sched_stat_lock, flags);
> > >
> > > Please don't use a lock. Just use add_sized() instead which will add
> > > atomically.
> >
> > If we expect sched_get_time_correction to be called concurrently then
> > we would need to introduce atomic64_t or a spin lock.
>
> Or we could use percpu variables and add the cpu values up when
> fetching the values.
>
Yes, either percpu or atomic looks much better than locking, to me, for
this.

Regards
--
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<> (Raistlin Majere)
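
[Editor's sketch, not part of the thread: a minimal illustration of the percpu
variant Jürgen suggests, assuming Xen's standard DEFINE_PER_CPU / this_cpu /
per_cpu / for_each_online_cpu helpers and reusing the counter names from the
quoted patch; the accounting and summing helpers shown here are hypothetical,
not taken from the actual series.]

    /* Per-cpu accumulators; no shared cache line, no lock on the hot path. */
    static DEFINE_PER_CPU(s_time_t, sched_stat_irq_time);
    static DEFINE_PER_CPU(s_time_t, sched_stat_hyp_time);

    /* Hot path: only the local cpu's counters are touched. */
    static void sched_stat_account(s_time_t irq, s_time_t hyp)
    {
        this_cpu(sched_stat_irq_time) += irq;
        this_cpu(sched_stat_hyp_time) += hyp;
    }

    /* Slow path (hypothetical helper): add the cpu values up on read. */
    static void sched_stat_read(s_time_t *irq_total, s_time_t *hyp_total)
    {
        unsigned int cpu;

        *irq_total = *hyp_total = 0;
        for_each_online_cpu ( cpu )
        {
            *irq_total += per_cpu(sched_stat_irq_time, cpu);
            *hyp_total += per_cpu(sched_stat_hyp_time, cpu);
        }
    }

The trade-off being discussed: updates stay purely local and cheap, while the
cross-cpu summation only happens when the statistics are fetched, so slightly
stale (or, on 32-bit, unsynchronised) totals are accepted on the read side in
exchange for avoiding a global lock or atomic on every accounting call.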