From: Jürgen Groß
To: Julien Grall, Volodymyr Babchuk, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson, George Dunlap, Dario Faggioli, Jan Beulich
Subject: Re: [RFC PATCH v1 4/6] xentop: collect IRQ and HYP time statistics.
Date: Fri, 12 Jun 2020 14:41:38 +0200
In-Reply-To: <88eac035-8769-24f7-45e6-11a1c4739ccb@xen.org>
References: <20200612002205.174295-1-volodymyr_babchuk@epam.com>
 <20200612002205.174295-5-volodymyr_babchuk@epam.com>
 <2a0ff6f5-1ada-9d0a-5014-709c873ec3e3@suse.com>
 <88eac035-8769-24f7-45e6-11a1c4739ccb@xen.org>

On 12.06.20 14:29, Julien Grall wrote:
> Hi Juergen,
>
> On 12/06/2020 05:57, Jürgen Groß wrote:
>> On 12.06.20 02:22, Volodymyr Babchuk wrote:
>>> As the scheduler code now collects the time spent in IRQ handlers
>>> and in do_softirq(), we can present those values to userspace tools
>>> like xentop, so the system administrator can see how the system
>>> behaves.
>>>
>>> We update the counters only in the sched_get_time_correction()
>>> function to minimize the number of spinlock acquisitions.
>>> As atomic_t is only 32 bits wide, it is not enough to store time
>>> with nanosecond precision, so we need to use 64-bit variables and
>>> protect them with a spinlock.
>>>
>>> Signed-off-by: Volodymyr Babchuk
>>> ---
>>>   xen/common/sched/core.c     | 17 +++++++++++++++++
>>>   xen/common/sysctl.c         |  1 +
>>>   xen/include/public/sysctl.h |  4 +++-
>>>   xen/include/xen/sched.h     |  2 ++
>>>   4 files changed, 23 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
>>> index a7294ff5c3..ee6b1d9161 100644
>>> --- a/xen/common/sched/core.c
>>> +++ b/xen/common/sched/core.c
>>> @@ -95,6 +95,10 @@ static struct scheduler __read_mostly ops;
>>>   static bool scheduler_active;
>>> +static DEFINE_SPINLOCK(sched_stat_lock);
>>> +s_time_t sched_stat_irq_time;
>>> +s_time_t sched_stat_hyp_time;
>>> +
>>>   static void sched_set_affinity(
>>>       struct sched_unit *unit, const cpumask_t *hard, const cpumask_t
>>> *soft);
>>> @@ -994,9 +998,22 @@ s_time_t sched_get_time_correction(struct
>>> sched_unit *u)
>>>               break;
>>>       }
>>> +    spin_lock_irqsave(&sched_stat_lock, flags);
>>> +    sched_stat_irq_time += irq;
>>> +    sched_stat_hyp_time += hyp;
>>> +    spin_unlock_irqrestore(&sched_stat_lock, flags);
>>
>> Please don't use a lock. Just use add_sized() instead, which will add
>> atomically.
>
> add_sized() is definitely not atomic. It will only prevent the
> compiler from reading/writing the variable multiple times.

Oh, my bad, I let myself be fooled by it being defined in atomic.h.

>
> If we expect sched_get_time_correction() to be called concurrently,
> then we would need to introduce atomic64_t or a spin lock.

Or we could use percpu variables and add up the per-CPU values when
fetching the statistics; rough sketches of both points follow below.
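To make the add_sized() pitfall concrete, here is a schematic of what a
compiler-barrier-only update boils down to. This is illustrative only,
not the actual macro body, and it assumes ACCESS_ONCE() is available
via xen/lib.h:

#include <xen/lib.h>      /* assumed home of ACCESS_ONCE() */
#include <xen/types.h>

static uint32_t counter;

/*
 * A compiler-barrier-only add is still a plain load/modify/store
 * sequence.  Two CPUs running this concurrently can both load the
 * same old value, and one of the two additions is silently lost:
 * torn accesses are prevented, atomicity is not.
 */
static void nonatomic_add(uint32_t val)
{
    uint32_t tmp = ACCESS_ONCE(counter);

    tmp += val;
    ACCESS_ONCE(counter) = tmp;
}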
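And a minimal sketch of the per-cpu idea, assuming the existing
DEFINE_PER_CPU()/this_cpu()/per_cpu() helpers and for_each_online_cpu();
the two function names are made up for illustration:

#include <xen/percpu.h>
#include <xen/cpumask.h>  /* assumed for for_each_online_cpu() */
#include <xen/time.h>     /* s_time_t */

static DEFINE_PER_CPU(s_time_t, stat_irq_time);
static DEFINE_PER_CPU(s_time_t, stat_hyp_time);

/* Writer side: each CPU only touches its own counters, so no lock. */
static void sched_stat_account(s_time_t irq, s_time_t hyp)
{
    this_cpu(stat_irq_time) += irq;
    this_cpu(stat_hyp_time) += hyp;
}

/*
 * Reader side (e.g. the sysctl handler): sum up the per-cpu values.
 * On 32-bit architectures a cross-CPU 64-bit read may tear, but for
 * statistics an occasionally stale value should be tolerable.
 */
static void sched_stat_fetch(s_time_t *irq, s_time_t *hyp)
{
    unsigned int cpu;

    *irq = *hyp = 0;
    for_each_online_cpu ( cpu )
    {
        *irq += per_cpu(stat_irq_time, cpu);
        *hyp += per_cpu(stat_hyp_time, cpu);
    }
}

Juergen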