Subject: Re: [RFC PATCH v1 2/6] sched: track time spent in hypervisor tasks
From: Jürgen Groß
To: Volodymyr Babchuk, "xen-devel@lists.xenproject.org"
Cc: "sstabellini@kernel.org", "julien@xen.org", "wl@xen.org",
 "andrew.cooper3@citrix.com", "ian.jackson@eu.citrix.com",
 "george.dunlap@citrix.com", "dfaggioli@suse.com", "jbeulich@suse.com"
Date: Fri, 12 Jun 2020 13:40:51 +0200
Message-ID: <918fa2e1-232c-a3ff-d0a9-776b470ee5db@suse.com>
References: <20200612002205.174295-1-volodymyr_babchuk@epam.com>
 <20200612002205.174295-3-volodymyr_babchuk@epam.com>

On 12.06.20 13:30, Volodymyr Babchuk wrote:
> On Fri, 2020-06-12 at 06:43 +0200, Jürgen Groß wrote:
>> On 12.06.20 02:22, Volodymyr Babchuk wrote:
>>> +void vcpu_end_hyp_task(struct vcpu *v)
>>> +{
>>> +    int delta;
>>> +
>>> +    if ( is_idle_vcpu(v) )
>>> +        return;
>>> +
>>> +    ASSERT(v->in_hyp_task);
>>> +
>>> +    /* We assume that hypervisor task time will not overflow int */
>>
>> This will definitely happen for long running VMs. Please use a 64-bit
>> variable.
>>
>
> It is not supposed to hold long time spans, as I described in the reply
> to the previous email.
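A minimal sketch of the 64-bit variant asked for above could look as
follows; the s_time_t delta and the hyp_time_ns accumulator (assumed
here to be updated under the scheduler lock rather than via
atomic_add()) are illustrative assumptions, not code from the series:

/*
 * Sketch only: same accounting as the RFC hunk above, but with 64-bit
 * arithmetic so the accumulated time cannot overflow for long-running
 * VMs. hyp_time_ns is a hypothetical 64-bit counter in the sched unit.
 */
void vcpu_end_hyp_task(struct vcpu *v)
{
    s_time_t delta;                      /* signed 64-bit nanoseconds */

    if ( is_idle_vcpu(v) )
        return;

    ASSERT(v->in_hyp_task);

    delta = NOW() - v->hyp_entry_time;   /* NOW() is s_time_t in Xen */
    v->sched_unit->hyp_time_ns += delta; /* hypothetical 64-bit field */

#ifndef NDEBUG
    v->in_hyp_task = false;
#endif
}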
>
>>> +    delta = NOW() - v->hyp_entry_time;
>>> +    atomic_add(delta, &v->sched_unit->hyp_time);
>>> +
>>> +#ifndef NDEBUG
>>> +    v->in_hyp_task = false;
>>> +#endif
>>> +}
>>> +
>>>  /*
>>>   * Do the actual movement of an unit from old to new CPU. Locks for *both*
>>>   * CPUs needs to have been taken already when calling this!
>>> @@ -2615,6 +2646,7 @@ static void schedule(void)
>>>
>>>      SCHED_STAT_CRANK(sched_run);
>>>
>>> +    vcpu_end_hyp_task(current);
>>>      rcu_read_lock(&sched_res_rculock);
>>>
>>>      lock = pcpu_schedule_lock_irq(cpu);
>>> diff --git a/xen/common/softirq.c b/xen/common/softirq.c
>>> index 063e93cbe3..03a29384d1 100644
>>> --- a/xen/common/softirq.c
>>> +++ b/xen/common/softirq.c
>>> @@ -71,7 +71,9 @@ void process_pending_softirqs(void)
>>>  void do_softirq(void)
>>>  {
>>>      ASSERT_NOT_IN_ATOMIC();
>>> +    vcpu_begin_hyp_task(current);
>>>      __do_softirq(0);
>>> +    vcpu_end_hyp_task(current);
>>
>> This won't work for scheduling. current will either have changed,
>> or in the x86 case __do_softirq() might just not return. You need to
>> handle that case explicitly in schedule() (you did that for the
>> old vcpu, but for the case where schedule() returns you need to
>> call vcpu_begin_hyp_task(current) there).
>>
>
> Well, this is one of the questions I wanted to discuss. I certainly need
> to call vcpu_begin_hyp_task(current) after a context switch. But what is
> the right place? If my understanding is right, code on the x86 platform
> will never reach this point. Or am I wrong there?

No, this is correct. You can add the call to context_switch() just
after set_current() has been called.


Juergen
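To illustrate the placement suggested above, a sketch against x86's
context_switch() could look like the following; the surrounding code is
abbreviated, and only the position relative to set_current() is taken
from the discussion:

void context_switch(struct vcpu *prev, struct vcpu *next)
{
    /* ... save prev's state, switch address space, etc. (abbreviated) ... */

    set_current(next);

    /*
     * From here on "current" is the incoming vcpu, so hypervisor-time
     * accounting can be resumed for it. This also covers the x86 path
     * where __do_softirq() never returns to do_softirq().
     */
    vcpu_begin_hyp_task(next);

    /* ... restore next's state and continue towards guest context ... */
}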