Date: Tue, 8 Oct 2019 23:42:22 +0100
From: Andrew Murray
To: Marc Zyngier
Cc: kvm@vger.kernel.org, Will Deacon, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v2 5/5] KVM: arm64: pmu: Reset sample period on overflow handling
Message-ID: <20191008224221.GK42880@e119886-lin.cambridge.arm.com>
In-Reply-To: <20191008160128.8872-6-maz@kernel.org>
References: <20191008160128.8872-1-maz@kernel.org> <20191008160128.8872-6-maz@kernel.org>

On Tue, Oct 08, 2019 at 05:01:28PM +0100, Marc Zyngier wrote:
> The PMU emulation code uses the perf event sample period to trigger
> the overflow detection.
> This works fine for the *first* overflow handling, but results in a
> huge number of interrupts on the host, unrelated to the number of
> interrupts handled in the guest (a x20 factor is pretty common for
> the cycle counter). On a slow system (such as a SW model), this can
> result in the guest only making forward progress at a glacial pace.
>
> It turns out that the clue is in the name. The sample period is
> exactly that: a period. And once an overflow has occurred, the
> following period should be the full width of the associated counter,
> instead of whatever the guest had initially programmed.
>
> Reset the sample period to the architected value in the overflow
> handler, which now results in a number of host interrupts that is
> much closer to the number of interrupts in the guest.
>
> Fixes: b02386eb7dac ("arm64: KVM: Add PMU overflow interrupt routing")
> Signed-off-by: Marc Zyngier
> ---
>  virt/kvm/arm/pmu.c | 15 +++++++++++++++
>  1 file changed, 15 insertions(+)
>
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index 25a483a04beb..8b524d74c68a 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -442,6 +442,20 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
>  	struct kvm_pmc *pmc = perf_event->overflow_handler_context;
>  	struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc);
>  	int idx = pmc->idx;
> +	u64 period;
> +
> +	/*
> +	 * Reset the sample period to the architectural limit,
> +	 * i.e. the point where the counter overflows.
> +	 */
> +	period = -(local64_read(&pmc->perf_event->count));
> +
> +	if (!kvm_pmu_idx_is_64bit(vcpu, pmc->idx))
> +		period &= GENMASK(31, 0);
> +
> +	local64_set(&pmc->perf_event->hw.period_left, 0);
> +	pmc->perf_event->attr.sample_period = period;
> +	pmc->perf_event->hw.sample_period = period;

I believe that above, you are reducing the period by the amount that
period_left would have been - the two adjustments cancel each other
out. Given that kvm_pmu_perf_overflow is now always called between a
cpu_pmu->pmu.stop and a cpu_pmu->pmu.start, armpmu_event_update has
been called prior to this function, and armpmu_event_set_period will
be called after...

Therefore, I think the above could be reduced to:

+	/*
+	 * Reset the sample period to the architectural limit,
+	 * i.e. the point where the counter overflows.
+	 */
+	u64 period = GENMASK(63, 0);
+
+	if (!kvm_pmu_idx_is_64bit(vcpu, pmc->idx))
+		period = GENMASK(31, 0);
+
+	pmc->perf_event->attr.sample_period = period;
+	pmc->perf_event->hw.sample_period = period;

This is because armpmu_event_set_period takes into account both the
overflow and the counter wrapping via its "if (unlikely(left <= 0)) {"
block. This code confuses me easily, though, so I may be talking
rubbish.

> 
>  	__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(idx);
> 
> @@ -557,6 +571,7 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
>  	attr.exclude_host = 1; /* Don't count host events */
>  	attr.config = (pmc->idx == ARMV8_PMU_CYCLE_IDX) ?
>  		ARMV8_PMUV3_PERFCTR_CPU_CYCLES : eventsel;
> +	attr.config1 = PERF_ATTR_CFG1_RELOAD_EVENT;

I'm not sure that this flag, or patch 4, is really needed. As the perf
events created by KVM are pinned to the task and exclude_(host,hv) are
set, I think the perf event is not active at this point. Therefore, if
you change the sample period, you can wait until the perf event gets
scheduled back in (when you return to the guest), where its call to
pmu.start will result in armpmu_event_set_period being called.
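For reference, the armpmu_event_set_period logic I have in mind looks
roughly like this - trimmed from drivers/perf/arm_pmu.c and quoted
from memory, so please check it against the actual tree:

int armpmu_event_set_period(struct perf_event *event)
{
	struct arm_pmu *armpmu = to_arm_pmu(event->pmu);
	struct hw_perf_event *hwc = &event->hw;
	s64 left = local64_read(&hwc->period_left);
	s64 period = hwc->sample_period;
	u64 max_period = arm_pmu_event_max_period(event);
	int ret = 0;

	/* Counter ran far past the period: start a fresh one. */
	if (unlikely(left <= -period)) {
		left = period;
		local64_set(&hwc->period_left, left);
		hwc->last_period = period;
		ret = 1;
	}

	/* Overflowed (left <= 0): fold the wrap into the next period. */
	if (unlikely(left <= 0)) {
		left += period;
		local64_set(&hwc->period_left, left);
		hwc->last_period = period;
		ret = 1;
	}

	/* Cap the period so the counter can't overtake the new value. */
	if (left > (max_period >> 1))
		left = max_period >> 1;

	/* Program the counter to overflow after 'left' more events. */
	local64_set(&hwc->prev_count, (u64)-left);
	armpmu->write_counter(event, (u64)-left);

	perf_event_update_userpage(event);

	return ret;
}

After an overflow, armpmu_event_update has already driven period_left
to zero or below, so the "left <= 0" branch reloads a full (new)
sample_period on its own - no explicit reset of period_left needed.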
In other words, the pmu.start and pmu.stop you add in patch 4 are
effectively being done for you by perf when the KVM task is switched
out. I'd be interested to see if the following works:

+	WARN_ON(pmc->perf_event->state == PERF_EVENT_STATE_ACTIVE);
+
+	/*
+	 * Reset the sample period to the architectural limit,
+	 * i.e. the point where the counter overflows.
+	 */
+	u64 period = GENMASK(63, 0);
+
+	if (!kvm_pmu_idx_is_64bit(vcpu, pmc->idx))
+		period = GENMASK(31, 0);
+
+	pmc->perf_event->attr.sample_period = period;
+	pmc->perf_event->hw.sample_period = period;

> 
>  	counter = kvm_pmu_get_pair_counter_value(vcpu, pmc);
> 

What about 32-bit ARM support for this?

Thanks,

Andrew Murray

> -- 
> 2.20.1