From: Marc Zyngier <marc.zyngier@arm.com>
To: Andrew Murray, Christoffer Dall
Cc: Suzuki K Poulose, James Morse, kvmarm@lists.cs.columbia.edu,
	linux-arm-kernel@lists.infradead.org, Julien Thierry
Subject: Re: [PATCH v7 5/5] KVM: arm/arm64: support chained PMU counters
Date: Tue, 21 May 2019 17:31:47 +0100
Message-ID: <42142487-4059-dbfb-062c-0839c1f0ea51@arm.com>
In-Reply-To: <20190521155228.903-6-andrew.murray@arm.com>
References: <20190521155228.903-1-andrew.murray@arm.com>
	<20190521155228.903-6-andrew.murray@arm.com>
Organization: ARM Ltd

On 21/05/2019 16:52, Andrew Murray wrote:
> ARMv8 provides support for chained PMU counters, where an event type
> of 0x001E is set for odd-numbered counters, the event counter will
> increment by one for each overflow of the preceding even-numbered
> counter. Let's emulate this in KVM by creating a 64 bit perf counter
> when a user chains two emulated counters together.
>
> For chained events we only support generating an overflow interrupt
> on the high counter. We use the attributes of the low counter to
> determine the attributes of the perf event.
>
> Suggested-by: Marc Zyngier
> Signed-off-by: Andrew Murray
> ---
>  include/kvm/arm_pmu.h |   2 +
>  virt/kvm/arm/pmu.c    | 246 ++++++++++++++++++++++++++++++++++++------
>  2 files changed, 215 insertions(+), 33 deletions(-)
>
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index b73f31baca52..8b434745500a 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -22,6 +22,7 @@
>  #include <asm/perf_event.h>
>
>  #define ARMV8_PMU_CYCLE_IDX             (ARMV8_PMU_MAX_COUNTERS - 1)
> +#define ARMV8_PMU_MAX_COUNTER_PAIRS     ((ARMV8_PMU_MAX_COUNTERS + 1) >> 1)
>
>  #ifdef CONFIG_KVM_ARM_PMU
>
> @@ -34,6 +35,7 @@ struct kvm_pmc {
>  struct kvm_pmu {
>         int irq_num;
>         struct kvm_pmc pmc[ARMV8_PMU_MAX_COUNTERS];
> +       DECLARE_BITMAP(chained, ARMV8_PMU_MAX_COUNTER_PAIRS);
>         bool ready;
>         bool created;
>         bool irq_level;
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index ae1e886d4a1a..4b0981c402c6 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -25,28 +25,128 @@
>  #include <kvm/arm_vgic.h>
>
>  static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx);
> +
> +#define PERF_ATTR_CFG1_KVM_PMU_CHAINED 0x1
> +
> +static struct kvm_vcpu *kvm_pmc_to_vcpu(struct kvm_pmc *pmc)
> +{
> +       struct kvm_pmu *pmu;
> +       struct kvm_vcpu_arch *vcpu_arch;
> +
> +       pmc -= pmc->idx;
> +       pmu = container_of(pmc, struct kvm_pmu, pmc[0]);
> +       vcpu_arch = container_of(pmu, struct kvm_vcpu_arch, pmu);
> +       return container_of(vcpu_arch, struct kvm_vcpu, arch);
> +}
> +
>  /**
> - * kvm_pmu_get_counter_value - get PMU counter value
> + * kvm_pmu_pmc_is_chained - determine if the pmc is chained
> + * @pmc: The PMU counter pointer
> + */
> +static bool kvm_pmu_pmc_is_chained(struct kvm_pmc *pmc)
> +{
> +       struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc);
> +
> +       return test_bit(pmc->idx >> 1, vcpu->arch.pmu.chained);
> +}
> +
> +/**
> + * kvm_pmu_pmc_is_high_counter - determine if select_idx is a high/low counter
> + * @select_idx: The counter index
> + */
> +static bool kvm_pmu_pmc_is_high_counter(u64 select_idx)
> +{
> +       return select_idx & 0x1;
> +}
> +
> +/**
> + * kvm_pmu_get_canonical_pmc - obtain the canonical pmc
> + * @pmc: The PMU counter pointer
> + *
> + * When a pair of PMCs are chained together we use the low counter (canonical)
> + * to hold the underlying perf event.
> + */
> +static struct kvm_pmc *kvm_pmu_get_canonical_pmc(struct kvm_pmc *pmc)
> +{
> +       if (kvm_pmu_pmc_is_chained(pmc) &&
> +           kvm_pmu_pmc_is_high_counter(pmc->idx))
> +               return pmc - 1;
> +
> +       return pmc;
> +}
> +
> +/**
> + * kvm_pmu_idx_has_chain_evtype - determine if the event type is chain
>   * @vcpu: The vcpu pointer
>   * @select_idx: The counter index
>   */
> -u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
> +static bool kvm_pmu_idx_has_chain_evtype(struct kvm_vcpu *vcpu, u64 select_idx)
>  {
> -       u64 counter, reg, enabled, running;
> -       struct kvm_pmu *pmu = &vcpu->arch.pmu;
> -       struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> +       u64 eventsel, reg;
>
> -       reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> -             ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
> -       counter = __vcpu_sys_reg(vcpu, reg);
> +       select_idx |= 0x1;
> +
> +       if (select_idx == ARMV8_PMU_CYCLE_IDX)
> +               return false;
> +
> +       reg = PMEVTYPER0_EL0 + select_idx;
> +       eventsel = __vcpu_sys_reg(vcpu, reg) & ARMV8_PMU_EVTYPE_EVENT;
> +
> +       return armv8pmu_evtype_is_chain(eventsel);
> +}
> +
> +/**
> + * kvm_pmu_get_pair_counter_value - get PMU counter value
> + * @vcpu: The vcpu pointer
> + * @pmc: The PMU counter pointer
> + */
> +static u64 kvm_pmu_get_pair_counter_value(struct kvm_vcpu *vcpu,
> +                                         struct kvm_pmc *pmc)
> +{
> +       u64 counter, counter_high, reg, enabled, running;
> +
> +       if (kvm_pmu_pmc_is_chained(pmc)) {
> +               pmc = kvm_pmu_get_canonical_pmc(pmc);
> +               reg = PMEVCNTR0_EL0 + pmc->idx;
> +
> +               counter = __vcpu_sys_reg(vcpu, reg);
> +               counter_high = __vcpu_sys_reg(vcpu, reg + 1);
> +
> +               counter = lower_32_bits(counter) | (counter_high << 32);
> +       } else {
> +               reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
> +                     ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + pmc->idx;
> +               counter = __vcpu_sys_reg(vcpu, reg);
> +       }
>
> -       /* The real counter value is equal to the value of counter register plus
> +       /*
> +        * The real counter value is equal to the value of counter register plus
>          * the value perf event counts.
>          */
>         if (pmc->perf_event)
>                 counter += perf_event_read_value(pmc->perf_event, &enabled,
>                                                  &running);
>
> +       return counter;
> +}
> +
> +/**
> + * kvm_pmu_get_counter_value - get PMU counter value
> + * @vcpu: The vcpu pointer
> + * @select_idx: The counter index
> + */
> +u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
> +{
> +       u64 counter;
> +       struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +       struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> +
> +       counter = kvm_pmu_get_pair_counter_value(vcpu, pmc);
> +
> +       if (kvm_pmu_pmc_is_chained(pmc) &&
> +           kvm_pmu_pmc_is_high_counter(select_idx))
> +               counter >>= 32;
> +
>         return counter & pmc->bitmask;
>  }
>
> @@ -74,6 +174,7 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
>   */
>  static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
>  {
> +       pmc = kvm_pmu_get_canonical_pmc(pmc);
>         if (pmc->perf_event) {
>                 perf_event_disable(pmc->perf_event);
>                 perf_event_release_kernel(pmc->perf_event);
> @@ -91,13 +192,24 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
>  {
>         u64 counter, reg;
>
> -       if (pmc->perf_event) {
> +       pmc = kvm_pmu_get_canonical_pmc(pmc);
> +       if (!pmc->perf_event)
> +               return;
> +
> +       if (kvm_pmu_pmc_is_chained(pmc)) {
> +               counter = kvm_pmu_get_pair_counter_value(vcpu, pmc);
> +
> +               reg = PMEVCNTR0_EL0 + pmc->idx;
> +               __vcpu_sys_reg(vcpu, reg) = counter & pmc->bitmask;
> +               __vcpu_sys_reg(vcpu, reg + 1) = (counter >> 32) & pmc->bitmask;

There is something odd here: you use the same mask for both halves of
the counter. The second one doesn't make much sense, and the first one
makes me wonder... Why isn't bitmask a 64bit quantity in this case?
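
To make this concrete, here is a rough, untested sketch of what I have
in mind, assuming bitmask were widened to a 64bit quantity on the
canonical pmc (~0ULL when chained, GENMASK(31, 0) otherwise). The helper
name is made up:

static void kvm_pmu_save_chained_counter(struct kvm_vcpu *vcpu,
                                         struct kvm_pmc *pmc)
{
        /* One 64bit mask covers the whole pair... */
        u64 counter = kvm_pmu_get_pair_counter_value(vcpu, pmc) & pmc->bitmask;
        u64 reg = PMEVCNTR0_EL0 + pmc->idx;

        /* ...and the two halves fall out without any extra masking. */
        __vcpu_sys_reg(vcpu, reg) = lower_32_bits(counter);
        __vcpu_sys_reg(vcpu, reg + 1) = upper_32_bits(counter);
}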
> +       } else {
>                 counter = kvm_pmu_get_counter_value(vcpu, pmc->idx);
>                 reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
>                       ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + pmc->idx;
>                 __vcpu_sys_reg(vcpu, reg) = counter;
> -               kvm_pmu_release_perf_event(pmc);
>         }
> +
> +       kvm_pmu_release_perf_event(pmc);
>  }
>
>  /**
> @@ -115,6 +227,8 @@ void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
>                 pmu->pmc[i].idx = i;
>                 pmu->pmc[i].bitmask = 0xffffffffUL;
>         }
> +
> +       bitmap_zero(vcpu->arch.pmu.chained, ARMV8_PMU_MAX_COUNTER_PAIRS);
>  }
>
>  /**
> @@ -154,6 +268,7 @@ void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
>         int i;
>         struct kvm_pmu *pmu = &vcpu->arch.pmu;
>         struct kvm_pmc *pmc;
> +       struct perf_event *perf_event;
>
>         if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
>                 return;
> @@ -163,9 +278,21 @@ void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
>                         continue;
>
>                 pmc = &pmu->pmc[i];
> +
> +               /*
> +                * For high counters of chained events we must recreate the
> +                * perf event with the long (64bit) attribute set.
> +                */
> +               if (kvm_pmu_pmc_is_chained(pmc) &&
> +                   kvm_pmu_pmc_is_high_counter(i)) {
> +                       kvm_pmu_create_perf_event(vcpu, i);
> +                       continue;
> +               }
> +
> +               pmc = kvm_pmu_get_canonical_pmc(pmc);
>                 if (pmc->perf_event) {
>                         perf_event_enable(pmc->perf_event);
> -                       if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
> +                       if (perf_event->state != PERF_EVENT_STATE_ACTIVE)
>                                 kvm_debug("fail to enable perf event\n");
>                 }
>         }
> @@ -192,6 +319,18 @@ void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
>                         continue;
>
>                 pmc = &pmu->pmc[i];
> +
> +               /*
> +                * For high counters of chained events we must recreate the
> +                * perf event with the long (64bit) attribute unset.
> +                */
> +               if (kvm_pmu_pmc_is_chained(pmc) &&
> +                   kvm_pmu_pmc_is_high_counter(i)) {
> +                       kvm_pmu_create_perf_event(vcpu, i);
> +                       continue;
> +               }
> +
> +               pmc = kvm_pmu_get_canonical_pmc(pmc);
>                 if (pmc->perf_event)
>                         perf_event_disable(pmc->perf_event);
>         }
> @@ -281,17 +420,6 @@ void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu)
>         kvm_pmu_update_state(vcpu);
>  }
>
> -static inline struct kvm_vcpu *kvm_pmc_to_vcpu(struct kvm_pmc *pmc)
> -{
> -       struct kvm_pmu *pmu;
> -       struct kvm_vcpu_arch *vcpu_arch;
> -
> -       pmc -= pmc->idx;
> -       pmu = container_of(pmc, struct kvm_pmu, pmc[0]);
> -       vcpu_arch = container_of(pmu, struct kvm_vcpu_arch, pmu);
> -       return container_of(vcpu_arch, struct kvm_vcpu, arch);
> -}
> -
>  /**
>   * When the perf event overflows, set the overflow status and inform the vcpu.
>   */
> @@ -389,13 +517,20 @@ static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
>  static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
>  {
>         struct kvm_pmu *pmu = &vcpu->arch.pmu;
> -       struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> +       struct kvm_pmc *pmc;
>         struct perf_event *event;
>         struct perf_event_attr attr;
>         u64 eventsel, counter, reg, data;
>
> -       reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> -             ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
> +       /*
> +        * For chained counters the event type and filtering attributes are
> +        * obtained from the low/even counter. We also use this counter to
> +        * determine if the event is enabled/disabled.
> +        */
> +       pmc = kvm_pmu_get_canonical_pmc(&pmu->pmc[select_idx]);
> +
> +       reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
> +             ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + pmc->idx;
>         data = __vcpu_sys_reg(vcpu, reg);
>
>         kvm_pmu_stop_counter(vcpu, pmc);
> @@ -403,27 +538,43 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
>
>         /* Software increment event does't need to be backed by a perf event */
>         if (eventsel == ARMV8_PMUV3_PERFCTR_SW_INCR &&
> -           select_idx != ARMV8_PMU_CYCLE_IDX)
> +           pmc->idx != ARMV8_PMU_CYCLE_IDX)
>                 return;
>
>         memset(&attr, 0, sizeof(struct perf_event_attr));
>         attr.type = PERF_TYPE_RAW;
>         attr.size = sizeof(attr);
>         attr.pinned = 1;
> -       attr.disabled = !kvm_pmu_counter_is_enabled(vcpu, select_idx);
> +       attr.disabled = !kvm_pmu_counter_is_enabled(vcpu, pmc->idx);
>         attr.exclude_user = data & ARMV8_PMU_EXCLUDE_EL0 ? 1 : 0;
>         attr.exclude_kernel = data & ARMV8_PMU_EXCLUDE_EL1 ? 1 : 0;
>         attr.exclude_hv = 1; /* Don't count EL2 events */
>         attr.exclude_host = 1; /* Don't count host events */
> -       attr.config = (select_idx == ARMV8_PMU_CYCLE_IDX) ?
> +       attr.config = (pmc->idx == ARMV8_PMU_CYCLE_IDX) ?
>                 ARMV8_PMUV3_PERFCTR_CPU_CYCLES : eventsel;
>
> -       counter = kvm_pmu_get_counter_value(vcpu, select_idx);
> -       /* The initial sample period (overflow count) of an event. */
> -       attr.sample_period = (-counter) & pmc->bitmask;
> +       counter = kvm_pmu_get_pair_counter_value(vcpu, pmc);
> +
> +       if (kvm_pmu_idx_has_chain_evtype(vcpu, pmc->idx)) {
> +               /**
> +                * The initial sample period (overflow count) of an event. For
> +                * chained counters we only support overflow interrupts on the
> +                * high counter.
> +                */
> +               attr.sample_period = (-counter) & GENMASK(63, 0);

Same thing here. I wonder why the counter mask is not upgraded to 64bit,
forcing us to compute the sample period in a different way depending on
whether the counter is chained or not...
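
With the same 64bit bitmask, the sample period computation wouldn't need
two code paths at all. Another untested sketch, under the same assumption
as above:

        counter = kvm_pmu_get_pair_counter_value(vcpu, pmc);

        /*
         * bitmask is ~0ULL for a chained pair, GENMASK(31, 0) otherwise,
         * so a single expression covers both cases.
         */
        attr.sample_period = (-counter) & pmc->bitmask;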
> +               event = perf_event_create_kernel_counter(&attr, -1, current,
> +                                                        kvm_pmu_perf_overflow,
> +                                                        pmc + 1);
>
> -       event = perf_event_create_kernel_counter(&attr, -1, current,
> +               if (kvm_pmu_counter_is_enabled(vcpu, pmc->idx + 1))
> +                       attr.config1 |= PERF_ATTR_CFG1_KVM_PMU_CHAINED;
> +       } else {
> +               /* The initial sample period (overflow count) of an event. */
> +               attr.sample_period = (-counter) & pmc->bitmask;
> +               event = perf_event_create_kernel_counter(&attr, -1, current,
>                                                  kvm_pmu_perf_overflow, pmc);
> +       }
> +
>         if (IS_ERR(event)) {
>                 pr_err_once("kvm: pmu event creation failed %ld\n",
>                             PTR_ERR(event));
> @@ -433,6 +584,33 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
>         pmc->perf_event = event;
>  }
>
> +/**
> + * kvm_pmu_update_pmc_chained - update chained bitmap
> + * @vcpu: The vcpu pointer
> + * @select_idx: The number of selected counter
> + *
> + * Update the chained bitmap based on the event type written in the
> + * typer register.
> + */
> +static void kvm_pmu_update_pmc_chained(struct kvm_vcpu *vcpu, u64 select_idx)
> +{
> +       struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +       struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> +
> +       if (kvm_pmu_idx_has_chain_evtype(vcpu, pmc->idx)) {
> +               /*
> +                * During promotion from !chained to chained we must ensure
> +                * the adjacent counter is stopped and its event destroyed
> +                */
> +               if (!kvm_pmu_pmc_is_chained(pmc))
> +                       kvm_pmu_stop_counter(vcpu, pmc);
> +
> +               set_bit(pmc->idx >> 1, vcpu->arch.pmu.chained);
> +       } else {
> +               clear_bit(pmc->idx >> 1, vcpu->arch.pmu.chained);
> +       }
> +}
> +
>  /**
>   * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
>   * @vcpu: The vcpu pointer
> @@ -452,6 +630,8 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
>                 ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
>
>         __vcpu_sys_reg(vcpu, reg) = event_type;
> +
> +       kvm_pmu_update_pmc_chained(vcpu, select_idx);
>         kvm_pmu_create_perf_event(vcpu, select_idx);
>  }
>

Thanks,

        M.
-- 
Jazz is not dead. It just smells funny...