From mboxrd@z Thu Jan 1 00:00:00 1970
From: Reiji Watanabe <reijiw@google.com>
To: Marc Zyngier, Oliver Upton, kvmarm@lists.linux.dev
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    James Morse, Alexandru Elisei, Zenghui Yu, Suzuki K Poulose,
    Paolo Bonzini, Ricardo Koller, Jing Zhang, Raghavendra Rao Anata,
    Will Deacon, Reiji Watanabe
Subject: [PATCH 4/4] KVM: arm64: PMU: Don't use the PMUVer of the PMU set for guest
Date: Fri, 26 May 2023 21:02:36 -0700
Message-ID: <20230527040236.1875860-5-reijiw@google.com>
In-Reply-To: <20230527040236.1875860-1-reijiw@google.com>
References: <20230527040236.1875860-1-reijiw@google.com>
X-Mailer: git-send-email 2.41.0.rc0.172.g3f132b7071-goog

Avoid using the PMUVer of the PMU hardware that is associated with the
guest, except in a few cases, as the PMUVer may differ from the value of
ID_AA64DFR0_EL1.PMUVer for the guest.

The first case is when using the PMUVer as the limit value of
ID_AA64DFR0_EL1.PMUVer for the guest.  The second case is when using the
PMUVer to determine the valid range of events for
KVM_ARM_VCPU_PMU_V3_FILTER, as KVM has so far allowed userspace to
specify any event that is valid for the PMU hardware, regardless of the
value of the guest's ID_AA64DFR0_EL1.PMUVer.  Note that KVM will still
limit the range of events that the guest itself can use based on the
guest's ID_AA64DFR0_EL1.PMUVer.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/pmu-emul.c | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)
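
Note (not for the commit log): as context for the filter range discussed
above, a minimal userspace sketch of how a VMM programs the event filter
through the KVM_ARM_VCPU_PMU_V3_CTRL vCPU device attribute.  This is not
part of the patch; the vcpu_fd argument, the helper name and the choice
of event number are illustrative assumptions.

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Deny event 0x0 (SW_INCR) for the guest; illustration only. */
static int deny_sw_incr(int vcpu_fd)
{
	struct kvm_pmu_event_filter filter = {
		.base_event = 0x0,	/* first event ID in the range  */
		.nevents    = 1,	/* number of consecutive events */
		.action     = KVM_PMU_EVENT_DENY,
	};
	struct kvm_device_attr attr = {
		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
		.attr  = KVM_ARM_VCPU_PMU_V3_FILTER,
		.addr  = (__u64)&filter,
	};

	/*
	 * With this patch, base_event/nevents are validated against the
	 * event range of the host PMU's PMUVer, not against the guest's
	 * ID_AA64DFR0_EL1.PMUVer.
	 */
	return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
}
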
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 6cd08d5e5b72..67512b13ba2d 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -35,12 +35,8 @@ static struct kvm_pmc *kvm_vcpu_idx_to_pmc(struct kvm_vcpu *vcpu, int cnt_idx)
 	return &vcpu->arch.pmu.pmc[cnt_idx];
 }
 
-static u32 kvm_pmu_event_mask(struct kvm *kvm)
+static u32 __kvm_pmu_event_mask(u8 pmuver)
 {
-	unsigned int pmuver;
-
-	pmuver = kvm->arch.arm_pmu->pmuver;
-
 	switch (pmuver) {
 	case ID_AA64DFR0_EL1_PMUVer_IMP:
 		return GENMASK(9, 0);
@@ -55,6 +51,11 @@ static u32 kvm_pmu_event_mask(struct kvm *kvm)
 	}
 }
 
+static u32 kvm_pmu_event_mask(struct kvm *kvm)
+{
+	return __kvm_pmu_event_mask(kvm->arch.dfr0_pmuver.imp);
+}
+
 /**
  * kvm_pmc_is_64bit - determine if counter is 64bit
  * @pmc: counter context
@@ -757,7 +758,7 @@ u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
 		 * Don't advertise STALL_SLOT, as PMMIR_EL0 is handled
 		 * as RAZ
 		 */
-		if (vcpu->kvm->arch.arm_pmu->pmuver >= ID_AA64DFR0_EL1_PMUVer_V3P4)
+		if (vcpu->kvm->arch.dfr0_pmuver.imp >= ID_AA64DFR0_EL1_PMUVer_V3P4)
 			val &= ~BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT - 32);
 		base = 32;
 	}
@@ -970,11 +971,17 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 		return 0;
 	}
 	case KVM_ARM_VCPU_PMU_V3_FILTER: {
+		u8 pmuver = kvm_arm_pmu_get_pmuver_limit(kvm);
 		struct kvm_pmu_event_filter __user *uaddr;
 		struct kvm_pmu_event_filter filter;
 		int nr_events;
 
-		nr_events = kvm_pmu_event_mask(kvm) + 1;
+		/*
+		 * Allow userspace to specify an event filter for the entire
+		 * event range supported by the PMUVer of the hardware, rather
+		 * than the guest's PMUVer, for KVM backward compatibility.
+		 */
+		nr_events = __kvm_pmu_event_mask(pmuver) + 1;
 
 		uaddr = (struct kvm_pmu_event_filter __user *)(long)attr->addr;
 
-- 
2.41.0.rc0.172.g3f132b7071-goog
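
P.S. A small standalone illustration of the nr_events arithmetic above.
The GENMASK(15, 0) value for PMUv3.1 and later is an assumption mirroring
the __kvm_pmu_event_mask() cases that fall outside the hunk shown here.

#include <stdio.h>

#define PMU_EVENT_MASK_PMUV3	0x3ffU		/* GENMASK(9, 0)  */
#define PMU_EVENT_MASK_PMUV3P1	0xffffU		/* GENMASK(15, 0) */

int main(void)
{
	/* nr_events = event mask + 1, as in kvm_arm_pmu_v3_set_attr() */
	printf("PMUv3:    %u filterable event IDs\n", PMU_EVENT_MASK_PMUV3 + 1);
	printf("PMUv3.1+: %u filterable event IDs\n", PMU_EVENT_MASK_PMUV3P1 + 1);
	return 0;
}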