From: Anup Patel
Date: Wed, 1 Feb 2023 09:18:13 +0530
Subject: Re: [PATCH v3 07/14] RISC-V: KVM: Add skeleton support for perf
To: Atish Patra
Cc: Anup Patel, Atish Patra, linux-kernel@vger.kernel.org, Andrew Jones,
 Guo Ren, Heiko Stuebner, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org,
 linux-riscv@lists.infradead.org, Mark Rutland, Palmer Dabbelt,
 Paul Walmsley, Sergey Matyukevich, Will Deacon
References: <20230127182558.2416400-1-atishp@rivosinc.com>
 <20230127182558.2416400-8-atishp@rivosinc.com>

On Wed, Feb 1, 2023 at 4:05 AM Atish Patra wrote:
>
> On Sun, Jan 29, 2023 at 4:30 AM Anup Patel wrote:
> >
> > On Fri, Jan 27, 2023 at 11:56 PM Atish Patra wrote:
> > >
> > > This patch only adds barebore structure of perf implementation. Most of
> >
> > s/barebore/barebone/
> >
> > > the function returns zero at this point and will be implemented
> > > fully in the future.
> > >
> > > Signed-off-by: Atish Patra
> > > ---
> > >  arch/riscv/include/asm/kvm_host.h     |   3 +
> > >  arch/riscv/include/asm/kvm_vcpu_pmu.h |  76 ++++++++++++++
> > >  arch/riscv/kvm/Makefile               |   1 +
> > >  arch/riscv/kvm/vcpu.c                 |   5 +
> > >  arch/riscv/kvm/vcpu_pmu.c             | 145 ++++++++++++++++++++++++++
> > >  5 files changed, 230 insertions(+)
> > >  create mode 100644 arch/riscv/include/asm/kvm_vcpu_pmu.h
> > >  create mode 100644 arch/riscv/kvm/vcpu_pmu.c
> > >
> > > diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
> > > index 93f43a3..f9874b4 100644
> > > --- a/arch/riscv/include/asm/kvm_host.h
> > > +++ b/arch/riscv/include/asm/kvm_host.h
> > > @@ -18,6 +18,7 @@
> > >  #include
> > >  #include
> > >  #include
> > > +#include
> > >
> > >  #define KVM_MAX_VCPUS 1024
> > >
> > > @@ -228,6 +229,8 @@ struct kvm_vcpu_arch {
> > >
> > >         /* Don't run the VCPU (blocked) */
> > >         bool pause;
> > > +
> > > +       struct kvm_pmu pmu;
> >
> > Add a single line comment just like other members of the structure.
> >
> > I also suggest naming this variable "pmu_context" or something similar
> > for naming consistency.
> >
>
> Done.
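
A minimal sketch of what both suggestions would look like together (the
comment wording and member name below are only illustrative):

        /* Performance monitoring context for this VCPU */
        struct kvm_pmu pmu_context;

The vcpu_to_pmu()/pmu_to_vcpu() accessors would of course need the same
rename from arch.pmu to arch.pmu_context.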
> > >  };
> > >
> > >  static inline void kvm_arch_hardware_unsetup(void) {}
> > > diff --git a/arch/riscv/include/asm/kvm_vcpu_pmu.h b/arch/riscv/include/asm/kvm_vcpu_pmu.h
> > > new file mode 100644
> > > index 0000000..3f43a43
> > > --- /dev/null
> > > +++ b/arch/riscv/include/asm/kvm_vcpu_pmu.h
> > > @@ -0,0 +1,76 @@
> > > +/* SPDX-License-Identifier: GPL-2.0-only */
> > > +/*
> > > + * Copyright (c) 2023 Rivos Inc
> > > + *
> > > + * Authors:
> > > + *     Atish Patra
> > > + */
> > > +
> > > +#ifndef __KVM_VCPU_RISCV_PMU_H
> > > +#define __KVM_VCPU_RISCV_PMU_H
> > > +
> > > +#include
> > > +#include
> > > +#include
> > > +
> > > +#ifdef CONFIG_RISCV_PMU_SBI
> > > +#define RISCV_KVM_MAX_FW_CTRS 32
> > > +#define RISCV_MAX_COUNTERS 64
> > > +
> > > +/* Per virtual pmu counter data */
> > > +struct kvm_pmc {
> > > +       u8 idx;
> > > +       struct perf_event *perf_event;
> > > +       uint64_t counter_val;
> > > +       union sbi_pmu_ctr_info cinfo;
> > > +       /* Event monitoring status */
> > > +       bool started;
> > > +};
> > > +
> > > +/* PMU data structure per vcpu */
> > > +struct kvm_pmu {
> > > +       struct kvm_pmc pmc[RISCV_MAX_COUNTERS];
> > > +       /* Number of the virtual firmware counters available */
> > > +       int num_fw_ctrs;
> > > +       /* Number of the virtual hardware counters available */
> > > +       int num_hw_ctrs;
> > > +       /* A flag to indicate that pmu initialization is done */
> > > +       bool init_done;
> > > +       /* Bit map of all the virtual counter used */
> > > +       DECLARE_BITMAP(pmc_in_use, RISCV_MAX_COUNTERS);
> > > +};
> > > +
> > > +#define vcpu_to_pmu(vcpu) (&(vcpu)->arch.pmu)
> > > +#define pmu_to_vcpu(pmu) (container_of((pmu), struct kvm_vcpu, arch.pmu))
> > > +
> > > +int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu, struct kvm_vcpu_sbi_ext_data *edata);
> > > +int kvm_riscv_vcpu_pmu_ctr_info(struct kvm_vcpu *vcpu, unsigned long cidx,
> > > +                               struct kvm_vcpu_sbi_ext_data *edata);
> > > +int kvm_riscv_vcpu_pmu_ctr_start(struct kvm_vcpu *vcpu, unsigned long ctr_base,
> > > +                                unsigned long ctr_mask, unsigned long flag, uint64_t ival,
> > > +                                struct kvm_vcpu_sbi_ext_data *edata);
> > > +int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base,
> > > +                               unsigned long ctr_mask, unsigned long flag,
> > > +                               struct kvm_vcpu_sbi_ext_data *edata);
> > > +int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_base,
> > > +                                    unsigned long ctr_mask, unsigned long flag,
> > > +                                    unsigned long eidx, uint64_t evtdata,
> > > +                                    struct kvm_vcpu_sbi_ext_data *edata);
> > > +int kvm_riscv_vcpu_pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx,
> > > +                               struct kvm_vcpu_sbi_ext_data *edata);
> > > +int kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu);
> > > +void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu);
> > > +void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu);
> > > +
> > > +#else
> > > +struct kvm_pmu {
> > > +};
> > > +
> > > +static inline int kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu)
> > > +{
> > > +       return 0;
> > > +}
> > > +static inline void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu) {}
> > > +static inline void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu) {}
> > > +#endif /* CONFIG_RISCV_PMU_SBI */
> > > +#endif /* !__KVM_VCPU_RISCV_PMU_H */
> > > diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile
> > > index 019df920..5de1053 100644
> > > --- a/arch/riscv/kvm/Makefile
> > > +++ b/arch/riscv/kvm/Makefile
> > > @@ -25,3 +25,4 @@ kvm-y += vcpu_sbi_base.o
> > >  kvm-y += vcpu_sbi_replace.o
> > >  kvm-y += vcpu_sbi_hsm.o
> > >  kvm-y += vcpu_timer.o
> > > +kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_pmu.o
> > > diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
> > > index 7c08567..b746f21 100644
> > > --- a/arch/riscv/kvm/vcpu.c
> > > +++ b/arch/riscv/kvm/vcpu.c
> > > @@ -137,6 +137,7 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu)
> > >
> > >         WRITE_ONCE(vcpu->arch.irqs_pending, 0);
> > >         WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0);
> >
> > Add an empty newline here.
> >
>
> Done.
>
> > > +       kvm_riscv_vcpu_pmu_reset(vcpu);
> > >
> > >         vcpu->arch.hfence_head = 0;
> > >         vcpu->arch.hfence_tail = 0;
> > > @@ -194,6 +195,9 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
> > >         /* Setup VCPU timer */
> > >         kvm_riscv_vcpu_timer_init(vcpu);
> > >
> > > +       /* setup performance monitoring */
> > > +       kvm_riscv_vcpu_pmu_init(vcpu);
> > > +
> > >         /* Reset VCPU */
> > >         kvm_riscv_reset_vcpu(vcpu);
> > >
> > > @@ -216,6 +220,7 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
> > >         /* Cleanup VCPU timer */
> > >         kvm_riscv_vcpu_timer_deinit(vcpu);
> > >
> > > +       kvm_riscv_vcpu_pmu_deinit(vcpu);
> >
> > Add an empty newline here.
> >
>
> Done.
>
> > >
> > >         /* Free unused pages pre-allocated for G-stage page table mappings */
> > >         kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
> > >  }
> > > diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c
> > > new file mode 100644
> > > index 0000000..d3fd551
> > > --- /dev/null
> > > +++ b/arch/riscv/kvm/vcpu_pmu.c
> > > @@ -0,0 +1,145 @@
> > > +// SPDX-License-Identifier: GPL-2.0
> > > +/*
> > > + * Copyright (c) 2023 Rivos Inc
> > > + *
> > > + * Authors:
> > > + *     Atish Patra
> > > + */
> > > +
> > > +#include
> > > +#include
> > > +#include
> > > +#include
> > > +#include
> > > +#include
> > > +#include
> > > +#include
> > > +
> > > +#define kvm_pmu_num_counters(pmu) ((pmu)->num_hw_ctrs + (pmu)->num_fw_ctrs)
> > > +
> > > +int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu, struct kvm_vcpu_sbi_ext_data *edata)
> > > +{
> > > +       struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
> > > +
> > > +       edata->out_val = kvm_pmu_num_counters(kvpmu);
> > > +
> > > +       return 0;
> > > +}
> > > +
> > > +int kvm_riscv_vcpu_pmu_ctr_info(struct kvm_vcpu *vcpu, unsigned long cidx,
> > > +                               struct kvm_vcpu_sbi_ext_data *edata)
> > > +{
> > > +       struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
> > > +
> > > +       if (cidx > RISCV_MAX_COUNTERS || cidx == 1) {
> > > +               edata->err_val = SBI_ERR_INVALID_PARAM;
> > > +               return 0;
> > > +       }
> > > +
> > > +       edata->out_val = kvpmu->pmc[cidx].cinfo.value;
> > > +
> > > +       return 0;
> > > +}
> > > +
> > > +int kvm_riscv_vcpu_pmu_ctr_start(struct kvm_vcpu *vcpu, unsigned long ctr_base,
> > > +                                unsigned long ctr_mask, unsigned long flag, uint64_t ival,
> > > +                                struct kvm_vcpu_sbi_ext_data *edata)
> > > +{
> > > +       /* TODO */
> > > +       return 0;
> > > +}
> > > +
> > > +int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base,
> > > +                               unsigned long ctr_mask, unsigned long flag,
> > > +                               struct kvm_vcpu_sbi_ext_data *edata)
> > > +{
> > > +       /* TODO */
> > > +       return 0;
> > > +}
> > > +
> > > +int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_base,
> > > +                                    unsigned long ctr_mask, unsigned long flag,
> > > +                                    unsigned long eidx, uint64_t evtdata,
> > > +                                    struct kvm_vcpu_sbi_ext_data *edata)
> > > +{
> > > +       /* TODO */
> > > +       return 0;
> > > +}
> > > +
> > > +int kvm_riscv_vcpu_pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx,
> > > +                               struct kvm_vcpu_sbi_ext_data *edata)
> > > +{
> > > +       /* TODO */
> > > +       return 0;
> > > +}
> > > +
> > > +int kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu)
> > > +{
> > > +       int i = 0, num_fw_ctrs, ret, num_hw_ctrs = 0, hpm_width = 0;
> > > +       struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
> > > +       struct kvm_pmc *pmc;
> > > +
> > > +       ret = riscv_pmu_get_hpm_info(&hpm_width, &num_hw_ctrs);
> > > +       if (ret < 0)
> > > +               return ret;
> > > +
> > > +       if (!hpm_width || !num_hw_ctrs) {
> > > +               pr_err("Cannot initialize VCPU with NULL hpmcounter width or number of counters\n");
> >
> > What will happen if underlying M-mode firmware does not implement
> > SBI PMU extension ?
> >
>
> riscv_pmu_get_hpm_info will return an error and
> kvm_riscv_vcpu_pmu_init will fail.

Ahh okay. The vcpu_create() is not looking at the return value of
kvm_riscv_vcpu_pmu_init() but this function should fail silently
"return 0" if riscv_pmu_get_hpm_info() fails.
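
A sketch of what that would look like at the top of
kvm_riscv_vcpu_pmu_init() (illustrative only, not from a posted
revision):

        ret = riscv_pmu_get_hpm_info(&hpm_width, &num_hw_ctrs);
        if (ret < 0)
                /*
                 * No usable SBI PMU extension in the M-mode firmware;
                 * leave the vPMU uninitialized instead of failing
                 * VCPU creation.
                 */
                return 0;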
> > > +               return -EINVAL;
> > > +       }
> > > +
> > > +       if ((num_hw_ctrs + RISCV_KVM_MAX_FW_CTRS) > RISCV_MAX_COUNTERS) {
> > > +               pr_warn("Limiting fw counters as hw & fw counters exceed maximum counters\n");
> >
> > How is this possible ?
> >
> > Maximum HW counters is 32 (including time, cycle, and instret),
> > RISCV_KVM_MAX_FW_CTRS = 32, and
> > RISCV_MAX_COUNTERS = 64
>
> This was added to prevent the condition where somebody changes the
> definition of RISCV_KVM_MAX_FW_CTRS without incrementing
> RISCV_MAX_COUNTERS. The error might be subtle, as it may work on some
> platforms (with fewer hardware counters) but fail on others (with more
> hardware counters). I couldn't find a better way to describe the
> relationship between RISCV_KVM_MAX_FW_CTRS and RISCV_MAX_COUNTERS.
>
> I can just put a big comment here instead of the condition check if
> you prefer that way.

Maybe you can add a compile-time check using "#if" and "#error" ?
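
For example, something like this next to the existing defines in
kvm_vcpu_pmu.h (RISCV_KVM_MAX_HW_CTRS is a hypothetical helper macro
for the 32 hardware counters mentioned above; a sketch of the idea,
not tested):

        #define RISCV_KVM_MAX_HW_CTRS 32

        #if (RISCV_KVM_MAX_HW_CTRS + RISCV_KVM_MAX_FW_CTRS) > RISCV_MAX_COUNTERS
        #error "hw + fw counters exceed RISCV_MAX_COUNTERS; fix RISCV_KVM_MAX_FW_CTRS"
        #endif

That way the relationship between the two defines is enforced at build
time instead of being checked (and warned about) at runtime.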
> > > +               num_fw_ctrs = RISCV_MAX_COUNTERS - num_hw_ctrs;
> > > +       } else
> > > +               num_fw_ctrs = RISCV_KVM_MAX_FW_CTRS;
> > > +
> > > +       kvpmu->num_hw_ctrs = num_hw_ctrs;
> > > +       kvpmu->num_fw_ctrs = num_fw_ctrs;
> > > +
> > > +       /*
> > > +        * There is no correlation between the logical hardware counter and virtual counters.
> > > +        * However, we need to encode a hpmcounter CSR in the counter info field so that
> > > +        * KVM can trap n emulate the read. This works well in the migration use case as
> > > +        * KVM doesn't care if the actual hpmcounter is available in the hardware or not.
> > > +        */
> > > +       for (i = 0; i < kvm_pmu_num_counters(kvpmu); i++) {
> > > +               /* TIME CSR shouldn't be read from perf interface */
> > > +               if (i == 1)
> > > +                       continue;
> > > +               pmc = &kvpmu->pmc[i];
> > > +               pmc->idx = i;
> > > +               if (i < kvpmu->num_hw_ctrs) {
> > > +                       kvpmu->pmc[i].cinfo.type = SBI_PMU_CTR_TYPE_HW;
> > > +                       if (i < 3)
> > > +                               /* CY, IR counters */
> > > +                               kvpmu->pmc[i].cinfo.width = 63;
> > > +                       else
> > > +                               kvpmu->pmc[i].cinfo.width = hpm_width;
> > > +                       /*
> > > +                        * The CSR number doesn't have any relation with the logical
> > > +                        * hardware counters. The CSR numbers are encoded sequentially
> > > +                        * to avoid maintaining a map between the virtual counter
> > > +                        * and CSR number.
> > > +                        */
> > > +                       pmc->cinfo.csr = CSR_CYCLE + i;
> > > +               } else {
> > > +                       pmc->cinfo.type = SBI_PMU_CTR_TYPE_FW;
> > > +                       pmc->cinfo.width = BITS_PER_LONG - 1;
> > > +               }
> > > +       }
> > > +
> > > +       kvpmu->init_done = true;
> > > +
> > > +       return 0;
> > > +}
> > > +
> > > +void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu)
> > > +{
> > > +       /* TODO */
> > > +}
> > > +
> > > +void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu)
> > > +{
> > > +       kvm_riscv_vcpu_pmu_deinit(vcpu);
> > > +}
> > > --
> > > 2.25.1
> > >
> >
> > Regards,
> > Anup
>
> --
> Regards,
> Atish

Regards,
Anup