From mboxrd@z Thu Jan  1 00:00:00 1970
From: Atish Patra
Date: Wed, 1 Feb 2023 00:58:54 -0800
Subject: Re: [PATCH v3 11/14] RISC-V: KVM: Implement trap & emulate for hpmcounters
To: Anup Patel
Cc: Atish Patra, linux-kernel@vger.kernel.org, Andrew Jones, Guo Ren,
        Heiko Stuebner, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org,
        linux-riscv@lists.infradead.org, Mark Rutland, Palmer Dabbelt,
        Paul Walmsley, Sergey Matyukevich, Will Deacon
References: <20230127182558.2416400-1-atishp@rivosinc.com>
        <20230127182558.2416400-12-atishp@rivosinc.com>
Content-Type: text/plain; charset="UTF-8"

On Tue, Jan 31, 2023 at 2:46 PM Atish Patra wrote:
>
> On Sun, Jan 29, 2023 at 4:44 AM Anup Patel wrote:
> >
> > On Fri, Jan 27, 2023 at 11:56 PM Atish Patra wrote:
> > >
> > > As the KVM guests only see the virtual PMU counters, all hpmcounter
> > > accesses should trap, and KVM emulates the read access on behalf of
> > > the guests.
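
To spell out the mechanism: with the H extension, a VS/VU-mode read of
cycle, instret or one of the hpmcounter CSRs traps to HS-mode whenever
the corresponding bit in hcounteren is clear, so KVM only has to keep
those bits clear and emulate the read in the trap handler. The
host-side setup amounts to something like the line below (quoting from
memory, so treat the exact location in arch/riscv/kvm as illustrative):

        /*
         * Keep only the TM bit (bit 1) set: the guest reads time
         * directly, while cycle, instret and all hpmcounter reads trap
         * to HS-mode and are emulated via the csr_funcs[] table that
         * this patch populates.
         */
        csr_write(CSR_HCOUNTEREN, 0x02);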
> > >
> > > Reviewed-by: Andrew Jones
> > > Signed-off-by: Atish Patra
> > > ---
> > >  arch/riscv/include/asm/kvm_vcpu_pmu.h | 16 ++++++++++
> > >  arch/riscv/kvm/vcpu_insn.c            |  4 ++-
> > >  arch/riscv/kvm/vcpu_pmu.c             | 45 ++++++++++++++++++++++++++-
> > >  3 files changed, 63 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/arch/riscv/include/asm/kvm_vcpu_pmu.h b/arch/riscv/include/asm/kvm_vcpu_pmu.h
> > > index 3f43a43..022d45d 100644
> > > --- a/arch/riscv/include/asm/kvm_vcpu_pmu.h
> > > +++ b/arch/riscv/include/asm/kvm_vcpu_pmu.h
> > > @@ -43,6 +43,19 @@ struct kvm_pmu {
> > >  #define vcpu_to_pmu(vcpu) (&(vcpu)->arch.pmu)
> > >  #define pmu_to_vcpu(pmu) (container_of((pmu), struct kvm_vcpu, arch.pmu))
> > >
> > > +#if defined(CONFIG_32BIT)
> > > +#define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \
> > > +{ .base = CSR_CYCLEH, .count = 31, .func = kvm_riscv_vcpu_pmu_read_hpm }, \
> > > +{ .base = CSR_CYCLE, .count = 31, .func = kvm_riscv_vcpu_pmu_read_hpm },
> > > +#else
> > > +#define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \
> > > +{ .base = CSR_CYCLE, .count = 31, .func = kvm_riscv_vcpu_pmu_read_hpm },
> > > +#endif
> > > +
> > > +int kvm_riscv_vcpu_pmu_read_hpm(struct kvm_vcpu *vcpu, unsigned int csr_num,
> > > +                               unsigned long *val, unsigned long new_val,
> > > +                               unsigned long wr_mask);
> > > +
> > >  int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu, struct kvm_vcpu_sbi_ext_data *edata);
> > >  int kvm_riscv_vcpu_pmu_ctr_info(struct kvm_vcpu *vcpu, unsigned long cidx,
> > >                                 struct kvm_vcpu_sbi_ext_data *edata);
> > > @@ -65,6 +78,9 @@ void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu);
> > >  #else
> > >  struct kvm_pmu {
> > >  };
> > > +#define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \
> > > +{ .base = 0, .count = 0, .func = NULL },
> > > +
> >
> > Redundant newline here.
> >
>
> Fixed.
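
As a side note for anyone reading along: the macro above only drops
range entries into the csr_funcs[] table, and the CSR emulation path in
vcpu_insn.c then searches that table by the trapped CSR number. From
memory the lookup is roughly the following; the helper name is made up
here, and whether count is an inclusive bound should be double-checked
against vcpu_insn.c:

        /*
         * Hypothetical lookup helper: return the csr_funcs[] entry
         * whose [base, base + count] range covers the trapped CSR
         * number (e.g. CSR_CYCLE + 3 for hpmcounter3), or NULL.
         */
        static const struct csr_func *find_csr_func(unsigned int csr_num)
        {
                const struct csr_func *tcfn;
                int i;

                for (i = 0; i < ARRAY_SIZE(csr_funcs); i++) {
                        tcfn = &csr_funcs[i];
                        if (tcfn->base <= csr_num &&
                            csr_num <= tcfn->base + tcfn->count)
                                return tcfn;
                }
                return NULL;
        }

With .base = CSR_CYCLE and .count = 31, a single entry spans the whole
cycle..hpmcounter31 block. time falls in that range as well, but it
normally never traps because its hcounteren bit stays set.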
> > >
> > >  static inline int kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu)
> > >  {
> > > diff --git a/arch/riscv/kvm/vcpu_insn.c b/arch/riscv/kvm/vcpu_insn.c
> > > index 0bb5276..f689337 100644
> > > --- a/arch/riscv/kvm/vcpu_insn.c
> > > +++ b/arch/riscv/kvm/vcpu_insn.c
> > > @@ -213,7 +213,9 @@ struct csr_func {
> > >                     unsigned long wr_mask);
> > >  };
> > >
> > > -static const struct csr_func csr_funcs[] = { };
> > > +static const struct csr_func csr_funcs[] = {
> > > +       KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS
> > > +};
> > >
> > >  /**
> > >   * kvm_riscv_vcpu_csr_return -- Handle CSR read/write after user space
> > > diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c
> > > index 7713927..894053a 100644
> > > --- a/arch/riscv/kvm/vcpu_pmu.c
> > > +++ b/arch/riscv/kvm/vcpu_pmu.c
> > > @@ -17,6 +17,44 @@
> > >
> > >  #define kvm_pmu_num_counters(pmu) ((pmu)->num_hw_ctrs + (pmu)->num_fw_ctrs)
> > >
> > > +static int pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx,
> > > +                       unsigned long *out_val)
> > > +{
> > > +       struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
> > > +       struct kvm_pmc *pmc;
> > > +       u64 enabled, running;
> > > +
> > > +       pmc = &kvpmu->pmc[cidx];
> > > +       if (!pmc->perf_event)
> > > +               return -EINVAL;
> > > +
> > > +       pmc->counter_val += perf_event_read_value(pmc->perf_event, &enabled, &running);
> > > +       *out_val = pmc->counter_val;
> > > +
> > > +       return 0;
> > > +}
> > > +
> > > +int kvm_riscv_vcpu_pmu_read_hpm(struct kvm_vcpu *vcpu, unsigned int csr_num,
> > > +                               unsigned long *val, unsigned long new_val,
> > > +                               unsigned long wr_mask)
> > > +{
> > > +       struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
> > > +       int cidx, ret = KVM_INSN_CONTINUE_NEXT_SEPC;
> > > +
> > > +       if (!kvpmu || !kvpmu->init_done)
> > > +               return KVM_INSN_EXIT_TO_USER_SPACE;
> >
> > As discussed previously, this should be KVM_INSN_ILLEGAL_TRAP.
>

Thinking about it more, returning KVM_INSN_ILLEGAL_TRAP here results in
a panic in guest S-mode, which is probably undesirable. As per your
earlier suggestion, we can return 0 when the cycle/instret counters are
accessed. That path is only reachable through a legacy PMU driver
running in the guest, or some other OS that accesses hpmcounters for
its own reasons. I think we should return KVM_INSN_ILLEGAL_TRAP for the
other counters and let the guest kernel panic. This does separate the
behavior between the fixed and the programmable counters when
hcounteren denies access to everything.

The new code will look like this:

        if (!kvpmu || !kvpmu->init_done) {
                if (csr_num == CSR_CYCLE || csr_num == CSR_INSTRET) {
                        *val = 0;
                        return ret;
                } else
                        return KVM_INSN_ILLEGAL_TRAP;
        }

Let me know if you think otherwise.

> Done.
> > > +
> > > +       if (wr_mask)
> > > +               return KVM_INSN_ILLEGAL_TRAP;
> > > +
> > > +       cidx = csr_num - CSR_CYCLE;
> > > +
> > > +       if (pmu_ctr_read(vcpu, cidx, val) < 0)
> > > +               return KVM_INSN_EXIT_TO_USER_SPACE;
> > Same as above.
>
> We can get rid of this, as pmu_ctr_read() doesn't return errors anyway.
>
> Done.
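
Putting both of the above together, the revised handler would look
roughly like this (just a sketch of the proposal, not the final v4
code):

int kvm_riscv_vcpu_pmu_read_hpm(struct kvm_vcpu *vcpu, unsigned int csr_num,
                                unsigned long *val, unsigned long new_val,
                                unsigned long wr_mask)
{
        struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
        int cidx, ret = KVM_INSN_CONTINUE_NEXT_SEPC;

        if (!kvpmu || !kvpmu->init_done) {
                /*
                 * A legacy PMU driver in the guest may still read
                 * cycle/instret; those read as 0 instead of killing
                 * the guest. Any other counter access without an
                 * initialized vPMU is an illegal instruction.
                 */
                if (csr_num == CSR_CYCLE || csr_num == CSR_INSTRET) {
                        *val = 0;
                        return ret;
                }
                return KVM_INSN_ILLEGAL_TRAP;
        }

        /* The counter CSRs are read-only for the guest. */
        if (wr_mask)
                return KVM_INSN_ILLEGAL_TRAP;

        cidx = csr_num - CSR_CYCLE;

        /* Per the discussion above, no error check is needed here. */
        pmu_ctr_read(vcpu, cidx, val);

        return ret;
}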
>
> > > +
> > > +       return ret;
> > > +}
> > > +
> > >  int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu, struct kvm_vcpu_sbi_ext_data *edata)
> > >  {
> > >         struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
> > > @@ -69,7 +107,12 @@ int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_ba
> > >  int kvm_riscv_vcpu_pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx,
> > >                                 struct kvm_vcpu_sbi_ext_data *edata)
> > >  {
> > > -       /* TODO */
> > > +       int ret;
> > > +
> > > +       ret = pmu_ctr_read(vcpu, cidx, &edata->out_val);
> > > +       if (ret == -EINVAL)
> > > +               edata->err_val = SBI_ERR_INVALID_PARAM;
> > > +
> > >         return 0;
> > >  }
> > >
> > > --
> > > 2.25.1
> > >
> >
> > Regards,
> > Anup
> >
>
> --
> Regards,
> Atish

--
Regards,
Atish