From mboxrd@z Thu Jan  1 00:00:00 1970
From: Like Xu <like.xu@linux.intel.com>
To: peterz@infradead.org, Paolo Bonzini, Sean Christopherson
Cc: andi@firstfloor.org, kan.liang@linux.intel.com, wei.w.wang@intel.com,
	eranian@google.com, liuxiangdong5@huawei.com, Vitaly Kuznetsov,
	Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org,
	x86@kernel.org, linux-kernel@vger.kernel.org, Like Xu
Subject: [PATCH v5 12/16] KVM: x86/pmu: Move pmc_speculative_in_use() to arch/x86/kvm/pmu.h
Date: Thu, 15 Apr 2021 11:20:12 +0800
Message-Id: <20210415032016.166201-13-like.xu@linux.intel.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210415032016.166201-1-like.xu@linux.intel.com>
References: <20210415032016.166201-1-like.xu@linux.intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

Moving pmc_speculative_in_use() to pmu.h allows this inline function to be
reused by more callers in more files, such as pmu_intel.c.

Signed-off-by: Like Xu <like.xu@linux.intel.com>
---
 arch/x86/kvm/pmu.c | 11 -----------
 arch/x86/kvm/pmu.h | 11 +++++++++++
 2 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index d3f746877d1b..666a5e90a3cb 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -477,17 +477,6 @@ void kvm_pmu_init(struct kvm_vcpu *vcpu)
 	kvm_pmu_refresh(vcpu);
 }
 
-static inline bool pmc_speculative_in_use(struct kvm_pmc *pmc)
-{
-	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
-
-	if (pmc_is_fixed(pmc))
-		return fixed_ctrl_field(pmu->fixed_ctr_ctrl,
-					pmc->idx - INTEL_PMC_IDX_FIXED) & 0x3;
-
-	return pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE;
-}
-
 /* Release perf_events for vPMCs that have been unused for a full time slice. */
 void kvm_pmu_cleanup(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index d9157128e6eb..6c902b2d2d5a 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -149,6 +149,17 @@ static inline u64 get_sample_period(struct kvm_pmc *pmc, u64 counter_value)
 	return sample_period;
 }
 
+static inline bool pmc_speculative_in_use(struct kvm_pmc *pmc)
+{
+	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
+
+	if (pmc_is_fixed(pmc))
+		return fixed_ctrl_field(pmu->fixed_ctr_ctrl,
+					pmc->idx - INTEL_PMC_IDX_FIXED) & 0x3;
+
+	return pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE;
+}
+
 void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel);
 void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int fixed_idx);
 void reprogram_counter(struct kvm_pmu *pmu, int pmc_idx);
-- 
2.30.2
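
[Editor's illustration, not part of the patch] With pmc_speculative_in_use()
exported via pmu.h, code in pmu_intel.c can ask whether the guest has enabled
a counter in its control MSRs even before any host perf_event has been created
for it. The sketch below is hypothetical (the helper name is assumed, not from
this series) and relies only on existing struct kvm_pmu fields:

static bool intel_pmu_any_pmc_speculative_in_use(struct kvm_vcpu *vcpu)
{
	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
	int i;

	/* A GP counter is speculatively in use if EVENTSEL.EN is set. */
	for (i = 0; i < pmu->nr_arch_gp_counters; i++)
		if (pmc_speculative_in_use(&pmu->gp_counters[i]))
			return true;

	/*
	 * A fixed counter is speculatively in use if its ring-level
	 * enable field in MSR_CORE_PERF_FIXED_CTR_CTRL is non-zero.
	 */
	for (i = 0; i < pmu->nr_arch_fixed_counters; i++)
		if (pmc_speculative_in_use(&pmu->fixed_counters[i]))
			return true;

	return false;
}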