From mboxrd@z Thu Jan 1 00:00:00 1970
From: Janakarajan Natarajan
To: kvm@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org
Cc: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Paolo Bonzini,
    Radim Krcmar, Len Brown, Kyle Huey, Tom Lendacky, Borislav Petkov,
    Grzegorz Andrejczuk, Kan Liang, Janakarajan Natarajan
Subject: [PATCH v3 2/3] x86/kvm: Add support for AMD Core Perf Extension in guest
Date: Fri, 8 Dec 2017 16:39:13 -0600
Message-Id: <3635a3d7d2eab37e54136693c49f6ad172926e70.1512771422.git.Janakarajan.Natarajan@amd.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To:
References:
MIME-Version: 1.0
Content-Type: text/plain
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Add support for AMD Core Performance counters in the guest. The base
event select and counter MSRs are changed and, with the core extension,
two extra counters are available for performance measurements, for a
total of six.

With the new MSRs, the logic that maps them to the gp_counters[] array
is changed. New functions are added to check the validity of the MSRs
in the get/set handlers.

If the guest has the X86_FEATURE_PERFCTR_CORE CPUID flag set, the
number of counters available to the vcpu is set to 6. If the flag is
not set, it is 4.

Signed-off-by: Janakarajan Natarajan
---
 arch/x86/kvm/pmu_amd.c | 140 ++++++++++++++++++++++++++++++++++++++++++++-----
 arch/x86/kvm/x86.c     |   1 +
 2 files changed, 127 insertions(+), 14 deletions(-)
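Reviewer note (not part of the patch, ignored by git am): the reason
amd_pmc_idx_to_pmc() multiplies pmc_idx by two when PERFCTR_CORE is
exposed is that the Family 15h core-extension event-select and counter
MSRs are interleaved (CTL0, CTR0, CTL1, CTR1, ...), whereas the legacy
K7 event selects and counters form two separate contiguous blocks.
Below is a minimal, self-contained C sketch of that index-to-MSR
mapping; the evntsel_msr()/perfctr_msr() helpers are illustrative only
(they do not exist in KVM), and the base addresses are the usual values
from arch/x86/include/asm/msr-index.h.

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	#define MSR_K7_EVNTSEL0    0xc0010000 /* legacy: 4 contiguous event selects */
	#define MSR_K7_PERFCTR0    0xc0010004 /* legacy: 4 contiguous counters */
	#define MSR_F15H_PERF_CTL  0xc0010200 /* core ext: CTL/CTR pairs interleave */
	#define MSR_F15H_PERF_CTR  0xc0010201

	/* Illustrative helper: event-select MSR for counter index i. */
	static uint32_t evntsel_msr(bool perfctr_core, unsigned int i)
	{
		return perfctr_core ? MSR_F15H_PERF_CTL + 2 * i
				    : MSR_K7_EVNTSEL0 + i;
	}

	/* Illustrative helper: counter MSR for counter index i. */
	static uint32_t perfctr_msr(bool perfctr_core, unsigned int i)
	{
		return perfctr_core ? MSR_F15H_PERF_CTR + 2 * i
				    : MSR_K7_PERFCTR0 + i;
	}

	int main(void)
	{
		unsigned int i;

		/* Six counters with the core extension, four without. */
		for (i = 0; i < 6; i++)
			printf("core idx %u: ctl 0x%x ctr 0x%x\n", i,
			       evntsel_msr(true, i), perfctr_msr(true, i));
		for (i = 0; i < 4; i++)
			printf("k7   idx %u: ctl 0x%x ctr 0x%x\n", i,
			       evntsel_msr(false, i), perfctr_msr(false, i));
		return 0;
	}
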
diff --git a/arch/x86/kvm/pmu_amd.c b/arch/x86/kvm/pmu_amd.c
index cd94443..233354a 100644
--- a/arch/x86/kvm/pmu_amd.c
+++ b/arch/x86/kvm/pmu_amd.c
@@ -19,6 +19,21 @@
 #include "lapic.h"
 #include "pmu.h"
 
+enum pmu_type {
+	PMU_TYPE_COUNTER = 0,
+	PMU_TYPE_EVNTSEL,
+};
+
+enum index {
+	INDEX_ZERO = 0,
+	INDEX_ONE,
+	INDEX_TWO,
+	INDEX_THREE,
+	INDEX_FOUR,
+	INDEX_FIVE,
+	INDEX_ERROR,
+};
+
 /* duplicated from amd_perfmon_event_map, K7 and above should work. */
 static struct kvm_event_hw_type_mapping amd_event_mapping[] = {
 	[0] = { 0x76, 0x00, PERF_COUNT_HW_CPU_CYCLES },
@@ -31,6 +46,88 @@ static struct kvm_event_hw_type_mapping amd_event_mapping[] = {
 	[7] = { 0xd1, 0x00, PERF_COUNT_HW_STALLED_CYCLES_BACKEND },
 };
 
+static unsigned int get_msr_base(struct kvm_pmu *pmu, enum pmu_type type)
+{
+	struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);
+
+	if (guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE)) {
+		if (type == PMU_TYPE_COUNTER)
+			return MSR_F15H_PERF_CTR;
+		else
+			return MSR_F15H_PERF_CTL;
+	} else {
+		if (type == PMU_TYPE_COUNTER)
+			return MSR_K7_PERFCTR0;
+		else
+			return MSR_K7_EVNTSEL0;
+	}
+}
+
+static enum index msr_to_index(u32 msr)
+{
+	switch (msr) {
+	case MSR_F15H_PERF_CTL0:
+	case MSR_F15H_PERF_CTR0:
+	case MSR_K7_EVNTSEL0:
+	case MSR_K7_PERFCTR0:
+		return INDEX_ZERO;
+	case MSR_F15H_PERF_CTL1:
+	case MSR_F15H_PERF_CTR1:
+	case MSR_K7_EVNTSEL1:
+	case MSR_K7_PERFCTR1:
+		return INDEX_ONE;
+	case MSR_F15H_PERF_CTL2:
+	case MSR_F15H_PERF_CTR2:
+	case MSR_K7_EVNTSEL2:
+	case MSR_K7_PERFCTR2:
+		return INDEX_TWO;
+	case MSR_F15H_PERF_CTL3:
+	case MSR_F15H_PERF_CTR3:
+	case MSR_K7_EVNTSEL3:
+	case MSR_K7_PERFCTR3:
+		return INDEX_THREE;
+	case MSR_F15H_PERF_CTL4:
+	case MSR_F15H_PERF_CTR4:
+		return INDEX_FOUR;
+	case MSR_F15H_PERF_CTL5:
+	case MSR_F15H_PERF_CTR5:
+		return INDEX_FIVE;
+	default:
+		return INDEX_ERROR;
+	}
+}
+
+static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_pmu *pmu, u32 msr,
+					     enum pmu_type type)
+{
+	switch (msr) {
+	case MSR_F15H_PERF_CTL0:
+	case MSR_F15H_PERF_CTL1:
+	case MSR_F15H_PERF_CTL2:
+	case MSR_F15H_PERF_CTL3:
+	case MSR_F15H_PERF_CTL4:
+	case MSR_F15H_PERF_CTL5:
+	case MSR_K7_EVNTSEL0 ... MSR_K7_EVNTSEL3:
+		if (type != PMU_TYPE_EVNTSEL)
+			return NULL;
+		break;
+	case MSR_F15H_PERF_CTR0:
+	case MSR_F15H_PERF_CTR1:
+	case MSR_F15H_PERF_CTR2:
+	case MSR_F15H_PERF_CTR3:
+	case MSR_F15H_PERF_CTR4:
+	case MSR_F15H_PERF_CTR5:
+	case MSR_K7_PERFCTR0 ... MSR_K7_PERFCTR3:
+		if (type != PMU_TYPE_COUNTER)
+			return NULL;
+		break;
+	default:
+		return NULL;
+	}
+
+	return &pmu->gp_counters[msr_to_index(msr)];
+}
+
 static unsigned amd_find_arch_event(struct kvm_pmu *pmu,
 				    u8 event_select,
 				    u8 unit_mask)
@@ -64,7 +161,18 @@ static bool amd_pmc_is_enabled(struct kvm_pmc *pmc)
 
 static struct kvm_pmc *amd_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
 {
-	return get_gp_pmc(pmu, MSR_K7_EVNTSEL0 + pmc_idx, MSR_K7_EVNTSEL0);
+	unsigned int base = get_msr_base(pmu, PMU_TYPE_COUNTER);
+	struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);
+
+	if (guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE)) {
+		/*
+		 * The idx is contiguous. The MSRs are not. The counter MSRs
+		 * are interleaved with the event select MSRs.
+		 */
+		pmc_idx *= 2;
+	}
+
+	return get_gp_pmc_amd(pmu, base + pmc_idx, PMU_TYPE_COUNTER);
 }
 
 /* returns 0 if idx's corresponding MSR exists; otherwise returns 1. */
@@ -96,8 +204,8 @@ static bool amd_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	int ret = false;
 
-	ret = get_gp_pmc(pmu, msr, MSR_K7_PERFCTR0) ||
-		get_gp_pmc(pmu, msr, MSR_K7_EVNTSEL0);
+	ret = get_gp_pmc_amd(pmu, msr, PMU_TYPE_COUNTER) ||
+		get_gp_pmc_amd(pmu, msr, PMU_TYPE_EVNTSEL);
 
 	return ret;
 }
@@ -107,14 +215,14 @@ static int amd_pmu_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *data)
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	struct kvm_pmc *pmc;
 
-	/* MSR_K7_PERFCTRn */
-	pmc = get_gp_pmc(pmu, msr, MSR_K7_PERFCTR0);
+	/* MSR_PERFCTRn */
+	pmc = get_gp_pmc_amd(pmu, msr, PMU_TYPE_COUNTER);
 	if (pmc) {
 		*data = pmc_read_counter(pmc);
 		return 0;
 	}
-	/* MSR_K7_EVNTSELn */
-	pmc = get_gp_pmc(pmu, msr, MSR_K7_EVNTSEL0);
+	/* MSR_EVNTSELn */
+	pmc = get_gp_pmc_amd(pmu, msr, PMU_TYPE_EVNTSEL);
 	if (pmc) {
 		*data = pmc->eventsel;
 		return 0;
@@ -130,14 +238,14 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	u32 msr = msr_info->index;
 	u64 data = msr_info->data;
 
-	/* MSR_K7_PERFCTRn */
-	pmc = get_gp_pmc(pmu, msr, MSR_K7_PERFCTR0);
+	/* MSR_PERFCTRn */
+	pmc = get_gp_pmc_amd(pmu, msr, PMU_TYPE_COUNTER);
 	if (pmc) {
 		pmc->counter += data - pmc_read_counter(pmc);
 		return 0;
 	}
-	/* MSR_K7_EVNTSELn */
-	pmc = get_gp_pmc(pmu, msr, MSR_K7_EVNTSEL0);
+	/* MSR_EVNTSELn */
+	pmc = get_gp_pmc_amd(pmu, msr, PMU_TYPE_EVNTSEL);
 	if (pmc) {
 		if (data == pmc->eventsel)
 			return 0;
@@ -154,7 +262,11 @@ static void amd_pmu_refresh(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 
-	pmu->nr_arch_gp_counters = AMD64_NUM_COUNTERS;
+	if (guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE))
+		pmu->nr_arch_gp_counters = AMD64_NUM_COUNTERS_CORE;
+	else
+		pmu->nr_arch_gp_counters = AMD64_NUM_COUNTERS;
+
 	pmu->counter_bitmask[KVM_PMC_GP] = ((u64)1 << 48) - 1;
 	pmu->reserved_bits = 0xffffffff00200000ull;
 	/* not applicable to AMD; but clean them to prevent any fall out */
@@ -169,7 +281,7 @@ static void amd_pmu_init(struct kvm_vcpu *vcpu)
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	int i;
 
-	for (i = 0; i < AMD64_NUM_COUNTERS ; i++) {
+	for (i = 0; i < AMD64_NUM_COUNTERS_CORE ; i++) {
 		pmu->gp_counters[i].type = KVM_PMC_GP;
 		pmu->gp_counters[i].vcpu = vcpu;
 		pmu->gp_counters[i].idx = i;
@@ -181,7 +293,7 @@ static void amd_pmu_reset(struct kvm_vcpu *vcpu)
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	int i;
 
-	for (i = 0; i < AMD64_NUM_COUNTERS; i++) {
+	for (i = 0; i < AMD64_NUM_COUNTERS_CORE; i++) {
 		struct kvm_pmc *pmc = &pmu->gp_counters[i];
 
 		pmc_stop_counter(pmc);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6ca747a..5a32e17 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2451,6 +2451,7 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_AMD64_DC_CFG:
 		msr_info->data = 0;
 		break;
+	case MSR_F15H_PERF_CTL0 ... MSR_F15H_PERF_CTR5:
 	case MSR_K7_EVNTSEL0 ... MSR_K7_EVNTSEL3:
 	case MSR_K7_PERFCTR0 ... MSR_K7_PERFCTR3:
 	case MSR_P6_PERFCTR0 ... MSR_P6_PERFCTR1:
-- 
2.7.4