From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1754156AbdKFRpO (ORCPT );
	Mon, 6 Nov 2017 12:45:14 -0500
Received: from mail-sn1nam01on0086.outbound.protection.outlook.com
	([104.47.32.86]:16416 "EHLO NAM01-SN1-obe.outbound.protection.outlook.com"
	rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP
	id S1753729AbdKFRol (ORCPT );
	Mon, 6 Nov 2017 12:44:41 -0500
From: Janakarajan Natarajan
To: kvm@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org
Cc: Thomas Gleixner, Ingo Molnar, "H . Peter Anvin", Paolo Bonzini,
	Radim Krcmar, Len Brown, Kyle Huey, Borislav Petkov, Kan Liang,
	Grzegorz Andrejczuk, Tom Lendacky, Tony Luck, Janakarajan Natarajan
Subject: [PATCH v2 3/4] Add support for AMD Core Perf Extension in guest
Date: Mon, 6 Nov 2017 11:44:25 -0600
Message-Id: <5113a9d6e76d2c6050c1fba4007068340321521c.1509985085.git.Janakarajan.Natarajan@amd.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: 
References: 
MIME-Version: 1.0
Content-Type: text/plain
1;BN6PR12MB1668;3:KLNGWYx1sKJ2ZUl+0R7V0TUu6nhCpnwV4wWog04puxm3fnvtWhwcgx6Zb/xZCWz9A3kI7oCe4icLdEBwB1/C0/XPeMPi+2eylg/6hoISyRW5/1kRSJsf2SImOnoX9G4ljvV86R60AWsXxiEskJPeDGwMQC/qe6f+vKQ05/YRU2UTUgDYvEzyUEx+CV8iVnj3lDSt5cniMuymHMrG3yz//OrzZFrd1B5bfMuH+IrvSyxV6vZW1WIpOOpaJBSQSflO;25:sf5nA7agI1CI3BwWzkz44ZE+sBaztMCTk6DuyYYFUYg3Y834B5JQ5LblnZEjIIZ6WwHYAOXp3q+BeqH3Y06BQNcc05N/UspCJWJyiIuHcw8w3VV3Opi+w4PazPO/S0O50lvt7ndK6+rYS6YEa71L4xc2M5rKbsq1R+oJcOvY3iq2kpXPAgSjl1K/A8Vj8CJXV43bCjjezpEduhR3ZSVb6+90MaAKMTGIFHV+0xzGxTJa7qEjfUxEH/jYcSP0b6ulxJGS4Betj8NmMj8OGO4BqVIGRjWJ5nt8l5Bifk/x6Kno4KrFV1UkygTYoxDJVaGUP/fRZk/g/lIiBppphvnCzQ==;31:K6Ns6TVZD4WyuebGAcclsb4LQxyRqFG8zwFRPAs63gXHoJ3RF0sUhQp6WeXX3MbozdAFrFiubGR3Rtji+atp5UVRdvDLk1bjTwHapu3UmvqjjMTHyXpa+2kH6/5qIKA61JQhPxEP15UIRgqQeEJqWvg5t+fnSyaNnz3x/bKBAPMWwV0P1ktlebbXLSjXVO9jCgO+0CJ+WxqEBSo8gBaeF6uujaFQbkI+2W9pEI/as+c= X-MS-TrafficTypeDiagnostic: BN6PR12MB1668: X-Microsoft-Exchange-Diagnostics: 1;BN6PR12MB1668;20:khVUtuJnFP8sauJTrxYQ26Ed2rNBEmeE1RrLBPOgc1doRiUVd4+Jp6RrRtfNGvQPu69sAOsrzro9TjyKt1rX84OZS/1S5GGXM/YulDGmrP4CjUd8IO4YwKqjt0vYo2DS/dS7RUuhW1l8a8z1EV3FQOK3eab0B/QjsvFVOPjJkpa5YecEPr2L6tviS3p3H/Qw7ToGFuFzBaeREBf73YMoP0wzqmIZXvrE52NEJZndd6AMbKSpr0TNfL8SR6iz77iR8ymXwNkppfljj2k3UZ/bEttPhjlcrsm1H1AL9yukbvLlIUhYiLLar5Z6ZktIbOuz7lPOnv2pXsGkhph5LsBb3JuBrscNSx/fcDB0pItpudXRf41ecSOeSG5mclOJpdb9SNWoiyx6sbpe9PUC/+63mSpjoIlr1tO8AAmrM0qUQziZgJyDdwci7nlcp4b87v3y/TLK7tHhxG8K/nYM0KHxgSuvckDPZV54+xy3sGGAHMAlOWB0Zxr0sGWHFWxsoN0Q;4:4sUzhrVJbUS3wU/n7e4rWY3hRmLyPMotLEDSpI2BefWxJ0ihaBM+q6zGXd0GFZYVAXfdO+e+d5mm0v30dbMj4FDgrqZAmaDwOEHP8+hKFv4Mid/f/cTOlYmDP84KkiP2rDHNlRigf3aj2wiX+weblhy8FqNX6oHinbSZZnM4UzAI3QCPVP13qw+ZXUl6bYDyfyFKChG2EThTjVxoY0Cb6zBTpGeR1H60d3/aW6lr9TIWXy6LKL8EmOSikhpxEdOq1+pMKEbV+hI8OviSSHZMrsMI2pCp97divSIif2e9bJg= X-Exchange-Antispam-Report-Test: UriScan:(767451399110); X-Microsoft-Antispam-PRVS: X-Exchange-Antispam-Report-CFA-Test: 
BCL:0;PCL:0;RULEID:(100000700101)(100105000095)(100000701101)(100105300095)(100000702101)(100105100095)(6040450)(2401047)(8121501046)(5005006)(3231021)(100000703101)(100105400095)(3002001)(93006095)(93001095)(10201501046)(6055026)(6041248)(20161123564025)(20161123562025)(20161123555025)(201703131423075)(201702281528075)(201703061421075)(201703061406153)(20161123560025)(20161123558100)(6072148)(201708071742011)(100000704101)(100105200095)(100000705101)(100105500095);SRVR:BN6PR12MB1668;BCL:0;PCL:0;RULEID:(100000800101)(100110000095)(100000801101)(100110300095)(100000802101)(100110100095)(100000803101)(100110400095)(100000804101)(100110200095)(100000805101)(100110500095);SRVR:BN6PR12MB1668; X-Forefront-PRVS: 048396AFA0 X-Forefront-Antispam-Report: SFV:NSPM;SFS:(10009020)(6009001)(346002)(376002)(39860400002)(189002)(199003)(6486002)(16586007)(25786009)(305945005)(54906003)(3846002)(16526018)(478600001)(189998001)(50226002)(36756003)(66066001)(48376002)(7736002)(47776003)(4326008)(72206003)(316002)(86362001)(2906002)(6116002)(53936002)(76176999)(97736004)(50986999)(105586002)(53416004)(8936002)(68736007)(118296001)(50466002)(7416002)(81166006)(106356001)(5003940100001)(101416001)(81156014)(2950100002)(8676002)(6666003)(5660300001);DIR:OUT;SFP:1101;SCL:1;SRVR:BN6PR12MB1668;H:gi-joe.amd.com;FPR:;SPF:None;PTR:InfoNoRecords;MX:1;A:1;LANG:en; X-Microsoft-Exchange-Diagnostics: =?us-ascii?Q?1;BN6PR12MB1668;23:4NnZhnGqewUNzcSGWXC8zO0erA/Ii9eCQVjJ0SaKe?= =?us-ascii?Q?9DqWJW4o3Ky45bC/wUiNtfu0Fas2F7GSkiXXmQSlFg+MJYtOY9pp9TLy+rxM?= =?us-ascii?Q?MYDwShaEmsm7w19K3IpNGkO5nIAbcV/KefXQvsYOv+jewo2gYRGySEl5euM9?= =?us-ascii?Q?OuvwuGc7FNzfoEmvc8VKR38kazIeWrYNBbmlKSl37gdMgsvx43uQdaM8Yh9z?= =?us-ascii?Q?oUZLLQ4R5CsjCrEZbqKYDjSQTFYHXwSoqjXh9iz7hiS8HdkuHR/H5pGHzl7C?= =?us-ascii?Q?nDev+z/c8eku4MWwtrBPAfQ+vVRToJMkYGaSYCLCr+KRwxcw9ejKwvWrNDfs?= =?us-ascii?Q?YQXMjxI9pOtM0tRWEYK8jUM/suC21ryfRgZa0PwcwhX1gZtc6/wAWou/sY+1?= =?us-ascii?Q?aHDRkX6B4tk/da4AwUa4cPkvo1ebyXYb5VGCHWTVvwRbQ6tueDwrl2txCNzT?= 
=?us-ascii?Q?E2y8XLVJ6tKzYJkr8whRRvjg3Rbuk2IHrNM4q+iS9wajJJFawrcrJ5sic0hK?= =?us-ascii?Q?5l7X/K4c4x8vpMimVd43qmvI61+bkhHac8V2sH3Ln9aM0UWb9VXKAzc5ifU+?= =?us-ascii?Q?4zDq4522+6IuTWyX+gkducxLf0azj3x2untQpM/QZ0bNngwjpJEzQQReTwqh?= =?us-ascii?Q?DI8QjOSMC3Nw5Kh2WZQC1snIHqm4iA+Q9TgO8ZxPnGAEDFdp9BM5QMTEI+yu?= =?us-ascii?Q?2jO9iZMZ5Q11IFbUmIsmNwVJLJG9PY0N3RutJ/VLYCzT7jYdErBBzJ3BBd4e?= =?us-ascii?Q?3Im/lAxqul8qRnNAqtk2sFoTDgAzmOX0RilZi8sC/L7eSDGYkvkg3bME+m0B?= =?us-ascii?Q?tZyJp1njp2szDcNvkC+on/S8f2mDGpIkKm3cfpwkWwIzZs9WU8cPg3lGcc+/?= =?us-ascii?Q?MYxNVGuXjAKXsj+aIO8bF0QhWuNOy6tXFQFlrOaAUB5ugRnDf9jCNaZaz6wn?= =?us-ascii?Q?1zMe+faQVq3c8BdLuAdtFVltlqcV2aEi3OWWA6kvunaJIPa0DpWqSrVUm2uA?= =?us-ascii?Q?jQrmsAKEVC6LpYT5P7RoJ0g8sT8dKp8FgWARwpJhoXvssYCNktYT6UhiUPkG?= =?us-ascii?Q?PAJXwpK2LV5xRX44IsM+Gn2aOT5bU6vPdr1IHiiklRhLERWeWo8Ug7otgx6i?= =?us-ascii?Q?lzmdEMR7tY=3D?= X-Microsoft-Exchange-Diagnostics: 1;BN6PR12MB1668;6:NJX6G6uBErCOg6vE9iDSutwvDiANRmeZ3KShm/kVeF2lcXkue0PtxmBMSakpVCcoTOS0KLWqvYwcLPTG8cZtSvAXmbptkNSoxkyS792KIFcRjdofcalaC94jfefAouUi20s4Botko3U4PXuYPAmuNVD4WY9sIj3HvpCkm+WjnvAQftElxvocb9QFEZGjwkJni2qbx885neeHG83lMdK5w2lehlNXElPx8M6Jq3Cqo6Im3+rwaI6QyLtShQW/iLjiNZ2RSwaxPqWxwVGlBONMoP47a46L569QvSxHccm4mExA1HpzINQCFcA+3fmbOm6wmhYTmUp/Rhvo6jeccfTBMg==;5:zWVM6SaFAlNisi82ZHQ7Opp5xFNlpRa3Q8OZbdVNfMPHR9t7eLIbot7IK9k9TCkL9HFLHSL9tNm2396uE9xiN6a4l1BW5M4rhMJVRZCql4XnA3wPcSaGdF1dUAnqC1fZUu92xJEvQufkhlfHWmN8wg==;24:FiD/Fk8yMt4x7/wDPL/WywdFwRvYhOOdIK9FdDBJfbbiApsENeB9G4bxwkL3oGextL/apsgZHbHqC5fb9b9HcMf7MfS+mikctqn+1v+3JrU=;7:1P1PuCpVzJJ+B0qC20xz4OTpb8H+BzswJyx1X/iDD+eOSNhTnRygfaYkQQCNiUwN9+GRThVA0ZvoVzaCE4K6mt0INHVgTJt92o6ZUjK1d+uyIjnzpymZH7A03gDbldhsCQpTVNOGsiSJg/2zeVvDlNG6aqfW6OI4+RfbkpX8yCXyRSCiMy8dagw0IHEkA8SqzBkX9Oh9Otcl1PJMZdGOvQhKT41oF4JzQBXXq4e/bSI= SpamDiagnosticOutput: 1:99 SpamDiagnosticMetadata: NSPM X-Microsoft-Exchange-Diagnostics: 
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

This patch adds support for AMD Core Performance counters in the guest.
The base event select and counter MSRs are changed. In addition, with
the core extension, there are 2 extra counters available for
performance measurements, for a total of 6.

With the new MSRs, the logic to map them to the gp_counters[] is
changed. New functions are introduced to get the right base MSRs and to
check the validity of the get/set MSRs.

If the guest vcpus are of family 16h, or of a generation earlier than
family 15h, KVM falls back to using the K7 MSRs and the number of
counters the guest can access is set to 4.

Signed-off-by: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
---
 arch/x86/kvm/pmu_amd.c | 133 +++++++++++++++++++++++++++++++++++++++++++------
 arch/x86/kvm/x86.c     |   1 +
 2 files changed, 120 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/pmu_amd.c b/arch/x86/kvm/pmu_amd.c
index cd94443..2c694446 100644
--- a/arch/x86/kvm/pmu_amd.c
+++ b/arch/x86/kvm/pmu_amd.c
@@ -19,6 +19,11 @@
 #include "lapic.h"
 #include "pmu.h"
 
+enum pmu_type {
+	PMU_TYPE_COUNTER = 0,
+	PMU_TYPE_EVNTSEL,
+};
+
 /* duplicated from amd_perfmon_event_map, K7 and above should work. */
 static struct kvm_event_hw_type_mapping amd_event_mapping[] = {
 	[0] = { 0x76, 0x00, PERF_COUNT_HW_CPU_CYCLES },
@@ -31,6 +36,86 @@ static struct kvm_event_hw_type_mapping amd_event_mapping[] = {
 	[7] = { 0xd1, 0x00, PERF_COUNT_HW_STALLED_CYCLES_BACKEND },
 };
 
+static unsigned int get_msr_base(struct kvm_pmu *pmu, enum pmu_type type)
+{
+	struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);
+	int family;
+	bool has_perf_ext;
+
+	family = guest_cpuid_family(vcpu);
+	has_perf_ext = guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE);
+
+	switch (family) {
+	case 0x15:
+	case 0x17:
+		if (has_perf_ext) {
+			if (type == PMU_TYPE_COUNTER)
+				return MSR_F15H_PERF_CTR;
+			else
+				return MSR_F15H_PERF_CTL;
+			break;
+		}
+		/*
+		 * Fall-through because the K7 MSRs are
+		 * backwards compatible
+		 */
+	default:
+		if (type == PMU_TYPE_COUNTER)
+			return MSR_K7_PERFCTR0;
+		else
+			return MSR_K7_EVNTSEL0;
+	}
+}
+
+static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_pmu *pmu, u32 msr,
+					     enum pmu_type type)
+{
+	unsigned int base = get_msr_base(pmu, type);
+
+	if (base == MSR_F15H_PERF_CTL) {
+		switch (msr) {
+		case MSR_F15H_PERF_CTL0:
+		case MSR_F15H_PERF_CTL1:
+		case MSR_F15H_PERF_CTL2:
+		case MSR_F15H_PERF_CTL3:
+		case MSR_F15H_PERF_CTL4:
+		case MSR_F15H_PERF_CTL5:
+			/*
+			 * AMD Perf Extension MSRs are not continuous.
+			 *
+			 * E.g. MSR_F15H_PERF_CTR0 -> 0xc0010201
+			 *      MSR_F15H_PERF_CTR1 -> 0xc0010203
+			 *
+			 * These are mapped to work with gp_counters[].
+			 * The index into the array is calculated by
+			 * dividing the difference between the requested
+			 * msr and the msr base by 2.
+			 *
+			 * E.g. MSR_F15H_PERF_CTR1 uses
+			 *      ->gp_counters[(0xc0010203-0xc0010201)/2]
+			 *      ->gp_counters[1]
+			 */
+			return &pmu->gp_counters[(msr - base) >> 1];
+		default:
+			return NULL;
+		}
+	} else if (base == MSR_F15H_PERF_CTR) {
+		switch (msr) {
+		case MSR_F15H_PERF_CTR0:
+		case MSR_F15H_PERF_CTR1:
+		case MSR_F15H_PERF_CTR2:
+		case MSR_F15H_PERF_CTR3:
+		case MSR_F15H_PERF_CTR4:
+		case MSR_F15H_PERF_CTR5:
+			return &pmu->gp_counters[(msr - base) >> 1];
+		default:
+			return NULL;
+		}
+	} else {
+		return get_gp_pmc(pmu, msr, base);
+	}
+}
+
 static unsigned amd_find_arch_event(struct kvm_pmu *pmu,
 				    u8 event_select,
 				    u8 unit_mask)
@@ -64,7 +149,20 @@ static bool amd_pmc_is_enabled(struct kvm_pmc *pmc)
 
 static struct kvm_pmc *amd_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
 {
-	return get_gp_pmc(pmu, MSR_K7_EVNTSEL0 + pmc_idx, MSR_K7_EVNTSEL0);
+	unsigned int base = get_msr_base(pmu, PMU_TYPE_COUNTER);
+	unsigned int family;
+
+	family = guest_cpuid_family(pmu_to_vcpu(pmu));
+
+	if (family == 0x15 || family == 0x17) {
+		/*
+		 * The idx is contiguous. The MSRs are not. The counter MSRs
+		 * are interleaved with the event select MSRs.
+		 */
+		pmc_idx *= 2;
+	}
+
+	return get_gp_pmc_amd(pmu, base + pmc_idx, PMU_TYPE_COUNTER);
 }
 
 /* returns 0 if idx's corresponding MSR exists; otherwise returns 1. */
@@ -96,8 +194,8 @@ static bool amd_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	int ret = false;
 
-	ret = get_gp_pmc(pmu, msr, MSR_K7_PERFCTR0) ||
-		get_gp_pmc(pmu, msr, MSR_K7_EVNTSEL0);
+	ret = get_gp_pmc_amd(pmu, msr, PMU_TYPE_COUNTER) ||
+		get_gp_pmc_amd(pmu, msr, PMU_TYPE_EVNTSEL);
 
 	return ret;
 }
@@ -107,14 +205,14 @@ static int amd_pmu_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *data)
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	struct kvm_pmc *pmc;
 
-	/* MSR_K7_PERFCTRn */
-	pmc = get_gp_pmc(pmu, msr, MSR_K7_PERFCTR0);
+	/* MSR_PERFCTRn */
+	pmc = get_gp_pmc_amd(pmu, msr, PMU_TYPE_COUNTER);
 	if (pmc) {
 		*data = pmc_read_counter(pmc);
 		return 0;
 	}
-	/* MSR_K7_EVNTSELn */
-	pmc = get_gp_pmc(pmu, msr, MSR_K7_EVNTSEL0);
+	/* MSR_EVNTSELn */
+	pmc = get_gp_pmc_amd(pmu, msr, PMU_TYPE_EVNTSEL);
 	if (pmc) {
 		*data = pmc->eventsel;
 		return 0;
@@ -130,14 +228,14 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	u32 msr = msr_info->index;
 	u64 data = msr_info->data;
 
-	/* MSR_K7_PERFCTRn */
-	pmc = get_gp_pmc(pmu, msr, MSR_K7_PERFCTR0);
+	/* MSR_PERFCTRn */
+	pmc = get_gp_pmc_amd(pmu, msr, PMU_TYPE_COUNTER);
 	if (pmc) {
 		pmc->counter += data - pmc_read_counter(pmc);
 		return 0;
 	}
-	/* MSR_K7_EVNTSELn */
-	pmc = get_gp_pmc(pmu, msr, MSR_K7_EVNTSEL0);
+	/* MSR_EVNTSELn */
+	pmc = get_gp_pmc_amd(pmu, msr, PMU_TYPE_EVNTSEL);
 	if (pmc) {
 		if (data == pmc->eventsel)
 			return 0;
@@ -153,8 +251,15 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 static void amd_pmu_refresh(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	int family, nr_counters;
+
+	family = guest_cpuid_family(vcpu);
+	if (family == 0x15 || family == 0x17)
+		nr_counters = AMD64_NUM_COUNTERS_CORE;
+	else
+		nr_counters = AMD64_NUM_COUNTERS;
 
-	pmu->nr_arch_gp_counters = AMD64_NUM_COUNTERS;
+	pmu->nr_arch_gp_counters = nr_counters;
 	pmu->counter_bitmask[KVM_PMC_GP] = ((u64)1 << 48) - 1;
 	pmu->reserved_bits = 0xffffffff00200000ull;
 	/* not applicable to AMD; but clean them to prevent any fall out */
@@ -169,7 +274,7 @@ static void amd_pmu_init(struct kvm_vcpu *vcpu)
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	int i;
 
-	for (i = 0; i < AMD64_NUM_COUNTERS ; i++) {
+	for (i = 0; i < AMD64_NUM_COUNTERS_CORE ; i++) {
 		pmu->gp_counters[i].type = KVM_PMC_GP;
 		pmu->gp_counters[i].vcpu = vcpu;
 		pmu->gp_counters[i].idx = i;
@@ -181,7 +286,7 @@ static void amd_pmu_reset(struct kvm_vcpu *vcpu)
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	int i;
 
-	for (i = 0; i < AMD64_NUM_COUNTERS; i++) {
+	for (i = 0; i < AMD64_NUM_COUNTERS_CORE; i++) {
 		struct kvm_pmc *pmc = &pmu->gp_counters[i];
 
 		pmc_stop_counter(pmc);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 03869eb..5a6ad6f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2433,6 +2433,7 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_AMD64_DC_CFG:
 		msr_info->data = 0;
 		break;
+	case MSR_F15H_PERF_CTL0 ... MSR_F15H_PERF_CTR5:
 	case MSR_K7_EVNTSEL0 ... MSR_K7_EVNTSEL3:
 	case MSR_K7_PERFCTR0 ... MSR_K7_PERFCTR3:
 	case MSR_P6_PERFCTR0 ... MSR_P6_PERFCTR1:
-- 
2.7.4