From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S932388AbcKGP5B (ORCPT );
	Mon, 7 Nov 2016 10:57:01 -0500
Received: from mx2.suse.de ([195.135.220.15]:38773 "EHLO mx2.suse.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S932385AbcKGP46 (ORCPT );
	Mon, 7 Nov 2016 10:56:58 -0500
Date: Mon, 7 Nov 2016 16:56:26 +0100
From: Borislav Petkov
To: Ingo Molnar
Cc: x86@kernel.org, Yazen Ghannam, linux-kernel@vger.kernel.org,
	Peter Zijlstra
Subject: Re: [PATCH v3 1/2] x86/AMD: Fix cpu_llc_id for AMD Fam17h systems
Message-ID: <20161107155626.rjapdlgiredm7uvh@pd.tnic>
References: <1478019063-2632-1-git-send-email-Yazen.Ghannam@amd.com>
 <20161102201321.slgzk2x2ya4jzfax@pd.tnic>
 <20161107073121.GB26938@gmail.com>
 <20161107092031.alxfkr6rpctodbdk@pd.tnic>
 <20161107140746.GA20626@gmail.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="robf4pbmvxcv3vcu"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20161107140746.GA20626@gmail.com>
User-Agent: NeoMutt/20161014 (1.7.1)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

--robf4pbmvxcv3vcu
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

On Mon, Nov 07, 2016 at 03:07:46PM +0100, Ingo Molnar wrote:
>  - cache domains might be seriously mixed up, resulting in a serious drop
>    in performance.
>
>  - or domains might be partitioned 'wrong' but not catastrophically
>    wrong, resulting in a minor performance drop (if at all)

Something between the two. Here's some debugging output from
set_cpu_sibling_map():

[    0.202033] smpboot: set_cpu_sibling_map: cpu: 0, has_smt: 0, has_mp: 1
[    0.202043] smpboot: set_cpu_sibling_map: first loop, llc(this): 65528, o: 0, llc(o): 65528
[    0.202058] smpboot: set_cpu_sibling_map: first loop, link mask smt

i.e., (i == cpu) matches for CPU 0, so we link it into the SMT mask even
if has_smt is off.
[    0.202067] smpboot: set_cpu_sibling_map: first loop, link mask llc
[    0.202077] smpboot: set_cpu_sibling_map: second loop, llc(this): 65528, o: 0, llc(o): 65528
[    0.202091] smpboot: set_cpu_sibling_map: second loop, link mask die

I've attached the debug diff.

And since those llc(o) values, i.e. the cpu_llc_id of the *other* CPU in
the loops in set_cpu_sibling_map(), underflow, we're generating the
funniest thread_siblings masks, and then when I run 8 threads of nbench,
they get spread around the LLC domains in a very strange pattern which
doesn't give you the normal scheduling spread one would expect for
performance.

And this is just one workload - I can't imagine what else might be
influenced by this funkiness.

Oh, and other things like EDAC use cpu_llc_id, so they will be b0rked
too.

So we absolutely need to fix that cpu_llc_id thing.

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

--robf4pbmvxcv3vcu
Content-Type: text/x-diff; charset=utf-8
Content-Disposition: attachment; filename="set_cpu_sibling_map_debug.diff"

diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 601d2b331350..5974098d8266 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -506,6 +506,9 @@ void set_cpu_sibling_map(int cpu)
 	struct cpuinfo_x86 *o;
 	int i, threads;
 
+	pr_info("%s: cpu: %d, has_smt: %d, has_mp: %d\n",
+		__func__, cpu, has_smt, has_mp);
+
 	cpumask_set_cpu(cpu, cpu_sibling_setup_mask);
 
 	if (!has_mp) {
@@ -519,11 +522,19 @@ void set_cpu_sibling_map(int cpu)
 	for_each_cpu(i, cpu_sibling_setup_mask) {
 		o = &cpu_data(i);
 
+		pr_info("%s: first loop, llc(this): %d, o: %d, llc(o): %d\n",
+			__func__, per_cpu(cpu_llc_id, cpu),
+			o->cpu_index, per_cpu(cpu_llc_id, o->cpu_index));
+
-		if ((i == cpu) || (has_smt && match_smt(c, o)))
+		if ((i == cpu) || (has_smt && match_smt(c, o))) {
+			pr_info("%s: first loop, link mask smt\n", __func__);
 			link_mask(topology_sibling_cpumask, cpu, i);
+		}
 
-		if ((i == cpu) || (has_mp && match_llc(c, o)))
+		if ((i == cpu) || (has_mp && match_llc(c, o))) {
+			pr_info("%s: first loop, link mask llc\n", __func__);
 			link_mask(cpu_llc_shared_mask, cpu, i);
+		}
 	}
 
@@ -534,7 +545,12 @@ void set_cpu_sibling_map(int cpu)
 	for_each_cpu(i, cpu_sibling_setup_mask) {
 		o = &cpu_data(i);
 
+		pr_info("%s: second loop, llc(this): %d, o: %d, llc(o): %d\n",
+			__func__, per_cpu(cpu_llc_id, cpu),
+			o->cpu_index, per_cpu(cpu_llc_id, o->cpu_index));
+
 		if ((i == cpu) || (has_mp && match_die(c, o))) {
+			pr_info("%s: second loop, link mask die\n", __func__);
 			link_mask(topology_core_cpumask, cpu, i);
 
 			/*

--robf4pbmvxcv3vcu--