From: Lauro Venancio <lvenanci@redhat.com>
Organization: Red Hat
To: Peter Zijlstra
Cc: linux-kernel@vger.kernel.org, lwang@redhat.com, riel@redhat.com, Mike Galbraith, Thomas Gleixner, Ingo Molnar
Subject: Re: [RFC 3/3] sched/topology: Different sched groups must not have the same balance cpu
Date: Mon, 17 Apr 2017 12:34:05 -0300
Message-ID: <731e0515-63e8-2a58-832d-89619065a328@redhat.com>
In-Reply-To: <20170414164909.tfybszncwkm4yxap@hirez.programming.kicks-ass.net>
References: <1492091769-19879-1-git-send-email-lvenanci@redhat.com> <1492091769-19879-4-git-send-email-lvenanci@redhat.com> <20170414164909.tfybszncwkm4yxap@hirez.programming.kicks-ass.net>

On 04/14/2017 01:49 PM, Peter Zijlstra wrote:
> On Thu, Apr 13, 2017 at 10:56:09AM -0300, Lauro Ramos Venancio wrote:
>> Currently, the group balance cpu is the group's first CPU. But with
>> overlapping groups, two different groups can have the same first CPU.
>>
>> This patch uses the group mask to mark all the CPUs that have a
>> particular group as its main sched group. The group balance cpu is
>> the first group CPU that is also in the mask.
> Please give a NUMA configuration and CPU number where this goes wrong.

On a 4-node system with ring topology, the groups (0-1,3 [cpu 0]),
(0-2 [cpu 1]) and (0,2-3 [cpu 3]) share the same sched_group_capacity
instance when the first group CPU is used to select the sgc.

> Because only the first group of a domain matters, and with the other
> thing fixed, I'm not immediately seeing where we go wobbly.

Before patch 2, the group balance cpu was implicitly used to select
the sched_group_capacity instance: when two different groups had the
same balance cpu, they shared the same sched_group_capacity instance.
After patch 2, a distinct sched_group_capacity instance is assigned
to each group instance.

This patch ensures three things:

1) different instances of the same group share the same
sched_group_capacity instance;

2) instances of different groups don't share the same
sched_group_capacity instance;

3) the group balance cpu is one of the cpus where the group is
installed.

I am rebasing this patch on top of your patches.
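
For reference, a minimal sketch of the balance cpu selection described
above ("the first group CPU that is also in the mask"). It uses the
sched_group_cpus()/sched_group_mask() accessors from
kernel/sched/sched.h and, as far as I can tell, mirrors the existing
group_balance_cpu() helper; the function name here is mine, and the
actual patch may differ in detail:

	/*
	 * Pick the first CPU that is both in the group's span and in
	 * the group mask, i.e. the first CPU that has this group as
	 * its main sched group.
	 */
	static int balance_cpu_sketch(struct sched_group *sg)
	{
		return cpumask_first_and(sched_group_cpus(sg),
					 sched_group_mask(sg));
	}

With the ring example above, the mask is what disambiguates the three
groups that all start at CPU 0: they get balance cpus 0, 1 and 3
respectively, so they no longer map to the same sched_group_capacity
instance.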