Date: Mon, 11 Jan 2021 10:28:11 +0100
From: Morten Rasmussen
To: Tim Chen
Cc: Barry Song, valentin.schneider@arm.com, catalin.marinas@arm.com,
 will@kernel.org, rjw@rjwysocki.net, vincent.guittot@linaro.org,
 lenb@kernel.org, gregkh@linuxfoundation.org, jonathan.cameron@huawei.com,
 mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
 dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
 mgorman@suse.de, mark.rutland@arm.com, sudeep.holla@arm.com,
 aubrey.li@linux.intel.com, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, linux-acpi@vger.kernel.org,
 linuxarm@openeuler.org, xuwei5@huawei.com, prime.zeng@hisilicon.com,
 tiantao6@hisilicon.com
Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler
Message-ID: <20210111092811.GB47324@e123083-lin>
References: <20210106083026.40444-1-song.bao.hua@hisilicon.com>
 <737932c9-846a-0a6b-08b8-e2d2d95b67ce@linux.intel.com>
 <20210108151241.GA47324@e123083-lin>
 <99c07bdf-02d1-153a-bd1e-2f4200cc67c5@linux.intel.com>
In-Reply-To: <99c07bdf-02d1-153a-bd1e-2f4200cc67c5@linux.intel.com>
List-ID: linux-acpi@vger.kernel.org

On Fri, Jan 08, 2021 at 12:22:41PM -0800, Tim Chen wrote:
> On 1/8/21 7:12 AM, Morten Rasmussen wrote:
> > On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
> >> On 1/6/21 12:30 AM, Barry Song wrote:
> >>> ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and
> >>> each cluster has 4 CPUs. All clusters share L3 cache data, while each
> >>> cluster has a local L3 tag. In addition, the CPUs within a cluster
> >>> share some internal system bus. This means cache is much more affine
> >>> inside one cluster than across clusters.
> >>
> >> There is a similar need for clustering in x86. Some x86 cores share an
> >> L2 cache in a way that is similar to a cluster in Kunpeng 920 (e.g. on
> >> Jacobsville there are 6 clusters of 4 Atom cores, each cluster sharing
> >> a separate L2, and 24 cores sharing L3). Having a sched domain at the
> >> L2 cluster level helps spread load among the L2 domains.
> >> This will reduce L2 cache contention and help performance in low to
> >> moderate load scenarios.
> >
> > IIUC, you are arguing for the exact opposite behaviour, i.e. balancing
> > between L2 caches, while Barry is after consolidating tasks within the
> > boundaries of an L3 tag cache. One helps cache utilization, the other
> > helps communication latency between tasks. Am I missing something?
> >
> > IMHO, we need some numbers on the table to decide which way to go.
> > Looking at benchmarks of just one type doesn't show that this is a
> > good idea in general.
>
> I think it is going to depend on the workload. If there are dependent
> tasks that communicate with one another, putting them together in the
> same cluster is the right thing to do to reduce communication costs. On
> the other hand, if the tasks are independent, putting them together on
> the same cluster will increase resource contention, and spreading them
> out will be better.

Agree. That is exactly where I'm coming from. This is all about the task
placement policy. We generally tend to spread tasks to avoid contention
on resources such as SMT siblings and caches, which seems to be what you
are proposing to extend. I think that makes sense, given that it can
produce significant benefits.

> Any thoughts on what is the right clustering "tag" to use to clump
> related tasks together?
> Cgroup? Pid? Tasks with same mm?

I think this is the real question. The closest thing we have at the
moment is the wakee/waker flip heuristic, which seems related. Perhaps
the wake_affine tricks can serve as a starting point?

Morten
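As background for the "sched domain at the L2 cluster" point above: adding
such a domain amounts to adding one more level to the scheduler's topology
table. The sketch below is purely illustrative and is not taken from this
patch series; CONFIG_SCHED_CLUSTER, cpu_clustergroup_mask and
cpu_cluster_flags are assumed names for the new level, while the SMT/MC/DIE
entries follow the shape of default_topology[] in kernel/sched/topology.c.

static struct sched_domain_topology_level default_topology[] = {
#ifdef CONFIG_SCHED_SMT
	{ cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
#endif
#ifdef CONFIG_SCHED_CLUSTER
	/* CPUs sharing a cluster, e.g. an L2 or a local L3 tag partition. */
	{ cpu_clustergroup_mask, cpu_cluster_flags, SD_INIT_NAME(CLS) },
#endif
#ifdef CONFIG_SCHED_MC
	{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
#endif
	{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
	{ NULL, },
};

With a level like this in place, the regular load balancer spreads runnable
tasks across clusters (the contention-avoidance case Tim describes), while
any policy that packs communicating tasks into one cluster would have to
come from the wakeup path.

The wakee/waker flip heuristic mentioned above lives in record_wakee() and
wake_wide() in kernel/sched/fair.c. The following is a simplified,
from-memory sketch of the idea rather than a verbatim copy, and it relies
on scheduler-internal state (the wakee_flips/last_wakee fields in
task_struct, jiffies and the per-CPU sd_llc_size), so it only builds inside
the scheduler:

static void record_wakee(struct task_struct *p)
{
	/* Decay the flip count roughly once per second. */
	if (time_after(jiffies, current->wakee_flip_decay_ts + HZ)) {
		current->wakee_flips >>= 1;
		current->wakee_flip_decay_ts = jiffies;
	}

	/* A "flip" means the waker woke a different task than last time. */
	if (current->last_wakee != p) {
		current->last_wakee = p;
		current->wakee_flips++;
	}
}

static int wake_wide(struct task_struct *p)
{
	unsigned int master = current->wakee_flips;
	unsigned int slave = p->wakee_flips;
	int factor = __this_cpu_read(sd_llc_size);

	if (master < slave)
		swap(master, slave);
	/* Few flips: looks like a 1:1 pair, try an affine (close) wakeup. */
	if (slave < factor || master < slave * factor)
		return 0;
	/* Many flips relative to LLC size: likely 1:N, spread the wakeup. */
	return 1;
}

The heuristic tries to tell a 1:1 communicating pair (worth keeping
cache-close) apart from a waker fanning work out to many wakees (worth
spreading), which is why it looks like a plausible starting point for
deciding when packing related tasks into one cluster pays off.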