Date: Wed, 7 Nov 2018 12:06:46 +0000
From: Mark Rutland
To: Nick Kossifidis
Cc: devicetree@vger.kernel.org, Damien.LeMoal@wdc.com, alankao@andestech.com,
    hch@infradead.org, anup@brainfault.org, palmer@sifive.com,
    linux-kernel@vger.kernel.org, zong@andestech.com, Atish Patra,
    robh+dt@kernel.org, Sudeep Holla, linux-riscv@lists.infradead.org,
    tglx@linutronix.de
Subject: Re: [RFC 0/2] Add RISC-V cpu topology
Message-ID: <20181107120645.sc3wjgr2yakvxktl@lakrids.cambridge.arm.com>
References: <1541113468-22097-1-git-send-email-atish.patra@wdc.com>
 <866dedbc78ab4fa0e3b040697e112106@mailhost.ics.forth.gr>
 <20181106141331.GA28458@e107155-lin>
 <969fc2a5198984e0dfe8c3f585dc65f9@mailhost.ics.forth.gr>
 <20181106162051.w7fyweuxrl7ujzuz@lakrids.cambridge.arm.com>

On Wed, Nov 07, 2018 at 04:31:34AM +0200, Nick Kossifidis wrote:
> Mark and Sudeep, thanks a lot for your feedback. I guess you've
> convinced me that having a device tree binding for the scheduler is
> not a correct approach. It's not a device after all, and I agree that
> the device tree shouldn't become an OS configuration file.

Good to hear.

> Regarding multiple levels of shared resources, my point is that since
> cpu-map doesn't contain any information about what is shared among the
> cluster/core members, it's not easy to do any further translation.
> Last time I checked the arm code that uses cpu-map, it only defines
> one domain for SMT, one for MC, and then everything else is ignored.
> No matter how many clusters have been defined, anything above the core
> level is treated the same (and then I guess you started talking about
> adding "packages" on the representation side).

While cpu-map doesn't contain that information today, we can *add* that
information to the cpu-map binding if necessary.

> The reason I proposed a binding for the scheduler directly is not only
> that it's simpler and closer to what really happens in the code; it
> also makes more sense to me than the combination of cpu-map with all
> the related mappings, e.g. for NUMA, caches, or power domains.
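
For reference, since the proposal below modifies it, this is roughly
what the existing cpu-map binding
(Documentation/devicetree/bindings/arm/topology.txt) looks like today.
A minimal sketch with made-up CPU labels: one cluster, one SMT core and
one single-thread core:

	cpus {
		#address-cells = <1>;
		#size-cells = <0>;

		cpu-map {
			cluster0 {
				core0 {
					thread0 {
						cpu = <&CPU0>;
					};
					thread1 {
						cpu = <&CPU1>;
					};
				};
				core1 {
					cpu = <&CPU2>;
				};
			};
		};

		/* the usual CPU0/CPU1/CPU2 cpu nodes live here */
	};

The container nodes carry no properties of their own; everything
per-cpu sits on the cpu nodes themselves.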

> However, you are right: we could definitely augment cpu-map to include
> support for what I'm saying and clean things up, and since you are
> open to improving it, here is a proposal that I hope you find
> interesting:
>
> First, let's get rid of the <thread> nodes; they don't make sense:
>
> 	thread0 {
> 		cpu = <&CPU0>;
> 	};
>
> A thread node can't have more than one cpu entry, and any properties
> should be on the cpu node itself, so it doesn't / can't add any more
> information. We could just have an array of cpu nodes on the <core>
> node; it's much cleaner this way:
>
> 	core0 {
> 		members = <&CPU0>, <&CPU1>;
> 	};

Hold on. Rather than reinventing things from first principles, can we
please discuss what you want to *achieve*, i.e. what information you
need?

Having a <thread> node is not a significant cost, and there are reasons
we may want thread nodes. For example, it means that we can always
refer to any level of topology with a phandle, and we might want to
describe thread-affine devices in future.

There are a tonne of existing bindings that are ugly, but re-inventing
them for taste reasons alone is more costly to the ecosystem than
simply using the existing bindings. We avoid re-inventing bindings
unless there is a functional problem, e.g. cases which they cannot
possibly describe.

> Then let's allow the cluster and core nodes to accept attributes that
> are common for the cpus they contain. Right now this is considered
> invalid.
>
> For power domains we have a generic binding described in
> Documentation/devicetree/bindings/power/power_domain.txt
> which basically says that we need to put a
> power-domains = <power domain specifiers> attribute on each of the
> cpu nodes.

FWIW, given this is arguably topological, I'm not personally averse to
describing this in the cpu-map, if that actually gains us more than the
complexity required to support it.

Given we don't do this for device power domains, I suspect that it's
simpler to stick with the existing binding.

> The same happens with the capacity binding specified for arm in
> Documentation/devicetree/bindings/arm/cpu-capacity.txt
> which says we should add capacity-dmips-mhz on each of the cpu nodes.

The cpu-map was intended to expose topological details, and this isn't
really a topological property. For example, Arm DynamIQ systems can
have heterogeneous CPUs within clusters.

I do not think it's worth moving this, tbh.

> The same also happens with the generic NUMA binding in
> Documentation/devicetree/bindings/numa.txt
> which says we should add numa-node-id on each of the cpu nodes.

Is there a strong gain from moving this?

[...]

> Finally, from the examples above, I'd like to stress that the
> distinction between a cluster and a core doesn't make much sense, and
> it also makes the representation more complicated. To begin with, what
> would you call the setup on the HiFive Unleashed? A cluster of 4 cores
> that share the same L3 cache?

Not knowing much about the hardware, I can't really say.

I'm not sure I follow why the distinction between a cluster and a core
is nonsensical. A cluster is always a collection of cores.

A hart could be a core in its own right, or it could be a thread under
a core, which shares functional units with other harts within that
core.

Arguably, we could have mandated that the topology always needed to
describe down to a thread, even if a core only had a single thread.
That ship has sailed, however.

Thanks,
Mark.
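
P.S. For concreteness, a minimal sketch of what the existing per-cpu
bindings above look like when they all land on one cpu node. The
power-domain phandle and the values are made up, and capacity-dmips-mhz
is only documented for arm today:

	CPU0: cpu@0 {
		device_type = "cpu";
		compatible = "riscv";
		reg = <0>;
		riscv,isa = "rv64imafdc";

		/* power_domain.txt: consumer side of a PM domain */
		power-domains = <&cpu_pd0>;

		/* arm/cpu-capacity.txt: relative capacity */
		capacity-dmips-mhz = <1024>;

		/* numa.txt: the node this cpu belongs to */
		numa-node-id = <0>;
	};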
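
And taking the "cluster of 4 cores" description of the HiFive Unleashed
at face value (a single cluster, ignoring any management core; the
labels are placeholders), the existing cpu-map binding would describe
it as:

	cpu-map {
		cluster0 {
			core0 {
				cpu = <&CPU1>;
			};
			core1 {
				cpu = <&CPU2>;
			};
			core2 {
				cpu = <&CPU3>;
			};
			core3 {
				cpu = <&CPU4>;
			};
		};
	};

Whether the shared cache should be described at the cluster level is
exactly the kind of information we'd have to add to the binding, as
above.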