Date: Mon, 4 Jul 2022 16:20:08 +0100
From: Sudeep Holla <sudeep.holla@arm.com>
To: Conor.Dooley@microchip.com
Cc: linux-kernel@vger.kernel.org, gregkh@linuxfoundation.org,
	Valentina.FernandezAlanis@microchip.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, wangqing@vivo.com, robh+dt@kernel.org,
	rafael@kernel.org, ionela.voinescu@arm.com, pierre.gondois@arm.com,
	linux-arm-kernel@lists.infradead.org, linux-riscv@lists.infradead.org
Subject: Re: [PATCH v6 00/21] arch_topology: Updates to add socket support and fix cluster ids
Message-ID: <20220704152008.pc4s2olkdqfnx34h@bogus>
References: <20220704101605.1318280-1-sudeep.holla@arm.com>
 <6a647b6b-c913-b9d7-a23e-b17a8034c5c8@microchip.com>
In-Reply-To: <6a647b6b-c913-b9d7-a23e-b17a8034c5c8@microchip.com>

On Mon, Jul 04, 2022 at 03:10:30PM +0000, Conor.Dooley@microchip.com wrote:
> On 04/07/2022 11:15, Sudeep Holla wrote:
> > Hi Greg,
> >
> > Let me know if you prefer to pick up the patches directly or would
> > rather have a pull request. The series has been in -next for a while
> > now.
> >
> > Hi All,
> >
> > This version updates cacheinfo to populate the cache information and
> > uses it for all the cache topology.
> >
> > This series intends to fix some discrepancies in the CPU topology
> > parsing from the device tree /cpu-map node, where the current
> > behaviour also diverges from that on an ACPI enabled platform. The
> > expectation is that both DT and ACPI enabled systems must present a
> > consistent view of the CPU topology.
> >
> > Currently we assign the generated cluster count as the physical
> > package identifier for each CPU, which is wrong. The device tree
> > bindings for CPU topology support socket nodes, from which the
> > socket or physical package identifier for a given CPU can be
> > inferred. Also, we don't check whether all the cores/threads belong
> > to the same cluster before updating their sibling masks; that is
> > harmless only because we don't set the cluster id yet.
> >
> > These changes also assign the cluster identifier as parsed from the
> > device tree cluster nodes within /cpu-map, without support for
> > nesting of clusters. Finally, the series adds support for socket
> > nodes in /cpu-map.
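For anyone unfamiliar with the binding, a minimal sketch of a two-socket
/cpu-map is below. This is only an illustration, not taken from any real
platform; the cpu0..cpu3 labels stand in for the phandles of the actual
cpu nodes:

	cpus {
		#address-cells = <1>;
		#size-cells = <0>;

		cpu-map {
			/* socket nodes provide the physical package id */
			socket0 {
				/* cluster nodes provide the cluster id */
				cluster0 {
					core0 { cpu = <&cpu0>; };
					core1 { cpu = <&cpu1>; };
				};
			};
			socket1 {
				cluster0 {
					core0 { cpu = <&cpu2>; };
					core1 { cpu = <&cpu3>; };
				};
			};
		};
	};

With the series applied, the CPUs under socket0 and socket1 get distinct
package ids taken from the socket nodes rather than from the generated
cluster count.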
> > With this, the parsing of the exact same information from the ACPI
> > PPTT and the /cpu-map DT node aligns well.
> >
> > The only exception is that the last level cache id information can
> > be inferred from the same ACPI PPTT, while on DT we need to parse
> > the CPU cache nodes for it.
>
> For DT + RISC-V on PolarFire SoC and SiFive FU540
> Tested-by: Conor Dooley
>
> Anecdotally, v5 was tested on the !SMP D1, which worked fine when
> CONFIG_SMP was enabled.
>

Thanks a lot for testing on RISC-V, much appreciated! And thanks for
your patience and help with v5, which let us finally figure out the
silly issue.

--
Regards,
Sudeep