From: Dragan Simic <dsimic@manjaro.org>
To: Andre Przywara <andre.przywara@arm.com>
Cc: linux-sunxi@lists.linux.dev, wens@csie.org,
	jernej.skrabec@gmail.com, samuel@sholland.org,
	linux-arm-kernel@lists.infradead.org, devicetree@vger.kernel.org,
	robh@kernel.org, krzk+dt@kernel.org, conor+dt@kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] arm64: dts: allwinner: Add cache information to the SoC dtsi for H6
Date: Tue, 30 Apr 2024 02:01:42 +0200
Message-ID: <6fdeb49d57ccccca62e4f43dbe9475e3@manjaro.org>
In-Reply-To: <20240430001002.4797e4e3@minigeek.lan>

Hello Andre,

On 2024-04-30 01:10, Andre Przywara wrote:
> On Sun, 28 Apr 2024 13:40:36 +0200
> Dragan Simic <dsimic@manjaro.org> wrote:
> 
>> Add missing cache information to the Allwinner H6 SoC dtsi, to allow
>> the userspace, which includes lscpu(1) that uses the virtual files
>> provided by the kernel under the /sys/devices/system/cpu directory,
>> to display the proper H6 cache information.
>> 
>> Adding the cache information to the H6 SoC dtsi also makes the
>> following warning message in the kernel log go away:
>> 
>>   cacheinfo: Unable to detect cache hierarchy for CPU 0
>> 
>> The cache parameters for the H6 dtsi were obtained and partially
>> derived by hand from the cache size and layout specifications found
>> in the following datasheets and technical reference manuals:
>> 
>>   - Allwinner H6 V200 datasheet, version 1.1
>>   - ARM Cortex-A53 revision r0p3 TRM, version E
>> 
>> For future reference, here's a brief summary of the documentation:
>> 
>>   - All caches employ the 64-byte cache line length
>>   - Each Cortex-A53 core has 32 KB of L1 2-way, set-associative
>>     instruction cache and 32 KB of L1 4-way, set-associative data cache
>>   - The entire SoC has 512 KB of unified L2 16-way, set-associative cache
>> 
>> Signed-off-by: Dragan Simic <dsimic@manjaro.org>
> 
> I can confirm that the data below matches the manuals, but also the
> decoding of the architectural cache type registers (CCSIDR_EL1):
>   L1D: 32 KB: 128 sets, 4 way associative, 64 bytes/line
>   L1I: 32 KB: 256 sets, 2 way associative, 64 bytes/line
>   L2: 512 KB: 512 sets, 16 way associative, 64 bytes/line

Thank you very much for reviewing my patch in such a detailed way!
It's good to know that the values in the Allwinner datasheets match
the observed reality, so to speak. :)
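
For anyone who wants to repeat that cross-check, here's a minimal
sketch of the decoding for the (pre-FEAT_CCIDX) 32-bit CCSIDR_EL1
layout that the Cortex-A53 uses.  Please note that the sample values
below are reconstructed from the documented cache geometry, rather
than dumped from the hardware, which would require selecting each
cache via CSSELR_EL1 and reading CCSIDR_EL1 at EL1:

  /*
   * Decoding of the (pre-FEAT_CCIDX) 32-bit CCSIDR_EL1 layout:
   *   bits [2:0]   LineSize:      log2(line size in 4-byte words) - 2
   *   bits [12:3]  Associativity: number of ways - 1
   *   bits [27:13] NumSets:       number of sets - 1
   */
  #include <stdint.h>
  #include <stdio.h>

  static void decode_ccsidr(const char *name, uint32_t ccsidr)
  {
          unsigned int line = 1u << ((ccsidr & 0x7) + 4);  /* in bytes */
          unsigned int ways = ((ccsidr >> 3) & 0x3ff) + 1;
          unsigned int sets = ((ccsidr >> 13) & 0x7fff) + 1;

          printf("%s: %u KB: %u sets, %u way associative, %u bytes/line\n",
                 name, sets * ways * line / 1024, sets, ways, line);
  }

  int main(void)
  {
          /* Sample encodings reconstructed from the documented geometry,
           * not read from the hardware registers */
          decode_ccsidr("L1D", 0x000fe01a);
          decode_ccsidr("L1I", 0x001fe00a);
          decode_ccsidr("L2", 0x003fe07a);
          return 0;
  }

This prints the same three lines quoted above.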

> tinymembench results for the H6 are available here:
> https://github.com/ThomasKaiser/sbc-bench/blob/master/results/26Ph.txt
> and confirm the theory. Also ran it locally with similar results.

Here's a copy of the most important benchmark results from the link
above, as a quick reference for anyone reading this thread in the
future, or as a data source in case the link becomes inaccessible at
some point:

==========================================================================
== Memory latency test                                                  ==
==                                                                      ==
== Average time is measured for random memory accesses in the buffers   ==
== of different sizes. The larger is the buffer, the more significant   ==
== are relative contributions of TLB, L1/L2 cache misses and SDRAM      ==
== accesses. For extremely large buffer sizes we are expecting to see   ==
== page table walk with several requests to SDRAM for almost every      ==
== memory access (though 64MiB is not nearly large enough to experience ==
== this effect to its fullest).                                         ==
==                                                                      ==
== Note 1: All the numbers are representing extra time, which needs to  ==
==         be added to L1 cache latency. The cycle timings for L1 cache ==
==         latency can be usually found in the processor documentation. ==
== Note 2: Dual random read means that we are simultaneously performing ==
==         two independent memory accesses at a time. In the case if    ==
==         the memory subsystem can't handle multiple outstanding       ==
==         requests, dual random read has the same timings as two       ==
==         single reads performed one after another.                    ==
==========================================================================

block size : single random read / dual random read, [MADV_NOHUGEPAGE]
       1024 :    0.0 ns          /     0.0 ns
       2048 :    0.0 ns          /     0.0 ns
       4096 :    0.0 ns          /     0.0 ns
       8192 :    0.0 ns          /     0.0 ns
      16384 :    0.0 ns          /     0.0 ns
      32768 :    0.0 ns          /     0.0 ns
      65536 :    3.8 ns          /     6.5 ns
     131072 :    5.8 ns          /     9.1 ns
     262144 :    6.9 ns          /    10.2 ns
     524288 :    7.8 ns          /    11.2 ns
    1048576 :   74.3 ns          /   114.5 ns
    2097152 :  110.5 ns          /   148.1 ns
    4194304 :  132.6 ns          /   164.5 ns
    8388608 :  144.0 ns          /   172.3 ns
   16777216 :  151.5 ns          /   177.3 ns
   33554432 :  156.3 ns          /   180.7 ns
   67108864 :  158.7 ns          /   182.9 ns

block size : single random read / dual random read, [MADV_HUGEPAGE]
       1024 :    0.0 ns          /     0.0 ns
       2048 :    0.0 ns          /     0.0 ns
       4096 :    0.0 ns          /     0.0 ns
       8192 :    0.0 ns          /     0.0 ns
      16384 :    0.0 ns          /     0.0 ns
      32768 :    0.0 ns          /     0.0 ns
      65536 :    3.8 ns          /     6.5 ns
     131072 :    5.8 ns          /     9.1 ns
     262144 :    6.9 ns          /    10.2 ns
     524288 :    7.8 ns          /    11.2 ns
    1048576 :   74.3 ns          /   114.5 ns
    2097152 :  110.0 ns          /   147.5 ns
    4194304 :  127.6 ns          /   158.3 ns
    8388608 :  136.4 ns          /   162.2 ns
   16777216 :  141.2 ns          /   165.6 ns
   33554432 :  143.7 ns          /   168.4 ns
   67108864 :  144.9 ns          /   168.9 ns
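
Connecting these numbers back to the cache parameters: the extra
latency stays at roughly 7.8 ns up to the 524288-byte block size,
i.e. as long as the working set still fits into the 512 KB unified
L2 cache, and jumps by an order of magnitude to roughly 74 ns at
1048576 bytes, where SDRAM accesses start to dominate.  The 32 KB
L1 boundary is visible the same way, with the extra latency going
from 0 ns at 32768 bytes to 3.8 ns at 65536 bytes.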

> Reviewed-by: Andre Przywara <andre.przywara@arm.com>

Thanks!

>> ---
>>  arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi | 37 ++++++++++++++++++++
>>  1 file changed, 37 insertions(+)
>> 
>> diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi
>> index d11e5041bae9..1a63066396e8 100644
>> --- a/arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi
>> +++ b/arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi
>> @@ -29,36 +29,73 @@ cpu0: cpu@0 {
>>  			clocks = <&ccu CLK_CPUX>;
>>  			clock-latency-ns = <244144>; /* 8 32k periods */
>>  			#cooling-cells = <2>;
>> +			i-cache-size = <0x8000>;
>> +			i-cache-line-size = <64>;
>> +			i-cache-sets = <256>;
>> +			d-cache-size = <0x8000>;
>> +			d-cache-line-size = <64>;
>> +			d-cache-sets = <128>;
>> +			next-level-cache = <&l2_cache>;
>>  		};
>> 
>>  		cpu1: cpu@1 {
>>  			compatible = "arm,cortex-a53";
>>  			device_type = "cpu";
>>  			reg = <1>;
>>  			enable-method = "psci";
>>  			clocks = <&ccu CLK_CPUX>;
>>  			clock-latency-ns = <244144>; /* 8 32k periods */
>>  			#cooling-cells = <2>;
>> +			i-cache-size = <0x8000>;
>> +			i-cache-line-size = <64>;
>> +			i-cache-sets = <256>;
>> +			d-cache-size = <0x8000>;
>> +			d-cache-line-size = <64>;
>> +			d-cache-sets = <128>;
>> +			next-level-cache = <&l2_cache>;
>>  		};
>> 
>>  		cpu2: cpu@2 {
>>  			compatible = "arm,cortex-a53";
>>  			device_type = "cpu";
>>  			reg = <2>;
>>  			enable-method = "psci";
>>  			clocks = <&ccu CLK_CPUX>;
>>  			clock-latency-ns = <244144>; /* 8 32k periods */
>>  			#cooling-cells = <2>;
>> +			i-cache-size = <0x8000>;
>> +			i-cache-line-size = <64>;
>> +			i-cache-sets = <256>;
>> +			d-cache-size = <0x8000>;
>> +			d-cache-line-size = <64>;
>> +			d-cache-sets = <128>;
>> +			next-level-cache = <&l2_cache>;
>>  		};
>> 
>>  		cpu3: cpu@3 {
>>  			compatible = "arm,cortex-a53";
>>  			device_type = "cpu";
>>  			reg = <3>;
>>  			enable-method = "psci";
>>  			clocks = <&ccu CLK_CPUX>;
>>  			clock-latency-ns = <244144>; /* 8 32k periods */
>>  			#cooling-cells = <2>;
>> +			i-cache-size = <0x8000>;
>> +			i-cache-line-size = <64>;
>> +			i-cache-sets = <256>;
>> +			d-cache-size = <0x8000>;
>> +			d-cache-line-size = <64>;
>> +			d-cache-sets = <128>;
>> +			next-level-cache = <&l2_cache>;
>> +		};
>> +
>> +		l2_cache: l2-cache {
>> +			compatible = "cache";
>> +			cache-level = <2>;
>> +			cache-unified;
>> +			cache-size = <0x80000>;
>> +			cache-line-size = <64>;
>> +			cache-sets = <512>;
>>  		};
>>  	};
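
As a small footnote on how the *-cache-sets values above were derived
by hand: with the size, line length and associativity known, the number
of sets is simply size / (ways * line size).  Just as an illustration:

  /*
   * Sanity check: number of sets == size / (ways * line size),
   * using the values from the patch above
   */
  #include <assert.h>

  int main(void)
  {
          assert(0x8000  / (2  * 64) == 256);  /* L1I: i-cache-sets */
          assert(0x8000  / (4  * 64) == 128);  /* L1D: d-cache-sets */
          assert(0x80000 / (16 * 64) == 512);  /* L2:  cache-sets   */
          return 0;
  }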


Thread overview: 20+ messages
2024-04-28 11:40 [PATCH] arm64: dts: allwinner: Add cache information to the SoC dtsi for A64 Dragan Simic
2024-04-28 11:40 ` [PATCH] arm64: dts: allwinner: Add cache information to the SoC dtsi for H6 Dragan Simic
2024-04-28 16:21   ` Jernej Škrabec
2024-04-29 23:10   ` Andre Przywara
2024-04-30  0:01     ` Dragan Simic [this message]
2024-04-30 10:46       ` Andre Przywara
2024-04-30 11:10         ` Dragan Simic
2024-05-01  9:30           ` Andre Przywara
2024-05-03  9:13             ` Dragan Simic
2024-05-28 15:46   ` Chen-Yu Tsai
2024-05-28 15:56     ` Chen-Yu Tsai
2024-05-28 16:02       ` Dragan Simic
2024-05-28 16:06         ` Chen-Yu Tsai
2024-05-28 16:17     ` Chen-Yu Tsai
2024-05-28 16:10   ` Chen-Yu Tsai
2024-04-28 16:19 ` [PATCH] arm64: dts: allwinner: Add cache information to the SoC dtsi for A64 Jernej Škrabec
2024-04-29 10:33 ` Andre Przywara
2024-04-29 13:51   ` Dragan Simic
2024-05-28 16:10 ` Chen-Yu Tsai
2024-05-28 16:16   ` Chen-Yu Tsai
