From: Catalin Marinas
To: Will Deacon
Cc: Jeffrey Hugo, Arnd Bergmann, Yassine Oudjana, Marc Zyngier,
	Robin Murphy, Ard Biesheuvel, Android Kernel Team, Linux ARM,
	Mark Rutland, Vincent Whitchurch, linux-arm-msm, Bjorn Andersson
Subject: Re: [PATCH] arm64: cache: Lower ARCH_DMA_MINALIGN to 64 (L1_CACHE_BYTES)
Date: Fri, 9 Jul 2021 18:10:53 +0100
Message-ID: <20210709171051.GB29765@arm.com>
References: <87zguz7b6b.wl-maz@kernel.org> <20210709084842.GA24432@willie-the-truck>
In-Reply-To: <20210709084842.GA24432@willie-the-truck>

On Fri, Jul 09, 2021 at 09:48:42AM +0100, Will Deacon wrote:
> On Thu, Jul 08, 2021 at 02:59:28PM -0600, Jeffrey Hugo wrote:
> > On Wed, Jul 7, 2021 at 8:41 AM Jeffrey Hugo wrote:
> > > L0 I	64 byte cacheline
> > > L1 I	64
> > > L1 D	64
> > > L2 unified	128 (shared between the CPUs of a duplex)
> > >
> > > I believe L2 is within the POC, but I'm trying to dig up the old
> > > documentation to confirm.
> >
> > Was able to track down a friendly hardware designer. The POC lies
> > between L2 and L3. Hope this helps.
>
> Damn, yes, it's bad news but thanks for chasing it up. I'll revert the
> patch at -rc1 and add a comment about MSM8996.

It's a shame, but we can't do much for this platform.

Longer term, we should look at making the kmalloc() cache selection more
dynamic: probably still start with a 128-byte minimum size but, once all
the devices have been initialised during boot, relax the kmalloc()
alignment if no non-coherent device was found. We'd still have the
problem of DT platform devices being assumed non-coherent, and of any
late (post-boot) call to arch_setup_dma_ops().

Some bodge below to get an idea -- not a final patch (not even the
beginning of one). It initialises the kmalloc caches down to size 8 but
clamps the effective allocation size to kmalloc_dyn_min_size, initially
set to 128 on arm64. In a device_initcall_sync(), if no non-coherent
device was found, we lower this to KMALLOC_MIN_SIZE (8 with slub).
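As an aside, here is a hypothetical sketch of the hazard the 128-byte
minimum guards against (illustrative code, not from any real driver;
"dev" is assumed to be a non-coherent device and all names are made up).
With a kmalloc() alignment smaller than the 128-byte L2 line on MSM8996,
two unrelated objects can end up sharing a cache line:

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/slab.h>

static int example_rx(struct device *dev)
{
	u8 *hdr = kmalloc(32, GFP_KERNEL);	/* CPU-only metadata */
	u8 *buf = kmalloc(32, GFP_KERNEL);	/* DMA-from-device buffer */
	dma_addr_t addr;
	int ret = -ENOMEM;

	if (!hdr || !buf)
		goto out;

	/* Invalidates the cache lines covering "buf"... */
	addr = dma_map_single(dev, buf, 32, DMA_FROM_DEVICE);
	ret = dma_mapping_error(dev, addr);
	if (ret)
		goto out;

	/*
	 * ...but this store refetches and dirties the line that "hdr"
	 * shares with "buf". If the dirty line is evicted while the
	 * device is still writing to "buf", the incoming DMA data is
	 * silently corrupted. Hence the kmalloc() minimum must cover
	 * the largest non-coherent cache line in the system.
	 */
	hdr[0] = 0xaa;

	dma_unmap_single(dev, addr, 32, DMA_FROM_DEVICE);
	ret = 0;
out:
	kfree(hdr);
	kfree(buf);
	return ret;
}

With the bodge below, a kmalloc(32) would be served from kmalloc-128
until the device_initcall_sync() runs, and from kmalloc-32 afterwards if
every device turned out to be coherent.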
----------------8<----------------------------
diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
index a074459f8f2f..bed65db3c42e 100644
--- a/arch/arm64/include/asm/cache.h
+++ b/arch/arm64/include/asm/cache.h
@@ -40,15 +40,6 @@
 #define CLIDR_LOC(clidr)	(((clidr) >> CLIDR_LOC_SHIFT) & 0x7)
 #define CLIDR_LOUIS(clidr)	(((clidr) >> CLIDR_LOUIS_SHIFT) & 0x7)
 
-/*
- * Memory returned by kmalloc() may be used for DMA, so we must make
- * sure that all such allocations are cache aligned. Otherwise,
- * unrelated code may cause parts of the buffer to be read into the
- * cache before the transfer is done, causing old data to be seen by
- * the CPU.
- */
-#define ARCH_DMA_MINALIGN	(128)
-
 #ifdef CONFIG_KASAN_SW_TAGS
 #define ARCH_SLAB_MINALIGN	(1ULL << KASAN_SHADOW_SCALE_SHIFT)
 #elif defined(CONFIG_KASAN_HW_TAGS)
@@ -59,6 +50,9 @@
 
 #include <linux/bitops.h>
 
+extern int kmalloc_dyn_min_size;
+#define __HAVE_ARCH_KMALLOC_DYN_MIN_SIZE
+
 #define ICACHEF_ALIASING	0
 #define ICACHEF_VPIPT	1
 extern unsigned long __icache_flags;
@@ -88,7 +82,7 @@ static inline int cache_line_size_of_cpu(void)
 {
 	u32 cwg = cache_type_cwg();
 
-	return cwg ? 4 << cwg : ARCH_DMA_MINALIGN;
+	return cwg ? 4 << cwg : __alignof__(unsigned long long);
 }
 
 int cache_line_size(void);
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index efed2830d141..a25813377187 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2808,8 +2808,8 @@ void __init setup_cpu_features(void)
 	 */
 	cwg = cache_type_cwg();
 	if (!cwg)
-		pr_warn("No Cache Writeback Granule information, assuming %d\n",
-			ARCH_DMA_MINALIGN);
+		pr_warn("No Cache Writeback Granule information, assuming %ld\n",
+			__alignof__(unsigned long long));
 }
 
 static void __maybe_unused cpu_enable_cnp(struct arm64_cpu_capabilities const *cap)
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 4bf1dd3eb041..9a30d1beb3ea 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -13,6 +13,18 @@
 
 #include <asm/cacheflush.h>
 
+/*
+ * Memory returned by kmalloc() may be used for DMA, so we must make
+ * sure that all such allocations are cache aligned. Otherwise,
+ * unrelated code may cause parts of the buffer to be read into the
+ * cache before the transfer is done, causing old data to be seen by
+ * the CPU.
+ */
+int kmalloc_dyn_min_size = 128;
+EXPORT_SYMBOL(kmalloc_dyn_min_size);
+
+static bool non_coherent_devices;
+
 void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
 			      enum dma_data_direction dir)
 {
@@ -42,11 +54,14 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
 {
 	int cls = cache_line_size_of_cpu();
 
-	WARN_TAINT(!coherent && cls > ARCH_DMA_MINALIGN,
+	WARN_TAINT(!coherent && cls > kmalloc_dyn_min_size,
 		   TAINT_CPU_OUT_OF_SPEC,
-		   "%s %s: ARCH_DMA_MINALIGN smaller than CTR_EL0.CWG (%d < %d)",
+		   "%s %s: kmalloc() minimum size smaller than CTR_EL0.CWG (%d < %d)",
 		   dev_driver_string(dev), dev_name(dev),
-		   ARCH_DMA_MINALIGN, cls);
+		   kmalloc_dyn_min_size, cls);
+
+	if (!coherent)
+		non_coherent_devices = true;
 
 	dev->dma_coherent = coherent;
 	if (iommu)
@@ -57,3 +72,12 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
 		dev->dma_ops = &xen_swiotlb_dma_ops;
 #endif
 }
+
+static int __init adjust_kmalloc_dyn_min_size(void)
+{
+	if (!non_coherent_devices)
+		kmalloc_dyn_min_size = KMALLOC_MIN_SIZE;
+
+	return 0;
+}
+device_initcall_sync(adjust_kmalloc_dyn_min_size);
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 0c97d788762c..e40c7899cb07 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -349,15 +349,21 @@ static __always_inline enum kmalloc_cache_type kmalloc_type(gfp_t flags)
  */
 static __always_inline unsigned int kmalloc_index(size_t size)
 {
+	int min_size = KMALLOC_MIN_SIZE;
+
 	if (!size)
 		return 0;
 
-	if (size <= KMALLOC_MIN_SIZE)
-		return KMALLOC_SHIFT_LOW;
+#ifdef __HAVE_ARCH_KMALLOC_DYN_MIN_SIZE
+	min_size = kmalloc_dyn_min_size;
+#endif
+
+	if (size <= min_size)
+		return ilog2(min_size);
 
-	if (KMALLOC_MIN_SIZE <= 32 && size > 64 && size <= 96)
+	if (min_size <= 32 && size > 64 && size <= 96)
 		return 1;
-	if (KMALLOC_MIN_SIZE <= 64 && size > 128 && size <= 192)
+	if (min_size <= 64 && size > 128 && size <= 192)
 		return 2;
 	if (size <= 8) return 3;
 	if (size <= 16) return 4;
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 7cab77655f11..2666237c84c4 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -725,6 +725,10 @@ struct kmem_cache *kmalloc_slab(size_t size, gfp_t flags)
 		if (!size)
 			return ZERO_SIZE_PTR;
 
+#ifdef __HAVE_ARCH_KMALLOC_DYN_MIN_SIZE
+		if (size < kmalloc_dyn_min_size)
+			size = kmalloc_dyn_min_size;
+#endif
 		index = size_index[size_index_elem(size)];
 	} else {
 		if (WARN_ON_ONCE(size > KMALLOC_MAX_CACHE_SIZE))

--
Catalin