linux-mm.kvack.org archive mirror
From: Robin Murphy <robin.murphy@arm.com>
To: Catalin Marinas <catalin.marinas@arm.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Christoph Hellwig <hch@lst.de>
Cc: Arnd Bergmann <arnd@arndb.de>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Will Deacon <will@kernel.org>, Marc Zyngier <maz@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Herbert Xu <herbert@gondor.apana.org.au>,
	Ard Biesheuvel <ardb@kernel.org>,
	Isaac Manjarres <isaacmanjarres@google.com>,
	Saravana Kannan <saravanak@google.com>,
	Alasdair Kergon <agk@redhat.com>, Daniel Vetter <daniel@ffwll.ch>,
	Joerg Roedel <joro@8bytes.org>, Mark Brown <broonie@kernel.org>,
	Mike Snitzer <snitzer@kernel.org>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	linux-mm@kvack.org, iommu@lists.linux.dev,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v5 12/15] dma-mapping: Force bouncing if the kmalloc() size is not cache-line-aligned
Date: Thu, 25 May 2023 16:53:56 +0100
Message-ID: <ce1abf65-35cd-fd7a-5688-6a4168a821cc@arm.com>
In-Reply-To: <20230524171904.3967031-13-catalin.marinas@arm.com>

On 24/05/2023 6:19 pm, Catalin Marinas wrote:
> For direct DMA, if the size is small enough to have originated from a
> kmalloc() cache below ARCH_DMA_MINALIGN, check its alignment against
> dma_get_cache_alignment() and bounce if necessary. For larger sizes, it
> is the responsibility of the DMA API caller to ensure proper alignment.
> 
> At this point, the kmalloc() caches are properly aligned but this will
> change in a subsequent patch.
> 
> Architectures can opt in by selecting ARCH_WANT_KMALLOC_DMA_BOUNCE.

Thanks for the additional comment; that's a great summary for future
reference.

Reviewed-by: Robin Murphy <robin.murphy@arm.com>

> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Cc: Robin Murphy <robin.murphy@arm.com>
> ---
>   include/linux/dma-map-ops.h | 61 +++++++++++++++++++++++++++++++++++++
>   kernel/dma/Kconfig          |  4 +++
>   kernel/dma/direct.h         |  3 +-
>   3 files changed, 67 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
> index 31f114f486c4..9bf19b5bf755 100644
> --- a/include/linux/dma-map-ops.h
> +++ b/include/linux/dma-map-ops.h
> @@ -8,6 +8,7 @@
>   
>   #include <linux/dma-mapping.h>
>   #include <linux/pgtable.h>
> +#include <linux/slab.h>
>   
>   struct cma;
>   
> @@ -277,6 +278,66 @@ static inline bool dev_is_dma_coherent(struct device *dev)
>   }
>   #endif /* CONFIG_ARCH_HAS_DMA_COHERENCE_H */
>   
> +/*
> + * Check whether potential kmalloc() buffers are safe for non-coherent DMA.
> + */
> +static inline bool dma_kmalloc_safe(struct device *dev,
> +				    enum dma_data_direction dir)
> +{
> +	/*
> +	 * If DMA bouncing of kmalloc() buffers is disabled, the kmalloc()
> +	 * caches have already been aligned to a DMA-safe size.
> +	 */
> +	if (!IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC))
> +		return true;
> +
> +	/*
> +	 * kmalloc() buffers are DMA-safe irrespective of size if the device
> +	 * is coherent or the direction is DMA_TO_DEVICE (non-destructive
> +	 * cache maintenance and benign cache line evictions).
> +	 */
> +	if (dev_is_dma_coherent(dev) || dir == DMA_TO_DEVICE)
> +		return true;
> +
> +	return false;
> +}
> +
> +/*
> + * Check whether the given size, assuming it is for a kmalloc()'ed buffer, is
> + * sufficiently aligned for non-coherent DMA.
> + */
> +static inline bool dma_kmalloc_size_aligned(size_t size)
> +{
> +	/*
> +	 * Larger kmalloc() sizes are guaranteed to be aligned to
> +	 * ARCH_DMA_MINALIGN.
> +	 */
> +	if (size >= 2 * ARCH_DMA_MINALIGN ||
> +	    IS_ALIGNED(kmalloc_size_roundup(size), dma_get_cache_alignment()))
> +		return true;
> +
> +	return false;
> +}
> +
> +/*
> + * Check whether the given object size may have originated from a kmalloc()
> + * buffer with a slab alignment below the DMA-safe alignment and needs
> + * bouncing for non-coherent DMA. The pointer alignment is not considered and
> + * in-structure DMA-safe offsets are the responsibility of the caller. Such
> + * code should use the static ARCH_DMA_MINALIGN for compiler annotations.
> + *
> + * The heuristics can have false positives, bouncing unnecessarily, though the
> + * buffers would be small. False negatives are theoretically possible if, for
> + * example, multiple small kmalloc() buffers are coalesced into a larger
> + * buffer that passes the alignment check. There are no such known constructs
> + * in the kernel.
> + */
> +static inline bool dma_kmalloc_needs_bounce(struct device *dev, size_t size,
> +					    enum dma_data_direction dir)
> +{
> +	return !dma_kmalloc_safe(dev, dir) && !dma_kmalloc_size_aligned(size);
> +}
> +
>   void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
>   		gfp_t gfp, unsigned long attrs);
>   void arch_dma_free(struct device *dev, size_t size, void *cpu_addr,
> diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
> index acc6f231259c..abea1823fe21 100644
> --- a/kernel/dma/Kconfig
> +++ b/kernel/dma/Kconfig
> @@ -90,6 +90,10 @@ config SWIOTLB
>   	bool
>   	select NEED_DMA_MAP_STATE
>   
> +config DMA_BOUNCE_UNALIGNED_KMALLOC
> +	bool
> +	depends on SWIOTLB
> +
>   config DMA_RESTRICTED_POOL
>   	bool "DMA Restricted Pool"
>   	depends on OF && OF_RESERVED_MEM && SWIOTLB
> diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
> index e38ffc5e6bdd..97ec892ea0b5 100644
> --- a/kernel/dma/direct.h
> +++ b/kernel/dma/direct.h
> @@ -94,7 +94,8 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
>   		return swiotlb_map(dev, phys, size, dir, attrs);
>   	}
>   
> -	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {
> +	if (unlikely(!dma_capable(dev, dma_addr, size, true)) ||
> +	    dma_kmalloc_needs_bounce(dev, size, dir)) {
>   		if (is_pci_p2pdma_page(page))
>   			return DMA_MAPPING_ERROR;
>   		if (is_swiotlb_active(dev))



Thread overview: 31+ messages
2023-05-24 17:18 [PATCH v5 00/15] mm, dma, arm64: Reduce ARCH_KMALLOC_MINALIGN to 8 Catalin Marinas
2023-05-24 17:18 ` [PATCH v5 01/15] mm/slab: Decouple ARCH_KMALLOC_MINALIGN from ARCH_DMA_MINALIGN Catalin Marinas
2023-05-24 17:18 ` [PATCH v5 02/15] dma: Allow dma_get_cache_alignment() to be overridden by the arch code Catalin Marinas
2023-05-25 13:59   ` Christoph Hellwig
2023-05-24 17:18 ` [PATCH v5 03/15] mm/slab: Simplify create_kmalloc_cache() args and make it static Catalin Marinas
2023-05-24 17:18 ` [PATCH v5 04/15] mm/slab: Limit kmalloc() minimum alignment to dma_get_cache_alignment() Catalin Marinas
2023-05-24 17:18 ` [PATCH v5 05/15] drivers/base: Use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN Catalin Marinas
2023-05-24 17:18 ` [PATCH v5 06/15] drivers/gpu: " Catalin Marinas
2023-05-24 17:18 ` [PATCH v5 07/15] drivers/usb: " Catalin Marinas
2023-05-24 17:18 ` [PATCH v5 08/15] drivers/spi: " Catalin Marinas
2023-05-24 17:18 ` [PATCH v5 09/15] drivers/md: " Catalin Marinas
2023-05-25 14:00   ` Christoph Hellwig
2023-05-24 17:18 ` [PATCH v5 10/15] arm64: Allow kmalloc() caches aligned to the smaller cache_line_size() Catalin Marinas
2023-05-24 17:19 ` [PATCH v5 11/15] scatterlist: Add dedicated config for DMA flags Catalin Marinas
2023-05-24 17:19 ` [PATCH v5 12/15] dma-mapping: Force bouncing if the kmalloc() size is not cache-line-aligned Catalin Marinas
2023-05-25 15:53   ` Robin Murphy [this message]
2023-05-24 17:19 ` [PATCH v5 13/15] iommu/dma: Force bouncing if the size is not cacheline-aligned Catalin Marinas
2023-05-25 15:57   ` Robin Murphy
2023-05-26 16:36   ` Jisheng Zhang
2023-05-26 19:22     ` Catalin Marinas
2023-05-30 13:01       ` Robin Murphy
2023-05-24 17:19 ` [PATCH v5 14/15] mm: slab: Reduce the kmalloc() minimum alignment if DMA bouncing possible Catalin Marinas
2023-05-24 17:19 ` [PATCH v5 15/15] arm64: Enable ARCH_WANT_KMALLOC_DMA_BOUNCE for arm64 Catalin Marinas
2023-05-25 16:12   ` Robin Murphy
2023-05-25 17:08     ` Catalin Marinas
2023-05-25 12:31 ` [PATCH v5 00/15] mm, dma, arm64: Reduce ARCH_KMALLOC_MINALIGN to 8 Jonathan Cameron
2023-05-25 14:31   ` Catalin Marinas
2023-05-26 16:07     ` Jonathan Cameron
2023-05-26 16:29       ` Jonathan Cameron
2023-05-30 13:38         ` Catalin Marinas
2023-05-30 16:31           ` Jonathan Cameron
