Subject: [PATCH] dma-contiguous: do not allocate a single page from CMA area
From: Nicolin Chen @ 2019-02-15 20:06 UTC (permalink / raw)
  To: robin.murphy, m.szyprowski, hch; +Cc: linux-kernel, iommu

The addresses within a single page are always contiguous, so there
is no need to serve single-page allocations from the CMA area.
Since the CMA area has a limited, predefined size, it may run out
of space under heavy use when many of its pages are consumed by
single-page allocations.

However, a device might care where a page comes from: it may
expect the page to come from its CMA area and behave differently
if it does not.

This patch therefore skips single-page allocations and returns
NULL, letting callers fall back to normal page allocation, unless
the device has its own CMA area. This saves CMA space for
allocations that actually need contiguous memory, and also reduces
fragmentation of the CMA area caused by trivial allocations.

Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
---
 kernel/dma/contiguous.c | 22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
index b2a87905846d..09074bd04793 100644
--- a/kernel/dma/contiguous.c
+++ b/kernel/dma/contiguous.c
@@ -186,16 +186,32 @@ int __init dma_contiguous_reserve_area(phys_addr_t size, phys_addr_t base,
  *
  * This function allocates memory buffer for specified device. It uses
  * device specific contiguous memory area if available or the default
- * global one. Requires architecture specific dev_get_cma_area() helper
- * function.
+ * global one.
+ *
+ * However, it skips single-page allocations from the global area: the
+ * addresses within one page are always contiguous, so there is no need
+ * to consume CMA pages for them, and skipping them also reduces
+ * fragmentation of the CMA area. A caller must therefore be prepared
+ * to fall back to a normal page allocation upon a NULL return value.
+ *
+ * Requires architecture specific dev_get_cma_area() helper function.
  */
 struct page *dma_alloc_from_contiguous(struct device *dev, size_t count,
 				       unsigned int align, bool no_warn)
 {
+	struct cma *cma;
+
 	if (align > CONFIG_CMA_ALIGNMENT)
 		align = CONFIG_CMA_ALIGNMENT;
 
-	return cma_alloc(dev_get_cma_area(dev), count, align, no_warn);
+	if (dev && dev->cma_area)
+		cma = dev->cma_area;
+	else if (count > 1)
+		cma = dma_contiguous_default_area;
+	else
+		return NULL;
+
+	return cma_alloc(cma, count, align, no_warn);
 }
 
 /**
-- 
2.17.1
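
As the commit message describes, a caller that receives NULL from
dma_alloc_from_contiguous() is expected to fall back to the normal
page allocator. A minimal sketch of that caller-side pattern follows;
alloc_dma_pages() is a hypothetical call site for illustration, not
code from this patch, though the shape mirrors the fallback the
dma-direct allocation path already uses:

#include <linux/dma-contiguous.h>
#include <linux/gfp.h>

/*
 * Hypothetical caller-side fallback illustrating the contract this
 * patch establishes: dma_alloc_from_contiguous() now returns NULL for
 * single-page requests against the global CMA area, so the caller
 * must fall back to the normal page allocator itself.
 */
static struct page *alloc_dma_pages(struct device *dev, size_t size, gfp_t gfp)
{
	size_t count = PAGE_ALIGN(size) >> PAGE_SHIFT;
	struct page *page;

	page = dma_alloc_from_contiguous(dev, count, get_order(size),
					 gfp & __GFP_NOWARN);
	if (!page)
		/* No CMA page (e.g. a single-page request): use the buddy allocator */
		page = alloc_pages(gfp, get_order(size));

	return page;
}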



Subject: Re: [PATCH] dma-contiguous: do not allocate a single page from CMA area
From: Marek Szyprowski @ 2019-02-18 13:51 UTC (permalink / raw)
  To: Nicolin Chen, robin.murphy, hch; +Cc: linux-kernel, iommu

Hi Nicolin,

On 2019-02-15 21:06, Nicolin Chen wrote:
> The addresses within a single page are always contiguous, so there
> is no need to serve single-page allocations from the CMA area.
> Since the CMA area has a limited, predefined size, it may run out
> of space under heavy use when many of its pages are consumed by
> single-page allocations.
>
> However, a device might care where a page comes from: it may
> expect the page to come from its CMA area and behave differently
> if it does not.
>
> This patch therefore skips single-page allocations and returns
> NULL, letting callers fall back to normal page allocation, unless
> the device has its own CMA area. This saves CMA space for
> allocations that actually need contiguous memory, and also reduces
> fragmentation of the CMA area caused by trivial allocations.
>
> Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>

Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>

> ---
>  kernel/dma/contiguous.c | 22 +++++++++++++++++++---
>  1 file changed, 19 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
> index b2a87905846d..09074bd04793 100644
> --- a/kernel/dma/contiguous.c
> +++ b/kernel/dma/contiguous.c
> @@ -186,16 +186,32 @@ int __init dma_contiguous_reserve_area(phys_addr_t size, phys_addr_t base,
>   *
>   * This function allocates memory buffer for specified device. It uses
>   * device specific contiguous memory area if available or the default
> - * global one. Requires architecture specific dev_get_cma_area() helper
> - * function.
> + * global one.
> + *
> + * However, it skips single-page allocations from the global area: the
> + * addresses within one page are always contiguous, so there is no need
> + * to consume CMA pages for them, and skipping them also reduces
> + * fragmentation of the CMA area. A caller must therefore be prepared
> + * to fall back to a normal page allocation upon a NULL return value.
> + *
> + * Requires architecture specific dev_get_cma_area() helper function.
>   */
>  struct page *dma_alloc_from_contiguous(struct device *dev, size_t count,
>  				       unsigned int align, bool no_warn)
>  {
> +	struct cma *cma;
> +
>  	if (align > CONFIG_CMA_ALIGNMENT)
>  		align = CONFIG_CMA_ALIGNMENT;
>  
> -	return cma_alloc(dev_get_cma_area(dev), count, align, no_warn);
> +	if (dev && dev->cma_area)
> +		cma = dev->cma_area;
> +	else if (count > 1)
> +		cma = dma_contiguous_default_area;
> +	else
> +		return NULL;
> +
> +	return cma_alloc(cma, count, align, no_warn);
>  }
>  
>  /**

Best regards
-- 
Marek Szyprowski, PhD
Samsung R&D Institute Poland

