linux-kernel.vger.kernel.org archive mirror
* [PATCH RFC] dma-direct: do not allocate a single page from CMA area
@ 2018-10-31 20:03 Nicolin Chen
  2018-11-01 14:07 ` Robin Murphy
  0 siblings, 1 reply; 10+ messages in thread
From: Nicolin Chen @ 2018-10-31 20:03 UTC (permalink / raw)
  To: hch, m.szyprowski, robin.murphy; +Cc: iommu, linux-kernel, vdumpa

The addresses within a single page are always contiguous, so it is
not really necessary to allocate a single page from the CMA area.
Since the CMA area has a limited, predefined size, it might run out
of space in heavy use cases where quite a lot of CMA pages end up
being allocated for single-page requests.

This patch skips CMA allocations for single pages and lets them go
through the normal page allocator instead, saving CMA space for
further CMA allocations.

Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
---
 kernel/dma/direct.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 22a12ab5a5e9..14c5d49eded2 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -120,8 +120,12 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 	gfp |= __dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
 			&phys_mask);
 again:
-	/* CMA can be used only in the context which permits sleeping */
-	if (gfpflags_allow_blocking(gfp)) {
+	/*
+	 * CMA can be used only in the context which permits sleeping.
+	 * Since addresses within one PAGE are always contiguous, skip
+	 * CMA allocation for a single page to save CMA reserved space
+	 */
+	if (gfpflags_allow_blocking(gfp) && count > 1) {
 		page = dma_alloc_from_contiguous(dev, count, page_order,
 						 gfp & __GFP_NOWARN);
 		if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH RFC] dma-direct: do not allocate a single page from CMA area
  2018-10-31 20:03 [PATCH RFC] dma-direct: do not allocate a single page from CMA area Nicolin Chen
@ 2018-11-01 14:07 ` Robin Murphy
  2018-11-01 18:04   ` Nicolin Chen
  2018-11-02  6:35   ` Christoph Hellwig
  0 siblings, 2 replies; 10+ messages in thread
From: Robin Murphy @ 2018-11-01 14:07 UTC (permalink / raw)
  To: Nicolin Chen, hch, m.szyprowski; +Cc: iommu, linux-kernel, vdumpa

On 31/10/2018 20:03, Nicolin Chen wrote:
> The addresses within a single page are always contiguous, so it's
> not so necessary to allocate one single page from CMA area. Since
> the CMA area has a limited predefined size of space, it might run
> out of space in some heavy use case, where there might be quite a
> lot CMA pages being allocated for single pages.
> 
> This patch tries to skip CMA allocations of single pages and lets
> them go through normal page allocations. This would save resource
> in the CMA area for further more CMA allocations.

In general, this seems to make sense to me. It does represent a 
theoretical change in behaviour for devices which have their own CMA 
area somewhere other than kernel memory, and only ever make non-atomic 
allocations, but I'm not sure whether that's a realistic or common 
enough case to really worry about.

Robin.

> Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
> ---
>   kernel/dma/direct.c | 8 ++++++--
>   1 file changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 22a12ab5a5e9..14c5d49eded2 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -120,8 +120,12 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
>   	gfp |= __dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
>   			&phys_mask);
>   again:
> -	/* CMA can be used only in the context which permits sleeping */
> -	if (gfpflags_allow_blocking(gfp)) {
> +	/*
> +	 * CMA can be used only in the context which permits sleeping.
> +	 * Since addresses within one PAGE are always contiguous, skip
> +	 * CMA allocation for a single page to save CMA reserved space
> +	 */
> +	if (gfpflags_allow_blocking(gfp) && count > 1) {
>   		page = dma_alloc_from_contiguous(dev, count, page_order,
>   						 gfp & __GFP_NOWARN);
>   		if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
> 

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH RFC] dma-direct: do not allocate a single page from CMA area
  2018-11-01 14:07 ` Robin Murphy
@ 2018-11-01 18:04   ` Nicolin Chen
  2018-11-01 19:32     ` Robin Murphy
  2018-11-02  6:35   ` Christoph Hellwig
  1 sibling, 1 reply; 10+ messages in thread
From: Nicolin Chen @ 2018-11-01 18:04 UTC (permalink / raw)
  To: Robin Murphy; +Cc: hch, m.szyprowski, iommu, linux-kernel, vdumpa

Hi Robin,

Thanks for the comments.

On Thu, Nov 01, 2018 at 02:07:55PM +0000, Robin Murphy wrote:
> On 31/10/2018 20:03, Nicolin Chen wrote:
> > The addresses within a single page are always contiguous, so it's
> > not so necessary to allocate one single page from CMA area. Since
> > the CMA area has a limited predefined size of space, it might run
> > out of space in some heavy use case, where there might be quite a
> > lot CMA pages being allocated for single pages.
> > 
> > This patch tries to skip CMA allocations of single pages and lets
> > them go through normal page allocations. This would save resource
> > in the CMA area for further more CMA allocations.
> 
> In general, this seems to make sense to me. It does represent a theoretical
> change in behaviour for devices which have their own CMA area somewhere
> other than kernel memory, and only ever make non-atomic allocations, but I'm
> not sure whether that's a realistic or common enough case to really worry
> about.

Hmm... I don't quite understand the concern about how realistic that
case is. Would you mind elaborating a bit? I tested this change on a
Tegra186 board and saw some single-page allocations being directed to
the normal allocator; the "CmaFree" size reported in /proc/meminfo
also increased. Does this mean it's realistic?

Thank you
Nicolin

-----

> 
> Robin.
> 
> > Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
> > ---
> >   kernel/dma/direct.c | 8 ++++++--
> >   1 file changed, 6 insertions(+), 2 deletions(-)
> > 
> > diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> > index 22a12ab5a5e9..14c5d49eded2 100644
> > --- a/kernel/dma/direct.c
> > +++ b/kernel/dma/direct.c
> > @@ -120,8 +120,12 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
> >   	gfp |= __dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
> >   			&phys_mask);
> >   again:
> > -	/* CMA can be used only in the context which permits sleeping */
> > -	if (gfpflags_allow_blocking(gfp)) {
> > +	/*
> > +	 * CMA can be used only in the context which permits sleeping.
> > +	 * Since addresses within one PAGE are always contiguous, skip
> > +	 * CMA allocation for a single page to save CMA reserved space
> > +	 */
> > +	if (gfpflags_allow_blocking(gfp) && count > 1) {
> >   		page = dma_alloc_from_contiguous(dev, count, page_order,
> >   						 gfp & __GFP_NOWARN);
> >   		if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
> > 

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH RFC] dma-direct: do not allocate a single page from CMA area
  2018-11-01 18:04   ` Nicolin Chen
@ 2018-11-01 19:32     ` Robin Murphy
  2018-11-01 20:22       ` Nicolin Chen
  0 siblings, 1 reply; 10+ messages in thread
From: Robin Murphy @ 2018-11-01 19:32 UTC (permalink / raw)
  To: Nicolin Chen; +Cc: hch, m.szyprowski, iommu, linux-kernel, vdumpa

On 01/11/2018 18:04, Nicolin Chen wrote:
> Hi Robin,
> 
> Thanks for the comments.
> 
> On Thu, Nov 01, 2018 at 02:07:55PM +0000, Robin Murphy wrote:
>> On 31/10/2018 20:03, Nicolin Chen wrote:
>>> The addresses within a single page are always contiguous, so it's
>>> not so necessary to allocate one single page from CMA area. Since
>>> the CMA area has a limited predefined size of space, it might run
>>> out of space in some heavy use case, where there might be quite a
>>> lot CMA pages being allocated for single pages.
>>>
>>> This patch tries to skip CMA allocations of single pages and lets
>>> them go through normal page allocations. This would save resource
>>> in the CMA area for further more CMA allocations.
>>
>> In general, this seems to make sense to me. It does represent a theoretical
>> change in behaviour for devices which have their own CMA area somewhere
>> other than kernel memory, and only ever make non-atomic allocations, but I'm
>> not sure whether that's a realistic or common enough case to really worry
>> about.
> 
> Hmm..I don't quite understand the part of worrying its realisticness.
> Would you mind elaborating a bit?

I only mean the case where a driver previously happened to get single 
pages allocated from a per-device CMA area, would now always get them 
fulfilled from regular kernel memory instead, and actually cares about 
the difference. As I say, that's a contrived case that I doubt is 
honestly a significant concern, but it's not *entirely* inconceivable. 
I've just been bitten before by drivers relying on specific DMA API 
implementation behaviour which was never guaranteed or even necessarily 
correct by the terms of the API itself, so I'm naturally wary of the 
corner cases ;)

On second thought, however, I suppose we could always key this off 
DMA_ATTR_FORCE_CONTIGUOUS as well if we really want - technically it has 
a more general meaning than "only ever allocate from CMA", but in 
practice if that's the behaviour a driver wants, then that flag is 
already the only way it can even hope to get dma_alloc_coherent() to 
comply anywhere near reliably.
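
For illustration only, a rough sketch of what that variant of the
check might look like (assuming the attrs value is available at this
point in dma_direct_alloc_pages(), which may need plumbing through):

	if (gfpflags_allow_blocking(gfp) &&
	    (count > 1 || (attrs & DMA_ATTR_FORCE_CONTIGUOUS))) {
		page = dma_alloc_from_contiguous(dev, count, page_order,
						 gfp & __GFP_NOWARN);
		/* ... existing dma_coherent_ok() retry path unchanged ... */
	}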

> As I tested this change on Tegra186
> board, and saw some single-page allocations have been directed to the
> normal allocation; and the "CmaFree" size reported from /proc/meminfo
> is also increased. Does this mean it's realistic?

Indeed - I happen to have CMA debug enabled for no good reason in my 
current development config, and on my relatively unexciting Juno board 
single-page allocations turn out to be the majority by number, even if 
not by total consumption:

[    0.519663] cma: cma_alloc(cma (____ptrval____), count 64, align 6)
[    0.527508] cma: cma_alloc(): returned (____ptrval____)
[    3.768066] cma: cma_alloc(cma (____ptrval____), count 1, align 0)
[    3.774566] cma: cma_alloc(): returned (____ptrval____)
[    3.860097] cma: cma_alloc(cma (____ptrval____), count 1875, align 8)
[    3.867150] cma: cma_alloc(): returned (____ptrval____)
[    3.920796] cma: cma_alloc(cma (____ptrval____), count 31, align 5)
[    3.927093] cma: cma_alloc(): returned (____ptrval____)
[    3.932326] cma: cma_alloc(cma (____ptrval____), count 31, align 5)
[    3.938643] cma: cma_alloc(): returned (____ptrval____)
[    4.022188] cma: cma_alloc(cma (____ptrval____), count 1, align 0)
[    4.028415] cma: cma_alloc(): returned (____ptrval____)
[    4.033600] cma: cma_alloc(cma (____ptrval____), count 1, align 0)
[    4.039786] cma: cma_alloc(): returned (____ptrval____)
[    4.044968] cma: cma_alloc(cma (____ptrval____), count 1, align 0)
[    4.051150] cma: cma_alloc(): returned (____ptrval____)
[    4.113556] cma: cma_alloc(cma (____ptrval____), count 1, align 0)
[    4.119785] cma: cma_alloc(): returned (____ptrval____)
[    5.012654] cma: cma_alloc(cma (____ptrval____), count 1, align 0)
[    5.019047] cma: cma_alloc(): returned (____ptrval____)
[   11.485179] cma: cma_alloc(cma 000000009dd074ee, count 1, align 0)
[   11.492096] cma: cma_alloc(): returned 000000009264a86c
[   12.269355] cma: cma_alloc(cma 000000009dd074ee, count 1875, align 8)
[   12.277535] cma: cma_alloc(): returned 00000000d7bb9ae5
[   12.286110] cma: cma_alloc(cma 000000009dd074ee, count 4, align 2)
[   12.292507] cma: cma_alloc(): returned 0000000007ba7a39

I don't have any exciting peripherals to really exercise the coherent 
allocator, but I imagine that fragmentation is probably just as good a 
reason as total CMA usage for avoiding trivial allocations by default.

Robin.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH RFC] dma-direct: do not allocate a single page from CMA area
  2018-11-01 19:32     ` Robin Murphy
@ 2018-11-01 20:22       ` Nicolin Chen
  0 siblings, 0 replies; 10+ messages in thread
From: Nicolin Chen @ 2018-11-01 20:22 UTC (permalink / raw)
  To: Robin Murphy; +Cc: hch, m.szyprowski, iommu, linux-kernel, vdumpa

On Thu, Nov 01, 2018 at 07:32:39PM +0000, Robin Murphy wrote:

> > On Thu, Nov 01, 2018 at 02:07:55PM +0000, Robin Murphy wrote:
> > > On 31/10/2018 20:03, Nicolin Chen wrote:
> > > > The addresses within a single page are always contiguous, so it's
> > > > not so necessary to allocate one single page from CMA area. Since
> > > > the CMA area has a limited predefined size of space, it might run
> > > > out of space in some heavy use case, where there might be quite a
> > > > lot CMA pages being allocated for single pages.
> > > > 
> > > > This patch tries to skip CMA allocations of single pages and lets
> > > > them go through normal page allocations. This would save resource
> > > > in the CMA area for further more CMA allocations.
> > > 
> > > In general, this seems to make sense to me. It does represent a theoretical
> > > change in behaviour for devices which have their own CMA area somewhere
> > > other than kernel memory, and only ever make non-atomic allocations, but I'm
> > > not sure whether that's a realistic or common enough case to really worry
> > > about.
> > 
> > Hmm..I don't quite understand the part of worrying its realisticness.
> > Would you mind elaborating a bit?
> 
> I only mean the case where a driver previously happened to get single pages
> allocated from a per-device CMA area, would now always get them fulfilled
> from regular kernel memory instead, and actually cares about the difference.

I see. I think that's a good question.

> As I say, that's a contrived case that I doubt is honestly a significant
> concern, but it's not *entirely* inconceivable. I've just been bitten before
> by drivers relying on specific DMA API implementation behaviour which was
> never guaranteed or even necessarily correct by the terms of the API itself,
> so I'm naturally wary of the corner cases ;)

I also have a vague concern that CMA pages might turn out to be
special, so this change could make a difference in stability or
performance for users who actually rely on real CMA pages, though
I'm not sure whether that's true or realistic, as you said.

> On second thought, however, I suppose we could always key this off
> DMA_ATTR_FORCE_CONTIGUOUS as well if we really want - technically it has a
> more general meaning than "only ever allocate from CMA", but in practice if
> that's the behaviour a driver wants, then that flag is already the only way
> it can even hope to get dma_alloc_coherent() to comply anywhere near
> reliably.

That is good input. Would you prefer to have that in the condition
check now, as part of this patch?

> > As I tested this change on Tegra186
> > board, and saw some single-page allocations have been directed to the
> > normal allocation; and the "CmaFree" size reported from /proc/meminfo
> > is also increased. Does this mean it's realistic?
> 
> Indeed - I happen to have CMA debug enabled for no good reason in my current
> development config, and on my relatively unexciting Juno board single-page
> allocations turn out to be the majority by number, even if not by total
> consumption:
> 
> [    0.519663] cma: cma_alloc(cma (____ptrval____), count 64, align 6)
> [    0.527508] cma: cma_alloc(): returned (____ptrval____)
> [    3.768066] cma: cma_alloc(cma (____ptrval____), count 1, align 0)
> [    3.774566] cma: cma_alloc(): returned (____ptrval____)
> [    3.860097] cma: cma_alloc(cma (____ptrval____), count 1875, align 8)
> [    3.867150] cma: cma_alloc(): returned (____ptrval____)
> [    3.920796] cma: cma_alloc(cma (____ptrval____), count 31, align 5)
> [    3.927093] cma: cma_alloc(): returned (____ptrval____)
> [    3.932326] cma: cma_alloc(cma (____ptrval____), count 31, align 5)
> [    3.938643] cma: cma_alloc(): returned (____ptrval____)
> [    4.022188] cma: cma_alloc(cma (____ptrval____), count 1, align 0)
> [    4.028415] cma: cma_alloc(): returned (____ptrval____)
> [    4.033600] cma: cma_alloc(cma (____ptrval____), count 1, align 0)
> [    4.039786] cma: cma_alloc(): returned (____ptrval____)
> [    4.044968] cma: cma_alloc(cma (____ptrval____), count 1, align 0)
> [    4.051150] cma: cma_alloc(): returned (____ptrval____)
> [    4.113556] cma: cma_alloc(cma (____ptrval____), count 1, align 0)
> [    4.119785] cma: cma_alloc(): returned (____ptrval____)
> [    5.012654] cma: cma_alloc(cma (____ptrval____), count 1, align 0)
> [    5.019047] cma: cma_alloc(): returned (____ptrval____)
> [   11.485179] cma: cma_alloc(cma 000000009dd074ee, count 1, align 0)
> [   11.492096] cma: cma_alloc(): returned 000000009264a86c
> [   12.269355] cma: cma_alloc(cma 000000009dd074ee, count 1875, align 8)
> [   12.277535] cma: cma_alloc(): returned 00000000d7bb9ae5
> [   12.286110] cma: cma_alloc(cma 000000009dd074ee, count 4, align 2)
> [   12.292507] cma: cma_alloc(): returned 0000000007ba7a39
> 
> I don't have any exciting peripherals to really exercise the coherent
> allocator, but I imagine that fragmentation is probably just as good a
> reason as total CMA usage for avoiding trivial allocations by default.

I will also mention fragmentation reduction in the commit message.

Thanks
Nicolin

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH RFC] dma-direct: do not allocate a single page from CMA area
  2018-11-01 14:07 ` Robin Murphy
  2018-11-01 18:04   ` Nicolin Chen
@ 2018-11-02  6:35   ` Christoph Hellwig
  2018-11-05 22:40     ` Nicolin Chen
  1 sibling, 1 reply; 10+ messages in thread
From: Christoph Hellwig @ 2018-11-02  6:35 UTC (permalink / raw)
  To: Robin Murphy; +Cc: Nicolin Chen, hch, m.szyprowski, iommu, linux-kernel, vdumpa

On Thu, Nov 01, 2018 at 02:07:55PM +0000, Robin Murphy wrote:
> On 31/10/2018 20:03, Nicolin Chen wrote:
>> The addresses within a single page are always contiguous, so it's
>> not so necessary to allocate one single page from CMA area. Since
>> the CMA area has a limited predefined size of space, it might run
>> out of space in some heavy use case, where there might be quite a
>> lot CMA pages being allocated for single pages.
>>
>> This patch tries to skip CMA allocations of single pages and lets
>> them go through normal page allocations. This would save resource
>> in the CMA area for further more CMA allocations.
>
> In general, this seems to make sense to me. It does represent a theoretical 
> change in behaviour for devices which have their own CMA area somewhere 
> other than kernel memory, and only ever make non-atomic allocations, but 
> I'm not sure whether that's a realistic or common enough case to really 
> worry about.

Yes, I think we should make the decision in dma_alloc_from_contiguous
based on having a per-dev CMA area or not.  There is a lot of cruft in
this area that should be cleaned up while we're at it, like always
falling back to the normal page allocator if there is no CMA area or
nothing suitable found in dma_alloc_from_contiguous instead of
having to duplicate all that in the caller.
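
To make that concrete, a rough sketch only (the helper name and
signature below are made up for illustration; the current
dma_alloc_from_contiguous() takes a no_warn flag and does not fall
back to the page allocator by itself):

	/*
	 * Hypothetical consolidated helper: decide between the per-device
	 * (or global) CMA area and the normal page allocator in one place,
	 * instead of duplicating the fallback in every caller.
	 */
	static struct page *__dma_direct_alloc_contiguous(struct device *dev,
			size_t count, unsigned int align, gfp_t gfp)
	{
		struct cma *cma = dev_get_cma_area(dev);
		struct page *page = NULL;

		/* Only use CMA for multi-page, blocking allocations. */
		if (cma && count > 1 && gfpflags_allow_blocking(gfp))
			page = cma_alloc(cma, count, align,
					 gfp & __GFP_NOWARN);

		/* No CMA area, trivial size, or CMA failure: buddy allocator. */
		if (!page)
			page = alloc_pages_node(dev_to_node(dev), gfp,
						get_order(count << PAGE_SHIFT));

		return page;
	}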

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH RFC] dma-direct: do not allocate a single page from CMA area
  2018-11-02  6:35   ` Christoph Hellwig
@ 2018-11-05 22:40     ` Nicolin Chen
  2018-11-20  2:39       ` Nicolin Chen
  2018-11-20  9:20       ` Christoph Hellwig
  0 siblings, 2 replies; 10+ messages in thread
From: Nicolin Chen @ 2018-11-05 22:40 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Robin Murphy, m.szyprowski, iommu, linux-kernel, vdumpa

On Fri, Nov 02, 2018 at 07:35:42AM +0100, Christoph Hellwig wrote:
> On Thu, Nov 01, 2018 at 02:07:55PM +0000, Robin Murphy wrote:
> > On 31/10/2018 20:03, Nicolin Chen wrote:
> >> The addresses within a single page are always contiguous, so it's
> >> not so necessary to allocate one single page from CMA area. Since
> >> the CMA area has a limited predefined size of space, it might run
> >> out of space in some heavy use case, where there might be quite a
> >> lot CMA pages being allocated for single pages.
> >>
> >> This patch tries to skip CMA allocations of single pages and lets
> >> them go through normal page allocations. This would save resource
> >> in the CMA area for further more CMA allocations.
> >
> > In general, this seems to make sense to me. It does represent a theoretical 
> > change in behaviour for devices which have their own CMA area somewhere 
> > other than kernel memory, and only ever make non-atomic allocations, but 
> > I'm not sure whether that's a realistic or common enough case to really 
> > worry about.
> 
> Yes, I think we should make the decision in dma_alloc_from_contiguous
> based on having a per-dev CMA area or not.  There is a lot of cruft in

It seems that cma_alloc() already has a CMA area check. Wouldn't it
be duplication to add a similar one in dma_alloc_from_contiguous()?

> this area that should be cleaned up while we're at it, like always
> falling back to the normal page allocator if there is no CMA area or
> nothing suitable found in dma_alloc_from_contiguous instead of
> having to duplicate all that in the caller.

Am I supposed to clean up the things mentioned above by moving the
fallback allocator into dma_alloc_from_contiguous(), or just to move
my change (the count check) into dma_alloc_from_contiguous()?

I understand it would be great to have a cleanup, yet I feel it could
be done separately, as this patch isn't really a cleanup change.
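
To make the second option concrete, roughly something like this is
what I have in mind (just a sketch on top of the existing function,
details of the current body aside):

	struct page *dma_alloc_from_contiguous(struct device *dev, size_t count,
					       unsigned int align, bool no_warn)
	{
		/* Single pages don't need CMA; let the caller fall back. */
		if (count <= 1)
			return NULL;

		return cma_alloc(dev_get_cma_area(dev), count, align, no_warn);
	}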

Thanks
Nicolin

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH RFC] dma-direct: do not allocate a single page from CMA area
  2018-11-05 22:40     ` Nicolin Chen
@ 2018-11-20  2:39       ` Nicolin Chen
  2018-11-20  9:20       ` Christoph Hellwig
  1 sibling, 0 replies; 10+ messages in thread
From: Nicolin Chen @ 2018-11-20  2:39 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Robin Murphy, m.szyprowski, iommu, linux-kernel, vdumpa

Robin? Christoph?

On Mon, Nov 05, 2018 at 02:40:50PM -0800, Nicolin Chen wrote:
> On Fri, Nov 02, 2018 at 07:35:42AM +0100, Christoph Hellwig wrote:
> > On Thu, Nov 01, 2018 at 02:07:55PM +0000, Robin Murphy wrote:
> > > On 31/10/2018 20:03, Nicolin Chen wrote:
> > >> The addresses within a single page are always contiguous, so it's
> > >> not so necessary to allocate one single page from CMA area. Since
> > >> the CMA area has a limited predefined size of space, it might run
> > >> out of space in some heavy use case, where there might be quite a
> > >> lot CMA pages being allocated for single pages.
> > >>
> > >> This patch tries to skip CMA allocations of single pages and lets
> > >> them go through normal page allocations. This would save resource
> > >> in the CMA area for further more CMA allocations.
> > >
> > > In general, this seems to make sense to me. It does represent a theoretical 
> > > change in behaviour for devices which have their own CMA area somewhere 
> > > other than kernel memory, and only ever make non-atomic allocations, but 
> > > I'm not sure whether that's a realistic or common enough case to really 
> > > worry about.
> > 
> > Yes, I think we should make the decision in dma_alloc_from_contiguous
> > based on having a per-dev CMA area or not.  There is a lot of cruft in
> 
> It seems that cma_alloc() already has a CMA area check? Would it
> be duplicated to have a similar one in dma_alloc_from_contiguous?
> 
> > this area that should be cleaned up while we're at it, like always
> > falling back to the normal page allocator if there is no CMA area or
> > nothing suitable found in dma_alloc_from_contiguous instead of
> > having to duplicate all that in the caller.
> 
> Am I supposed to clean up things that's mentioned above by moving
> the fallback allocator into dma_alloc_from_contiguous, or to just
> move my change (the count check) into dma_alloc_from_contiguous?
> 
> I understand that'd be great to have a cleanup, yet feel it could
> be done separately as this patch isn't really a cleanup change.
> 
> Thanks
> Nicolin

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH RFC] dma-direct: do not allocate a single page from CMA area
  2018-11-05 22:40     ` Nicolin Chen
  2018-11-20  2:39       ` Nicolin Chen
@ 2018-11-20  9:20       ` Christoph Hellwig
  2018-11-21  1:30         ` Nicolin Chen
  1 sibling, 1 reply; 10+ messages in thread
From: Christoph Hellwig @ 2018-11-20  9:20 UTC (permalink / raw)
  To: Nicolin Chen
  Cc: Christoph Hellwig, Robin Murphy, m.szyprowski, iommu,
	linux-kernel, vdumpa

On Mon, Nov 05, 2018 at 02:40:51PM -0800, Nicolin Chen wrote:
> > > In general, this seems to make sense to me. It does represent a theoretical 
> > > change in behaviour for devices which have their own CMA area somewhere 
> > > other than kernel memory, and only ever make non-atomic allocations, but 
> > > I'm not sure whether that's a realistic or common enough case to really 
> > > worry about.
> > 
> > Yes, I think we should make the decision in dma_alloc_from_contiguous
> > based on having a per-dev CMA area or not.  There is a lot of cruft in
> 
> It seems that cma_alloc() already has a CMA area check? Would it
> be duplicated to have a similar one in dma_alloc_from_contiguous?

It isn't duplication if it serves a different purpose.

> > this area that should be cleaned up while we're at it, like always
> > falling back to the normal page allocator if there is no CMA area or
> > nothing suitable found in dma_alloc_from_contiguous instead of
> > having to duplicate all that in the caller.
> 
> Am I supposed to clean up things that's mentioned above by moving
> the fallback allocator into dma_alloc_from_contiguous, or to just
> move my change (the count check) into dma_alloc_from_contiguous?
> 
> I understand that'd be great to have a cleanup, yet feel it could
> be done separately as this patch isn't really a cleanup change.

I can take care of any cleanups.  I've been trying to dust up that
area anyway.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH RFC] dma-direct: do not allocate a single page from CMA area
  2018-11-20  9:20       ` Christoph Hellwig
@ 2018-11-21  1:30         ` Nicolin Chen
  0 siblings, 0 replies; 10+ messages in thread
From: Nicolin Chen @ 2018-11-21  1:30 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Robin Murphy, m.szyprowski, iommu, linux-kernel, vdumpa

On Tue, Nov 20, 2018 at 10:20:10AM +0100, Christoph Hellwig wrote:
> On Mon, Nov 05, 2018 at 02:40:51PM -0800, Nicolin Chen wrote:
> > > > In general, this seems to make sense to me. It does represent a theoretical 
> > > > change in behaviour for devices which have their own CMA area somewhere 
> > > > other than kernel memory, and only ever make non-atomic allocations, but 
> > > > I'm not sure whether that's a realistic or common enough case to really 
> > > > worry about.
> > > 
> > > Yes, I think we should make the decision in dma_alloc_from_contiguous
> > > based on having a per-dev CMA area or not.  There is a lot of cruft in
> > 
> > It seems that cma_alloc() already has a CMA area check? Would it
> > be duplicated to have a similar one in dma_alloc_from_contiguous?
> 
> It isn't duplicate if it serves a different purpose.
> 
> > > this area that should be cleaned up while we're at it, like always
> > > falling back to the normal page allocator if there is no CMA area or
> > > nothing suitable found in dma_alloc_from_contiguous instead of
> > > having to duplicate all that in the caller.
> > 
> > Am I supposed to clean up things that's mentioned above by moving
> > the fallback allocator into dma_alloc_from_contiguous, or to just
> > move my change (the count check) into dma_alloc_from_contiguous?
> > 
> > I understand that'd be great to have a cleanup, yet feel it could
> > be done separately as this patch isn't really a cleanup change.
> 
> I can take care of any cleanups.  I've been trying to dust up that
> area anyway.

Thanks for the reply. It looks like it'd be better for me to wait
for the cleanup to be done. I feel odd merely adding a size check
to dma_alloc_from_contiguous().

^ permalink raw reply	[flat|nested] 10+ messages in thread

