Hi Christoph,

On Fri, 2020-07-10 at 16:10 +0200, Nicolas Saenz Julienne wrote:
> There is no guarantee to CMA's placement, so allocating a zone-specific
> atomic pool from CMA might return memory from a completely different
> memory zone. To get around this, double check CMA's placement before
> allocating from it.
>
> Fixes: c84dc6e68a1d ("dma-pool: add additional coherent pools to map to gfp mask")
> Reported-by: Jeremy Linton
> Signed-off-by: Nicolas Saenz Julienne
> ---
>
> This is a code-intensive alternative to "dma-pool: Do not allocate pool
> memory from CMA"[1].

I see you applied "dma-pool: Do not allocate pool memory from CMA" on your
tree. Do you want me to send a v2 of this patch that takes it into account,
targeting v5.9? Or would you rather follow another approach?

Regards,
Nicolas

>
> [1] https://lkml.org/lkml/2020/7/8/1108
>
>  kernel/dma/pool.c | 36 +++++++++++++++++++++++++++++++++++-
>  1 file changed, 35 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
> index 8cfa01243ed2..ccf3eeb77e00 100644
> --- a/kernel/dma/pool.c
> +++ b/kernel/dma/pool.c
> @@ -3,6 +3,7 @@
>   * Copyright (C) 2012 ARM Ltd.
>   * Copyright (C) 2020 Google LLC
>   */
> +#include <linux/cma.h>
>  #include
>  #include
>  #include
> @@ -56,6 +57,39 @@ static void dma_atomic_pool_size_add(gfp_t gfp, size_t size)
>  	pool_size_kernel += size;
>  }
>
> +static bool cma_in_zone(gfp_t gfp)
> +{
> +	u64 zone_dma_end, zone_dma32_end;
> +	phys_addr_t base, end;
> +	unsigned long size;
> +	struct cma *cma;
> +
> +	cma = dev_get_cma_area(NULL);
> +	if (!cma)
> +		return false;
> +
> +	size = cma_get_size(cma);
> +	if (!size)
> +		return false;
> +	base = cma_get_base(cma) - memblock_start_of_DRAM();
> +	end = base + size - 1;
> +
> +	zone_dma_end = IS_ENABLED(CONFIG_ZONE_DMA) ? DMA_BIT_MASK(zone_dma_bits) : 0;
> +	zone_dma32_end = IS_ENABLED(CONFIG_ZONE_DMA32) ? DMA_BIT_MASK(32) : 0;
> +
> +	/* CMA can't cross zone boundaries, see cma_activate_area() */
> +	if (IS_ENABLED(CONFIG_ZONE_DMA) && gfp & GFP_DMA &&
> +	    end <= zone_dma_end)
> +		return true;
> +	else if (IS_ENABLED(CONFIG_ZONE_DMA32) && gfp & GFP_DMA32 &&
> +		 base > zone_dma_end && end <= zone_dma32_end)
> +		return true;
> +	else if (base > zone_dma32_end)
> +		return true;
> +
> +	return false;
> +}
> +
>  static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
>  			      gfp_t gfp)
>  {
> @@ -70,7 +104,7 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
>  	do {
>  		pool_size = 1 << (PAGE_SHIFT + order);
>
> -		if (dev_get_cma_area(NULL))
> +		if (cma_in_zone(gfp))
>  			page = dma_alloc_from_contiguous(NULL, 1 << order,
>  							 order, false);
>  		else