* [PATCH RFC 0/3] iommu/io-pgtable-arm-v7s: Use DMA32 zone for page tables
@ 2018-11-09  8:24 Nicolas Boichat
  2018-11-09  8:24 ` [PATCH RFC 1/3] mm: When CONFIG_ZONE_DMA32 is set, use DMA32 for SLAB_CACHE_DMA Nicolas Boichat
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Nicolas Boichat @ 2018-11-09  8:24 UTC (permalink / raw)
  To: Robin Murphy
  Cc: Will Deacon, Joerg Roedel, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka,
	Michal Hocko, Mel Gorman, Levin Alexander, Huaisheng Ye,
	Mike Rapoport, linux-arm-kernel, iommu, linux-kernel, linux-mm,
	Yong Wu, Matthias Brugger, Tomasz Figa, yingjoe.chen

This is a follow-up to the discussion in [1], to make sure that the page tables
allocated by iommu/io-pgtable-arm-v7s are contained within the 32-bit physical
address space.

[1] https://lists.linuxfoundation.org/pipermail/iommu/2018-November/030876.html

Nicolas Boichat (3):
  mm: When CONFIG_ZONE_DMA32 is set, use DMA32 for SLAB_CACHE_DMA
  include/linux/gfp.h: Add __get_dma32_pages macro
  iommu/io-pgtable-arm-v7s: Request DMA32 memory, and improve debugging

 drivers/iommu/io-pgtable-arm-v7s.c |  6 ++++--
 include/linux/gfp.h                |  2 ++
 include/linux/slab.h               | 13 ++++++++++++-
 mm/slab.c                          |  2 +-
 mm/slub.c                          |  2 +-
 5 files changed, 20 insertions(+), 5 deletions(-)

-- 
2.19.1.930.g4563a0d9d0-goog



* [PATCH RFC 1/3] mm: When CONFIG_ZONE_DMA32 is set, use DMA32 for SLAB_CACHE_DMA
  2018-11-09  8:24 [PATCH RFC 0/3] iommu/io-pgtable-arm-v7s: Use DMA32 zone for page tables Nicolas Boichat
@ 2018-11-09  8:24 ` Nicolas Boichat
  2018-11-09 10:43   ` Vlastimil Babka
  2018-11-09  8:24 ` [PATCH RFC 2/3] include/linux/gfp.h: Add __get_dma32_pages macro Nicolas Boichat
  2018-11-09  8:24 ` [PATCH RFC 3/3] iommu/io-pgtable-arm-v7s: Request DMA32 memory, and improve debugging Nicolas Boichat
  2 siblings, 1 reply; 7+ messages in thread
From: Nicolas Boichat @ 2018-11-09  8:24 UTC (permalink / raw)
  To: Robin Murphy
  Cc: Will Deacon, Joerg Roedel, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka,
	Michal Hocko, Mel Gorman, Levin Alexander, Huaisheng Ye,
	Mike Rapoport, linux-arm-kernel, iommu, linux-kernel, linux-mm,
	Yong Wu, Matthias Brugger, Tomasz Figa, yingjoe.chen

Some callers, namely iommu/io-pgtable-arm-v7s, expect the physical
address returned by kmem_cache_alloc with the GFP_DMA flag to be
a 32-bit address.

Instead of adding a separate SLAB_CACHE_DMA32 (and then auditing
all the calls to check whether they require memory from the DMA or
DMA32 zone), we simply allocate SLAB_CACHE_DMA caches from the
DMA32 zone when CONFIG_ZONE_DMA32 is set.
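
As an illustration, here is a minimal sketch of an affected caller (the
cache name and object type are hypothetical; only the SLAB_CACHE_DMA
flag and the allocation call matter):

  #include <linux/slab.h>

  /* Hypothetical 1 KiB object that must sit below 4 GiB. */
  struct my_l2_table { u32 pte[256]; };

  static struct kmem_cache *my_cache;

  static void *my_alloc_table(gfp_t gfp)
  {
          if (!my_cache)
                  my_cache = kmem_cache_create("my_l2_tables",
                                               sizeof(struct my_l2_table),
                                               sizeof(struct my_l2_table),
                                               SLAB_CACHE_DMA, NULL);
          /*
           * With this patch, on a CONFIG_ZONE_DMA32 kernel the backing
           * slab pages of this cache are allocated with GFP_DMA32 rather
           * than GFP_DMA, so the returned object's physical address fits
           * in 32 bits.
           */
          return my_cache ? kmem_cache_zalloc(my_cache, gfp) : NULL;
  }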

Fixes: ad67f5a6545f ("arm64: replace ZONE_DMA with ZONE_DMA32")
Signed-off-by: Nicolas Boichat <drinkcat@chromium.org>
---
 include/linux/slab.h | 13 ++++++++++++-
 mm/slab.c            |  2 +-
 mm/slub.c            |  2 +-
 3 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 918f374e7156f4..390afe90c5dec0 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -30,7 +30,7 @@
 #define SLAB_POISON		((slab_flags_t __force)0x00000800U)
 /* Align objs on cache lines */
 #define SLAB_HWCACHE_ALIGN	((slab_flags_t __force)0x00002000U)
-/* Use GFP_DMA memory */
+/* Use GFP_DMA or GFP_DMA32 memory */
 #define SLAB_CACHE_DMA		((slab_flags_t __force)0x00004000U)
 /* DEBUG: Store the last owner for bug hunting */
 #define SLAB_STORE_USER		((slab_flags_t __force)0x00010000U)
@@ -126,6 +126,17 @@
 #define ZERO_OR_NULL_PTR(x) ((unsigned long)(x) <= \
 				(unsigned long)ZERO_SIZE_PTR)
 
+/*
+ * When ZONE_DMA32 is defined, have SLAB_CACHE_DMA allocate memory with
+ * GFP_DMA32 instead of GFP_DMA, as this is what some of the callers
+ * require (instead of duplicating cache for DMA and DMA32 zones).
+ */
+#ifdef CONFIG_ZONE_DMA32
+#define SLAB_CACHE_DMA_GFP GFP_DMA32
+#else
+#define SLAB_CACHE_DMA_GFP GFP_DMA
+#endif
+
 #include <linux/kasan.h>
 
 struct mem_cgroup;
diff --git a/mm/slab.c b/mm/slab.c
index 2a5654bb3b3ff3..8810daa052dcdc 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2121,7 +2121,7 @@ int __kmem_cache_create(struct kmem_cache *cachep, slab_flags_t flags)
 	cachep->flags = flags;
 	cachep->allocflags = __GFP_COMP;
 	if (flags & SLAB_CACHE_DMA)
-		cachep->allocflags |= GFP_DMA;
+		cachep->allocflags |= SLAB_CACHE_DMA_GFP;
 	if (flags & SLAB_RECLAIM_ACCOUNT)
 		cachep->allocflags |= __GFP_RECLAIMABLE;
 	cachep->size = size;
diff --git a/mm/slub.c b/mm/slub.c
index e3629cd7aff164..fdd05323e54cbd 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3575,7 +3575,7 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 		s->allocflags |= __GFP_COMP;
 
 	if (s->flags & SLAB_CACHE_DMA)
-		s->allocflags |= GFP_DMA;
+		s->allocflags |= SLAB_CACHE_DMA_GFP;
 
 	if (s->flags & SLAB_RECLAIM_ACCOUNT)
 		s->allocflags |= __GFP_RECLAIMABLE;
-- 
2.19.1.930.g4563a0d9d0-goog



* [PATCH RFC 2/3] include/linux/gfp.h: Add __get_dma32_pages macro
  2018-11-09  8:24 [PATCH RFC 0/3] iommu/io-pgtable-arm-v7s: Use DMA32 zone for page tables Nicolas Boichat
  2018-11-09  8:24 ` [PATCH RFC 1/3] mm: When CONFIG_ZONE_DMA32 is set, use DMA32 for SLAB_CACHE_DMA Nicolas Boichat
@ 2018-11-09  8:24 ` Nicolas Boichat
  2018-11-09  8:24 ` [PATCH RFC 3/3] iommu/io-pgtable-arm-v7s: Request DMA32 memory, and improve debugging Nicolas Boichat
  2 siblings, 0 replies; 7+ messages in thread
From: Nicolas Boichat @ 2018-11-09  8:24 UTC (permalink / raw)
  To: Robin Murphy
  Cc: Will Deacon, Joerg Roedel, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka,
	Michal Hocko, Mel Gorman, Levin Alexander, Huaisheng Ye,
	Mike Rapoport, linux-arm-kernel, iommu, linux-kernel, linux-mm,
	Yong Wu, Matthias Brugger, Tomasz Figa, yingjoe.chen

Some callers (e.g. iommu/io-pgtable-arm-v7s) require DMA32 memory
when calling __get_dma_pages. Add a new macro for that purpose.
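
A hypothetical usage example (not taken from the driver):

  #include <linux/gfp.h>

  /* One zeroed page whose physical address lies below 4 GiB on kernels
   * with CONFIG_ZONE_DMA32; without that option, GFP_DMA32 has no
   * effect and the allocation falls back to normal memory. */
  unsigned long addr = __get_dma32_pages(GFP_KERNEL | __GFP_ZERO, 0);

  if (addr)
          free_pages(addr, 0);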

Fixes: ad67f5a6545f ("arm64: replace ZONE_DMA with ZONE_DMA32")
Signed-off-by: Nicolas Boichat <drinkcat@chromium.org>
---
 include/linux/gfp.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 76f8db0b0e715c..50e04896b78017 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -535,6 +535,8 @@ void * __meminit alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask);
 
 #define __get_dma_pages(gfp_mask, order) \
 		__get_free_pages((gfp_mask) | GFP_DMA, (order))
+#define __get_dma32_pages(gfp_mask, order) \
+		__get_free_pages((gfp_mask) | GFP_DMA32, (order))
 
 extern void __free_pages(struct page *page, unsigned int order);
 extern void free_pages(unsigned long addr, unsigned int order);
-- 
2.19.1.930.g4563a0d9d0-goog



* [PATCH RFC 3/3] iommu/io-pgtable-arm-v7s: Request DMA32 memory, and improve debugging
  2018-11-09  8:24 [PATCH RFC 0/3] iommu/io-pgtable-arm-v7s: Use DMA32 zone for page tables Nicolas Boichat
  2018-11-09  8:24 ` [PATCH RFC 1/3] mm: When CONFIG_ZONE_DMA32 is set, use DMA32 for SLAB_CACHE_DMA Nicolas Boichat
  2018-11-09  8:24 ` [PATCH RFC 2/3] include/linux/gfp.h: Add __get_dma32_pages macro Nicolas Boichat
@ 2018-11-09  8:24 ` Nicolas Boichat
  2 siblings, 0 replies; 7+ messages in thread
From: Nicolas Boichat @ 2018-11-09  8:24 UTC (permalink / raw)
  To: Robin Murphy
  Cc: Will Deacon, Joerg Roedel, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka,
	Michal Hocko, Mel Gorman, Levin Alexander, Huaisheng Ye,
	Mike Rapoport, linux-arm-kernel, iommu, linux-kernel, linux-mm,
	Yong Wu, Matthias Brugger, Tomasz Figa, yingjoe.chen

For level 1 pages, use __get_dma32_pages to make sure the physical
address fits in 32 bits.

For level 2 pages, kmem_cache_zalloc with GFP_DMA was modified in a
previous patch to allocate from the DMA32 zone.

Also, print an error when the physical address does not fit in
32 bits, to make debugging easier in the future.
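
For reference, arm_v7s_iopte is a 32-bit type in this driver, so the
cast-and-compare check in the hunk below catches any table that landed
above 4 GiB. A standalone illustration (the address is made up):

  #include <linux/kernel.h>

  typedef u32 arm_v7s_iopte;

  phys_addr_t phys = 0x100000000ULL;  /* first address above 4 GiB */

  /* The cast truncates the value to 0, so the comparison fires. */
  if (phys != (arm_v7s_iopte)phys)
          pr_err("Page table does not fit in PTE: %pa\n", &phys);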

Fixes: ad67f5a6545f ("arm64: replace ZONE_DMA with ZONE_DMA32")
Signed-off-by: Nicolas Boichat <drinkcat@chromium.org>
---
 drivers/iommu/io-pgtable-arm-v7s.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
index 445c3bde04800c..862fd404de84d8 100644
--- a/drivers/iommu/io-pgtable-arm-v7s.c
+++ b/drivers/iommu/io-pgtable-arm-v7s.c
@@ -198,13 +198,15 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
 	void *table = NULL;
 
 	if (lvl == 1)
-		table = (void *)__get_dma_pages(__GFP_ZERO, get_order(size));
+		table = (void *)__get_dma32_pages(__GFP_ZERO, get_order(size));
 	else if (lvl == 2)
 		table = kmem_cache_zalloc(data->l2_tables, gfp | GFP_DMA);
 	phys = virt_to_phys(table);
-	if (phys != (arm_v7s_iopte)phys)
+	if (phys != (arm_v7s_iopte)phys) {
 		/* Doesn't fit in PTE */
+		dev_err(dev, "Page table does not fit: %pa", &phys);
 		goto out_free;
+	}
 	if (table && !(cfg->quirks & IO_PGTABLE_QUIRK_NO_DMA)) {
 		dma = dma_map_single(dev, table, size, DMA_TO_DEVICE);
 		if (dma_mapping_error(dev, dma))
-- 
2.19.1.930.g4563a0d9d0-goog



* Re: [PATCH RFC 1/3] mm: When CONFIG_ZONE_DMA32 is set, use DMA32 for SLAB_CACHE_DMA
  2018-11-09  8:24 ` [PATCH RFC 1/3] mm: When CONFIG_ZONE_DMA32 is set, use DMA32 for SLAB_CACHE_DMA Nicolas Boichat
@ 2018-11-09 10:43   ` Vlastimil Babka
  2018-11-09 11:57     ` Nicolas Boichat
  0 siblings, 1 reply; 7+ messages in thread
From: Vlastimil Babka @ 2018-11-09 10:43 UTC (permalink / raw)
  To: Nicolas Boichat, Robin Murphy
  Cc: Will Deacon, Joerg Roedel, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Michal Hocko,
	Mel Gorman, Levin Alexander, Huaisheng Ye, Mike Rapoport,
	linux-arm-kernel, iommu, linux-kernel, linux-mm, Yong Wu,
	Matthias Brugger, Tomasz Figa, yingjoe.chen

On 11/9/18 9:24 AM, Nicolas Boichat wrote:
> Some callers, namely iommu/io-pgtable-arm-v7s, expect the physical
> address returned by kmem_cache_alloc with the GFP_DMA flag to be
> a 32-bit address.
> 
> Instead of adding a separate SLAB_CACHE_DMA32 (and then auditing
> all the calls to check whether they require memory from the DMA or
> DMA32 zone), we simply allocate SLAB_CACHE_DMA caches from the
> DMA32 zone when CONFIG_ZONE_DMA32 is set.
> 
> Fixes: ad67f5a6545f ("arm64: replace ZONE_DMA with ZONE_DMA32")
> Signed-off-by: Nicolas Boichat <drinkcat@chromium.org>
> ---
>  include/linux/slab.h | 13 ++++++++++++-
>  mm/slab.c            |  2 +-
>  mm/slub.c            |  2 +-
>  3 files changed, 14 insertions(+), 3 deletions(-)
> 
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 918f374e7156f4..390afe90c5dec0 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -30,7 +30,7 @@
>  #define SLAB_POISON		((slab_flags_t __force)0x00000800U)
>  /* Align objs on cache lines */
>  #define SLAB_HWCACHE_ALIGN	((slab_flags_t __force)0x00002000U)
> -/* Use GFP_DMA memory */
> +/* Use GFP_DMA or GFP_DMA32 memory */
>  #define SLAB_CACHE_DMA		((slab_flags_t __force)0x00004000U)
>  /* DEBUG: Store the last owner for bug hunting */
>  #define SLAB_STORE_USER		((slab_flags_t __force)0x00010000U)
> @@ -126,6 +126,17 @@
>  #define ZERO_OR_NULL_PTR(x) ((unsigned long)(x) <= \
>  				(unsigned long)ZERO_SIZE_PTR)
>  
> +/*
> + * When ZONE_DMA32 is defined, have SLAB_CACHE_DMA allocate memory with
> + * GFP_DMA32 instead of GFP_DMA, as this is what some of the callers
> + * require (instead of duplicating cache for DMA and DMA32 zones).
> + */
> +#ifdef CONFIG_ZONE_DMA32
> +#define SLAB_CACHE_DMA_GFP GFP_DMA32
> +#else
> +#define SLAB_CACHE_DMA_GFP GFP_DMA
> +#endif

AFAICS this will break e.g. x86, which can have both ZONE_DMA and
ZONE_DMA32, and now you would make kmalloc(__GFP_DMA) return objects
from ZONE_DMA32 instead of ZONE_DMA, which can break something.

Also I'm probably missing the point of this all. In patch 3 you use
__get_dma32_pages() thus __get_free_pages(__GFP_DMA32), which uses
alloc_pages, thus the page allocator directly, and there are no slab
caches involved. It makes little sense to involve slab for page table
allocations anyway, as those tend to be aligned to a page size (or
high-order page size). So what am I missing?


* Re: [PATCH RFC 1/3] mm: When CONFIG_ZONE_DMA32 is set, use DMA32 for SLAB_CACHE_DMA
  2018-11-09 10:43   ` Vlastimil Babka
@ 2018-11-09 11:57     ` Nicolas Boichat
  2018-11-09 12:14       ` Vlastimil Babka
  0 siblings, 1 reply; 7+ messages in thread
From: Nicolas Boichat @ 2018-11-09 11:57 UTC (permalink / raw)
  To: vbabka
  Cc: robin.murphy, will.deacon, joro, cl, penberg, rientjes,
	Joonsoo Kim, Andrew Morton, mhocko, mgorman, yehs1, rppt,
	linux-arm Mailing List, iommu, lkml, linux-mm, yong.wu,
	Matthias Brugger, tfiga, yingjoe.chen, Alexander.Levin

On Fri, Nov 9, 2018 at 6:43 PM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> On 11/9/18 9:24 AM, Nicolas Boichat wrote:
> > Some callers, namely iommu/io-pgtable-arm-v7s, expect the physical
> > address returned by kmem_cache_alloc with the GFP_DMA flag to be
> > a 32-bit address.
> >
> > Instead of adding a separate SLAB_CACHE_DMA32 (and then auditing
> > all the calls to check whether they require memory from the DMA or
> > DMA32 zone), we simply allocate SLAB_CACHE_DMA caches from the
> > DMA32 zone when CONFIG_ZONE_DMA32 is set.
> >
> > Fixes: ad67f5a6545f ("arm64: replace ZONE_DMA with ZONE_DMA32")
> > Signed-off-by: Nicolas Boichat <drinkcat@chromium.org>
> > ---
> >  include/linux/slab.h | 13 ++++++++++++-
> >  mm/slab.c            |  2 +-
> >  mm/slub.c            |  2 +-
> >  3 files changed, 14 insertions(+), 3 deletions(-)
> >
> > diff --git a/include/linux/slab.h b/include/linux/slab.h
> > index 918f374e7156f4..390afe90c5dec0 100644
> > --- a/include/linux/slab.h
> > +++ b/include/linux/slab.h
> > @@ -30,7 +30,7 @@
> >  #define SLAB_POISON          ((slab_flags_t __force)0x00000800U)
> >  /* Align objs on cache lines */
> >  #define SLAB_HWCACHE_ALIGN   ((slab_flags_t __force)0x00002000U)
> > -/* Use GFP_DMA memory */
> > +/* Use GFP_DMA or GFP_DMA32 memory */
> >  #define SLAB_CACHE_DMA               ((slab_flags_t __force)0x00004000U)
> >  /* DEBUG: Store the last owner for bug hunting */
> >  #define SLAB_STORE_USER              ((slab_flags_t __force)0x00010000U)
> > @@ -126,6 +126,17 @@
> >  #define ZERO_OR_NULL_PTR(x) ((unsigned long)(x) <= \
> >                               (unsigned long)ZERO_SIZE_PTR)
> >
> > +/*
> > + * When ZONE_DMA32 is defined, have SLAB_CACHE_DMA allocate memory with
> > + * GFP_DMA32 instead of GFP_DMA, as this is what some of the callers
> > + * require (instead of duplicating cache for DMA and DMA32 zones).
> > + */
> > +#ifdef CONFIG_ZONE_DMA32
> > +#define SLAB_CACHE_DMA_GFP GFP_DMA32
> > +#else
> > +#define SLAB_CACHE_DMA_GFP GFP_DMA
> > +#endif
>
> AFAICS this will break e.g. x86, which can have both ZONE_DMA and
> ZONE_DMA32, and now you would make kmalloc(__GFP_DMA) return objects
> from ZONE_DMA32 instead of ZONE_DMA, which can break something.

Oh, I was not aware that both ZONE_DMA and ZONE_DMA32 can be defined
at the same time. I guess the test should be inverted, something like
this (can be simplified...):
#ifdef CONFIG_ZONE_DMA
#define SLAB_CACHE_DMA_GFP GFP_DMA
#elif defined(CONFIG_ZONE_DMA32)
#define SLAB_CACHE_DMA_GFP GFP_DMA32
#else
#define SLAB_CACHE_DMA_GFP GFP_DMA // ?
#endif

> Also I'm probably missing the point of this all. In patch 3 you use
> __get_dma32_pages() thus __get_free_pages(__GFP_DMA32), which uses
> alloc_pages, thus the page allocator directly, and there's no slab
> caches involved.

__get_dma32_pages fixes level 1 page allocations in patch 3.

This change fixes level 2 page allocations
(kmem_cache_zalloc(data->l2_tables, gfp | GFP_DMA)), by transparently
remapping GFP_DMA to an underlying ZONE_DMA32.

The alternative would be to create a new SLAB_CACHE_DMA32 when
CONFIG_ZONE_DMA32 is defined, but then I'm concerned that the callers
would need to choose between the 2 (GFP_DMA or GFP_DMA32...), and also
need to use some ifdefs (but maybe that's not a valid concern?).

> It makes little sense to involve slab for page table
> allocations anyway, as those tend to be aligned to a page size (or
> high-order page size). So what am I missing?

Level 2 tables are ARM_V7S_TABLE_SIZE(2) => 1 KiB, so we'd waste 3 KiB if
we allocated a full 4 KiB page.

Thanks,


* Re: [PATCH RFC 1/3] mm: When CONFIG_ZONE_DMA32 is set, use DMA32 for SLAB_CACHE_DMA
  2018-11-09 11:57     ` Nicolas Boichat
@ 2018-11-09 12:14       ` Vlastimil Babka
  0 siblings, 0 replies; 7+ messages in thread
From: Vlastimil Babka @ 2018-11-09 12:14 UTC (permalink / raw)
  To: Nicolas Boichat
  Cc: robin.murphy, will.deacon, joro, cl, penberg, rientjes,
	Joonsoo Kim, Andrew Morton, mhocko, mgorman, yehs1, rppt,
	linux-arm Mailing List, iommu, lkml, linux-mm, yong.wu,
	Matthias Brugger, tfiga, yingjoe.chen, Alexander.Levin

On 11/9/18 12:57 PM, Nicolas Boichat wrote:
> On Fri, Nov 9, 2018 at 6:43 PM Vlastimil Babka <vbabka@suse.cz> wrote:
>> Also I'm probably missing the point of this all. In patch 3 you use
>> __get_dma32_pages() thus __get_free_pages(__GFP_DMA32), which uses
> alloc_pages, thus the page allocator directly, and there are no slab
>> caches involved.
> 
> __get_dma32_pages fixes level 1 page allocations in patch 3.
> 
> This change fixes level 2 page allocations
> (kmem_cache_zalloc(data->l2_tables, gfp | GFP_DMA)), by transparently
> remapping GFP_DMA to an underlying ZONE_DMA32.
> 
> The alternative would be to create a new SLAB_CACHE_DMA32 when
> CONFIG_ZONE_DMA32 is defined, but then I'm concerned that the callers
> would need to choose between the 2 (GFP_DMA or GFP_DMA32...), and also
> need to use some ifdefs (but maybe that's not a valid concern?).
> 
>> It makes little sense to involve slab for page table
>> allocations anyway, as those tend to be aligned to a page size (or
>> high-order page size). So what am I missing?
> 
> Level 2 tables are ARM_V7S_TABLE_SIZE(2) => 1 KiB, so we'd waste 3 KiB if
> we allocated a full 4 KiB page.

Oh, I see.

Well, I think indeed the most transparent approach would be to support
SLAB_CACHE_DMA32. The callers of kmem_cache_zalloc() would then not need to
add anything special to gfp, as that's stored internally upon
kmem_cache_create(). Of course SLAB_BUG_MASK would no longer have to
treat __GFP_DMA32 as unexpected. It would be unexpected when passed to
kmalloc() which doesn't have special dma32 caches, but for a cache
explicitly created to allocate from ZONE_DMA32, I don't see why not. I'm
somewhat surprised that there wouldn't be a need for this earlier, so
maybe I'm still missing something.
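
A minimal sketch of what that could look like on the driver side
(SLAB_CACHE_DMA32 is hypothetical at this point, the slab-side changes
are omitted, and the arguments are meant to mirror the driver's existing
l2_tables cache):

  /* The zone restriction is recorded once, at cache creation time: */
  data->l2_tables = kmem_cache_create("io-pgtable_armv7s_l2",
                                      ARM_V7S_TABLE_SIZE(2),
                                      ARM_V7S_TABLE_SIZE(2),
                                      SLAB_CACHE_DMA32, NULL);

  /* ...and allocations then need no zone modifier in gfp: */
  table = kmem_cache_zalloc(data->l2_tables, gfp);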

> Thanks,
> 



