* [PATCH v3 0/5] Avoid requesting page from DMA zone when no managed pages
From: Baoquan He @ 2021-12-13 12:27 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-mm, akpm, hch, cl, John.p.donnelly, kexec, bhe

Background information can be found in the cover letter of the v2 RESEND
post, linked below:
https://lore.kernel.org/all/20211207030750.30824-1-bhe@redhat.com/T/#u

Changelog:
v2 RESEND -> v3:
 - Re-implement has_managed_dma() according to David's suggestion.
 - Add Fixes tag and cc stable.

v2->v2 RESEND:
 - John pinged to push a repost of this patchset, so fix one typo in the
   subject of patch 3/5 and a build error caused by a mixed declaration in
   patch 5/5. Both were found by John in his testing.
 - Rewrite cover letter to add more information.

v1->v2:
 Change to check whether a managed DMA zone exists. If the DMA zone has
 managed pages, go ahead and request pages from the DMA zone for
 initialization. Otherwise, just skip initializing the things which need
 pages from the DMA zone.
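
 A minimal sketch of the shape these checks take in the later patches
 (illustrative, not the literal diff; has_managed_dma() is added in
 patch 3/5):

	if (has_managed_dma()) {
		/* DMA zone has managed pages: request them as before */
	} else {
		/* no managed pages in the DMA zone: skip this setup */
	}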
  

v2 RESEND post:
https://lore.kernel.org/all/20211207030750.30824-1-bhe@redhat.com/T/#u

v2 post:
https://lore.kernel.org/all/20210810094835.13402-1-bhe@redhat.com/T/#u

v1 post:
https://lore.kernel.org/all/20210624052010.5676-1-bhe@redhat.com/T/#u


Baoquan He (5):
  docs: kernel-parameters: Update to reflect the current default size of
    atomic pool
  dma-pool: allow user to disable atomic pool
  mm_zone: add function to check if managed dma zone exists
  dma/pool: create dma atomic pool only if dma zone has managed pages
  mm/slub: do not create dma-kmalloc if no managed pages in DMA zone

 Documentation/admin-guide/kernel-parameters.txt |  5 ++++-
 include/linux/mmzone.h                          |  9 +++++++++
 kernel/dma/pool.c                               | 11 +++++++----
 mm/page_alloc.c                                 | 15 +++++++++++++++
 mm/slab_common.c                                |  9 +++++++++
 5 files changed, 44 insertions(+), 5 deletions(-)

-- 
2.17.2


* [PATCH v3 1/5] docs: kernel-parameters: Update to reflect the current default size of atomic pool
From: Baoquan He @ 2021-12-13 12:27 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-mm, akpm, hch, cl, John.p.donnelly, kexec, bhe

Since commit 1d659236fb43 ("dma-pool: scale the default DMA coherent pool
size with memory capacity"), the default size of the atomic pool has been
scaled with system memory capacity rather than fixed at 256K. So update
the documentation in kernel-parameters.txt accordingly.
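
For reference, the sizing logic in kernel/dma/pool.c that this documents,
as it appears in the context of patch 2/5 of this series, is:

	unsigned long pages = totalram_pages() / (SZ_1G / SZ_128K);
	pages = min_t(unsigned long, pages, MAX_ORDER_NR_PAGES);
	atomic_pool_size = max_t(size_t, pages << PAGE_SHIFT, SZ_128K);

i.e. 128K per 1G of RAM, with a floor of 128K and a ceiling of
MAX_ORDER_NR_PAGES pages, which is the 1 << (PAGE_SHIFT + MAX_ORDER-1)
bound mentioned above.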

Signed-off-by: Baoquan He <bhe@redhat.com>
---
 Documentation/admin-guide/kernel-parameters.txt | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 9725c546a0d4..ec4d25e854a8 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -664,7 +664,9 @@
 
 	coherent_pool=nn[KMG]	[ARM,KNL]
 			Sets the size of memory pool for coherent, atomic dma
-			allocations, by default set to 256K.
+			allocations. Otherwise the default size will be scaled
+			with memory capacity, while clamped between 128K and
+			1 << (PAGE_SHIFT + MAX_ORDER-1).
 
 	com20020=	[HW,NET] ARCnet - COM20020 chipset
 			Format:
-- 
2.17.2


* [PATCH v3 2/5] dma-pool: allow user to disable atomic pool
From: Baoquan He @ 2021-12-13 12:27 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-mm, akpm, hch, cl, John.p.donnelly, kexec, bhe

In the current code, the three atomic memory pools,
atomic_pool_kernel|dma|dma32, are always created, even when
'coherent_pool=0' is specified on the kernel command line. In fact, the
atomic pools are only necessary when CONFIG_DMA_DIRECT_REMAP=y or
mem_encrypt_active=y, which is the case on only a few architectures.

So change the code to allow the user to disable the atomic pools by
specifying 'coherent_pool=0'.

Meanwhile, update the relevant documentation in kernel-parameters.txt.
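
For example, on the kernel command line:

	coherent_pool=0		disables the three atomic pools entirely
	coherent_pool=256K	forces a fixed 256K pool size instead of
				the memory-scaled default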

Signed-off-by: Baoquan He <bhe@redhat.com>
---
 Documentation/admin-guide/kernel-parameters.txt | 3 ++-
 kernel/dma/pool.c                               | 7 +++++--
 2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index ec4d25e854a8..d7015309614b 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -664,7 +664,8 @@
 
 	coherent_pool=nn[KMG]	[ARM,KNL]
 			Sets the size of memory pool for coherent, atomic dma
-			allocations. Otherwise the default size will be scaled
+			allocations. A value of 0 disables the three atomic
+			memory pools. Otherwise the default size will be scaled
 			with memory capacity, while clamped between 128K and
 			1 << (PAGE_SHIFT + MAX_ORDER-1).
 
diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
index 5f84e6cdb78e..5a85804b5beb 100644
--- a/kernel/dma/pool.c
+++ b/kernel/dma/pool.c
@@ -21,7 +21,7 @@ static struct gen_pool *atomic_pool_kernel __ro_after_init;
 static unsigned long pool_size_kernel;
 
 /* Size can be defined by the coherent_pool command line */
-static size_t atomic_pool_size;
+static unsigned long atomic_pool_size = -1;
 
 /* Dynamic background expansion when the atomic pool is near capacity */
 static struct work_struct atomic_pool_work;
@@ -188,11 +188,14 @@ static int __init dma_atomic_pool_init(void)
 {
 	int ret = 0;
 
+	if (!atomic_pool_size)
+		return 0;
+
 	/*
 	 * If coherent_pool was not used on the command line, default the pool
 	 * sizes to 128KB per 1GB of memory, min 128KB, max MAX_ORDER-1.
 	 */
-	if (!atomic_pool_size) {
+	if (atomic_pool_size == -1) {
 		unsigned long pages = totalram_pages() / (SZ_1G / SZ_128K);
 		pages = min_t(unsigned long, pages, MAX_ORDER_NR_PAGES);
 		atomic_pool_size = max_t(size_t, pages << PAGE_SHIFT, SZ_128K);
-- 
2.17.2


* [PATCH v3 3/5] mm_zone: add function to check if managed dma zone exists
From: Baoquan He @ 2021-12-13 12:27 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-mm, akpm, hch, cl, John.p.donnelly, kexec, bhe, stable

Some places in the current kernel assume that the DMA zone must have
managed pages if CONFIG_ZONE_DMA is enabled, but this is not always true.
E.g. in the kdump kernel of x86_64, only the low 1M is present and locked
down at a very early stage of boot, so there are no managed pages at all
in the DMA zone. This always causes a page allocation failure when a page
is requested from the DMA zone.

Add the function has_managed_dma() and the relevant helper functions to
check whether there is a DMA zone with managed pages. It will be used in
later patches.
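
Whether a zone has managed pages can be checked from /proc/zoneinfo. In
the kdump kernel described above, the DMA zone is expected to report zero,
e.g. (illustrative output; the exact field layout varies by kernel
version):

	$ awk '/zone +DMA$/ {f=1} f && /managed/ {print; exit}' /proc/zoneinfo
	        managed  0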

Fixes: 6f599d84231f ("x86/kdump: Always reserve the low 1M when the crashkernel option is specified")
Cc: stable@vger.kernel.org
Signed-off-by: Baoquan He <bhe@redhat.com>
---
v2->v3:
 Rewrite has_managed_dma() in a simpler and more efficient way, as
 suggested by DavidH.

 include/linux/mmzone.h |  9 +++++++++
 mm/page_alloc.c        | 15 +++++++++++++++
 2 files changed, 24 insertions(+)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 58e744b78c2c..6e1b726e9adf 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1046,6 +1046,15 @@ static inline int is_highmem_idx(enum zone_type idx)
 #endif
 }
 
+#ifdef CONFIG_ZONE_DMA
+bool has_managed_dma(void);
+#else
+static inline bool has_managed_dma(void)
+{
+	return false;
+}
+#endif
+
 /**
  * is_highmem - helper function to quickly check if a struct zone is a
  *              highmem zone or not.  This is an attempt to keep references
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c5952749ad40..7c7a0b5de2ff 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -9460,3 +9460,18 @@ bool take_page_off_buddy(struct page *page)
 	return ret;
 }
 #endif
+
+#ifdef CONFIG_ZONE_DMA
+bool has_managed_dma(void)
+{
+	struct pglist_data *pgdat;
+
+	for_each_online_pgdat(pgdat) {
+		struct zone *zone = &pgdat->node_zones[ZONE_DMA];
+
+		if (managed_zone(zone))
+			return true;
+	}
+	return false;
+}
+#endif /* CONFIG_ZONE_DMA */
-- 
2.17.2


* [PATCH v3 4/5] dma/pool: create dma atomic pool only if dma zone has managed pages
From: Baoquan He @ 2021-12-13 12:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, akpm, hch, cl, John.p.donnelly, kexec, bhe, stable,
	Marek Szyprowski, Robin Murphy, iommu

Currently, the three DMA atomic pools are initialized as long as the
relevant kernel code is built in. But in the kdump kernel of x86_64,
creating atomic_pool_dma fails because there are no managed pages in the
DMA zone. In that case, the DMA zone only has the low 1M of memory, which
is present but locked down by the memblock allocator, so no pages are
added into the buddy allocator of the DMA zone. Please check commit
f1d4d47c5851 ("x86/setup: Always reserve the first 1M of RAM").

Then the kdump kernel of x86_64 always prints the failure message below:

 DMA: preallocated 128 KiB GFP_KERNEL pool for atomic allocations
 swapper/0: page allocation failure: order:5, mode:0xcc1(GFP_KERNEL|GFP_DMA), nodemask=(null),cpuset=/,mems_allowed=0
 CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.13.0-0.rc5.20210611git929d931f2b40.42.fc35.x86_64 #1
 Hardware name: Dell Inc. PowerEdge R910/0P658H, BIOS 2.12.0 06/04/2018
 Call Trace:
  dump_stack+0x7f/0xa1
  warn_alloc.cold+0x72/0xd6
  ? _raw_spin_unlock_irq+0x24/0x40
  ? __alloc_pages_direct_compact+0x90/0x1b0
  __alloc_pages_slowpath.constprop.0+0xf29/0xf50
  ? __cond_resched+0x16/0x50
  ? prepare_alloc_pages.constprop.0+0x19d/0x1b0
  __alloc_pages+0x24d/0x2c0
  ? __dma_atomic_pool_init+0x93/0x93
  alloc_page_interleave+0x13/0xb0
  atomic_pool_expand+0x118/0x210
  ? __dma_atomic_pool_init+0x93/0x93
  __dma_atomic_pool_init+0x45/0x93
  dma_atomic_pool_init+0xdb/0x176
  do_one_initcall+0x67/0x320
  ? rcu_read_lock_sched_held+0x3f/0x80
  kernel_init_freeable+0x290/0x2dc
  ? rest_init+0x24f/0x24f
  kernel_init+0xa/0x111
  ret_from_fork+0x22/0x30
 Mem-Info:
 ......
 DMA: failed to allocate 128 KiB GFP_KERNEL|GFP_DMA pool for atomic allocation
 DMA: preallocated 128 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
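
(For reference, the order:5 request in the warning is the pool allocation
itself: (1 << 5) pages x 4 KiB per page = 128 KiB.)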

Here, let's check whether the DMA zone has managed pages and create
atomic_pool_dma only if it does. Otherwise just skip it.

Fixes: 6f599d84231f ("x86/kdump: Always reserve the low 1M when the crashkernel option is specified")
Cc: stable@vger.kernel.org
Signed-off-by: Baoquan He <bhe@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: iommu@lists.linux-foundation.org
---
 kernel/dma/pool.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
index 5a85804b5beb..00df3edd6c5d 100644
--- a/kernel/dma/pool.c
+++ b/kernel/dma/pool.c
@@ -206,7 +206,7 @@ static int __init dma_atomic_pool_init(void)
 						    GFP_KERNEL);
 	if (!atomic_pool_kernel)
 		ret = -ENOMEM;
-	if (IS_ENABLED(CONFIG_ZONE_DMA)) {
+	if (has_managed_dma()) {
 		atomic_pool_dma = __dma_atomic_pool_init(atomic_pool_size,
 						GFP_KERNEL | GFP_DMA);
 		if (!atomic_pool_dma)
@@ -229,7 +229,7 @@ static inline struct gen_pool *dma_guess_pool(struct gen_pool *prev, gfp_t gfp)
 	if (prev == NULL) {
 		if (IS_ENABLED(CONFIG_ZONE_DMA32) && (gfp & GFP_DMA32))
 			return atomic_pool_dma32;
-		if (IS_ENABLED(CONFIG_ZONE_DMA) && (gfp & GFP_DMA))
+		if (atomic_pool_dma && (gfp & GFP_DMA))
 			return atomic_pool_dma;
 		return atomic_pool_kernel;
 	}
-- 
2.17.2


* [PATCH v3 5/5] mm/slub: do not create dma-kmalloc if no managed pages in DMA zone
From: Baoquan He @ 2021-12-13 12:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, akpm, hch, cl, John.p.donnelly, kexec, bhe, stable,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Vlastimil Babka

dma-kmalloc caches will be created as long as CONFIG_ZONE_DMA is enabled.
However, allocating from them will fail if the DMA zone has no managed
pages. The failure can be seen in the kdump kernel of x86_64 as below:

 CPU: 0 PID: 65 Comm: kworker/u2:1 Not tainted 5.14.0-rc2+ #9
 Hardware name: Intel Corporation SandyBridge Platform/To be filled by O.E.M., BIOS RMLSDP.86I.R2.28.D690.1306271008 06/27/2013
 Workqueue: events_unbound async_run_entry_fn
 Call Trace:
  dump_stack_lvl+0x57/0x72
  warn_alloc.cold+0x72/0xd6
  __alloc_pages_slowpath.constprop.0+0xf56/0xf70
  __alloc_pages+0x23b/0x2b0
  allocate_slab+0x406/0x630
  ___slab_alloc+0x4b1/0x7e0
  ? sr_probe+0x200/0x600
  ? lock_acquire+0xc4/0x2e0
  ? fs_reclaim_acquire+0x4d/0xe0
  ? lock_is_held_type+0xa7/0x120
  ? sr_probe+0x200/0x600
  ? __slab_alloc+0x67/0x90
  __slab_alloc+0x67/0x90
  ? sr_probe+0x200/0x600
  ? sr_probe+0x200/0x600
  kmem_cache_alloc_trace+0x259/0x270
  sr_probe+0x200/0x600
  ......
  bus_probe_device+0x9f/0xb0
  device_add+0x3d2/0x970
  ......
  __scsi_add_device+0xea/0x100
  ata_scsi_scan_host+0x97/0x1d0
  async_run_entry_fn+0x30/0x130
  process_one_work+0x2b0/0x5c0
  worker_thread+0x55/0x3c0
  ? process_one_work+0x5c0/0x5c0
  kthread+0x149/0x170
  ? set_kthread_struct+0x40/0x40
  ret_from_fork+0x22/0x30
 Mem-Info:
 ......

The above failure happened when kmalloc() was called to allocate a buffer
with GFP_DMA. It requests a slab page from the DMA zone while there are no
managed pages there.
 sr_probe()
 --> get_capabilities()
     --> buffer = kmalloc(512, GFP_KERNEL | GFP_DMA);

The DMA zone should be checked for managed pages before trying to create
the dma-kmalloc caches.
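
A minimal sketch of the failing call, with names taken from the trace
above:

	/* sr_probe() -> get_capabilities() */
	buffer = kmalloc(512, GFP_KERNEL | GFP_DMA);
	/*
	 * dma-kmalloc backs this with a GFP_DMA slab page; with no managed
	 * pages in the DMA zone, that page allocation always fails.
	 */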

Fixes: 6f599d84231f ("x86/kdump: Always reserve the low 1M when the crashkernel option is specified")
Cc: stable@vger.kernel.org
Signed-off-by: Baoquan He <bhe@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
---
 mm/slab_common.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index e5d080a93009..ae4ef0f8903a 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -878,6 +878,9 @@ void __init create_kmalloc_caches(slab_flags_t flags)
 {
 	int i;
 	enum kmalloc_cache_type type;
+#ifdef CONFIG_ZONE_DMA
+	bool managed_dma;
+#endif
 
 	/*
 	 * Including KMALLOC_CGROUP if CONFIG_MEMCG_KMEM defined
@@ -905,10 +908,16 @@ void __init create_kmalloc_caches(slab_flags_t flags)
 	slab_state = UP;
 
 #ifdef CONFIG_ZONE_DMA
+	managed_dma = has_managed_dma();
+
 	for (i = 0; i <= KMALLOC_SHIFT_HIGH; i++) {
 		struct kmem_cache *s = kmalloc_caches[KMALLOC_NORMAL][i];
 
 		if (s) {
+			if (!managed_dma) {
+				kmalloc_caches[KMALLOC_DMA][i] = kmalloc_caches[KMALLOC_NORMAL][i];
+				continue;
+			}
 			kmalloc_caches[KMALLOC_DMA][i] = create_kmalloc_cache(
 				kmalloc_info[i].name[KMALLOC_DMA],
 				kmalloc_info[i].size,
-- 
2.17.2


* Re: [PATCH v3 5/5] mm/slub: do not create dma-kmalloc if no managed pages in DMA zone
From: Hyeonggon Yoo @ 2021-12-13 13:43 UTC (permalink / raw)
  To: Baoquan He
  Cc: linux-kernel, linux-mm, akpm, hch, cl, John.p.donnelly, kexec,
	stable, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Vlastimil Babka

Hello Baoquan. I have a question about your code.

On Mon, Dec 13, 2021 at 08:27:12PM +0800, Baoquan He wrote:
> dma-kmalloc caches will be created as long as CONFIG_ZONE_DMA is enabled.
> However, allocating from them will fail if the DMA zone has no managed
> pages. The failure can be seen in the kdump kernel of x86_64 as below:
> 
>  CPU: 0 PID: 65 Comm: kworker/u2:1 Not tainted 5.14.0-rc2+ #9
>  Hardware name: Intel Corporation SandyBridge Platform/To be filled by O.E.M., BIOS RMLSDP.86I.R2.28.D690.1306271008 06/27/2013
>  Workqueue: events_unbound async_run_entry_fn
>  Call Trace:
>   dump_stack_lvl+0x57/0x72
>   warn_alloc.cold+0x72/0xd6
>   __alloc_pages_slowpath.constprop.0+0xf56/0xf70
>   __alloc_pages+0x23b/0x2b0
>   allocate_slab+0x406/0x630
>   ___slab_alloc+0x4b1/0x7e0
>   ? sr_probe+0x200/0x600
>   ? lock_acquire+0xc4/0x2e0
>   ? fs_reclaim_acquire+0x4d/0xe0
>   ? lock_is_held_type+0xa7/0x120
>   ? sr_probe+0x200/0x600
>   ? __slab_alloc+0x67/0x90
>   __slab_alloc+0x67/0x90
>   ? sr_probe+0x200/0x600
>   ? sr_probe+0x200/0x600
>   kmem_cache_alloc_trace+0x259/0x270
>   sr_probe+0x200/0x600
>   ......
>   bus_probe_device+0x9f/0xb0
>   device_add+0x3d2/0x970
>   ......
>   __scsi_add_device+0xea/0x100
>   ata_scsi_scan_host+0x97/0x1d0
>   async_run_entry_fn+0x30/0x130
>   process_one_work+0x2b0/0x5c0
>   worker_thread+0x55/0x3c0
>   ? process_one_work+0x5c0/0x5c0
>   kthread+0x149/0x170
>   ? set_kthread_struct+0x40/0x40
>   ret_from_fork+0x22/0x30
>  Mem-Info:
>  ......
> 
> The above failure happened when kmalloc() was called to allocate a buffer
> with GFP_DMA. It requests a slab page from the DMA zone while there are no
> managed pages there.
>  sr_probe()
>  --> get_capabilities()
>      --> buffer = kmalloc(512, GFP_KERNEL | GFP_DMA);
> 
> The DMA zone should be checked for managed pages before trying to create
> the dma-kmalloc caches.
>

What is the problem here?

The slab allocator requested pages from the buddy allocator with GFP_DMA,
and the buddy allocator failed to allocate a page in the DMA zone because
there were no pages in the DMA zone, and then the buddy allocator called
warn_alloc because the allocation failed.

Looking at the warning, I don't understand what the problem is.

> ---
>  mm/slab_common.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index e5d080a93009..ae4ef0f8903a 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -878,6 +878,9 @@ void __init create_kmalloc_caches(slab_flags_t flags)
>  {
>  	int i;
>  	enum kmalloc_cache_type type;
> +#ifdef CONFIG_ZONE_DMA
> +	bool managed_dma;
> +#endif
>  
>  	/*
>  	 * Including KMALLOC_CGROUP if CONFIG_MEMCG_KMEM defined
> @@ -905,10 +908,16 @@ void __init create_kmalloc_caches(slab_flags_t flags)
>  	slab_state = UP;
>  
>  #ifdef CONFIG_ZONE_DMA
> +	managed_dma = has_managed_dma();
> +
>  	for (i = 0; i <= KMALLOC_SHIFT_HIGH; i++) {
>  		struct kmem_cache *s = kmalloc_caches[KMALLOC_NORMAL][i];
>  
>  		if (s) {
> +			if (!managed_dma) {
> +				kmalloc_caches[KMALLOC_DMA][i] = kmalloc_caches[KMALLOC_NORMAL][i];
> +				continue;
> +			}

This code is copying the normal kmalloc caches to the DMA kmalloc caches.
With this code, kmalloc() with GFP_DMA will succeed even if the allocated
memory is not actually from the DMA zone. Is that really what you want?

Maybe the function get_capabilities() wants to allocate memory even if
it's not from the DMA zone, but other callers will not expect that.
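
In other words, with this patch applied and no managed pages in the DMA
zone, something like the following (a sketch of my concern) changes
behavior silently:

	void *buf = kmalloc(512, GFP_KERNEL | GFP_DMA);
	/*
	 * Now succeeds, but buf comes from a KMALLOC_NORMAL cache and may
	 * lie outside the device's DMA-addressable range.
	 */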

Thanks,
Hyeonggon.

>  			kmalloc_caches[KMALLOC_DMA][i] = create_kmalloc_cache(
>  				kmalloc_info[i].name[KMALLOC_DMA],
>  				kmalloc_info[i].size,
> -- 
> 2.17.2
> 
> 

* Re: [PATCH v3 1/5] docs: kernel-parameters: Update to reflect the current default size of atomic pool
From: john.p.donnelly @ 2021-12-13 14:20 UTC (permalink / raw)
  To: Baoquan He, linux-kernel; +Cc: linux-mm, akpm, hch, cl, John.p.donnelly, kexec

On 12/13/21 6:27 AM, Baoquan He wrote:
> Since commit 1d659236fb43 ("dma-pool: scale the default DMA coherent pool
> size with memory capacity"), the default size of the atomic pool has been
> scaled with system memory capacity rather than fixed at 256K. So update
> the documentation in kernel-parameters.txt accordingly.
> 
> Signed-off-by: Baoquan He <bhe@redhat.com>
 >
  Acked-by: John Donnelly <john.p.donnelly@oracle.com>
  Tested-by: John Donnelly <john.p.donnelly@oracle.com>

> ---
>   Documentation/admin-guide/kernel-parameters.txt | 4 +++-
>   1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index 9725c546a0d4..ec4d25e854a8 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -664,7 +664,9 @@
>   
>   	coherent_pool=nn[KMG]	[ARM,KNL]
>   			Sets the size of memory pool for coherent, atomic dma
> -			allocations, by default set to 256K.
> +			allocations. Otherwise the default size will be scaled
> +			with memory capacity, while clamped between 128K and
> +			1 << (PAGE_SHIFT + MAX_ORDER-1).
>   
>   	com20020=	[HW,NET] ARCnet - COM20020 chipset
>   			Format:


* Re: [PATCH v3 2/5] dma-pool: allow user to disable atomic pool
From: john.p.donnelly @ 2021-12-13 14:21 UTC (permalink / raw)
  To: Baoquan He, linux-kernel; +Cc: linux-mm, akpm, hch, cl, John.p.donnelly, kexec

On 12/13/21 6:27 AM, Baoquan He wrote:
> In the current code, the three atomic memory pools,
> atomic_pool_kernel|dma|dma32, are always created, even when
> 'coherent_pool=0' is specified on the kernel command line. In fact, the
> atomic pools are only necessary when CONFIG_DMA_DIRECT_REMAP=y or
> mem_encrypt_active=y, which is the case on only a few architectures.
> 
> So change the code to allow the user to disable the atomic pools by
> specifying 'coherent_pool=0'.
> 
> Meanwhile, update the relevant documentation in kernel-parameters.txt.
> 
> Signed-off-by: Baoquan He <bhe@redhat.com>
 >
  Acked-by: John Donnelly <john.p.donnelly@oracle.com>
  Tested-by: John Donnelly <john.p.donnelly@oracle.com>

> ---
>   Documentation/admin-guide/kernel-parameters.txt | 3 ++-
>   kernel/dma/pool.c                               | 7 +++++--
>   2 files changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index ec4d25e854a8..d7015309614b 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -664,7 +664,8 @@
>   
>   	coherent_pool=nn[KMG]	[ARM,KNL]
>   			Sets the size of memory pool for coherent, atomic dma
> -			allocations. Otherwise the default size will be scaled
> +			allocations. A value of 0 disables the three atomic
> +			memory pools. Otherwise the default size will be scaled
>   			with memory capacity, while clamped between 128K and
>   			1 << (PAGE_SHIFT + MAX_ORDER-1).
>   
> diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
> index 5f84e6cdb78e..5a85804b5beb 100644
> --- a/kernel/dma/pool.c
> +++ b/kernel/dma/pool.c
> @@ -21,7 +21,7 @@ static struct gen_pool *atomic_pool_kernel __ro_after_init;
>   static unsigned long pool_size_kernel;
>   
>   /* Size can be defined by the coherent_pool command line */
> -static size_t atomic_pool_size;
> +static unsigned long atomic_pool_size = -1;
>   
>   /* Dynamic background expansion when the atomic pool is near capacity */
>   static struct work_struct atomic_pool_work;
> @@ -188,11 +188,14 @@ static int __init dma_atomic_pool_init(void)
>   {
>   	int ret = 0;
>   
> +	if (!atomic_pool_size)
> +		return 0;
> +
>   	/*
>   	 * If coherent_pool was not used on the command line, default the pool
>   	 * sizes to 128KB per 1GB of memory, min 128KB, max MAX_ORDER-1.
>   	 */
> -	if (!atomic_pool_size) {
> +	if (atomic_pool_size == -1) {
>   		unsigned long pages = totalram_pages() / (SZ_1G / SZ_128K);
>   		pages = min_t(unsigned long, pages, MAX_ORDER_NR_PAGES);
>   		atomic_pool_size = max_t(size_t, pages << PAGE_SHIFT, SZ_128K);


* Re: [PATCH v3 3/5] mm_zone: add function to check if managed dma zone exists
From: john.p.donnelly @ 2021-12-13 14:22 UTC (permalink / raw)
  To: Baoquan He, linux-kernel
  Cc: linux-mm, akpm, hch, cl, John.p.donnelly, kexec, stable

On 12/13/21 6:27 AM, Baoquan He wrote:
> Some places in the current kernel assume that the DMA zone must have
> managed pages if CONFIG_ZONE_DMA is enabled, but this is not always true.
> E.g. in the kdump kernel of x86_64, only the low 1M is present and locked
> down at a very early stage of boot, so there are no managed pages at all
> in the DMA zone. This always causes a page allocation failure when a page
> is requested from the DMA zone.
> 
> Add the function has_managed_dma() and the relevant helper functions to
> check whether there is a DMA zone with managed pages. It will be used in
> later patches.
> 
> Fixes: 6f599d84231f ("x86/kdump: Always reserve the low 1M when the crashkernel option is specified")
> Cc: stable@vger.kernel.org
> Signed-off-by: Baoquan He <bhe@redhat.com>

 >
  Acked-by: John Donnelly <john.p.donnelly@oracle.com>
  Tested-by: John Donnelly <john.p.donnelly@oracle.com>

> ---
> v2->v3:
>   Rewrite has_managed_dma() in a simpler and more efficient way, as
>   suggested by DavidH.
> 
>   include/linux/mmzone.h |  9 +++++++++
>   mm/page_alloc.c        | 15 +++++++++++++++
>   2 files changed, 24 insertions(+)
> 
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 58e744b78c2c..6e1b726e9adf 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -1046,6 +1046,15 @@ static inline int is_highmem_idx(enum zone_type idx)
>   #endif
>   }
>   
> +#ifdef CONFIG_ZONE_DMA
> +bool has_managed_dma(void);
> +#else
> +static inline bool has_managed_dma(void)
> +{
> +	return false;
> +}
> +#endif
> +
>   /**
>    * is_highmem - helper function to quickly check if a struct zone is a
>    *              highmem zone or not.  This is an attempt to keep references
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c5952749ad40..7c7a0b5de2ff 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -9460,3 +9460,18 @@ bool take_page_off_buddy(struct page *page)
>   	return ret;
>   }
>   #endif
> +
> +#ifdef CONFIG_ZONE_DMA
> +bool has_managed_dma(void)
> +{
> +	struct pglist_data *pgdat;
> +
> +	for_each_online_pgdat(pgdat) {
> +		struct zone *zone = &pgdat->node_zones[ZONE_DMA];
> +
> +		if (managed_zone(zone))
> +			return true;
> +	}
> +	return false;
> +}
> +#endif /* CONFIG_ZONE_DMA */


* Re: [PATCH v3 4/5] dma/pool: create dma atomic pool only if dma zone has managed pages
  2021-12-13 12:27   ` Baoquan He
  (?)
@ 2021-12-13 14:23     ` john.p.donnelly
  -1 siblings, 0 replies; 74+ messages in thread
From: john.p.donnelly @ 2021-12-13 14:23 UTC (permalink / raw)
  To: Baoquan He, linux-kernel
  Cc: linux-mm, akpm, hch, cl, John.p.donnelly, kexec, stable,
	Marek Szyprowski, Robin Murphy, iommu

On 12/13/21 6:27 AM, Baoquan He wrote:
> Currently, three DMA atomic pools are initialized as long as the relevant
> kernel code is built in. In the kdump kernel of x86_64, however, this goes
> wrong when trying to create atomic_pool_dma, because there are no managed
> pages in the DMA zone. In that case, the DMA zone only has the low 1M of
> memory present, locked down by the memblock allocator, so no pages are
> added into the buddy allocator for the DMA zone. See commit f1d4d47c5851
> ("x86/setup: Always reserve the first 1M of RAM").
> 
> The kdump kernel of x86_64 therefore always prints the failure message
> below:
> 
>   DMA: preallocated 128 KiB GFP_KERNEL pool for atomic allocations
>   swapper/0: page allocation failure: order:5, mode:0xcc1(GFP_KERNEL|GFP_DMA), nodemask=(null),cpuset=/,mems_allowed=0
>   CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.13.0-0.rc5.20210611git929d931f2b40.42.fc35.x86_64 #1
>   Hardware name: Dell Inc. PowerEdge R910/0P658H, BIOS 2.12.0 06/04/2018
>   Call Trace:
>    dump_stack+0x7f/0xa1
>    warn_alloc.cold+0x72/0xd6
>    ? _raw_spin_unlock_irq+0x24/0x40
>    ? __alloc_pages_direct_compact+0x90/0x1b0
>    __alloc_pages_slowpath.constprop.0+0xf29/0xf50
>    ? __cond_resched+0x16/0x50
>    ? prepare_alloc_pages.constprop.0+0x19d/0x1b0
>    __alloc_pages+0x24d/0x2c0
>    ? __dma_atomic_pool_init+0x93/0x93
>    alloc_page_interleave+0x13/0xb0
>    atomic_pool_expand+0x118/0x210
>    ? __dma_atomic_pool_init+0x93/0x93
>    __dma_atomic_pool_init+0x45/0x93
>    dma_atomic_pool_init+0xdb/0x176
>    do_one_initcall+0x67/0x320
>    ? rcu_read_lock_sched_held+0x3f/0x80
>    kernel_init_freeable+0x290/0x2dc
>    ? rest_init+0x24f/0x24f
>    kernel_init+0xa/0x111
>    ret_from_fork+0x22/0x30
>   Mem-Info:
>   ......
>   DMA: failed to allocate 128 KiB GFP_KERNEL|GFP_DMA pool for atomic allocation
>   DMA: preallocated 128 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
> 
> Check whether the DMA zone has managed pages and create atomic_pool_dma
> only if it does; otherwise, just skip it.
> 
> Fixes: 6f599d84231f ("x86/kdump: Always reserve the low 1M when the crashkernel option is specified")
> Cc: stable@vger.kernel.org
> Signed-off-by: Baoquan He <bhe@redhat.com>



  Acked-by: John Donnelly <john.p.donnelly@oracle.com>
  Tested-by:  John Donnelly <john.p.donnelly@oracle.com>


> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: iommu@lists.linux-foundation.org
> ---
>   kernel/dma/pool.c | 4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
> index 5a85804b5beb..00df3edd6c5d 100644
> --- a/kernel/dma/pool.c
> +++ b/kernel/dma/pool.c
> @@ -206,7 +206,7 @@ static int __init dma_atomic_pool_init(void)
>   						    GFP_KERNEL);
>   	if (!atomic_pool_kernel)
>   		ret = -ENOMEM;
> -	if (IS_ENABLED(CONFIG_ZONE_DMA)) {
> +	if (has_managed_dma()) {
>   		atomic_pool_dma = __dma_atomic_pool_init(atomic_pool_size,
>   						GFP_KERNEL | GFP_DMA);
>   		if (!atomic_pool_dma)
> @@ -229,7 +229,7 @@ static inline struct gen_pool *dma_guess_pool(struct gen_pool *prev, gfp_t gfp)
>   	if (prev == NULL) {
>   		if (IS_ENABLED(CONFIG_ZONE_DMA32) && (gfp & GFP_DMA32))
>   			return atomic_pool_dma32;
> -		if (IS_ENABLED(CONFIG_ZONE_DMA) && (gfp & GFP_DMA))
> +		if (atomic_pool_dma && (gfp & GFP_DMA))
>   			return atomic_pool_dma;
>   		return atomic_pool_kernel;
>   	}
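
The net effect, sketched in plain C for clarity (a simplification of the
two hunks above, not additional patch code): atomic_pool_dma stays NULL
when the DMA zone has no managed pages, and GFP_DMA requests then fall
back to the kernel pool instead of using a pool that was never populated.

        /* Simplified pool selection after this patch (illustrative). */
        static struct gen_pool *pick_pool(gfp_t gfp)
        {
                if (IS_ENABLED(CONFIG_ZONE_DMA32) && (gfp & GFP_DMA32))
                        return atomic_pool_dma32;
                /* Skipped when no managed DMA pages exist: atomic_pool_dma
                 * is NULL in that case. */
                if (atomic_pool_dma && (gfp & GFP_DMA))
                        return atomic_pool_dma;
                return atomic_pool_kernel;
        }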



* Re: [PATCH v3 5/5] mm/slub: do not create dma-kmalloc if no managed pages in DMA zone
  2021-12-13 12:27   ` Baoquan He
@ 2021-12-13 14:24     ` john.p.donnelly
  -1 siblings, 0 replies; 74+ messages in thread
From: john.p.donnelly @ 2021-12-13 14:24 UTC (permalink / raw)
  To: Baoquan He, linux-kernel
  Cc: linux-mm, akpm, hch, cl, John.p.donnelly, kexec, stable,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Vlastimil Babka

On 12/13/21 6:27 AM, Baoquan He wrote:
> The dma-kmalloc caches will be created as long as CONFIG_ZONE_DMA is
> enabled. However, the creation will fail if the DMA zone has no managed
> pages. The failure can be seen in the kdump kernel of x86_64 as below:
> 
>   CPU: 0 PID: 65 Comm: kworker/u2:1 Not tainted 5.14.0-rc2+ #9
>   Hardware name: Intel Corporation SandyBridge Platform/To be filled by O.E.M., BIOS RMLSDP.86I.R2.28.D690.1306271008 06/27/2013
>   Workqueue: events_unbound async_run_entry_fn
>   Call Trace:
>    dump_stack_lvl+0x57/0x72
>    warn_alloc.cold+0x72/0xd6
>    __alloc_pages_slowpath.constprop.0+0xf56/0xf70
>    __alloc_pages+0x23b/0x2b0
>    allocate_slab+0x406/0x630
>    ___slab_alloc+0x4b1/0x7e0
>    ? sr_probe+0x200/0x600
>    ? lock_acquire+0xc4/0x2e0
>    ? fs_reclaim_acquire+0x4d/0xe0
>    ? lock_is_held_type+0xa7/0x120
>    ? sr_probe+0x200/0x600
>    ? __slab_alloc+0x67/0x90
>    __slab_alloc+0x67/0x90
>    ? sr_probe+0x200/0x600
>    ? sr_probe+0x200/0x600
>    kmem_cache_alloc_trace+0x259/0x270
>    sr_probe+0x200/0x600
>    ......
>    bus_probe_device+0x9f/0xb0
>    device_add+0x3d2/0x970
>    ......
>    __scsi_add_device+0xea/0x100
>    ata_scsi_scan_host+0x97/0x1d0
>    async_run_entry_fn+0x30/0x130
>    process_one_work+0x2b0/0x5c0
>    worker_thread+0x55/0x3c0
>    ? process_one_work+0x5c0/0x5c0
>    kthread+0x149/0x170
>    ? set_kthread_struct+0x40/0x40
>    ret_from_fork+0x22/0x30
>   Mem-Info:
>   ......
> 
> The above failure happened when calling kmalloc() to allocate a buffer
> with GFP_DMA. It requests a slab page from the DMA zone while there are
> no managed pages there.
>   sr_probe()
>   --> get_capabilities()
>       --> buffer = kmalloc(512, GFP_KERNEL | GFP_DMA);
> 
> The DMA zone should be checked for managed pages first; dma-kmalloc
> should only be created if there are some.
> 
> Fixes: 6f599d84231f ("x86/kdump: Always reserve the low 1M when the crashkernel option is specified")
> Cc: stable@vger.kernel.org
> Signed-off-by: Baoquan He <bhe@redhat.com>

  Acked-by: John Donnelly <john.p.donnelly@oracle.com>
  Tested-by:  John Donnelly <john.p.donnelly@oracle.com>

> Cc: Christoph Lameter <cl@linux.com>
> Cc: Pekka Enberg <penberg@kernel.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> ---
>   mm/slab_common.c | 9 +++++++++
>   1 file changed, 9 insertions(+)
> 
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index e5d080a93009..ae4ef0f8903a 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -878,6 +878,9 @@ void __init create_kmalloc_caches(slab_flags_t flags)
>   {
>   	int i;
>   	enum kmalloc_cache_type type;
> +#ifdef CONFIG_ZONE_DMA
> +	bool managed_dma;
> +#endif
>   
>   	/*
>   	 * Including KMALLOC_CGROUP if CONFIG_MEMCG_KMEM defined
> @@ -905,10 +908,16 @@ void __init create_kmalloc_caches(slab_flags_t flags)
>   	slab_state = UP;
>   
>   #ifdef CONFIG_ZONE_DMA
> +	managed_dma = has_managed_dma();
> +
>   	for (i = 0; i <= KMALLOC_SHIFT_HIGH; i++) {
>   		struct kmem_cache *s = kmalloc_caches[KMALLOC_NORMAL][i];
>   
>   		if (s) {
> +			if (!managed_dma) {
> +				kmalloc_caches[KMALLOC_DMA][i] = kmalloc_caches[KMALLOC_NORMAL][i];
> +				continue;
> +			}
>   			kmalloc_caches[KMALLOC_DMA][i] = create_kmalloc_cache(
>   				kmalloc_info[i].name[KMALLOC_DMA],
>   				kmalloc_info[i].size,
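
To make the consequence concrete, an abridged sketch of the kmalloc cache
lookup (the real resolution lives in kmalloc_type() and the cache tables in
the slab code; the names and signature here are simplified assumptions):

        /* Abridged: how a GFP_DMA request resolves to a cache. */
        static struct kmem_cache *pick_cache(unsigned int index, gfp_t flags)
        {
                enum kmalloc_cache_type type =
                        (flags & __GFP_DMA) ? KMALLOC_DMA : KMALLOC_NORMAL;

                /* After this patch, when the DMA zone has no managed pages,
                 * kmalloc_caches[KMALLOC_DMA][index] aliases the
                 * KMALLOC_NORMAL cache, so the returned memory may come
                 * from ZONE_NORMAL despite __GFP_DMA. */
                return kmalloc_caches[type][index];
        }

This aliasing is exactly what the review discussion below turns on.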



* Re: [PATCH v3 0/5] Avoid requesting page from DMA zone when no managed pages
  2021-12-13 12:27 ` Baoquan He
@ 2021-12-13 21:05   ` Andrew Morton
  -1 siblings, 0 replies; 74+ messages in thread
From: Andrew Morton @ 2021-12-13 21:05 UTC (permalink / raw)
  To: Baoquan He; +Cc: linux-kernel, linux-mm, hch, cl, John.p.donnelly, kexec

On Mon, 13 Dec 2021 20:27:07 +0800 Baoquan He <bhe@redhat.com> wrote:

> Background information can be checked in cover letter of v2 RESEND POST
> as below:
> https://lore.kernel.org/all/20211207030750.30824-1-bhe@redhat.com/T/#u

Please include all relevant info right here, in the [0/n].  For a
number of reasons, one of which is that the text is more likely to be
up to date as the patchset evolves.

It's unusual that this patchset has two non-urgent patches and the
final three patches are cc:stable.  It makes one worry that patches 3-5
might have dependencies on 1-2.  Also, I'd expect to merge the three
-stable patches during 5.16-rcX which means I have to reorder things,
redo changelogs, update links and blah blah.

So can I ask that you redo all of this as two patch series?  A 3-patch
series which is targeted at -stable, followed by a separate two-patch
series which is targeted at 5.17-rc1.  Each series with its own fully
prepared [0/n] cover.

Thanks.


* Re: [PATCH v3 0/5] Avoid requesting page from DMA zone when no managed pages
  2021-12-13 21:05   ` Andrew Morton
@ 2021-12-14  0:35     ` Baoquan He
  -1 siblings, 0 replies; 74+ messages in thread
From: Baoquan He @ 2021-12-14  0:35 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-kernel, linux-mm, hch, cl, John.p.donnelly, kexec

On 12/13/21 at 01:05pm, Andrew Morton wrote:
> On Mon, 13 Dec 2021 20:27:07 +0800 Baoquan He <bhe@redhat.com> wrote:
> 
> > Background information can be checked in cover letter of v2 RESEND POST
> > as below:
> > https://lore.kernel.org/all/20211207030750.30824-1-bhe@redhat.com/T/#u
> 
> Please include all relevant info right here, in the [0/n].  For a
> number of reasons, one of which is that the text is more likely to be
> up to date as the patchset evolves.
> 
> It's unusual that this patchset has two non-urgent patches and the
> final three patches are cc:stable.  It makes one worry that patches 3-5
> might have dependencies on 1-2.  Also, I'd expect to merge the three
> -stable patches during 5.16-rcX which means I have to reorder things,
> redo changelogs, update links and blah blah.
> 
> So can I ask that you redo all of this as two patch series?  A 3-patch
> series which is targeted at -stable, followed by a separate two-patch
> series which is targeted at 5.17-rc1.  Each series with its own fully
> prepared [0/n] cover.

Sure, will do. Sorry for the mess.

Before posting the 3-patch series, I may need to continue the discussion
and make clear whether the current patch 5/5 is a good fix, or whether we
need to switch to another solution. So I will take the first two patches
out and post them.



* Re: [PATCH v3 5/5] mm/slub: do not create dma-kmalloc if no managed pages in DMA zone
  2021-12-13 13:43     ` Hyeonggon Yoo
@ 2021-12-14  5:32       ` Baoquan He
  -1 siblings, 0 replies; 74+ messages in thread
From: Baoquan He @ 2021-12-14  5:32 UTC (permalink / raw)
  To: Hyeonggon Yoo
  Cc: linux-kernel, linux-mm, akpm, hch, cl, John.p.donnelly, kexec,
	stable, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Vlastimil Babka

On 12/13/21 at 01:43pm, Hyeonggon Yoo wrote:
> Hello Baoquan. I have a question on your code.
> 
> On Mon, Dec 13, 2021 at 08:27:12PM +0800, Baoquan He wrote:
> > The dma-kmalloc caches will be created as long as CONFIG_ZONE_DMA is
> > enabled. However, the creation will fail if the DMA zone has no managed
> > pages. The failure can be seen in the kdump kernel of x86_64 as below:
> > 
> >  CPU: 0 PID: 65 Comm: kworker/u2:1 Not tainted 5.14.0-rc2+ #9
> >  Hardware name: Intel Corporation SandyBridge Platform/To be filled by O.E.M., BIOS RMLSDP.86I.R2.28.D690.1306271008 06/27/2013
> >  Workqueue: events_unbound async_run_entry_fn
> >  Call Trace:
> >   dump_stack_lvl+0x57/0x72
> >   warn_alloc.cold+0x72/0xd6
> >   __alloc_pages_slowpath.constprop.0+0xf56/0xf70
> >   __alloc_pages+0x23b/0x2b0
> >   allocate_slab+0x406/0x630
> >   ___slab_alloc+0x4b1/0x7e0
> >   ? sr_probe+0x200/0x600
> >   ? lock_acquire+0xc4/0x2e0
> >   ? fs_reclaim_acquire+0x4d/0xe0
> >   ? lock_is_held_type+0xa7/0x120
> >   ? sr_probe+0x200/0x600
> >   ? __slab_alloc+0x67/0x90
> >   __slab_alloc+0x67/0x90
> >   ? sr_probe+0x200/0x600
> >   ? sr_probe+0x200/0x600
> >   kmem_cache_alloc_trace+0x259/0x270
> >   sr_probe+0x200/0x600
> >   ......
> >   bus_probe_device+0x9f/0xb0
> >   device_add+0x3d2/0x970
> >   ......
> >   __scsi_add_device+0xea/0x100
> >   ata_scsi_scan_host+0x97/0x1d0
> >   async_run_entry_fn+0x30/0x130
> >   process_one_work+0x2b0/0x5c0
> >   worker_thread+0x55/0x3c0
> >   ? process_one_work+0x5c0/0x5c0
> >   kthread+0x149/0x170
> >   ? set_kthread_struct+0x40/0x40
> >   ret_from_fork+0x22/0x30
> >  Mem-Info:
> >  ......
> > 
> > The above failure happened when calling kmalloc() to allocate a buffer
> > with GFP_DMA. It requests a slab page from the DMA zone while there are
> > no managed pages there.
> >  sr_probe()
> >  --> get_capabilities()
> >      --> buffer = kmalloc(512, GFP_KERNEL | GFP_DMA);
> > 
> > The DMA zone should be checked for managed pages first; dma-kmalloc
> > should only be created if there are some.
> >
> 
> What is the problem here?
> 
> The slab allocator asked the buddy allocator for a page with GFP_DMA,
> the buddy allocator failed because there was no page in the DMA zone,
> and then the buddy allocator called warn_alloc because the allocation
> failed.
> 
> Looking at the warning, I don't understand what the problem is.

The problem is that this is a generic issue on x86_64: the warning always
shows up on all x86_64 systems, not just on a certain machine or type of
machine. If it is not fixed, we will always see it in the kdump kernel.
As things stand, it doesn't cause a system or device collapse even if
dma-kmalloc can't provide a buffer, or provides one from zone NORMAL.

I have received bug reports several times from different people, and we
have several bugs tracking this inside Red Hat. I think nobody wants to
see this appearing on customers' monitors, with or without a note. If we
have to leave it like that, it's a little embarrassing.


> 
> > ---
> >  mm/slab_common.c | 9 +++++++++
> >  1 file changed, 9 insertions(+)
> > 
> > diff --git a/mm/slab_common.c b/mm/slab_common.c
> > index e5d080a93009..ae4ef0f8903a 100644
> > --- a/mm/slab_common.c
> > +++ b/mm/slab_common.c
> > @@ -878,6 +878,9 @@ void __init create_kmalloc_caches(slab_flags_t flags)
> >  {
> >  	int i;
> >  	enum kmalloc_cache_type type;
> > +#ifdef CONFIG_ZONE_DMA
> > +	bool managed_dma;
> > +#endif
> >  
> >  	/*
> >  	 * Including KMALLOC_CGROUP if CONFIG_MEMCG_KMEM defined
> > @@ -905,10 +908,16 @@ void __init create_kmalloc_caches(slab_flags_t flags)
> >  	slab_state = UP;
> >  
> >  #ifdef CONFIG_ZONE_DMA
> > +	managed_dma = has_managed_dma();
> > +
> >  	for (i = 0; i <= KMALLOC_SHIFT_HIGH; i++) {
> >  		struct kmem_cache *s = kmalloc_caches[KMALLOC_NORMAL][i];
> >  
> >  		if (s) {
> > +			if (!managed_dma) {
> > +				kmalloc_caches[KMALLOC_DMA][i] = kmalloc_caches[KMALLOC_NORMAL][i];
> > +				continue;
> > +			}
> 
> This code is copying the normal kmalloc caches to the DMA kmalloc caches.
> With this code, kmalloc() with GFP_DMA will succeed even if the allocated
> memory is not actually from the DMA zone. Is that really what you want?

This is a great question. Honestly, no.

On the surface, it's obviously not what we want; we should never give the
user zone NORMAL memory when they ask for zone DMA memory. For this
specific x86_64 arch where the problem is observed, I would prefer to give
out zone DMA32 memory if the zone DMA allocation fails, because we rarely
have ISA devices deployed that require a low 16M DMA buffer; the DMA zone
is just in case. Thus, for the kdump kernel, we have been trying to make
sure zone DMA32 has enough memory to satisfy PCIe device DMA buffer
allocations, but I don't remember us making any such effort for zone DMA.

Now, the thing is that nothing serious happens even if sr_probe() doesn't
get a DMA buffer from zone DMA, and it works well when I feed it zone
NORMAL memory instead with this patch applied.
> 
> Maybe the function get_capabilities() wants to allocate memory
> even if it's not from the DMA zone, but other callers will not expect that.

Yeah, I have the same guess for get_capabilities(), though I'm not sure
about other callers. Or, as ChristophL and ChristophH said (sorry, not sure
if this is the right way to address people when the first name is the same;
correct me if it's wrong), any buffer requested from kmalloc can be used by
a device driver. Does that mean a device enforces getting memory inside its
addressing limit only for large DMA transfer buffers, usually megabytes
allocated with vmalloc() or alloc_pages(), but doesn't care about small
buffers allocated with kmalloc()? Just a guess; please give a
counterexample if anyone happens to know one, it could be easy.


> 
> >  			kmalloc_caches[KMALLOC_DMA][i] = create_kmalloc_cache(
> >  				kmalloc_info[i].name[KMALLOC_DMA],
> >  				kmalloc_info[i].size,
> > -- 
> > 2.17.2
> > 
> > 
> 



* Re: [PATCH v3 5/5] mm/slub: do not create dma-kmalloc if no managed pages in DMA zone
  2021-12-14  5:32       ` Baoquan He
@ 2021-12-14 10:09         ` Vlastimil Babka
  -1 siblings, 0 replies; 74+ messages in thread
From: Vlastimil Babka @ 2021-12-14 10:09 UTC (permalink / raw)
  To: Baoquan He, Hyeonggon Yoo
  Cc: linux-kernel, linux-mm, akpm, hch, cl, John.p.donnelly, kexec,
	stable, Pekka Enberg, David Rientjes, Joonsoo Kim

On 12/14/21 06:32, Baoquan He wrote:
> On 12/13/21 at 01:43pm, Hyeonggon Yoo wrote:
>> Hello Baoquan. I have a question on your code.
>> 
>> On Mon, Dec 13, 2021 at 08:27:12PM +0800, Baoquan He wrote:
>> > The dma-kmalloc caches will be created as long as CONFIG_ZONE_DMA is
>> > enabled. However, the creation will fail if the DMA zone has no managed
>> > pages. The failure can be seen in the kdump kernel of x86_64 as below:
>> > 

Could have included the warning headline too.

>> >  CPU: 0 PID: 65 Comm: kworker/u2:1 Not tainted 5.14.0-rc2+ #9
>> >  Hardware name: Intel Corporation SandyBridge Platform/To be filled by O.E.M., BIOS RMLSDP.86I.R2.28.D690.1306271008 06/27/2013
>> >  Workqueue: events_unbound async_run_entry_fn
>> >  Call Trace:
>> >   dump_stack_lvl+0x57/0x72
>> >   warn_alloc.cold+0x72/0xd6
>> >   __alloc_pages_slowpath.constprop.0+0xf56/0xf70
>> >   __alloc_pages+0x23b/0x2b0
>> >   allocate_slab+0x406/0x630
>> >   ___slab_alloc+0x4b1/0x7e0
>> >   ? sr_probe+0x200/0x600
>> >   ? lock_acquire+0xc4/0x2e0
>> >   ? fs_reclaim_acquire+0x4d/0xe0
>> >   ? lock_is_held_type+0xa7/0x120
>> >   ? sr_probe+0x200/0x600
>> >   ? __slab_alloc+0x67/0x90
>> >   __slab_alloc+0x67/0x90
>> >   ? sr_probe+0x200/0x600
>> >   ? sr_probe+0x200/0x600
>> >   kmem_cache_alloc_trace+0x259/0x270
>> >   sr_probe+0x200/0x600
>> >   ......
>> >   bus_probe_device+0x9f/0xb0
>> >   device_add+0x3d2/0x970
>> >   ......
>> >   __scsi_add_device+0xea/0x100
>> >   ata_scsi_scan_host+0x97/0x1d0
>> >   async_run_entry_fn+0x30/0x130
>> >   process_one_work+0x2b0/0x5c0
>> >   worker_thread+0x55/0x3c0
>> >   ? process_one_work+0x5c0/0x5c0
>> >   kthread+0x149/0x170
>> >   ? set_kthread_struct+0x40/0x40
>> >   ret_from_fork+0x22/0x30
>> >  Mem-Info:
>> >  ......
>> > 
>> > The above failure happened when calling kmalloc() to allocate a buffer
>> > with GFP_DMA. It requests a slab page from the DMA zone while there are
>> > no managed pages there.
>> >  sr_probe()
>> >  --> get_capabilities()
>> >      --> buffer = kmalloc(512, GFP_KERNEL | GFP_DMA);
>> > 
>> > The DMA zone should be checked for managed pages first; dma-kmalloc
>> > should only be created if there are some.
>> >
>> 
>> What is the problem here?
>> 
>> The slab allocator asked the buddy allocator for a page with GFP_DMA,
>> the buddy allocator failed because there was no page in the DMA zone,
>> and then the buddy allocator called warn_alloc because the allocation
>> failed.
>> 
>> Looking at the warning, I don't understand what the problem is.
> 
> The problem is this is a generic issue on x86_64, and will be warned out
> always on all x86_64 systems, but not on a certain machine or a certain
> type of machine. If not fixed, we can always see it in kdump kernel. The
> way things are, it doesn't casue system or device collapse even if
> dma-kmalloc can't provide buffer or provide buffer from zone NORMAL.
> 
> 
> I have got bug reports several times from different people, and we have
> several bugs tracking this inside Redhat. I think nobody want to see
> this appearing in customers' monitor w or w/o a note. If we have to
> leave it with that, it's a little embrassing.
> 
> 
>> 
>> > ---
>> >  mm/slab_common.c | 9 +++++++++
>> >  1 file changed, 9 insertions(+)
>> > 
>> > diff --git a/mm/slab_common.c b/mm/slab_common.c
>> > index e5d080a93009..ae4ef0f8903a 100644
>> > --- a/mm/slab_common.c
>> > +++ b/mm/slab_common.c
>> > @@ -878,6 +878,9 @@ void __init create_kmalloc_caches(slab_flags_t flags)
>> >  {
>> >  	int i;
>> >  	enum kmalloc_cache_type type;
>> > +#ifdef CONFIG_ZONE_DMA
>> > +	bool managed_dma;
>> > +#endif
>> >  
>> >  	/*
>> >  	 * Including KMALLOC_CGROUP if CONFIG_MEMCG_KMEM defined
>> > @@ -905,10 +908,16 @@ void __init create_kmalloc_caches(slab_flags_t flags)
>> >  	slab_state = UP;
>> >  
>> >  #ifdef CONFIG_ZONE_DMA
>> > +	managed_dma = has_managed_dma();
>> > +
>> >  	for (i = 0; i <= KMALLOC_SHIFT_HIGH; i++) {
>> >  		struct kmem_cache *s = kmalloc_caches[KMALLOC_NORMAL][i];
>> >  
>> >  		if (s) {
>> > +			if (!managed_dma) {
>> > +				kmalloc_caches[KMALLOC_DMA][i] = kmalloc_caches[KMALLOC_NORMAL][i];

The right side could be just 's'?

>> > +				continue;
>> > +			}
>> 
>> This code is copying the normal kmalloc caches to the DMA kmalloc caches.
>> With this code, kmalloc() with GFP_DMA will succeed even if the allocated
>> memory is not actually from the DMA zone. Is that really what you want?
> 
> This is a great question. Honestly, no,
> 
> On the surface, it's obviously not what we want, We should never give
> user a zone NORMAL memory when they ask for zone DMA memory. If going to
> this specific x86_64 ARCH where this problem is observed, I prefer to give
> it zone DMA32 memory if zone DMA allocation failed. Because we rarely
> have ISA device deployed which requires low 16M DMA buffer. The zone DMA
> is just in case. Thus, for kdump kernel, we have been trying to make sure
> zone DMA32 has enough memory to satisfy PCIe device DMA buffer allocation,
> I don't remember we made any effort to do that for zone DMA.
> 
> Now the thing is that the nothing serious happened even if sr_probe()
> doesn't get DMA buffer from zone DMA. And it works well when I feed it
> with zone NORMAL memory instead with this patch applied.

It doesn't feel right to me to fix (or rather work around) this at the
level of the kmalloc caches just because the current reports come from
there. If we decide it's acceptable for the kdump kernel to return
!ZONE_DMA memory for GFP_DMA requests, then that should apply at the page
allocator level for all allocations, not just kmalloc().

Also, you mention above that you'd prefer ZONE_DMA32 memory, while chances
are this approach of reusing the KMALLOC_NORMAL caches will end up giving
you ZONE_NORMAL. At the page allocator level it would be much easier to
implement a fallback from a non-populated ZONE_DMA to ZONE_DMA32
specifically.
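
A hypothetical sketch of such a page-allocator-level fallback (illustrative
only; the helper name and where it would be called from are assumptions,
not a proposed patch):

        /* Demote GFP_DMA to GFP_DMA32 when ZONE_DMA has no managed pages. */
        static inline gfp_t fixup_empty_dma(gfp_t gfp)
        {
                if ((gfp & __GFP_DMA) && !has_managed_dma())
                        return (gfp & ~__GFP_DMA) | __GFP_DMA32;
                return gfp;
        }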

>> 
>> Maybe the function get_capabilities() wants to allocate memory
>> even if it's not from the DMA zone, but other callers will not expect that.
> 
> Yeah, I have the same guess for get_capabilities(), though I'm not sure
> about other callers. Or, as ChristophL and ChristophH said (sorry, not sure
> if this is the right way to address people when the first name is the same;
> correct me if it's wrong), any buffer requested from kmalloc can be used by
> a device driver. Does that mean a device enforces getting memory inside its
> addressing limit only for large DMA transfer buffers, usually megabytes
> allocated with vmalloc() or alloc_pages(), but doesn't care about small
> buffers allocated with kmalloc()? Just a guess; please give a
> counterexample if anyone happens to know one, it could be easy.
> 
> 
>> 
>> >  			kmalloc_caches[KMALLOC_DMA][i] = create_kmalloc_cache(
>> >  				kmalloc_info[i].name[KMALLOC_DMA],
>> >  				kmalloc_info[i].size,
>> > -- 
>> > 2.17.2
>> > 
>> > 
>> 
> 



* Re: [PATCH v3 5/5] mm/slub: do not create dma-kmalloc if no managed pages in DMA zone
  2021-12-14 10:09         ` Vlastimil Babka
@ 2021-12-14 10:28           ` Christoph Lameter
  -1 siblings, 0 replies; 74+ messages in thread
From: Christoph Lameter @ 2021-12-14 10:28 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Baoquan He, Hyeonggon Yoo, linux-kernel, linux-mm, akpm, hch,
	John.p.donnelly, kexec, stable, Pekka Enberg, David Rientjes,
	Joonsoo Kim

On Tue, 14 Dec 2021, Vlastimil Babka wrote:

> It doesn't feel right to me to fix (or rather work around) this on the
> level of kmalloc caches just because the current reports come from there.
> If we decide it's acceptable for the kdump kernel to return !ZONE_DMA
> memory for GFP_DMA requests, then it should apply at the page allocator
> level for all allocations, not just kmalloc().
>
> Also, you mention above you'd prefer ZONE_DMA32 memory, while chances are
> this approach of using KMALLOC_NORMAL caches will end up giving you
> ZONE_NORMAL. On the page allocator level it would be much easier to
> implement a fallback from a non-populated ZONE_DMA to ZONE_DMA32 specifically.

Well, this only works if the restrictions on the physical memory addresses
of each platform make that possible.

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH v3 5/5] mm/slub: do not create dma-kmalloc if no managed pages in DMA zone
  2021-12-13 12:27   ` Baoquan He
@ 2021-12-14 16:31     ` Christoph Hellwig
  0 siblings, 0 replies; 74+ messages in thread
From: Christoph Hellwig @ 2021-12-14 16:31 UTC (permalink / raw)
  To: Baoquan He
  Cc: linux-kernel, linux-mm, akpm, hch, cl, John.p.donnelly, kexec,
	stable, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Vlastimil Babka

On Mon, Dec 13, 2021 at 08:27:12PM +0800, Baoquan He wrote:
> Dma-kmalloc will be created as long as CONFIG_ZONE_DMA is enabled.
> However, it will fail if DMA zone has no managed pages. The failure
> can be seen in kdump kernel of x86_64 as below:

Please just switch the sr allocation to use GFP_KERNEL without GFP_DMA.
The block layer will do the proper bounce buffering underneath for the
very unlikely case that we're actually using the single HBA driver that
has ISA DMA addressing limitations.

Same for the ch driver, btw.
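
A hedged sketch of what that change would look like in
drivers/scsi/sr.c:get_capabilities() -- the exact upstream diff may differ:

        -	buffer = kmalloc(512, GFP_KERNEL | GFP_DMA);
        +	buffer = kmalloc(512, GFP_KERNEL);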

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH v3 5/5] mm/slub: do not create dma-kmalloc if no managed pages in DMA zone
  2021-12-14 16:31     ` Christoph Hellwig
@ 2021-12-14 17:07       ` john.p.donnelly
  0 siblings, 0 replies; 74+ messages in thread
From: john.p.donnelly @ 2021-12-14 17:07 UTC (permalink / raw)
  To: Christoph Hellwig, Baoquan He
  Cc: linux-kernel, linux-mm, akpm, cl, kexec, stable, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Vlastimil Babka

On 12/14/21 10:31 AM, Christoph Hellwig wrote:
> On Mon, Dec 13, 2021 at 08:27:12PM +0800, Baoquan He wrote:
>> Dma-kmalloc will be created as long as CONFIG_ZONE_DMA is enabled.
>> However, it will fail if DMA zone has no managed pages. The failure
>> can be seen in kdump kernel of x86_64 as below:
> 
> Please just switch the sr allocation to use GFP_KERNEL without GFP_DMA.
> The block layer will do the proper bounce buffering underneath for the
> very unlikely case that we're actually using the single HBA driver that
> has ISA DMA addressing limitations.
> 
> Same for the ch driver, btw.

Hi,

Is CONFIG_ZONE_DMA even needed anymore on x86_64?


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH v3 5/5] mm/slub: do not create dma-kmalloc if no managed pages in DMA zone
  2021-12-14 10:09         ` Vlastimil Babka
@ 2021-12-15  4:48           ` Hyeonggon Yoo
  0 siblings, 0 replies; 74+ messages in thread
From: Hyeonggon Yoo @ 2021-12-15  4:48 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Baoquan He, linux-kernel, linux-mm, akpm, hch, cl,
	John.p.donnelly, kexec, stable, Pekka Enberg, David Rientjes,
	Joonsoo Kim

On Tue, Dec 14, 2021 at 11:09:23AM +0100, Vlastimil Babka wrote:
> On 12/14/21 06:32, Baoquan He wrote:
> > On 12/13/21 at 01:43pm, Hyeonggon Yoo wrote:
> >> Hello Baoquan. I have a question on your code.
> >> 
> >> On Mon, Dec 13, 2021 at 08:27:12PM +0800, Baoquan He wrote:
> >> > Dma-kmalloc will be created as long as CONFIG_ZONE_DMA is enabled.
> >> > However, it will fail if DMA zone has no managed pages. The failure
> >> > can be seen in kdump kernel of x86_64 as below:
> >> > 
> 
> Could have included the warning headline too.
> 
> >> >  CPU: 0 PID: 65 Comm: kworker/u2:1 Not tainted 5.14.0-rc2+ #9
> >> >  Hardware name: Intel Corporation SandyBridge Platform/To be filled by O.E.M., BIOS RMLSDP.86I.R2.28.D690.1306271008 06/27/2013
> >> >  Workqueue: events_unbound async_run_entry_fn
> >> >  Call Trace:
> >> >   dump_stack_lvl+0x57/0x72
> >> >   warn_alloc.cold+0x72/0xd6
> >> >   __alloc_pages_slowpath.constprop.0+0xf56/0xf70
> >> >   __alloc_pages+0x23b/0x2b0
> >> >   allocate_slab+0x406/0x630
> >> >   ___slab_alloc+0x4b1/0x7e0
> >> >   ? sr_probe+0x200/0x600
> >> >   ? lock_acquire+0xc4/0x2e0
> >> >   ? fs_reclaim_acquire+0x4d/0xe0
> >> >   ? lock_is_held_type+0xa7/0x120
> >> >   ? sr_probe+0x200/0x600
> >> >   ? __slab_alloc+0x67/0x90
> >> >   __slab_alloc+0x67/0x90
> >> >   ? sr_probe+0x200/0x600
> >> >   ? sr_probe+0x200/0x600
> >> >   kmem_cache_alloc_trace+0x259/0x270
> >> >   sr_probe+0x200/0x600
> >> >   ......
> >> >   bus_probe_device+0x9f/0xb0
> >> >   device_add+0x3d2/0x970
> >> >   ......
> >> >   __scsi_add_device+0xea/0x100
> >> >   ata_scsi_scan_host+0x97/0x1d0
> >> >   async_run_entry_fn+0x30/0x130
> >> >   process_one_work+0x2b0/0x5c0
> >> >   worker_thread+0x55/0x3c0
> >> >   ? process_one_work+0x5c0/0x5c0
> >> >   kthread+0x149/0x170
> >> >   ? set_kthread_struct+0x40/0x40
> >> >   ret_from_fork+0x22/0x30
> >> >  Mem-Info:
> >> >  ......
> >> > 
> >> > The above failure happened when calling kmalloc() to allocate a buffer
> >> > with GFP_DMA. It requests a slab page from the DMA zone while there are
> >> > no managed pages in there.
> >> >  sr_probe()
> >> >  --> get_capabilities()
> >> >      --> buffer = kmalloc(512, GFP_KERNEL | GFP_DMA);
> >> > 
> >> > The DMA zone should be checked for managed pages, and only then should
> >> > we try to create dma-kmalloc.
> >> >
> >> 
> >> What is the problem here?
> >> 
> >> The slab allocator requested the buddy allocator with GFP_DMA,
> >> and then the buddy allocator failed to allocate a page in the DMA zone
> >> because there was no page in the DMA zone, and then the buddy allocator
> >> called warn_alloc because it failed at allocating the page.
> >> 
> >> Looking at the warning, I don't understand what the problem is.
> > 
> > The problem is that this is a generic issue on x86_64: the warning will
> > always show up on all x86_64 systems, not just on a certain machine or
> > type of machine. If not fixed, we will always see it in the kdump kernel.
> > The way things are, it doesn't cause system or device collapse even if
> > dma-kmalloc can't provide a buffer, or provides a buffer from zone NORMAL.
> > 
> > 
> > I have got bug reports several times from different people, and we have
> > several bugs tracking this inside Red Hat. I think nobody wants to see
> > this appearing on customers' monitors, with or without a note. If we have
> > to leave it like that, it's a little embarrassing.
> >

Okay then,
do you care if it just fails (without warning),
or if it is allocated from ZONE_DMA32 instead?

> > 
> >> 
> >> > ---
> >> >  mm/slab_common.c | 9 +++++++++
> >> >  1 file changed, 9 insertions(+)
> >> > 
> >> > diff --git a/mm/slab_common.c b/mm/slab_common.c
> >> > index e5d080a93009..ae4ef0f8903a 100644
> >> > --- a/mm/slab_common.c
> >> > +++ b/mm/slab_common.c
> >> > @@ -878,6 +878,9 @@ void __init create_kmalloc_caches(slab_flags_t flags)
> >> >  {
> >> >  	int i;
> >> >  	enum kmalloc_cache_type type;
> >> > +#ifdef CONFIG_ZONE_DMA
> >> > +	bool managed_dma;
> >> > +#endif
> >> >  
> >> >  	/*
> >> >  	 * Including KMALLOC_CGROUP if CONFIG_MEMCG_KMEM defined
> >> > @@ -905,10 +908,16 @@ void __init create_kmalloc_caches(slab_flags_t flags)
> >> >  	slab_state = UP;
> >> >  
> >> >  #ifdef CONFIG_ZONE_DMA
> >> > +	managed_dma = has_managed_dma();
> >> > +
> >> >  	for (i = 0; i <= KMALLOC_SHIFT_HIGH; i++) {
> >> >  		struct kmem_cache *s = kmalloc_caches[KMALLOC_NORMAL][i];
> >> >  
> >> >  		if (s) {
> >> > +			if (!managed_dma) {
> >> > +				kmalloc_caches[KMALLOC_DMA][i] = kmalloc_caches[KMALLOC_NORMAL][i];
> 
> The right side could be just 's'?
> 
> >> > +				continue;
> >> > +			}
> >> 
> >> This code is copying the normal kmalloc caches to the DMA kmalloc caches.
> >> With this code, kmalloc() with GFP_DMA will succeed even if the allocated
> >> memory is not actually from the DMA zone. Is that really what you want?
> > 
> > This is a great question. Honestly, no.
> > 
> > On the surface, it's obviously not what we want; we should never give
> > users zone NORMAL memory when they ask for zone DMA memory. On this
> > specific x86_64 arch where the problem is observed, I'd prefer to give
> > out zone DMA32 memory if zone DMA allocation fails, because we rarely
> > have ISA devices deployed which require a low 16M DMA buffer; the zone
> > DMA is just in case. Thus, for the kdump kernel, we have been trying to
> > make sure zone DMA32 has enough memory to satisfy PCIe device DMA buffer
> > allocations; I don't remember us making any effort to do that for zone DMA.
> > 
> > Now the thing is that nothing serious happened even when sr_probe()
> > didn't get a DMA buffer from zone DMA, and it works well when I feed it
> > zone NORMAL memory instead with this patch applied.
> 
> It doesn't feel right to me to fix (or rather work around) this on the
> level of kmalloc caches just because the current reports come from there.
> If we decide it's acceptable for the kdump kernel to return !ZONE_DMA
> memory for GFP_DMA requests, then it should apply at the page allocator
> level for all allocations, not just kmalloc().

I think that will make it much easier to manage the code.

> Also, you mention above you'd prefer ZONE_DMA32 memory, while chances are
> this approach of using KMALLOC_NORMAL caches will end up giving you
> ZONE_NORMAL. On the page allocator level it would be much easier to
> implement a fallback from a non-populated ZONE_DMA to ZONE_DMA32 specifically.
>

Hello Baoquan and Vlastimil.

I'm not sure allowing ZONE_DMA32 for kdump kernel is nice way to solve
this problem. Devices that requires ZONE_DMA is rare but we still
support them.

If we allow ZONE_DMA32 for ZONE_DMA in kdump kernels,
the problem will be hard to find.

What about one of these?:

    1) Do not call warn_alloc in the page allocator if it will always
    fail to allocate ZONE_DMA pages.


    2) Let's check all callers of kmalloc() with GFP_DMA to see if they
    really need the GFP_DMA flag, and replace those by the DMA API or
    just remove GFP_DMA from the kmalloc() call (see the sketch after
    this list)

    3) Drop support for allocating DMA memory from slab allocator
    (as Christoph Hellwig said) and convert them to use DMA32
    and see what happens
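
For option 2, a rough illustration of such a conversion; dev and
dma_handle here are placeholders for the example, not taken from any
real driver:

        /* Before: hope kmalloc(GFP_DMA) memory is device-addressable. */
        buf = kmalloc(512, GFP_KERNEL | GFP_DMA);

        /* After: ask the DMA API for addressable memory directly. */
        buf = dma_alloc_coherent(dev, 512, &dma_handle, GFP_KERNEL);
        /* ... use buf for the transfer ... */
        dma_free_coherent(dev, 512, buf, dma_handle);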

Thanks,
Hyeonggon.

> >> 
> >> Maybe the function get_capabilities() wants to allocate memory
> >> even if it's not from the DMA zone, but other callers will not expect that.
> > 
> > Yeah, I have the same guess too for get_capabilities(), not sure about
> > other callers. Or, as ChristophL and ChristophH said (sorry, not sure if
> > this is the right way to address people when the first name is the same;
> > correct me if it's wrong), any buffer requested from kmalloc can be used
> > by a device driver. Meaning a device enforces getting memory inside its
> > addressing limit for the DMA transfer buffers which are usually large,
> > megabytes level, allocated with vmalloc() or alloc_pages(), but doesn't
> > care about this kind of small buffer memory allocated with kmalloc()?
> > Just a guess; please give a counter-example if anyone happens to know
> > one, it could be easy.
> > 
> > 
> >> 
> >> >  			kmalloc_caches[KMALLOC_DMA][i] = create_kmalloc_cache(
> >> >  				kmalloc_info[i].name[KMALLOC_DMA],
> >> >  				kmalloc_info[i].size,
> >> > -- 
> >> > 2.17.2
> >> > 
> >> > 
> >> 
> > 
> 

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH v3 5/5] mm/slub: do not create dma-kmalloc if no managed pages in DMA zone
  2021-12-15  4:48           ` Hyeonggon Yoo
@ 2021-12-15  7:03             ` Hyeonggon Yoo
  0 siblings, 0 replies; 74+ messages in thread
From: Hyeonggon Yoo @ 2021-12-15  7:03 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Baoquan He, linux-kernel, linux-mm, akpm, hch, cl,
	John.p.donnelly, kexec, stable, Pekka Enberg, David Rientjes,
	Joonsoo Kim

On Wed, Dec 15, 2021 at 04:48:26AM +0000, Hyeonggon Yoo wrote:
> 
> Hello Baoquan and Vlastimil.
> 
> I'm not sure allowing ZONE_DMA32 for kdump kernel is nice way to solve
> this problem. Devices that requires ZONE_DMA is rare but we still
> support them.
> 
> If we allow ZONE_DMA32 for ZONE_DMA in kdump kernels,
> the problem will be hard to find.
> 

Sorry, I sometimes forget to validate my English writing :(

What I meant:

I'm not sure that allocating from ZONE_DMA32 instead of ZONE_DMA
for the kdump kernel is a nice way to solve this problem.

Devices that require ZONE_DMA memory are rare, but we still support them.

If we use ZONE_DMA32 memory instead of ZONE_DMA in kdump kernels,
it will be hard to find the problem when we use devices that can only
use ZONE_DMA memory.

> What about one of these?:
> 
>     1) Do not call warn_alloc in the page allocator if it will always
>     fail to allocate ZONE_DMA pages.
> 
> 
>     2) Let's check all callers of kmalloc() with GFP_DMA to see if they
>     really need the GFP_DMA flag, and replace those by the DMA API or
>     just remove GFP_DMA from the kmalloc() call
> 
>     3) Drop support for allocating DMA memory from slab allocator
>     (as Christoph Hellwig said) and convert them to use DMA32

	(as Christoph Hellwig said) and convert them to use *DMA API*

>     and see what happens
> 
> Thanks,
> Hyeonggon.
> 
> > >> 
> > >> Maybe the function get_capabilities() wants to allocate memory
> > >> even if it's not from the DMA zone, but other callers will not expect that.
> > > 
> > > Yeah, I have the same guess too for get_capabilities(), not sure about
> > > other callers. Or, as ChristophL and ChristophH said (sorry, not sure if
> > > this is the right way to address people when the first name is the same;
> > > correct me if it's wrong), any buffer requested from kmalloc can be used
> > > by a device driver. Meaning a device enforces getting memory inside its
> > > addressing limit for the DMA transfer buffers which are usually large,
> > > megabytes level, allocated with vmalloc() or alloc_pages(), but doesn't
> > > care about this kind of small buffer memory allocated with kmalloc()?
> > > Just a guess; please give a counter-example if anyone happens to know
> > > one, it could be easy.
> > > 
> > > 
> > >> 
> > >> >  			kmalloc_caches[KMALLOC_DMA][i] = create_kmalloc_cache(
> > >> >  				kmalloc_info[i].name[KMALLOC_DMA],
> > >> >  				kmalloc_info[i].size,
> > >> > -- 
> > >> > 2.17.2
> > >> > 
> > >> > 
> > >> 
> > > 
> > 

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH v3 5/5] mm/slub: do not create dma-kmalloc if no managed pages in DMA zone
  2021-12-15  7:03             ` Hyeonggon Yoo
@ 2021-12-15  7:27               ` Christoph Hellwig
  0 siblings, 0 replies; 74+ messages in thread
From: Christoph Hellwig @ 2021-12-15  7:27 UTC (permalink / raw)
  To: Hyeonggon Yoo
  Cc: Vlastimil Babka, Baoquan He, linux-kernel, linux-mm, akpm, hch,
	cl, John.p.donnelly, kexec, stable, Pekka Enberg, David Rientjes,
	Joonsoo Kim

On Wed, Dec 15, 2021 at 07:03:35AM +0000, Hyeonggon Yoo wrote:
> I'm not sure that allocating from ZONE_DMA32 instead of ZONE_DMA
> for the kdump kernel is a nice way to solve this problem.

What is the problem with zones in kdump kernels?

> Devices that require ZONE_DMA memory are rare, but we still support them.

Indeed.

> >     1) Do not call warn_alloc in the page allocator if it will always
> >     fail to allocate ZONE_DMA pages.
> > 
> > 
> >     2) Let's check all callers of kmalloc() with GFP_DMA to see if they
> >     really need the GFP_DMA flag, and replace those by the DMA API or
> >     just remove GFP_DMA from the kmalloc() call
> > 
> >     3) Drop support for allocating DMA memory from slab allocator
> >     (as Christoph Hellwig said) and convert them to use DMA32
> 
> 	(as Christoph Hellwig said) and convert them to use *DMA API*
> 
> >     and see what happens

This is the right thing to do, but it will take a while.  In fact
I don't think we really need the warning in step 1; a simple grep
already lets us go over them.  I just looked at the uses of GFP_DMA
in drivers/scsi for example, and all but one look bogus.
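
For example, something like

  $ git grep -n GFP_DMA drivers/scsi

turns them all up in one go (illustrative invocation; any grep over the
tree works).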

> > > > Yeah, I have the same guess too for get_capabilities(), not sure about
> > > > other callers. Or, as ChristophL and ChristophH said (sorry, not sure if
> > > > this is the right way to address people when the first name is the same;
> > > > correct me if it's wrong), any buffer requested from kmalloc can be used
> > > > by a device driver. Meaning a device enforces getting memory inside its
> > > > addressing limit for the DMA transfer buffers which are usually large,
> > > > megabytes level, allocated with vmalloc() or alloc_pages(), but doesn't
> > > > care about this kind of small buffer memory allocated with kmalloc()?
> > > > Just a guess; please give a counter-example if anyone happens to know
> > > > one, it could be easy.

The way this works is that the dma_map* calls will bounce buffer memory
that does not fall within the addressing limitations.  This is a performance
overhead, but allows drivers to address all memory in a system.  If the
driver controls memory allocation it should use one of the dma_alloc_*
APIs that allocate addressable memory from the start.  The allocator
will dip into ZONE_DMA and ZONE_DMA32 when needed.
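
A minimal sketch of that streaming-mapping pattern (dev, buf and len are
placeholders, not from a specific driver):

        dma_addr_t addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

        if (dma_mapping_error(dev, addr))
                return -ENOMEM;
        /* ... hardware DMAs via 'addr'; the core bounces it if needed ... */
        dma_unmap_single(dev, addr, len, DMA_TO_DEVICE);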

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH v3 5/5] mm/slub: do not create dma-kmalloc if no managed pages in DMA zone
  2021-12-14 17:07       ` john.p.donnelly
@ 2021-12-15  7:27         ` Christoph Hellwig
  0 siblings, 0 replies; 74+ messages in thread
From: Christoph Hellwig @ 2021-12-15  7:27 UTC (permalink / raw)
  To: john.p.donnelly
  Cc: Christoph Hellwig, Baoquan He, linux-kernel, linux-mm, akpm, cl,
	kexec, stable, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Vlastimil Babka

On Tue, Dec 14, 2021 at 11:07:34AM -0600, john.p.donnelly@oracle.com wrote:
> Is CONFIG_ZONE_DMA even needed anymore on x86_64?

Yes.  There are still plenty of addressing-challenged devices, mostly
ISA-like but also a few PCI/PCIe ones.

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH v3 5/5] mm/slub: do not create dma-kmalloc if no managed pages in DMA zone
  2021-12-14 10:09         ` Vlastimil Babka
@ 2021-12-15 10:08           ` Baoquan He
  0 siblings, 0 replies; 74+ messages in thread
From: Baoquan He @ 2021-12-15 10:08 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Hyeonggon Yoo, linux-kernel, linux-mm, akpm, hch, cl,
	John.p.donnelly, kexec, stable, Pekka Enberg, David Rientjes,
	Joonsoo Kim

On 12/14/21 at 11:09am, Vlastimil Babka wrote:
> On 12/14/21 06:32, Baoquan He wrote:
> > On 12/13/21 at 01:43pm, Hyeonggon Yoo wrote:
> >> Hello Baoquan. I have a question on your code.
> >> 
> >> On Mon, Dec 13, 2021 at 08:27:12PM +0800, Baoquan He wrote:
> >> > Dma-kmalloc will be created as long as CONFIG_ZONE_DMA is enabled.
> >> > However, it will fail if DMA zone has no managed pages. The failure
> >> > can be seen in kdump kernel of x86_64 as below:
> >> > 
> 
> Could have included the warning headline too.

Sure, I will paste the whole warning when I repost.

> 
> >> >  CPU: 0 PID: 65 Comm: kworker/u2:1 Not tainted 5.14.0-rc2+ #9
> >> >  Hardware name: Intel Corporation SandyBridge Platform/To be filled by O.E.M., BIOS RMLSDP.86I.R2.28.D690.1306271008 06/27/2013
> >> >  Workqueue: events_unbound async_run_entry_fn
> >> >  Call Trace:
> >> >   dump_stack_lvl+0x57/0x72
> >> >   warn_alloc.cold+0x72/0xd6
> >> >   __alloc_pages_slowpath.constprop.0+0xf56/0xf70
> >> >   __alloc_pages+0x23b/0x2b0
> >> >   allocate_slab+0x406/0x630
> >> >   ___slab_alloc+0x4b1/0x7e0
> >> >   ? sr_probe+0x200/0x600
> >> >   ? lock_acquire+0xc4/0x2e0
> >> >   ? fs_reclaim_acquire+0x4d/0xe0
> >> >   ? lock_is_held_type+0xa7/0x120
> >> >   ? sr_probe+0x200/0x600
> >> >   ? __slab_alloc+0x67/0x90
> >> >   __slab_alloc+0x67/0x90
> >> >   ? sr_probe+0x200/0x600
> >> >   ? sr_probe+0x200/0x600
> >> >   kmem_cache_alloc_trace+0x259/0x270
> >> >   sr_probe+0x200/0x600
> >> >   ......
> >> >   bus_probe_device+0x9f/0xb0
> >> >   device_add+0x3d2/0x970
> >> >   ......
> >> >   __scsi_add_device+0xea/0x100
> >> >   ata_scsi_scan_host+0x97/0x1d0
> >> >   async_run_entry_fn+0x30/0x130
> >> >   process_one_work+0x2b0/0x5c0
> >> >   worker_thread+0x55/0x3c0
> >> >   ? process_one_work+0x5c0/0x5c0
> >> >   kthread+0x149/0x170
> >> >   ? set_kthread_struct+0x40/0x40
> >> >   ret_from_fork+0x22/0x30
> >> >  Mem-Info:
> >> >  ......
> >> > 
> >> > The above failure happened when calling kmalloc() to allocate a buffer
> >> > with GFP_DMA. It requests a slab page from the DMA zone while there are
> >> > no managed pages in there.
> >> >  sr_probe()
> >> >  --> get_capabilities()
> >> >      --> buffer = kmalloc(512, GFP_KERNEL | GFP_DMA);
> >> > 
> >> > The DMA zone should be checked for managed pages, and only then should
> >> > we try to create dma-kmalloc.
> >> >
> >> 
> >> What is the problem here?
> >> 
> >> The slab allocator requested the buddy allocator with GFP_DMA,
> >> and then the buddy allocator failed to allocate a page in the DMA zone
> >> because there was no page in the DMA zone, and then the buddy allocator
> >> called warn_alloc because it failed at allocating the page.
> >> 
> >> Looking at the warning, I don't understand what the problem is.
> > 
> > The problem is that this is a generic issue on x86_64: the warning will
> > always show up on all x86_64 systems, not just on a certain machine or
> > type of machine. If not fixed, we will always see it in the kdump kernel.
> > The way things are, it doesn't cause system or device collapse even if
> > dma-kmalloc can't provide a buffer, or provides a buffer from zone NORMAL.
> > 
> > 
> > I have got bug reports several times from different people, and we have
> > several bugs tracking this inside Red Hat. I think nobody wants to see
> > this appearing on customers' monitors, with or without a note. If we have
> > to leave it like that, it's a little embarrassing.
> > 
> > 
> >> 
> >> > ---
> >> >  mm/slab_common.c | 9 +++++++++
> >> >  1 file changed, 9 insertions(+)
> >> > 
> >> > diff --git a/mm/slab_common.c b/mm/slab_common.c
> >> > index e5d080a93009..ae4ef0f8903a 100644
> >> > --- a/mm/slab_common.c
> >> > +++ b/mm/slab_common.c
> >> > @@ -878,6 +878,9 @@ void __init create_kmalloc_caches(slab_flags_t flags)
> >> >  {
> >> >  	int i;
> >> >  	enum kmalloc_cache_type type;
> >> > +#ifdef CONFIG_ZONE_DMA
> >> > +	bool managed_dma;
> >> > +#endif
> >> >  
> >> >  	/*
> >> >  	 * Including KMALLOC_CGROUP if CONFIG_MEMCG_KMEM defined
> >> > @@ -905,10 +908,16 @@ void __init create_kmalloc_caches(slab_flags_t flags)
> >> >  	slab_state = UP;
> >> >  
> >> >  #ifdef CONFIG_ZONE_DMA
> >> > +	managed_dma = has_managed_dma();
> >> > +
> >> >  	for (i = 0; i <= KMALLOC_SHIFT_HIGH; i++) {
> >> >  		struct kmem_cache *s = kmalloc_caches[KMALLOC_NORMAL][i];
> >> >  
> >> >  		if (s) {
> >> > +			if (!managed_dma) {
> >> > +				kmalloc_caches[KMALLOC_DMA][i] = kmalloc_caches[KMALLOC_NORMAL][i];
> 
> The right side could be just 's'?

Right. We will see if we take another approach; I will change it to 's'
if we keep this one.

> 
> >> > +				continue;
> >> > +			}
> >> 
> >> This code is copying the normal kmalloc caches to the DMA kmalloc caches.
> >> With this code, kmalloc() with GFP_DMA will succeed even if the allocated
> >> memory is not actually from the DMA zone. Is that really what you want?
> > 
> > This is a great question. Honestly, no.
> > 
> > On the surface, it's obviously not what we want; we should never give
> > users zone NORMAL memory when they ask for zone DMA memory. On this
> > specific x86_64 arch where the problem is observed, I'd prefer to give
> > out zone DMA32 memory if zone DMA allocation fails, because we rarely
> > have ISA devices deployed which require a low 16M DMA buffer; the zone
> > DMA is just in case. Thus, for the kdump kernel, we have been trying to
> > make sure zone DMA32 has enough memory to satisfy PCIe device DMA buffer
> > allocations; I don't remember us making any effort to do that for zone DMA.
> > 
> > Now the thing is that nothing serious happened even when sr_probe()
> > didn't get a DMA buffer from zone DMA, and it works well when I feed it
> > zone NORMAL memory instead with this patch applied.
> 
> It doesn't feel right to me to fix (or rather work around) this on the
> level of kmalloc caches just because the current reports come from there.
> If we decide it's acceptable for the kdump kernel to return !ZONE_DMA
> memory for GFP_DMA requests, then it should apply at the page allocator
> level for all allocations, not just kmalloc().
> 
> Also, you mention above you'd prefer ZONE_DMA32 memory, while chances are
> this approach of using KMALLOC_NORMAL caches will end up giving you
> ZONE_NORMAL. On the page allocator level it would be much easier to
> implement a fallback from a non-populated ZONE_DMA to ZONE_DMA32 specifically.

This could be doable. I'll count it in when investigating all the
suggested solutions. Thanks.


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH v3 5/5] mm/slub: do not create dma-kmalloc if no managed pages in DMA zone
  2021-12-15  7:27               ` Christoph Hellwig
@ 2021-12-15 10:34                 ` Vlastimil Babka
  0 siblings, 0 replies; 74+ messages in thread
From: Vlastimil Babka @ 2021-12-15 10:34 UTC (permalink / raw)
  To: Christoph Hellwig, Hyeonggon Yoo
  Cc: Baoquan He, linux-kernel, linux-mm, akpm, cl, John.p.donnelly,
	kexec, stable, Pekka Enberg, David Rientjes, Joonsoo Kim

On 12/15/21 08:27, Christoph Hellwig wrote:
> On Wed, Dec 15, 2021 at 07:03:35AM +0000, Hyeonggon Yoo wrote:
>> I'm not sure that allocating from ZONE_DMA32 instead of ZONE_DMA
>> for the kdump kernel is a nice way to solve this problem.
> 
> What is the problem with zones in kdump kernels?

My understanding is that the kdump kernel can only use physical memory that
it got reserved by the main kernel, and the main kernel will reserve some
block of memory that doesn't include any pages from ZONE_DMA (the first 16MB
of physical memory or whatnot). By looking at the "crashkernel" parameter
documentation in kernel-parameters.txt, it seems we only care about the
below-4GB/above-4GB split.
So it can easily happen that ZONE_DMA in the kdump kernel will be completely
empty because the main kernel was using all of it.
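
For instance, a typical reservation only distinguishes low from high
memory (illustrative values, not a recommendation):

        crashkernel=512M,high    reservation above 4G
        crashkernel=128M,low     extra memory below 4G for DMA32 allocations

Neither knob says anything about the first 16MB that ZONE_DMA covers.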

>> Devices that require ZONE_DMA memory are rare, but we still support them.
> 
> Indeed.
> 
>> >     1) Do not call warn_alloc in the page allocator if it will always
>> >     fail to allocate ZONE_DMA pages.
>> > 
>> > 
>> >     2) Let's check all callers of kmalloc() with GFP_DMA to see if they
>> >     really need the GFP_DMA flag, and replace those by the DMA API or
>> >     just remove GFP_DMA from the kmalloc() call
>> > 
>> >     3) Drop support for allocating DMA memory from slab allocator
>> >     (as Christoph Hellwig said) and convert them to use DMA32
>> 
>> 	(as Christoph Hellwig said) and convert them to use *DMA API*
>> 
>> >     and see what happens
> 
> This is the right thing to do, but it will take a while.  In fact
> I don't think we really need the warning in step 1; a simple grep
> already lets us go over them.  I just looked at the uses of GFP_DMA
> in drivers/scsi for example, and all but one look bogus.
> 
>> > > > Yeah, I have the same guess too for get_capabilities(), not sure about
>> > > > other callers. Or, as ChristophL and ChristophH said (sorry, not sure if
>> > > > this is the right way to address people when the first name is the same;
>> > > > correct me if it's wrong), any buffer requested from kmalloc can be used
>> > > > by a device driver. Meaning a device enforces getting memory inside its
>> > > > addressing limit for the DMA transfer buffers which are usually large,
>> > > > megabytes level, allocated with vmalloc() or alloc_pages(), but doesn't
>> > > > care about this kind of small buffer memory allocated with kmalloc()?
>> > > > Just a guess; please give a counter-example if anyone happens to know
>> > > > one, it could be easy.
> 
> The way this works is that the dma_map* calls will bounce buffer memory

But if ZONE_DMA is not populated, where will it get the bounce buffer from?
I guess nowhere and the problem still exists?

> that does not fall within the addressing limitations.  This is a performance
> overhead, but allows drivers to address all memory in a system.  If the
> driver controls memory allocation it should use one of the dma_alloc_*
> APIs that allocate addressable memory from the start.  The allocator
> will dip into ZONE_DMA and ZONE_DMA32 when needed.


^ permalink raw reply	[flat|nested] 74+ messages in thread

* RE: [PATCH v3 5/5] mm/slub: do not create dma-kmalloc if no managed pages in DMA zone
  2021-12-15 10:34                 ` Vlastimil Babka
@ 2021-12-15 11:51                   ` David Laight
  -1 siblings, 0 replies; 74+ messages in thread
From: David Laight @ 2021-12-15 11:51 UTC (permalink / raw)
  To: 'Vlastimil Babka', Christoph Hellwig, Hyeonggon Yoo
  Cc: Baoquan He, linux-kernel, linux-mm, akpm, cl, John.p.donnelly,
	kexec, stable, Pekka Enberg, David Rientjes, Joonsoo Kim

From: Vlastimil Babka
> Sent: 15 December 2021 10:34
> 
> On 12/15/21 08:27, Christoph Hellwig wrote:
> > On Wed, Dec 15, 2021 at 07:03:35AM +0000, Hyeonggon Yoo wrote:
> >> I'm not sure that allocating from ZONE_DMA32 instead of ZONE_DMA
> >> for kdump kernel is a nice way to solve this problem.
> >
> > What is the problem with zones in kdump kernels?
> 
> My understanding is that kdump kernel can only use physical memory that it
> got reserved by the main kernel, and the main kernel will reserve some block
> of memory that doesn't include any pages from ZONE_DMA (first 16MB of
> physical memory or whatnot). 
...

Is there still any support for any of the very old hardware that could only
support 24bit DMA?

I think the AMD PCnet-ISA and PCnet-PCI ethernet (lance) were both 32bit masters.
(I don't remember ever having to worry about physical addresses.)
I'm sure I remember some old SCSI boards only being able to do 24bit DMA.
But I can't remember which bus interface they were.
Unlikely to be ISA because it has always been hard to get a motherboard
DMA channel into 'cascade mode'.

Might have been some EISA boards - anyone still use those?
So we are left with early PCI boards.

It really is worth looking at what actually needs it at all.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH v3 5/5] mm/slub: do not create dma-kmalloc if no managed pages in DMA zone
  2021-12-15 10:34                 ` Vlastimil Babka
@ 2021-12-15 13:41                   ` Baoquan He
  -1 siblings, 0 replies; 74+ messages in thread
From: Baoquan He @ 2021-12-15 13:41 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Christoph Hellwig, Hyeonggon Yoo, linux-kernel, linux-mm, akpm,
	cl, John.p.donnelly, kexec, stable, Pekka Enberg, David Rientjes,
	Joonsoo Kim

On 12/15/21 at 11:34am, Vlastimil Babka wrote:
> On 12/15/21 08:27, Christoph Hellwig wrote:
> > On Wed, Dec 15, 2021 at 07:03:35AM +0000, Hyeonggon Yoo wrote:
> >> I'm not sure that allocating from ZONE_DMA32 instead of ZONE_DMA
> >> for kdump kernel is a nice way to solve this problem.
> > 
> > What is the problem with zones in kdump kernels?
> 
> My understanding is that kdump kernel can only use physical memory that it
> got reserved by the main kernel, and the main kernel will reserve some block
> of memory that doesn't include any pages from ZONE_DMA (first 16MB of
> physical memory or whatnot). By looking at the "crashkernel" parameter
> documentation in kernel-parameters.txt it seems we only care about
> below-4GB/above-4GB split.
> So it can easily happen that ZONE_DMA in the kdump kernel will be completely
> empty because the main kernel was using all of it.

Exactly as you said. Even before the regression commit below was added, only
the first 640K was reused in the kdump kernel. We reused the first 640K not
because we need it for zone DMA, but because the first 640K is needed by
BIOS/firmware during the early stage of system bootup. So, apart from the
firmware-reserved areas in the first 640K, only tens or several hundred KB
were left as managed pages in zone DMA. After the commit below, the first 1M
is reserved with memblock_reserve(), so no physical memory is added to zone
DMA at all. Then we see the allocation failure.

When we prepare the environment for the kdump kernel, we usually customize an
initramfs to include the necessary kernel modules. E.g. if a storage device is
the dump target, its driver must be loaded; if a network dump is specified,
the network driver is needed. I have never seen an ISA device, or a device
with a 24-bit addressing limit, needed in a kdump kernel.

6f599d84231f ("x86/kdump: Always reserve the low 1M when the crashkernel option is specified")
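
For illustration, a typical x86_64 reservation following that split looks
something like the line below (the sizes are made-up examples); note that
neither block touches the first 16M that would populate ZONE_DMA in the
kdump kernel:

    crashkernel=256M,high crashkernel=72M,low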

> 
> >> Devices that require ZONE_DMA memory are rare but we still support them.
> > 
> > Indeed.
> > 
> >> >     1) Do not call warn_alloc in page allocator if it will always fail
> >> >     to allocate ZONE_DMA pages.
> >> > 
> >> > 
> >> >     2) let's check all callers of kmalloc with GFP_DMA
> >> >     if they really need GFP_DMA flag and replace those by DMA API or
> >> >     just remove GFP_DMA from kmalloc()
> >> > 
> >> >     3) Drop support for allocating DMA memory from slab allocator
> >> >     (as Christoph Hellwig said) and convert them to use DMA32
> >> 
> >> 	(as Christoph Hellwig said) and convert them to use *DMA API*
> >> 
> >> >     and see what happens
> > 
> > This is the right thing to do, but it will take a while.  In fact
> > I don't think we really need the warning in step 1, a simple grep
> > already allows to go over them.  I just looked at the uses of GFP_DMA
> > in drivers/scsi for example, and all but one look bogus.
> > 
> >> > > > Yeah, I have the same guess too for get_capabilities(), not sure about other
> >> > > > callers. Or, as ChristophL and ChristophH said(Sorry, not sure if this is
> >> > > > the right way to call people when the first name is the same. Correct me if
> >> > > > it's wrong), any buffer requested from kmalloc can be used by device driver.
> >> > > > Means device enforces getting memory inside addressing limit for those
> >> > > > DMA transferring buffer which is usually large, Megabytes level with
> >> > > > vmalloc() or alloc_pages(), but doesn't care about this kind of small
> >> > > > piece buffer memory allocated with kmalloc()? Just a guess, please tell
> >> > > > a counter example if anyone happens to know, it could be easy.
> > 
> > The way this works is that the dma_map* calls will bounce buffer memory
> 
> But if ZONE_DMA is not populated, where will it get the bounce buffer from?
> I guess nowhere and the problem still exists?

Agree. When I investigated other ARCHes, I found arm64 has a fascinating
setup for zone DMA/DMA32. By default it puts all low 4G memory into zone DMA
and leaves zone DMA32 empty. Only if ACPI/DT reports a device with less than
32-bit addressing does it lower the zone DMA boundary to that device's limit.

        ZONE_DMA       ZONE_DMA32
arm64   0~X            X~4G  (X is taken from ACPI or DT; otherwise X defaults to 4G and DMA32 is empty)
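
A simplified sketch of that logic (the helper name is hypothetical, standing
in for the real ACPI IORT/DT queries; this is not the literal arm64 code):

    /* X = lowest DMA addressing limit reported by ACPI or DT for any
     * device; defaults to 4G when no limited device is found. */
    phys_addr_t X = acpi_or_dt_dma_limit() ? : SZ_4G;   /* hypothetical helper */

    max_zone_pfns[ZONE_DMA]   = PFN_DOWN(X);
    max_zone_pfns[ZONE_DMA32] = PFN_DOWN(SZ_4G);
    /* When X == 4G, ZONE_DMA covers all low memory and ZONE_DMA32 is empty. */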

> 
> > that doesn't fall into the addressing limitations.  This is a performance
> > overhead, but allows drivers to address all memory in a system.  If the
> > driver controls memory allocation it should use one of the dma_alloc_*
> > APIs that allocate addressable memory from the start.  The allocator
> > will dip into ZONE_DMA and ZONE_DMA32 when needed.
> 


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH v3 5/5] mm/slub: do not create dma-kmalloc if no managed pages in DMA zone
  2021-12-15  7:03             ` Hyeonggon Yoo
@ 2021-12-15 14:42               ` Baoquan He
  -1 siblings, 0 replies; 74+ messages in thread
From: Baoquan He @ 2021-12-15 14:42 UTC (permalink / raw)
  To: Hyeonggon Yoo
  Cc: Vlastimil Babka, linux-kernel, linux-mm, akpm, hch, cl,
	John.p.donnelly, kexec, stable, Pekka Enberg, David Rientjes,
	Joonsoo Kim

On 12/15/21 at 07:03am, Hyeonggon Yoo wrote:
> On Wed, Dec 15, 2021 at 04:48:26AM +0000, Hyeonggon Yoo wrote:
> > 
> > Hello Baoquan and Vlastimil.
> > 
> > I'm not sure allowing ZONE_DMA32 for kdump kernel is a nice way to solve
> > this problem. Devices that requires ZONE_DMA is rare but we still
> > support them.
> > 
> > If we allow ZONE_DMA32 for ZONE_DMA in kdump kernels,
> > the problem will be hard to find.
> > 
> 
> Sorry, I sometimes forget validating my english writing :(
> 
> What I meant:
> 
> I'm not sure that allocating from ZONE_DMA32 instead of ZONE_DMA
> for kdump kernel is a nice way to solve this problem.

Yeah, if there's really a <32-bit addressing limit on the device, it doesn't
solve the problem. I'm not sure whether devices really have that limitation
when kmalloc(GFP_DMA) is invoked in a kernel driver.

> 
> Devices that require ZONE_DMA memory are rare but we still support them.
> 
> If we use ZONE_DMA32 memory instead of ZONE_DMA in kdump kernels,
> it will be hard to find the problem when we use devices that can use only
> ZONE_DMA memory.
> 
> > What about one of those?:
> > 
> >     1) Do not call warn_alloc in page allocator if it will always fail
> >     to allocate ZONE_DMA pages.
> > 

Seems we can do it like below.

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7c7a0b5de2ff..843bc8e5550a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4204,7 +4204,8 @@ void warn_alloc(gfp_t gfp_mask, nodemask_t *nodemask, const char *fmt, ...)
 	va_list args;
 	static DEFINE_RATELIMIT_STATE(nopage_rs, 10*HZ, 1);
 
-	if ((gfp_mask & __GFP_NOWARN) || !__ratelimit(&nopage_rs))
+	if ((gfp_mask & __GFP_NOWARN) || !__ratelimit(&nopage_rs) ||
+		((gfp_mask & __GFP_DMA) && !has_managed_dma()))
 		return;
 
> > 
> >     2) let's check all callers of kmalloc with GFP_DMA
> >     if they really need GFP_DMA flag and replace those by DMA API or
> >     just remove GFP_DMA from kmalloc()

I grepped and got a list; I will try to start with several easy places and
see if we can do something to improve.
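
For reference, the kind of grep meant here is as simple as:

    git grep -n GFP_DMA drivers/scsi/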


> > 
> >     3) Drop support for allocating DMA memory from slab allocator
> >     (as Christoph Hellwig said) and convert them to use DMA32
> 
> 	(as Christoph Hellwig said) and convert them to use *DMA API*

Yes, that would be the ideal result. This is equivalent to 2), or depends
on 2).

> 
> >     and see what happens
> > 
> > Thanks,
> > Hyeonggon.
> > 
> > > >> 
> > > >> Maybe the function get_capabilities() want to allocate memory
> > > >> even if it's not from DMA zone, but other callers will not expect that.
> > > > 
> > > > Yeah, I have the same guess too for get_capabilities(), not sure about other
> > > > callers. Or, as ChristophL and ChristophH said(Sorry, not sure if this is
> > > > the right way to call people when the first name is the same. Correct me if
> > > > it's wrong), any buffer requested from kmalloc can be used by device driver.
> > > > Means device enforces getting memory inside addressing limit for those
> > > > DMA transferring buffer which is usually large, Megabytes level with
> > > > vmalloc() or alloc_pages(), but doesn't care about this kind of small
> > > > piece buffer memory allocated with kmalloc()? Just a guess, please tell
> > > > a counter example if anyone happens to know, it could be easy.
> > > > 
> > > > 
> > > >> 
> > > >> >  			kmalloc_caches[KMALLOC_DMA][i] = create_kmalloc_cache(
> > > >> >  				kmalloc_info[i].name[KMALLOC_DMA],
> > > >> >  				kmalloc_info[i].size,
> > > >> > -- 
> > > >> > 2.17.2
> > > >> > 
> > > >> > 
> > > >> 
> > > > 
> > > 
> 


^ permalink raw reply related	[flat|nested] 74+ messages in thread

* Re: [PATCH v3 3/5] mm_zone: add function to check if managed dma zone exists
  2021-12-13 12:27   ` Baoquan He
@ 2021-12-16 10:52     ` David Hildenbrand
  -1 siblings, 0 replies; 74+ messages in thread
From: David Hildenbrand @ 2021-12-16 10:52 UTC (permalink / raw)
  To: Baoquan He, linux-kernel
  Cc: linux-mm, akpm, hch, cl, John.p.donnelly, kexec, stable

On 13.12.21 13:27, Baoquan He wrote:
> In some places, the current kernel assumes that the DMA zone must have
> managed pages if CONFIG_ZONE_DMA is enabled. However, this is not always true.
> E.g. in the kdump kernel of x86_64, only the low 1M is present and locked down
> at a very early stage of boot, so that there are no managed pages at all in
> the DMA zone. This exception will always cause page allocation failure if a
> page is requested from the DMA zone.
> 
> Here add function has_managed_dma() and the relevant helper functions to
> check if there's DMA zone with managed pages. It will be used in later
> patches.
> 
> Fixes: 6f599d84231f ("x86/kdump: Always reserve the low 1M when the crashkernel option is specified")
> Cc: stable@vger.kernel.org
> Signed-off-by: Baoquan He <bhe@redhat.com>
> ---
> v2->v3:
>  Rewrite has_managed_dma() in a simpler and more efficient way which is
>  suggested by DavidH.
> 
>  include/linux/mmzone.h |  9 +++++++++
>  mm/page_alloc.c        | 15 +++++++++++++++
>  2 files changed, 24 insertions(+)
> 
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 58e744b78c2c..6e1b726e9adf 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -1046,6 +1046,15 @@ static inline int is_highmem_idx(enum zone_type idx)
>  #endif
>  }
>  
> +#ifdef CONFIG_ZONE_DMA
> +bool has_managed_dma(void);
> +#else
> +static inline bool has_managed_dma(void)
> +{
> +	return false;
> +}
> +#endif
> +
>  /**
>   * is_highmem - helper function to quickly check if a struct zone is a
>   *              highmem zone or not.  This is an attempt to keep references
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c5952749ad40..7c7a0b5de2ff 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -9460,3 +9460,18 @@ bool take_page_off_buddy(struct page *page)
>  	return ret;
>  }
>  #endif
> +
> +#ifdef CONFIG_ZONE_DMA
> +bool has_managed_dma(void)
> +{
> +	struct pglist_data *pgdat;
> +
> +	for_each_online_pgdat(pgdat) {
> +		struct zone *zone = &pgdat->node_zones[ZONE_DMA];
> +
> +		if (managed_zone(zone))
> +			return true;
> +	}
> +	return false;
> +}
> +#endif /* CONFIG_ZONE_DMA */
> 
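
A sketch of how a caller then uses the helper, following the pattern of
patch 4/5 (simplified, not the literal hunk):

    /* kernel/dma/pool.c: only create the atomic DMA pool when
     * ZONE_DMA actually has managed pages. */
    if (IS_ENABLED(CONFIG_ZONE_DMA) && has_managed_dma())
        atomic_pool_dma = __dma_atomic_pool_init(atomic_pool_size,
                                                 GFP_DMA);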

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH v3 5/5] mm/slub: do not create dma-kmalloc if no managed pages in DMA zone
  2021-12-14  5:32       ` Baoquan He
@ 2021-12-17 11:38         ` Hyeonggon Yoo
  -1 siblings, 0 replies; 74+ messages in thread
From: Hyeonggon Yoo @ 2021-12-17 11:38 UTC (permalink / raw)
  To: Baoquan He
  Cc: linux-kernel, linux-mm, akpm, hch, cl, John.p.donnelly, kexec,
	stable, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Vlastimil Babka

On Tue, Dec 14, 2021 at 01:32:53PM +0800, Baoquan He wrote:
> On 12/13/21 at 01:43pm, Hyeonggon Yoo wrote:
> > Hello Baoquan. I have a question on your code.
> > 
> > On Mon, Dec 13, 2021 at 08:27:12PM +0800, Baoquan He wrote:
> > > Dma-kmalloc will be created as long as CONFIG_ZONE_DMA is enabled.
> > > However, it will fail if DMA zone has no managed pages. The failure
> > > can be seen in kdump kernel of x86_64 as below:
> > > 
> > >  CPU: 0 PID: 65 Comm: kworker/u2:1 Not tainted 5.14.0-rc2+ #9
> > >  Hardware name: Intel Corporation SandyBridge Platform/To be filled by O.E.M., BIOS RMLSDP.86I.R2.28.D690.1306271008 06/27/2013
> > >  Workqueue: events_unbound async_run_entry_fn
> > >  Call Trace:
> > >   dump_stack_lvl+0x57/0x72
> > >   warn_alloc.cold+0x72/0xd6
> > >   __alloc_pages_slowpath.constprop.0+0xf56/0xf70
> > >   __alloc_pages+0x23b/0x2b0
> > >   allocate_slab+0x406/0x630
> > >   ___slab_alloc+0x4b1/0x7e0
> > >   ? sr_probe+0x200/0x600
> > >   ? lock_acquire+0xc4/0x2e0
> > >   ? fs_reclaim_acquire+0x4d/0xe0
> > >   ? lock_is_held_type+0xa7/0x120
> > >   ? sr_probe+0x200/0x600
> > >   ? __slab_alloc+0x67/0x90
> > >   __slab_alloc+0x67/0x90
> > >   ? sr_probe+0x200/0x600
> > >   ? sr_probe+0x200/0x600
> > >   kmem_cache_alloc_trace+0x259/0x270
> > >   sr_probe+0x200/0x600
> > >   ......
> > >   bus_probe_device+0x9f/0xb0
> > >   device_add+0x3d2/0x970
> > >   ......
> > >   __scsi_add_device+0xea/0x100
> > >   ata_scsi_scan_host+0x97/0x1d0
> > >   async_run_entry_fn+0x30/0x130
> > >   process_one_work+0x2b0/0x5c0
> > >   worker_thread+0x55/0x3c0
> > >   ? process_one_work+0x5c0/0x5c0
> > >   kthread+0x149/0x170
> > >   ? set_kthread_struct+0x40/0x40
> > >   ret_from_fork+0x22/0x30
> > >  Mem-Info:
> > >  ......
> > > 
> > > The above failure happened when calling kmalloc() to allocate buffer with
> > > GFP_DMA. It requests to allocate slab page from DMA zone while no managed
> > > pages in there.
> > >  sr_probe()
> > >  --> get_capabilities()
> > >      --> buffer = kmalloc(512, GFP_KERNEL | GFP_DMA);
> > > 
> > > The DMA zone should be checked if it has managed pages, then try to create
> > > dma-kmalloc.
> > >
> > 
> > What is problem here?
> > 
> > The slab allocator requested buddy allocator with GFP_DMA,
> > and then buddy allocator failed to allocate page in DMA zone because
> > there was no page in DMA zone. and then the buddy allocator called warn_alloc
> > because it failed at allocating page.
> > 
> > Looking at warn, I don't understand what the problem is.
> 
> The problem is that this is a generic issue on x86_64, and the warning will
> always show up on all x86_64 systems, not just on a certain machine or a
> certain type of machine. If not fixed, we can always see it in the kdump
> kernel. The way things are, it doesn't cause system or device collapse even
> if dma-kmalloc can't provide a buffer or provides a buffer from zone NORMAL.
> 
> 
> I have got bug reports several times from different people, and we have
> several bugs tracking this inside Redhat. I think nobody wants to see
> this appearing on customers' monitors w or w/o a note. If we have to
> leave it like that, it's a little embarrassing.
> 
> 
> > 
> > > ---
> > >  mm/slab_common.c | 9 +++++++++
> > >  1 file changed, 9 insertions(+)
> > > 
> > > diff --git a/mm/slab_common.c b/mm/slab_common.c
> > > index e5d080a93009..ae4ef0f8903a 100644
> > > --- a/mm/slab_common.c
> > > +++ b/mm/slab_common.c
> > > @@ -878,6 +878,9 @@ void __init create_kmalloc_caches(slab_flags_t flags)
> > >  {
> > >  	int i;
> > >  	enum kmalloc_cache_type type;
> > > +#ifdef CONFIG_ZONE_DMA
> > > +	bool managed_dma;
> > > +#endif
> > >  
> > >  	/*
> > >  	 * Including KMALLOC_CGROUP if CONFIG_MEMCG_KMEM defined
> > > @@ -905,10 +908,16 @@ void __init create_kmalloc_caches(slab_flags_t flags)
> > >  	slab_state = UP;
> > >  
> > >  #ifdef CONFIG_ZONE_DMA
> > > +	managed_dma = has_managed_dma();
> > > +
> > >  	for (i = 0; i <= KMALLOC_SHIFT_HIGH; i++) {
> > >  		struct kmem_cache *s = kmalloc_caches[KMALLOC_NORMAL][i];
> > >  
> > >  		if (s) {
> > > +			if (!managed_dma) {
> > > +				kmalloc_caches[KMALLOC_DMA][i] = kmalloc_caches[KMALLOC_NORMAL][i];
> > > +				continue;
> > > +			}
> > 
> > This code is copying normal kmalloc caches to DMA kmalloc caches.
> > With this code, the kmalloc() with GFP_DMA will succeed even if allocated
> > memory is not actually from DMA zone. Is that really what you want?
> 
> This is a great question. Honestly, no.
> 
> On the surface, it's obviously not what we want; we should never give
> the user zone NORMAL memory when they ask for zone DMA memory. For this
> specific x86_64 ARCH where the problem is observed, I would prefer to give
> it zone DMA32 memory if the zone DMA allocation failed, because we rarely
> have ISA devices deployed which require a low 16M DMA buffer. The zone DMA
> is just in case. Thus, for the kdump kernel, we have been trying to make sure
> zone DMA32 has enough memory to satisfy PCIe device DMA buffer allocation;
> I don't remember us making any effort to do that for zone DMA.
> 
> Now the thing is that nothing serious happened even if sr_probe()
> doesn't get a DMA buffer from zone DMA. And it works well when I feed it
> with zone NORMAL memory instead with this patch applied.
> > 
> > Maybe the function get_capabilities() want to allocate memory
> > even if it's not from DMA zone, but other callers will not expect that.
> 
> Yeah, I have the same guess too for get_capabilities(); I'm not sure about
> other callers. Or, as ChristophL and ChristophH said (sorry, not sure if this
> is the right way to refer to people when the first name is the same; correct
> me if it's wrong), any buffer requested from kmalloc can be used by a device
> driver. Meaning: does a device enforce getting memory inside its addressing
> limit for those DMA transfer buffers which are usually large, megabytes in
> size, allocated with vmalloc() or alloc_pages(), but not care about this kind
> of small buffer memory allocated with kmalloc()? Just a guess; please give
> a counterexample if anyone happens to know one, it could be easy.
>

My understanding is that any buffer requested from kmalloc (without
GFP_DMA/DMA32) can be used by a device driver because it allocates
contiguous physical memory. It doesn't mean that a buffer allocated
with kmalloc is free of addressing limitations.

The addressing limitation comes from the capability of the device, not
the allocation size. Whether you allocate memory using alloc_pages() or
kmalloc(), the device has the same limitation. And vmalloc'ed memory
can't be handed directly to devices because they have no MMU.

But we can map memory outside the DMA zone into a bounce buffer (which
resides in the DMA zone) using the DMA API.
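
A minimal sketch of where that capability lives, for illustration (the
24-bit mask is an assumed example and dev is a placeholder):

    /* The limit is a property of the device, recorded in its DMA mask;
     * it applies no matter how the buffer was allocated. */
    if (dma_set_mask(dev, DMA_BIT_MASK(24)))
        return -EIO;
    /* dma_map_single()/dma_map_sg() will now bounce any buffer that
     * lies above the 16M boundary. */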

Thanks,
Hyeonggon.

> 
> > 
> > >  			kmalloc_caches[KMALLOC_DMA][i] = create_kmalloc_cache(
> > >  				kmalloc_info[i].name[KMALLOC_DMA],
> > >  				kmalloc_info[i].size,
> > > -- 
> > > 2.17.2
> > > 
> > > 
> > 
> 

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH v3 5/5] mm/slub: do not create dma-kmalloc if no managed pages in DMA zone
  2021-12-15  7:27               ` Christoph Hellwig
@ 2021-12-17 11:38                 ` Hyeonggon Yoo
  -1 siblings, 0 replies; 74+ messages in thread
From: Hyeonggon Yoo @ 2021-12-17 11:38 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Vlastimil Babka, Baoquan He, linux-kernel, linux-mm, akpm, cl,
	John.p.donnelly, kexec, stable, Pekka Enberg, David Rientjes,
	Joonsoo Kim

On Wed, Dec 15, 2021 at 08:27:10AM +0100, Christoph Hellwig wrote:
> On Wed, Dec 15, 2021 at 07:03:35AM +0000, Hyeonggon Yoo wrote:
> > I'm not sure that allocating from ZONE_DMA32 instead of ZONE_DMA
> > for kdump kernel is nice way to solve this problem.
> 
> What is the problem with zones in kdump kernels?
> 
> > Devices that require ZONE_DMA memory are rare but we still support them.
> 
> Indeed.
> 
> > >     1) Do not call warn_alloc in page allocator if it will always fail
> > >     to allocate ZONE_DMA pages.
> > > 
> > > 
> > >     2) let's check all callers of kmalloc with GFP_DMA
> > >     if they really need GFP_DMA flag and replace those by DMA API or
> > >     just remove GFP_DMA from kmalloc()
> > > 
> > >     3) Drop support for allocating DMA memory from slab allocator
> > >     (as Christoph Hellwig said) and convert them to use DMA32
> > 
> > 	(as Christoph Hellwig said) and convert them to use *DMA API*
> > 
> > >     and see what happens
> 
> This is the right thing to do, but it will take a while.  In fact
> I don't think we really need the warning in step 1,

Hmm, I think step 1) will be needed if someone is allocating pages from the
DMA zone without using kmalloc or the DMA API (for example, allocating
directly from the buddy allocator). Are there such cases?
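
For illustration, the case in question would be a direct buddy allocation
like the following (a hypothetical callsite, not from any real driver):

    /* Bypasses both kmalloc and the DMA API; warn_alloc() fires when
     * ZONE_DMA has no managed pages. */
    struct page *page = alloc_pages(GFP_KERNEL | GFP_DMA, 2);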

> a simple grep
> already allows to go over them.  I just looked at the uses of GFP_DMA
> in drivers/scsi for example, and all but one look bogus.
>

That's good. This cleanup will also remove unnecessary limitations.

> > > > > Yeah, I have the same guess too for get_capabilities(), not sure about other
> > > > > callers. Or, as ChristophL and ChristophH said(Sorry, not sure if this is
> > > > > the right way to call people when the first name is the same. Correct me if
> > > > > it's wrong), any buffer requested from kmalloc can be used by device driver.
> > > > > Means device enforces getting memory inside addressing limit for those
> > > > > DMA transferring buffer which is usually large, Megabytes level with
> > > > > vmalloc() or alloc_pages(), but doesn't care about this kind of small
> > > > > piece buffer memory allocated with kmalloc()? Just a guess, please tell
> > > > > a counter example if anyone happens to know, it could be
> > > > > easy.
> 
> The way this works is that the dma_map* calls will bounce buffer memory
> that doesn't fall into the addressing limitations.  This is a performance
> overhead, but allows drivers to address all memory in a system.  If the
> driver controls memory allocation it should use one of the dma_alloc_*
> APIs that allocate addressable memory from the start.  The allocator
> will dip into ZONE_DMA and ZONE_DMA32 when needed.

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH v3 5/5] mm/slub: do not create dma-kmalloc if no managed pages in DMA zone
  2021-12-17 11:38                 ` Hyeonggon Yoo
@ 2021-12-20  7:32                   ` Baoquan He
  -1 siblings, 0 replies; 74+ messages in thread
From: Baoquan He @ 2021-12-20  7:32 UTC (permalink / raw)
  To: Hyeonggon Yoo
  Cc: Christoph Hellwig, Vlastimil Babka, linux-kernel, linux-mm, akpm,
	cl, John.p.donnelly, kexec, stable, Pekka Enberg, David Rientjes,
	Joonsoo Kim

On 12/17/21 at 11:38am, Hyeonggon Yoo wrote:
> On Wed, Dec 15, 2021 at 08:27:10AM +0100, Christoph Hellwig wrote:
> > On Wed, Dec 15, 2021 at 07:03:35AM +0000, Hyeonggon Yoo wrote:
> > > I'm not sure that allocating from ZONE_DMA32 instead of ZONE_DMA
> > > for kdump kernel is a nice way to solve this problem.
> > 
> > What is the problem with zones in kdump kernels?
> > 
> > > Devices that require ZONE_DMA memory are rare but we still support them.
> > 
> > Indeed.
> > 
> > > >     1) Do not call warn_alloc in page allocator if it will always fail
> > > >     to allocate ZONE_DMA pages.
> > > > 
> > > > 
> > > >     2) let's check all callers of kmalloc with GFP_DMA
> > > >     if they really need GFP_DMA flag and replace those by DMA API or
> > > >     just remove GFP_DMA from kmalloc()
> > > > 
> > > >     3) Drop support for allocating DMA memory from slab allocator
> > > >     (as Christoph Hellwig said) and convert them to use DMA32
> > > 
> > > 	(as Christoph Hellwig said) and convert them to use *DMA API*
> > > 
> > > >     and see what happens
> > 
> > This is the right thing to do, but it will take a while.  In fact
> > I don't think we really need the warning in step 1,
> 
> Hmm I think step 1) will be needed if someone is allocating pages from
> DMA zone not using kmalloc or DMA API. (for example directly allocating
> from buddy allocator) is there such cases?

I think Christoph meant we should take off the warning. I will post a patch
to mute the warning when a page is requested from a DMA zone which has no
managed pages.

> 
> > a simple grep
> > already allows to go over them.  I just looked at the uses of GFP_DMA
> > in drivers/scsi for example, and all but one look bogus.
> >
> 
> That's good. This cleanup will also remove unnecessary limitations.

I searched and investigated several callsites where kmalloc(GFP_DMA) is
called, e.g. drivers/scsi/sr.c: sr_probe(). The scsi sr driver doesn't
check the DMA addressing capability at all, e.g. the DMA limit, to set the
dma mask or coherent_dma_mask. If we want to convert the kmalloc(GFP_DMA)
calls to the dma_alloc* API, a scsi sr driver developer/expert's suggestion
and help is necessary: either someone who knows this well helps to change
it, or gives a suggestion on how to change it so that I can do it.
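
For the sake of discussion, such a conversion could look roughly like the
sketch below; this is hypothetical, reusing the 512-byte size from
get_capabilities(), and is not a proposal for how sr.c must do it:

    void *buffer;
    dma_addr_t dma;

    buffer = dma_alloc_coherent(&sdev->sdev_gendev, 512, &dma, GFP_KERNEL);
    if (!buffer)
        return;
    /* ... issue the MODE SENSE into buffer ... */
    dma_free_coherent(&sdev->sdev_gendev, 512, buffer, dma);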

> 
> > > > > > Yeah, I have the same guess too for get_capabilities(), not sure about other
> > > > > > callers. Or, as ChristophL and ChristophH said(Sorry, not sure if this is
> > > > > > the right way to call people when the first name is the same. Correct me if
> > > > > > it's wrong), any buffer requested from kmalloc can be used by device driver.
> > > > > > Means device enforces getting memory inside addressing limit for those
> > > > > > DMA transferring buffer which is usually large, Megabytes level with
> > > > > > vmalloc() or alloc_pages(), but doesn't care about this kind of small
> > > > > > piece buffer memory allocated with kmalloc()? Just a guess, please tell
> > > > > > a counter example if anyone happens to know, it could be
> > > > > > easy.
> > 
> > The way this works is that the dma_map* calls will bounce buffer memory
> > that doesn't fall into the addressing limitations.  This is a performance
> > overhead, but allows drivers to address all memory in a system.  If the
> > driver controls memory allocation it should use one of the dma_alloc_*
> > APIs that allocate addressable memory from the start.  The allocator
> > will dip into ZONE_DMA and ZONE_DMA32 when needed.
> 


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH v3 5/5] mm/slub: do not create dma-kmalloc if no managed pages in DMA zone
  2021-12-17 11:38         ` Hyeonggon Yoo
@ 2021-12-21  8:56           ` Christoph Hellwig
  -1 siblings, 0 replies; 74+ messages in thread
From: Christoph Hellwig @ 2021-12-21  8:56 UTC (permalink / raw)
  To: Hyeonggon Yoo
  Cc: Baoquan He, linux-kernel, linux-mm, akpm, hch, cl,
	John.p.donnelly, kexec, stable, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Vlastimil Babka

On Fri, Dec 17, 2021 at 11:38:27AM +0000, Hyeonggon Yoo wrote:
> My understanding is that any buffer requested from kmalloc (without
> GFP_DMA/DMA32) can be used by a device driver because it allocates
> contiguous physical memory. That doesn't mean a buffer allocated
> with kmalloc is free of addressing limitations.

Yes.

> 
> The addressing limitation comes from the capability of the device, not
> the allocation size. If you allocate memory using alloc_pages() or
> kmalloc(), the device has the same limitation. And vmalloc memory can't
> be used for devices because they have no MMU.

vmalloc can be used as well; it just needs to be set up as a scatterlist,
and needs a little love on DMA-challenged platforms in the form of the
invalidate_kernel_vmap_range and flush_kernel_vmap_range helpers.
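
A rough sketch of that, not taken from any patch in this thread ("dev",
"vbuf" and "size" are placeholders, size assumed page aligned):

	struct scatterlist *sg;
	void *p = vbuf;
	int i, nr = size >> PAGE_SHIFT;

	sg = kmalloc_array(nr, sizeof(*sg), GFP_KERNEL);
	if (!sg)
		return -ENOMEM;
	sg_init_table(sg, nr);
	for (i = 0; i < nr; i++, p += PAGE_SIZE)
		sg_set_page(&sg[i], vmalloc_to_page(p), PAGE_SIZE, 0);

	if (!dma_map_sg(dev, sg, nr, DMA_FROM_DEVICE)) {
		kfree(sg);
		return -EIO;
	}
	/* ... DMA from the device ... */
	dma_unmap_sg(dev, sg, nr, DMA_FROM_DEVICE);
	/* on the DMA-challenged platforms, before the CPU reads the data: */
	invalidate_kernel_vmap_range(vbuf, size);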

> But we can map memory outside the DMA zone into a bounce buffer (which
> resides in the DMA zone) using the DMA API.

Yes, although in a few specific cases the bounce buffer could also come
from somewhere else.


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH v3 5/5] mm/slub: do not create dma-kmalloc if no managed pages in DMA zone
  2021-12-21  8:56           ` Christoph Hellwig
@ 2021-12-22 12:37             ` Hyeonggon Yoo
  -1 siblings, 0 replies; 74+ messages in thread
From: Hyeonggon Yoo @ 2021-12-22 12:37 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Baoquan He, linux-kernel, linux-mm, akpm, cl, John.p.donnelly,
	kexec, stable, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Vlastimil Babka

Hello Christoph.

On Tue, Dec 21, 2021 at 09:56:23AM +0100, Christoph Hellwig wrote:
> On Fri, Dec 17, 2021 at 11:38:27AM +0000, Hyeonggon Yoo wrote:
> > My understanding is that any buffer requested from kmalloc (without
> > GFP_DMA/DMA32) can be used by a device driver because it allocates
> > contiguous physical memory. That doesn't mean a buffer allocated
> > with kmalloc is free of addressing limitations.
> 
> Yes.
> 
> > 
> > The addressing limitation comes from the capability of the device, not
> > the allocation size. If you allocate memory using alloc_pages() or
> > kmalloc(), the device has the same limitation. And vmalloc memory can't
> > be used for devices because they have no MMU.
> 
> vmalloc can be used as well; it just needs to be set up as a scatterlist,
> and needs a little love on DMA-challenged platforms in the form of the
> invalidate_kernel_vmap_range and flush_kernel_vmap_range helpers.

Oh, I misunderstood this. The underlying physical pages of
vmalloc()-allocated memory can be mapped using the DMA API, and they
need to be set up as a scatterlist because the allocated memory is not
physically contiguous. Right?

BTW, looking at the API, I think the scsi case can be converted to use
dma_alloc_pages(). But the driver requires a 512-byte buffer, and the
API allocates at page granularity at minimum.

It's not a big problem here as it allocates a single buffer, but in
other cases maybe not. Can't we use a dma pool for non-coherent pages?
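
For reference, the dma_alloc_pages() variant I have in mind would look
something like this sketch (assuming a suitable "dev"; the pages are
not coherent, and the 512-byte buffer gets rounded up to a page):

	struct page *page;
	dma_addr_t dma;
	unsigned char *buffer;

	page = dma_alloc_pages(dev, PAGE_SIZE, &dma, DMA_BIDIRECTIONAL,
			       GFP_KERNEL);
	if (!page)
		return -ENOMEM;
	buffer = page_address(page);	/* 512 bytes used, rest wasted */
	/* ... */
	dma_free_pages(dev, PAGE_SIZE, page, dma, DMA_BIDIRECTIONAL);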

Thanks,
Hyeonggon.

> > But we can map memory outside the DMA zone into a bounce buffer (which
> > resides in the DMA zone) using the DMA API.
> 
> Yes, although in a few specific cases the bounce buffer could also come
> from somewhere else.

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH v3 5/5] mm/slub: do not create dma-kmalloc if no managed pages in DMA zone
  2021-12-22 12:37             ` Hyeonggon Yoo
@ 2021-12-23  8:52               ` Christoph Hellwig
  -1 siblings, 0 replies; 74+ messages in thread
From: Christoph Hellwig @ 2021-12-23  8:52 UTC (permalink / raw)
  To: Hyeonggon Yoo
  Cc: Christoph Hellwig, Baoquan He, linux-kernel, linux-mm, akpm, cl,
	John.p.donnelly, kexec, stable, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Vlastimil Babka

On Wed, Dec 22, 2021 at 12:37:03PM +0000, Hyeonggon Yoo wrote:
> Oh, I misunderstood this. The underlying physical pages of
> vmalloc()-allocated memory can be mapped using the DMA API, and they
> need to be set up as a scatterlist because the allocated memory is not
> physically contiguous. Right?

Yes.

> BTW, looking at the API, I think the scsi case can be converted to use
> dma_alloc_pages(). But the driver requires a 512-byte buffer, and the
> API allocates at page granularity at minimum.

Overallocating is not generally a problem, but if the allocations are for
a slow path it might make more sense to stick to dma_map_* and bounce
buffer if needed.
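
That is, for a slow path the allocation can stay a plain kmalloc()
without GFP_DMA, along these lines (generic sketch, "dev" assumed):

	unsigned char *buf;
	dma_addr_t addr;

	buf = kmalloc(512, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;
	/* dma_map_single() bounces if buf is beyond the device's mask */
	addr = dma_map_single(dev, buf, 512, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, addr)) {
		kfree(buf);
		return -EIO;
	}
	/* ... run the command ... */
	dma_unmap_single(dev, addr, 512, DMA_FROM_DEVICE);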

> It's not a big problem here as it allocates a single buffer, but in
> other cases maybe not. Can't we use a dma pool for non-coherent pages?

No.

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH v3 5/5] mm/slub: do not create dma-kmalloc if no managed pages in DMA zone
  2021-12-15  7:27               ` Christoph Hellwig
@ 2022-01-07 11:56                 ` Hyeonggon Yoo
  -1 siblings, 0 replies; 74+ messages in thread
From: Hyeonggon Yoo @ 2022-01-07 11:56 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Vlastimil Babka, Baoquan He, linux-kernel, linux-mm, akpm, cl,
	John.p.donnelly, kexec, stable, Pekka Enberg, David Rientjes,
	Joonsoo Kim

On Wed, Dec 15, 2021 at 08:27:10AM +0100, Christoph Hellwig wrote:
> On Wed, Dec 15, 2021 at 07:03:35AM +0000, Hyeonggon Yoo wrote:
> > I'm not sure that allocating from ZONE_DMA32 instead of ZONE_DMA
> > for the kdump kernel is a nice way to solve this problem.
> 
> What is the problem with zones in kdump kernels?
> 
> > Devices that require ZONE_DMA memory are rare, but we still support them.
> 
> Indeed.
> 
> > >     1) Do not call warn_alloc in the page allocator if it will always
> > >     fail to allocate ZONE_DMA pages.
> > > 
> > > 
> > >     2) let's check all callers of kmalloc with GFP_DMA
> > >     if they really need GFP_DMA flag and replace those by DMA API or
> > >     just remove GFP_DMA from kmalloc()
> > > 
> > >     3) Drop support for allocating DMA memory from slab allocator
> > >     (as Christoph Hellwig said) and convert them to use DMA32
> > 
> > 	(as Christoph Hellwig said) and convert them to use *DMA API*
> > 
> > >     and see what happens
> 
> This is the right thing to do, but it will take a while.  In fact
> I don't think we really need the warning in step 1; a simple grep
> already allows us to go over them.  I just looked at the uses of GFP_DMA
> in drivers/scsi for example, and all but one look bogus.
> 
> > > > > Yeah, I have the same guess for get_capabilities(), not sure about
> > > > > other callers. Or, as ChristophL and ChristophH said (sorry, not sure
> > > > > if this is the right way to refer to people who share a first name;
> > > > > correct me if it's wrong), any buffer requested from kmalloc can be
> > > > > used by a device driver. Meaning: the device enforces getting memory
> > > > > inside its addressing limit for the large DMA transfer buffers, usually
> > > > > megabytes allocated with vmalloc() or alloc_pages(), but doesn't care
> > > > > about the small buffers allocated with kmalloc()? Just a guess; please
> > > > > point out a counterexample if anyone happens to know one, it could well
> > > > > be easy to find.
> 
> The way this works is that the dma_map* calls will bounce buffer memory
> that does not fall into the addressing limitations.  This is a performance
> overhead, but allows drivers to address all memory in a system.  If the
> driver controls memory allocation it should use one of the dma_alloc_*
> APIs that allocate addressable memory from the start.  The allocator
> will dip into ZONE_DMA and ZONE_DMA32 when needed.

Hello Christoph. Baoquan and I started this cleanup, but we're a bit
confused, and I want to ask you something.

-   Did you mean that dma_map_* can handle an arbitrary buffer (and that
    dma_map_* will bounce-buffer when necessary)? Can we assume this on
    every architecture and bus?

    Reading the DMA API documentation and code (dma_map_page_attrs(),
    dma_direct_map_page()), I'm not sure about that; a simplified sketch
    of the mask check I have in mind follows after these questions.

    In the documentation: (dma_map_single)
    	Further, the DMA address of the memory must be within the
	dma_mask of the device (the dma_mask is a bit mask of the
	addressable region for the device, i.e., if the DMA address of
	the memory ANDed with the dma_mask is still equal to the DMA
	address, then the device can perform DMA to the memory).  To
	ensure that the memory allocated by kmalloc is within the dma_mask,
	the driver may specify various platform-dependent flags to restrict
	the DMA address range of the allocation (e.g., on x86, GFP_DMA
	guarantees to be within the first 16MB of available DMA addresses,
	as required by ISA devices).

-   In what function does the DMA API do bounce buffering?
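
For the mask part of the first question, the check the documentation
describes boils down to something like this simplified sketch (modeled
loosely on dma_capable() in the dma-direct code, so treat the details
as my assumption):

	static bool addressable(struct device *dev, dma_addr_t addr,
				size_t size)
	{
		dma_addr_t end = addr + size - 1;

		return end <= min_not_zero(*dev->dma_mask,
					   dev->bus_dma_limit);
	}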

Thanks a lot,
Hyeonggon

^ permalink raw reply	[flat|nested] 74+ messages in thread

end of thread, other threads:[~2022-01-07 11:57 UTC | newest]

Thread overview: 74+ messages
2021-12-13 12:27 [PATCH v3 0/5] Avoid requesting page from DMA zone when no managed pages Baoquan He
2021-12-13 12:27 ` Baoquan He
2021-12-13 12:27 ` [PATCH v3 1/5] docs: kernel-parameters: Update to reflect the current default size of atomic pool Baoquan He
2021-12-13 12:27   ` Baoquan He
2021-12-13 14:20   ` john.p.donnelly
2021-12-13 14:20     ` john.p.donnelly
2021-12-13 12:27 ` [PATCH v3 2/5] dma-pool: allow user to disable " Baoquan He
2021-12-13 12:27   ` Baoquan He
2021-12-13 14:21   ` john.p.donnelly
2021-12-13 14:21     ` john.p.donnelly
2021-12-13 12:27 ` [PATCH v3 3/5] mm_zone: add function to check if managed dma zone exists Baoquan He
2021-12-13 12:27   ` Baoquan He
2021-12-13 14:22   ` john.p.donnelly
2021-12-13 14:22     ` john.p.donnelly
2021-12-16 10:52   ` David Hildenbrand
2021-12-16 10:52     ` David Hildenbrand
2021-12-13 12:27 ` [PATCH v3 4/5] dma/pool: create dma atomic pool only if dma zone has managed pages Baoquan He
2021-12-13 12:27   ` Baoquan He
2021-12-13 12:27   ` Baoquan He
2021-12-13 14:23   ` john.p.donnelly
2021-12-13 14:23     ` john.p.donnelly
2021-12-13 14:23     ` john.p.donnelly
2021-12-13 12:27 ` [PATCH v3 5/5] mm/slub: do not create dma-kmalloc if no managed pages in DMA zone Baoquan He
2021-12-13 12:27   ` Baoquan He
2021-12-13 13:43   ` Hyeonggon Yoo
2021-12-13 13:43     ` Hyeonggon Yoo
2021-12-14  5:32     ` Baoquan He
2021-12-14  5:32       ` Baoquan He
2021-12-14 10:09       ` Vlastimil Babka
2021-12-14 10:09         ` Vlastimil Babka
2021-12-14 10:28         ` Christoph Lameter
2021-12-14 10:28           ` Christoph Lameter
2021-12-15  4:48         ` Hyeonggon Yoo
2021-12-15  4:48           ` Hyeonggon Yoo
2021-12-15  7:03           ` Hyeonggon Yoo
2021-12-15  7:03             ` Hyeonggon Yoo
2021-12-15  7:27             ` Christoph Hellwig
2021-12-15  7:27               ` Christoph Hellwig
2021-12-15 10:34               ` Vlastimil Babka
2021-12-15 10:34                 ` Vlastimil Babka
2021-12-15 11:51                 ` David Laight
2021-12-15 11:51                   ` David Laight
2021-12-15 13:41                 ` Baoquan He
2021-12-15 13:41                   ` Baoquan He
2021-12-17 11:38               ` Hyeonggon Yoo
2021-12-17 11:38                 ` Hyeonggon Yoo
2021-12-20  7:32                 ` Baoquan He
2021-12-20  7:32                   ` Baoquan He
2022-01-07 11:56               ` Hyeonggon Yoo
2022-01-07 11:56                 ` Hyeonggon Yoo
2021-12-15 14:42             ` Baoquan He
2021-12-15 14:42               ` Baoquan He
2021-12-15 10:08         ` Baoquan He
2021-12-15 10:08           ` Baoquan He
2021-12-17 11:38       ` Hyeonggon Yoo
2021-12-17 11:38         ` Hyeonggon Yoo
2021-12-21  8:56         ` Christoph Hellwig
2021-12-21  8:56           ` Christoph Hellwig
2021-12-22 12:37           ` Hyeonggon Yoo
2021-12-22 12:37             ` Hyeonggon Yoo
2021-12-23  8:52             ` Christoph Hellwig
2021-12-23  8:52               ` Christoph Hellwig
2021-12-13 14:24   ` john.p.donnelly
2021-12-13 14:24     ` john.p.donnelly
2021-12-14 16:31   ` Christoph Hellwig
2021-12-14 16:31     ` Christoph Hellwig
2021-12-14 17:07     ` john.p.donnelly
2021-12-14 17:07       ` john.p.donnelly
2021-12-15  7:27       ` Christoph Hellwig
2021-12-15  7:27         ` Christoph Hellwig
2021-12-13 21:05 ` [PATCH v3 0/5] Avoid requesting page from DMA zone when no managed pages Andrew Morton
2021-12-13 21:05   ` Andrew Morton
2021-12-14  0:35   ` Baoquan He
2021-12-14  0:35     ` Baoquan He
