* [PATCH 0/2] powerpc/fadump: handle CMA activation failure appropriately
@ 2021-12-20 19:34 ` Hari Bathini
  0 siblings, 0 replies; 13+ messages in thread
From: Hari Bathini @ 2021-12-20 19:34 UTC (permalink / raw)
  To: akpm, linux-mm, mpe, linuxppc-dev
  Cc: david, osalvador, mike.kravetz, mahesh, sourabhjain, Hari Bathini

While commit a4e92ce8e4c8 ("powerpc/fadump: Reservationless firmware
assisted dump") introduced Contiguous Memory Allocator (CMA) based
reservation for fadump, it came with the assumption that the memory
remains reserved even if CMA activation fails. That assumption ensures
no kernel page ends up in the reserved memory region, which cannot be
mapped into /proc/vmcore.

But commit 072355c1cf2d ("mm/cma: expose all pages to the buddy if
activation of an area fails") started returning all pages to the buddy
allocator when CMA activation fails. This leads to errors like the one
below when running the crash utility on the vmcore of a kernel that has
both of the above commits:

  crash: seek error: kernel virtual address: <from reserved region>

because the reserved memory region ended up holding the kernel pages
the crash utility was looking for. Fix this by introducing an option in
CMA to opt out of exposing pages to the buddy allocator on CMA
activation failure.
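
For reference, here is a rough sketch of how a CMA user would opt out
with the interface added by patch 1. The setup function and the
base/size handling are only illustrative, not the actual fadump code:

	#include <linux/cma.h>

	static struct cma *fadump_cma;

	static int __init fadump_setup_cma(phys_addr_t base, phys_addr_t size)
	{
		int rc;

		/* Carve the reserved range out as a named CMA area. */
		rc = cma_init_reserved_mem(base, size, 0, "fadump_cma",
					   &fadump_cma);
		if (rc)
			return rc;

		/*
		 * If activation fails later on, keep the range reserved
		 * instead of handing its pages to the buddy allocator.
		 */
		cma_dont_free_pages_on_error(fadump_cma);
		return 0;
	}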

Hari Bathini (2):
  mm/cma: provide option to opt out from exposing pages on activation
    failure
  powerpc/fadump: opt out from freeing pages on cma activation failure

 arch/powerpc/kernel/fadump.c |  6 ++++++
 include/linux/cma.h          |  2 ++
 mm/cma.c                     | 15 +++++++++++++--
 mm/cma.h                     |  1 +
 4 files changed, 22 insertions(+), 2 deletions(-)

-- 
2.33.1



* [PATCH 1/2] mm/cma: provide option to opt out from exposing pages on activation failure
  2021-12-20 19:34 ` Hari Bathini
@ 2021-12-20 19:34   ` Hari Bathini
  -1 siblings, 0 replies; 13+ messages in thread
From: Hari Bathini @ 2021-12-20 19:34 UTC (permalink / raw)
  To: akpm, linux-mm, mpe, linuxppc-dev
  Cc: david, osalvador, mike.kravetz, mahesh, sourabhjain, Hari Bathini

Commit 072355c1cf2d ("mm/cma: expose all pages to the buddy if
activation of an area fails") started exposing all pages to the buddy
allocator on CMA activation failure. But there can be CMA users that
want to handle the reserved memory differently on CMA activation
failure. Provide an option for such users to opt out of exposing pages
to the buddy allocator.

Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
---
 include/linux/cma.h |  2 ++
 mm/cma.c            | 15 +++++++++++++--
 mm/cma.h            |  1 +
 3 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/include/linux/cma.h b/include/linux/cma.h
index bd801023504b..8c9e229e7080 100644
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@ -50,4 +50,6 @@ extern bool cma_pages_valid(struct cma *cma, const struct page *pages, unsigned
 extern bool cma_release(struct cma *cma, const struct page *pages, unsigned long count);
 
 extern int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data);
+
+extern void cma_dont_free_pages_on_error(struct cma *cma);
 #endif
diff --git a/mm/cma.c b/mm/cma.c
index bc9ca8f3c487..6dffc9b2dafe 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -131,8 +131,10 @@ static void __init cma_activate_area(struct cma *cma)
 	bitmap_free(cma->bitmap);
 out_error:
 	/* Expose all pages to the buddy, they are useless for CMA. */
-	for (pfn = base_pfn; pfn < base_pfn + cma->count; pfn++)
-		free_reserved_page(pfn_to_page(pfn));
+	if (cma->free_pages_on_error) {
+		for (pfn = base_pfn; pfn < base_pfn + cma->count; pfn++)
+			free_reserved_page(pfn_to_page(pfn));
+	}
 	totalcma_pages -= cma->count;
 	cma->count = 0;
 	pr_err("CMA area %s could not be activated\n", cma->name);
@@ -150,6 +152,14 @@ static int __init cma_init_reserved_areas(void)
 }
 core_initcall(cma_init_reserved_areas);
 
+void __init cma_dont_free_pages_on_error(struct cma *cma)
+{
+	if (!cma)
+		return;
+
+	cma->free_pages_on_error = false;
+}
+
 /**
  * cma_init_reserved_mem() - create custom contiguous area from reserved memory
  * @base: Base address of the reserved area
@@ -204,6 +214,7 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
 	cma->base_pfn = PFN_DOWN(base);
 	cma->count = size >> PAGE_SHIFT;
 	cma->order_per_bit = order_per_bit;
+	cma->free_pages_on_error = true;
 	*res_cma = cma;
 	cma_area_count++;
 	totalcma_pages += (size / PAGE_SIZE);
diff --git a/mm/cma.h b/mm/cma.h
index 2c775877eae2..9e2438f9233d 100644
--- a/mm/cma.h
+++ b/mm/cma.h
@@ -30,6 +30,7 @@ struct cma {
 	/* kobject requires dynamic object */
 	struct cma_kobject *cma_kobj;
 #endif
+	bool free_pages_on_error;
 };
 
 extern struct cma cma_areas[MAX_CMA_AREAS];
-- 
2.33.1



* [PATCH 2/2] powerpc/fadump: opt out from freeing pages on cma activation failure
  2021-12-20 19:34 ` Hari Bathini
@ 2021-12-20 19:34   ` Hari Bathini
  -1 siblings, 0 replies; 13+ messages in thread
From: Hari Bathini @ 2021-12-20 19:34 UTC (permalink / raw)
  To: akpm, linux-mm, mpe, linuxppc-dev
  Cc: david, osalvador, mike.kravetz, mahesh, sourabhjain, Hari Bathini

With commit a4e92ce8e4c8 ("powerpc/fadump: Reservationless firmware
assisted dump"), Contiguous Memory Allocator (CMA) based reservation
was introduced in fadump. That change was aimed at using CMA to let
applications utilize the memory reserved for fadump while keeping it
free of kernel pages. The assumption was that, even if CMA activation
fails for whatever reason, the memory still remains reserved to prevent
it from being used for kernel pages. But commit 072355c1cf2d ("mm/cma:
expose all pages to the buddy if activation of an area fails") breaks
this assumption, as it started exposing all pages to the buddy
allocator on CMA activation failure. That leads to errors like the one
below when running the crash utility on the vmcore of a kernel that has
both of the above commits:

  crash: seek error: kernel virtual address: <from reserved region>

To fix this problem, opt out of exposing pages to the buddy allocator
on CMA activation failure for the fadump reserved memory.

Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
---
 arch/powerpc/kernel/fadump.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
index b7ceb041743c..d1f7f79dfbd8 100644
--- a/arch/powerpc/kernel/fadump.c
+++ b/arch/powerpc/kernel/fadump.c
@@ -112,6 +112,12 @@ static int __init fadump_cma_init(void)
 		return 1;
 	}
 
+	/*
+	 * If CMA activation fails, do not let the reserved memory be exposed
+	 * to buddy allocator. As good as 'fadump=nocma' case.
+	 */
+	cma_dont_free_pages_on_error(fadump_cma);
+
 	/*
 	 * So we now have successfully initialized cma area for fadump.
 	 */
-- 
2.33.1



* Re: [PATCH 1/2] mm/cma: provide option to opt out from exposing pages on activation failure
  2021-12-20 19:34   ` Hari Bathini
@ 2021-12-21 18:48   ` David Hildenbrand
  2022-01-06 12:01       ` Hari Bathini
  -1 siblings, 1 reply; 13+ messages in thread
From: David Hildenbrand @ 2021-12-21 18:48 UTC (permalink / raw)
  To: Hari Bathini, akpm, linux-mm, mpe, linuxppc-dev
  Cc: mike.kravetz, mahesh, sourabhjain, osalvador

On 20.12.21 20:34, Hari Bathini wrote:
> Commit 072355c1cf2d ("mm/cma: expose all pages to the buddy if
> activation of an area fails") started exposing all pages to buddy
> allocator on CMA activation failure. But there can be CMA users that
> want to handle the reserved memory differently on CMA allocation
> failure. Provide an option to opt out from exposing pages to buddy
> for such cases.

Can you elaborate on why that is important and what the target user
can actually do with it?

It certainly cannot do CMA allocations :)

-- 
Thanks,

David / dhildenb


* Re: [PATCH 1/2] mm/cma: provide option to opt out from exposing pages on activation failure
  2021-12-21 18:48   ` David Hildenbrand
@ 2022-01-06 12:01       ` Hari Bathini
  0 siblings, 0 replies; 13+ messages in thread
From: Hari Bathini @ 2022-01-06 12:01 UTC (permalink / raw)
  To: David Hildenbrand, akpm, linux-mm, mpe, linuxppc-dev
  Cc: mike.kravetz, mahesh, sourabhjain, osalvador



On 22/12/21 12:18 am, David Hildenbrand wrote:
> On 20.12.21 20:34, Hari Bathini wrote:
>> Commit 072355c1cf2d ("mm/cma: expose all pages to the buddy if
>> activation of an area fails") started exposing all pages to buddy
>> allocator on CMA activation failure. But there can be CMA users that
>> want to handle the reserved memory differently on CMA allocation
>> failure. Provide an option to opt out from exposing pages to buddy
>> for such cases.

Hi David,

Sorry, I could not get back to you on this sooner. I was out on
vacation and missed this.

> 
> Can you elaborate why that is important and what the target user can
> actually do with it?
Previously, firmware-assisted dump [1] used to reserve the memory it
needs for booting a capture kernel and offloading /proc/vmcore. This
memory is reserved, basically blocked from being used by the production
kernel, to ensure the kernel crash context is not lost when booting a
capture kernel from this memory chunk.

But [2] started using CMA instead, to let the memory be used at least
in some cases, as long as it never holds kernel pages. So, the
intention in using CMA was to keep the memory unused if CMA activation
fails, and to let it be used for some purpose (movable allocations), if
at all, only when CMA activation succeeds. But [3] breaks that
assumption, resulting in the weird errors seen on a vmcore captured
with fadump when CMA activation fails.

To answer the question: fadump does not want the memory to be used for
kernel pages if CMA activation fails...


[1] https://github.com/torvalds/linux/blob/master/Documentation/powerpc/firmware-assisted-dump.rst
[2] https://github.com/torvalds/linux/commit/a4e92ce8e4c8
[3] https://github.com/torvalds/linux/commit/072355c1cf2d

Thanks
Hari


* Re: [PATCH 1/2] mm/cma: provide option to opt out from exposing pages on activation failure
  2022-01-06 12:01       ` Hari Bathini
@ 2022-01-11 14:36         ` David Hildenbrand
  -1 siblings, 0 replies; 13+ messages in thread
From: David Hildenbrand @ 2022-01-11 14:36 UTC (permalink / raw)
  To: Hari Bathini, akpm, linux-mm, mpe, linuxppc-dev
  Cc: mike.kravetz, mahesh, sourabhjain, osalvador

On 06.01.22 13:01, Hari Bathini wrote:
> 
> 
> On 22/12/21 12:18 am, David Hildenbrand wrote:
>> On 20.12.21 20:34, Hari Bathini wrote:
>>> Commit 072355c1cf2d ("mm/cma: expose all pages to the buddy if
>>> activation of an area fails") started exposing all pages to buddy
>>> allocator on CMA activation failure. But there can be CMA users that
>>> want to handle the reserved memory differently on CMA allocation
>>> failure. Provide an option to opt out from exposing pages to buddy
>>> for such cases.
> 
> Hi David,
> 
> Sorry, I could not get back on this sooner. I went out on vacation
> and missed this.
> .
> 
>>
>> Can you elaborate why that is important and what the target user can
>> actually do with it?
> Previously, firmware-assisted dump [1] used to reserve memory that it 
> needs for booting a capture kernel & offloading /proc/vmcore.
> This memory is reserved, basically blocked from being used by
> production kernel, to ensure kernel crash context is not lost on
> booting into a capture kernel from this memory chunk.
> 
> But [2] started using CMA instead to let the memory be used at least
> in some cases as long as this memory is not going to have kernel pages. 
> So, the intention in using CMA was to keep the memory unused if CMA
> activation fails and only let it be used for some purpose, if at all,
> if CMA activation succeeds. But [3] breaks that assumption reporting
> weird errors on vmcore captured with fadump, when CMA activation fails.
> 
> To answer the question, fadump does not want the memory to be used for
> kernel pages, if CMA activation fails...

Okay, so what you want is a reserved region, and if possible, let CMA
use that memory for other (movable allocation) purposes until you
actually need that area and free it up by using CMA. If CMA cannot use
the region because of zone issues, you just want that region to stay
reserved.

I guess the biggest difference from other CMA users is that it can make
use of the memory even if not allocated via CMA -- because it's going
to make use of the physical memory range indirectly via a HW facility,
not via any "struct page" access.


I wonder if we can make the terminology a bit clearer. The freeing part
is a bit confusing, because init_cma_reserved_pageblock() essentially
also frees pages, just to the MIGRATE_CMA lists ... what you want is to
treat it like a simple memblock allocation/reservation on error.

What about:
* cma->reserve_pages_on_error that defaults to false
* void __init cma_reserve_pages_on_error(struct cma *cma)
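
To make the suggestion concrete, roughly something like this on top of
patch 1 (an untested sketch, just to illustrate the proposed naming):

	/* mm/cma.h */
	struct cma {
		/* ... existing fields ... */
		/* Keep the area reserved if activation fails (default: false). */
		bool reserve_pages_on_error;
	};

	/* mm/cma.c */
	void __init cma_reserve_pages_on_error(struct cma *cma)
	{
		cma->reserve_pages_on_error = true;
	}

	/* cma_activate_area() error path */
	out_error:
		if (!cma->reserve_pages_on_error) {
			/* Expose all pages to the buddy, they are useless for CMA. */
			for (pfn = base_pfn; pfn < base_pfn + cma->count; pfn++)
				free_reserved_page(pfn_to_page(pfn));
		}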


-- 
Thanks,

David / dhildenb



* Re: [PATCH 1/2] mm/cma: provide option to opt out from exposing pages on activation failure
  2022-01-11 14:36         ` David Hildenbrand
@ 2022-01-12  9:50           ` Hari Bathini
  -1 siblings, 0 replies; 13+ messages in thread
From: Hari Bathini @ 2022-01-12  9:50 UTC (permalink / raw)
  To: David Hildenbrand, akpm, linux-mm, mpe, linuxppc-dev
  Cc: osalvador, mahesh, sourabhjain, mike.kravetz



On 11/01/22 8:06 pm, David Hildenbrand wrote:
> On 06.01.22 13:01, Hari Bathini wrote:
>>
>> To answer the question, fadump does not want the memory to be used for
>> kernel pages, if CMA activation fails...
> 
> Okay, so what you want is a reserved region, and if possible, let CMA
> use that memory for other (movable allocation) purposes until you
> actually need that area and free it up by using CMA. If CMA cannot use
> the region because of zone issues, you just want that region to stay
> reserved.
> 

Right.

> I guess the biggest different to other CMA users is that it can make use
> of the memory even if not allocated via CMA -- because it's going to
> make use of the the physical memory range indirectly via a HW facility,
> not via any "struct page" access.
> 
> 
> I wonder if we can make the terminology a bit clearer, the freeing part
> is a bit confusing, because init_cma_reserved_pageblock() essentially
> also frees pages, just to the MIGRATE_CMA lists ... what you want is to
> treat it like a simple memblock allocation/reservation on error.

> What about:
> * cma->reserve_pages_on_error that defaults to false
> * void __init cma_reserve_pages_on_error(struct cma *cma)

Yeah, this change does make things a bit clearer.
I will send out a v2 with the change.


