* [PATCH v3 0/2] powerpc/fadump: handle CMA activation failure appropriately
From: Hari Bathini @ 2022-01-17  7:52 UTC
  To: akpm, david, linux-mm, mpe, linuxppc-dev
  Cc: osalvador, mike.kravetz, mahesh, sourabhjain

While commit a4e92ce8e4c8 ("powerpc/fadump: Reservationless firmware
assisted dump") introduced the Linux kernel's Contiguous Memory
Allocator (CMA) based reservation for fadump, it came with the
assumption that the memory remains reserved even if CMA activation
fails. This ensures that no kernel pages reside in the reserved
memory region, which cannot be mapped into /proc/vmcore.

But commit 072355c1cf2d ("mm/cma: expose all pages to the buddy if
activation of an area fails") started returning all pages to the
buddy allocator if CMA activation fails. This led to warning messages
like the one below when running the crash utility on the vmcore of a
kernel with both of the above commits:

  crash: seek error: kernel virtual address: <from reserved region>

as the reserved memory region ended up containing kernel pages that
the crash utility was looking for. Fix this by introducing an option
in CMA to opt out of exposing pages to the buddy allocator on CMA
activation failure.
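
To make the new option concrete, here is a minimal sketch of how a
CMA user could opt out; the caller, function name, and area name are
hypothetical (fadump's actual call site is in patch 2/2):

  #include <linux/cma.h>

  static struct cma *my_cma;

  static int __init my_cma_setup(phys_addr_t base, phys_addr_t size)
  {
          int rc;

          /* Carve a CMA area out of memory reserved earlier via memblock. */
          rc = cma_init_reserved_mem(base, size, 0, "my_area", &my_cma);
          if (rc)
                  return rc;

          /*
           * If activation of this area fails later in
           * cma_init_reserved_areas(), keep the pages reserved instead
           * of releasing them to the buddy allocator.
           */
          cma_reserve_pages_on_error(my_cma);
          return 0;
  }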

Changes in v3:
* Dropped NULL check in cma_reserve_pages_on_error().
* Dropped explicit initialization of cma->reserve_pages_on_error to
  'false' in cma_init_reserved_mem().
* Added review tags from David.

Changes in v2:
* Replaced cma->free_pages_on_error with cma->reserve_pages_on_error
  & cma_dont_free_pages_on_error() with cma_reserve_pages_on_error()
  to avoid confusion and make the expectation on failure clearer.


Hari Bathini (2):
  mm/cma: provide option to opt out from exposing pages on activation
    failure
  powerpc/fadump: opt out from freeing pages on cma activation failure

 arch/powerpc/kernel/fadump.c |  6 ++++++
 include/linux/cma.h          |  2 ++
 mm/cma.c                     | 11 +++++++++--
 mm/cma.h                     |  1 +
 4 files changed, 18 insertions(+), 2 deletions(-)

-- 
2.34.1



* [PATCH v3 1/2] mm/cma: provide option to opt out from exposing pages on activation failure
From: Hari Bathini @ 2022-01-17  7:52 UTC
  To: akpm, david, linux-mm, mpe, linuxppc-dev
  Cc: osalvador, mike.kravetz, mahesh, sourabhjain

Commit 072355c1cf2d ("mm/cma: expose all pages to the buddy if
activation of an area fails") started exposing all pages to the
buddy allocator on CMA activation failure. But there can be CMA
users that want to handle the reserved memory differently on CMA
activation failure. Provide an option for such cases to opt out of
exposing pages to the buddy allocator.

Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
---

Changes in v3:
* Dropped NULL check in cma_reserve_pages_on_error().
* Dropped explicit initialization of cma->reserve_pages_on_error to
  'false' in cma_init_reserved_mem().
* Added Reviewed-by tag from David.

Changes in v2:
* Changed cma->free_pages_on_error to cma->reserve_pages_on_error and
  cma_dont_free_pages_on_error() to cma_reserve_pages_on_error() to
  avoid confusion.


 include/linux/cma.h |  2 ++
 mm/cma.c            | 11 +++++++++--
 mm/cma.h            |  1 +
 3 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/include/linux/cma.h b/include/linux/cma.h
index bd801023504b..51d540eee18a 100644
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@ -50,4 +50,6 @@ extern bool cma_pages_valid(struct cma *cma, const struct page *pages, unsigned
 extern bool cma_release(struct cma *cma, const struct page *pages, unsigned long count);
 
 extern int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data);
+
+extern void cma_reserve_pages_on_error(struct cma *cma);
 #endif
diff --git a/mm/cma.c b/mm/cma.c
index bc9ca8f3c487..766f1b82b532 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -131,8 +131,10 @@ static void __init cma_activate_area(struct cma *cma)
 	bitmap_free(cma->bitmap);
 out_error:
 	/* Expose all pages to the buddy, they are useless for CMA. */
-	for (pfn = base_pfn; pfn < base_pfn + cma->count; pfn++)
-		free_reserved_page(pfn_to_page(pfn));
+	if (!cma->reserve_pages_on_error) {
+		for (pfn = base_pfn; pfn < base_pfn + cma->count; pfn++)
+			free_reserved_page(pfn_to_page(pfn));
+	}
 	totalcma_pages -= cma->count;
 	cma->count = 0;
 	pr_err("CMA area %s could not be activated\n", cma->name);
@@ -150,6 +152,11 @@ static int __init cma_init_reserved_areas(void)
 }
 core_initcall(cma_init_reserved_areas);
 
+void __init cma_reserve_pages_on_error(struct cma *cma)
+{
+	cma->reserve_pages_on_error = true;
+}
+
 /**
  * cma_init_reserved_mem() - create custom contiguous area from reserved memory
  * @base: Base address of the reserved area
diff --git a/mm/cma.h b/mm/cma.h
index 2c775877eae2..88a0595670b7 100644
--- a/mm/cma.h
+++ b/mm/cma.h
@@ -30,6 +30,7 @@ struct cma {
 	/* kobject requires dynamic object */
 	struct cma_kobject *cma_kobj;
 #endif
+	bool reserve_pages_on_error;
 };
 
 extern struct cma cma_areas[MAX_CMA_AREAS];
-- 
2.34.1



* [PATCH v3 2/2] powerpc/fadump: opt out from freeing pages on cma activation failure
From: Hari Bathini @ 2022-01-17  7:52 UTC
  To: akpm, david, linux-mm, mpe, linuxppc-dev
  Cc: osalvador, mike.kravetz, mahesh, sourabhjain

With commit a4e92ce8e4c8 ("powerpc/fadump: Reservationless firmware
assisted dump"), the Linux kernel's Contiguous Memory Allocator (CMA)
based reservation was introduced in fadump. That change was aimed at
using CMA to let applications utilize the memory reserved for fadump
while blocking it from being used for kernel pages. The assumption
was that, even if CMA activation fails for whatever reason, the
memory still remains reserved, keeping it from being used for kernel
pages. But commit 072355c1cf2d ("mm/cma: expose all pages to the
buddy if activation of an area fails") breaks this assumption, as it
started exposing all pages to the buddy allocator on CMA activation
failure. This led to warning messages like the one below when running
the crash utility on the vmcore of a kernel with both of the above
commits:

  crash: seek error: kernel virtual address: <from reserved region>

To fix this problem, opt out of exposing pages to the buddy allocator
on CMA activation failure for the fadump reserved memory.

Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
Acked-by: David Hildenbrand <david@redhat.com>
---

Changes in v3:
* Added Acked-by tag from David.


 arch/powerpc/kernel/fadump.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
index d03e488cfe9c..d0ad86b67e66 100644
--- a/arch/powerpc/kernel/fadump.c
+++ b/arch/powerpc/kernel/fadump.c
@@ -112,6 +112,12 @@ static int __init fadump_cma_init(void)
 		return 1;
 	}
 
+	/*
+	 *  If CMA activation fails, keep the pages reserved, instead of
+	 *  exposing them to buddy allocator. Same as 'fadump=nocma' case.
+	 */
+	cma_reserve_pages_on_error(fadump_cma);
+
 	/*
 	 * So we now have successfully initialized cma area for fadump.
 	 */
-- 
2.34.1



* Re: [PATCH v3 2/2] powerpc/fadump: opt out from freeing pages on cma activation failure
From: Michael Ellerman @ 2022-01-24  0:45 UTC
  To: Hari Bathini, akpm, david, linux-mm, linuxppc-dev
  Cc: osalvador, mike.kravetz, mahesh, sourabhjain

Hari Bathini <hbathini@linux.ibm.com> writes:
> With commit a4e92ce8e4c8 ("powerpc/fadump: Reservationless firmware
> assisted dump"), the Linux kernel's Contiguous Memory Allocator (CMA)
> based reservation was introduced in fadump. That change was aimed at
> using CMA to let applications utilize the memory reserved for fadump
> while blocking it from being used for kernel pages. The assumption
> was that, even if CMA activation fails for whatever reason, the
> memory still remains reserved, keeping it from being used for kernel
> pages. But commit 072355c1cf2d ("mm/cma: expose all pages to the
> buddy if activation of an area fails") breaks this assumption, as it
> started exposing all pages to the buddy allocator on CMA activation
> failure. This led to warning messages like the one below when running
> the crash utility on the vmcore of a kernel with both of the above
> commits:
>
>   crash: seek error: kernel virtual address: <from reserved region>
>
> To fix this problem, opt out of exposing pages to the buddy allocator
> on CMA activation failure for the fadump reserved memory.
>
> Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
> Acked-by: David Hildenbrand <david@redhat.com>
> ---
>
> Changes in v3:
> * Added Acked-by tag from David.
>
>
>  arch/powerpc/kernel/fadump.c | 6 ++++++
>  1 file changed, 6 insertions(+)

Acked-by: Michael Ellerman <mpe@ellerman.id.au>

cheers

> diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
> index d03e488cfe9c..d0ad86b67e66 100644
> --- a/arch/powerpc/kernel/fadump.c
> +++ b/arch/powerpc/kernel/fadump.c
> @@ -112,6 +112,12 @@ static int __init fadump_cma_init(void)
>  		return 1;
>  	}
>  
> +	/*
> +	 *  If CMA activation fails, keep the pages reserved, instead of
> +	 *  exposing them to buddy allocator. Same as 'fadump=nocma' case.
> +	 */
> +	cma_reserve_pages_on_error(fadump_cma);
> +
>  	/*
>  	 * So we now have successfully initialized cma area for fadump.
>  	 */
> -- 
> 2.34.1


* Re: [PATCH v3 1/2] mm/cma: provide option to opt out from exposing pages on activation failure
From: Hari Bathini @ 2022-01-24  5:21 UTC
  To: akpm, david, linux-mm, mpe, linuxppc-dev
  Cc: mike.kravetz, mahesh, sourabhjain, osalvador

Hi Andrew,

Could you please pick these patches up via the -mm tree?


On 17/01/22 1:22 pm, Hari Bathini wrote:
> Commit 072355c1cf2d ("mm/cma: expose all pages to the buddy if
> activation of an area fails") started exposing all pages to the
> buddy allocator on CMA activation failure. But there can be CMA
> users that want to handle the reserved memory differently on CMA
> activation failure. Provide an option for such cases to opt out of
> exposing pages to the buddy allocator.
> 
> Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
> Reviewed-by: David Hildenbrand <david@redhat.com>
> ---
> 
> Changes in v3:
> * Dropped NULL check in cma_reserve_pages_on_error().
> * Dropped explicit initialization of cma->reserve_pages_on_error to
>    'false' in cma_init_reserved_mem().
> * Added Reviewed-by tag from David.
> 
> Changes in v2:
> * Changed cma->free_pages_on_error to cma->reserve_pages_on_error and
>    cma_dont_free_pages_on_error() to cma_reserve_pages_on_error() to
>    avoid confusion.
> 
> 
>   include/linux/cma.h |  2 ++
>   mm/cma.c            | 11 +++++++++--
>   mm/cma.h            |  1 +
>   3 files changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/cma.h b/include/linux/cma.h
> index bd801023504b..51d540eee18a 100644
> --- a/include/linux/cma.h
> +++ b/include/linux/cma.h
> @@ -50,4 +50,6 @@ extern bool cma_pages_valid(struct cma *cma, const struct page *pages, unsigned
>   extern bool cma_release(struct cma *cma, const struct page *pages, unsigned long count);
>   
>   extern int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data);
> +
> +extern void cma_reserve_pages_on_error(struct cma *cma);
>   #endif
> diff --git a/mm/cma.c b/mm/cma.c
> index bc9ca8f3c487..766f1b82b532 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -131,8 +131,10 @@ static void __init cma_activate_area(struct cma *cma)
>   	bitmap_free(cma->bitmap);
>   out_error:
>   	/* Expose all pages to the buddy, they are useless for CMA. */
> -	for (pfn = base_pfn; pfn < base_pfn + cma->count; pfn++)
> -		free_reserved_page(pfn_to_page(pfn));
> +	if (!cma->reserve_pages_on_error) {
> +		for (pfn = base_pfn; pfn < base_pfn + cma->count; pfn++)
> +			free_reserved_page(pfn_to_page(pfn));
> +	}
>   	totalcma_pages -= cma->count;
>   	cma->count = 0;
>   	pr_err("CMA area %s could not be activated\n", cma->name);
> @@ -150,6 +152,11 @@ static int __init cma_init_reserved_areas(void)
>   }
>   core_initcall(cma_init_reserved_areas);
>   
> +void __init cma_reserve_pages_on_error(struct cma *cma)
> +{
> +	cma->reserve_pages_on_error = true;
> +}
> +
>   /**
>    * cma_init_reserved_mem() - create custom contiguous area from reserved memory
>    * @base: Base address of the reserved area
> diff --git a/mm/cma.h b/mm/cma.h
> index 2c775877eae2..88a0595670b7 100644
> --- a/mm/cma.h
> +++ b/mm/cma.h
> @@ -30,6 +30,7 @@ struct cma {
>   	/* kobject requires dynamic object */
>   	struct cma_kobject *cma_kobj;
>   #endif
> +	bool reserve_pages_on_error;
>   };
>   
>   extern struct cma cma_areas[MAX_CMA_AREAS];

Thanks
Hari

