[v2] mm: vmstat: add cma statistics

Message ID 20210302183346.3707237-1-minchan@kernel.org
State In Next
Commit a73483d81eafe5e99ffef9d79a49e07f1cc84621

Commit Message

Minchan Kim March 2, 2021, 6:33 p.m. UTC
Since CMA is used more widely, it's worth having CMA
allocation statistics in vmstat. With them, we can see
how aggressively the system uses CMA allocation and
how often it fails.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
* from v1 - https://lore.kernel.org/linux-mm/20210217170025.512704-1-minchan@kernel.org/
  * replace alloc_attempt with alloc_success - jhubbard
  * one item per line for vm_event_item - jhubbard

 include/linux/vm_event_item.h |  4 ++++
 mm/cma.c                      | 12 +++++++++---
 mm/vmstat.c                   |  4 ++++
 3 files changed, 17 insertions(+), 3 deletions(-)
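With the patch applied, the two counters show up in /proc/vmstat alongside the
other vm_event_item counters. A quick sketch of how one might gauge the CMA
failure rate from them (the counter names come from the patch; the sample
values below are made up for illustration):

```shell
# The two CMA counters this patch exports via /proc/vmstat.
# On a real system you would read them with:
#   grep '^cma_alloc' /proc/vmstat
# Hypothetical sample values, hard-coded here for illustration:
vmstat="cma_alloc_success 120
cma_alloc_fail 3"

# Compute the failure percentage from the two counters.
printf '%s\n' "$vmstat" | awk '
    /^cma_alloc_success/ { ok = $2 }
    /^cma_alloc_fail/    { fail = $2 }
    END { printf "failure rate: %.1f%%\n", 100 * fail / (ok + fail) }'
```

For the sample values above this prints "failure rate: 2.4%"; on a live system
the counters are cumulative since boot, so a monitoring tool would typically
sample them twice and diff the readings.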

Comments

John Hubbard March 2, 2021, 11:09 p.m. UTC | #1
On 3/2/21 10:33, Minchan Kim wrote:
> Since CMA is used more widely, it's worth having CMA
> allocation statistics in vmstat. With them, we can see
> how aggressively the system uses CMA allocation and
> how often it fails.
> 
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
> * from v1 - https://lore.kernel.org/linux-mm/20210217170025.512704-1-minchan@kernel.org/
>    * replace alloc_attempt with alloc_success - jhubbard
>    * one item per line for vm_event_item - jhubbard
> 
>   include/linux/vm_event_item.h |  4 ++++
>   mm/cma.c                      | 12 +++++++++---
>   mm/vmstat.c                   |  4 ++++
>   3 files changed, 17 insertions(+), 3 deletions(-)
> 

Seems reasonable, and the diffs look good.

Reviewed-by: John Hubbard <jhubbard@nvidia.com>


thanks,

Patch

diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index 18e75974d4e3..21d7c7f72f1c 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -70,6 +70,10 @@  enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 #endif
 #ifdef CONFIG_HUGETLB_PAGE
 		HTLB_BUDDY_PGALLOC, HTLB_BUDDY_PGALLOC_FAIL,
+#endif
+#ifdef CONFIG_CMA
+		CMA_ALLOC_SUCCESS,
+		CMA_ALLOC_FAIL,
 #endif
 		UNEVICTABLE_PGCULLED,	/* culled to noreclaim list */
 		UNEVICTABLE_PGSCANNED,	/* scanned for reclaimability */
diff --git a/mm/cma.c b/mm/cma.c
index 23d4a97c834a..04ca863d1807 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -435,13 +435,13 @@  struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 	int ret = -ENOMEM;
 
 	if (!cma || !cma->count || !cma->bitmap)
-		return NULL;
+		goto out;
 
 	pr_debug("%s(cma %p, count %zu, align %d)\n", __func__, (void *)cma,
 		 count, align);
 
 	if (!count)
-		return NULL;
+		goto out;
 
 	mask = cma_bitmap_aligned_mask(cma, align);
 	offset = cma_bitmap_aligned_offset(cma, align);
@@ -449,7 +449,7 @@  struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 	bitmap_count = cma_bitmap_pages_to_bits(cma, count);
 
 	if (bitmap_count > bitmap_maxno)
-		return NULL;
+		goto out;
 
 	for (;;) {
 		mutex_lock(&cma->lock);
@@ -506,6 +506,12 @@  struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 	}
 
 	pr_debug("%s(): returned %p\n", __func__, page);
+out:
+	if (page)
+		count_vm_event(CMA_ALLOC_SUCCESS);
+	else
+		count_vm_event(CMA_ALLOC_FAIL);
+
 	return page;
 }
 
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 97fc32a53320..d8c32a33208d 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1305,6 +1305,10 @@  const char * const vmstat_text[] = {
 #ifdef CONFIG_HUGETLB_PAGE
 	"htlb_buddy_alloc_success",
 	"htlb_buddy_alloc_fail",
+#endif
+#ifdef CONFIG_CMA
+	"cma_alloc_success",
+	"cma_alloc_fail",
 #endif
 	"unevictable_pgs_culled",
 	"unevictable_pgs_scanned",