linux-mm.kvack.org archive mirror
* [PATCH v1 0/2] mm: cma: introduce a non-blocking version of cma_release()
@ 2020-10-22 22:53 Roman Gushchin
  2020-10-22 22:53 ` [PATCH v1 1/2] mm: cma: introduce cma_release_nowait() Roman Gushchin
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Roman Gushchin @ 2020-10-22 22:53 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Zi Yan, Joonsoo Kim, Mike Kravetz, saberlily.xia, linux-kernel,
	linux-mm, kernel-team, Roman Gushchin

This small patchset introduces a non-blocking version of cma_release()
and simplifies the code in hugetlbfs, where previously we had to
temporarily drop hugetlb_lock around the cma_release() call.

It should help Zi Yan on his work on 1 GB THPs: splitting a gigantic
THP under memory pressure requires a cma_release() call. If it's
a blocking function, it complicates the already complicated code.
Because there are at least two use cases like this (hugetlbfs is
another example), I believe it's just better to make cma_release()
non-blocking.

v1:
  - introduce cma_release_nowait() instead of making cma_release()
    non-blocking, for performance reasons

rfc:
  https://lkml.org/lkml/2020/10/16/1050


Roman Gushchin (2):
  mm: cma: introduce cma_release_nowait()
  mm: hugetlb: don't drop hugetlb_lock around cma_release() call

 include/linux/cma.h |  2 +
 mm/cma.c            | 93 +++++++++++++++++++++++++++++++++++++++++++++
 mm/cma.h            |  5 +++
 mm/hugetlb.c        | 11 ++----
 4 files changed, 103 insertions(+), 8 deletions(-)

-- 
2.26.2




* [PATCH v1 1/2] mm: cma: introduce cma_release_nowait()
  2020-10-22 22:53 [PATCH v1 0/2] mm: cma: introduce a non-blocking version of cma_release() Roman Gushchin
@ 2020-10-22 22:53 ` Roman Gushchin
  2020-10-24  7:59   ` Christoph Hellwig
  2020-10-22 22:53 ` [PATCH v1 2/2] mm: hugetlb: don't drop hugetlb_lock around cma_release() call Roman Gushchin
  2020-10-22 23:42 ` [PATCH v1 0/2] mm: cma: introduce a non-blocking version of cma_release() Zi Yan
  2 siblings, 1 reply; 9+ messages in thread
From: Roman Gushchin @ 2020-10-22 22:53 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Zi Yan, Joonsoo Kim, Mike Kravetz, saberlily.xia, linux-kernel,
	linux-mm, kernel-team, Roman Gushchin

cma_release() has to lock the cma_lock mutex to clear the cma bitmap.
This makes it a blocking function, which complicates its usage from
non-blocking contexts. For instance, the hugetlb code temporarily
drops the hugetlb_lock spinlock to call cma_release().

This patch introduces a non-blocking cma_release_nowait(), which
postpones the cma bitmap clearance. It's done later from a work
context. The first page in the cma allocation is used to store
the work struct. Because CMA allocations and de-allocations are
usually not that frequent, a single global workqueue is used.

To make sure that a subsequent cma_alloc() call succeeds, cma_alloc()
flushes the cma_release_wq workqueue. To avoid a performance
regression in the case when only cma_release() is used, the flush is
gated by a per-cma area flag, which is set by the first call
to cma_release_nowait().

Signed-off-by: Roman Gushchin <guro@fb.com>
---
 include/linux/cma.h |  2 +
 mm/cma.c            | 93 +++++++++++++++++++++++++++++++++++++++++++++
 mm/cma.h            |  5 +++
 3 files changed, 100 insertions(+)

diff --git a/include/linux/cma.h b/include/linux/cma.h
index 217999c8a762..497eca478c2f 100644
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@ -47,6 +47,8 @@ extern int cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
 extern struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 			      bool no_warn);
 extern bool cma_release(struct cma *cma, const struct page *pages, unsigned int count);
+extern bool cma_release_nowait(struct cma *cma, const struct page *pages,
+			       unsigned int count);
 
 extern int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data);
 #endif
diff --git a/mm/cma.c b/mm/cma.c
index 7f415d7cda9f..9fcfddcf1a6c 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -36,10 +36,19 @@
 
 #include "cma.h"
 
+struct cma_clear_bitmap_work {
+	struct work_struct work;
+	struct cma *cma;
+	unsigned long pfn;
+	unsigned int count;
+};
+
 struct cma cma_areas[MAX_CMA_AREAS];
 unsigned cma_area_count;
 static DEFINE_MUTEX(cma_mutex);
 
+struct workqueue_struct *cma_release_wq;
+
 phys_addr_t cma_get_base(const struct cma *cma)
 {
 	return PFN_PHYS(cma->base_pfn);
@@ -148,6 +157,10 @@ static int __init cma_init_reserved_areas(void)
 	for (i = 0; i < cma_area_count; i++)
 		cma_activate_area(&cma_areas[i]);
 
+	cma_release_wq = create_workqueue("cma_release");
+	if (!cma_release_wq)
+		return -ENOMEM;
+
 	return 0;
 }
 core_initcall(cma_init_reserved_areas);
@@ -205,6 +218,7 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
 
 	cma->base_pfn = PFN_DOWN(base);
 	cma->count = size >> PAGE_SHIFT;
+	cma->flags = 0;
 	cma->order_per_bit = order_per_bit;
 	*res_cma = cma;
 	cma_area_count++;
@@ -437,6 +451,14 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 		return NULL;
 
 	for (;;) {
+		/*
+		 * If the CMA bitmap is cleared asynchronously after
+		 * cma_release_nowait(), cma release workqueue has to be
+		 * flushed here in order to make the allocation succeed.
+		 */
+		if (test_bit(CMA_DELAYED_RELEASE, &cma->flags))
+			flush_workqueue(cma_release_wq);
+
 		mutex_lock(&cma->lock);
 		bitmap_no = bitmap_find_next_zero_area_off(cma->bitmap,
 				bitmap_maxno, start, bitmap_count, mask,
@@ -528,6 +550,77 @@ bool cma_release(struct cma *cma, const struct page *pages, unsigned int count)
 	return true;
 }
 
+static void cma_clear_bitmap_fn(struct work_struct *work)
+{
+	struct cma_clear_bitmap_work *w;
+
+	w = container_of(work, struct cma_clear_bitmap_work, work);
+
+	cma_clear_bitmap(w->cma, w->pfn, w->count);
+
+	__free_page(pfn_to_page(w->pfn));
+}
+
+/**
+ * cma_release_nowait() - release allocated pages without blocking
+ * @cma:   Contiguous memory region for which the allocation is performed.
+ * @pages: Allocated pages.
+ * @count: Number of allocated pages.
+ *
+ * Similar to cma_release(), this function releases memory allocated
+ * by cma_alloc(), but unlike cma_release() it is non-blocking and can
+ * be called from an atomic context.
+ * It returns false when the provided pages do not belong to the
+ * contiguous area and true otherwise.
+ */
+bool cma_release_nowait(struct cma *cma, const struct page *pages,
+			unsigned int count)
+{
+	struct cma_clear_bitmap_work *work;
+	unsigned long pfn;
+
+	if (!cma || !pages)
+		return false;
+
+	pr_debug("%s(page %p)\n", __func__, (void *)pages);
+
+	pfn = page_to_pfn(pages);
+
+	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
+		return false;
+
+	VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);
+
+	/*
+	 * Set CMA_DELAYED_RELEASE flag: subsequent cma_alloc()'s
+	 * will wait for the async part of cma_release_nowait() to
+	 * finish.
+	 */
+	if (unlikely(!test_bit(CMA_DELAYED_RELEASE, &cma->flags)))
+		set_bit(CMA_DELAYED_RELEASE, &cma->flags);
+
+	/*
+	 * To make cma_release_nowait() non-blocking, cma bitmap is cleared
+	 * from a work context (see cma_clear_bitmap_fn()). The first page
+	 * in the cma allocation is used to store the work structure,
+	 * so it's released after the cma bitmap clearance. Other pages
+	 * are released immediately as previously.
+	 */
+	if (count > 1)
+		free_contig_range(pfn + 1, count - 1);
+
+	work = (struct cma_clear_bitmap_work *)page_to_virt(pages);
+	INIT_WORK(&work->work, cma_clear_bitmap_fn);
+	work->cma = cma;
+	work->pfn = pfn;
+	work->count = count;
+	queue_work(cma_release_wq, &work->work);
+
+	trace_cma_release(pfn, pages, count);
+
+	return true;
+}
+
 int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data)
 {
 	int i;
diff --git a/mm/cma.h b/mm/cma.h
index 42ae082cb067..e9293871d122 100644
--- a/mm/cma.h
+++ b/mm/cma.h
@@ -7,6 +7,7 @@
 struct cma {
 	unsigned long   base_pfn;
 	unsigned long   count;
+	unsigned long   flags;
 	unsigned long   *bitmap;
 	unsigned int order_per_bit; /* Order of pages represented by one bit */
 	struct mutex    lock;
@@ -18,6 +19,10 @@ struct cma {
 	char name[CMA_MAX_NAME];
 };
 
+enum cma_flags {
+	CMA_DELAYED_RELEASE, /* cma bitmap is cleared asynchronously */
+};
+
 extern struct cma cma_areas[MAX_CMA_AREAS];
 extern unsigned cma_area_count;
 
-- 
2.26.2




* [PATCH v1 2/2] mm: hugetlb: don't drop hugetlb_lock around cma_release() call
  2020-10-22 22:53 [PATCH v1 0/2] mm: cma: introduce a non-blocking version of cma_release() Roman Gushchin
  2020-10-22 22:53 ` [PATCH v1 1/2] mm: cma: introduce cma_release_nowait() Roman Gushchin
@ 2020-10-22 22:53 ` Roman Gushchin
  2020-10-22 23:42 ` [PATCH v1 0/2] mm: cma: introduce a non-blocking version of cma_release() Zi Yan
  2 siblings, 0 replies; 9+ messages in thread
From: Roman Gushchin @ 2020-10-22 22:53 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Zi Yan, Joonsoo Kim, Mike Kravetz, saberlily.xia, linux-kernel,
	linux-mm, kernel-team, Roman Gushchin

Replace blocking cma_release() with a non-blocking cma_release_nowait()
call, so there is no more need to temporarily drop hugetlb_lock.

Signed-off-by: Roman Gushchin <guro@fb.com>
---
 mm/hugetlb.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index fe76f8fd5a73..230e9b6c9a2b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1224,10 +1224,11 @@ static void free_gigantic_page(struct page *page, unsigned int order)
 {
 	/*
 	 * If the page isn't allocated using the cma allocator,
-	 * cma_release() returns false.
+	 * cma_release_nowait() returns false.
 	 */
 #ifdef CONFIG_CMA
-	if (cma_release(hugetlb_cma[page_to_nid(page)], page, 1 << order))
+	if (cma_release_nowait(hugetlb_cma[page_to_nid(page)], page,
+			       1 << order))
 		return;
 #endif
 
@@ -1312,14 +1313,8 @@ static void update_and_free_page(struct hstate *h, struct page *page)
 	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
 	set_page_refcounted(page);
 	if (hstate_is_gigantic(h)) {
-		/*
-		 * Temporarily drop the hugetlb_lock, because
-		 * we might block in free_gigantic_page().
-		 */
-		spin_unlock(&hugetlb_lock);
 		destroy_compound_gigantic_page(page, huge_page_order(h));
 		free_gigantic_page(page, huge_page_order(h));
-		spin_lock(&hugetlb_lock);
 	} else {
 		__free_pages(page, huge_page_order(h));
 	}
-- 
2.26.2




* Re: [PATCH v1 0/2] mm: cma: introduce a non-blocking version of cma_release()
  2020-10-22 22:53 [PATCH v1 0/2] mm: cma: introduce a non-blocking version of cma_release() Roman Gushchin
  2020-10-22 22:53 ` [PATCH v1 1/2] mm: cma: introduce cma_release_nowait() Roman Gushchin
  2020-10-22 22:53 ` [PATCH v1 2/2] mm: hugetlb: don't drop hugetlb_lock around cma_release() call Roman Gushchin
@ 2020-10-22 23:42 ` Zi Yan
  2020-10-23  0:47   ` Roman Gushchin
  2 siblings, 1 reply; 9+ messages in thread
From: Zi Yan @ 2020-10-22 23:42 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Andrew Morton, Joonsoo Kim, Mike Kravetz, saberlily.xia,
	linux-kernel, linux-mm, kernel-team


On 22 Oct 2020, at 18:53, Roman Gushchin wrote:

> This small patchset introduces a non-blocking version of cma_release()
> and simplifies the code in hugetlbfs, where previously we had to
> temporarily drop hugetlb_lock around the cma_release() call.
>
> It should help Zi Yan on his work on 1 GB THPs: splitting a gigantic
> THP under a memory pressure requires a cma_release() call. If it's

Thanks for the patch. But during 1GB THP split, we only clear
the bitmaps without releasing the pages. Also in cma_release_nowait(),
the first page in the allocated CMA region is reused to store
struct cma_clear_bitmap_work, but the same method cannot be used
during THP split, since the first page is still in use. We might
need to allocate some new memory for struct cma_clear_bitmap_work,
which might not succeed under memory pressure. Any suggestion
on where to store struct cma_clear_bitmap_work when I only want to
clear the bitmap without releasing the pages?

Thanks.

—
Best Regards,
Yan Zi



* Re: [PATCH v1 0/2] mm: cma: introduce a non-blocking version of cma_release()
  2020-10-22 23:42 ` [PATCH v1 0/2] mm: cma: introduce a non-blocking version of cma_release() Zi Yan
@ 2020-10-23  0:47   ` Roman Gushchin
  2020-10-23  0:58     ` Zi Yan
  0 siblings, 1 reply; 9+ messages in thread
From: Roman Gushchin @ 2020-10-23  0:47 UTC (permalink / raw)
  To: Zi Yan
  Cc: Andrew Morton, Joonsoo Kim, Mike Kravetz, saberlily.xia,
	linux-kernel, linux-mm, kernel-team

On Thu, Oct 22, 2020 at 07:42:45PM -0400, Zi Yan wrote:
> On 22 Oct 2020, at 18:53, Roman Gushchin wrote:
> 
> > This small patchset introduces a non-blocking version of cma_release()
> > and simplifies the code in hugetlbfs, where previously we had to
> > temporarily drop hugetlb_lock around the cma_release() call.
> >
> > It should help Zi Yan on his work on 1 GB THPs: splitting a gigantic
> > THP under a memory pressure requires a cma_release() call. If it's
> 
> Thanks for the patch. But during 1GB THP split, we only clear
> the bitmaps without releasing the pages. Also in cma_release_nowait(),
> the first page in the allocated CMA region is reused to store
> struct cma_clear_bitmap_work, but the same method cannot be used
> during THP split, since the first page is still in-use. We might
> need to allocate some new memory for struct cma_clear_bitmap_work,
> which might not be successful under memory pressure. Any suggestion
> on where to store struct cma_clear_bitmap_work when I only want to
> clear bitmap without releasing the pages?

It means we can't use cma_release() there either, because it does free the
individual pages. We need to clear the cma bitmap without touching the pages.

Can you handle an error there?

If so, we can introduce something like int cma_schedule_bitmap_clearance(),
which will allocate a work structure and will be able to return -ENOMEM
in the unlikely case of error.

Will it work for you?

Thanks!
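
For reference, a minimal sketch of what such a helper could look like,
building on the definitions introduced in patch 1 (struct
cma_clear_bitmap_work, cma_clear_bitmap(), cma_release_wq and
CMA_DELAYED_RELEASE) and assumed to live in mm/cma.c (plus
<linux/slab.h> for kmalloc()). The function name follows the proposal
above; the GFP_ATOMIC allocation and the dedicated work function are
assumptions, not code from this series:

static void cma_clear_bitmap_only_fn(struct work_struct *work)
{
	struct cma_clear_bitmap_work *w;

	w = container_of(work, struct cma_clear_bitmap_work, work);

	/* Only clear the bitmap; the pages themselves stay in use. */
	cma_clear_bitmap(w->cma, w->pfn, w->count);
	kfree(w);
}

int cma_schedule_bitmap_clearance(struct cma *cma, const struct page *pages,
				  unsigned int count)
{
	struct cma_clear_bitmap_work *work;

	/* The caller may be atomic, so allocate the work item atomically. */
	work = kmalloc(sizeof(*work), GFP_ATOMIC);
	if (!work)
		return -ENOMEM;

	/* Make subsequent cma_alloc() calls flush the release workqueue. */
	set_bit(CMA_DELAYED_RELEASE, &cma->flags);

	INIT_WORK(&work->work, cma_clear_bitmap_only_fn);
	work->cma = cma;
	work->pfn = page_to_pfn(pages);
	work->count = count;
	queue_work(cma_release_wq, &work->work);

	return 0;
}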



* Re: [PATCH v1 0/2] mm: cma: introduce a non-blocking version of cma_release()
  2020-10-23  0:47   ` Roman Gushchin
@ 2020-10-23  0:58     ` Zi Yan
  2020-10-23 20:55       ` Roman Gushchin
  0 siblings, 1 reply; 9+ messages in thread
From: Zi Yan @ 2020-10-23  0:58 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Andrew Morton, Joonsoo Kim, Mike Kravetz, saberlily.xia,
	linux-kernel, linux-mm, kernel-team


On 22 Oct 2020, at 20:47, Roman Gushchin wrote:

> On Thu, Oct 22, 2020 at 07:42:45PM -0400, Zi Yan wrote:
>> On 22 Oct 2020, at 18:53, Roman Gushchin wrote:
>>
>>> This small patchset introduces a non-blocking version of cma_release()
>>> and simplifies the code in hugetlbfs, where previously we had to
>>> temporarily drop hugetlb_lock around the cma_release() call.
>>>
>>> It should help Zi Yan on his work on 1 GB THPs: splitting a gigantic
>>> THP under a memory pressure requires a cma_release() call. If it's
>>
>> Thanks for the patch. But during 1GB THP split, we only clear
>> the bitmaps without releasing the pages. Also in cma_release_nowait(),
>> the first page in the allocated CMA region is reused to store
>> struct cma_clear_bitmap_work, but the same method cannot be used
>> during THP split, since the first page is still in-use. We might
>> need to allocate some new memory for struct cma_clear_bitmap_work,
>> which might not be successful under memory pressure. Any suggestion
>> on where to store struct cma_clear_bitmap_work when I only want to
>> clear bitmap without releasing the pages?
>
> It means we can't use cma_release() there either, because it does clear
> individual pages. We need to clear the cma bitmap without touching pages.
>
> Can you handle an error there?
>
> If so, we can introduce something like int cma_schedule_bitmap_clearance(),
> which will allocate a work structure and will be able to return -ENOMEM
> in the unlikely case of error.
>
> Will it work for you?

Yes, it works. Thanks.

—
Best Regards,
Yan Zi



* Re: [PATCH v1 0/2] mm: cma: introduce a non-blocking version of cma_release()
  2020-10-23  0:58     ` Zi Yan
@ 2020-10-23 20:55       ` Roman Gushchin
  0 siblings, 0 replies; 9+ messages in thread
From: Roman Gushchin @ 2020-10-23 20:55 UTC (permalink / raw)
  To: Zi Yan
  Cc: Andrew Morton, Joonsoo Kim, Mike Kravetz, saberlily.xia,
	linux-kernel, linux-mm, kernel-team

On Thu, Oct 22, 2020 at 08:58:10PM -0400, Zi Yan wrote:
> On 22 Oct 2020, at 20:47, Roman Gushchin wrote:
> 
> > On Thu, Oct 22, 2020 at 07:42:45PM -0400, Zi Yan wrote:
> >> On 22 Oct 2020, at 18:53, Roman Gushchin wrote:
> >>
> >>> This small patchset introduces a non-blocking version of cma_release()
> >>> and simplifies the code in hugetlbfs, where previously we had to
> >>> temporarily drop hugetlb_lock around the cma_release() call.
> >>>
> >>> It should help Zi Yan on his work on 1 GB THPs: splitting a gigantic
> >>> THP under a memory pressure requires a cma_release() call. If it's
> >>
> >> Thanks for the patch. But during 1GB THP split, we only clear
> >> the bitmaps without releasing the pages. Also in cma_release_nowait(),
> >> the first page in the allocated CMA region is reused to store
> >> struct cma_clear_bitmap_work, but the same method cannot be used
> >> during THP split, since the first page is still in-use. We might
> >> need to allocate some new memory for struct cma_clear_bitmap_work,
> >> which might not be successful under memory pressure. Any suggestion
> >> on where to store struct cma_clear_bitmap_work when I only want to
> >> clear bitmap without releasing the pages?
> >
> > It means we can't use cma_release() there either, because it does clear
> > individual pages. We need to clear the cma bitmap without touching pages.
> >
> > Can you handle an error there?
> >
> > If so, we can introduce something like int cma_schedule_bitmap_clearance(),
> > which will allocate a work structure and will be able to return -ENOMEM
> > in the unlikely case of error.
> >
> > Will it work for you?
> 
> Yes, it works. Thanks.

Nice!

It means we can keep these two patches as they are (they do make sense
even without THP, because they simplify the hugetlbfs code).

I'll send a patch implementing cma_schedule_bitmap_clearance() to you,
so you'll be able to use it in the patchset.

Thanks!



* Re: [PATCH v1 1/2] mm: cma: introduce cma_release_nowait()
  2020-10-22 22:53 ` [PATCH v1 1/2] mm: cma: introduce cma_release_nowait() Roman Gushchin
@ 2020-10-24  7:59   ` Christoph Hellwig
  2020-10-27 19:56     ` Roman Gushchin
  0 siblings, 1 reply; 9+ messages in thread
From: Christoph Hellwig @ 2020-10-24  7:59 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Andrew Morton, Zi Yan, Joonsoo Kim, Mike Kravetz, saberlily.xia,
	linux-kernel, linux-mm, kernel-team

Btw, I think we also need to use this nonblocking version from
dma_free_contiguous.  dma_free* is defined to not block.  In practice
callers mostly care if they also did GFP_ATOMIC allocations, which
don't dip into CMA, but I think we do have a problem.
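
To make the suggestion concrete, a rough sketch of what such a change
could look like. The dma_free_contiguous() body below is a simplified
rendering of the contemporaneous kernel/dma/contiguous.c and may not
match it exactly; it is not part of this series:

void dma_free_contiguous(struct device *dev, struct page *page, size_t size)
{
	unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;

	/*
	 * dma_free*() must not block, so release CMA pages without
	 * taking the cma mutex; non-CMA pages keep going back to the
	 * page allocator as before.
	 */
	if (!cma_release_nowait(dev_get_cma_area(dev), page, count))
		__free_pages(page, get_order(size));
}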



* Re: [PATCH v1 1/2] mm: cma: introduce cma_release_nowait()
  2020-10-24  7:59   ` Christoph Hellwig
@ 2020-10-27 19:56     ` Roman Gushchin
  0 siblings, 0 replies; 9+ messages in thread
From: Roman Gushchin @ 2020-10-27 19:56 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Andrew Morton, Zi Yan, Joonsoo Kim, Mike Kravetz, saberlily.xia,
	linux-kernel, linux-mm, kernel-team

On Sat, Oct 24, 2020 at 08:59:59AM +0100, Christoph Hellwig wrote:
> Btw, I think we also need to use this nonblocking version from
> dma_free_contiguous.  dma_free* is defined to not block.  In practice
> callers mostly care if they also did GFP_ATOMIC allocations, which
> don't dip into CMA, but I think we do have a problem.

It's a good point!
Do you mind preparing a patch? I can include it in the patchset.

Thanks!


