From: David Hildenbrand <david@redhat.com>
To: Lecopzer Chen <lecopzer.chen@mediatek.com>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: matthias.bgg@gmail.com, akpm@linux-foundation.org,
	yj.chiang@mediatek.com
Subject: Re: [PATCH] mm/cma.c: remove redundant cma_mutex lock
Date: Tue, 20 Oct 2020 13:27:37 +0200
Message-ID: <458cdc24-f637-ef4b-19de-513ceab14f23@redhat.com>
In-Reply-To: <20201020102241.3729-1-lecopzer.chen@mediatek.com>

On 20.10.20 12:22, Lecopzer Chen wrote:
> The cma_mutex that protects alloc_contig_range() first appeared in
> commit 7ee793a62fa8c ("cma: Remove potential deadlock situation");
> at that time there was no guarantee about the behavior of concurrent
> calls into alloc_contig_range().
> 
> Commit 2c7452a075d4db2dc
> ("mm/page_isolation.c: make start_isolate_page_range() fail if already isolated")
> then clarified the concurrency rules inside alloc_contig_range():
>   > However, two subsystems (CMA and gigantic
>   > huge pages for example) could attempt operations on the same range.  If
>   > this happens, one thread may 'undo' the work another thread is doing.
>   > This can result in pageblocks being incorrectly left marked as
>   > MIGRATE_ISOLATE and therefore not available for page allocation.
> 
> Now huge pages and virtio call alloc_contig_range() without taking
> any such lock, so cma_mutex in cma_alloc() is redundant.
> 
> Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> ---
>  mm/cma.c | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
> 
> diff --git a/mm/cma.c b/mm/cma.c
> index 7f415d7cda9f..3692a34e2353 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -38,7 +38,6 @@
>  
>  struct cma cma_areas[MAX_CMA_AREAS];
>  unsigned cma_area_count;
> -static DEFINE_MUTEX(cma_mutex);
>  
>  phys_addr_t cma_get_base(const struct cma *cma)
>  {
> @@ -454,10 +453,9 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
>  		mutex_unlock(&cma->lock);
>  
>  		pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
> -		mutex_lock(&cma_mutex);
>  		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA,
>  				     GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0));
> -		mutex_unlock(&cma_mutex);
> +
>  		if (ret == 0) {
>  			page = pfn_to_page(pfn);
>  			break;
> 
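
For context, the serialization that commit 2c7452a075d4db2dc added
boils down to something like the following simplified sketch; the
*_sketch helpers are illustrative stand-ins, not the exact kernel
symbols (the real code uses set_migratetype_isolate() and friends):

int start_isolate_range_sketch(unsigned long start_pfn, unsigned long end_pfn)
{
	unsigned long pfn, undo_pfn;

	for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages) {
		/* Fails if another thread already marked this
		 * pageblock MIGRATE_ISOLATE. */
		if (isolate_pageblock_sketch(pfn)) {
			undo_pfn = pfn;
			goto undo;
		}
	}
	return 0;

undo:
	/* Roll back only the pageblocks we isolated ourselves; the
	 * racing thread's state stays untouched. */
	for (pfn = start_pfn; pfn < undo_pfn; pfn += pageblock_nr_pages)
		unisolate_pageblock_sketch(pfn);
	return -EBUSY;
}

So the loser of a race fails cleanly with -EBUSY instead of leaving
pageblocks stuck as MIGRATE_ISOLATE.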

I guess this is fine. If there is a race we now return -EBUSY, which
is suboptimal (the failure could be merely temporary if the other user
backs off), but it should be good enough for now.
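
If that -EBUSY case ever hurts in practice, a caller could retry with
a small backoff. A hypothetical wrapper, not part of this patch
(cma_alloc_retry() and the retry parameters are made up for
illustration):

#include <linux/cma.h>
#include <linux/delay.h>

struct page *cma_alloc_retry(struct cma *cma, size_t count,
			     unsigned int align, bool no_warn)
{
	struct page *page;
	int i;

	for (i = 0; i < 5; i++) {
		page = cma_alloc(cma, count, align, no_warn);
		if (page)
			return page;
		/* cma_alloc() returns NULL on any failure, so this
		 * also retries a genuine -ENOMEM; a real version
		 * would want to distinguish the two causes. */
		msleep(100);
	}
	return NULL;
}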

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb



Thread overview: 3+ messages
2020-10-20 10:22 [PATCH] mm/cma.c: remove redundant cma_mutex lock Lecopzer Chen
2020-10-20 11:27 ` David Hildenbrand [this message]
2020-10-23 11:29   ` Vlastimil Babka
