From: Muchun Song
Date: Tue, 30 Mar 2021 16:08:36 +0800
Subject: Re: [External] Re: [PATCH v2 1/8] mm/cma: change cma mutex to irq safe spinlock
To: Michal Hocko
Cc: Mike Kravetz, Linux Memory Management List, LKML, Roman Gushchin,
	Shakeel Butt, Oscar Salvador, David Hildenbrand, David Rientjes,
	Miaohe Lin, Peter Zijlstra, Matthew Wilcox, HORIGUCHI NAOYA,
	Aneesh Kumar K.V, Waiman Long, Peter Xu, Mina Almasry,
	Hillf Danton, Joonsoo Kim, Barry Song, Will Deacon, Andrew Morton
References: <20210329232402.575396-1-mike.kravetz@oracle.com>
	<20210329232402.575396-2-mike.kravetz@oracle.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Mar 30, 2021 at 4:01 PM Michal Hocko wrote:
>
> On Mon 29-03-21 16:23:55, Mike Kravetz wrote:
> > Ideally, cma_release could be called from any context.  However, that is
> > not possible because a mutex is used to protect the per-area bitmap.
> > Change the bitmap to an irq safe spinlock.
>
> I would phrase the changelog slightly differently
> "
> cma_release is currently a sleepable operation because the bitmap
> manipulation is protected by the cma->lock mutex. Hugetlb code, which relies
> on cma_release for CMA backed (giga) hugetlb pages, however, needs it to be
> irq safe.
>
> The lock doesn't protect any sleepable operation so it can be changed to
> an (irq aware) spin lock. The bitmap processing should be quite fast in
> the typical case, but if cma sizes grow to TB then we will likely need to
> replace the lock by a more optimized bitmap implementation.
> "
>
> It seems that you are overusing the irqsave variants even from contexts
> which are never called from IRQ context, so they do not need to store flags.
>
> [...]
> > @@ -391,8 +391,9 @@ static void cma_debug_show_areas(struct cma *cma)
> >  	unsigned long start = 0;
> >  	unsigned long nr_part, nr_total = 0;
> >  	unsigned long nbits = cma_bitmap_maxno(cma);
> > +	unsigned long flags;
> >
> > -	mutex_lock(&cma->lock);
> > +	spin_lock_irqsave(&cma->lock, flags);
>
> spin_lock_irq should be sufficient. This is only called from the
> allocation context and that is never called from IRQ context.

This makes me think about it some more. I think that plain spin_lock
should be sufficient. Right? (Sketches of the different variants follow
below the quoted diff.)

> >  	pr_info("number of available pages: ");
> >  	for (;;) {
> >  		next_zero_bit = find_next_zero_bit(cma->bitmap, nbits, start);
> > @@ -407,7 +408,7 @@ static void cma_debug_show_areas(struct cma *cma)
> >  		start = next_zero_bit + nr_zero;
> >  	}
> >  	pr_cont("=> %lu free of %lu total pages\n", nr_total, cma->count);
> > -	mutex_unlock(&cma->lock);
> > +	spin_unlock_irqrestore(&cma->lock, flags);
> >  }
> >  #else
> >  static inline void cma_debug_show_areas(struct cma *cma) { }
> > @@ -430,6 +431,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
> >  	unsigned long pfn = -1;
> >  	unsigned long start = 0;
> >  	unsigned long bitmap_maxno, bitmap_no, bitmap_count;
> > +	unsigned long flags;
> >  	size_t i;
> >  	struct page *page = NULL;
> >  	int ret = -ENOMEM;
> > @@ -454,12 +456,12 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
> >  		goto out;
> >
> >  	for (;;) {
> > -		mutex_lock(&cma->lock);
> > +		spin_lock_irqsave(&cma->lock, flags);
> >  		bitmap_no = bitmap_find_next_zero_area_off(cma->bitmap,
> >  				bitmap_maxno, start, bitmap_count, mask,
> >  				offset);
> >  		if (bitmap_no >= bitmap_maxno) {
> > -			mutex_unlock(&cma->lock);
> > +			spin_unlock_irqrestore(&cma->lock, flags);
> >  			break;
> >  		}
> >  		bitmap_set(cma->bitmap, bitmap_no, bitmap_count);
>
> same here.
>
> > @@ -468,7 +470,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
> >  		 * our exclusive use. If the migration fails we will take the
> >  		 * lock again and unmark it.
> >  		 */
> > -		mutex_unlock(&cma->lock);
> > +		spin_unlock_irqrestore(&cma->lock, flags);
> >
> >  		pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
> >  		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA,
> > diff --git a/mm/cma.h b/mm/cma.h
> > index 68ffad4e430d..2c775877eae2 100644
> > --- a/mm/cma.h
> > +++ b/mm/cma.h
> > @@ -15,7 +15,7 @@ struct cma {
> >  	unsigned long   count;
> >  	unsigned long   *bitmap;
> >  	unsigned int order_per_bit; /* Order of pages represented by one bit */
> > -	struct mutex    lock;
> > +	spinlock_t	lock;
> >  #ifdef CONFIG_CMA_DEBUGFS
> >  	struct hlist_head mem_head;
> >  	spinlock_t mem_head_lock;
> > diff --git a/mm/cma_debug.c b/mm/cma_debug.c
> > index d5bf8aa34fdc..6379cfbfd568 100644
> > --- a/mm/cma_debug.c
> > +++ b/mm/cma_debug.c
> > @@ -35,11 +35,12 @@ static int cma_used_get(void *data, u64 *val)
> >  {
> >  	struct cma *cma = data;
> >  	unsigned long used;
> > +	unsigned long flags;
> >
> > -	mutex_lock(&cma->lock);
> > +	spin_lock_irqsave(&cma->lock, flags);
> >  	/* pages counter is smaller than sizeof(int) */
> >  	used = bitmap_weight(cma->bitmap, (int)cma_bitmap_maxno(cma));
> > -	mutex_unlock(&cma->lock);
> > +	spin_unlock_irqrestore(&cma->lock, flags);
> >  	*val = (u64)used << cma->order_per_bit;
>
> same here
>
> >
> >  	return 0;
> > @@ -52,8 +53,9 @@ static int cma_maxchunk_get(void *data, u64 *val)
> >  	unsigned long maxchunk = 0;
> >  	unsigned long start, end = 0;
> >  	unsigned long bitmap_maxno = cma_bitmap_maxno(cma);
> > +	unsigned long flags;
> >
> > -	mutex_lock(&cma->lock);
> > +	spin_lock_irqsave(&cma->lock, flags);
> >  	for (;;) {
> >  		start = find_next_zero_bit(cma->bitmap, bitmap_maxno, end);
> >  		if (start >= bitmap_maxno)
> > @@ -61,7 +63,7 @@ static int cma_maxchunk_get(void *data, u64 *val)
> >  		end = find_next_bit(cma->bitmap, bitmap_maxno, start);
> >  		maxchunk = max(end - start, maxchunk);
> >  	}
> > -	mutex_unlock(&cma->lock);
> > +	spin_unlock_irqrestore(&cma->lock, flags);
> >  	*val = (u64)maxchunk << cma->order_per_bit;
> >
> >  	return 0;
>
> and here.
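
To make sure I follow: for these debugfs readers the _irq variant you
suggest would look roughly like the sketch below (written against the
posted patch, untested). IRQs are disabled but no flags are saved,
because we know they are enabled on entry:

static int cma_used_get(void *data, u64 *val)
{
	struct cma *cma = data;
	unsigned long used;

	/*
	 * Only reachable from process context (a debugfs read), never
	 * from IRQ context, so there is no need to save/restore flags.
	 */
	spin_lock_irq(&cma->lock);
	/* pages counter is smaller than sizeof(int) */
	used = bitmap_weight(cma->bitmap, (int)cma_bitmap_maxno(cma));
	spin_unlock_irq(&cma->lock);
	*val = (u64)used << cma->order_per_bit;

	return 0;
}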
> --
> Michal Hocko
> SUSE Labs
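
For reference, the release side is where flag saving still matters: the
changelog above says hugetlb needs cma_release to be irq safe for CMA
backed gigantic pages, so the bitmap clearing under cma->lock may run
with IRQs already disabled. It is not part of the hunks quoted above,
but I would expect cma_clear_bitmap() in mm/cma.c to end up as something
like this rough, untested sketch after the conversion:

static void cma_clear_bitmap(struct cma *cma, unsigned long pfn,
			     unsigned int count)
{
	unsigned long bitmap_no, bitmap_count;
	unsigned long flags;

	bitmap_no = (pfn - cma->base_pfn) >> cma->order_per_bit;
	bitmap_count = cma_bitmap_pages_to_bits(cma, count);

	/*
	 * cma_release() may be reached from contexts where IRQs are
	 * already disabled (e.g. the hugetlb freeing path), so save and
	 * restore the caller's flags instead of unconditionally
	 * re-enabling IRQs on unlock.
	 */
	spin_lock_irqsave(&cma->lock, flags);
	bitmap_clear(cma->bitmap, bitmap_no, bitmap_count);
	spin_unlock_irqrestore(&cma->lock, flags);
}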