From mboxrd@z Thu Jan 1 00:00:00 1970
From: Roman Gushchin <guro@fb.com>
To: Andrew Morton
CC: Zi Yan, Joonsoo Kim, Mike Kravetz, linux-mm@kvack.org, Roman Gushchin
Subject: [PATCH rfc 1/2] mm: cma: make cma_release() non-blocking
Date: Fri, 16 Oct 2020 15:52:53 -0700
Message-ID: <20201016225254.3853109-2-guro@fb.com>
X-Mailer: git-send-email 2.24.1
In-Reply-To: <20201016225254.3853109-1-guro@fb.com>
References: <20201016225254.3853109-1-guro@fb.com>
MIME-Version: 1.0

cma_release() has to lock the cma_lock mutex to clear the cma bitmap.
This makes it a blocking function, which complicates its usage from
non-blocking contexts. For instance, the hugetlbfs code temporarily
drops the hugetlb_lock spinlock to call cma_release().

This patch makes cma_release() non-blocking by postponing the cma
bitmap clearance: it is done later from a work context. The first page
in the cma allocation is used to store the work struct. To make sure
that a subsequent cma_alloc() call will succeed, cma_alloc() flushes
the corresponding workqueue. Because CMA allocations and de-allocations
are usually not that frequent, a single global workqueue is used.

Signed-off-by: Roman Gushchin <guro@fb.com>
---
 mm/cma.c | 51 +++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 49 insertions(+), 2 deletions(-)

diff --git a/mm/cma.c b/mm/cma.c
index 7f415d7cda9f..523cd9356bc7 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -36,10 +36,19 @@
 
 #include "cma.h"
 
+struct cma_clear_bitmap_work {
+	struct work_struct work;
+	struct cma *cma;
+	unsigned long pfn;
+	unsigned int count;
+};
+
 struct cma cma_areas[MAX_CMA_AREAS];
 unsigned cma_area_count;
 static DEFINE_MUTEX(cma_mutex);
 
+struct workqueue_struct *cma_release_wq;
+
 phys_addr_t cma_get_base(const struct cma *cma)
 {
 	return PFN_PHYS(cma->base_pfn);
@@ -148,6 +157,10 @@ static int __init cma_init_reserved_areas(void)
 	for (i = 0; i < cma_area_count; i++)
 		cma_activate_area(&cma_areas[i]);
 
+	cma_release_wq = create_workqueue("cma_release");
+	if (!cma_release_wq)
+		return -ENOMEM;
+
 	return 0;
 }
 core_initcall(cma_init_reserved_areas);
@@ -437,6 +450,13 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 		return NULL;
 
 	for (;;) {
+		/*
+		 * CMA bitmaps are cleared asynchronously from works,
+		 * scheduled by cma_release(). To make sure the allocation
+		 * will succeed, the cma release workqueue is flushed here.
+		 */
+		flush_workqueue(cma_release_wq);
+
 		mutex_lock(&cma->lock);
 		bitmap_no = bitmap_find_next_zero_area_off(cma->bitmap,
 				bitmap_maxno, start, bitmap_count, mask,
@@ -495,6 +515,17 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 	return page;
 }
 
+static void cma_clear_bitmap_fn(struct work_struct *work)
+{
+	struct cma_clear_bitmap_work *w;
+
+	w = container_of(work, struct cma_clear_bitmap_work, work);
+
+	cma_clear_bitmap(w->cma, w->pfn, w->count);
+
+	__free_page(pfn_to_page(w->pfn));
+}
+
 /**
  * cma_release() - release allocated pages
  * @cma:   Contiguous memory region for which the allocation is performed.
@@ -507,6 +538,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
  */
 bool cma_release(struct cma *cma, const struct page *pages, unsigned int count)
 {
+	struct cma_clear_bitmap_work *work;
 	unsigned long pfn;
 
 	if (!cma || !pages)
@@ -521,8 +553,23 @@ bool cma_release(struct cma *cma, const struct page *pages, unsigned int count)
 
 	VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);
 
-	free_contig_range(pfn, count);
-	cma_clear_bitmap(cma, pfn, count);
+	/*
+	 * To make cma_release() non-blocking, cma bitmap is cleared from
+	 * a work context (see cma_clear_bitmap_fn()). The first page
+	 * in the cma allocation is used to store the work structure,
+	 * so it's released after the cma bitmap clearance. Other pages
+	 * are released immediately as previously.
+	 */
+	if (count > 1)
+		free_contig_range(pfn + 1, count - 1);
+
+	work = (struct cma_clear_bitmap_work *)page_to_virt(pages);
+	INIT_WORK(&work->work, cma_clear_bitmap_fn);
+	work->cma = cma;
+	work->pfn = pfn;
+	work->count = count;
+	queue_work(cma_release_wq, &work->work);
+
 	trace_cma_release(pfn, pages, count);
 
 	return true;
-- 
2.26.2
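
For illustration only, and not part of the patch: the deferral pattern
described above can be sketched in plain userspace C. In this sketch,
release() merely queues the bitmap clear onto a worker thread, and
alloc_one() drains the pending clears before scanning the bitmap,
mirroring the flush_workqueue() call added to cma_alloc(). All names
here (clear_worker, release, alloc_one, pending, ...) are invented for
the sketch and do not exist in the kernel.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NBITS 64				/* one bit per "page" */

static unsigned long bitmap;			/* protected by bitmap_lock */
static pthread_mutex_t bitmap_lock = PTHREAD_MUTEX_INITIALIZER;

/* toy "workqueue": a pending counter plus a condvar used to flush it */
static int pending;
static pthread_mutex_t wq_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t wq_idle = PTHREAD_COND_INITIALIZER;

struct clear_work { int start; int count; };

static void *clear_worker(void *arg)
{
	struct clear_work *w = arg;
	int i;

	/* the blocking bitmap update happens here, not in release() */
	pthread_mutex_lock(&bitmap_lock);
	for (i = 0; i < w->count; i++)
		bitmap &= ~(1UL << (w->start + i));
	pthread_mutex_unlock(&bitmap_lock);

	pthread_mutex_lock(&wq_lock);
	if (--pending == 0)
		pthread_cond_broadcast(&wq_idle);
	pthread_mutex_unlock(&wq_lock);

	free(w);
	return NULL;
}

/* analogue of cma_release(): only queues the clear, never takes bitmap_lock */
static void release(int start, int count)
{
	struct clear_work *w = malloc(sizeof(*w));
	pthread_t t;

	w->start = start;
	w->count = count;

	pthread_mutex_lock(&wq_lock);
	pending++;
	pthread_mutex_unlock(&wq_lock);

	pthread_create(&t, NULL, clear_worker, w);
	pthread_detach(t);
}

/* analogue of cma_alloc(): flush pending clears, then scan the bitmap */
static int alloc_one(void)
{
	int bit = -1, i;

	pthread_mutex_lock(&wq_lock);
	while (pending)				/* flush_workqueue() analogue */
		pthread_cond_wait(&wq_idle, &wq_lock);
	pthread_mutex_unlock(&wq_lock);

	pthread_mutex_lock(&bitmap_lock);
	for (i = 0; i < NBITS; i++) {
		if (!(bitmap & (1UL << i))) {
			bitmap |= 1UL << i;
			bit = i;
			break;
		}
	}
	pthread_mutex_unlock(&bitmap_lock);
	return bit;
}

int main(void)
{
	int a = alloc_one();
	int b = alloc_one();

	printf("allocated bits %d and %d\n", a, b);
	release(a, 1);
	printf("reallocated bit %d\n", alloc_one());	/* sees bit a again */
	return 0;
}

In the patch itself the same ordering is provided by queue_work() on
cma_release_wq and flush_workqueue() in cma_alloc(); storing the work
item in the first page of the allocation avoids any extra allocation on
the release path, which is why that page is freed only after the bitmap
has been cleared.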