From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A7E89C433DB for ; Mon, 22 Mar 2021 14:19:44 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 27B9B61920 for ; Mon, 22 Mar 2021 14:19:44 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 27B9B61920 Authentication-Results: mail.kernel.org; dmarc=fail (p=quarantine dis=none) header.from=suse.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id E53286B00CE; Mon, 22 Mar 2021 10:00:52 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id E271A6B00D0; Mon, 22 Mar 2021 10:00:52 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id CAC2D6B00D1; Mon, 22 Mar 2021 10:00:52 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0157.hostedemail.com [216.40.44.157]) by kanga.kvack.org (Postfix) with ESMTP id A98B36B00CE for ; Mon, 22 Mar 2021 10:00:52 -0400 (EDT) Received: from smtpin08.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 141A88249980 for ; Mon, 22 Mar 2021 14:19:43 +0000 (UTC) X-FDA: 77947718646.08.0CF5935 Received: from mx2.suse.de (mx2.suse.de [195.135.220.15]) by imf10.hostedemail.com (Postfix) with ESMTP id B52AD407F8F1 for ; Mon, 22 Mar 2021 14:19:33 +0000 (UTC) X-Virus-Scanned: by amavisd-new at test-mx.suse.de DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1; t=1616422762; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: in-reply-to:in-reply-to:references:references; bh=jksCq3DsJImfqIu/dbj3WTPcJZucfV77hGm3/23dYP8=; b=P1px/j7s0d7JUxJpidoKJBMtVLfBUebPS1Pa8E5I1sujL2yYYEs8USRcmWfUJobdAIYYY9 1bCIx6I3uNQ0jcGBhykeI5NHgpBDkdGYgOAF/uXnUL6xmlz0d+J3oGHXXdPiVbFLnjITDd NlE6zO3jYoTkWl67gRmOPQDwCBPUqAw= Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id 7654CACA8; Mon, 22 Mar 2021 14:19:22 +0000 (UTC) Date: Mon, 22 Mar 2021 15:19:21 +0100 From: Michal Hocko To: Mike Kravetz Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Shakeel Butt , Oscar Salvador , David Hildenbrand , Muchun Song , David Rientjes , Miaohe Lin , Peter Zijlstra , Matthew Wilcox , HORIGUCHI NAOYA , "Aneesh Kumar K . V" , Waiman Long , Peter Xu , Mina Almasry , Andrew Morton Subject: Re: [RFC PATCH 4/8] hugetlb: call update_and_free_page without hugetlb_lock Message-ID: References: <20210319224209.150047-1-mike.kravetz@oracle.com> <20210319224209.150047-5-mike.kravetz@oracle.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20210319224209.150047-5-mike.kravetz@oracle.com> X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: B52AD407F8F1 X-Stat-Signature: uwkgja5mn5ho375bubcru5c7r5qpgjua Received-SPF: none (suse.com>: No applicable sender policy available) receiver=imf10; identity=mailfrom; envelope-from=""; helo=mx2.suse.de; client-ip=195.135.220.15 X-HE-DKIM-Result: pass/pass X-HE-Tag: 1616422773-542267 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Fri 19-03-21 15:42:05, Mike Kravetz wrote: > With the introduction of remove_hugetlb_page(), there is no need for > update_and_free_page to hold the hugetlb lock. Change all callers to > drop the lock before calling. > > With additional code modifications, this will allow loops which decrease > the huge page pool to drop the hugetlb_lock with each page to reduce > long hold times. > > The ugly unlock/lock cycle in free_pool_huge_page will be removed in > a subsequent patch which restructures free_pool_huge_page. > > Signed-off-by: Mike Kravetz Looks good to me. I will not ack it right now though. I am still crawling through the series and want to get a full picture. So far it looks promising ;). > --- > mm/hugetlb.c | 21 +++++++++++++-------- > 1 file changed, 13 insertions(+), 8 deletions(-) > > diff --git a/mm/hugetlb.c b/mm/hugetlb.c > index ae185d3315e0..3028cf10d504 100644 > --- a/mm/hugetlb.c > +++ b/mm/hugetlb.c > @@ -1362,14 +1362,8 @@ static void update_and_free_page(struct hstate *h, struct page *page) > 1 << PG_writeback); > } > if (hstate_is_gigantic(h)) { > - /* > - * Temporarily drop the hugetlb_lock, because > - * we might block in free_gigantic_page(). > - */ > - spin_unlock(&hugetlb_lock); > destroy_compound_gigantic_page(page, huge_page_order(h)); > free_gigantic_page(page, huge_page_order(h)); > - spin_lock(&hugetlb_lock); > } else { > __free_pages(page, huge_page_order(h)); > } > @@ -1435,16 +1429,18 @@ static void __free_huge_page(struct page *page) > > if (HPageTemporary(page)) { > remove_hugetlb_page(h, page, false); > + spin_unlock(&hugetlb_lock); > update_and_free_page(h, page); > } else if (h->surplus_huge_pages_node[nid]) { > /* remove the page from active list */ > remove_hugetlb_page(h, page, true); > + spin_unlock(&hugetlb_lock); > update_and_free_page(h, page); > } else { > arch_clear_hugepage_flags(page); > enqueue_huge_page(h, page); > + spin_unlock(&hugetlb_lock); > } > - spin_unlock(&hugetlb_lock); > } > > /* > @@ -1725,7 +1721,13 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed, > list_entry(h->hugepage_freelists[node].next, > struct page, lru); > remove_hugetlb_page(h, page, acct_surplus); > + /* > + * unlock/lock around update_and_free_page is temporary > + * and will be removed with subsequent patch. > + */ > + spin_unlock(&hugetlb_lock); > update_and_free_page(h, page); > + spin_lock(&hugetlb_lock); > ret = 1; > break; > } > @@ -1794,8 +1796,9 @@ int dissolve_free_huge_page(struct page *page) > } > remove_hugetlb_page(h, page, false); > h->max_huge_pages--; > + spin_unlock(&hugetlb_lock); > update_and_free_page(h, head); > - rc = 0; > + return 0; > } > out: > spin_unlock(&hugetlb_lock); > @@ -2572,7 +2575,9 @@ static void try_to_free_low(struct hstate *h, unsigned long count, > remove_hugetlb_page(h, page, false); > h->free_huge_pages--; > h->free_huge_pages_node[page_to_nid(page)]--; > + spin_unlock(&hugetlb_lock); > update_and_free_page(h, page); > + spin_lock(&hugetlb_lock); > > /* > * update_and_free_page could have dropped lock so > -- > 2.30.2 > -- Michal Hocko SUSE Labs