From: Mel Gorman <mgorman@techsingularity.net>
To: Linux-MM <linux-mm@kvack.org>, Linux-RT-Users
Cc: LKML, Chuck Lever, Jesper Dangaard Brouer, Matthew Wilcox,
	Thomas Gleixner, Peter Zijlstra, Ingo Molnar, Michal Hocko,
	Oscar Salvador, Mel Gorman
Subject: [PATCH 10/11] mm/page_alloc: Avoid conflating IRQs disabled with zone->lock
Date: Wed, 7 Apr 2021 21:24:22 +0100
Message-Id: <20210407202423.16022-11-mgorman@techsingularity.net>
In-Reply-To: <20210407202423.16022-1-mgorman@techsingularity.net>
References: <20210407202423.16022-1-mgorman@techsingularity.net>
X-Mailer: git-send-email 2.26.2

Historically when freeing pages, free_one_page() assumed that callers had
IRQs disabled and that zone->lock could therefore be acquired with a plain
spin_lock(). This confuses the scope of what local_lock_irq is protecting
and what zone->lock is protecting, in free_unref_page_list in particular.

This patch uses spin_lock_irqsave() for zone->lock in free_one_page()
instead of relying on callers to have disabled IRQs.
free_unref_page_commit() is changed to deal only with PCP pages protected
by the local lock. free_unref_page_list() then first frees isolated pages
to the buddy lists with free_one_page() and frees the rest of the pages to
the PCP via free_unref_page_commit(). The end result is that
free_one_page() no longer depends on side-effects of local_lock to be
correct.

Note that this may incur a performance penalty while memory hot-remove is
running, but that is not a common operation.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 mm/page_alloc.c | 67 ++++++++++++++++++++++++++++++-------------------
 1 file changed, 41 insertions(+), 26 deletions(-)
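As a simplified, self-contained illustration of the contract change in
free_one_page(), consider the following kernel-style sketch. It uses
placeholder names (my_zone_lock, the stubbed-out free work) and is not the
actual mm/page_alloc.c code:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(my_zone_lock);

/* Old contract: correct only if the caller already disabled IRQs. */
static void free_one_old(void)
{
	spin_lock(&my_zone_lock);	/* relies on caller's IRQ state */
	/* ... free the page to the buddy lists ... */
	spin_unlock(&my_zone_lock);
}

/*
 * New contract: the function saves and restores the IRQ state itself,
 * so it is correct regardless of the caller's context.
 */
static void free_one_new(void)
{
	unsigned long flags;

	spin_lock_irqsave(&my_zone_lock, flags);
	/* ... free the page to the buddy lists ... */
	spin_unlock_irqrestore(&my_zone_lock, flags);
}

The trade-off is a redundant save/restore of the IRQ state when the caller
already has IRQs disabled, consistent with the performance note in the
changelog above.
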
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d94ec53367bd..6d98d97b6cf5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1473,10 +1473,12 @@ static void free_one_page(struct zone *zone,
 				unsigned int order,
 				int migratetype, fpi_t fpi_flags)
 {
-	spin_lock(&zone->lock);
+	unsigned long flags;
+
+	spin_lock_irqsave(&zone->lock, flags);
 	migratetype = check_migratetype_isolated(zone, page, pfn, migratetype);
 	__free_one_page(page, pfn, zone, order, migratetype, fpi_flags);
-	spin_unlock(&zone->lock);
+	spin_unlock_irqrestore(&zone->lock, flags);
 }
 
 static void __meminit __init_single_page(struct page *page, unsigned long pfn,
@@ -3238,31 +3240,13 @@ static bool free_unref_page_prepare(struct page *page, unsigned long pfn)
 	return true;
 }
 
-static void free_unref_page_commit(struct page *page, unsigned long pfn)
+static void free_unref_page_commit(struct page *page, unsigned long pfn,
+				   int migratetype)
 {
 	struct zone *zone = page_zone(page);
 	struct per_cpu_pages *pcp;
-	int migratetype;
 
-	migratetype = get_pcppage_migratetype(page);
 	__count_vm_event(PGFREE);
-
-	/*
-	 * We only track unmovable, reclaimable and movable on pcp lists.
-	 * Free ISOLATE pages back to the allocator because they are being
-	 * offlined but treat HIGHATOMIC as movable pages so we can get those
-	 * areas back if necessary. Otherwise, we may have to free
-	 * excessively into the page allocator
-	 */
-	if (migratetype >= MIGRATE_PCPTYPES) {
-		if (unlikely(is_migrate_isolate(migratetype))) {
-			free_one_page(zone, page, pfn, 0, migratetype,
-				      FPI_NONE);
-			return;
-		}
-		migratetype = MIGRATE_MOVABLE;
-	}
-
 	pcp = this_cpu_ptr(zone->per_cpu_pageset);
 	list_add(&page->lru, &pcp->lists[migratetype]);
 	pcp->count++;
@@ -3277,12 +3261,29 @@ void free_unref_page(struct page *page)
 {
 	unsigned long flags;
 	unsigned long pfn = page_to_pfn(page);
+	int migratetype;
 
 	if (!free_unref_page_prepare(page, pfn))
 		return;
 
+	/*
+	 * We only track unmovable, reclaimable and movable on pcp lists.
+	 * Place ISOLATE pages on the isolated list because they are being
+	 * offlined but treat HIGHATOMIC as movable pages so we can get those
+	 * areas back if necessary. Otherwise, we may have to free
+	 * excessively into the page allocator
+	 */
+	migratetype = get_pcppage_migratetype(page);
+	if (unlikely(migratetype >= MIGRATE_PCPTYPES)) {
+		if (unlikely(is_migrate_isolate(migratetype))) {
+			free_one_page(page_zone(page), page, pfn, 0, migratetype, FPI_NONE);
+			return;
+		}
+		migratetype = MIGRATE_MOVABLE;
+	}
+
 	local_lock_irqsave(&pagesets.lock, flags);
-	free_unref_page_commit(page, pfn);
+	free_unref_page_commit(page, pfn, migratetype);
 	local_unlock_irqrestore(&pagesets.lock, flags);
 }
 
@@ -3294,6 +3295,7 @@ void free_unref_page_list(struct list_head *list)
 	struct page *page, *next;
 	unsigned long flags, pfn;
 	int batch_count = 0;
+	int migratetype;
 
 	/* Prepare pages for freeing */
 	list_for_each_entry_safe(page, next, list, lru) {
@@ -3301,15 +3303,28 @@ void free_unref_page_list(struct list_head *list)
 		if (!free_unref_page_prepare(page, pfn))
 			list_del(&page->lru);
 		set_page_private(page, pfn);
+
+		/*
+		 * Free isolated pages directly to the allocator, see
+		 * comment in free_unref_page.
+		 */
+		migratetype = get_pcppage_migratetype(page);
+		if (unlikely(migratetype >= MIGRATE_PCPTYPES)) {
+			if (unlikely(is_migrate_isolate(migratetype))) {
+				free_one_page(page_zone(page), page, pfn, 0,
+					      migratetype, FPI_NONE);
+				list_del(&page->lru);
+			}
+		}
 	}
 
 	local_lock_irqsave(&pagesets.lock, flags);
 	list_for_each_entry_safe(page, next, list, lru) {
-		unsigned long pfn = page_private(page);
-
+		pfn = page_private(page);
 		set_page_private(page, 0);
+		migratetype = get_pcppage_migratetype(page);
 		trace_mm_page_free_batched(page);
-		free_unref_page_commit(page, pfn);
+		free_unref_page_commit(page, pfn, migratetype);
 
 		/*
 		 * Guard against excessive IRQ disabled times when we get
-- 
2.26.2
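
The rework of free_unref_page_list() in the last two hunks is an instance
of a more general two-pass pattern: divert the special cases (here,
isolated pages) before entering the IRQ-disabled section, then handle only
the common case under the lock. A simplified, self-contained sketch of
that pattern follows; struct item and the free_* helpers are placeholders,
not the real mm code:

#include <linux/list.h>
#include <linux/types.h>

struct item {
	struct list_head lru;
	bool isolated;		/* stand-in for is_migrate_isolate() */
};

static void free_slowpath(struct item *it) { /* buddy-style free */ }
static void free_fastpath(struct item *it) { /* PCP-style free */ }

static void free_item_list(struct list_head *list)
{
	struct item *it, *next;

	/* Pass 1: divert special cases before any IRQ-disabled section. */
	list_for_each_entry_safe(it, next, list, lru) {
		if (it->isolated) {
			list_del(&it->lru);
			free_slowpath(it);
		}
	}

	/* Pass 2: only the common case runs under the local lock. */
	/* local_lock_irqsave(&pagesets.lock, flags); */
	list_for_each_entry_safe(it, next, list, lru) {
		list_del(&it->lru);
		free_fastpath(it);
	}
	/* local_unlock_irqrestore(&pagesets.lock, flags); */
}

Keeping the slow path outside the locked section is what allows
free_one_page() to take zone->lock with spin_lock_irqsave() without
nesting inside the pcp local lock.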