Date: Thu, 18 Feb 2021 17:08:58 +0100
From: Michal Hocko
To: Minchan Kim
Cc: Matthew Wilcox, Andrew Morton, linux-mm, LKML, cgoldswo@codeaurora.org,
    linux-fsdevel@vger.kernel.org, david@redhat.com, vbabka@suse.cz,
    viro@zeniv.linux.org.uk, joaodias@google.com
Subject: Re: [RFC 1/2] mm: disable LRU pagevec during the migration temporarily
References: <20210216170348.1513483-1-minchan@kernel.org>
 <20210217211612.GO2858050@casper.infradead.org>

On Thu 18-02-21 07:52:25, Minchan Kim wrote:
> On Thu, Feb 18, 2021 at 09:17:02AM +0100, Michal Hocko wrote:
> > On Wed 17-02-21 13:32:05, Minchan Kim wrote:
> > > On Wed, Feb 17, 2021 at 09:16:12PM +0000, Matthew Wilcox wrote:
> > > > On Wed, Feb 17, 2021 at 12:46:19PM -0800, Minchan Kim wrote:
> > > > > > I suspect you do not want to add atomic_read inside hot
> > > > > > paths, right? Is this really something that we have to
> > > > > > microoptimize for? atomic_read is a simple READ_ONCE on
> > > > > > many archs.
> > > > >
> > > > > It's also spin_lock_irqsave on some archs. If the new
> > > > > synchronization were heavily complicated, atomics would be
> > > > > a better simple start, but I thought this locking scheme was
> > > > > simple enough that there was no need to add an atomic
> > > > > operation on the read side.
> > > >
> > > > What arch uses a spinlock for atomic_read()? I just had a quick
> > > > grep and didn't see any.
> > >
> > > Ah, my bad. I was confused with the update side.
> > > Okay, let's use atomic ops to make it simple.
> >
> > Thanks. This should make the code much simpler. Before you send
> > another version for review, I have another thing to consider. You
> > are wiring this into the migration code, but control over the lru
> > pcp caches can be useful in other paths as well. Memory offlining
> > would be another user. We already disable the page allocator pcp
> > caches there to prevent regular draining. We could do the same with
> > the lru pcp caches.
>
> I didn't catch your point here. If memory offlining is interested in
> disabling the lru pcp caches, it could call migrate_prep and
> migrate_finish like the other places. Are you suggesting this one?

What I meant to say is that you can look at this not as an integral
part of the migration code but rather as common functionality that
migration and others can use. So instead of being an implicit part of
migrate_prep, this would become lru_cache_disable, and migrate_finish
would become lru_cache_enable. See my point?

An advantage of that would be that this would match the pcp page
allocator disabling, and we could have it in place for the whole
operation to make the page state more stable wrt. the LRU state
(PageLRU).
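As an aside, the disable/enable pairing under discussion can be sketched in plain C11 atomics. This is only an illustration of the refcounted scheme, not the kernel implementation: the function names mirror the suggestion above, but the counter, its semantics, and the draining step are assumptions for the sketch.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical sketch: callers bump a counter around the window where
 * the per-cpu LRU caches must stay disabled; the hot path only does a
 * cheap atomic load (READ_ONCE-like), no lock. */
static atomic_int lru_disable_count;

static void lru_cache_disable(void)
{
	atomic_fetch_add(&lru_disable_count, 1);
	/* the real kernel would also drain the existing pagevecs here */
}

static void lru_cache_enable(void)
{
	atomic_fetch_sub(&lru_disable_count, 1);
}

/* Hot path check: a single relaxed atomic load. */
static bool lru_cache_usable(void)
{
	return atomic_load_explicit(&lru_disable_count,
				    memory_order_relaxed) == 0;
}
```

A caller such as migration or memory offlining would then wrap the whole operation in lru_cache_disable()/lru_cache_enable(), keeping PageLRU stable for its duration, which is the symmetry with the pcp page allocator disabling mentioned above.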
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index a969463bdda4..0ec1c13bfe32 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1425,8 +1425,12 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
>  		node_clear(mtc.nid, nmask);
>  		if (nodes_empty(nmask))
>  			node_set(mtc.nid, nmask);
> +
> +		migrate_prep();
>  		ret = migrate_pages(&source, alloc_migration_target, NULL,
>  			(unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
> +
> +		migrate_finish();
>  		if (ret) {
>  			list_for_each_entry(page, &source, lru) {
>  				pr_warn("migrating pfn %lx failed ret:%d ",

-- 
Michal Hocko
SUSE Labs