From: "Huang, Ying"
To: Mel Gorman
Cc: David Hildenbrand, Michal Hocko, Johannes Weiner, Matthew Wilcox,
    Andrew Morton, linux-mm@kvack.org, Vlastimil Babka, Zi Yan,
    Peter Zijlstra, Dave Hansen, Minchan Kim, Hugh Dickins,
    Alexander Duyck
Subject: Re: [RFC 0/3] mm: Discard lazily freed pages when migrating
Date: Wed, 04 Mar 2020 08:33:08 +0800
Message-ID: <874kv53qnv.fsf@yhuang-dev.intel.com>
In-Reply-To: <20200303130241.GE3772@suse.de> (Mel Gorman's message of
    "Tue, 3 Mar 2020 13:02:41 +0000")

Mel Gorman writes:

> On Tue, Mar 03, 2020 at 09:51:56AM +0800, Huang, Ying wrote:
>> Mel Gorman writes:
>> > On Mon, Mar 02, 2020 at 07:23:12PM +0800, Huang, Ying wrote:
>> >> If some applications cannot tolerate the latency incurred by the
>> >> memory allocation and zeroing, then we cannot always discard
>> >> instead of migrating. In some situations, though, lower memory
>> >> pressure can help. So isn't it better to let the administrator and
>> >> the application choose the right behavior for their specific
>> >> situation?
>> >>
>> >
>> > Is there an application you have in mind that benefits from
>> > discarding MADV_FREE pages instead of migrating them?
>> >
>> > Allowing the administrator or application to tune this would be
>> > very problematic. An application would require an update to the
>> > system call to take advantage of it, and would then have to detect
>> > whether the running kernel supports it. An administrator would have
>> > to detect that MADV_FREE pages are being prematurely discarded,
>> > leading to a slowdown, and that is hard to detect. It could be
>> > inferred from monitoring compaction stats and checking whether
>> > compaction activity is correlated with higher minor faults in the
>> > target application. Proving the correlation would require using the
>> > perf software event PERF_COUNT_SW_PAGE_FAULTS_MIN and matching the
>> > addresses to MADV_FREE regions that were freed prematurely. That is
>> > not an obvious debugging step to take when an application detects
>> > latency spikes.
>> >
>> > Now, you could add a counter specifically for MADV_FREE pages freed
>> > for reasons other than memory pressure and hope the administrator
>> > knows about the counter and what it means. That type of knowledge
>> > could take a long time to spread, so it's really very important
>> > that there is evidence of an application that suffers due to the
>> > current MADV_FREE and migration behaviour.
>>
>> OK. I understand that this patchset isn't a universal win, so we need
>> some way to justify it. I will try to find some application for that.
>>
>> Another thought, as proposed by David Hildenbrand: it may be a
>> universal win to discard clean MADV_FREE pages when migrating if
>> there is already memory pressure on the target node, for example, if
>> the free memory on the target node is lower than the high watermark.
>>
>
> That is an extremely specific corner case that is not likely to occur.
> NUMA balancing is not going to migrate a MADV_FREE page under these
> circumstances: a write cancels MADV_FREE, and a read attempt will
> probably fail to allocate a destination page in
> alloc_misplaced_dst_page, so the data gets lost instead of remaining
> remote. sys_movepages is a possibility, but the likelihood of an
> application deliberately trying to migrate to a loaded node is low.
> Compaction never migrates cross-node, so the state of a remote node
> under pressure does not matter.
>
> Once again, there needs to be a reasonable use case to be able to
> meaningfully balance between the benefits and risks of changing the
> MADV_FREE semantics.
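
To make the watermark idea above concrete, the target-node check could
look roughly like the sketch below. This is only an illustration, not
code from the patchset: node_under_pressure() is a made-up name, the
right call site in the migration path is an open question, and real
code would need to mirror the page allocator's classzone handling
instead of naively scanning every zone.

/*
 * Illustrative sketch only (not part of this series): does any
 * populated zone on the migration target node already sit below its
 * high watermark?  If so, discarding a clean MADV_FREE page might be
 * preferable to migrating it there.
 */
static bool node_under_pressure(int nid)
{
	pg_data_t *pgdat = NODE_DATA(nid);
	int i;

	for (i = 0; i < MAX_NR_ZONES; i++) {
		struct zone *zone = &pgdat->node_zones[i];

		if (!populated_zone(zone))
			continue;
		/* Compare free pages against the zone's high watermark. */
		if (zone_page_state(zone, NR_FREE_PAGES) <
		    high_wmark_pages(zone))
			return true;
	}
	return false;
}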
OK. Will try to find some workloads for this.

Best Regards,
Huang, Ying
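
P.S. For reference, the minor-fault measurement described above can be
driven from userspace along these lines. This is a minimal sketch using
the documented perf_event_open(2) interface, not code from this series;
the PID argument and the 10-second window are arbitrary choices.

/*
 * Count minor faults of a target process via perf_event_open(2) with
 * PERF_COUNT_SW_PAGE_FAULTS_MIN, as one input for correlating
 * compaction activity with faults on prematurely freed MADV_FREE
 * regions.  Observing another user's process may require a permissive
 * perf_event_paranoid setting.  Build: gcc -o minflt minflt.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

int main(int argc, char **argv)
{
	struct perf_event_attr attr;
	long long count;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <pid>\n", argv[0]);
		return 1;
	}

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_SOFTWARE;
	attr.config = PERF_COUNT_SW_PAGE_FAULTS_MIN;	/* minor faults */
	attr.inherit = 1;	/* include children forked after attach */

	/* Attach to <pid> on all CPUs; the counter starts enabled. */
	fd = syscall(SYS_perf_event_open, &attr, atoi(argv[1]), -1, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	sleep(10);	/* arbitrary sample window */
	if (read(fd, &count, sizeof(count)) != sizeof(count)) {
		perror("read");
		close(fd);
		return 1;
	}
	printf("minor faults in 10s: %lld\n", count);
	close(fd);
	return 0;
}

Sampling the compact_* counters in /proc/vmstat over the same window
would give the compaction side of the correlation.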