From: "Huang, Ying"
To: Michal Hocko
Cc: Mel Gorman, David Hildenbrand, Johannes Weiner, Matthew Wilcox,
 Andrew Morton, Vlastimil Babka, Zi Yan, Peter Zijlstra, Dave Hansen,
 Minchan Kim, Hugh Dickins, Alexander Duyck
Subject: Re: [RFC 0/3] mm: Discard lazily freed pages when migrating
Date: Tue, 03 Mar 2020 16:47:54 +0800
Message-ID: <87o8td4yf9.fsf@yhuang-dev.intel.com>
In-Reply-To: <20200303080945.GX4380@dhcp22.suse.cz> (Michal Hocko's message of
 "Tue, 3 Mar 2020 09:09:45 +0100")
References: <20200228033819.3857058-1-ying.huang@intel.com>
 <20200228034248.GE29971@bombadil.infradead.org>
 <87a7538977.fsf@yhuang-dev.intel.com>
 <871rqf850z.fsf@yhuang-dev.intel.com>
 <20200228094954.GB3772@suse.de>
 <87h7z76lwf.fsf@yhuang-dev.intel.com>
 <20200302151607.GC3772@suse.de>
 <87zhcy5hoj.fsf@yhuang-dev.intel.com>
 <20200303080945.GX4380@dhcp22.suse.cz>

Michal Hocko writes:

> On Tue 03-03-20 09:51:56, Huang, Ying wrote:
>> Mel Gorman writes:
>> > On Mon, Mar 02, 2020 at 07:23:12PM +0800, Huang, Ying wrote:
>> >> If some applications cannot tolerate the latency incurred by the
>> >> memory allocation and zeroing, then we cannot always discard
>> >> instead of migrate. But in some situations, less memory pressure
>> >> can help. So isn't it better to let the administrator and the
>> >> application choose the right behavior for the specific situation?
>> >>
>> >
>> > Is there an application you have in mind that benefits from discarding
>> > MADV_FREE pages instead of migrating them?
>> >
>> > Allowing the administrator or application to tune this would be very
>> > problematic. An application would require an update to the system call
>> > to take advantage of it, and then would have to detect whether the
>> > running kernel supports it. An administrator would have to detect that
>> > MADV_FREE pages are being prematurely discarded, leading to a slowdown,
>> > and that is hard to detect. It could be inferred from monitoring
>> > compaction stats and checking whether compaction activity is correlated
>> > with higher minor faults in the target application. Proving the
>> > correlation would require using the perf software event
>> > PERF_COUNT_SW_PAGE_FAULTS_MIN and matching the faulting addresses to
>> > MADV_FREE regions that were freed prematurely. That is not an obvious
>> > debugging step to take when an application detects latency spikes.
>> >
>> > Now, you could add a counter specifically for MADV_FREE pages freed for
>> > reasons other than memory pressure and hope the administrator knows
>> > about the counter and what it means. That type of knowledge could take
>> > a long time to spread, so it's really very important that there is
>> > evidence of an application that suffers due to the current MADV_FREE
>> > and migration behaviour.
>>
>> OK. I understand that this patchset isn't a universal win, so we need
>> some way to justify it. I will try to find some application for that.
>>
>> Another thought, as proposed by David Hildenbrand: it may be a universal
>> win to discard clean MADV_FREE pages when migrating if there is already
>> memory pressure on the target node. For example, if the free memory on
>> the target node is lower than the high watermark?
>
> This is already happening, because if the target node is short on memory
> it will start to reclaim, and if MADV_FREE pages are at the tail of the
> inactive file LRU list then they will be dropped. Please note how that
> follows proper aging and doesn't introduce any special casing. Really,
> MADV_FREE is an inactive cache for anonymous memory and we treat it like
> inactive page cache. This is not carved in stone of course, but it really
> requires very good justification to change.

If my understanding is correct, the newly migrated clean MADV_FREE pages
will be put at the head of the inactive file LRU list instead of the tail.
So it's possible that some useful file cache pages will be reclaimed
instead.

Best Regards,
Huang, Ying
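
As a rough illustration (not from the thread itself) of the measurement Mel
describes above, a minimal user-space sketch might look like the following.
It assumes a Linux kernel with MADV_FREE (4.5+) and permission to open a
per-process perf counter; the mapping size is arbitrary and most error
handling is omitted. It marks an anonymous mapping MADV_FREE and uses the
PERF_COUNT_SW_PAGE_FAULTS_MIN software event to observe whether the lazily
freed pages were discarded before being touched again.

#include <linux/perf_event.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Count minor page faults of the calling process on any CPU. */
static int open_minflt_counter(void)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_SOFTWARE;
	attr.config = PERF_COUNT_SW_PAGE_FAULTS_MIN;

	return syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
}

int main(void)
{
	size_t len = 64UL << 20;	/* 64 MB, arbitrary */
	uint64_t before, after;
	char *buf;
	int fd;

	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(buf, 1, len);		/* fault the pages in and dirty them */

	/* Tell the kernel the contents may be dropped lazily. */
	if (madvise(buf, len, MADV_FREE)) {
		perror("madvise(MADV_FREE)");
		return 1;
	}

	fd = open_minflt_counter();
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	read(fd, &before, sizeof(before));
	memset(buf, 2, len);		/* touch the region again */
	read(fd, &after, sizeof(after));

	/*
	 * If the MADV_FREE pages were discarded in the meantime, the second
	 * memset() re-faults zero-filled pages and the delta approaches
	 * len / page size; if the pages survived, it stays near zero.
	 */
	printf("minor faults while re-touching: %llu\n",
	       (unsigned long long)(after - before));

	close(fd);
	munmap(buf, len);
	return 0;
}

A delta close to len divided by the page size means the MADV_FREE pages were
dropped before being reused (by reclaim today, or by discard-on-migration as
proposed in this patchset); a delta near zero means they survived.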