Subject: Re: Balloon pressuring page cache
From: Alexander Duyck
To: Tyler Sanderson, "Michael S. Tsirkin"
Cc: David Hildenbrand, "Wang, Wei W", virtualization@lists.linux-foundation.org,
    David Rientjes, linux-mm@kvack.org, Michal Hocko
Date: Mon, 03 Feb 2020 13:22:23 -0800

On Mon, 2020-02-03 at 12:32 -0800, Tyler Sanderson wrote:
> There were apparently good reasons for moving away from OOM notifier
> callback:
> https://lkml.org/lkml/2018/7/12/314
> https://lkml.org/lkml/2018/8/2/322
>
> In particular the OOM notifier is worse than the shrinker because:
> - It is last-resort, which means the system has already gone through
>   heroics to prevent OOM. Those heroic reclaim efforts are expensive
>   and impact application performance.
> - It lacks understanding of NUMA or other OOM constraints.
> - It has a higher potential for bugs due to the subtlety of the
>   callback context.
>
> Given the above, I think the shrinker API certainly makes the most
> sense _if_ the balloon size is static. In that case memory should be
> reclaimed from the balloon early and proportionally to balloon size,
> which the shrinker API achieves.

The problem is the shrinker doesn't have any concept of tiering or
priority. I suspect the reason for using the OOM notification is that,
in practice, the balloon should be the last thing we pull memory out
of, with things like page cache and slab caches being reclaimed first.
Once pages are leaked out of the balloon by the shrinker, that will
trigger the balloon wanting to reinflate. Ideally, if the shrinker is
running we shouldn't be able to reinflate the balloon, and if we are
reinflating the balloon we shouldn't need to run the shrinker. The fact
that we can do both at the same time is problematic.

> However, if the balloon is inflating and intentionally causing memory
> pressure then this results in the inefficiency pointed out earlier.
>
> If the balloon is inflating but not causing memory pressure then
> there is no problem with either API.

The entire point of the balloon is to cause memory pressure. Otherwise
essentially all we are really doing is hinting, since the guest doesn't
need the memory and isn't going to use it any time soon.

> This suggests another route: rather than cause memory pressure to
> shrink the page cache, the balloon could issue the equivalent of
> "echo 3 > /proc/sys/vm/drop_caches".
>
> Of course ideally, we want to be more fine grained than "drop
> everything". We really want an API that says "drop everything that
> hasn't been accessed in the last 5 minutes".
>
> This would eliminate the need for the balloon to cause memory
> pressure at all, which avoids the inefficiency in question.
> Furthermore, this pairs nicely with the FREE_PAGE_HINT feature.

Something similar was brought up in the discussion we had about this
in my patch set. The problem is that using a value like "5 minutes"
implies we are going to need to track some extra state somewhere to
make that determination.
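For reference, the coarse-grained version needs no new interface at
all; something like the sketch below (purely illustrative, nothing like
this is wired into the balloon driver anywhere) is all it takes from
userspace in the guest, and the fact that it throws away all of the
clean cache at once is exactly why it is too blunt on its own:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Equivalent of "sync; echo 3 > /proc/sys/vm/drop_caches": drop clean
 * page cache pages plus reclaimable slab objects (dentries, inodes).
 * Only clean, unused objects are freed, so sync() first to maximize
 * what is actually droppable. */
static int drop_caches(void)
{
	int fd;

	sync();

	fd = open("/proc/sys/vm/drop_caches", O_WRONLY);
	if (fd < 0) {
		perror("open /proc/sys/vm/drop_caches");
		return -1;
	}
	if (write(fd, "3", 1) != 1) {
		perror("write /proc/sys/vm/drop_caches");
		close(fd);
		return -1;
	}
	close(fd);

	return 0;
}

int main(void)
{
	return drop_caches() ? 1 : 0;
}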
An alternative is to essentially just shrink memory for the guest
slowly. We had some discussion about this in another thread, and the
following patch was brought up as a way to go about doing that:
https://github.com/Conan-Kudo/omv-kernel-rc/blob/master/0154-sysctl-vm-Fine-grained-cache-shrinking.patch

The idea is that you slowly bleed memory from the guest by specifying
some number of MB of cache to be freed on some regular interval.

Thanks.

- Alex
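P.S. Driving a knob like that from the guest is nothing more than a
timer loop. A rough sketch follows; the sysctl path and units here are
placeholders standing in for whatever interface the linked patch
actually adds:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Placeholder path -- substitute whatever sysctl the cache-shrinking
 * patch actually exposes. */
#define SHRINK_SYSCTL	"/proc/sys/vm/cache_shrink_mb"

/* Ask the kernel to free roughly "mb" megabytes of cache every
 * "interval" seconds, so the guest bleeds memory gradually instead of
 * being put under sudden pressure. */
static void bleed_cache(unsigned int mb, unsigned int interval)
{
	char buf[32];
	int fd, len;

	len = snprintf(buf, sizeof(buf), "%u", mb);

	for (;;) {
		fd = open(SHRINK_SYSCTL, O_WRONLY);
		if (fd >= 0) {
			if (write(fd, buf, len) != len)
				perror("write " SHRINK_SYSCTL);
			close(fd);
		} else {
			perror("open " SHRINK_SYSCTL);
		}
		sleep(interval);
	}
}

int main(void)
{
	bleed_cache(64, 60);	/* e.g. 64 MB every minute */
	return 0;
}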