From: David Hildenbrand <david@redhat.com>
To: Tyler Sanderson <tysand@google.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>, Alexander Duyck <alexander.h.duyck@linux.intel.com>, "Wang, Wei W" <wei.w.wang@intel.com>, "virtualization@lists.linux-foundation.org" <virtualization@lists.linux-foundation.org>, David Rientjes <rientjes@google.com>, "linux-mm@kvack.org" <linux-mm@kvack.org>, Michal Hocko <mhocko@kernel.org>, namit@vmware.com
Subject: Re: Balloon pressuring page cache
Date: Tue, 4 Feb 2020 20:17:18 +0100
Message-ID: <b809340d-7e86-caf6-bf12-db7bb8265045@redhat.com>
In-Reply-To: <CAJuQAmpiVqnNt-vSkQh5Gg63QZ49_nuz4+VW2Jfwn51gWVdtfA@mail.gmail.com>

On 04.02.20 19:52, Tyler Sanderson wrote:
>
> On Tue, Feb 4, 2020 at 12:29 AM David Hildenbrand <david@redhat.com
> <mailto:david@redhat.com>> wrote:
>
>     On 03.02.20 21:32, Tyler Sanderson wrote:
>     > There were apparently good reasons for moving away from the OOM
>     > notifier callback:
>     > https://lkml.org/lkml/2018/7/12/314
>     > https://lkml.org/lkml/2018/8/2/322
>     >
>     > In particular the OOM notifier is worse than the shrinker because:
>
>     The issue is that DEFLATE_ON_OOM is under-specified.
>
>     >
>     > 1. It is last-resort, which means the system has already gone through
>     >    heroics to prevent OOM. Those heroic reclaim efforts are expensive
>     >    and impact application performance.
>
>     That's *exactly* what "deflate on OOM" suggests.
>
> It seems there are some use cases where "deflate on OOM" is desired and
> others where "deflate on pressure" is desired.
> This suggests adding a new feature bit "DEFLATE_ON_PRESSURE" that
> registers the shrinker, and reverting DEFLATE_ON_OOM to use the OOM
> notifier callback.
>
> This lets users configure the balloon for their use case.

You want the old behavior back, so why should we introduce a new one? Or
am I missing something? (you did want us to revert to the old handling,
no?)

I consider virtio-balloon to this very day a big hack. And I don't see
it getting better with new config knobs.

That said, the technologies that are candidates to replace it (free page
reporting, taming the guest page cache, etc.) are still not ready - so
we'll have to stick with it for now :( .

> I'm actually not sure how you would safely do memory overcommit without
> DEFLATE_ON_OOM. So I think it unlocks a huge use case.

Using better-suited technologies that are not ready yet (well, some form
of free page reporting is available under IBM z already, but in a
proprietary form) ;)

Anyhow, I remember that DEFLATE_ON_OOM only makes it less likely to
crash your guest, but not that you are safe to squeeze the last bit out
of your guest VM.

-- 
Thanks,

David / dhildenb