Date: Wed, 27 Jan 2016 18:48:31 +0300
From: Vladimir Davydov
To: Johannes Weiner
CC: Rik van Riel, Linux Memory Management List, Linux Kernel Mailing List, KVM list
Subject: Re: [LSF/MM TOPIC] VM containers
Message-ID: <20160127154831.GF9623@esperanza>
In-Reply-To: <20160122171121.GA18062@cmpxchg.org>
References: <56A2511F.1080900@redhat.com> <20160122171121.GA18062@cmpxchg.org>

On Fri, Jan 22, 2016 at 12:11:21PM -0500, Johannes Weiner wrote:
> Hi,
>
> On Fri, Jan 22, 2016 at 10:56:15AM -0500, Rik van Riel wrote:
> > I am trying to gauge interest in discussing VM containers at the LSF/MM
> > summit this year. Projects like ClearLinux, Qubes, and others are all
> > trying to use virtual machines as better isolated containers.
> >
> > That changes some of the goals the memory management subsystem has,
> > from "use all the resources effectively" to "use as few resources as
> > necessary, in case the host needs the memory for something else".
>
> I would be very interested in discussing this topic, because I think
> the issue is more generic than these VM applications. We are facing
> the same issues with regular containers, where aggressive caching is
> counteracting the desire to cut down workloads to their bare minimum
> in order to pack them as tightly as possible.
>
> With per-cgroup LRUs and thrash detection, we have infrastructure in

By thrash detection, do you mean vmpressure?

> place that could allow us to accomplish this. Right now we only enter
> reclaim once memory runs out, but we could add an allocation mode that
> would prefer to always reclaim from the local LRU before increasing
> the memory footprint, and only expand once we detect thrashing in the
> page cache. That would keep the workloads neatly trimmed at all times.

I don't get it. Do you mean a sort of special GFP flag that would force
the caller to reclaim before the actual charging/allocation? Or is it
supposed to be automatic, based on how the memcg is behaving? If the
latter, I suppose it could already be done by a userspace daemon
adjusting memory.high as needed, although it's unclear how to do it
optimally.
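Just to illustrate what such a daemon might look like, here is a rough,
untested sketch. It assumes cgroup2 is mounted at /sys/fs/cgroup and
that workingset_refault is reported in the cgroup's memory.stat, which
may not be true on all kernels; the cgroup path and the step sizes are
simply made up. The idea is to keep trimming memory.high while the
cgroup shows no new refaults and to back off as soon as refaults start
growing.

/*
 * Rough, untested sketch: trim memory.high while the cgroup shows no
 * new refaults, give memory back as soon as refaults start growing.
 * Assumes cgroup2 is mounted at /sys/fs/cgroup and that
 * workingset_refault is reported in memory.stat, which may not be the
 * case on all kernels.  The cgroup path and step sizes are made up.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define CG "/sys/fs/cgroup/test"

static unsigned long long read_key(const char *path, const char *key)
{
	char line[256];
	unsigned long long val = 0;
	FILE *f = fopen(path, "r");

	if (!f)
		return 0;
	while (fgets(line, sizeof(line), f)) {
		if (!key) {				/* single-value file */
			val = strtoull(line, NULL, 10);
			break;
		}
		if (!strncmp(line, key, strlen(key))) {
			val = strtoull(line + strlen(key), NULL, 10);
			break;
		}
	}
	fclose(f);
	return val;
}

static void set_high(unsigned long long val)
{
	FILE *f = fopen(CG "/memory.high", "w");

	if (f) {
		fprintf(f, "%llu", val);
		fclose(f);
	}
}

int main(void)
{
	unsigned long long last_refaults = 0;

	for (;;) {
		unsigned long long refaults =
			read_key(CG "/memory.stat", "workingset_refault ");
		unsigned long long usage =
			read_key(CG "/memory.current", NULL);

		if (refaults > last_refaults)
			set_high(usage + usage / 8);	/* thrashing: back off */
		else if (usage > (64ULL << 20))
			set_high(usage - usage / 16);	/* idle: keep trimming */

		last_refaults = refaults;
		sleep(5);
	}
	return 0;
}

Whether shrinking by 1/16 and backing off by 1/8 is anywhere near
optimal is exactly the part that is unclear, of course.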
> For virtualized environments, the thrashing information would be
> communicated slightly differently to the page allocator and/or the
> host, but otherwise the fundamental principles should be the same.
>
> We'd have to figure out how to balance the aggressiveness there and
> how to describe this to the user, as I can imagine that users would
> want to tune this based on a tolerance for the degree of thrashing: if
> pages are used every M ms, keep them cached; if pages are used every N
> ms, freeing up the memory and refetching them from disk is better etc.

Sounds reasonable. What about adding a parameter to memcg that would
define the working set (ws) access time? It would act just like
memory.low, but in terms of lruvec age instead of lruvec size.

I mean, we keep track of lruvec ages and scan those lruvecs whose age
exceeds the ws access time before the others. That would protect
workloads that access their working set regularly, but not very often,
from streaming workloads, which can generate a lot of useless pressure.
A rough toy illustration of the scanning order I have in mind is at the
bottom of this mail.

Thanks,
Vladimir

> And we don't have thrash detection in secondary slab caches (yet).
>
> > Are people interested in discussing this at LSF/MM, or is it better
> > saved for a different forum?
>
> If more people are interested, I think that could be a great topic.
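Here is the promised toy: standalone userspace C, no relation to the
real mm/vmscan.c code, with structures, field names (age_ms,
ws_access_time) and numbers invented purely for illustration. Lruvecs
whose age exceeds their memcg's ws access time get scanned first; only
if that does not reclaim enough do we fall back to the rest, like
today.

/*
 * Toy model only: "age" is how long ago the working set was last
 * accessed, "ws_access_time" is the per-memcg knob proposed above.
 * The structures and numbers are invented for illustration.
 */
#include <stdio.h>

struct toy_lruvec {
	const char *name;
	unsigned int age_ms;		/* time since ws was last accessed */
	unsigned int ws_access_time;	/* the proposed per-memcg parameter */
	unsigned int pages;
};

static unsigned int scan(struct toy_lruvec *lru, unsigned int nr)
{
	unsigned int reclaimed = nr < lru->pages ? nr : lru->pages;

	lru->pages -= reclaimed;
	printf("reclaimed %u pages from %s\n", reclaimed, lru->name);
	return reclaimed;
}

int main(void)
{
	struct toy_lruvec lruvecs[] = {
		/* reuses its cache every ~50 ms, ws access time 100 ms */
		{ "periodic-workload", 50, 100, 1000 },
		/* streaming: never looks at its cache again */
		{ "streaming-workload", 500, 100, 1000 },
	};
	unsigned int to_reclaim = 800;
	unsigned int i;

	/* first pass: only lruvecs whose age exceeds their ws access time */
	for (i = 0; i < 2 && to_reclaim; i++)
		if (lruvecs[i].age_ms > lruvecs[i].ws_access_time)
			to_reclaim -= scan(&lruvecs[i], to_reclaim);

	/* second pass: if that wasn't enough, scan everything, like today */
	for (i = 0; i < 2 && to_reclaim; i++)
		to_reclaim -= scan(&lruvecs[i], to_reclaim);

	return 0;
}

With the numbers above, only the streaming lruvec gets touched and the
periodic workload's cache survives, which is exactly the point.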