From: Tejun Heo <firstname.lastname@example.org>
To: Shakeel Butt <email@example.com>
Cc: Jakub Kicinski <firstname.lastname@example.org>,
	Andrew Morton <email@example.com>,
	Linux MM <firstname.lastname@example.org>,
	Kernel Team <email@example.com>,
	Johannes Weiner <firstname.lastname@example.org>,
	Chris Down <email@example.com>,
	Cgroups <firstname.lastname@example.org>
Subject: Re: [PATCH 0/3] memcg: Slow down swap allocation as the available space gets depleted
Date: Mon, 20 Apr 2020 12:47:40 -0400
Message-ID: <20200420164740.GF43469@mtj.thefacebook.com>
In-Reply-To: <CALvZod6M4OsM-t8m_KX9wCkEutdwUMgbP9682eHGQor9JvO_BQ@mail.gmail.com>

Hello,

On Mon, Apr 20, 2020 at 09:12:54AM -0700, Shakeel Butt wrote:
> I got the high level vision but I am very skeptical that in terms of
> memory and performance isolation this can provide anything better than
> best effort QoS, which might be good enough for desktop users.

However, I don't see that big a gap between desktop and server use
cases. There certainly are some tolerance differences, but for the
majority of use cases that is a permeable boundary. I can see where
you're coming from and think that it would be difficult to talk you out
of the skepticism without concretely demonstrating the contrary, which
we're actively working on.

A directional point I want to emphasize, though, is that siloing these
solutions into special "professional"-only use is an easy pitfall which
often obscures bigger possibilities and leads to developmental
dead-ends and obsolescence. I believe it's a tendency which should be
actively resisted and fought against. Servers really aren't all that
special.

> for a server environment where multiple latency sensitive interactive
> jobs are co-hosted with multiple batch jobs and the machine's memory
> may be over-committed, this is a recipe for disaster. The only
> scenario where I think it might work is if there is only one job
> running on the machine.
Obviously, you can't overcommit on any resource for critical latency
sensitive workloads, whether there's one or many of them, but there are
also other types of workloads which can be flexible with resource
availability.

> I do agree that finding the right upper limit is a challenge. For us,
> we have two types of users: first, those who know exactly how much
> resources they want, and second, those who ask us to set the limits
> appropriately. We have a ML/history based central system to
> dynamically set and adjust limits for jobs of such users.
>
> Coming back to this patch series, to me, it seems like the patch
> series is contrary to the vision you are presenting. Though the users
> are not setting memory.[high|max], they are setting swap.max, and this
> series is asking them to set one more tunable, i.e. swap.high. The
> approach more consistent with the presented vision is to throttle or
> slow down the allocators when the system swap is near full, with no
> need to set swap.max or swap.high.

It's a piece of the puzzle to make memory protection work
comprehensively. You can argue that the fact swap isn't protection
based is against the direction, but I find that argument rather
facetious, as swap is quite a different resource from memory, and it's
not like I'm saying limits shouldn't be used at all. There still are
missing pieces - i.e. slowing down on global depletion - but that
doesn't mean swap.high isn't useful.

Thanks.

-- 
tejun
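[Editor's note: for readers outside the thread, the mechanism under
discussion - penalizing allocating tasks as a cgroup's swap consumption
climbs past swap.high - follows the same general shape as the existing
memory.high penalty logic: compute the overage above the threshold and
inject a delay that grows superlinearly with it. The sketch below is a
purely illustrative model of that idea; the function name, constants,
and quadratic curve are hypothetical and are not the kernel's actual
implementation.]

```python
def throttle_delay_ms(usage, high, max_delay_ms=2000):
    """Illustrative overage-based throttle: return a delay (ms) that
    grows quadratically as 'usage' exceeds the 'high' threshold."""
    if high == 0 or usage <= high:
        # At or below the threshold: no throttling at all.
        return 0
    overage = usage - high
    # Quadratic growth: small overages barely throttle, while large
    # ones quickly approach the cap, giving reclaim time to catch up.
    factor = (overage / high) ** 2
    return min(int(factor * max_delay_ms), max_delay_ms)
```

The point of the superlinear curve is that a workload briefly grazing
the threshold sees negligible slowdown, while one that keeps allocating
well past it is effectively stalled rather than hard-failed.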