From: Daniel Jordan <daniel.m.jordan@oracle.com>
To: Tejun Heo <tj@kernel.org>
Cc: Daniel Jordan <daniel.m.jordan@oracle.com>, Johannes Weiner <hannes@cmpxchg.org>, Michal Hocko <mhocko@kernel.org>, Andrew Morton <akpm@linux-foundation.org>, Roman Gushchin <guro@fb.com>, linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@fb.com, Peter Zijlstra <peterz@infradead.org>
Subject: Re: [PATCH] mm: memcontrol: asynchronous reclaim for memory.high
Date: Thu, 20 Feb 2020 13:23:26 -0500
Message-ID: <20200220182326.ubcjycaubgykiy6e@ca-dmjordan1.us.oracle.com>
In-Reply-To: <20200220155651.GG698990@mtj.thefacebook.com>

On Thu, Feb 20, 2020 at 10:56:51AM -0500, Tejun Heo wrote:
> On Thu, Feb 20, 2020 at 10:45:24AM -0500, Daniel Jordan wrote:
> > Ok, consistency with io and memory is one advantage to doing it that
> > way.  Creating kthreads in cgroups also seems viable so far, and it's
> > unclear whether either approach is significantly simpler or more
> > maintainable than the other, at least to me.
>
> The problem with the separate kthread approach is that many of these
> work units are tiny, and cgroup membership might not be known, or might
> not agree with the processing context, from the beginning.

The amount of work wouldn't seem to matter as long as the kernel thread
stays in the cgroup and lives long enough.  There's only the one-time
cost of attaching it when it's forked.  That seems doable for unbound
workqueues (the async reclaim), but may not be for the network packets.

The membership and context issues are pretty compelling, though.  Good
to know, I'll keep it in mind as I think this through.

> For example, the ownership of network packets can't be determined till
> processing has progressed quite a bit in shared contexts, and each item
> is too small to bounce around.  The only viable way I can think of is
> splitting the aggregate overhead according to the number of packets (or
> some other trivially measurable quantity) processed.
>
> Anything sitting in the reclaim layer is the same.  Reclaim should be
> charged to the cgroup whose memory is reclaimed *but* shouldn't block
> other cgroups which are waiting for that memory.  It has to happen in
> the context of the highest-priority entity waiting for memory, but the
> costs incurred must be charged to the memory owners.
>
> So, one way or the other, I think we'll need back charging, and once
> back charging is needed for big-ticket items like network and reclaim,
> it's kinda silly to use separate mechanisms for other stuff.

Yes, having both would appear to be redundant.

> > Is someone on your side working on remote charging right now?  I was
> > planning to post an RFD comparing these soon and it would make sense
> > to include them.
>
> It's been on the to do list but nobody is working on it yet.

Ok, thanks.
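
For reference, a minimal sketch of the "one-time cost of attaching"
pattern discussed above, modeled on what vhost already does when it
spawns its worker threads: fork a kthread and pull it into the creating
task's cgroups once, at creation time.  The function and thread names
are invented for illustration and error handling is trimmed; this is
the shape of the idea, not code from the patch under discussion.

	#include <linux/kthread.h>
	#include <linux/cgroup.h>
	#include <linux/sched.h>
	#include <linux/err.h>

	static int reclaim_worker_fn(void *data)
	{
		while (!kthread_should_stop()) {
			/* ... reclaim on behalf of the memcg ... */
			set_current_state(TASK_INTERRUPTIBLE);
			schedule();
		}
		return 0;
	}

	/* Spawn a worker and attach it to the caller's cgroups up front. */
	static struct task_struct *spawn_cgroup_worker(void)
	{
		struct task_struct *worker;
		int ret;

		worker = kthread_create(reclaim_worker_fn, NULL, "memcg-reclaim");
		if (IS_ERR(worker))
			return worker;

		/* The one-time attach: inherit the forking task's membership. */
		ret = cgroup_attach_task_all(current, worker);
		if (ret) {
			kthread_stop(worker);
			return ERR_PTR(ret);
		}

		wake_up_process(worker);
		return worker;
	}

After this point the worker's CPU usage is accounted to the cgroup it
was attached to for as long as it lives, which is why the per-item cost
only matters for contexts (like packet processing) where membership
isn't known up front.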
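
And a toy model of the proportional back-charging Tejun describes:
split a chunk of shared processing cost across cgroups in proportion to
the number of packets each one owned.  This is plain userspace C with
invented names, using integer math as kernel code would; it only
illustrates the accounting arithmetic, not a real charging mechanism.

	#include <stdio.h>

	struct cg_stat {
		const char *name;
		unsigned long packets;    /* items processed for this cgroup */
		unsigned long charged_ns; /* shared cost back-charged to it */
	};

	/* Split total_ns across the cgroups by relative packet count. */
	static void back_charge(struct cg_stat *cgs, int n,
				unsigned long total_ns)
	{
		unsigned long total_pkts = 0;
		int i;

		for (i = 0; i < n; i++)
			total_pkts += cgs[i].packets;
		if (!total_pkts)
			return;

		for (i = 0; i < n; i++)
			cgs[i].charged_ns +=
				total_ns * cgs[i].packets / total_pkts;
	}

	int main(void)
	{
		struct cg_stat cgs[] = { { "A", 700, 0 }, { "B", 300, 0 } };

		/* 1ms of shared softirq time lands 70/30 by packet count. */
		back_charge(cgs, 2, 1000000);
		printf("%s: %lu ns  %s: %lu ns\n",
		       cgs[0].name, cgs[0].charged_ns,
		       cgs[1].name, cgs[1].charged_ns);
		return 0;
	}

Running this prints "A: 700000 ns  B: 300000 ns": the aggregate
overhead is divided by a trivially measurable quantity rather than
bounced per-item between cgroups.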