From mboxrd@z Thu Jan 1 00:00:00 1970 From: Minchan Kim Subject: Re: [RFC 7/7] mm: madvise support MADV_ANONYMOUS_FILTER and MADV_FILE_FILTER Date: Tue, 28 May 2019 21:10:51 +0900 Message-ID: <20190528121051.GC30365@google.com> References: <20190528062947.GL1658@dhcp22.suse.cz> <20190528081351.GA159710@google.com> <20190528084927.GB159710@google.com> <20190528090821.GU1658@dhcp22.suse.cz> <20190528103256.GA9199@google.com> <20190528104117.GW1658@dhcp22.suse.cz> <20190528111208.GA30365@google.com> <20190528112840.GY1658@dhcp22.suse.cz> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Return-path: Content-Disposition: inline In-Reply-To: Sender: linux-kernel-owner@vger.kernel.org To: Daniel Colascione Cc: Michal Hocko , Andrew Morton , LKML , linux-mm , Johannes Weiner , Tim Murray , Joel Fernandes , Suren Baghdasaryan , Shakeel Butt , Sonny Rao , Brian Geffon , Linux API List-Id: linux-api@vger.kernel.org On Tue, May 28, 2019 at 04:42:47AM -0700, Daniel Colascione wrote: > On Tue, May 28, 2019 at 4:28 AM Michal Hocko wrote: > > > > On Tue 28-05-19 20:12:08, Minchan Kim wrote: > > > On Tue, May 28, 2019 at 12:41:17PM +0200, Michal Hocko wrote: > > > > On Tue 28-05-19 19:32:56, Minchan Kim wrote: > > > > > On Tue, May 28, 2019 at 11:08:21AM +0200, Michal Hocko wrote: > > > > > > On Tue 28-05-19 17:49:27, Minchan Kim wrote: > > > > > > > On Tue, May 28, 2019 at 01:31:13AM -0700, Daniel Colascione wrote: > > > > > > > > On Tue, May 28, 2019 at 1:14 AM Minchan Kim wrote: > > > > > > > > > if we went with the per vma fd approach then you would get this > > > > > > > > > > feature automatically because map_files would refer to file backed > > > > > > > > > > mappings while map_anon could refer only to anonymous mappings. > > > > > > > > > > > > > > > > > > The reason to add such filter option is to avoid the parsing overhead > > > > > > > > > so map_anon wouldn't be helpful. > > > > > > > > > > > > > > > > Without chiming on whether the filter option is a good idea, I'd like > > > > > > > > to suggest that providing an efficient binary interfaces for pulling > > > > > > > > memory map information out of processes. Some single-system-call > > > > > > > > method for retrieving a binary snapshot of a process's address space > > > > > > > > complete with attributes (selectable, like statx?) for each VMA would > > > > > > > > reduce complexity and increase performance in a variety of areas, > > > > > > > > e.g., Android memory map debugging commands. > > > > > > > > > > > > > > I agree it's the best we can get *generally*. > > > > > > > Michal, any opinion? > > > > > > > > > > > > I am not really sure this is directly related. I think the primary > > > > > > question that we have to sort out first is whether we want to have > > > > > > the remote madvise call process or vma fd based. This is an important > > > > > > distinction wrt. usability. I have only seen pid vs. pidfd discussions > > > > > > so far unfortunately. > > > > > > > > > > With current usecase, it's per-process API with distinguishable anon/file > > > > > but thought it could be easily extended later for each address range > > > > > operation as userspace getting smarter with more information. > > > > > > > > Never design user API based on a single usecase, please. The "easily > > > > extended" part is by far not clear to me TBH. As I've already mentioned > > > > several times, the synchronization model has to be thought through > > > > carefuly before a remote process address range operation can be > > > > implemented. > > > > > > I agree with you that we shouldn't design API on single usecase but what > > > you are concerning is actually not our usecase because we are resilient > > > with the race since MADV_COLD|PAGEOUT is not destruptive. > > > Actually, many hints are already racy in that the upcoming pattern would > > > be different with the behavior you thought at the moment. > > > > How come they are racy wrt address ranges? You would have to be in > > multithreaded environment and then the onus of synchronization is on > > threads. That model is quite clear. But we are talking about separate > > processes and some of them might be even not aware of an external entity > > tweaking their address space. > > I don't think the difference between a thread and a process matters in > this context. Threads race on address space operations all the time > --- in the sense that multiple threads modify a process's address > space without synchronization. The main reasons that these races > hasn't been a problem are: 1) threads mostly "mind their own business" > and modify different parts of the address space or use locks to ensure > that they don't stop on each other (e.g., the malloc heap lock), and > 2) POSIX mmap atomic-replacement semantics make certain classes of > operation (like "magic ring buffer" setup) safe even in the presence > of other threads stomping over an address space. > > The thing that's new in this discussion from a synchronization point > of view isn't that the VM operation we're talking about is coming from > outside the process, but that we want to do a read-decide-modify-ish > thing. We want to affect (using various hints) classes of pages like > "all file pages" or "all anonymous pages" or "some pages referring to > graphics buffers up to 100MB" (to pick an example off the top of my > head of a policy that might make sense). From a synchronization point > of view, it doesn't really matter whether it's a thread within the > target process or a thread outside the target process that does the > address space manipulation. What's new is the inspection of the > address space before performing an operation. > > Minchan started this thread by proposing some flags that would > implement a few of the filtering policies I used as examples above. > Personally, instead of providing a few pre-built policies as flags, > I'd rather push the page manipulation policy to userspace as much as > possible and just have the kernel provide a mechanism that *in > general* makes these read-decide-modify operations efficient and > robust. I still think there's way to achieve this goal very > inexpensively without compromising on flexibility. I'm looking forward to seeing the way. ;-)