From: Linus Torvalds
Subject: Re: [PATCH] mm/mincore: allow for making sys_mincore() privileged
Date: Thu, 10 Jan 2019 23:11:23 -0800
To: Dominique Martinet
Cc: Dave Chinner, Jiri Kosina, Matthew Wilcox, Jann Horn, Andrew Morton,
 Greg KH, Peter Zijlstra, Michal Hocko, Linux-MM, kernel list, Linux API

On Thu, Jan 10, 2019 at 8:58 PM Dominique Martinet wrote:
>
> I get on average over a few queries approximately a real time of 350ms,
> 230ms and 220ms immediately after drop cache and service restart, and
> 150ms, 60ms and 60ms after a prefetch (hand-wavy average over 3 runs, I
> didn't have the patience to do proper testing).
> (In both cases, user/sys are less than 10ms; I don't see much difference
> there)

But those numbers aren't about the mincore() change. That's just from
dropping caches.

Now, what's the difference with the mincore change, and without? Is it
actually measurable?

Because that's all that matters: is the mincore change something you
can even notice? Is it a big regression?

The fact that things are slower when they are cold in the cache isn't
the issue. The issue is whether the change to mincore semantics makes
any difference to real loads.

              Linus
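PS. If anyone wants to actually measure that, here's a minimal sketch
of the user-space side (my illustration for this thread, not code from
the patch): it mmaps a file and asks mincore() how many of its pages
are resident, which is exactly the per-page data the patch would hide
from unprivileged callers. The program and the file argument are just
placeholders.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    int main(int argc, char **argv)
    {
            struct stat st;
            unsigned char *vec;
            size_t pages, resident = 0, i;
            long psz = sysconf(_SC_PAGESIZE);
            void *map;
            int fd;

            if (argc != 2) {
                    fprintf(stderr, "usage: %s <file>\n", argv[0]);
                    return 1;
            }

            fd = open(argv[1], O_RDONLY);
            if (fd < 0 || fstat(fd, &st) < 0) {
                    perror(argv[1]);
                    return 1;
            }
            if (st.st_size == 0) {
                    fprintf(stderr, "%s: empty file\n", argv[1]);
                    return 1;
            }
            pages = (st.st_size + psz - 1) / psz;

            map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
            if (map == MAP_FAILED) {
                    perror("mmap");
                    return 1;
            }

            /* One byte per page; the low bit means "resident in the
             * page cache". */
            vec = malloc(pages);
            if (!vec || mincore(map, st.st_size, vec) < 0) {
                    perror("mincore");
                    return 1;
            }

            for (i = 0; i < pages; i++)
                    resident += vec[i] & 1;

            printf("%zu/%zu pages resident\n", resident, pages);
            return 0;
    }

Run it on the same files with and without the patch applied, alongside
the workload's timings. The comparison that matters is patched against
unpatched, not cache-cold against cache-warm.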