From: Al Viro
Subject: Re: [PATCH review 11/11] mnt: Honor MNT_LOCKED when detaching mounts
Date: Sun, 11 Jan 2015 02:00:30 +0000
Message-ID: <20150111020030.GF22149@ZenIV.linux.org.uk>
In-Reply-To: <20150110055148.GY22149-3bDd1+5oDREiFSDQTTA3OLVCufUGDwFn@public.gmane.org>
References: <1420490787-14387-11-git-send-email-ebiederm@xmission.com>
 <20150107184334.GZ22149@ZenIV.linux.org.uk>
 <87h9w2gzht.fsf@x220.int.ebiederm.org>
 <20150107205239.GB22149@ZenIV.linux.org.uk>
 <87iogi8dka.fsf@x220.int.ebiederm.org>
 <20150108002227.GC22149@ZenIV.linux.org.uk>
 <20150108223212.GF22149@ZenIV.linux.org.uk>
 <20150109203126.GI22149@ZenIV.linux.org.uk>
 <87h9vzryio.fsf@x220.int.ebiederm.org>
 <20150110055148.GY22149@ZenIV.linux.org.uk>
To: "Eric W. Biederman"
Cc: Andrey Vagin, Richard Weinberger, Linux Containers, Andy Lutomirski,
 linux-fsdevel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, Linus Torvalds
List-Id: containers.vger.kernel.org

On Sat, Jan 10, 2015 at 05:51:48AM +0000, Al Viro wrote:
> On Fri, Jan 09, 2015 at 11:32:47PM -0600, Eric W. Biederman wrote:
>
> > I don't believe rcu anything in this function itself buys you anything,
> > but structuring this primitive so that it can be called from an rcu list
> > traversal seems interesting.
>
> ???
>
> Without RCU, what would prevent it being freed right under us?
>
> The whole point is to avoid pinning it down - as it is, we can have
> several processes call ->kill() on the same object. The first one
> would end up doing cleanup, the rest would wait *without* *affecting*
> *fs_pin* *lifetime*.
>
> Note that I'm using autoremove there for wait.func(), then in the wait
> loop I check (without locks) wait.task_list being empty. It is racy;
> deliberately so. All I really care about in there is checking that
> wait.func has not been called until after rcu_read_lock(). If that is
> true, we know that p->wait hadn't been woken until that point, i.e.
> p hadn't reached rcu delay on the way to being freed until after our
> rcu_read_lock(). Ergo, it can't get freed until we do rcu_read_unlock()
> and we can safely take p->wait.lock.
>
> RCU is very much relevant there.

FWIW, I've just pushed a completely untested tree in #experimental-fs_pin;
it definitely will be reordered, etc., probably with quite a few of the
patches from the beginning of your series mixed in, but the current tree
in there should show at least what I'm aiming at.
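[Editor's sketch of the wait loop described above, in kernel-style C. This is
reconstructed from the prose in this mail, not copied from the actual
fs/fs_pin.c in the #experimental-fs_pin tree, and is not meant to compile
as-is; the spin_lock/schedule ordering and the use of p->wait are assumptions.]

```c
	wait_queue_t wait;

	init_wait(&wait);	/* wait.func = autoremove_wake_function */
	/* ... arrange for wait to be queued on p->wait ... */
	for (;;) {
		rcu_read_lock();
		/*
		 * Racy, unlocked check, deliberately so: if wait is no
		 * longer queued, wait.func has already run and p may be
		 * past its RCU grace period - hands off.
		 */
		if (list_empty(&wait.task_list)) {
			rcu_read_unlock();
			break;
		}
		/*
		 * wait.func had not run when we took rcu_read_lock(),
		 * so p->wait hadn't been woken by then, i.e. p can't
		 * reach its RCU-delayed freeing until we drop the
		 * read-side critical section; p->wait.lock is safe.
		 */
		spin_lock_irq(&p->wait.lock);
		set_current_state(TASK_UNINTERRUPTIBLE);
		spin_unlock_irq(&p->wait.lock);
		rcu_read_unlock();
		schedule();
	}
```

The point of the pattern is that waiters never pin the fs_pin itself: the
only thing guaranteeing the object's liveness across the locked section is
the RCU read lock taken before the emptiness check.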