From mboxrd@z Thu Jan 1 00:00:00 1970
From: ebiederm@xmission.com (Eric W. Biederman)
To: Ian Kent
Cc: Andrew Morton, autofs mailing list, Kernel Mailing List, Al Viro,
	linux-fsdevel, Omar Sandoval
Subject: Re: [PATCH 3/4] autofs - make mountpoint checks namespace aware
Date: Tue, 20 Sep 2016 11:09:38 -0500
Message-ID: <87r38ek0xp.fsf@x220.int.ebiederm.org>
In-Reply-To: <1474246680.3204.4.camel@themaw.net> (Ian Kent's message of
	"Mon, 19 Sep 2016 08:58:00 +0800")
References: <20160914061434.24714.490.stgit@pluto.themaw.net>
	<20160914061445.24714.68331.stgit@pluto.themaw.net>
	<87zina9ys3.fsf@x220.int.ebiederm.org>
	<1473898163.3205.32.camel@themaw.net>
	<87k2ed530d.fsf@x220.int.ebiederm.org>
	<1473912775.3205.122.camel@themaw.net>
	<8737l0wtzp.fsf@x220.int.ebiederm.org>
	<1473994699.3087.53.camel@themaw.net>
	<1474246680.3204.4.camel@themaw.net>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/24.5 (gnu/linux)
MIME-Version: 1.0
Content-Type: text/plain
X-Mailing-List: linux-kernel@vger.kernel.org

Ian Kent writes:

> On Fri, 2016-09-16 at 10:58 +0800, Ian Kent wrote:
>> On Thu, 2016-09-15 at 19:47 -0500, Eric W. Biederman wrote:
>> > Ian Kent writes:
>> > 
>> > > On Wed, 2016-09-14 at 21:08 -0500, Eric W. Biederman wrote:
>> > > > Ian Kent writes:
>> > > > 
>> > > > > On Wed, 2016-09-14 at 12:28 -0500, Eric W.
>> > > > > Biederman wrote:
>> > > > > > Ian Kent writes:
>> > > > > > 
>> > > > > > > If an automount mount is clone(2)ed into a file system that is
>> > > > > > > propagation private, when it later expires in the originating
>> > > > > > > namespace subsequent calls to autofs ->d_automount() for that
>> > > > > > > dentry in the original namespace will return ELOOP until the
>> > > > > > > mount is manually umounted in the cloned namespace.
>> > > > > > > 
>> > > > > > > In the same way, if an autofs mount is triggered by automount(8)
>> > > > > > > running within a container the dentry will be seen as mounted in
>> > > > > > > the root init namespace and calls to ->d_automount() in that
>> > > > > > > namespace will return ELOOP until the mount is umounted within
>> > > > > > > the container.
>> > > > > > > 
>> > > > > > > Also, have_submounts() can return an incorrect result when a
>> > > > > > > mount exists in a namespace other than the one being checked.
>> > > > > > 
>> > > > > > Overall this appears to be a fairly reasonable set of changes. It
>> > > > > > does increase the expense when an actual mount point is
>> > > > > > encountered, but if these are the desired semantics some increase
>> > > > > > in cost when a dentry is a mountpoint is unavoidable.
>> > > > > > 
>> > > > > > May I ask the motivation for this set of changes? Reading through
>> > > > > > the changes I don't grasp why we want to change the behavior of
>> > > > > > autofs. What problem is being solved? What are the benefits?
>> > > > > 
>> > > > > LOL, it's all too easy for me to give a patch description that I
>> > > > > think explains a problem I need to solve without realizing it isn't
>> > > > > clear to others what the problem is, sorry about that.
>> > > > > 
>> > > > > For quite a while now, and not that frequently but consistently,
>> > > > > I've been getting reports of people using autofs getting ELOOP
>> > > > > errors and not being able to mount automounts.
>> > > > > 
>> > > > > This has been due to the cloning of autofs file systems (that have
>> > > > > active automounts at the time of the clone) by other systems.
>> > > > > 
>> > > > > An unshare, as one example, can easily result in the cloning of an
>> > > > > autofs file system that has active mounts which shows this problem.
>> > > > > 
>> > > > > Once an active mount that has been cloned is expired in the
>> > > > > namespace that performed the unshare it can't be (auto)mounted
>> > > > > again in the originating namespace because the mounted check in the
>> > > > > autofs module will think it is already mounted.
>> > > > > 
>> > > > > I'm not sure this is a clear description either, hopefully it is
>> > > > > enough to demonstrate the type of problem I'm trying to solve.
>> > > > 
>> > > > So to rephrase, the problem is that an autofs instance can stop
>> > > > working properly from the perspective of the mount namespace it is
>> > > > mounted in if the autofs instance is shared between multiple mount
>> > > > namespaces. The problem is that mounts and unmounts do not always
>> > > > propagate between mount namespaces. This lack of symmetric
>> > > > mount/unmount behavior leads to mountpoints that become unusable.
>> > > 
>> > > That's right.
>> > > 
>> > > It's also worth considering that symmetric mount propagation is
>> > > usually not the behaviour needed either, and things like LXC and
>> > > Docker are set propagation slave because of problems caused by
>> > > propagation back to the parent namespace.
>> > > 
>> > > So a mount can be triggered within a container, mounted by the
>> > > automount daemon in the parent namespace, and propagated to the child,
>> > > and similarly for expires, which is the common use case now.
>> > > 
>> > > > Which leads to the question what is the expected new behavior with
>> > > > your patchset applied. New mounts can be added in the parent mount
>> > > > namespace (because the test is local). Does your change also allow
>> > > > the autofs mountpoints to be used in the other mount namespaces that
>> > > > share the autofs instance if everything becomes unmounted?
>> > > 
>> > > The problem occurs when the subordinate namespace doesn't deal with
>> > > these propagated mounts properly, although they can obviously be used
>> > > by the subordinate namespace.
>> > > 
>> > > > Or is it expected that other mount namespaces that share an autofs
>> > > > instance will get changes in their mounts via mount propagation, and
>> > > > if mount propagation is insufficient they are on their own?
>> > > 
>> > > Namespaces that receive updates via mount propagation from a parent
>> > > will continue to function as they do now.
>> > > 
>> > > Mounts that don't get updates via mount propagation will retain the
>> > > mount to use if they need to, as they would without this change, but
>> > > the originating namespace will also continue to function as expected.
>> > > 
>> > > The child namespace needs to clean up its mounts on exit, which it had
>> > > to do prior to this change also.
>> > > 
>> > > > I believe this is a question of how notifications of the desire for
>> > > > an automount work after your change, and whether those notifications
>> > > > are consistent with your desired and/or expected behavior.
>> > > 
>> > > It sounds like you might be assuming the service receiving these
>> > > cloned mounts actually wants to use them or is expecting them to
>> > > behave like automount mounts.
>> > > But that's not what I've seen, and is not the way these cloned mounts
>> > > behave without the change.
>> > > 
>> > > However, as has probably occurred to you by now, there is a semantic
>> > > change with this for namespaces that don't receive mount propagation.
>> > > 
>> > > If a mount request is triggered by an access in the subordinate
>> > > namespace for a dentry that is already mounted in the parent namespace
>> > > it will silently fail (in that a mount won't appear in the subordinate
>> > > namespace) rather than getting an ELOOP error as it would now.
>> > > 
>> > > It's also the case that, if such a mount isn't already mounted, it
>> > > will cause a mount to occur in the parent namespace. But that is also
>> > > the way it is without the change.
>> > > 
>> > > TBH I don't know yet how to resolve that; ideally the cloned mounts
>> > > would not appear in the subordinate namespace upon creation, but
>> > > that's also not currently possible to do, and even if it was it would
>> > > mean quite a change to the way things behave now.
>> > > 
>> > > All in all I believe the change here solves a problem that needs to be
>> > > solved without affecting normal usage, at the expense of a small
>> > > behaviour change in cases where automount isn't providing a mounting
>> > > service.
>> > 
>> > That sounds like a reasonable semantic change. Limiting the responses
>> > of the autofs mount path to what is present in the mount namespace
>> > of the program that actually performs the autofs mounts seems needed.
>> 
>> Indeed, yes.
>> 
>> > In fact the entire local mount concept exists because I was solving a
>> > very similar problem for rename, unlink and rmdir, where a cloned mount
>> > namespace could cause a denial of service attack on the original
>> > mount namespace.
>> > 
>> > I don't know if this change makes sense for mount expiry.
>> 
>> Originally I thought it did, but now I think you're right, it won't
>> actually make a difference.
>> 
>> Let me think a little more about it; I thought there was a reason I
>> included the expire in the changes but I can't remember now.
>> 
>> It may be that originally I thought individual automount(8) instances
>> within containers could be affected by an instance of automount(8) in
>> the root namespace (and vice versa) but now I think these will all be
>> isolated.
> 
> I also thought that the autofs expire would continue to see the umounted
> mount and continue calling back to the daemon in an attempt to umount it.
> 
> That isn't the case, so I can drop the changes to the expire code as you
> recommend.

Sounds good.

Eric
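The namespace-local mounted check discussed in this thread has a userspace
analogue worth noting: /proc/self/mountinfo lists only the mounts visible in
the caller's mount namespace, so a check against it is naturally "local" in
the sense the patches aim for. A minimal sketch (the helper name is
illustrative, not part of autofs or the patch series):

```shell
# Sketch: a per-mount-namespace "is this path a mountpoint here?" test.
# /proc/self/mountinfo reflects only the caller's mount namespace, which
# is the same locality property the in-kernel check is being given.
is_local_mountpoint() {
    # Field 5 of each mountinfo record is the mount point path.
    awk -v p="$1" '$5 == p { found = 1 } END { exit !found }' /proc/self/mountinfo
}

if is_local_mountpoint /proc; then
    echo "/proc is mounted in this namespace"
fi
```

Run from two mount namespaces that don't share propagation, the same path
can report differently, which is exactly the asymmetry the thread describes.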