From: James Bottomley <James.Bottomley@HansenPartnership.com>
To: Trond Myklebust <trondmy@hammerspace.com>,
	"lsf-pc@lists.linux-foundation.org" 
	<lsf-pc@lists.linux-foundation.org>
Cc: "linux-nfs@vger.kernel.org" <linux-nfs@vger.kernel.org>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>
Subject: Re: [LSF/MM TOPIC] Containers and distributed filesystems
Date: Wed, 23 Jan 2019 14:32:27 -0800
Message-ID: <1548282747.2949.62.camel@HansenPartnership.com>
In-Reply-To: <28ba1a0012e84a789d2f402d292935e98266212b.camel@hammerspace.com>

On Wed, 2019-01-23 at 20:50 +0000, Trond Myklebust wrote:
> On Wed, 2019-01-23 at 11:21 -0800, James Bottomley wrote:
> > On Wed, 2019-01-23 at 18:10 +0000, Trond Myklebust wrote:
> > > Hi,
> > > 
> > > I'd like to propose an LSF/MM discussion around the topic of
> > > containers and distributed filesystems.
> > > 
> > > The background is that we have a number of decisions to make
> > > around dealing with namespaces when the filesystem is
> > > distributed.
> > > 
> > > On the one hand, there is the issue of which user namespace we
> > > should be using when putting uids/gids on the wire, or when
> > > translating into alternative identities (user/group name, cifs
> > > SIDs,...). There are two main competing proposals: the first
> > > proposal is to select the user namespace of the process that
> > > mounted the distributed filesystem. The second proposal is to
> > > (continue to) use the user namespace pointed to by init_nsproxy.
> > > It seems that whichever choice we make, we probably want to
> > > ensure that all the major distributed filesystems (AFS, CIFS,
> > > NFS) have consistent handling of these situations.
> > 
> > I don't think there's much disagreement among container people:
> > most would agree the uids on the wire should match the uids in the
> > container.  If you're running your remote fs via fuse in an
> > unprivileged container, you have no access to the kuid/kgid anyway,
> > so it's the way you have to run.
> > 
> > I think the latter comes about because most of the container
> > implementations still have difficulty consuming the user namespace,
> > so most run without it (where kuid = uid) or mis-implement it,
> > which is where you might get the mismatch.  Is there an actual use
> > case where you'd want to see the kuid at the remote end, bearing in
> > mind that when user namespaces are properly set up kuid is often
> > the product of internal subuid mapping?
> 
> Wouldn't the above basically allow you to spoof root on any existing
> mounted NFS client using the unprivileged command 'unshare -U -r'?

Yes, but what are you using as security on the remote?  If it's an
assumption of coming from a privileged port, say, then that's not going
to work unprivileged anyway (and is a very 90s way of doing
security).  If it's role-based, credential-based security then, surely,
how the client manages ids shouldn't be visible to the server, because
the server has granular credentials for each of its roles.
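
To make the 'unshare -U -r' case concrete, a rough sketch (the uid 1000
is just an example): the caller looks like root inside the new
namespace, but the kernel still maps that uid 0 back to the caller's
own uid, and that is what a namespace-aware client would put on the
wire:

  $ id -u
  1000
  $ unshare -U -r sh -c 'id -u; cat /proc/self/uid_map'
  0
           0       1000          1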

> Eric Biederman was the one proposing the 'match the namespace of the
> process that mounted the filesystem' approach. My main questions
> about that approach would be:
> 1) Are we guaranteed to always have a mapping between an arbitrary
> uid/gid from the user namespace in the container, to the user
> namespace of the parent orchestrator process that set up the mount?

Yes, user namespace mappings are injective, so a uid inside always maps
to one outside but not necessarily vice versa.  Each user namespace you
go through can shrink the pool of external ids it maps to.
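
To sketch what that looks like in practice ($CONTAINER_PID and the
ranges are made up): the orchestrator's mapping for a container might
cover 65536 host subuids, and anything nested inside can only carve
out a subset of those.

  $ cat /proc/$CONTAINER_PID/uid_map
           0     100000      65536
  # a user namespace nested inside that container can only re-map a
  # subset of these 65536 ids, so each level shrinks (or at best
  # preserves) the pool of external ids, never grows it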

> 2) How do we reconcile that approach with the requirement that NFSv4
> be able to convert uids/gids into stringified user/group names (which
> is usually solved using an upcall mechanism)?

How do you authenticate the stringified ids?  If you're relying on
authentication at mount time only, and trusting the client to tell you
the users with no further granular authentication by id, then yes, it's
always going to be a bit unsafe: anyone possessing the mount
credentials can be any id on the server.  So if you want the client to
supervise what id goes to the server, the client has to run the mount
securely and make sure the handoff to the container's user namespace is
correct; obviously, you can't allow an unprivileged container to manage
the actual client itself.
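
For reference, the upcall you mention is usually wired through the
kernel keyring to nfsidmap, roughly like this (a sketch of a common
nfs-utils setup; exact paths and options vary by distro), and, as far
as I know, the request-key helper is spawned in init's namespaces
rather than the container's, which is part of what makes question 2
awkward:

  # /etc/request-key.d/id_resolver.conf
  create  id_resolver  *  *  /usr/sbin/nfsidmap %k %d

  # /etc/idmapd.conf
  [General]
  Domain = example.com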

So, I think, to give a concrete example, the container has what it
thinks of as root and bin (uid 0 and 1) at exterior uids 1000 and 1001.
You want the handed-off mount to accept a write by container bin at
exterior uid 1001 as uid 1 to the server (real bin) but deny a write by
container root (exterior uid 1000)?
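
In uid_map terms that example is something like this (a sketch only;
$CONTAINER_PID is a placeholder):

  $ cat /proc/$CONTAINER_PID/uid_map
           0       1000          2
  # desired behaviour for a mount handed off to this container:
  #   write by container bin  (exterior uid 1001) -> uid 1 on the wire,
  #     accepted by the server as real bin
  #   write by container root (exterior uid 1000) -> uid 0 on the wire,
  #     which the server then squashes or denies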

> > > Another issue arises around the question of identifying
> > > containers when they are migrated. At least the NFSv4 client
> > > needs to be able to send a unique identifier that is preserved
> > > across container migration. The uts_namespace is typically
> > > insufficient for this purpose, since most containers don't bother
> > > to set a unique hostname.
> > 
> > We did have a discussion in plumbers about the container ID, but
> > I'm not sure it reached a useful conclusion for you (video, I'm
> > afraid):
> > 
> > https://linuxplumbersconf.org/event/2/contributions/215/
> 
> I have a concrete proposal for how we can do this using 'udev', and
> I'm looking for a forum in which to discuss it.

Cc'ing the container list: containers@lists.linux-foundation.org might
be a good start.

> > > Finally, there is an issue that may be unique to NFS (in which
> > > case I'd be happy to see it as a hallway discussion or a BoF
> > > session) around preserving file state across container
> > > migrations.
> > 
> > If by file state, you mean the internal kernel struct file state,
> > doesn't CRIU already do that? or do you mean some other state?
> 
> I thought CRIU was unable to deal with file locking state?

It depends what you mean by "deal with".  The lock state can be
extracted from the source and transferred to the target, so it works
locally (every transferred process sees the same locking state before
and after).  However, I think on the server the locks get dropped on
the transfer and reacquired, so a third party could get in and acquire
the lock, if that's the worry?  We probably need a CRIU person to
explain this better, and what the current state of play is, since my
knowledge is some years old.
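
For what it's worth, CRIU does have opt-in handling of file locks via
its --file-locks switch; a minimal local sketch (the pid and image
directory are placeholders), though whether the server-side
reacquisition window is acceptable is exactly the question for a CRIU
person:

  # checkpoint, explicitly including file lock state
  $ criu dump -t $PID -D /tmp/ckpt --file-locks --shell-job
  # restore on the target; locks are re-acquired as part of the restore
  $ criu restore      -D /tmp/ckpt --file-locks --shell-job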

James


Thread overview: 5+ messages
2019-01-23 18:10 [LSF/MM TOPIC] Containers and distributed filesystems Trond Myklebust
2019-01-23 19:21 ` James Bottomley
2019-01-23 20:50   ` Trond Myklebust
2019-01-23 22:32     ` James Bottomley [this message]
2019-02-09 21:49 ` Steve French
