From: "Daniel P. Berrange" <berrange@redhat.com>
To: Chuck Lever <chuck.lever@oracle.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>,
	"J. Bruce Fields" <bfields@fieldses.org>,
	Steve Dickson <SteveD@redhat.com>,
	Linux NFS Mailing List <linux-nfs@vger.kernel.org>,
	Matt Benjamin <mbenjami@redhat.com>,
	Jeff Layton <jlayton@redhat.com>
Subject: Re: [PATCH nfs-utils v3 00/14] add NFS over AF_VSOCK support
Date: Tue, 19 Sep 2017 16:10:51 +0100
Message-ID: <20170919151051.GS9536@redhat.com>
In-Reply-To: <67608054-B771-44F4-8B2F-5F7FDC506CDD@oracle.com>

On Tue, Sep 19, 2017 at 10:35:49AM -0400, Chuck Lever wrote:
> 
> > On Sep 19, 2017, at 5:31 AM, Daniel P. Berrange <berrange@redhat.com> wrote:
> > 
> > On Mon, Sep 18, 2017 at 07:09:27PM +0100, Stefan Hajnoczi wrote:
> >> There are 2 main use cases:
> >> 
> >> 1. Easy file sharing between host & guest
> >> 
> >>   It's true that a disk image can be used but that's often inconvenient
> >>   when the data comes in individual files.  Making a throwaway ISO or
> >>   disk image from those files requires extra disk space, is slow, etc.
> > 
> > More critically, it cannot be easily live-updated for a running guest.
> > Not all of the setup data that the hypervisor wants to share with the
> > guest is boot-time only - some may be accessed repeatedly post-boot &
> > need to be updated dynamically. Currently OpenStack can only satisfy
> > this if using its network-based metadata REST service, but many cloud
> > operators refuse to deploy this because they are not happy with the
> > guest and host sharing a LAN, leaving only the virtual disk option,
> > which cannot support dynamic updates.
> 
> Hi Daniel-
> 
> OK, but why can't the REST service run on VSOCK, for instance?

That is a possibility, though cloud-init/OpenStack maintainers are
reluctant to add new features to the metadata REST service, because
the spec being followed is defined by Amazon (as part of EC2), not
by OpenStack. Adding new features would effectively fork the spec
by adding stuff Amazon doesn't (yet) support - this is why it is
IPv4-only, with no IPv6 support, as Amazon has not defined a
standardized IPv6 address for the metadata service at this time.

> How is VSOCK different than guests and hypervisor sharing a LAN?

VSOCK requires no guest configuration, it won't be broken accidentally
by NetworkManager (or equivalent), and it won't be mistakenly blocked
by a guest admin/OS adding a "deny all" default firewall policy. The
same applies on the host side, and since it is separate from IP
networking, there is no possibility of the guest ever getting a
channel out to the LAN, even if the host is misconfigured.

> Would it be OK if the hypervisor and each guest shared a virtual
> point-to-point IP network?

No - per the text above and below.

> Can you elaborate on "they are not happy with the guests and host
> sharing a LAN" ?

The security of the host management LAN is so critical to the cloud
that they're not willing to allow any guest network interface to have
an IP visible to/from the host, even if it were locked down with
firewall rules. It is just one administrative misconfiguration
away from disaster.

> > If the admin takes any live snapshots of the guest, then this throwaway
> > disk image has to be kept around for the lifetime of the snapshot too.
> > We cannot just throw it away & re-generate it later when restoring the
> > snapshot, because we cannot guarantee the newly generated image would be
> > byte-for-byte identical to the original one we generated due to possible
> > changes in mkfs related tools.
> 
> Seems like you could create a loopback mount of a small file to
> store configuration data. That would consume very little local
> storage. I've done this already in the fedfs-utils-server package,
> which creates small loopback mounted filesystems to contain FedFS
> domain root directories, for example.
> 
> Sharing the disk serially is a little awkward, but not difficult.
> You could use an automounter in the guest to grab that filesystem
> when needed, then release it after a period of not being used.


With QEMU's existing 9p-over-virtio filesystem support, people have
built tools which run virtual machines where the root FS is directly
running against a 9p share from the host filesystem. It isn't
possible to share the host filesystem's /dev/sda (or whatever) with
the guest because it is holding a non-cluster filesystem, so it
can't be mounted twice. Likewise you don't want to copy the host
filesystem's entire contents into a block device and mount that, as
it's simply impractical.
With 9p-over-virtio, or NFS-over-VSOCK, we can execute commands
present in the host's filesystem, sandboxed inside a QEMU guest,
by simply sharing the host's '/' FS with the guest and having the
guest mount that as its own / (typically it would be read-only,
and then a further FS share would be added for writeable areas).
For this to be reliable we can't use host IP networking because
there are too many ways for that to fail, and if spawning the
sandbox as non-root we can't influence the host networking setup
at all. Currently it uses 9p-over-virtio for this reason, which
works great, except that distros hate the idea of supporting a 9p
filesystem driver in the kernel - an NFS driver capable of running
over virtio is a much smaller incremental support burden.
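
With the vsock transport proposed here, the equivalent would be
roughly the following (a sketch only - the exact mount option and
exports(5) syntax is whatever the nfs-utils patches in this thread
end up with):

  # host side: export / to vsock clients and start nfsd with vsock support
  #   /etc/exports:   /   vsock:*(ro,no_root_squash)
  rpc.nfsd --vsock 8

  # guest side: mount the host (always CID 2) over vsock
  mount -t nfs -o proto=vsock,vers=4.1 2:/ /mnt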

> >>   From a user perspective it's much nicer to point to a directory and
> >>   have it shared with the guest.
> >> 
> >> 2. Using NFS over AF_VSOCK as an interface for a distributed file system
> >>   like Ceph or Gluster.
> >> 
> >>   Hosting providers don't necessarily want to expose their distributed
> >>   file system directly to the guest.  An NFS frontend presents an NFS
> >>   file system to the guest.  The guest doesn't have access to the
> >>   distributed file system configuration details or network access.  The
> >>   hosting provider can even switch backend file systems without
> >>   requiring guest configuration changes.
> 
> Notably, NFS can already support hypervisor file sharing and
> gateway-ing to Ceph and Gluster. We agree that those are useful.
> However VSOCK is not a pre-requisite for either of those use
> cases.

This again requires that the NFS server which runs on the management LAN
be visible to the guest network. So this hits the same problem as above,
with cloud providers wanting those networks kept completely separate.

The desire from OpenStack is to have an NFS server on the compute host,
which exposes the Ceph filesystem to the guest over VSOCK.
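
Concretely, the idea is something like the following (hostnames,
paths and the vsock export/mount syntax are all illustrative, the
latter per the patches in this thread):

  # on the compute host: mount CephFS and re-export it over vsock
  mount -t ceph ceph-mon.example.com:6789:/ /srv/ceph \
      -o name=cloud,secretfile=/etc/ceph/cloud.secret
  echo "/srv/ceph   vsock:*(rw,no_root_squash)" >> /etc/exports
  exportfs -r

  # in the guest: only an NFS mount over vsock is visible, with no
  # Ceph configuration details or network access
  mount -t nfs -o proto=vsock,vers=4.1 2:/srv/ceph /mnt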

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

Thread overview: 86+ messages
2017-09-13 10:26 [PATCH nfs-utils v3 00/14] add NFS over AF_VSOCK support Stefan Hajnoczi
2017-09-13 10:26 ` [PATCH nfs-utils v3 01/14] mount: don't use IPPROTO_UDP for address resolution Stefan Hajnoczi
2017-09-13 10:26 ` [PATCH nfs-utils v3 02/14] nfs-utils: add vsock.h Stefan Hajnoczi
2017-09-13 10:26 ` [PATCH nfs-utils v3 03/14] nfs-utils: add AF_VSOCK support to sockaddr.h Stefan Hajnoczi
2017-09-13 10:26 ` [PATCH nfs-utils v3 04/14] mount: present AF_VSOCK addresses Stefan Hajnoczi
2017-09-13 10:26 ` [PATCH nfs-utils v3 05/14] mount: accept AF_VSOCK in nfs_verify_family() Stefan Hajnoczi
2017-09-13 10:26 ` [PATCH nfs-utils v3 06/14] mount: generate AF_VSOCK clientaddr Stefan Hajnoczi
2017-09-13 10:26 ` [PATCH nfs-utils v3 07/14] getport: recognize "vsock" netid Stefan Hajnoczi
2017-09-13 10:26 ` [PATCH nfs-utils v3 08/14] mount: AF_VSOCK address parsing Stefan Hajnoczi
2017-09-13 10:26 ` [PATCH nfs-utils v3 09/14] exportfs: introduce host_freeaddrinfo() Stefan Hajnoczi
2017-09-13 10:26 ` [PATCH nfs-utils v3 10/14] exportfs: add AF_VSOCK address parsing and printing Stefan Hajnoczi
2017-09-13 10:26 ` [PATCH nfs-utils v3 11/14] exportfs: add AF_VSOCK support to set_addrlist() Stefan Hajnoczi
2017-09-13 10:26 ` [PATCH nfs-utils v3 12/14] exportfs: add support for "vsock:" exports(5) syntax Stefan Hajnoczi
2017-09-13 10:26 ` [PATCH nfs-utils v3 13/14] nfsd: add --vsock (-v) option to nfsd Stefan Hajnoczi
2017-09-13 10:26 ` [PATCH nfs-utils v3 14/14] tests: add "vsock:" exports(5) test case Stefan Hajnoczi
2017-09-13 16:21 ` [PATCH nfs-utils v3 00/14] add NFS over AF_VSOCK support Christoph Hellwig
2017-09-13 18:18   ` [nfsv4] " David Noveck
2017-09-13 18:21     ` Chuck Lever
2017-09-15 11:52       ` Stefan Hajnoczi
2017-09-13 22:39 ` NeilBrown
2017-09-14 15:39 ` Steve Dickson
2017-09-14 15:55   ` Steve Dickson
2017-09-14 17:37     ` J . Bruce Fields
2017-09-15 11:07       ` Jeff Layton
2017-09-15 15:17         ` J . Bruce Fields
2017-09-15 23:29           ` NeilBrown
2017-09-16 14:55             ` J . Bruce Fields
2017-09-15 13:12       ` Stefan Hajnoczi
2017-09-15 13:31         ` J . Bruce Fields
2017-09-15 13:59           ` Chuck Lever
2017-09-15 16:42             ` J. Bruce Fields
2017-09-16 15:55               ` Chuck Lever
2017-09-18 18:09                 ` Stefan Hajnoczi
2017-09-19  9:31                   ` Daniel P. Berrange
2017-09-19 14:35                     ` Chuck Lever
2017-09-19 15:10                       ` Daniel P. Berrange [this message]
2017-09-19 15:48                         ` Chuck Lever
2017-09-19 16:44                           ` Daniel P. Berrange
2017-09-19 17:24                             ` J. Bruce Fields
2017-09-21 17:00                               ` Stefan Hajnoczi
2017-09-22  9:55                                 ` Steven Whitehouse
2017-09-22 11:32                                   ` Jeff Layton
2017-09-22 12:08                                     ` Matt Benjamin
2017-09-22 12:26                                       ` Jeff Layton
2017-09-22 15:28                                         ` Stefan Hajnoczi
2017-09-22 16:23                                           ` Daniel P. Berrange
2017-09-22 18:31                                             ` Chuck Lever
2017-09-25  8:14                                               ` Daniel P. Berrange
2017-09-25 10:31                                                 ` Chuck Lever
2017-09-22 11:43                                   ` Chuck Lever
2017-09-22 11:55                                     ` Daniel P. Berrange
2017-09-22 12:00                                       ` Chuck Lever
2017-09-22 12:10                                         ` Daniel P. Berrange
2017-09-22 19:14                                       ` J. Bruce Fields
2017-09-25  8:30                                         ` Daniel P. Berrange
2017-09-26  2:08                                       ` NeilBrown
2017-09-26  3:40                                         ` J. Bruce Fields
2017-09-26 10:56                                           ` Stefan Hajnoczi
2017-09-26 11:07                                             ` Daniel P. Berrange
2017-09-26 18:32                                             ` J. Bruce Fields
2017-09-27  0:45                                             ` NeilBrown
2017-09-27 13:05                                               ` Stefan Hajnoczi
2017-09-27 22:21                                                 ` NeilBrown
2017-09-28 10:44                                                   ` Stefan Hajnoczi
2017-09-27 13:35                                               ` J. Bruce Fields
2017-09-27 22:25                                                 ` NeilBrown
2017-09-26 13:39                                           ` J. Bruce Fields
2017-09-26 13:42                                             ` J. Bruce Fields
2017-09-27 12:22                                               ` Stefan Hajnoczi
2017-09-27 13:46                                                 ` J. Bruce Fields
2017-09-28 10:34                                                   ` Stefan Hajnoczi
2017-09-19 17:37                             ` Stefan Hajnoczi
2017-09-19 19:56                             ` Chuck Lever
2017-09-19 20:42                               ` J. Bruce Fields
2017-09-19 21:09                                 ` Chuck Lever
2017-09-20 13:16                                   ` J. Bruce Fields
2017-09-20 14:40                                     ` Chuck Lever
2017-09-20 14:45                                       ` J. Bruce Fields
2017-09-20 14:59                                         ` Chuck Lever
2017-09-20 15:25                                           ` Frank Filz
2017-09-20 18:17                                             ` Trond Myklebust
2017-09-20 18:34                                               ` bfields
2017-09-20 18:38                                                 ` Trond Myklebust
2017-09-21 16:20                                                 ` Stefan Hajnoczi
2017-09-20 14:58                                     ` Daniel P. Berrange
2017-09-20 16:39                                       ` J. Bruce Fields
