* nfsv41 over AF_VSOCK (nfs-ganesha)
From: Matt Benjamin @ 2015-10-16 21:08 UTC (permalink / raw)
  To: Ceph Development; +Cc: Stefan Hajnoczi, Sage Weil, J. Bruce Fields

Hi devs (CC Bruce--here is a use case for vmci sockets transport)

One of Sage's possible plans for Manila integration would use NFS over the new Linux vmci sockets transport integration in qemu (below) to access CephFS via an nfs-ganesha server running in the host vm.

This now experimentally works.

some notes on running nfs-ganesha over AF_VSOCK:

1. need Stefan Hajnoczi's patches for
* linux kernel (and build w/vhost-vsock support)
* qemu (and build w/vhost-vsock support)
* nfs-utils (in vm guest)

all linked from https://github.com/stefanha?tab=repositories

2. host and vm guest kernels must include vhost-vsock
* host kernel should load vhost-vsock.ko

3. start a qemu(-kvm) guest (w/patched kernel) with a vhost-vsock-pci device, e.g.:

/opt/qemu-vsock/bin/qemu-system-x86_64 \
    -m 2048 -usb -name vsock1 \
    --enable-kvm \
    -drive file=/opt/images/vsock.qcow,if=virtio,index=0,format=qcow2 \
    -drive file=/opt/isos/f22.iso,media=cdrom \
    -net nic,model=virtio,macaddr=02:36:3e:41:1b:78 \
    -net bridge,br=br0 \
    -parallel none \
    -serial mon:stdio \
    -device vhost-vsock-pci,id=vhost-vsock-pci0,addr=4.0,guest-cid=4 \
    -boot c

4. nfs-ganesha (in host)
* need nfs-ganesha and its ntirpc rpc provider with vsock support
https://github.com/linuxbox2/nfs-ganesha (vsock branch)
https://github.com/linuxbox2/ntirpc (vsock branch)

* configure ganesha w/vsock support
cmake -DCMAKE_INSTALL_PREFIX=/cache/nfs-vsock -DUSE_FSAL_VFS=ON -DUSE_VSOCK=ON -DCMAKE_C_FLAGS="-O0 -g3 -gdwarf-4" ../src

in ganesha.conf, add "nfsvsock" to Protocols list in EXPORT block
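For reference, a sketch of what such an EXPORT block might look like -- the export id, paths, and FSAL here are made-up examples; "nfsvsock" in the Protocols list is the only vsock-specific part:

```
EXPORT
{
    Export_Id = 77;                  # arbitrary example id
    Path = /export/test;             # hypothetical backing path
    Pseudo = /test;                  # hypothetical pseudo-fs path
    Access_Type = RW;
    Protocols = 4, nfsvsock;         # NFSv4 plus the AF_VSOCK transport
    FSAL { Name = VFS; }
}
```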

5. mount in guest w/nfs41:
(e.g., in fstab)
2:// /vsock41 nfs noauto,soft,nfsvers=4.1,sec=sys,proto=vsock,clientaddr=4,rsize=1048576,wsize=1048576 0 0
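A note on the addressing in that fstab line: AF_VSOCK endpoints are (CID, port) pairs rather than (IP, port). The "2://" is the host's well-known CID 2, and clientaddr=4 matches the guest-cid=4 passed to qemu above. A minimal Python sketch of the address scheme (port 2049 is just the usual NFS port; that ganesha's vsock listener binds it is an assumption):

```python
import socket

# Well-known CIDs from <linux/vm_sockets.h>; Python exposes them on Linux.
VMADDR_CID_HOST = socket.VMADDR_CID_HOST   # 2 -- the "2://" in the fstab line
GUEST_CID = 4                              # matches guest-cid=4 in the qemu command

def vsock_connect(cid, port):
    """Open an AF_VSOCK stream to (cid, port); needs a vsock-enabled kernel."""
    s = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
    s.connect((cid, port))
    return s

# From the guest, the NFS client is effectively doing vsock_connect(2, 2049).
```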

If you try this, send feedback.

Thanks!

Matt

-- 
Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-707-0660
fax.  734-769-8938
cel.  734-216-5309



* Re: nfsv41 over AF_VSOCK (nfs-ganesha)
From: Stefan Hajnoczi @ 2015-10-19  6:52 UTC (permalink / raw)
  To: Matt Benjamin; +Cc: Ceph Development, Sage Weil, J. Bruce Fields

On Fri, Oct 16, 2015 at 05:08:17PM -0400, Matt Benjamin wrote:
> One of Sage's possible plans for Manilla integration would use nfs over the new Linux  vmci sockets transport integration in qemu (below) to access Cephfs via an nfs-ganesha server running in the host vm.

Excellent job!  Nice to see you were able to add AF_VSOCK support to
nfs-ganesha so quickly.

I'm currently working on kernel nfsd support and will send the patches
to linux-nfs and CC you.

Stefan


* Re: nfsv41 over AF_VSOCK (nfs-ganesha)
From: J. Bruce Fields @ 2015-10-19 15:13 UTC (permalink / raw)
  To: Matt Benjamin; +Cc: Ceph Development, Stefan Hajnoczi, Sage Weil

On Fri, Oct 16, 2015 at 05:08:17PM -0400, Matt Benjamin wrote:
> Hi devs (CC Bruce--here is a use case for vmci sockets transport)
> 
> One of Sage's possible plans for Manilla integration would use nfs over the new Linux  vmci sockets transport integration in qemu (below) to access Cephfs via an nfs-ganesha server running in the host vm.

What does "the host vm" mean, and why is this a particularly useful
configuration?

--b.



* Re: nfsv41 over AF_VSOCK (nfs-ganesha)
From: Matt Benjamin @ 2015-10-19 15:49 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: Ceph Development, Stefan Hajnoczi, Sage Weil

Hi Bruce,

-- 
Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-707-0660
fax.  734-769-8938
cel.  734-216-5309

----- Original Message -----
> From: "J. Bruce Fields" <bfields@redhat.com>
> To: "Matt Benjamin" <mbenjamin@redhat.com>
> Cc: "Ceph Development" <ceph-devel@vger.kernel.org>, "Stefan Hajnoczi" <stefanha@redhat.com>, "Sage Weil"
> <sweil@redhat.com>
> Sent: Monday, October 19, 2015 11:13:52 AM
> Subject: Re: nfsv41 over AF_VSOCK (nfs-ganesha)
> 
> On Fri, Oct 16, 2015 at 05:08:17PM -0400, Matt Benjamin wrote:
> > Hi devs (CC Bruce--here is a use case for vmci sockets transport)
> > 
> > One of Sage's possible plans for Manilla integration would use nfs over the
> > new Linux  vmci sockets transport integration in qemu (below) to access
> > Cephfs via an nfs-ganesha server running in the host vm.
> 
> What does "the host vm" mean, and why is this a particularly useful
> configuration?

Sorry, I should say, "the vm host."

I think the claimed utility here is (at least) three-fold:

1. simplified configuration on host and guests
2. some claim to improved security through isolation
3. some expectation of improved latency/performance wrt TCP

Stefan sent a link to a set of slides with his original patches.  Did you get a chance to read through those?

[1] http://events.linuxfoundation.org/sites/events/files/slides/stefanha-kvm-forum-2015.pdf

Regards,

Matt



* Re: nfsv41 over AF_VSOCK (nfs-ganesha)
From: J. Bruce Fields @ 2015-10-19 15:58 UTC (permalink / raw)
  To: Matt Benjamin; +Cc: Ceph Development, Stefan Hajnoczi, Sage Weil

On Mon, Oct 19, 2015 at 11:49:15AM -0400, Matt Benjamin wrote:
> ----- Original Message -----
> > From: "J. Bruce Fields" <bfields@redhat.com>
...
> > 
> > On Fri, Oct 16, 2015 at 05:08:17PM -0400, Matt Benjamin wrote:
> > > Hi devs (CC Bruce--here is a use case for vmci sockets transport)
> > > 
> > > One of Sage's possible plans for Manilla integration would use nfs over the
> > > new Linux  vmci sockets transport integration in qemu (below) to access
> > > Cephfs via an nfs-ganesha server running in the host vm.
> > 
> > What does "the host vm" mean, and why is this a particularly useful
> > configuration?
> 
> Sorry, I should say, "the vm host."

Got it, thanks!

> I think the claimed utility here is (at least) three-fold:
> 
> 1. simplified configuration on host and guests
> 2. some claim to improved security through isolation

So why is it especially interesting to put Ceph inside the VM and
Ganesha outside?

> 3. some expectation of improved latency/performance wrt TCP
> 
> Stefan sent a link to a set of slides with his original patches.  Did you get a chance to read through those?
> 
> [1] http://events.linuxfoundation.org/sites/events/files/slides/stefanha-kvm-forum-2015.pdf

Yep, thanks.--b.



* Re: nfsv41 over AF_VSOCK (nfs-ganesha)
From: Matt Benjamin @ 2015-10-19 16:03 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: Ceph Development, Stefan Hajnoczi, Sage Weil

Hi Bruce,

-- 
Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-707-0660
fax.  734-769-8938
cel.  734-216-5309

----- Original Message -----
> From: "J. Bruce Fields" <bfields@redhat.com>
> To: "Matt Benjamin" <mbenjamin@redhat.com>
> Cc: "Ceph Development" <ceph-devel@vger.kernel.org>, "Stefan Hajnoczi" <stefanha@redhat.com>, "Sage Weil"
> <sweil@redhat.com>
> Sent: Monday, October 19, 2015 11:58:45 AM
> Subject: Re: nfsv41 over AF_VSOCK (nfs-ganesha)
> 
> On Mon, Oct 19, 2015 at 11:49:15AM -0400, Matt Benjamin wrote:
> > ----- Original Message -----
> > > From: "J. Bruce Fields" <bfields@redhat.com>
> ...
> > > 
> > > On Fri, Oct 16, 2015 at 05:08:17PM -0400, Matt Benjamin wrote:
> > > > Hi devs (CC Bruce--here is a use case for vmci sockets transport)
> > > > 
> > > > One of Sage's possible plans for Manilla integration would use nfs over
> > > > the
> > > > new Linux  vmci sockets transport integration in qemu (below) to access
> > > > Cephfs via an nfs-ganesha server running in the host vm.
> > > 
> > > What does "the host vm" mean, and why is this a particularly useful
> > > configuration?
> > 
> > Sorry, I should say, "the vm host."
> 
> Got it, thanks!
> 
> > I think the claimed utility here is (at least) three-fold:
> > 
> > 1. simplified configuration on host and guests
> > 2. some claim to improved security through isolation
> 
> So why is it especially interesting to put Ceph inside the VM and
> Ganesha outside?

Oh, sorry.  Here Ceph (or Gluster, or whatever underlying FS provider) is conceptually outside the vm complex altogether; Ganesha is re-exporting on the vm host, and guests access the namespace using NFSv4.1.
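Roughly, the topology described is:

```
storage cluster (Ceph/Gluster/...)
        ^
        |  native protocol (TCP)
        v
vm host: nfs-ganesha re-export
        ^
        |  NFSv4.1 over AF_VSOCK
        v
guest VMs: NFS client mounts
```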

Regards,

Matt



* Re: nfsv41 over AF_VSOCK (nfs-ganesha)
From: John Spray @ 2015-10-19 16:13 UTC (permalink / raw)
  To: Ceph Development

On Fri, Oct 16, 2015 at 10:08 PM, Matt Benjamin <mbenjamin@redhat.com> wrote:
> Hi devs (CC Bruce--here is a use case for vmci sockets transport)
>
> One of Sage's possible plans for Manilla integration would use nfs over the new Linux  vmci sockets transport integration in qemu (below) to access Cephfs via an nfs-ganesha server running in the host vm.
>
> This now experimentally works.

Very cool!  Thank you for the detailed instructions, I look forward to
trying this out soon.

John



* Re: nfsv41 over AF_VSOCK (nfs-ganesha)
From: John Spray @ 2015-10-23 13:27 UTC (permalink / raw)
  To: Ceph Development; +Cc: Stefan Hajnoczi

On Mon, Oct 19, 2015 at 5:13 PM, John Spray <jspray@redhat.com> wrote:
>> If you try this, send feedback.
>>

OK, got this up and running.

I've shared the kernel/qemu/nfsutils packages I built here:
https://copr.fedoraproject.org/coprs/jspray/vsock-nfs/builds/

(at time of writing the kernel one is still building, and I'm running
with ganesha out of a source tree)

Observations:
 * Running VM as qemu user gives EPERM opening vsock device, even
after changing permissions on the device node (for which I guess we'll
want udev rules at some stage) -- is there a particular capability
that we need to grant the qemu user?  Was looking into this to make it
convenient to run inside libvirt.
 * NFS writes from the guest are lagging for like a minute before
completing, my hunch is that this is something in the NFS client
recovery stuff (in ganesha) that's not coping with vsock, the
operations seem to complete at the point where the server declares
itself "NOT IN GRACE".
 * For those (like myself) unaccustomed to running ganesha, do not run
it straight out of a source tree and expect everything to work, by
default even VFS exports won't work that way (mounts work but clients
see an empty tree) because it can't find the built FSAL .so.  You can
write a config file that works around this, but it's easier just to run
make install.
 * (Anecdotal, seen while messing with other stuff) client mount seems
to hang if I kill ganesha and then start it again, not sure if this is
a ganesha issue or a general vsock issue.

Cheers,
John


* Re: nfsv41 over AF_VSOCK (nfs-ganesha)
From: Daniel Gryniewicz @ 2015-10-23 16:34 UTC (permalink / raw)
  To: John Spray; +Cc: Ceph Development, Stefan Hajnoczi

On Fri, Oct 23, 2015 at 9:27 AM, John Spray <jspray@redhat.com> wrote:
>  * NFS writes from the guest are lagging for like a minute before
> completing, my hunch is that this is something in the NFS client
> recovery stuff (in ganesha) that's not coping with vsock, the
> operations seem to complete at the point where the server declares
> itself "NOT IN GRACE".


Ganesha always starts in Grace, and will not process new clients until
it exits Grace.  Existing clients should re-connect fine, and new
clients work fine after Grace is exited.

Dan


* Re: nfsv41 over AF_VSOCK (nfs-ganesha)
From: Matt Benjamin @ 2015-10-23 18:05 UTC (permalink / raw)
  To: Daniel Gryniewicz; +Cc: John Spray, Ceph Development, Stefan Hajnoczi

For hacking around, put "Graceless = true;" in the NFSV4 block.
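i.e., something like the following in ganesha.conf -- for test setups only, since skipping the grace period defeats client state recovery after a server restart:

```
NFSV4
{
    Graceless = true;
}
```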

Matt

-- 
Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-707-0660
fax.  734-769-8938
cel.  734-216-5309



* Re: nfsv41 over AF_VSOCK (nfs-ganesha)
From: Stefan Hajnoczi @ 2015-10-27 14:49 UTC (permalink / raw)
  To: John Spray; +Cc: Ceph Development, Stefan Hajnoczi

On Fri, Oct 23, 2015 at 02:27:22PM +0100, John Spray wrote:
> On Mon, Oct 19, 2015 at 5:13 PM, John Spray <jspray@redhat.com> wrote:
> >> If you try this, send feedback.
> >>
> 
> OK, got this up and running.
> 
> I've shared the kernel/qemu/nfsutils packages I built here:
> https://copr.fedoraproject.org/coprs/jspray/vsock-nfs/builds/
> 
> (at time of writing the kernel one is still building, and I'm running
> with ganesha out of a source tree)
> 
> Observations:
>  * Running VM as qemu user gives EPERM opening vsock device, even
> after changing permissions on the device node (for which I guess we'll
> want udev rules at some stage) -- is there a particular capability
> that we need to grant the qemu user?  Was looking into this to make it
> convenient to run inside libvirt.

libvirtd runs as root and opens /dev/vhost-*.  It passes file
descriptors to the unprivileged QEMU process.  I think this is how
things work in production (with SELinux enabled too).

On a development machine it is easier to either run QEMU as root or set
uid:gid on /dev/vhost-vsock.

So we don't need to do anything, although libvirt code will need to be
written to support vhost-vsock.  It should be very similar to the
existing vhost-net code in libvirt.
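The fd passing described above is plain SCM_RIGHTS ancillary data over a Unix socket; a self-contained sketch of the mechanism (a temp file stands in for /dev/vhost-vsock, and the socketpair stands in for the libvirtd-to-QEMU channel):

```python
import os
import socket
import tempfile

def demo_fd_passing():
    # The privileged side (libvirtd's role) opens the device node; a temp
    # file stands in for /dev/vhost-vsock here.
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(b"vhost-vsock")
        path = tmp.name
    fd = os.open(path, os.O_RDONLY)

    # It then ships the open descriptor over a Unix socketpair to the
    # unprivileged side (QEMU's role), which never needs open() rights
    # on the path itself.
    parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
    socket.send_fds(parent, [b"take this"], [fd])          # Python 3.9+
    msg, fds, flags, addr = socket.recv_fds(child, 64, 1)

    data = os.read(fds[0], 64)   # usable despite no access to the path
    for f in (fd, fds[0]):
        os.close(f)
    parent.close()
    child.close()
    os.unlink(path)
    return msg, data

msg, data = demo_fd_passing()
```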

>  * NFS writes from the guest are lagging for like a minute before
> completing, my hunch is that this is something in the NFS client
> recovery stuff (in ganesha) that's not coping with vsock, the
> operations seem to complete at the point where the server declares
> itself "NOT IN GRACE".
>  * For those (like myself) unaccustomed to running ganesha, do not run
> it straight out of a source tree and expect everything to work, by
> default even VFS exports won't work that way (mounts work but clients
> see an empty tree) because it can't find the built FSAL .so.  You can
> write a config file that works, but it's easier just to make install
> it.
>  * (Anecdotal, seen while messing with other stuff) client mount seems
> to hang if I kill ganesha and then start it again, not sure if this is
> a ganesha issue or a general vsock issue.

If you experience hangs when the other side closes the connection you
may need:
https://github.com/stefanha/linux/commit/ae3c6c9b1534c1df5213a72f38e377ecd0852e14

Stefan


