From: Deepak Shetty <dpkshetty-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
To: "OpenStack Development Mailing List (not for usage questions)"
	<openstack-dev-ZwoEplunGu0gQVYkTtqAhEB+6BGkLq7r@public.gmane.org>
Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
Subject: Re: [Manila] Ceph native driver for manila
Date: Wed, 4 Mar 2015 00:01:11 +0530	[thread overview]
Message-ID: <CAOXiiM=n6K7LjkB0pkDBj11ND7=UhAgioBZ2wgJOY90u79ZNkA@mail.gmail.com> (raw)
In-Reply-To: <835936292.21191270.1425324075471.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>


On Tue, Mar 3, 2015 at 12:51 AM, Luis Pabon <lpabon-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:

> What is the status on virtfs?  I am not sure if it is being maintained.
> Does anyone know?
>

The last I knew, it's not maintained.
Also, for what it's worth, p9 won't work for Windows guests (unless there is
a p9 driver for Windows?), if that is part of your use case/scenario.

Last but not least, p9/virtfs would expose a p9 mount, not a Ceph mount, to
the VMs, which means that cephfs-specific mount options may not work.
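
To make that concrete, a rough sketch of what the guest would run in each
case (the mon address, share path and credential file are made up for
illustration):

    # Native CephFS mount: cephfs-specific options such as a cephx
    # secretfile or rsize apply here.
    mount -t ceph mon1.example.com:6789:/tenants/tenant1 /mnt/share \
        -o name=tenant1,secretfile=/etc/ceph/tenant1.secret,rsize=4194304

    # p9/virtfs mount of the same tree: only 9p options are understood,
    # the cephfs-specific ones have no equivalent.
    mount -t 9p -o trans=virtio,version=9p2000.L share0 /mnt/share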



>
> - Luis
>
> ----- Original Message -----
> From: "Danny Al-Gaaf" <danny.al-gaaf-2YacvwyR+KOzQB+pC5nmwQ@public.gmane.org>
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev-ZwoEplunGu0gQVYkTtqAhEB+6BGkLq7r@public.gmane.org>, ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> Sent: Sunday, March 1, 2015 9:07:36 AM
> Subject: Re: [openstack-dev] [Manila] Ceph native driver for manila
>
> Am 27.02.2015 um 01:04 schrieb Sage Weil:
> > [sorry for ceph-devel double-post, forgot to include
> > openstack-dev]
> >
> > Hi everyone,
> >
> > The online Ceph Developer Summit is next week[1] and among other
> > things we'll be talking about how to support CephFS in Manila.  At
> > a high level, there are basically two paths:
>
> We also discussed the CephFS Manila topic at the last Manila Midcycle
> Meetup (Kilo) [1][2]
>
> > 2) Native CephFS driver
> >
> > As I currently understand it,
> >
> > - The driver will set up CephFS auth credentials so that the guest
> >   VM can mount CephFS directly.
> > - The guest VM will need access to the Ceph network.  That makes
> >   this mainly interesting for private clouds and trusted environments.
> > - The guest is responsible for running 'mount -t ceph ...'.
> > - I'm not sure how we provide the auth credential to the user/guest...
>
> Currently the auth credentials need to be handled by an application
> orchestration solution, I guess. I see no solution at the Manila layer
> atm.
>

There was some discussion in the past in the Manila community on guest
auto-mount, but I guess nothing was conclusive there.

Application orchestration can be achieved by having tenant-specific VM
images with the creds pre-loaded; having the creds injected via cloud-init
should work too?
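
For instance, a minimal cloud-init sketch of the second idea (the key is a
placeholder and all names are made up; a real deployment would pull the
secret from the tenant's own secret store):

    #!/bin/sh
    # Hypothetical user-data script: drop the tenant's cephx secret into
    # place at first boot and mount the share.
    cat > /etc/ceph/tenant1.secret <<'EOF'
    <base64-cephx-key-here>
    EOF
    chmod 600 /etc/ceph/tenant1.secret
    mkdir -p /mnt/share
    mount -t ceph mon1.example.com:6789:/tenants/tenant1 /mnt/share \
        -o name=tenant1,secretfile=/etc/ceph/tenant1.secret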


>
> If Ceph provided OpenStack Keystone authentication for rados/cephfs
> instead of CephX, it could be handled easily via app orchestration.
>
> > This would perform better than an NFS gateway, but there are
> > several gaps on the security side that make this unusable currently
> > in an untrusted environment:
> >
> > - The CephFS MDS auth credentials currently are _very_ basic.  As
> > in, binary: can this host mount or it cannot.  We have the auth cap
> > string parsing in place to restrict to a subdirectory (e.g., this
> > tenant can only mount /tenants/foo), but the MDS does not enforce
> > this yet.  [medium project to add that]
> >
> > - The same credential could be used directly via librados to access
> > the data pool directly, regardless of what the MDS has to say about
> > the namespace.  There are two ways around this:
> >
> > 1- Give each tenant a separate rados pool.  This works today.
> > You'd set a directory policy that puts all files created in that
> > subdirectory in that tenant's pool, then only let the client access
> > those rados pools.
> >
> > 1a- We currently lack an MDS auth capability that restricts which
> > clients get to change that policy.  [small project]
> >
> > 2- Extend the MDS file layouts to use the rados namespaces so that
> >  users can be separated within the same rados pool.  [Medium
> > project]
> >
> > 3- Something fancy with MDS-generated capabilities specifying which
> >  rados objects clients get to read.  This probably falls in the
> > category of research, although there are some papers we've seen
> > that look promising. [big project]
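
[To make options 1/1a above concrete: a per-tenant cephx credential along
the lines of the sketch below is presumably what a driver would create.
The names are made up, and as noted above the MDS does not yet enforce
the path restriction.

    # Sketch: mount access limited to the tenant's subtree, data access
    # limited to the tenant's dedicated rados pool (option 1).
    ceph auth get-or-create client.tenant1 \
        mon 'allow r' \
        mds 'allow rw path=/tenants/tenant1' \
        osd 'allow rw pool=tenant1-data'
]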
> >
> > Anyway, this leads to a few questions:
> >
> > - Who is interested in using Manila to attach CephFS to guest VMs?
>

I didn't get this question... The goal of Manila is to provision shared
filesystems to VMs, so everyone interested in using CephFS would be
interested in attaching (I guess you meant mounting?) CephFS to VMs, no?



> > - What use cases are you interested in?
> > - How important is security in your environment?
>

The NFS-Ganesha-based service VM approach (for network isolation) in Manila
is still in the works, afaik.


>
> As you know, we (Deutsche Telekom) may be interested in providing shared
> filesystems via CephFS to VMs instead of e.g. via NFS. We can
> provide/discuss use cases at CDS.
>
> For us security is very critical, as is performance. The first
> solution via Ganesha is not what we prefer (using CephFS via p9 and
> NFS would not perform that well, I guess). The second solution, exposing
> CephFS directly to the VMs, would be a bad solution from the security
> point of view, since we can't expose the Ceph public network directly
> to the VMs without running into all the security issues we already
> discussed.
>

Is there any place where the security issues are captured for the case
where VMs access CephFS directly? I was curious to understand. IIUC,
Neutron provides private and public networks, and for a VM to reach an
external CephFS network the tenant's private network needs to be
bridged/routed to the external provider network; there are ways Neutron
achieves this.
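
For example, with the Kilo-era neutron CLI the Ceph public network would be
modelled roughly as a provider network (the network type, physical network
and addresses below are all made-up placeholders):

    # Expose the Ceph public network as a shared provider network that
    # tenant routers/ports can then reach.
    neutron net-create ceph-public --shared \
        --provider:network_type vlan \
        --provider:physical_network physnet-ceph \
        --provider:segmentation_id 2001
    neutron subnet-create ceph-public 192.0.2.0/24 \
        --name ceph-public-subnet --disable-dhcp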

Are you saying that this Neutron approach is insecure?

thanx,
deepak


>
> We discussed during the Midcycle a third option:
>
> Mount CephFS directly on the host system and provide the filesystem to
> the VMs via p9/virtfs. This needs nova integration (I will work on a
> POC patch for this) to set up the libvirt config correctly for virtfs.
> This solves the security issue and the auth key distribution for the
> VMs, but it may introduce performance issues due to virtfs usage. We
> have to check what the specific performance impact will be. Currently
> this is the preferred solution for our use cases.
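
[For readers unfamiliar with virtfs, that option would presumably look
something like the sketch below; the libvirt element names are real, but
the paths, mount tag and mon address are made up:

    # On the compute host: mount CephFS once with the host's credential...
    mount -t ceph mon1.example.com:6789:/ /mnt/cephfs \
        -o name=manila,secretfile=/etc/ceph/manila.secret
    # ...then nova/libvirt would export a per-share subtree to the guest:
    #   <filesystem type='mount' accessmode='passthrough'>
    #     <source dir='/mnt/cephfs/tenants/tenant1'/>
    #     <target dir='share0'/>
    #   </filesystem>
    # and the guest mounts it over 9p:
    #   mount -t 9p -o trans=virtio,version=9p2000.L share0 /mnt/share
]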
>
> What's still missing in this solution is user/tenant/subtree
> separation as in the 2nd option. But this is needed anyway for CephFS
> in general.
>
> Danny
>
> [1] https://etherpad.openstack.org/p/manila-kilo-midcycle-meetup
> [2] https://etherpad.openstack.org/p/manila-meetup-winter-2015
>