From: Deepak Shetty
Subject: Re: [openstack-dev] [Manila] Ceph native driver for manila
Date: Wed, 4 Mar 2015 00:01:11 +0530
To: "OpenStack Development Mailing List (not for usage questions)"
Cc: ceph-devel@vger.kernel.org

On Tue, Mar 3, 2015 at 12:51 AM, Luis Pabon wrote:
> What is the status on virtfs? I am not sure if it is being maintained.
> Does anyone know?

The last I knew, it is not maintained. Also, for what it's worth, p9
won't work for Windows guests (unless there is a p9 driver for
Windows?), if that is part of your use case/scenario.
Last but not least, p9/virtfs would expose a p9 mount, not a ceph
mount, to the VMs, which means that cephfs-specific mount options may
not work.

> - Luis
>
> ----- Original Message -----
> From: "Danny Al-Gaaf"
> To: "OpenStack Development Mailing List (not for usage questions)",
>     ceph-devel@vger.kernel.org
> Sent: Sunday, March 1, 2015 9:07:36 AM
> Subject: Re: [openstack-dev] [Manila] Ceph native driver for manila
>
> On 27.02.2015 at 01:04, Sage Weil wrote:
> > [sorry for ceph-devel double-post, forgot to include
> > openstack-dev]
> >
> > Hi everyone,
> >
> > The online Ceph Developer Summit is next week[1] and among other
> > things we'll be talking about how to support CephFS in Manila. At
> > a high level, there are basically two paths:
>
> We discussed the CephFS Manila topic also at the last Manila Midcycle
> Meetup (Kilo) [1][2]
>
> > 2) Native CephFS driver
> >
> > As I currently understand it,
> >
> > - The driver will set up CephFS auth credentials so that the guest
> >   VM can mount CephFS directly
> > - The guest VM will need access to the Ceph network. That makes
> >   this mainly interesting for private clouds and trusted
> >   environments.
> > - The guest is responsible for running 'mount -t ceph ...'.
> > - I'm not sure how we provide the auth credential to the
> >   user/guest...
>
> The auth credentials currently need to be handled by an application
> orchestration solution, I guess. I see no solution on the Manila
> layer atm.

There were some discussions in the past in the Manila community on
guest auto-mount, but I guess nothing was conclusive there.

Application orchestration can be achieved by having tenant-specific VM
images with the creds pre-loaded, or having the creds injected via
cloud-init should work too?

> If Ceph would provide OpenStack Keystone authentication for
> rados/cephfs instead of CephX, it could be handled via app orch
> easily.
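For illustration, a minimal sketch of what the guest side could look like under this model. The monitor address, share path, client name, and secret file location are assumptions for the example, not anything the driver defines today; the secret file would be baked into the image or written by cloud-init, as discussed above:

```shell
# Assumed: cloud-init (or the image) has already placed the tenant's
# cephx key at /etc/ceph/tenant-foo.secret inside the guest.
# The guest then mounts its CephFS share directly over the Ceph network:
mount -t ceph mon1.example.com:6789:/tenants/foo /mnt/share \
    -o name=tenant-foo,secretfile=/etc/ceph/tenant-foo.secret
```

This only works if the guest can actually reach the Ceph monitors and OSDs, which is exactly the network-exposure concern raised later in the thread.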
> > This would perform better than an NFS gateway, but there are
> > several gaps on the security side that make this unusable currently
> > in an untrusted environment:
> >
> > - The CephFS MDS auth credentials currently are _very_ basic. As
> >   in, binary: can this host mount or not. We have the auth cap
> >   string parsing in place to restrict to a subdirectory (e.g., this
> >   tenant can only mount /tenants/foo), but the MDS does not enforce
> >   this yet. [medium project to add that]
> >
> > - The same credential could be used directly via librados to access
> >   the data pool directly, regardless of what the MDS has to say
> >   about the namespace. There are two ways around this:
> >
> >   1- Give each tenant a separate rados pool. This works today.
> >      You'd set a directory policy that puts all files created in
> >      that subdirectory in that tenant's pool, then only let the
> >      client access those rados pools.
> >
> >   1a- We currently lack an MDS auth capability that restricts which
> >       clients get to change that policy. [small project]
> >
> >   2- Extend the MDS file layouts to use the rados namespaces so
> >      that users can be separated within the same rados pool.
> >      [medium project]
> >
> >   3- Something fancy with MDS-generated capabilities specifying
> >      which rados objects clients get to read. This probably falls
> >      in the category of research, although there are some papers
> >      we've seen that look promising. [big project]
> >
> > Anyway, this leads to a few questions:
> >
> > - Who is interested in using Manila to attach CephFS to guest VMs?

I didn't get this question... The goal of Manila is to provision
shared filesystems to VMs, so everyone interested in using CephFS
would be interested in attaching (I guess you meant mounting?) CephFS
to VMs, no?

> > - What use cases are you interested in? - How important is
> >   security in your environment?

The NFS-Ganesha based service VM approach (for network isolation) in
Manila is still in the works, afaik.
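Sage's option 1 (a separate rados pool per tenant) could be sketched roughly as below. The pool, client, and path names are illustrative assumptions, and the exact cap syntax varies by Ceph release; note the thread's own caveat that the path restriction is parsed but not yet enforced by the MDS at this point:

```shell
# Pin everything created under the tenant's subdirectory to its own
# data pool via the directory layout vxattr (run on a mounted CephFS):
setfattr -n ceph.dir.layout.pool -v tenant-foo-data /mnt/cephfs/tenants/foo

# Hand the tenant a key restricted to that path and that pool:
ceph auth get-or-create client.tenant-foo \
    mds 'allow rw path=/tenants/foo' \
    mon 'allow r' \
    osd 'allow rw pool=tenant-foo-data'
```

Since the OSD cap only grants access to the tenant's pool, reusing the credential via raw librados would not reach other tenants' data, which is the point of option 1.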
>
> As you know, we (Deutsche Telekom) may be interested in providing
> shared filesystems via CephFS to VMs instead of e.g. via NFS. We can
> provide/discuss use cases at CDS.
>
> For us security is very critical, as is performance. The first
> solution, via ganesha, is not what we prefer (using CephFS via p9 and
> NFS would not perform that well, I guess). The second solution,
> exposing CephFS directly to the VM, would be a bad solution from the
> security point of view, since we can't expose the Ceph public
> network directly to the VMs, to prevent all the security issues we
> discussed already.

Is there any place the security issues are captured for the case where
VMs access CephFS directly? I was curious to understand. IIUC Neutron
provides private and public networks, and for VMs to access the
external CephFS network, the tenant private network needs to be
bridged/routed to the external provider network, and there are ways
Neutron achieves it. Are you saying that this approach of Neutron is
insecure?

thanx,
deepak

> We discussed during the Midcycle a third option:
>
> Mount CephFS directly on the host system and provide the filesystem
> to the VMs via p9/virtfs. This needs nova integration (I will work on
> a POC patch for this) to set up the libvirt config correctly for
> virtfs. This solves the security issue and the auth key distribution
> for the VMs, but it may introduce performance issues due to virtfs
> usage. We have to check what the specific performance impact will be.
> Currently this is the preferred solution for our use cases.
>
> What's still missing in this solution is user/tenant/subtree
> separation as in the 2nd option. But this is needed anyway for CephFS
> in general.
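The third option (host-side CephFS mount passed through via p9/virtfs) boils down to libvirt's 9p filesystem passthrough. A hedged sketch of the kind of domain XML the proposed nova patch would presumably generate follows; the source directory and target tag are illustrative assumptions:

```xml
<!-- Host has CephFS mounted at /mnt/cephfs; the share subdirectory is
     exported to the guest as a virtio-9p filesystem. -->
<filesystem type='mount' accessmode='passthrough'>
  <driver type='path'/>
  <source dir='/mnt/cephfs/tenants/foo'/>
  <target dir='manila-share-foo'/>
</filesystem>
```

The guest would then mount it with something like `mount -t 9p -o trans=virtio manila-share-foo /mnt/share`, so no cephx key or Ceph network access is needed inside the VM, which is exactly the security win described above.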
>
> Danny
>
> [1] https://etherpad.openstack.org/p/manila-kilo-midcycle-meetup
> [2] https://etherpad.openstack.org/p/manila-meetup-winter-2015
>
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel"
> in the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev