* [Manila] Ceph native driver for manila
From: Sage Weil @ 2015-02-27  0:04 UTC
  To: ceph-devel, openstack-dev

[sorry for ceph-devel double-post, forgot to include openstack-dev]

Hi everyone,

The online Ceph Developer Summit is next week[1] and among other things 
we'll be talking about how to support CephFS in Manila.  At a high level, 
there are basically two paths:

1) Ganesha + the CephFS FSAL driver

 - This will just use the existing ganesha driver without modifications.  
Ganesha will need to be configured with the CephFS FSAL instead of 
GlusterFS or whatever else you might use.
 - All traffic will pass through the NFS VM, providing network isolation

No real work needed here aside from testing and QA.
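
For illustration, a minimal ganesha export block using the CephFS FSAL 
might look something like this (export id, paths, and options here are 
invented; check the ganesha docs for the real knobs):

  # /etc/ganesha/ganesha.conf (sketch)
  EXPORT
  {
      Export_Id = 100;
      Path = "/";                # CephFS subtree to export
      Pseudo = "/cephfs";        # NFSv4 pseudo-fs path
      Access_Type = RW;
      Squash = None;
      FSAL {
          Name = CEPH;           # use the CephFS FSAL
      }
  }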

2) Native CephFS driver

As I currently understand it,

 - The driver will set up CephFS auth credentials so that the guest VM can 
mount CephFS directly
 - The guest VM will need access to the Ceph network.  That makes this 
mainly interesting for private clouds and trusted environments.
 - The guest is responsible for running 'mount -t ceph ...'.
 - I'm not sure how we provide the auth credential to the user/guest...
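
As a rough sketch of what the driver might do (identity name, caps, and 
paths are invented, and per the caveats below the MDS path restriction 
is parsed but not yet enforced):

  # create a per-tenant cephx identity, restricted to a subtree
  ceph auth get-or-create client.tenant-foo \
      mon 'allow r' \
      mds 'allow rw path=/tenants/foo' \
      osd 'allow rw pool=tenant-foo-data'

  # the guest would then mount with that identity
  mount -t ceph mon.example.com:6789:/tenants/foo /mnt/share \
      -o name=tenant-foo,secretfile=/etc/ceph/tenant-foo.secret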

This would perform better than an NFS gateway, but there are several gaps 
on the security side that make this unusable currently in an untrusted 
environment:

 - The CephFS MDS auth credentials currently are _very_ basic.  As in, 
binary: either this host can mount, or it cannot.  We have the auth cap 
string parsing in place to restrict to a subdirectory (e.g., this tenant 
can only mount /tenants/foo), but the MDS does not enforce this yet.  
[medium project to add that]

 - The same credential could be used directly via librados to access the 
data pool directly, regardless of what the MDS has to say about the 
namespace.  There are a few ways around this:

   1- Give each tenant a separate rados pool.  This works today.  You'd 
set a directory policy that puts all files created in that subdirectory in 
that tenant's pool, then only let the client access those rados pools 
(see the sketch after this list).

     1a- We currently lack an MDS auth capability that restricts which 
clients get to change that policy.  [small project]

   2- Extend the MDS file layouts to use the rados namespaces so that 
users can be separated within the same rados pool.  [Medium project]

   3- Something fancy with MDS-generated capabilities specifying which 
rados objects clients get to read.  This probably falls in the category of 
research, although there are some papers we've seen that look promising. 
[big project]
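
As a back-of-the-envelope sketch of options 1 and 2 (pool, identity, and 
path names invented; commands per the 2015-era CLI; note that the 
namespace-aware file layouts of option 2 are the proposed extension, not 
something that works today):

  # option 1: dedicated pool per tenant, pinned via a directory layout
  ceph osd pool create tenant-foo-data 64
  ceph mds add_data_pool tenant-foo-data
  setfattr -n ceph.dir.layout.pool -v tenant-foo-data /mnt/cephfs/tenants/foo

  # option 2: shared pool, tenants separated by rados namespace in the cap
  ceph auth caps client.tenant-foo \
      mon 'allow r' \
      mds 'allow rw path=/tenants/foo' \
      osd 'allow rw pool=cephfs-data namespace=tenant-foo'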

Anyway, this leads to a few questions:

 - Who is interested in using Manila to attach CephFS to guest VMs?
 - What use cases are you interested in?
 - How important is security in your environment?

Thanks!
sage


[1] http://ceph.com/community/ceph-developer-summit-infernalis/




* Re: [Manila] Ceph native driver for manila
From: Danny Al-Gaaf @ 2015-03-01 14:07 UTC
  To: OpenStack Development Mailing List (not for usage questions),
	ceph-devel@vger.kernel.org

On 27.02.2015 at 01:04, Sage Weil wrote:
> [sorry for ceph-devel double-post, forgot to include
> openstack-dev]
> 
> Hi everyone,
> 
> The online Ceph Developer Summit is next week[1] and among other
> things we'll be talking about how to support CephFS in Manila.  At
> a high level, there are basically two paths:

We also discussed the CephFS Manila topic at the last Manila Midcycle
Meetup (Kilo) [1][2].

> 2) Native CephFS driver
> 
> As I currently understand it,
> 
> - The driver will set up CephFS auth credentials so that the guest
> VM can mount CephFS directly - The guest VM will need access to the
> Ceph network.  That makes this mainly interesting for private
> clouds and trusted environments. - The guest is responsible for
> running 'mount -t ceph ...'. - I'm not sure how we provide the auth
> credential to the user/guest...

The auth credentials would currently need to be handled by an application
orchestration solution, I guess. At the moment I see no solution for this
at the Manila layer.

If Ceph provided OpenStack Keystone authentication for rados/cephfs
instead of CephX, this could easily be handled via application
orchestration.

> This would perform better than an NFS gateway, but there are
> several gaps on the security side that make this unusable currently
> in an untrusted environment:
> 
> - The CephFS MDS auth credentials currently are _very_ basic.  As
> in, binary: can this host mount or it cannot.  We have the auth cap
> string parsing in place to restrict to a subdirectory (e.g., this
> tenant can only mount /tenants/foo), but the MDS does not enforce
> this yet.  [medium project to add that]
> 
> - The same credential could be used directly via librados to access
> the data pool directly, regardless of what the MDS has to say about
> the namespace.  There are two ways around this:
> 
> 1- Give each tenant a separate rados pool.  This works today.
> You'd set a directory policy that puts all files created in that
> subdirectory in that tenant's pool, then only let the client access
> those rados pools.
> 
> 1a- We currently lack an MDS auth capability that restricts which 
> clients get to change that policy.  [small project]
> 
> 2- Extend the MDS file layouts to use the rados namespaces so that
>  users can be separated within the same rados pool.  [Medium
> project]
> 
> 3- Something fancy with MDS-generated capabilities specifying which
>  rados objects clients get to read.  This probably falls in the
> category of research, although there are some papers we've seen
> that look promising. [big project]
> 
> Anyway, this leads to a few questions:
> 
> - Who is interested in using Manila to attach CephFS to guest VMs? 
> - What use cases are you interested? - How important is security in
> your environment?

As you know, we (Deutsche Telekom) are interested in providing shared
filesystems via CephFS to VMs instead of e.g. via NFS. We can
provide/discuss use cases at CDS.

For us security is very critical, and so is performance. The first
solution, via ganesha, is not what we prefer (to use CephFS via p9 and
NFS would not perform that well, I guess). The second solution, using
CephFS directly from the VM, would be a bad solution from the security
point of view, since we can't expose the Ceph public network directly
to the VMs without all the security issues we already discussed.

We discussed a third option during the Midcycle:

Mount CephFS directly on the host system and provide the filesystem to
the VMs via p9/virtfs. This needs nova integration (I will work on a
POC patch for this) to set up the libvirt config correctly for virtfs.
This solves the security issue and the auth key distribution for the
VMs, but it may introduce performance issues due to the virtfs usage.
We have to check what the specific performance impact will be.
Currently this is the preferred solution for our use cases (a sketch
of the libvirt plumbing follows below).

What's still missing in this solution is user/tenant/subtree
separation as in the 2nd option. But this is needed for CephFS in
general anyway.
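
For illustration, the libvirt config nova would need to generate might
look roughly like this (directory paths and the mount tag are invented):

  <filesystem type='mount' accessmode='passthrough'>
    <source dir='/mnt/cephfs/shares/tenant-foo'/>
    <target dir='share0'/>
  </filesystem>

and inside the guest the share would then be mounted via 9p:

  mount -t 9p -o trans=virtio,version=9p2000.L share0 /mnt/share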

Danny

[1] https://etherpad.openstack.org/p/manila-kilo-midcycle-meetup
[2] https://etherpad.openstack.org/p/manila-meetup-winter-2015




* Re: [openstack-dev] [Manila] Ceph native driver for manila
From: Luis Pabon @ 2015-03-02 19:21 UTC
  To: Danny Al-Gaaf
  Cc: OpenStack Development Mailing List (not for usage questions), ceph-devel

What is the status of virtfs?  I am not sure if it is being maintained.  Does anyone know?

- Luis

----- Original Message -----
From: "Danny Al-Gaaf" <danny.al-gaaf@bisect.de>
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>, ceph-devel@vger.kernel.org
Sent: Sunday, March 1, 2015 9:07:36 AM
Subject: Re: [openstack-dev] [Manila] Ceph native driver for manila

[...]

Mount CephFS directly on the host system and provide the filesystem to
the VMs via p9/virtfs. This needs nova integration (I will work on a
POC patch for this) to set up the libvirt config correctly for virtfs.

[...]

* Re: [Manila] Ceph native driver for manila
From: Deepak Shetty @ 2015-03-03 18:31 UTC
  To: OpenStack Development Mailing List (not for usage questions)
  Cc: ceph-devel@vger.kernel.org



On Tue, Mar 3, 2015 at 12:51 AM, Luis Pabon <lpabon@redhat.com> wrote:

> What is the status on virtfs?  I am not sure if it is being maintained.
> Does anyone know?
>

The last I knew, it's not maintained.
Also, for what it's worth, p9 won't work for Windows guests (unless
there is a p9 driver for Windows?), if that is part of your use
case/scenario.

Last but not least, p9/virtfs would expose a p9 mount, not a ceph mount,
to the VMs, which means that cephfs-specific mount options may not work.



>
> - Luis
>
> ----- Original Message -----
> From: "Danny Al-Gaaf" <danny.al-gaaf@bisect.de>
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>, ceph-devel@vger.kernel.org
> Sent: Sunday, March 1, 2015 9:07:36 AM
> Subject: Re: [openstack-dev] [Manila] Ceph native driver for manila
>
> On 27.02.2015 at 01:04, Sage Weil wrote:
> > [sorry for ceph-devel double-post, forgot to include
> > openstack-dev]
> >
> > Hi everyone,
> >
> > The online Ceph Developer Summit is next week[1] and among other
> > things we'll be talking about how to support CephFS in Manila.  At
> > a high level, there are basically two paths:
>
> We discussed the CephFS Manila topic also on the last Manila Midcycle
> Meetup (Kilo) [1][2]
>
> > 2) Native CephFS driver
> >
> > As I currently understand it,
> >
> > - The driver will set up CephFS auth credentials so that the guest
> > VM can mount CephFS directly - The guest VM will need access to the
> > Ceph network.  That makes this mainly interesting for private
> > clouds and trusted environments. - The guest is responsible for
> > running 'mount -t ceph ...'. - I'm not sure how we provide the auth
> > credential to the user/guest...
>
> The auth credentials need to be handled currently by a application
> orchestration solution I guess. I see currently no solution on the
> Manila layer level atm.
>

There were some discussions in the past in the Manila community on guest
auto-mount, but I guess nothing was conclusive there.

Application orchestration can be achieved by having tenant-specific VM
images with the creds pre-loaded; having the creds injected via
cloud-init should work too.
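
For example, a cloud-init user-data fragment along these lines could
drop a tenant keyring into the guest (identity name and key are
placeholders):

  #cloud-config
  write_files:
    - path: /etc/ceph/ceph.client.tenant-foo.keyring
      permissions: '0600'
      content: |
        [client.tenant-foo]
            key = <cephx key here>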


>
> If Ceph would provide OpenStack Keystone authentication for
> rados/cephfs instead of CephX, it could be handled via app orch easily.
>
> > This would perform better than an NFS gateway, but there are
> > several gaps on the security side that make this unusable currently
> > in an untrusted environment:
> >
> > - The CephFS MDS auth credentials currently are _very_ basic.  As
> > in, binary: can this host mount or it cannot.  We have the auth cap
> > string parsing in place to restrict to a subdirectory (e.g., this
> > tenant can only mount /tenants/foo), but the MDS does not enforce
> > this yet.  [medium project to add that]
> >
> > - The same credential could be used directly via librados to access
> > the data pool directly, regardless of what the MDS has to say about
> > the namespace.  There are two ways around this:
> >
> > 1- Give each tenant a separate rados pool.  This works today.
> > You'd set a directory policy that puts all files created in that
> > subdirectory in that tenant's pool, then only let the client access
> > those rados pools.
> >
> > 1a- We currently lack an MDS auth capability that restricts which
> > clients get to change that policy.  [small project]
> >
> > 2- Extend the MDS file layouts to use the rados namespaces so that
> >  users can be separated within the same rados pool.  [Medium
> > project]
> >
> > 3- Something fancy with MDS-generated capabilities specifying which
> >  rados objects clients get to read.  This probably falls in the
> > category of research, although there are some papers we've seen
> > that look promising. [big project]
> >
> > Anyway, this leads to a few questions:
> >
> > - Who is interested in using Manila to attach CephFS to guest VMs?
>

I didn't get this question... The goal of Manila is to provision shared
filesystems to VMs, so everyone interested in using CephFS would be
interested in attaching (I guess you meant mounting?) CephFS to VMs, no?



> > - What use cases are you interested? - How important is security in
> > your environment?
>

The NFS-Ganesha based service VM approach (for network isolation) in
Manila is still in the works, AFAIK.


>
> As you know we (Deutsche Telekom) are may interested to provide shared
> filesystems via CephFS to VMs instead of e.g. via NFS. We can
> provide/discuss use cases at CDS.
>
> For us security is very critical, as the performance is too. The first
> solution via ganesha is not what we prefer (to use CephFS via p9 and
> NFS would not perform that well I guess). The second solution, to use
> CephFS directly to the VM would be a bad solution from the security
> point of view since we can't expose the Ceph public network directly
> to the VMs to prevent all the security issues we discussed already.
>

Is there any place where the security issues are captured for the case
where VMs access CephFS directly? I was curious to understand. IIUC,
Neutron provides private and public networks, and for VMs to access an
external CephFS network the tenant private network needs to be
bridged/routed to the external provider network, and there are ways
Neutron achieves that.

Are you saying that this approach of Neutron is insecure?

thanx,
deepak



* Re: [openstack-dev] [Manila] Ceph native driver for manila
From: Danny Al-Gaaf @ 2015-03-03 23:40 UTC
  To: Deepak Shetty,
	OpenStack Development Mailing List (not for usage questions)
  Cc: ceph-devel

On 03.03.2015 at 19:31, Deepak Shetty wrote:
[...]
>> For us security is very critical, as the performance is too. The
>> first solution via ganesha is not what we prefer (to use CephFS
>> via p9 and NFS would not perform that well I guess). The second
>> solution, to use CephFS directly to the VM would be a bad
>> solution from the security point of view since we can't expose
>> the Ceph public network directly to the VMs to prevent all the
>> security issues we discussed already.
>> 
> 
> Is there any place the security issues are captured for the case
> where VMs access CephFS directly ?

No, there isn't any such place, and that is the issue for us.

> I was curious to understand. IIUC Neutron provides private and
> public networks and for VMs to access external CephFS network, the
> tenant private network needs to be bridged/routed to the external
> provider network and there are ways neturon achives it.
> 
> Are you saying that this approach of neutron is insecure ?

I'm not saying neutron itself is insecure.

The problem is: we don't want any VM to get access to the ceph public
network at all, since this would mean access to all MON, OSD and MDS
daemons.

If a tenant VM has access to the ceph public net, which is needed to
use/mount native cephfs in this VM, one critical issue would be: the
client can attack any ceph component via this network. Maybe I'm
missing something, but routing doesn't change this fact.

Danny





* Re: [Manila] Ceph native driver for manila
From: Deepak Shetty @ 2015-03-04  4:19 UTC
  To: Danny Al-Gaaf
  Cc: OpenStack Development Mailing List (not for usage questions),
	ceph-devel@vger.kernel.org



On Wed, Mar 4, 2015 at 5:10 AM, Danny Al-Gaaf <danny.al-gaaf@bisect.de>
wrote:

> On 03.03.2015 at 19:31, Deepak Shetty wrote:
> [...]
> >> For us security is very critical, as the performance is too. The
> >> first solution via ganesha is not what we prefer (to use CephFS
> >> via p9 and NFS would not perform that well I guess). The second
> >> solution, to use CephFS directly to the VM would be a bad
> >> solution from the security point of view since we can't expose
> >> the Ceph public network directly to the VMs to prevent all the
> >> security issues we discussed already.
> >>
> >
> > Is there any place the security issues are captured for the case
> > where VMs access CephFS directly ?
>
> No there isn't any place and this is the issue for us.
>
> > I was curious to understand. IIUC Neutron provides private and
> > public networks and for VMs to access external CephFS network, the
> > tenant private network needs to be bridged/routed to the external
> > provider network and there are ways neturon achives it.
> >
> > Are you saying that this approach of neutron is insecure ?
>
> I don't say neutron itself is insecure.
>
> The problem is: we don't want any VM to get access to the ceph public
> network at all since this would mean access to all MON, OSDs and MDS
> daemons.
>
> If a tenant VM has access to the ceph public net, which is needed to
> use/mount native cephfs in this VM, one critical issue would be: the
> client can attack any ceph component via this network. Maybe I misses
> something, but routing doesn't change this fact.
>

Agree, but there are ways you can restrict the tenant VMs to specific
network ports only, using neutron security groups, and limit what a
tenant VM can do. On the CephFS side one can use SELinux labels to
provide an additional level of security for the Ceph daemons, where
only certain processes can access/modify them. I am just thinking
aloud here; I'm not sure how well cephfs works combined with SELinux.
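
For illustration, with the (2015-era) neutron CLI that might look like
this (group name and CIDR invented) -- though note that OSDs by default
listen on a wide port range (6800-7300), which makes tight filtering hard:

  # allow egress from tenant VMs only to the Ceph MON port
  neutron security-group-create ceph-clients
  neutron security-group-rule-create ceph-clients \
      --direction egress --protocol tcp \
      --port-range-min 6789 --port-range-max 6789 \
      --remote-ip-prefix 192.0.2.0/24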

Thinking more, it seems like you then need a solution that goes via
the serviceVM approach but provides native CephFS mounts instead of
NFS?

thanx,
deepak


>


* Re: [openstack-dev] [Manila] Ceph native driver for manila
From: Danny Al-Gaaf @ 2015-03-04 14:05 UTC
  To: Deepak Shetty
  Cc: OpenStack Development Mailing List (not for usage questions), ceph-devel

On 04.03.2015 at 05:19, Deepak Shetty wrote:
> On Wed, Mar 4, 2015 at 5:10 AM, Danny Al-Gaaf
> <danny.al-gaaf@bisect.de> wrote:
>> On 03.03.2015 at 19:31, Deepak Shetty wrote: [...]
[...]
>> 
>>> I was curious to understand. IIUC Neutron provides private and 
>>> public networks and for VMs to access external CephFS network,
>>> the tenant private network needs to be bridged/routed to the
>>> external provider network and there are ways neturon achives
>>> it.
>>> 
>>> Are you saying that this approach of neutron is insecure ?
>> 
>> I don't say neutron itself is insecure.
>> 
>> The problem is: we don't want any VM to get access to the ceph
>> public network at all since this would mean access to all MON,
>> OSDs and MDS daemons.
>> 
>> If a tenant VM has access to the ceph public net, which is needed
>> to use/mount native cephfs in this VM, one critical issue would
>> be: the client can attack any ceph component via this network.
>> Maybe I misses something, but routing doesn't change this fact.
>> 
> 
> Agree, but there are ways you can restrict the tenant VMs to
> specific network ports only using neutron security groups and limit
> what tenant VM can do. On the CephFS side one can use selinux
> labels to provide addnl level of security for Ceph daemons, where
> in only certain process can access/modify them, I am just thinking
> aloud here, i m not sure how well cephfs works with selinux 
> combined.

I don't see how neutron security groups would help here. The problem
is: if a VM has access, in whatever way, to the Ceph network, an
attacker/user can on the one hand attack ALL ceph daemons, and on the
other hand, if there is a bug, crash all the daemons so that you lose
the complete cluster.

SELinux profiles may help with preventing subverted security or gained
privileges, but they would not help in this case to prevent the VM
"user" from crashing the cluster.

> Thinking more, it seems like then you need a solution that goes via
> the serviceVM approach but provide native CephFS mounts instead of
> NFS ?

Another level of indirection. I really like the approach of filesystem
passthrough ... the only critical question is whether virtfs/p9 is
still supported in some way (and if not, the question is: why not?).
Danny


* Re: [openstack-dev] [Manila] Ceph native driver for manila
From: Csaba Henk @ 2015-03-04 14:12 UTC
  To: OpenStack Development Mailing List (not for usage questions)
  Cc: ceph-devel, Danny Al-Gaaf

Hi Danny,

----- Original Message -----
> From: "Danny Al-Gaaf" <danny.al-gaaf@bisect.de>
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>,
> ceph-devel@vger.kernel.org
> Sent: Sunday, March 1, 2015 3:07:36 PM
> Subject: Re: [openstack-dev] [Manila] Ceph native driver for manila
...
> For us security is very critical, as the performance is too. The first
> solution via ganesha is not what we prefer (to use CephFS via p9 and
> NFS would not perform that well I guess). The second solution, to use

Can you please explain why the Ganesha based stack involves 9p?
(Maybe I'm missing something basic, but I don't know.)

Cheers
Csaba


* Re: [Manila] Ceph native driver for manila
From: Danny Al-Gaaf @ 2015-03-04 14:26 UTC
  To: Csaba Henk, OpenStack Development Mailing List (not for usage questions)
  Cc: ceph-devel@vger.kernel.org

On 04.03.2015 at 15:12, Csaba Henk wrote:
> Hi Danny,
> 
> ----- Original Message -----
>> From: "Danny Al-Gaaf" <danny.al-gaaf@bisect.de> To: "OpenStack
>> Development Mailing List (not for usage questions)"
>> <openstack-dev@lists.openstack.org>, ceph-devel@vger.kernel.org
>> Sent: Sunday, March 1, 2015 3:07:36 PM Subject: Re:
>> [openstack-dev] [Manila] Ceph native driver for manila
> ...
>> For us security is very critical, as the performance is too. The
>> first solution via ganesha is not what we prefer (to use CephFS
>> via p9 and NFS would not perform that well I guess). The second
>> solution, to use
> 
> Can you please explain that why does the Ganesha based stack
> involve 9p? (Maybe I miss something basic, but I don't know.)

Sorry, it seems that I mixed it up with the p9 case. But performance
may still be an issue if you use NFS on top of CephFS (incl. all the
VM layers involved in this setup).

For me the question with all these NFS setups is: why should I use NFS
on top of CephFS? What justifies CephFS's existence in this case? I
would like to use CephFS directly or via filesystem passthrough.

Danny



* Re: [openstack-dev] [Manila] Ceph native driver for manila
From: Csaba Henk @ 2015-03-04 15:03 UTC
  To: OpenStack Development Mailing List (not for usage questions)
  Cc: ceph-devel, Danny Al-Gaaf



----- Original Message -----
> From: "Danny Al-Gaaf" <danny.al-gaaf@bisect.de>
> To: "Csaba Henk" <chenk@redhat.com>, "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Cc: ceph-devel@vger.kernel.org
> Sent: Wednesday, March 4, 2015 3:26:52 PM
> Subject: Re: [openstack-dev] [Manila] Ceph native driver for manila
> 
> On 04.03.2015 at 15:12, Csaba Henk wrote:
> > ----- Original Message -----
> >> From: "Danny Al-Gaaf" <danny.al-gaaf@bisect.de> To: "OpenStack
> >> Development Mailing List (not for usage questions)"
> >> <openstack-dev@lists.openstack.org>, ceph-devel@vger.kernel.org
> >> Sent: Sunday, March 1, 2015 3:07:36 PM Subject: Re:
> >> [openstack-dev] [Manila] Ceph native driver for manila
> > ...
> >> For us security is very critical, as the performance is too. The
> >> first solution via ganesha is not what we prefer (to use CephFS
> >> via p9 and NFS would not perform that well I guess). The second
> >> solution, to use
> > 
> > Can you please explain that why does the Ganesha based stack
> > involve 9p? (Maybe I miss something basic, but I don't know.)
> 
> Sorry, seems that I mixed it up with the p9 case. But the performance
> is may still an issue if you use NFS on top of CephFS (incl. all the
> VM layer involved within this setup).
> 
> For me the question with all these NFS setups is: why should I use NFS
> on top on CephFS? What is the right to exist of CephFS in this case? I
> would like to use CephFS directly or via filesystem passthrough.

That's a good question. Or indeed, two questions:

1. Why use NFS?
2. Why does the NFS export of Ceph need to involve CephFS?

1.

As of "why NFS" -- it's probably a good selling point that it's
standard filesystem export technology and the tenants can remain
backend-unaware as long as the backend provides NFS export.

We are working on the Ganesha library --

https://blueprints.launchpad.net/manila/+spec/gateway-mediated-with-ganesha

with the aim of making it easy to create Ganesha based drivers. So if you
already have an FSAL, you can get an NFS exporting driver almost for free
(with a modest amount of glue code). So you could consider making such a
driver for Ceph, to satisfy customers who demand NFS access, even if there
is a native driver which gets the limelight.

(See commits implementing this under "Work Items" of the BP -- one is the
actual Ganesha library and the other two show how it can be hooked in, by the
example of the Gluster driver. At the moment flat network (share-server-less)
drivers are supported.)

2.

As for why CephFS was the technology chosen for implementing the Ceph FSAL
for Ganesha, that's something I'd also like to know. I have the following
naive question in mind: "Would it not have been better to implement the
Ceph FSAL with something »closer to« Ceph?", and I have three actual
questions about it:

- does this question make sense in this form, and if not, how should it be
  amended?
- if it does, I'm asking the question itself (or the amended version of it).
- if the answer is yes, is there a chance someone would create an alternative
  Ceph FSAL on that assumed closer-to-Ceph technology?


Cheers
Csaba








* Re: [openstack-dev] [Manila] Ceph native driver for manila
From: Gregory Farnum @ 2015-03-04 17:56 UTC
  To: Csaba Henk
  Cc: OpenStack Development Mailing List (not for usage questions),
	ceph-devel, Danny Al-Gaaf

On Wed, Mar 4, 2015 at 7:03 AM, Csaba Henk <chenk@redhat.com> wrote:
>
>
> ----- Original Message -----
>> From: "Danny Al-Gaaf" <danny.al-gaaf@bisect.de>
>> To: "Csaba Henk" <chenk@redhat.com>, "OpenStack Development Mailing List (not for usage questions)"
>> <openstack-dev@lists.openstack.org>
>> Cc: ceph-devel@vger.kernel.org
>> Sent: Wednesday, March 4, 2015 3:26:52 PM
>> Subject: Re: [openstack-dev] [Manila] Ceph native driver for manila
>>
>> On 04.03.2015 at 15:12, Csaba Henk wrote:
>> > ----- Original Message -----
>> >> From: "Danny Al-Gaaf" <danny.al-gaaf@bisect.de> To: "OpenStack
>> >> Development Mailing List (not for usage questions)"
>> >> <openstack-dev@lists.openstack.org>, ceph-devel@vger.kernel.org
>> >> Sent: Sunday, March 1, 2015 3:07:36 PM Subject: Re:
>> >> [openstack-dev] [Manila] Ceph native driver for manila
>> > ...
>> >> For us security is very critical, as the performance is too. The
>> >> first solution via ganesha is not what we prefer (to use CephFS
>> >> via p9 and NFS would not perform that well I guess). The second
>> >> solution, to use
>> >
>> > Can you please explain that why does the Ganesha based stack
>> > involve 9p? (Maybe I miss something basic, but I don't know.)
>>
>> Sorry, seems that I mixed it up with the p9 case. But the performance
>> is may still an issue if you use NFS on top of CephFS (incl. all the
>> VM layer involved within this setup).
>>
>> For me the question with all these NFS setups is: why should I use NFS
>> on top on CephFS? What is the right to exist of CephFS in this case? I
>> would like to use CephFS directly or via filesystem passthrough.
>
> That's a good question. Or indeed, two questions:
>
> 1. Why to use NFS?
> 2. Why does the NFS export of Ceph need to involve CephFS?
>
> 1.
>
> As of "why NFS" -- it's probably a good selling point that it's
> standard filesystem export technology and the tenants can remain
> backend-unaware as long as the backend provides NFS export.
>
> We are working on the Ganesha library --
>
> https://blueprints.launchpad.net/manila/+spec/gateway-mediated-with-ganesha
>
> with the aim to make it easy to create Ganesha based drivers. So if you have
> already an FSAL, you can get at an NFS exporting driver almost for free (with a
> modest amount of glue code). So you could consider making such a driver for
> Ceph, to satisfy customers who demand NFS access, even if there is a native
> driver which gets the limelight.
>
> (See commits implementing this under "Work Items" of the BP -- one is the
> actual Ganesha library and the other two show how it can be hooked in, by the
> example of the Gluster driver. At the moment flat network (share-server-less)
> drivers are supported.)
>
> 2.
>
> As of why CephFS was the technology chosen for implementing the Ceph FSAL for
> Ganesha, that's something I'd also like to know. I have the following naive
> question in mind: "Would it not have been better to implement Ceph FSAL with
> something »closer to« Ceph?", and I have three actual questions about it:
>
> - does this question make sense in this form, and if not, how to amend?
> - I'm asking the question itself, or the amended version of it.
> - If the answer is yes, is there a chance someone would create an alternative
>   Ceph FSAL on that assumed closer-to-Ceph technology?

I don't understand. What "closer-to-Ceph" technology do you want than
native use of the libcephfs library? Are you saying to use raw RADOS
to provide storage instead of CephFS?

In that case, it doesn't make a lot of sense: CephFS is how you
provide a real filesystem in the Ceph ecosystem. I suppose if you
wanted to create a lighter-weight pseudo-filesystem you could do so
(somebody is building a "RadosFS", I think from CERN?) but then it's
not interoperable with other stuff.
-Greg
