All of lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH] docs: Add CloudStack documentation
@ 2012-08-08 21:51 Wido den Hollander
  2012-09-04 23:49 ` Sage Weil
  2012-09-05 15:21 ` Calvin Morrow
  0 siblings, 2 replies; 10+ messages in thread
From: Wido den Hollander @ 2012-08-08 21:51 UTC (permalink / raw)
  To: ceph-devel; +Cc: Wido den Hollander

Basic documentation on how to use RBD with CloudStack

Signed-off-by: Wido den Hollander <wido@widodh.nl>
---
 doc/rbd/rbd-cloudstack.rst |   49 ++++++++++++++++++++++++++++++++++++++++++++
 doc/rbd/rbd.rst            |    2 +-
 2 files changed, 50 insertions(+), 1 deletion(-)
 create mode 100644 doc/rbd/rbd-cloudstack.rst

diff --git a/doc/rbd/rbd-cloudstack.rst b/doc/rbd/rbd-cloudstack.rst
new file mode 100644
index 0000000..04e1a7c
--- /dev/null
+++ b/doc/rbd/rbd-cloudstack.rst
@@ -0,0 +1,49 @@
+==========================
+ RBD and Apache CloudStack
+==========================
+You can use RBD to run instances in Apache CloudStack.
+
+This can be done by adding an RBD pool as Primary Storage.
+
+There are a couple of prerequisites:
+* You need CloudStack 4.0 or higher
+* Qemu on the hypervisor has to be compiled with RBD enabled
+* The libvirt version on the hypervisor has to be at least 0.10 with RBD enabled
+
+Make sure you meet these requirements before installing the CloudStack Agent on the hypervisor(s)!
+
+.. important:: To use RBD with CloudStack, you must have a running Ceph cluster!
+
+Limitations
+-------------
+Running instances from RBD has a couple of limitations:
+
+* An additional NFS Primary Storage pool is required for running System VMs
+* Snapshotting RBD volumes is not possible (at this moment)
+* Only one monitor can be configured
+
+Add Hypervisor
+--------------
+Please follow the official CloudStack documentation on how to do this.
+
+There is no special way of adding a hypervisor when using RBD, nor is any configuration needed on the hypervisor.
+
+Add RBD Primary Storage
+-----------------------
+Once the hypervisor has been added, log on to the CloudStack UI.
+
+* Infrastructure
+* Primary Storage
+* "Add Primary Storage"
+* Select "Protocol" RBD
+* Fill in your cluster information (cephx is supported)
+* Optionally add the tag 'rbd'
+
+Now you should be able to deploy instances on RBD.
+
+RBD Disk Offering
+-----------------
+Create a special "Disk Offering" matching the tag 'rbd' so that the StoragePoolAllocator
+chooses the RBD pool when searching for a suitable storage pool.
+
+Since there is also an NFS storage pool, instances might otherwise get deployed on NFS instead of RBD.
diff --git a/doc/rbd/rbd.rst b/doc/rbd/rbd.rst
index af1682f..6fd1999 100644
--- a/doc/rbd/rbd.rst
+++ b/doc/rbd/rbd.rst
@@ -31,7 +31,7 @@ the Ceph FS filesystem, and RADOS block devices simultaneously.
 	QEMU and RBD <qemu-rbd>
 	libvirt <libvirt>
 	RBD and OpenStack <rbd-openstack>
-	
+	RBD and CloudStack <rbd-cloudstack>
 	
 	
 	
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH] docs: Add CloudStack documentation
  2012-08-08 21:51 [PATCH] docs: Add CloudStack documentation Wido den Hollander
@ 2012-09-04 23:49 ` Sage Weil
  2012-09-05 15:21 ` Calvin Morrow
  1 sibling, 0 replies; 10+ messages in thread
From: Sage Weil @ 2012-09-04 23:49 UTC (permalink / raw)
  To: Wido den Hollander; +Cc: ceph-devel

Finally applied this one.  Great work, Wido!

sage



^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH] docs: Add CloudStack documentation
  2012-08-08 21:51 [PATCH] docs: Add CloudStack documentation Wido den Hollander
  2012-09-04 23:49 ` Sage Weil
@ 2012-09-05 15:21 ` Calvin Morrow
  2012-09-05 15:28   ` Wido den Hollander
  1 sibling, 1 reply; 10+ messages in thread
From: Calvin Morrow @ 2012-09-05 15:21 UTC (permalink / raw)
  To: ceph-devel

I saw the limitations section references only being able to configure
a single monitor.  Some followup questions for someone interested in
using RBD with Cloudstack 4:

Is it that you can only specify a single monitor to connect to within
Cloudstack 4 (but can still have a 3 monitor configuration) ... or
must you only have a single monitor for some reason?

If you have a ceph.conf on the kvm nodes with more monitors, will it
pick up on the additional monitors?

Is it possible to use a "floating" ip address resource in a pacemaker
configuration for the CloudStack "monitor" IP address?  Is there any
other way around a single-monitor point of failure?

Thanks for your hard work and any guidance you can provide!

Calvin


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH] docs: Add CloudStack documentation
  2012-09-05 15:21 ` Calvin Morrow
@ 2012-09-05 15:28   ` Wido den Hollander
  2012-09-05 15:42     ` Sage Weil
  2012-09-05 16:14     ` Tommi Virtanen
  0 siblings, 2 replies; 10+ messages in thread
From: Wido den Hollander @ 2012-09-05 15:28 UTC (permalink / raw)
  To: Calvin Morrow; +Cc: ceph-devel

On 09/05/2012 05:21 PM, Calvin Morrow wrote:
> I saw the limitations section references only being able to configure
> a single monitor.  Some followup questions for someone interested in
> using RBD with Cloudstack 4:
>
> Is it that you can only specify a single monitor to connect to within
> Cloudstack 4 (but can still have a 3 monitor configuration) ... or
> must you only have a single monitor for some reason?
>

You can only specify one monitor in CloudStack, but your cluster can 
have multiple.

This is due to the internals of CloudStack. It stores storage pools in a 
URI format, like: rbd://admin:secret@1.2.3.4/rbd

In that format there is no way of storing multiple monitors.
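That single-host restriction can be seen directly with Java's java.net.URI. A sketch, reusing the example values above (this is not CloudStack source):

```java
import java.net.URI;

// Sketch only: the credentials and monitor address are the example
// values from the message above, not a real cluster.
public class RbdUriDemo {
    public static void main(String[] args) throws Exception {
        URI uri = new URI("rbd://admin:secret@1.2.3.4/rbd");
        // With "//" present, java.net.URI parses a single authority:
        // exactly one host slot, so a second monitor has nowhere to go.
        System.out.println(uri.getHost());     // 1.2.3.4
        System.out.println(uri.getUserInfo()); // admin:secret
        System.out.println(uri.getPath());     // /rbd
    }
}
```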

> If you have a ceph.conf on the kvm nodes with more monitors, will it
> pick up on the additional monitors?
>

That is a good question. I'm not sure, but I wouldn't recommend it; it 
could be confusing.

With CloudStack two components are involved:
* libvirt with a storage pool
* Qemu connecting to RBD

Both could read the ceph.conf since librbd does it, but I don't know if 
it will pick up any additional monitors.

> Is it possible to use a "floating" ip address resource in a pacemaker
> configuration for the CloudStack "monitor" IP address?  Is there any
> other way around a single-monitor point of failure?
>

Your virtual machines will not stop functioning if that monitor dies. As 
soon as librbd connects, it receives the full monitor map and keeps working.

You won't be able to start instances or do any RBD operations as long as 
that monitor is down.

I don't know if you can use VRRP for a monitor, but I wouldn't put all 
the effort into it.

It's on my roadmap to implement RBD layering in an upcoming CloudStack 
release, since the whole storage layer is getting a make-over.

This should enable me to also tune caching settings per pool and 
probably squeeze in a way to use multiple monitors as well.

I'm aiming for CloudStack 4.1 or 4.2 for this to be implemented.

> Thanks for your hard work and any guidance you can provide!
>

You're welcome!

Wido



^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH] docs: Add CloudStack documentation
  2012-09-05 15:28   ` Wido den Hollander
@ 2012-09-05 15:42     ` Sage Weil
  2012-09-05 16:14     ` Tommi Virtanen
  1 sibling, 0 replies; 10+ messages in thread
From: Sage Weil @ 2012-09-05 15:42 UTC (permalink / raw)
  To: Wido den Hollander; +Cc: Calvin Morrow, ceph-devel

On Wed, 5 Sep 2012, Wido den Hollander wrote:
> On 09/05/2012 05:21 PM, Calvin Morrow wrote:
> > I saw the limitations section references only being able to configure
> > a single monitor.  Some followup questions for someone interested in
> > using RBD with Cloudstack 4:
> > 
> > Is it that you can only specify a single monitor to connect to within
> > Cloudstack 4 (but can still have a 3 monitor configuration) ... or
> > must you only have a single monitor for some reason?
> > 
> 
> You can only specify one monitor in CloudStack, but your cluster can have
> multiple.
> 
> This is due to the internals of CloudStack. It stores storage pools in a URI
> format, like: rbd://admin:secret@1.2.3.4/rbd
>
> In that format there is no way of storing multiple monitors.

What if you use a dns name with multiple A records?  The ceph bits are all 
smart enough to populate the monitor search list with all A and AAAA 
records...
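The lookup side of that is plain JDK, for what it's worth. A minimal sketch; "mon.ceph.example.com" is a hypothetical round-robin name, pass a real one as args[0]:

```java
import java.net.InetAddress;

// Minimal sketch: resolve every A/AAAA record behind one DNS name,
// one address per monitor in a round-robin setup.
public class MonLookup {
    public static void main(String[] args) throws Exception {
        String name = args.length > 0 ? args[0] : "mon.ceph.example.com";
        for (InetAddress addr : InetAddress.getAllByName(name)) {
            System.out.println(addr.getHostAddress());
        }
    }
}
```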

sage


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH] docs: Add CloudStack documentation
  2012-09-05 15:28   ` Wido den Hollander
  2012-09-05 15:42     ` Sage Weil
@ 2012-09-05 16:14     ` Tommi Virtanen
  2012-09-05 17:05       ` Wido den Hollander
  1 sibling, 1 reply; 10+ messages in thread
From: Tommi Virtanen @ 2012-09-05 16:14 UTC (permalink / raw)
  To: Wido den Hollander; +Cc: Calvin Morrow, ceph-devel

On Wed, Sep 5, 2012 at 8:28 AM, Wido den Hollander <wido@widodh.nl> wrote:
> You can only specify one monitor in CloudStack, but your cluster can have
> multiple.
>
> This is due to the internals of CloudStack. It stores storage pools in a URI
> format, like: rbd://admin:secret@1.2.3.4/rbd

You know, for a custom scheme like "rbd:", any generic URI parser has
no business dictating what's allowed and what's not. Just don't use
"//" and you're not forced to follow those rules.

For example, rbd:?id=admin&secret=s3kr1t&mon=1.2.3.4&mon=5.6.7.8&pool=rbd
is perfectly legal.

Whether some Java library fails to implement generic URIs is another concern.

http://tools.ietf.org/html/rfc3986#section-3
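An illustrative sketch of handling that query-style form with java.net.URI (not CloudStack code): without "//" the URI is opaque, so take the raw scheme-specific part and split it by hand; the repeated mon key comes through intact.

```java
import java.net.URI;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch, not CloudStack code: without "//" the URI is
// "opaque" to java.net.URI, so pull the raw text out and split it.
public class RbdQueryUri {
    public static void main(String[] args) throws Exception {
        URI uri = new URI("rbd:?id=admin&secret=s3kr1t&mon=1.2.3.4&mon=5.6.7.8&pool=rbd");
        String ssp = uri.getSchemeSpecificPart();   // "?id=admin&...&pool=rbd"
        Map<String, List<String>> params = new LinkedHashMap<>();
        for (String kv : ssp.substring(1).split("&")) {
            String[] p = kv.split("=", 2);
            params.computeIfAbsent(p[0], k -> new ArrayList<>()).add(p[1]);
        }
        System.out.println(params.get("mon"));      // [1.2.3.4, 5.6.7.8]
    }
}
```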

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH] docs: Add CloudStack documentation
  2012-09-05 16:14     ` Tommi Virtanen
@ 2012-09-05 17:05       ` Wido den Hollander
  2012-09-05 18:22         ` Tommi Virtanen
  0 siblings, 1 reply; 10+ messages in thread
From: Wido den Hollander @ 2012-09-05 17:05 UTC (permalink / raw)
  To: Tommi Virtanen; +Cc: Calvin Morrow, ceph-devel



On 09/05/2012 06:14 PM, Tommi Virtanen wrote:
> On Wed, Sep 5, 2012 at 8:28 AM, Wido den Hollander <wido@widodh.nl> wrote:
>> You can only specify one monitor in CloudStack, but your cluster can have
>> multiple.
>>
>> This is due to the internals of CloudStack. It stores storage pools in a URI
>> format, like: rbd://admin:secret@1.2.3.4/rbd
>
> You know, for a custom scheme like "rbd:", any generic URI parser has
> no business dictating what's allowed and what's not. Just don't use
> "//" and you're not forced to follow those rules.
>
> For example, rbd:?id=admin&secret=s3kr1t&mon=1.2.3.4&mon=5.6.7.8&pool=rbd
> is perfectly legal.
>
> Whether some Java library fails to implement generic URIs is another concern..
>

It is indeed a Java library in this case:
http://docs.oracle.com/javase/6/docs/api/java/net/URI.html

But Sage's suggestion about using Round-Robin DNS seems great as well; 
I didn't know that was supported.

Wido

> http://tools.ietf.org/html/rfc3986#section-3
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH] docs: Add CloudStack documentation
  2012-09-05 17:05       ` Wido den Hollander
@ 2012-09-05 18:22         ` Tommi Virtanen
  2012-09-05 22:39           ` Wido den Hollander
  0 siblings, 1 reply; 10+ messages in thread
From: Tommi Virtanen @ 2012-09-05 18:22 UTC (permalink / raw)
  To: Wido den Hollander; +Cc: Calvin Morrow, ceph-devel

On Wed, Sep 5, 2012 at 10:05 AM, Wido den Hollander <wido@widodh.nl> wrote:
>> For example, rbd:?id=admin&secret=s3kr1t&mon=1.2.3.4&mon=5.6.7.8&pool=rbd
>> is perfectly legal.
>>
>> Whether some Java library fails to implement generic URIs is another
>> concern..
> It is indeed a Java library in this case:
> http://docs.oracle.com/javase/6/docs/api/java/net/URI.html

I don't see anything on that page that prevents you from doing
u.getQuery() and using the fields from that (and then living with
Java's lack of a good stdlib and using something like
http://stackoverflow.com/questions/1667278/parsing-query-strings-in-java
to actually parse the string into key=value pairs).

Or, use *exactly* what the QEmu RBD code is doing, by getting the data
out of the URI with getSchemeSpecificPart -- that'll work as long as
you don't start the part with a slash ("rbd:/").

I see no reason here why the
rbd:poolname/devicename[@snapshotname][:option1=value1[:option2=value2...]]
syntax from QEmu wouldn't work, just as is.
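For illustration (the pool/image values are made up, and this is not QEmu or CloudStack code), that spec could be pulled apart the same way:

```java
import java.net.URI;

// Illustration only (made-up pool/image): the QEMU-style spec survives
// java.net.URI because nothing after "rbd:" starts with a slash.
public class QemuRbdSpec {
    public static void main(String[] args) throws Exception {
        URI uri = new URI("rbd:rbd/myimage@snap:mon_host=1.2.3.4");
        String spec = uri.getSchemeSpecificPart(); // rbd/myimage@snap:mon_host=1.2.3.4
        String[] parts = spec.split(":", 2);       // image part vs options
        String[] poolImage = parts[0].split("/", 2);
        System.out.println(poolImage[0]);          // rbd
        System.out.println(poolImage[1]);          // myimage@snap
    }
}
```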

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH] docs: Add CloudStack documentation
  2012-09-05 18:22         ` Tommi Virtanen
@ 2012-09-05 22:39           ` Wido den Hollander
  2012-09-05 22:52             ` Tommi Virtanen
  0 siblings, 1 reply; 10+ messages in thread
From: Wido den Hollander @ 2012-09-05 22:39 UTC (permalink / raw)
  To: Tommi Virtanen; +Cc: ceph-devel



On 09/05/2012 08:22 PM, Tommi Virtanen wrote:
> On Wed, Sep 5, 2012 at 10:05 AM, Wido den Hollander <wido@widodh.nl> wrote:
>>> For example, rbd:?id=admin&secret=s3kr1t&mon=1.2.3.4&mon=5.6.7.8&pool=rbd
>>> is perfectly legal.
>>>
>>> Whether some Java library fails to implement generic URIs is another
>>> concern..
>> It is indeed a Java library in this case:
>> http://docs.oracle.com/javase/6/docs/api/java/net/URI.html
>
> I don't see anything on that page to prevent you from doing
> u.getQuery() and using the fields from that (and then living with Java
> lack of good stdlib and using something like
> http://stackoverflow.com/questions/1667278/parsing-query-strings-in-java
> to actually parse the string into key=value pairs).
>

The main problem with that is how CloudStack internally stores the data. 
By the time the storage driver is reached, the URI no longer arrives in 
plain form; it has been split up with getHost(), getAuthUsername() and 
getPath(), and arrives in these separate variables at the point where 
libvirt is called.
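And once the "//" is dropped, as in the query-style suggestion, those same accessors stop returning anything, so that split-up path would need changing anyway. A quick sketch (not CloudStack code):

```java
import java.net.URI;

// Quick sketch (not CloudStack code): on an opaque "rbd:" URI the
// component accessors a storage driver might rely on come back null.
public class OpaqueAccessors {
    public static void main(String[] args) throws Exception {
        URI opaque = new URI("rbd:rbd/myimage");   // no "//", so opaque
        System.out.println(opaque.getHost());      // null
        System.out.println(opaque.getPath());      // null
        System.out.println(opaque.getSchemeSpecificPart()); // rbd/myimage
    }
}
```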

Wido

> Or, use *exactly* what the QEmu RBD code is doing, by getting the data
> our of the URI with getSchemeSpecificPart -- that'll work as long as
> you don't start the part with a slash ("rbd:/").
>
> I see no reason here why the
> rbd:poolname/devicename[@snapshotname][:option1=value1[:option2=value2...]]
> syntax from QEmu wouldn't work, just as is.
>

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH] docs: Add CloudStack documentation
  2012-09-05 22:39           ` Wido den Hollander
@ 2012-09-05 22:52             ` Tommi Virtanen
  0 siblings, 0 replies; 10+ messages in thread
From: Tommi Virtanen @ 2012-09-05 22:52 UTC (permalink / raw)
  To: Wido den Hollander; +Cc: ceph-devel

On Wed, Sep 5, 2012 at 3:39 PM, Wido den Hollander <wido@widodh.nl> wrote:
> The main problem with that is how CloudStack internally stores the data. At
> the storage driver the URI doesn't arrive in plain format, it gets splitted
> with getHost(), getAuthUsername(), getPath() and arrives in these separate
> variables at the point where libvirt is being called.

Perhaps they would be amenable to a pretty little patch that passes
the URI all the way through?

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2012-09-05 22:52 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-08-08 21:51 [PATCH] docs: Add CloudStack documentation Wido den Hollander
2012-09-04 23:49 ` Sage Weil
2012-09-05 15:21 ` Calvin Morrow
2012-09-05 15:28   ` Wido den Hollander
2012-09-05 15:42     ` Sage Weil
2012-09-05 16:14     ` Tommi Virtanen
2012-09-05 17:05       ` Wido den Hollander
2012-09-05 18:22         ` Tommi Virtanen
2012-09-05 22:39           ` Wido den Hollander
2012-09-05 22:52             ` Tommi Virtanen

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.