All of lore.kernel.org
 help / color / mirror / Atom feed
From: Calvin Morrow <calvin.morrow@gmail.com>
To: ceph-devel <ceph-devel@vger.kernel.org>
Subject: Re: [PATCH] docs: Add CloudStack documentation
Date: Wed, 5 Sep 2012 09:21:27 -0600	[thread overview]
Message-ID: <CADxhoDR-3guBfFXc_QSmvPthVp5R=vS9tB2zpn-WRaLCSAu=ZA@mail.gmail.com> (raw)
In-Reply-To: <1344462671-21848-1-git-send-email-wido@widodh.nl>

I saw the limitations section references only being able to configure
a single monitor.  Some followup questions for someone interested in
using RBD with Cloudstack 4:

Is it that you can only specify a single monitor to connect to within
Cloudstack 4 (but can still have a 3 monitor configuration) ... or
must you only have a single monitor for some reason?

If you have a ceph.conf on the kvm nodes with more monitors, will it
pick up on the additional monitors?

Is it possible to use a "floating" ip address resource in a pacemaker
configuration for the CloudStack "monitor" IP address?  Is there any
other way around a single-monitor point of failure?

Thanks for your hard work and any guidance you can provide!

Calvin

On Wed, Aug 8, 2012 at 3:51 PM, Wido den Hollander <wido@widodh.nl> wrote:
>
> The basic documentation about how you can use RBD with CloudStack
>
> Signed-off-by: Wido den Hollander <wido@widodh.nl>
> ---
>  doc/rbd/rbd-cloudstack.rst |   49
> ++++++++++++++++++++++++++++++++++++++++++++
>  doc/rbd/rbd.rst            |    2 +-
>  2 files changed, 50 insertions(+), 1 deletion(-)
>  create mode 100644 doc/rbd/rbd-cloudstack.rst
>
> diff --git a/doc/rbd/rbd-cloudstack.rst b/doc/rbd/rbd-cloudstack.rst
> new file mode 100644
> index 0000000..04e1a7c
> --- /dev/null
> +++ b/doc/rbd/rbd-cloudstack.rst
> @@ -0,0 +1,49 @@
> +===================
> + RBD and Apache CloudStack
> +===================
> +You can use RBD to run instances on in Apache CloudStack.
> +
> +This can be done by adding a RBD pool as Primary Storage.
> +
> +There are a couple of prerequisites:
> +* You need to CloudStack 4.0 or higher
> +* Qemu on the Hypervisor has to be compiled with RBD enabled
> +* The libvirt version on the Hypervisor has to be at least 0.10 with RBD
> enabled
> +
> +Make sure you meet this requirements before installing the CloudStack
> Agent on the Hypervisor(s)!
> +
> +.. important:: To use RBD with CloudStack, you must have a running Ceph
> cluster!
> +
> +Limitations
> +-------------
> +Running instances from RBD has a couple of limitations:
> +
> +* An additional NFS Primary Storage pool is required for running System
> VM's
> +* Snapshotting RBD volumes is not possible (at this moment)
> +* Only one monitor can be configured
> +
> +Add Hypervisor
> +-------------
> +Please follow the official CloudStack documentation how to do this.
> +
> +There is no special way of adding a Hypervisor when using RBD, nor is any
> configuration needed on the hypervisor.
> +
> +Add RBD Primary Storage
> +-------------
> +Once the hypervisor has been added, log on to the CloudStack UI.
> +
> +* Infrastructure
> +* Primary Storage
> +* "Add Primary Storage"
> +* Select "Protocol" RBD
> +* Fill in your cluster information (cephx is supported)
> +* Optionally add the tag 'rbd'
> +
> +Now you should be able to deploy instances on RBD.
> +
> +RBD Disk Offering
> +-------------
> +Create a special "Disk Offering" which needs to match the tag 'rbd' so
> you can make sure the StoragePoolAllocator
> +chooses the RBD pool when searching for a suiteable storage pool.
> +
> +Since there is also a NFS storage pool it's possible that instances get
> deployed on NFS instead of RBD.
> diff --git a/doc/rbd/rbd.rst b/doc/rbd/rbd.rst
> index af1682f..6fd1999 100644
> --- a/doc/rbd/rbd.rst
> +++ b/doc/rbd/rbd.rst
> @@ -31,7 +31,7 @@ the Ceph FS filesystem, and RADOS block devices
> simultaneously.
>         QEMU and RBD <qemu-rbd>
>         libvirt <libvirt>
>         RBD and OpenStack <rbd-openstack>
> -
> +       RBD and CloudStack <rbd-cloudstack>
>
>
>
> --
> 1.7.9.5
>
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

  parent reply	other threads:[~2012-09-05 15:21 UTC|newest]

Thread overview: 10+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2012-08-08 21:51 [PATCH] docs: Add CloudStack documentation Wido den Hollander
2012-09-04 23:49 ` Sage Weil
2012-09-05 15:21 ` Calvin Morrow [this message]
2012-09-05 15:28   ` Wido den Hollander
2012-09-05 15:42     ` Sage Weil
2012-09-05 16:14     ` Tommi Virtanen
2012-09-05 17:05       ` Wido den Hollander
2012-09-05 18:22         ` Tommi Virtanen
2012-09-05 22:39           ` Wido den Hollander
2012-09-05 22:52             ` Tommi Virtanen

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to='CADxhoDR-3guBfFXc_QSmvPthVp5R=vS9tB2zpn-WRaLCSAu=ZA@mail.gmail.com' \
    --to=calvin.morrow@gmail.com \
    --cc=ceph-devel@vger.kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.