From: Alexandre DERUMIER <aderumier@odiso.com>
To: Wido den Hollander <wido@widodh.nl>
Cc: ceph-devel@vger.kernel.org
Subject: Re: rbd 0.48 storage support for kvm proxmox distribution available
Date: Wed, 05 Sep 2012 16:00:35 +0200 (CEST)
Message-ID: <a904f44f-1aa0-45c6-9d5c-62c3a26eff0b@mailpro>
In-Reply-To: <50474704.2000800@widodh.nl>

Thanks for the info.

I didn't document the ceph.conf feature; it's more of a workaround for now for users who want to tune some values.

Indeed, I'm planning to add the cache size options to the Proxmox storage.cfg.
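
For reference, the kind of ceph.conf tuning I mean looks roughly like this (just a sketch; the values are illustrative, and the rbd cache options need librbd 0.46 or later):

[client]
    rbd cache = true
    rbd cache size = 33554432          # 32 MB per-image cache (illustrative)
    rbd cache max dirty = 25165824     # start writeback at this much dirty data
    rbd cache target dirty = 16777216  # try to keep dirty data below this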


Thanks

Alexandre


----- Original message ----- 

From: "Wido den Hollander" <wido@widodh.nl> 
To: "Alexandre DERUMIER" <aderumier@odiso.com> 
Cc: ceph-devel@vger.kernel.org 
Sent: Wednesday, 5 September 2012 14:35:16 
Subject: Re: rbd 0.48 storage support for kvm proxmox distribution available 

On 09/05/2012 11:30 AM, Alexandre DERUMIER wrote: 
>>> Proxmox doesn't use libvirt, does it? 
> Right, we don't use libvirt. 
> 
>>> Any plans to implement the RBD caching? 
> 
> It's already implemented (cache=writeback) in our patched qemu-kvm 1.1 (and qemu-kvm 1.2 is coming in the next few days). 
> 
> Tuning of the cache size can be done with an /etc/ceph/ceph.conf file 
> 

That is kind of dangerous imho, for a couple of reasons. 

For configuring the storage you have /etc/pve/storage.cfg, where you can 
add the RBD pool and configure the monitors and cephx, but for caching you 
rely on librbd reading ceph.conf? 

That is hidden from the user; reading /etc/ceph/ceph.conf happens 
without your knowledge. I'd opt for passing all the options down to Qemu 
and being able to run without a ceph.conf. 
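
Something like this, for example (a sketch; the image name is made up here, 
and extra options in the rbd: filename get passed through to librados as 
config keys): 

kvm -drive file=rbd:rbd/vm-100-disk-1:mon_host=192.168.0.1\:6789:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/mycephcluster.keyring,if=virtio,format=raw,cache=writeback 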

I've run into the same problem with libvirt and CloudStack: I couldn't 
figure out why libvirt was still able to connect to a specific cluster, 
until I found out my ceph.conf was still in place. 

I also thought it was on the roadmap for librbd/librados to stop reading 
/etc/ceph/ceph.conf by default, to take away these kinds of issues. 

And you would also have the weird situation where both ceph.conf and your 
storage.cfg could contain monitor entries; how would that work out? 

I would try not to rely on the ceph.conf at all and have Proxmox pass 
all the configuration options down to Qemu. 

Wido 

> 
> 
> ----- Original message ----- 
> 
> From: "Wido den Hollander" <wido@widodh.nl> 
> To: "Alexandre DERUMIER" <aderumier@odiso.com> 
> Cc: ceph-devel@vger.kernel.org 
> Sent: Wednesday, 5 September 2012 11:11:27 
> Subject: Re: rbd 0.48 storage support for kvm proxmox distribution available 
> 
> On 09/05/2012 06:31 AM, Alexandre DERUMIER wrote: 
>> Hi List, 
>> 
>> We have added rbd 0.48 support to the proxmox 2.1 kvm distribution 
>> http://www.proxmox.com/products/proxmox-ve 
>> 
>> 
>> Proxmox setup: 
>> 
>> Edit /etc/pve/storage.cfg and add the configuration (GUI creation is not available yet): 
>> 
>> rbd: mycephcluster 
>>     monhost 192.168.0.1:6789;192.168.0.2:6789;192.168.0.3:6789 
>>     pool rbd 
>>     username admin 
>>     authsupported cephx;none 
>>     content images 
>> 
> 
> Proxmox doesn't use libvirt, does it? 
> 
> Any plans to implement the RBD caching? 
> 
> Nice work though! 
> 
> Wido 
> 
>> 
>> Then you need to copy the keyring file from the Ceph server to Proxmox: 
>> 
>> scp cephserver1:/etc/ceph/client.admin.keyring /etc/pve/priv/ceph/mycephcluster.keyring 
>> 
>> 
>> 
>> For now, you can add/delete/resize rbd volumes from the GUI. 
>> Snapshots/cloning will be added soon (when layering becomes available). 
>> 
>> 
>> Regards, 
>> 
>> Alexandre Derumier 
