* Ceph + VMWare
@ 2016-10-05 18:32 Patrick McGarry
  2016-10-06 14:01 ` [ceph-users] " Alex Gorbachev
       [not found] ` <CAAZbbf1HGbZVbB1m5vPd+afWb0pZz4haQhua4FcQa-31FPg=0g-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 2 replies; 8+ messages in thread
From: Patrick McGarry @ 2016-10-05 18:32 UTC (permalink / raw)
  To: Ceph-User, Ceph Devel

Hey guys,

Starting to buckle down a bit in looking at how we can better set up
Ceph for VMWare integration, but I need a little info/help from you
folks.

If you currently are using Ceph+VMWare, or are exploring the option,
I'd like some simple info from you:

1) Company
2) Current deployment size
3) Expected deployment growth
4) Integration method (or desired method) ex: iscsi, native, etc

Just casting the net so we know who is interested and might want to
help us shape and/or test things in the future if we can make it
better. Thanks.


-- 

Best Regards,

Patrick McGarry
Director Ceph Community || Red Hat
http://ceph.com  ||  http://community.redhat.com
@scuttlemonkey || @ceph

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Ceph + VMWare
       [not found] ` <CAAZbbf1HGbZVbB1m5vPd+afWb0pZz4haQhua4FcQa-31FPg=0g-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2016-10-06  6:13   ` Daniel Schwager
  2016-10-07 22:39   ` Jake Young
  2016-10-11 15:13   ` Frédéric Nass
  2 siblings, 0 replies; 8+ messages in thread
From: Daniel Schwager @ 2016-10-06  6:13 UTC (permalink / raw)
  To: 'Patrick McGarry', 'Ceph-User', 'Ceph Devel'


[-- Attachment #1.1: Type: text/plain, Size: 2085 bytes --]

Hi all,

we are using Ceph (Jewel 10.2.2, 10 GBit Ceph frontend/backend, 3 nodes, each with 8 OSDs and 2 journal SSDs) 
in our VMware environment, especially for test environments and templates - but currently 
not for production machines (because of missing FC redundancy & performance).

On our Linux-based SCST 4 GBit Fibre Channel proxy, 16 ceph-rbd devices (non-caching, 10 TB in total) 
form a striped LVM volume, which is published as an FC target to our VMware cluster. 
It looks fine and runs stably. But currently the proxy is not redundant (only one head).
Performance is OK (a), but not as good as our IBM Storwize 3700 SAN (16 HDDs).
Especially for small IOs (4k), the IBM is twice as fast as Ceph. 
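
For reference, a minimal sketch (not our exact tooling) of how such a set of images could be
created with the python-rbd bindings, before being mapped with 'rbd map', striped into an LVM
volume and exported via SCST - those later steps are not shown, and the pool/image names here
are placeholders:

    # Sketch: create 16 equally sized RBD images to use as LVM physical volumes.
    # Assumes python-rados / python-rbd are installed; pool and image names are examples.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')            # example pool name
        try:
            size_bytes = (10 * 1024**4) // 16        # ~10 TB total, spread over 16 images
            for i in range(16):
                rbd.RBD().create(ioctx, 'fc-proxy-vol%02d' % i, size_bytes)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()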

Native Ceph integration with VMware would be great (-:

Best regards
Daniel

(a) Atto Benchmark screenshots - IBM Storwize 3700 vs. Ceph
https://dtnet.storage.dtnetcloud.com/d/684b330eea/

-------------------------------------------------------------------
DT Netsolution GmbH   -   Taläckerstr. 30    -    D-70437 Stuttgart
Geschäftsführer: Daniel Schwager, Stefan Hörz - HRB Stuttgart 19870
Tel: +49-711-849910-32, Fax: -932 - Mailto:daniel.schwager-keXbdk0DRdY@public.gmane.org

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org] On Behalf Of Patrick McGarry
> Sent: Wednesday, October 05, 2016 8:33 PM
> To: Ceph-User; Ceph Devel
> Subject: [ceph-users] Ceph + VMWare
> 
> Hey guys,
> 
> Starting to buckle down a bit in looking at how we can better set up
> Ceph for VMWare integration, but I need a little info/help from you
> folks.
> 
> If you currently are using Ceph+VMWare, or are exploring the option,
> I'd like some simple info from you:
> 
> 1) Company
> 2) Current deployment size
> 3) Expected deployment growth
> 4) Integration method (or desired method) ex: iscsi, native, etc
> 
> Just casting the net so we know who is interested and might want to
> help us shape and/or test things in the future if we can make it
> better. Thanks.
> 

[-- Attachment #1.2: smime.p7s --]
[-- Type: application/pkcs7-signature, Size: 4000 bytes --]

[-- Attachment #2: Type: text/plain, Size: 178 bytes --]

_______________________________________________
ceph-users mailing list
ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [ceph-users] Ceph + VMWare
  2016-10-05 18:32 Ceph + VMWare Patrick McGarry
@ 2016-10-06 14:01 ` Alex Gorbachev
       [not found]   ` <CADb9453mfXcgxMuOMvbz12XDB02bbzuaV-Jtoiqd04n7XuSzJg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
       [not found] ` <CAAZbbf1HGbZVbB1m5vPd+afWb0pZz4haQhua4FcQa-31FPg=0g-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  1 sibling, 1 reply; 8+ messages in thread
From: Alex Gorbachev @ 2016-10-06 14:01 UTC (permalink / raw)
  To: Patrick McGarry; +Cc: Ceph-User, Ceph Devel

On Wed, Oct 5, 2016 at 2:32 PM, Patrick McGarry <pmcgarry@redhat.com> wrote:
> Hey guys,
>
> Starting to buckle down a bit in looking at how we can better set up
> Ceph for VMWare integration, but I need a little info/help from you
> folks.
>
> If you currently are using Ceph+VMWare, or are exploring the option,
> I'd like some simple info from you:
>
> 1) Company
> 2) Current deployment size
> 3) Expected deployment growth
> 4) Integration method (or desired method) ex: iscsi, native, etc
>
> Just casting the net so we know who is interested and might want to
> help us shape and/or test things in the future if we can make it
> better. Thanks.
>

Hi Patrick,

We have Storcium certified with VMWare, and we use it ourselves:

Ceph Hammer latest

SCST redundant, Pacemaker-based delivery front ends - our agents are
published on GitHub

EnhanceIO for read caching at the delivery layer

NFS v3, iSCSI, and FC delivery

The deployment we use ourselves is 700 TB raw.

Challenges are as others have described, but HA and multi-host access work
fine courtesy of SCST.  Write amplification is a challenge on spinning
disks.

Happy to share more.

Alex

>
> --
>
> Best Regards,
>
> Patrick McGarry
> Director Ceph Community || Red Hat
> http://ceph.com  ||  http://community.redhat.com
> @scuttlemonkey || @ceph
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Ceph + VMWare
       [not found] ` <CAAZbbf1HGbZVbB1m5vPd+afWb0pZz4haQhua4FcQa-31FPg=0g-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  2016-10-06  6:13   ` Daniel Schwager
@ 2016-10-07 22:39   ` Jake Young
  2016-10-11 15:13   ` Frédéric Nass
  2 siblings, 0 replies; 8+ messages in thread
From: Jake Young @ 2016-10-07 22:39 UTC (permalink / raw)
  To: Patrick McGarry; +Cc: Ceph Devel, Ceph-User


[-- Attachment #1.1: Type: text/plain, Size: 2776 bytes --]

Hey Patrick,

I work for Cisco.

We have a 200 TB cluster (108 OSDs on 12 OSD nodes) and use the cluster for
both OpenStack and VMware deployments.

We are using iSCSI now, but it really would be much better if VMware did
support RBD natively.

We present a 1-2 TB volume that is shared between 4-8 ESXi hosts.

I have been looking for an optimal solution for a few years now, and I have
finally found something that works pretty well:

We are installing FreeNAS on a KVM hypervisor and passing through RBD
volumes as disks on a SCSI bus. We are able to add volumes dynamically (no
need to reboot FreeNAS to recognize new drives). In FreeNAS, we are
passing the disks through directly as iSCSI targets; we are not putting the
disks into a ZFS volume.
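
To illustrate the dynamic attach, a sketch along these lines (using the libvirt Python
bindings; the domain name, monitor host, pool/image name and cephx secret UUID are all
placeholders, not our actual values) hot-plugs an RBD-backed disk onto the guest's SCSI
bus without rebooting FreeNAS:

    # Sketch: hot-attach an RBD image to the FreeNAS guest as a disk on its SCSI bus.
    # Domain name, monitor address, pool/image and secret UUID are examples only.
    import libvirt

    disk_xml = """
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source protocol='rbd' name='rbd/vmware-lun05'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <auth username='libvirt'>
        <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
      </auth>
      <target dev='sdf' bus='scsi'/>
    </disk>
    """

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('freenas-gw01')
    # Attach to the running guest and persist the change in the domain definition.
    dom.attachDeviceFlags(disk_xml,
                          libvirt.VIR_DOMAIN_AFFECT_LIVE |
                          libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    conn.close()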

The biggest benefit of this is that VMware really likes the FreeBSD target
and all the VAAI stuff works reliably. We also get the benefit of the
stability of RBD in the QEMU client.

My next step is to create a redundant KVM host with a redundant FreeNAS VM
and see how iscsi multipath works with the ESXi hosts.

We have tried many different things and have run into all the same issues
as others have posted on this list. The general theme seems to be that most
(all?) Linux iSCSI target software and Linux NFS solutions are not very
good. The BSD OSes (FreeBSD, Solaris derivatives, etc.) do these things a
lot better, but typically lack Ceph support and have poor HW
compatibility (compared to Linux).

Our goal has always been to replace FC SAN with something comparable in
performance, reliability and redundancy.

Again, the best thing in the world would be for ESXi to mount RBD volumes
natively using librbd. I'm not sure if VMware is interested in this, though.

Jake


On Wednesday, October 5, 2016, Patrick McGarry <pmcgarry-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:

> Hey guys,
>
> Starting to buckle down a bit in looking at how we can better set up
> Ceph for VMWare integration, but I need a little info/help from you
> folks.
>
> If you currently are using Ceph+VMWare, or are exploring the option,
> I'd like some simple info from you:
>
> 1) Company
> 2) Current deployment size
> 3) Expected deployment growth
> 4) Integration method (or desired method) ex: iscsi, native, etc
>
> Just casting the net so we know who is interested and might want to
> help us shape and/or test things in the future if we can make it
> better. Thanks.
>
>
> --
>
> Best Regards,
>
> Patrick McGarry
> Director Ceph Community || Red Hat
> http://ceph.com  ||  http://community.redhat.com
> @scuttlemonkey || @ceph
> _______________________________________________
> ceph-users mailing list
> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

[-- Attachment #1.2: Type: text/html, Size: 3743 bytes --]

[-- Attachment #2: Type: text/plain, Size: 178 bytes --]

_______________________________________________
ceph-users mailing list
ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Ceph + VMWare
       [not found] ` <CAAZbbf1HGbZVbB1m5vPd+afWb0pZz4haQhua4FcQa-31FPg=0g-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  2016-10-06  6:13   ` Daniel Schwager
  2016-10-07 22:39   ` Jake Young
@ 2016-10-11 15:13   ` Frédéric Nass
  2 siblings, 0 replies; 8+ messages in thread
From: Frédéric Nass @ 2016-10-11 15:13 UTC (permalink / raw)
  To: Patrick McGarry, Ceph-User, Ceph Devel

Hi Patrick,

1) Université de Lorraine (7,000 researchers and staff members, 60,000 
students, 42 schools and education structures, 60 research labs).

2) RHCS cluster: 144 OSDs on 12 nodes for 520 TB of raw capacity.
     VMware clusters: 7 VMware clusters (40 ESXi hosts). The first need is 
to provide capacity-oriented storage (Ceph) to VMs running in a VMware vRA 
IaaS cluster (6 ESXi hosts).

3) Deployment growth?
     RHCS cluster: The initial need was 750 TB of usable storage, so a 4x 
growth is expected in the next 3 years, reaching 1 PB of usable storage.
     VMware clusters: We just started offering an IaaS service to 
research laboratories and education structures within our university.
     We can expect to host several hundred VMs in the next 2 years 
(~600-800).

4) Integration method? Clearly native.
     I spent some of the last 6 months building an HA gateway 
cluster (iSCSI and NFS) to provide RHCS Ceph storage to our VMware IaaS 
cluster. Here are my findings:

     * iSCSI ?

     Gives better performance than NFS, we know that. BUT we cannot go 
into production with iSCSI because ESXi hosts enter a never-ending 
iSCSI 'Abort Task' loop when the Ceph cluster fails to acknowledge a 4 MB 
IO in less than 5s, resulting in VMs crashing. I've been told by a 
VMware engineer that this 5s limit cannot be raised, as it's hardcoded in 
the ESXi iSCSI software initiator.
     Why would an IO take more than 5s? Heavy load on the Ceph cluster, 
a Ceph failure scenario (network isolation, OSD crash), deep-scrubbing 
competing with client IOs, or any combination of these - plus cases I 
haven't thought of...
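
     To put a number on that, a small probe like the one below (a sketch only; it assumes 
python-rbd is installed and uses a dedicated test image name of my own) can be left running 
during deep-scrub or failure tests to see how close synchronous 4 MB writes get to that 5s 
limit:

    # Sketch: measure worst-case latency of synchronous 4 MB RBD writes against
    # the ~5s abort threshold of the ESXi iSCSI software initiator.
    import time
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')            # example pool
    image = rbd.Image(ioctx, 'latency-probe')    # pre-created test image of at least 4 GB

    chunk = b'\0' * (4 * 1024 * 1024)            # 4 MB, like the IOs that trigger the aborts
    worst = 0.0
    for i in range(1000):
        t0 = time.time()
        image.write(chunk, i * len(chunk))
        image.flush()                            # wait until the cluster has ACKed the data
        elapsed = time.time() - t0
        worst = max(worst, elapsed)
        if elapsed > 5.0:
            print("IO %d took %.1fs - the ESXi initiator would have aborted it" % (i, elapsed))
    print("Worst observed write latency: %.2fs" % worst)

    image.close()
    ioctx.close()
    cluster.shutdown()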

     What I have tested:
     An iSCSI Active/Active HA cluster. Each ESXi host sees the same datastore 
through both targets but only accesses one datastore at a time through a 
statically defined preferred path.
     3 ESXi hosts work on one target, 3 on the other. If a target 
goes down, the other paths are used.

     - LIO iSCSI targets with kernel RBD mapping (no cache). VAAI 
methods. Easy to configure (a minimal target-configuration sketch follows 
this list). Delivers good performance with eager-zeroed virtual disks. The 
'Abort Task' loop makes the ESXi hosts disconnect from the vCenter Server.
     Restarting the targets gets them back in, but some VMs certainly crashed.
     - FreeBSD / FreeNAS running in KVM (on top of CentOS), mapping RBD 
images through librbd. Found that the fileio backstore was used. Found it 
hard to make it HA with the librbd cache. And still the 'Abort Task' loop...
     - SCST ESOS targets with kernel RBD mapping (no cache). VAAI 
methods, ALUA. Easy to configure too. 'Abort Task' still happens, but the 
ESXi host does not get disconnected from the vCenter Server. Still, the 
targets have to be restarted to fix the situation.
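
     For the LIO variant, the target side is only a few lines with the rtslib-fb Python 
bindings that targetcli is built on. This is a minimal sketch, assuming the image is already 
mapped by the kernel as /dev/rbd0; the IQNs, portal address and object names are placeholders:

    # Sketch: expose a kernel-mapped RBD device (/dev/rbd0) as an iSCSI LUN with LIO.
    # Requires python-rtslib-fb; IQNs, portal address and names are examples only.
    from rtslib_fb import (BlockStorageObject, FabricModule, Target, TPG,
                           NetworkPortal, LUN, NodeACL, MappedLUN)

    so = BlockStorageObject("rbd0", dev="/dev/rbd0")       # block (not fileio) backstore

    iscsi = FabricModule("iscsi")
    target = Target(iscsi, "iqn.2016-10.org.example:rbd-gw1")
    tpg = TPG(target, 1)
    tpg.enable = True
    NetworkPortal(tpg, "192.0.2.10", 3260)                 # gateway portal IP

    lun = LUN(tpg, 0, so)
    acl = NodeACL(tpg, "iqn.1998-01.com.vmware:esxi-host-01")
    MappedLUN(acl, 0, lun)                                 # make LUN 0 visible to that ESXi host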

     * NFS ?

     Gives lower performance than iSCSI, we know that too. BUT it's 
probably the best option right now. It's very easy to make it HA with 
Pacemaker/Corosync, as VMware doesn't make use of the NFS lock manager. 
Here is a good start: 
https://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/
     We're still benchmarking IOPS to decide whether we can go into 
production with this infrastructure, but we're already very satisfied 
with the HA mechanism.
     Running synchronous writes on multiple VMs (on virtual disks hosted 
on NFS datastores backed by 'sync' exports of RBD images), while Storage 
vMotioning those disks between NFS RBD datastores and flapping the VIP 
(and thus the NFS exports) from one server to the other at the same time, 
never kills any VM nor makes any datastore unavailable.
     And every Storage vMotion task completes! These are excellent 
results. Note that it's important to run VMware Tools in the VMs, as the 
VMware Tools installation extends the write delay timeout on local iSCSI 
devices.

     What I have tested:
     - NFS exports in async mode sharing RBD images with XFS on top. 
Gives the best performance but, obviously, no one will want to use this 
mode in production.
     - NFS exports in sync mode sharing RBD images with XFS on top. 
Gives mixed performance. We would clearly advertise this type of 
storage as capacity-oriented rather than performance-oriented in our IaaS 
service.
       As VMs cache writes, the IOPS might be good enough for tier 2 or 3 
applications. We would probably be able to increase the number of IOPS 
by using more RBD images and NFS shares (a rough sync-write probe is 
sketched after this list).
     - NFS exports in sync mode sharing RBD images with ZFS (with 
compression) on top. The idea is to provide better performance by 
putting the SLOG (write journal) on fast SSD drives.
       See this real-life (love) story: 
https://virtualexistenz.wordpress.com/2013/02/01/using-zfs-storage-as-vmware-nfs-datastores-a-real-life-love-story/
       Each NFS server has 2 mirrored SSDs (RAID1) and exports 
partitions of this SSD volume through iSCSI.
       Each NFS server is a client of both the local and the remote iSCSI 
targets. The SLOG device is then a ZFS mirror of 2 disks: the local iSCSI 
device and the remote iSCSI device (as vdevs).

       So even if a whole NFS server crashes or is permanently down, the 
ZFS pool can still be imported on the second NFS server.

       First benchmarks show a 4x performance improvement. Further tests 
will help decide whether it's safe or not to go into production with 
this level of complexity.
       Still, as we're using VMware clustered datastores, it's easy to 
go back to classic XFS NFS datastores by putting a ZFS datastore into 
maintenance mode.
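
     Regarding those IOPS numbers, a rough probe like the one below (a sketch; the mount 
point and file name are examples) gives a quick feel for synchronous 4k write IOPS when run 
from an NFS client or from a test VM on the datastore:

    # Sketch: rough synchronous 4k write IOPS probe on an NFS-mounted, RBD-backed export.
    # The mount point and file name are examples only.
    import os
    import time

    path = '/mnt/nfs-rbd-datastore/iops-probe.bin'
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
    block = b'\0' * 4096
    count = 2000
    t0 = time.time()
    for _ in range(count):
        os.write(fd, block)      # with a 'sync' export, each write must reach stable storage
    elapsed = time.time() - t0
    os.close(fd)
    print("%.0f synchronous 4k write IOPS" % (count / elapsed))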

     As for the SUSE Enterprise Storage HA iSCSI targets, I doubt they can do 
any better regarding the 'Abort Task' command, unless they patch the 
Ceph cluster to be able to abort an IO, which I doubt they could.
     From what I gather, with how the ESXi iSCSI software initiator works, 
the Ceph cluster HAS to ACK an IO in less than 5s. Period.

Regards,

Frederic Nass.

PS: Thank you Nick for your help regarding the 'Abort Task' loop. ;-)


Le 05/10/2016 à 20:32, Patrick McGarry a écrit :
> Hey guys,
>
> Starting to buckle down a bit in looking at how we can better set up
> Ceph for VMWare integration, but I need a little info/help from you
> folks.
>
> If you currently are using Ceph+VMWare, or are exploring the option,
> I'd like some simple info from you:
>
> 1) Company
> 2) Current deployment size
> 3) Expected deployment growth
> 4) Integration method (or desired method) ex: iscsi, native, etc
>
> Just casting the net so we know who is interested and might want to
> help us shape and/or test things in the future if we can make it
> better. Thanks.
>
>

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Ceph + VMWare
       [not found]   ` <CADb9453mfXcgxMuOMvbz12XDB02bbzuaV-Jtoiqd04n7XuSzJg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2016-10-18 10:17     ` Frédéric Nass
       [not found]       ` <7668a4fa-ad86-30c8-f211-3969b21c58e4-mHc8WM8rKNgDTEWcYJR2Ng@public.gmane.org>
  2016-10-18 10:19     ` Frédéric Nass
  1 sibling, 1 reply; 8+ messages in thread
From: Frédéric Nass @ 2016-10-18 10:17 UTC (permalink / raw)
  To: Alex Gorbachev; +Cc: Ceph Devel, Ceph-User


[-- Attachment #1.1: Type: text/plain, Size: 2101 bytes --]

Hi Alex,

Just out of curiosity, what kind of backstore are you using within Storcium? 
vdisk_fileio or vdisk_blockio?

I see your agents can handle both: 
http://www.spinics.net/lists/ceph-users/msg27817.html

Regards,

Frédéric.


Le 06/10/2016 à 16:01, Alex Gorbachev a écrit :
> On Wed, Oct 5, 2016 at 2:32 PM, Patrick McGarry <pmcgarry-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
>> Hey guys,
>>
>> Starting to buckle down a bit in looking at how we can better set up
>> Ceph for VMWare integration, but I need a little info/help from you
>> folks.
>>
>> If you currently are using Ceph+VMWare, or are exploring the option,
>> I'd like some simple info from you:
>>
>> 1) Company
>> 2) Current deployment size
>> 3) Expected deployment growth
>> 4) Integration method (or desired method) ex: iscsi, native, etc
>>
>> Just casting the net so we know who is interested and might want to
>> help us shape and/or test things in the future if we can make it
>> better. Thanks.
>>
> Hi Patrick,
>
> We have Storcium certified with VMWare, and we use it ourselves:
>
> Ceph Hammer latest
>
> SCST redundant Pacemaker based delivery front ends - our agents are
> published on github
>
> EnhanceIO for read caching at delivery layer
>
> NFS v3, and iSCSI and FC delivery
>
> Our deployment size we use ourselves is 700 TB raw.
>
> Challenges are as others described, but HA and multi host access works
> fine courtesy of SCST.  Write amplification is a challenge on spinning
> disks.
>
> Happy to share more.
>
> Alex
>
>> --
>>
>> Best Regards,
>>
>> Patrick McGarry
>> Director Ceph Community || Red Hat
>> http://ceph.com  ||  http://community.redhat.com
>> @scuttlemonkey || @ceph
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


[-- Attachment #1.2: Type: text/html, Size: 3503 bytes --]

[-- Attachment #2: Type: text/plain, Size: 178 bytes --]

_______________________________________________
ceph-users mailing list
ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Ceph + VMWare
       [not found]   ` <CADb9453mfXcgxMuOMvbz12XDB02bbzuaV-Jtoiqd04n7XuSzJg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  2016-10-18 10:17     ` Frédéric Nass
@ 2016-10-18 10:19     ` Frédéric Nass
  1 sibling, 0 replies; 8+ messages in thread
From: Frédéric Nass @ 2016-10-18 10:19 UTC (permalink / raw)
  To: Alex Gorbachev, Patrick McGarry; +Cc: Ceph Devel, Ceph-User


Hi Alex,

Just out of curiosity, what kind of backstore are you using within Storcium? 
vdisk_fileio or vdisk_blockio?

I see your agents can handle both: 
http://www.spinics.net/lists/ceph-users/msg27817.html

Regards,

Frédéric.

Le 06/10/2016 à 16:01, Alex Gorbachev a écrit :
> On Wed, Oct 5, 2016 at 2:32 PM, Patrick McGarry <pmcgarry@redhat.com> wrote:
>> Hey guys,
>>
>> Starting to buckle down a bit in looking at how we can better set up
>> Ceph for VMWare integration, but I need a little info/help from you
>> folks.
>>
>> If you currently are using Ceph+VMWare, or are exploring the option,
>> I'd like some simple info from you:
>>
>> 1) Company
>> 2) Current deployment size
>> 3) Expected deployment growth
>> 4) Integration method (or desired method) ex: iscsi, native, etc
>>
>> Just casting the net so we know who is interested and might want to
>> help us shape and/or test things in the future if we can make it
>> better. Thanks.
>>
> Hi Patrick,
>
> We have Storcium certified with VMWare, and we use it ourselves:
>
> Ceph Hammer latest
>
> SCST redundant Pacemaker based delivery front ends - our agents are
> published on github
>
> EnhanceIO for read caching at delivery layer
>
> NFS v3, and iSCSI and FC delivery
>
> Our deployment size we use ourselves is 700 TB raw.
>
> Challenges are as others described, but HA and multi host access works
> fine courtesy of SCST.  Write amplification is a challenge on spinning
> disks.
>
> Happy to share more.
>
> Alex
>
>> --
>>
>> Best Regards,
>>
>> Patrick McGarry
>> Director Ceph Community || Red Hat
>> http://ceph.com  ||  http://community.redhat.com
>> @scuttlemonkey || @ceph
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Ceph + VMWare
       [not found]       ` <7668a4fa-ad86-30c8-f211-3969b21c58e4-mHc8WM8rKNgDTEWcYJR2Ng@public.gmane.org>
@ 2016-10-19  3:10         ` Alex Gorbachev
  0 siblings, 0 replies; 8+ messages in thread
From: Alex Gorbachev @ 2016-10-19  3:10 UTC (permalink / raw)
  To: Frédéric Nass; +Cc: Ceph Devel, Ceph-User


[-- Attachment #1.1: Type: text/plain, Size: 3009 bytes --]

On Tuesday, October 18, 2016, Frédéric Nass <frederic.nass@univ-lorraine.fr>
wrote:

> Hi Alex,
>
> Just to know, what kind of backstore are you using whithin Storcium ? vdisk_fileio
> or vdisk_blockio ?
>
> I see your agents can handle both : http://www.spinics.net/lists/
> ceph-users/msg27817.html
>
Hi Frédéric,

We use all of them, and NFS as well, which has been performing quite well.
vdisk_fileio is a bit dangerous in write-cache mode.  Also, for some
reason, an object size of 16 MB for RBD does better with VMWare.
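
For what it's worth, the 16 MB object size is just the image order chosen at creation time. 
A minimal python-rbd sketch (the pool and image names are examples, not our production 
naming); the same thing can be done with the order/object-size option of 'rbd create':

    # Sketch: create an RBD image with 16 MB objects (order 24, i.e. 2^24 bytes)
    # instead of the default 4 MB (order 22). Pool and image names are examples.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')
    rbd.RBD().create(ioctx, 'vmware-lun01', 2 * 1024**4, order=24)   # 2 TB image
    ioctx.close()
    cluster.shutdown()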

Storcium gives you a choice for each LUN.  The challenge has been figuring
out optimal workloads under highly varied use cases.  I see better results
with NVMe journals and write-combining HBAs, e.g. Areca.

Regards,
Alex

> Regards,
>
> Frédéric.
>
> Le 06/10/2016 à 16:01, Alex Gorbachev a écrit :
>
> On Wed, Oct 5, 2016 at 2:32 PM, Patrick McGarry <pmcgarry-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
>
> Hey guys,
>
> Starting to buckle down a bit in looking at how we can better set up
> Ceph for VMWare integration, but I need a little info/help from you
> folks.
>
> If you currently are using Ceph+VMWare, or are exploring the option,
> I'd like some simple info from you:
>
> 1) Company
> 2) Current deployment size
> 3) Expected deployment growth
> 4) Integration method (or desired method) ex: iscsi, native, etc
>
> Just casting the net so we know who is interested and might want to
> help us shape and/or test things in the future if we can make it
> better. Thanks.
>
>
> Hi Patrick,
>
> We have Storcium certified with VMWare, and we use it ourselves:
>
> Ceph Hammer latest
>
> SCST redundant Pacemaker based delivery front ends - our agents are
> published on github
>
> EnhanceIO for read caching at delivery layer
>
> NFS v3, and iSCSI and FC delivery
>
> Our deployment size we use ourselves is 700 TB raw.
>
> Challenges are as others described, but HA and multi host access works
> fine courtesy of SCST.  Write amplification is a challenge on spinning
> disks.
>
> Happy to share more.
>
> Alex
>
>
> --
>
> Best Regards,
>
> Patrick McGarry
> Director Ceph Community || Red Hat
> http://ceph.com  ||  http://community.redhat.com
> @scuttlemonkey || @ceph
> _______________________________________________
> ceph-users mailing list
> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>
>
>

-- 
Alex Gorbachev
Storcium

[-- Attachment #1.2: Type: text/html, Size: 5057 bytes --]

[-- Attachment #2: Type: text/plain, Size: 178 bytes --]

_______________________________________________
ceph-users mailing list
ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2016-10-19  3:10 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-10-05 18:32 Ceph + VMWare Patrick McGarry
2016-10-06 14:01 ` [ceph-users] " Alex Gorbachev
     [not found]   ` <CADb9453mfXcgxMuOMvbz12XDB02bbzuaV-Jtoiqd04n7XuSzJg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2016-10-18 10:17     ` Frédéric Nass
     [not found]       ` <7668a4fa-ad86-30c8-f211-3969b21c58e4-mHc8WM8rKNgDTEWcYJR2Ng@public.gmane.org>
2016-10-19  3:10         ` Alex Gorbachev
2016-10-18 10:19     ` Frédéric Nass
     [not found] ` <CAAZbbf1HGbZVbB1m5vPd+afWb0pZz4haQhua4FcQa-31FPg=0g-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2016-10-06  6:13   ` Daniel Schwager
2016-10-07 22:39   ` Jake Young
2016-10-11 15:13   ` Frédéric Nass
