* Re: [SPDK] Ceph/Bluestore SPDK based backend?
@ 2017-02-07 22:37 Tobias Oberstein
  0 siblings, 0 replies; 9+ messages in thread
From: Tobias Oberstein @ 2017-02-07 22:37 UTC (permalink / raw)
  To: spdk

>> But what I couldn't find in above or on the net: is there a SPDK
>> backed
>> implementation of this new Bluestore OSD block device abstraction?
>>
>> Do you have a link for me? I really tried to find it ..
>
> Here is a link to the actual code:
>
> https://github.com/ceph/ceph/blob/master/src/os/bluestore/NVMEDevice.cc

Ahh, thanks! That's a conclusive answer ;) So it's definitely real.

Seems like the author of the code is the same as that of the presentation I 
stumbled over.

>>> The impact to performance of Ceph was somewhat limited however.
>>> There are bottlenecks in the Ceph OSD.
>>
>> Ok=( Any public avail info on that?
>
> I don't have the actual numbers on hand, but it was a small improvement
> only. I'm speculating, but I can think of a number of problems in the
> above implementation that will limit performance. The biggest problem

Thanks for speculating! This is highly interesting.

> is that Ceph still relies on buffered I/O in a number of cases, but the
> SPDK implementation doesn't do any caching. Caching is of course the
> single most important aspect of storage performance. The above

I see. And then there is the question of where to cache (e.g. in the Ceph 
client or on the OSD side).

I am wondering how the SPDK iSCSI target approaches this. Does it 
contain its own user-space block-level caching?

> implementation also copies memory for every read and write into DMA-
> able buffers because Ceph doesn't allocate buffers from DMA-able memory

Oops.

How do I allocate DMA-able memory using SPDK/DPDK?

These don't seem to mention DMA:

http://dpdk.org/doc/api/rte__malloc_8h.html
http://dpdk.org/doc/api/rte__mempool_8h.html

> by default. To fix that, Ceph would need to either make its memory
> manager pluggable as well, or just use SPDK/DPDK throughout for all
> data buffer allocations. Third, Ceph still does some blocking I/O in
> certain cases, and blocking I/O with SPDK, given there is no caching,
> is probably slower than the kernel.
>
>>
>> In general: having a SPDK+DPDK backed implementation of Ceph/OSD
>> seems
>> highly desirable with potentially big impact .. not?
>
> I think there is room to make it far faster than it is today using
> SPDK/DPDK, but it would take a much more dramatic set of changes to the

FWIW, I do think making Ceph block storage really fast would be a game 
changer. I recently benchmarked a data-warehouse box sitting on 8 Intel 
NVMes at 9.5 million random-read IOPS - and I'd love to take that level 
of performance over to a solution that scales out, i.e. Ceph/RBD.

Using NVMe-oF and mdraid to combine the drives into one block device on 
the data-warehouse host is the second route I will probably have a 
chance to investigate - but this has a smell of "piecing together", with 
lots of potential for things going wild when NVMe-oF targets 
disappear/reappear, whereas Ceph/RBD was designed with that in mind.

> structure of the OSD to actually realize the benefit. The whole OSD
> would probably need to be rewritten to do one thread per core with
> message passing and entirely asynchronous network and storage stacks.
> That's effectively a brand new OSD.

I see. The bar indeed seems quite high.

The lack (if I haven't missed it) of a well-defined and documented _wire 
protocol_ for talking to OSDs makes a complete rewrite, or an 
alternative complete OSD implementation, even more unlikely.

Thanks a lot for your detailed and informative response! This really 
helps me map out the options and perspectives for the above-mentioned 
data-warehouse user.

Cheers,
/Tobias





* Re: [SPDK] Ceph/Bluestore SPDK based backend?
@ 2017-02-08 10:42 Andrey Kuzmin
  0 siblings, 0 replies; 9+ messages in thread
From: Andrey Kuzmin @ 2017-02-08 10:42 UTC (permalink / raw)
  To: spdk

On Feb 7, 2017 21:54, "Walker, Benjamin" <benjamin.walker(a)intel.com> wrote:

On Tue, 2017-02-07 at 19:20 +0100, Tobias Oberstein wrote:
> Hi Nate,
>
> Am 07.02.2017 um 14:03 schrieb Marushak, Nathan:
> > Hi Tobias,
> >
> > There has been some work done in Bluestore for this. If you search
> > "SPDK Bluestore" or something similar you'll see some links.
>
> I was trying to find conclusive info on the net before - with no
> definite result though, eg, after reading (collegue of yours):
>
> Accelerate Ceph via SPDK
>
> http://7xweck.com1.z0.glb.clouddn.com/cephdaybeijing201608/04-SPDK%E5
> %8A%A0%E9%80%9FCeph-
> XSKY%20Bluestore%E6%A1%88%E4%BE%8B%E5%88%86%E4%BA%AB-
> %E6%89%AC%E5%AD%90%E5%A4%9C-%E7%8E%8B%E8%B1%AA%E8%BF%88.pdf
>
> My understanding is:
>
> Bluestore seems to introduce a proper block device abstraction
> within
> the Ceph OSD implementation.
>
> And this new OSD internal block device abstraction is implemented
> for
> one, over regular Linux block devices (already a step forward from
> being
> forced to shuffle everything through a filesystem).

Correct - Bluestore is a highly simplified user space filesystem.

>
> But what I couldn't find in above or on the net: is there a SPDK
> backed
> implementation of this new Bluestore OSD block device abstraction?
>
> Do you have a link for me? I really tried to find it ..

Here is a link to the actual code:

https://github.com/ceph/ceph/blob/master/src/os/bluestore/NVMEDevice.cc

This was not implemented by the SPDK team and I don't know what state
it is in, but it is definitely there.

>
> > The impact to performance of Ceph was somewhat limited however.
> > There are bottlenecks in the Ceph OSD.
>
> Ok=( Any public avail info on that?

I don't have the actual numbers on hand, but it was a small improvement
only. I'm speculating, but I can think of a number of problems in the
above implementation that will limit performance. The biggest problem
is that Ceph still relies on buffered I/O in a number of cases, but the
SPDK implementation doesn't do any caching. Caching is of course the
single most important aspect of storage performance. The above
implementation also copies memory for every read and write into DMA-
able buffers because Ceph doesn't allocate buffers from DMA-able memory
by default. To fix that, Ceph would need to either make its memory
manager pluggable as well, or just use SPDK/DPDK throughout for all
data buffer allocations. Third, Ceph still does some blocking I/O in
certain cases, and blocking I/O with SPDK, given there is no caching,
is probably slower than the kernel.

>
> In general: having a SPDK+DPDK backed implementation of Ceph/OSD
> seems
> highly desirable with potentially big impact .. not?

I think there is room to make it far faster than it is today using
SPDK/DPDK, but it would take a much more dramatic set of changes to the
structure of the OSD to actually realize the benefit. The whole OSD
would probably need to be rewritten to do one thread per core with
message passing and entirely asynchronous network and storage stacks.
That's effectively a brand new OSD.


Are there any plans to implement an object storage device (SCSI OSD, not
Ceph-specific) backend in SPDK? I've noticed the 17.03 release planning talks
about a blob store and a light-weight fs, which sounds pretty much like T10 OSD.

Regards,
Andrey


>
> Thanks for your reply!
> Cheers,
> /Tobias
>
> >
> > Thanks,
> > Nate
> >
> > On Feb 7, 2017, at 5:20 AM, Andrey Kuzmin <andrey.v.kuzmin(a)gmail.co
> > m<mailto:andrey.v.kuzmin(a)gmail.com>> wrote:
> >
> > Not that I know of, and likely because it belongs to Ceph, not
> > SPDK. SPDK goal is to enable applications to utilize NVMe flash
> > more efficiently, not to provide a backend for each and every
> > application out there.
> >
> > Regards,
> > Andrey
> >
> > On Feb 7, 2017 14:03, "Tobias Oberstein" <tobias.oberstein(a)gmail.co
> > m<mailto:tobias.oberstein(a)gmail.com>> wrote:
> > Hi,
> >
> > the 16.2 release added a Ceph RBD block device as a backend for
> > SPDK applications. I am wondering about the inverse?
> >
> > As in: having Ceph RBD OSDs use SPDK to use NVMe flash as
> > underlying block storage.
> >
> > There seems to be efforts with Ceph/Bluestore
> >
> > http://www.slideshare.net/sageweil1/bluestore-a-new-faster-storage-
> > backend-for-ceph
> >
> > to allow OSDs use raw block devices as underlying storage (instead
> > of Filestore, which shuffles everything through a filesystem).
> >
> > So put differently: is there a Ceph/Bluestore block device
> > implementation using SPDK?
> >
> > Cheers,
> > /Tobias
> > _______________________________________________
> > SPDK mailing list
> > SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
> > https://lists.01.org/mailman/listinfo/spdk
> > _______________________________________________
> > SPDK mailing list
> > SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
> > https://lists.01.org/mailman/listinfo/spdk
> >
> >
> >
> > _______________________________________________
> > SPDK mailing list
> > SPDK(a)lists.01.org
> > https://lists.01.org/mailman/listinfo/spdk
> >
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://lists.01.org/mailman/listinfo/spdk


* Re: [SPDK] Ceph/Bluestore SPDK based backend?
@ 2017-02-07 18:54 Walker, Benjamin
  0 siblings, 0 replies; 9+ messages in thread
From: Walker, Benjamin @ 2017-02-07 18:54 UTC (permalink / raw)
  To: spdk

On Tue, 2017-02-07 at 19:20 +0100, Tobias Oberstein wrote:
> Hi Nate,
> 
> Am 07.02.2017 um 14:03 schrieb Marushak, Nathan:
> > Hi Tobias,
> > 
> > There has been some work done in Bluestore for this. If you search
> > "SPDK Bluestore" or something similar you'll see some links.
> 
> I was trying to find conclusive info on the net before - with no 
> definite result though, eg, after reading (collegue of yours):
> 
> Accelerate Ceph via SPDK
> 
> http://7xweck.com1.z0.glb.clouddn.com/cephdaybeijing201608/04-SPDK%E5
> %8A%A0%E9%80%9FCeph-
> XSKY%20Bluestore%E6%A1%88%E4%BE%8B%E5%88%86%E4%BA%AB-
> %E6%89%AC%E5%AD%90%E5%A4%9C-%E7%8E%8B%E8%B1%AA%E8%BF%88.pdf
> 
> My understanding is:
> 
> Bluestore seems to introduce a proper block device abstraction
> within 
> the Ceph OSD implementation.
> 
> And this new OSD internal block device abstraction is implemented
> for 
> one, over regular Linux block devices (already a step forward from
> being 
> forced to shuffle everything through a filesystem).

Correct - Bluestore is a highly simplified user space filesystem.

> 
> But what I couldn't find in above or on the net: is there a SPDK
> backed 
> implementation of this new Bluestore OSD block device abstraction?
> 
> Do you have a link for me? I really tried to find it ..

Here is a link to the actual code:

https://github.com/ceph/ceph/blob/master/src/os/bluestore/NVMEDevice.cc

This was not implemented by the SPDK team and I don't know what state
it is in, but it is definitely there.

> 
> > The impact to performance of Ceph was somewhat limited however.
> > There are bottlenecks in the Ceph OSD.
> 
> Ok=( Any public avail info on that?

I don't have the actual numbers on hand, but it was a small improvement
only. I'm speculating, but I can think of a number of problems in the
above implementation that will limit performance. The biggest problem
is that Ceph still relies on buffered I/O in a number of cases, but the
SPDK implementation doesn't do any caching. Caching is of course the
single most important aspect of storage performance. The above
implementation also copies memory for every read and write into DMA-
able buffers because Ceph doesn't allocate buffers from DMA-able memory
by default. To fix that, Ceph would need to either make its memory
manager pluggable as well, or just use SPDK/DPDK throughout for all
data buffer allocations. Third, Ceph still does some blocking I/O in
certain cases, and blocking I/O with SPDK, given there is no caching,
is probably slower than the kernel.

> 
> In general: having a SPDK+DPDK backed implementation of Ceph/OSD
> seems 
> highly desirable with potentially big impact .. not?

I think there is room to make it far faster than it is today using
SPDK/DPDK, but it would take a much more dramatic set of changes to the
structure of the OSD to actually realize the benefit. The whole OSD
would probably need to be rewritten to do one thread per core with
message passing and entirely asynchronous network and storage stacks.
That's effectively a brand new OSD.

> 
> Thanks for your reply!
> Cheers,
> /Tobias
> 
> > 
> > Thanks,
> > Nate
> > 
> > On Feb 7, 2017, at 5:20 AM, Andrey Kuzmin <andrey.v.kuzmin(a)gmail.co
> > m<mailto:andrey.v.kuzmin(a)gmail.com>> wrote:
> > 
> > Not that I know of, and likely because it belongs to Ceph, not
> > SPDK. SPDK goal is to enable applications to utilize NVMe flash
> > more efficiently, not to provide a backend for each and every
> > application out there.
> > 
> > Regards,
> > Andrey
> > 
> > On Feb 7, 2017 14:03, "Tobias Oberstein" <tobias.oberstein(a)gmail.co
> > m<mailto:tobias.oberstein(a)gmail.com>> wrote:
> > Hi,
> > 
> > the 16.2 release added a Ceph RBD block device as a backend for
> > SPDK applications. I am wondering about the inverse?
> > 
> > As in: having Ceph RBD OSDs use SPDK to use NVMe flash as
> > underlying block storage.
> > 
> > There seems to be efforts with Ceph/Bluestore
> > 
> > http://www.slideshare.net/sageweil1/bluestore-a-new-faster-storage-
> > backend-for-ceph
> > 
> > to allow OSDs use raw block devices as underlying storage (instead
> > of Filestore, which shuffles everything through a filesystem).
> > 
> > So put differently: is there a Ceph/Bluestore block device
> > implementation using SPDK?
> > 
> > Cheers,
> > /Tobias
> > _______________________________________________
> > SPDK mailing list
> > SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
> > https://lists.01.org/mailman/listinfo/spdk
> > _______________________________________________
> > SPDK mailing list
> > SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
> > https://lists.01.org/mailman/listinfo/spdk
> > 
> > 
> > 
> > _______________________________________________
> > SPDK mailing list
> > SPDK(a)lists.01.org
> > https://lists.01.org/mailman/listinfo/spdk
> > 
> 
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk


* Re: [SPDK] Ceph/Bluestore SPDK based backend?
@ 2017-02-07 18:20 Tobias Oberstein
  0 siblings, 0 replies; 9+ messages in thread
From: Tobias Oberstein @ 2017-02-07 18:20 UTC (permalink / raw)
  To: spdk

Hi Nate,

Am 07.02.2017 um 14:03 schrieb Marushak, Nathan:
> Hi Tobias,
>
> There has been some work done in Bluestore for this. If you search "SPDK Bluestore" or something similar you'll see some links.

I was trying to find conclusive info on the net before - with no 
definite result though, e.g. after reading this (from a colleague of yours):

Accelerate Ceph via SPDK

http://7xweck.com1.z0.glb.clouddn.com/cephdaybeijing201608/04-SPDK%E5%8A%A0%E9%80%9FCeph-XSKY%20Bluestore%E6%A1%88%E4%BE%8B%E5%88%86%E4%BA%AB-%E6%89%AC%E5%AD%90%E5%A4%9C-%E7%8E%8B%E8%B1%AA%E8%BF%88.pdf

My understanding is:

Bluestore seems to introduce a proper block device abstraction within 
the Ceph OSD implementation.

And this new OSD internal block device abstraction is implemented for 
one, over regular Linux block devices (already a step forward from being 
forced to shuffle everything through a filesystem).

But what I couldn't find in above or on the net: is there a SPDK backed 
implementation of this new Bluestore OSD block device abstraction?

Do you have a link for me? I really tried to find it ..

> The impact to performance of Ceph was somewhat limited however. There are bottlenecks in the Ceph OSD.

Ok =( Any publicly available info on that?

In general: having a SPDK+DPDK-backed implementation of the Ceph OSD seems 
highly desirable, with potentially big impact .. no?

Thanks for your reply!
Cheers,
/Tobias

>
> Thanks,
> Nate
>
> On Feb 7, 2017, at 5:20 AM, Andrey Kuzmin <andrey.v.kuzmin(a)gmail.com<mailto:andrey.v.kuzmin(a)gmail.com>> wrote:
>
> Not that I know of, and likely because it belongs to Ceph, not SPDK. SPDK goal is to enable applications to utilize NVMe flash more efficiently, not to provide a backend for each and every application out there.
>
> Regards,
> Andrey
>
> On Feb 7, 2017 14:03, "Tobias Oberstein" <tobias.oberstein(a)gmail.com<mailto:tobias.oberstein(a)gmail.com>> wrote:
> Hi,
>
> the 16.2 release added a Ceph RBD block device as a backend for SPDK applications. I am wondering about the inverse?
>
> As in: having Ceph RBD OSDs use SPDK to use NVMe flash as underlying block storage.
>
> There seems to be efforts with Ceph/Bluestore
>
> http://www.slideshare.net/sageweil1/bluestore-a-new-faster-storage-backend-for-ceph
>
> to allow OSDs use raw block devices as underlying storage (instead of Filestore, which shuffles everything through a filesystem).
>
> So put differently: is there a Ceph/Bluestore block device implementation using SPDK?
>
> Cheers,
> /Tobias
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
> https://lists.01.org/mailman/listinfo/spdk
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>



* Re: [SPDK] Ceph/Bluestore SPDK based backend?
@ 2017-02-07 18:16 Andrey Kuzmin
  0 siblings, 0 replies; 9+ messages in thread
From: Andrey Kuzmin @ 2017-02-07 18:16 UTC (permalink / raw)
  To: spdk

On Feb 7, 2017 21:06, "Tobias Oberstein" <tobias.oberstein(a)gmail.com> wrote:

Hi Andrey,


Am 07.02.2017 um 13:20 schrieb Andrey Kuzmin:

> Not that I know of, and likely because it belongs to Ceph, not SPDK. SPDK
> goal is to enable applications to utilize NVMe flash more efficiently, not
> to provide a backend for each and every application out there.
>

Right. Understood. I was just looking for expert answers, not trying to
imply SPDK should work for the Ceph project of course;)

And I guess the iSCSI target included with SPDK doesn't fall under this
("an app"), because iSCSI has a proper, open wire-protocol definition
(unlike Ceph)?


I'm not sure why SPDK, being NVMe-based, has chosen to provide an iSCSI
target. But in the above context, you're perfectly right to draw a
distinction between an app that wants some API and a target that exposes a
standard interface via a standard wire protocol - that's totally in line
with the SPDK idea as far as I understand it. Note that I'm not a
maintainer, so the SPDK team's mileage may vary ;).

Regards,
A.


Cheers,
/Tobias



> Regards,
> Andrey
>
> On Feb 7, 2017 14:03, "Tobias Oberstein" <tobias.oberstein(a)gmail.com>
> wrote:
>
> Hi,
>>
>> the 16.2 release added a Ceph RBD block device as a backend for SPDK
>> applications. I am wondering about the inverse?
>>
>> As in: having Ceph RBD OSDs use SPDK to use NVMe flash as underlying block
>> storage.
>>
>> There seems to be efforts with Ceph/Bluestore
>>
>> http://www.slideshare.net/sageweil1/bluestore-a-new-faster-
>> storage-backend-for-ceph
>>
>> to allow OSDs use raw block devices as underlying storage (instead of
>> Filestore, which shuffles everything through a filesystem).
>>
>> So put differently: is there a Ceph/Bluestore block device implementation
>> using SPDK?
>>
>> Cheers,
>> /Tobias
>> _______________________________________________
>> SPDK mailing list
>> SPDK(a)lists.01.org
>> https://lists.01.org/mailman/listinfo/spdk
>>
>>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://lists.01.org/mailman/listinfo/spdk


* Re: [SPDK] Ceph/Bluestore SPDK based backend?
@ 2017-02-07 18:06 Tobias Oberstein
  0 siblings, 0 replies; 9+ messages in thread
From: Tobias Oberstein @ 2017-02-07 18:06 UTC (permalink / raw)
  To: spdk

Hi Andrey,

Am 07.02.2017 um 13:20 schrieb Andrey Kuzmin:
> Not that I know of, and likely because it belongs to Ceph, not SPDK. SPDK
> goal is to enable applications to utilize NVMe flash more efficiently, not
> to provide a backend for each and every application out there.

Right. Understood. I was just looking for expert answers, not trying to 
imply SPDK should work for the Ceph project of course;)

And I guess the iSCSI target included with SPDK doesn't fall under this 
("an app"), because iSCSI has a proper, open wire-protocol definition 
(unlike Ceph)?

Cheers,
/Tobias

>
> Regards,
> Andrey
>
> On Feb 7, 2017 14:03, "Tobias Oberstein" <tobias.oberstein(a)gmail.com> wrote:
>
>> Hi,
>>
>> the 16.2 release added a Ceph RBD block device as a backend for SPDK
>> applications. I am wondering about the inverse?
>>
>> As in: having Ceph RBD OSDs use SPDK to use NVMe flash as underlying block
>> storage.
>>
>> There seems to be efforts with Ceph/Bluestore
>>
>> http://www.slideshare.net/sageweil1/bluestore-a-new-faster-
>> storage-backend-for-ceph
>>
>> to allow OSDs use raw block devices as underlying storage (instead of
>> Filestore, which shuffles everything through a filesystem).
>>
>> So put differently: is there a Ceph/Bluestore block device implementation
>> using SPDK?
>>
>> Cheers,
>> /Tobias
>> _______________________________________________
>> SPDK mailing list
>> SPDK(a)lists.01.org
>> https://lists.01.org/mailman/listinfo/spdk
>>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>



* Re: [SPDK] Ceph/Bluestore SPDK based backend?
@ 2017-02-07 13:03 Marushak, Nathan
  0 siblings, 0 replies; 9+ messages in thread
From: Marushak, Nathan @ 2017-02-07 13:03 UTC (permalink / raw)
  To: spdk

Hi Tobias,

There has been some work done in Bluestore for this. If you search "SPDK Bluestore" or something similar you'll see some links. The impact to performance of Ceph was somewhat limited however. There are bottlenecks in the Ceph OSD.

Thanks,
Nate

On Feb 7, 2017, at 5:20 AM, Andrey Kuzmin <andrey.v.kuzmin(a)gmail.com<mailto:andrey.v.kuzmin(a)gmail.com>> wrote:

Not that I know of, and likely because it belongs to Ceph, not SPDK. SPDK goal is to enable applications to utilize NVMe flash more efficiently, not to provide a backend for each and every application out there.

Regards,
Andrey

On Feb 7, 2017 14:03, "Tobias Oberstein" <tobias.oberstein(a)gmail.com<mailto:tobias.oberstein(a)gmail.com>> wrote:
Hi,

the 16.2 release added a Ceph RBD block device as a backend for SPDK applications. I am wondering about the inverse?

As in: having Ceph RBD OSDs use SPDK to use NVMe flash as underlying block storage.

There seems to be efforts with Ceph/Bluestore

http://www.slideshare.net/sageweil1/bluestore-a-new-faster-storage-backend-for-ceph

to allow OSDs use raw block devices as underlying storage (instead of Filestore, which shuffles everything through a filesystem).

So put differently: is there a Ceph/Bluestore block device implementation using SPDK?

Cheers,
/Tobias
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


* Re: [SPDK] Ceph/Bluestore SPDK based backend?
@ 2017-02-07 12:20 Andrey Kuzmin
  0 siblings, 0 replies; 9+ messages in thread
From: Andrey Kuzmin @ 2017-02-07 12:20 UTC (permalink / raw)
  To: spdk

Not that I know of, and likely because it belongs to Ceph, not SPDK. SPDK
goal is to enable applications to utilize NVMe flash more efficiently, not
to provide a backend for each and every application out there.

Regards,
Andrey

On Feb 7, 2017 14:03, "Tobias Oberstein" <tobias.oberstein(a)gmail.com> wrote:

> Hi,
>
> the 16.2 release added a Ceph RBD block device as a backend for SPDK
> applications. I am wondering about the inverse?
>
> As in: having Ceph RBD OSDs use SPDK to use NVMe flash as underlying block
> storage.
>
> There seems to be efforts with Ceph/Bluestore
>
> http://www.slideshare.net/sageweil1/bluestore-a-new-faster-
> storage-backend-for-ceph
>
> to allow OSDs use raw block devices as underlying storage (instead of
> Filestore, which shuffles everything through a filesystem).
>
> So put differently: is there a Ceph/Bluestore block device implementation
> using SPDK?
>
> Cheers,
> /Tobias
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>


* [SPDK] Ceph/Bluestore SPDK based backend?
@ 2017-02-07 11:03 Tobias Oberstein
  0 siblings, 0 replies; 9+ messages in thread
From: Tobias Oberstein @ 2017-02-07 11:03 UTC (permalink / raw)
  To: spdk

Hi,

the 16.2 release added a Ceph RBD block device as a backend for SPDK 
applications. I am wondering about the inverse?

As in: having Ceph RBD OSDs use SPDK to use NVMe flash as underlying 
block storage.

There seems to be efforts with Ceph/Bluestore

http://www.slideshare.net/sageweil1/bluestore-a-new-faster-storage-backend-for-ceph

to allow OSDs to use raw block devices as underlying storage (instead of 
Filestore, which shuffles everything through a filesystem).

So put differently: is there a Ceph/Bluestore block device 
implementation using SPDK?

Cheers,
/Tobias


