* Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
@ 2018-06-17  0:20 Zhipeng Huang
From: Zhipeng Huang @ 2018-06-17  0:20 UTC (permalink / raw)
  To: spdk


Thanks Paul,

Well, there are no extra features per se at the moment, but we will gradually
roll them out. The current SPDK driver code can be found at
https://github.com/openstack/cyborg/blob/master/cyborg/accelerator/drivers/spdk/nvmf/nvmf.py
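
For anyone who does not want to open the link: the driver is essentially a
class that discovers local SPDK targets and reports them as accelerator
records. A minimal sketch (the names here are illustrative assumptions, not
the actual contents of nvmf.py):

    # Hedged sketch only: the class and helper names are assumptions,
    # not the real Cyborg driver API.
    class SpdkNvmfDriverSketch:
        """Discover a local SPDK nvmf_tgt and report it as an accelerator."""

        def _list_subsystems(self):
            # Placeholder: a real driver would query nvmf_tgt (e.g. over
            # its JSON-RPC socket) for the subsystems it exposes.
            return [{"nqn": "nqn.2018-06.io.spdk:cnode1"}]

        def discover(self):
            # Normalize what the target reports into records the Cyborg
            # agent can forward into the Nova/Placement inventory.
            return [{"type": "NVMF_SSD", "nqn": s["nqn"]}
                    for s in self._list_subsystems()]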


The purpose of the discussion is actually quite simple: we suggest adding
Cyborg to the SPDK architecture figure as another project, like Cinder, that
supports SPDK :) I think our work in Cyborg is good for SPDK's ecosystem,
and it would be great to acknowledge that :)

Sorry for the many lengthy emails :)


On Sat, Jun 16, 2018 at 11:15 PM Luse, Paul E <paul.e.luse(a)intel.com> wrote:

> Thanks for the information! Are there currently any Cyborg drivers (is
> that the right term?) that implement NVMeOF? It would be great to take a
> look at one to help clear things up a bit – what I’m still not seeing is
> exactly what “extra capabilities” you mention below that could be
> exposed/exploited via Cyborg & SPDK vs Cinder & SPDK.
>
>
>
> Also, assuming that nothing else is required from the SPDK
> codebase/community, is the purpose of this email chain just to
> inform/educate and help enlist the SPDK community in promoting its use in
> Cyborg?
>
>
>
> Note that I’m on holiday from now until July 2, and starting Monday
> afternoon I will not have access to email until I get back, so if you don’t
> get any kind of response from me, that’s why :) Hopefully others will carry
> the conversation forward, as I know others are interested as well and fully
> appreciate all of Helloway’s interest in SPDK thus far.  I’ll definitely
> touch base when I return, thanks again!
>
>
>
> -Paul
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Zhipeng
> Huang
> *Sent:* Friday, June 15, 2018 6:27 PM
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
>
>
>
> Hi Maciek and Paul,
>
>
>
> I'm the current project lead of Cyborg, so let me try to answer your
> questions; maybe helloway did not explain them well before.
>
>
>
> Cyborg [0] is a general management framework for accelerators (GPU, FPGA,
> ASIC, NVMe/NVMe-oF, ...). We actually worked with Moshe's team in the
> project's early days :). SPDK driver support has already landed in Cyborg
> (although it is still very premature).
>
>
>
> It is important to note that Cyborg does not provide volume management,
> which is the job of Cinder. Cyborg provides management of NVMe/NVMe over
> Fabrics SSDs, leveraging information that can be obtained via the bus (and
> the NVMe/NVMe-oF protocol). A similar comparison can be made with Cyborg's
> management of FPGAs/GPUs. This enables the many great features of the NVMe
> protocol to be exposed via Cyborg to support more granular scheduling if
> the user wants it, instead of the device just being treated as a block
> device no different from a traditional SSD.
>
>
>
> Therefore Cyborg works alongside Nova and Cinder. An example of how Cyborg
> works with Nova on NVMe devices can be found in [1], starting at 27:13,
> which was also proposed by Intel developers. In essence, Cyborg helps Nova
> learn about the extra capabilities an NVMe SSD has compared to a normal SSD
> (i.e., as an accelerator); the Nova scheduler will then select a node with
> such capabilities when the user desires to spawn a VM, and Cinder will just
> do its job of volume management.
>
>
>
> So on this note, Cyborg works with Nova and Cinder; the relationship is
> complementary. Cyborg interacts with Nova through Placement; there is no
> need at the moment for Cyborg to interact with Cinder. Everything would
> still work without Cyborg in the picture, for sure, but the NVMe SSD would
> be seen just as a normal block device by Nova and Cinder, and no advanced
> scheduling could be performed.
>
>
>
> Re Paul's questions:
>
> - A good Cyborg SPDK driver alone is enough to make this work in OpenStack
>
> - There are no additional requirements on the SPDK community itself
>
> - No additional tweaks are needed in SPDK specifically for Cyborg to work
> with it.
>
>
>
> OpenSDS is just another option: since it supports capability-reporting
> functionality, it could gather information from Cyborg to make more
> granular scheduling decisions, unlike Cinder, which does not take input
> from Cyborg and just performs regular volume management. Neither solution
> is inherently better or worse; it depends on the user's requirements.
>
>
>
> I hope this clears things up :)
>
>
>
> [0] https://wiki.openstack.org/wiki/Cyborg
>
> [1] https://www.youtube.com/watch?v=q84Q-6FSXts
>
>
>
>
>
> On Fri, Jun 15, 2018 at 10:28 PM Szwed, Maciej <maciej.szwed(a)intel.com>
> wrote:
>
> Hi Helloway,
>
> I’ve been working on Cinder drivers (volume and target) for SPDK. I’m not
> familiar with Cyborg. Does Cyborg have capabilities to manage volumes? How
> does it interact with Nova – how does it provide storage to compute nodes?
>
>
>
> Thanks,
>
> Maciek
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Luse, Paul
> E
> *Sent:* Thursday, June 14, 2018 5:27 AM
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
>
>
>
> Hi Helloway,
>
>
>
> No problem.  Cinder is the block storage provisioning project for
> OpenStack. A VM needing block storage interacts with Cinder to request what
> it needs, and storage providers that have written providers (aka drivers)
> for Cinder are matched up based on those requests. Here’s the overview from
> the project:
> https://docs.openstack.org/newton/install-guide-ubuntu/common/get-started-block-storage.html
> and this is by far the most common way that I’m aware of to enable a new
> block storage application in OpenStack.
>
>
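> To make the driver idea concrete, here is the rough shape of a Cinder
> volume driver (a minimal sketch assuming a Cinder source tree on the path;
> the class name and method bodies are made up, only the base class and hook
> points are Cinder's):
>
>     # Hedged sketch: runs only inside a Cinder tree; bodies are
>     # placeholders for real backend calls.
>     from cinder.volume import driver
>
>     class SketchBlockDriver(driver.VolumeDriver):
>         def create_volume(self, volume):
>             pass  # carve out backend storage for 'volume'
>
>         def delete_volume(self, volume):
>             pass  # release that storage again
>
>         def get_volume_stats(self, refresh=False):
>             # The capabilities the Cinder scheduler matches requests
>             # against when picking a backend.
>             return {"vendor_name": "sketch", "total_capacity_gb": 100}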
>
> I haven’t worked in that world in a few years, and Cyborg and OpenSDS were
> introduced after I was active, but as far as I know Cinder is still the
> best place to start introducing SPDK based block storage into the
> OpenStack cloud. I do have some OpenSDS contacts, and still some friends
> who work on Cinder.  Let me ask around a little; note that I will be out of
> the office for 2 weeks of “disconnected” vacation after this Friday, but
> I’ll try and get a bit more info before then and get back to ya.
>
>
>
> Anyone else out there feel free to chime in if you have more info on
> Cyborg+OpenSDS vs Cinder as it pertains to this discussion. Once we figure
> that out, the questions below are still very relevant for next steps:
>
>
>
> - what else is required to be pushed into OpenStack for this to work
>
> - is anything required in the SPDK repo for this to work
>
> - how will the necessary SPDK components be associated with the VM in
> question and subsequently configured
>
>
>
> Thanks!
>
> Paul
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *helloway
> *Sent:* Wednesday, June 13, 2018 7:47 PM
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Cc:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
>
>
>
> Hi Paul,
>
>
>
> I am sorry, but I'm not an expert on Cinder; I'll try to tell you what I
> think, and if I am wrong, please correct me. From my perspective, Cinder
> cares more about the capacity of the pool, and its pool-capabilities
> reporting interface is fixed, whereas, in addition to capacity, Cyborg also
> cares about fine-grained accelerator capabilities (e.g. IOPS, queues,
> etc.). These capabilities, reported from Cyborg, can be dynamically
> configured and handled through an OpenSDS profile. For this reason, it
> provides a more flexible and simpler configuration which can be invoked
> conveniently.
>
>
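> For illustration (the field names are assumptions, not the OpenSDS or
> Cyborg schema), the kind of fine-grained record meant here would look
> something like:
>
>     # Hypothetical capability record a Cyborg driver might report,
>     # versus Cinder's fixed, capacity-oriented pool stats.
>     capabilities = {
>         "device_type": "NVMe SSD",
>         "iops_4k_read": 550000,   # fine-grained performance data
>         "num_hw_queues": 128,
>         "nvmeof_capable": True,   # reachable over NVMe-oF
>     }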
>
> Thx,
>
> Helloway
>
>
>
> On 06/13/2018 22:46, Luse, Paul E <paul.e.luse(a)intel.com> wrote:
>
> Hi Helloway,
>
>
>
> That’s a great start, but I still have the same open questions below;
> maybe you can try and address those directly? Also, below is the link for
> adding SPDK based NVMe-oF as a Cinder plug-in.  In addition to the
> questions below, can you please explain for everyone how you see the
> approach of using Cyborg and OpenSDS comparing with the seemingly simpler
> approach of providing a Cinder plug-in?
>
>
>
> https://review.openstack.org/#/c/564229/
>
>
>
> Thanks!!
>
> Paul
>
>
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *helloway
> *Sent:* Tuesday, June 12, 2018 5:22 PM
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Cc:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
>
>
>
> Hi Paul,
>
> Thank you for your response. I have submitted a Trello card [0] titled
> “Integrate OpenStack/Cyborg into SPDK Architecture”. I have tried to answer
> your questions on that card; does it make sense? I really hope to receive
> your feedback.
>
>
>
> [0]
> https://trello.com/c/QfSAkLSS/121-integrate-openstack-cyborg-into-spdk-architecture
>
>
>
>
> Thx,
>
> Helloway
>
> On 06/12/2018 08:16, Luse, Paul E <paul.e.luse(a)intel.com> wrote:
>
> Hi Helloway,
>
>
>
> I was actually just wondering what had happened with this.  Looking at the
> OpenStack patch, it looks like it’s close to landing. Somewhere out there
> we have a Cinder driver that’s also getting fairly close, I believe, so for
> sure integration with OpenStack is interesting to many in the community.
>
>
>
> Would you be able to summarize more specifically how your patch will work
> once it lands? I of course see your high-level description below, but some
> questions that I have (and I assume others do as well) include:
>
>
>
> - what else is required to be pushed into OpenStack for this to work
>
> - is anything required in the SPDK repo for this to work
>
> - how will the necessary SPDK components be associated with the VM in
> question and subsequently configured
>
>
>
> Thanks for continuing to work on this!
>
>
>
> -Paul
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *helloway
> *Sent:* Monday, June 11, 2018 1:35 AM
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
>
>
>
> Hi Jim and all,
>
>
>
> Do you know OpenStack/Cyborg? It is the OpenStack acceleration service,
> which provides a management framework for accelerator devices (e.g. FPGA,
> GPU, NVMe SSD). There is a strong demand for OpenStack to support hardware
> accelerated devices in a dynamic model (mentioned at OpenStack Summit
> Vancouver 2018 [0]).
>
>
>
> For this reason, we can use Cyborg to interact with nvmf_tgt to realize
> the management of user space NVMe SSD accelerator devices, which can
> greatly improve efficiency. It is worth mentioning that the
> Cyborg_SPDK_Driver I submitted has been merged into the OpenStack Queens
> release [1]. The driver can report detailed information about the device
> to the Cyborg agent. When a user requests a VM with a user space NVMe SSD,
> the Cyborg agent will update the Nova/Placement inventory of available
> NVMe devices. This is the complete process connecting Cyborg and SPDK.
>
>
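> As a rough illustration of the inventory-update step (this is not the
> driver's code; the resource class name and helper are assumptions, though
> the REST call shape is the standard Placement one):
>
>     import requests
>
>     def report_nvme_inventory(placement, rp_uuid, token, total, gen):
>         # PUT this node's user space NVMe device count as inventory of
>         # a custom resource class on its resource provider.
>         body = {"resource_provider_generation": gen,
>                 "inventories": {"CUSTOM_NVME_SSD": {"total": total}}}
>         r = requests.put(
>             "%s/resource_providers/%s/inventories" % (placement, rp_uuid),
>             json=body, headers={"X-Auth-Token": token})
>         r.raise_for_status()
>         return r.json()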
>
> I wonder whether you are interested in integrating OpenStack/Cyborg into
> the SPDK architecture. Does this make sense? Please let me know your
> thoughts.
>
>
>
> [0]
> https://www.openstack.org/videos/vancouver-2018/optimized-hpcai-cloud-with-openstack-acceleration-service-and-composable-hardware
>
> [1] https://review.openstack.org/#/c/538164/
>
>
>
>
>
> Thx,
>
> Helloway
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
> --
>
> Zhipeng (Howard) Huang
>
>
>
> Standard Engineer
>
> IT Standard & Patent/IT Product Line
>
> Huawei Technologies Co,. Ltd
>
> Email: huangzhipeng(a)huawei.com
>
> Office: Huawei Industrial Base, Longgang, Shenzhen
>
>
>
> (Previous)
>
> Research Assistant
>
> Mobile Ad-Hoc Network Lab, Calit2
>
> University of California, Irvine
>
> Email: zhipengh(a)uci.edu
>
> Office: Calit2 Building Room 2402
>
>
>
> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>


-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhipeng(a)huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh(a)uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado


* Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
@ 2018-06-21  0:41 Zhipeng Huang
From: Zhipeng Huang @ 2018-06-21  0:41 UTC (permalink / raw)
  To: spdk


Hi Jim,

I think people are trapped in the mindset that Cyborg is trying to do
something Cinder should do. Cinder should do volume management as it does
today, without any problem, and work should be contributed to Cinder to
enhance whatever is needed. The use case for Cyborg depends entirely on the
user and whether they need management of the device itself or not. It is
like how OpenStack Nova gained vGPU support, yet users like CERN still need
Cyborg to manage their GPUs.

This is an option without any imposition on the architecture.

I think this is a simple suggestion of ecosystem collaboration that has
been blown out of proportion. I understand the confusion here, and maybe I
should raise it again later when some concrete use cases are available.

Sorry for the long threads, folks; let's not discuss this topic any
further.

On Wed, Jun 20, 2018 at 11:44 PM Harris, James R <james.r.harris(a)intel.com>
wrote:

>
>
> Hi Ben,
>
>
>
> When Cyborg approaches NVMe devices, we view them as storage accelerators,
> which means users will use them to accelerate the storage part of their
> service. So the latter niche use case you mentioned is definitely covered.
> But it is more general than that: the definition of an accelerator in
> Cyborg's sense depends not upon its functionality but its usage. For
> example, if the user uses an NVMe SSD only for certain highly
> performance-demanding services, the user could view it as an accelerator
> and choose to manage it via Cyborg, alongside Cinder.
>
>
>
> In some practices, vendors produce "accelerator racks" into which they put
> many heterogeneous devices, including NVMe-oF SSDs, remote FPGAs, remote
> GPUs, etc.
>
>
>
> As to what management Cyborg provides that differs from Cinder: since it
> concentrates, again, on acceleration management, in theory Cyborg could be
> used to manage things like the QoS of different streams of an NVMe SSD, or
> performing sanitization as part of life cycle management, none of which is
> in the scope of volume management.
>
>
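> As a concrete example of the QoS point: SPDK can cap per-bdev throughput
> through its JSON-RPC socket, so a management layer could drive it with
> something like the sketch below (the RPC is named bdev_set_qos_limit in
> recent SPDK releases; the socket plumbing around it is my assumption):
>
>     import json, socket
>
>     def set_bdev_qos(sock_path, bdev_name, ios_per_sec):
>         # Ask a running SPDK app to rate-limit one block device.
>         req = {"jsonrpc": "2.0", "id": 1, "method": "bdev_set_qos_limit",
>                "params": {"name": bdev_name,
>                           "rw_ios_per_sec": ios_per_sec}}
>         with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
>             s.connect(sock_path)
>             s.sendall((json.dumps(req) + "\n").encode())
>             return json.loads(s.recv(65536))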
>
> I just think it is a good collaboration between two open source
> communities, with many new possibilities. But it is OK if the SPDK
> community does not consider it a correct usage.
>
>
>
> Hi Zhipeng,
>
>
>
> I have a lot of questions on this similar to Ben’s.
>
>
>
> From an OpenStack perspective, using Cyborg for NVMe SSD access seems to
> complicate storage provisioning.  If a user needs a dedicated NVMe SSD for
> a highly performance-demanding service, shouldn’t Cinder be able to do that
> kind of provisioning?  And if Cinder can’t currently, wouldn’t it be
> better to fix Cinder so that users don’t have to deal with two different
> services for provisioning NVMe SSDs?
>
>
>
> Thanks,
>
>
>
> -Jim
>
>
>
>
>
> On Wed, Jun 20, 2018 at 12:42 AM Walker, Benjamin <
> benjamin.walker(a)intel.com> wrote:
>
> I've been mulling over this discussion for the past few days. I'm certainly
> always supportive of projects that integrate SPDK, and this situation is no
> different. However, I can't wrap my head around exactly how "NVMe" or
> "NVMe-oF" can be thought of as an accelerator. Because of that, I'm not in
> a position to know when to point people at the Cyborg project as opposed to
> Cinder if they're interested in using SPDK with OpenStack. So I'd like to
> gain a deeper understanding here, if you'll all help me do that.
>
> My understanding is that Cyborg is a framework for discovering available
> "accelerators". To me, an accelerator is some code library or specialized
> piece of hardware that performs some computation more quickly than a simple
> algorithm running on the CPU. This could range from FPGAs to GPUs to even
> specialized implementations of algorithms that still run on the CPU, but
> take advantage of instructions or designs that the compilers are unlikely
> to emit from plain old C code (like ISA-L).
>
> To me, traditional NVMe devices do not fall into this category. They don't
> accelerate some computation, but rather are simply a block storage device.
> Further, accelerators are stateless - they take some input and produce some
> output in a repeatable fashion. NVMe devices, in contrast, are stateful -
> their entire purpose is to persistently store state. So what, specifically,
> is Cyborg exposing about the device that Cinder cannot?
>
> Note that there are "NVMe" devices on the market that really are
> accelerators - they contain no persistent storage and are only using NVMe
> as a convenient interface. They act as accelerators by allowing the user to
> write to a certain range of blocks, then read the data back from that
> location and some operation such as encryption or compression will have
> been performed on it. These devices are indeed accelerators and should be
> exposed through Cyborg. If you're only talking specifically about these
> devices (which are a very niche market as of this writing), then I
> understand fully what you're doing here and we can move forward.
>
> Thanks,
> Ben
>


-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhipeng(a)huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh(a)uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 30623 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
@ 2018-06-20 15:43 Harris, James R
  0 siblings, 0 replies; 15+ messages in thread
From: Harris, James R @ 2018-06-20 15:43 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 19138 bytes --]


Hi Ben,

When Cyborg approaches NVMe devices we view it as storage accelerators which means user will use it to accelerate their storage part of the service. So the latter niche use case you mentioned is definitely covered. But it is more general than that, and the definition of the accelerator in Cyborg's sense depends not upon its functionality but it usage. For example if the user uses NVMe SSD only for certain high perf demanding services, the user could view it as an accelerator and choose to manage it via Cyborg, alongside Cinder.

In some practices vendor produces "accelerator racks" and they will put many heterogenous devices in it including NOF SSDs, FPGA (remote), GPU (remote) and etc.

As to what Cyborg provides on management different from Cinder, since it concentrates on , again, acceleration management, in theory Cyborg could be used to provide management on things like QoS of different streams of a NVMe SSD, or performing sanitization as part of the life cycle management. All of which are not in the scope of volume management.

I just think it is a good collaboration between two open source communities of many new possibilities. But it is ok if the SPDK community does not concern it as a correct usage.

Hi Zhipeng,

I have a lot of questions on this similar to Ben’s.

From an OpenStack perspective, using Cyborg for NVMe SSD access seems to complicate storage provisioning.  If a user needs a dedicated NVMe SSD for a high performance demanding service, shouldn’t Cinder be able to do that kind of provisioning?  And if Cinder can’t currently – wouldn’t it be better to fix Cinder so that users don’t have to deal with two different services for provisioning NVMe SSDs?

Thanks,

-Jim


On Wed, Jun 20, 2018 at 12:42 AM Walker, Benjamin <benjamin.walker(a)intel.com<mailto:benjamin.walker(a)intel.com>> wrote:
I've been mulling over this discussion for the past few days. I'm certainly
always supportive of projects that integrate SPDK, and this situation is no
different. However, I can't wrap my head around exactly how "NVMe" or "NVMe-oF"
can be thought of as an accelerator. Because of that, I'm not in a position to
know when to point people at the Cyborg project as opposed to Cinder if they're
interested in using SPDK with OpenStack. So I'd like to gain a deeper
understanding here, if you'll all help me do that.

My understanding is that Cyborg is a framework for discovering available
"accelerators". To me, an accelerator is some code library or specialized piece
of hardware that performs some computation more quickly than a simple algorithm
running on the CPU. This could range from FPGAs to GPUs to even specialized
implementations of algorithms that still run on the CPU, but take advantage of
instructions or designs that the compilers are unlikely to emit from plain old C
code (like ISA-L).

To me, traditional NVMe devices do not fall into this category. They don't
accelerate some computation, but rather are simply a block storage device.
Further, accelerators are stateless - they take some input and produce some
output in a repeatable fashion. NVMe devices, in contrast, are stateful - their
entire purpose is to persistently store state. So what, specifically, is Cyborg
exposing about the device that Cinder cannot?

Note that there are "NVMe" devices on the market that really are accelerators -
they contain no persistent storage and are only using NVMe as a convenient
interface. They act as accelerators by allowing the user to write to a certain
range of blocks, then read the data back from that location and some operation
such as encryption or compression will have been performed on it. These devices
are indeed accelerators and should be exposed through Cyborg. If you're only
talking specifically about these devices (which are a very niche market as of
this writing), then I understand fully what you're doing here and we can move
forward.

Thanks,
Ben

On Sun, 2018-06-17 at 08:20 +0800, Zhipeng Huang wrote:
> Thanks Paul,
>
> Will there is no extra features per se at the moment, but we will gradually
> rolls that out hopefully. The current spdk driver code could be found at https
> ://github.com/openstack/cyborg/blob/master/cyborg/accelerator/drivers/spdk/nvm<http://github.com/openstack/cyborg/blob/master/cyborg/accelerator/drivers/spdk/nvm>
> f/nvmf.py
>
> Well the purpose of the discussion is actually quite simple, we suggest to add
> Cyborg as another project like Cinder which supports spdk in the SPDK
> architecture figure :) I think our work in Cyborg is good for SPDK's ecosystem
> and would be great to acknowledge that :)
>
> Sorry for the many lengthy emails :)
>
>
> On Sat, Jun 16, 2018 at 11:15 PM Luse, Paul E <paul.e.luse(a)intel.com<mailto:paul.e.luse(a)intel.com>> wrote:
> > Thanks for the information! Are there currently any Cyborg drivers (is that
> > the right term?) that implement NVMeOF? It would be great to take a look at
> > one to help clear things up a bit – what I’m still not seeing is exactly
> > what “extra capabilities” you mention below that could be exposed/exploited
> > via Cyborg & SPDK vs Cinder & SPDK.
> >
> > Also, assuming that nothing else is required from the SPDK
> > codebase/community, is the purpose of this email chain just to
> > inform/educate and help enlist the SPDK community in promoting its use in
> > Cyborg?
> >
> > Note that I’m on holiday from now until July 2 and starting Mon afternoon
> > will not have access to email until I get back so when you don’t get any
> > kind of response from me, that’s why J Hopefully others will carry the
> > conversation forward as I know others are interested as well and fully
> > appreciate all of Helloway’s interest in SPDK thus far.  I’ll definitely
> > touch base when I return, thanks again!
> >
> > -Paul
> >
> > From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of Zhipeng Huang
> > Sent: Friday, June 15, 2018 6:27 PM
> > To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
> > Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
> >
> > Hi Maciek and Paul,
> >
> > I'm the current project lead of Cyborg so let me try to answer your
> > questions, maybe hellowway did not explain well before.
> >
> > Cyborg[0] is a general management framework for accelerators (GPU. FPGA,
> > ASIC, NVMe/NOF, ...). We actually worked with Moshe's team in the project's
> > early days :) . The support of spdk driver has already landed in cyborg
> > (although still very premature).
> >
> > It is important to notice that Cyborg does not provide volume management
> > which is the job of Cinder. Cyborg provide management on NVMe/NVMe Over
> > Fabric SSDs, meaning leveraging information that could be attained via the
> > Bus (and the NVMe/NOF protocol). A similar comparison could be found in
> > Cyborg management of FPGA/GPU. This could enable many great features
> > developed in NVMe protocol could be exposed via Cyborg to support more
> > granulized scheduling if the user want, instead of just being treated as a
> > block device without difference to traditional SSDs.
> >
> > Therefore Cyborg is working alongside of Nova and Cinder. An example of how
> > Cyborg work with Nova on NVMe devices could be found in [1] starting 27:13,
> > which is also proposed from Intel developers. In essence, Cyborg helps Nova
> > get knowledge of NVMe SSDs for its various extra capabilities compared to
> > normal SSD (i.e, as an accelerator), Nova scheduler then will select a node
> > with such capability if the user desires to spawn a VM, and then Cinder will
> > just do its job on volume management.
> >
> > So on this note, Cyborg is working with Nova and Cinder, the relationship is
> > complimentary. Cyborg will interact with Nova through Placement, there is no
> > need at the moment for Cyborg to interact with Cinder. It could still work
> > without Cyborg in the picture for sure, but NVMe SSD will be seen just as a
> > normal block device by Nova and Cinder, and no advanced scheduling could be
> > performed.
> >
> > Re Paul's question:
> > - Just a good Cyborg SPDK driver could make it work in OpenStack
> > - No additional requirement on SPDK community itself
> > - No additional tweaks in SPDK needed specifically for Cyborg to work with
> > it.
> >
> > OpenSDS is just another option, since it supports capability report
> > functionality, it could gather information from Cyborg to make more
> > granulized scheduling, unlike Cinder which does not get input from Cyborg
> > and just perform regular volume management. There is no good or bad to
> > compare two solutions. It depends on user's requirement.
> >
> > I hope this writing could clear things up :)
> >
> > [0] https://wiki.openstack.org/wiki/Cyborg
> > [1] https://www.youtube.com/watch?v=q84Q-6FSXts
> >
> >
> > On Fri, Jun 15, 2018 at 10:28 PM Szwed, Maciej <maciej.szwed(a)intel.com<mailto:maciej.szwed(a)intel.com>>
> > wrote:
> > > Hi Helloway,
> > > I’ve been working on Cinder drivers (volume and target) for SPDK. I’m not
> > > familiar with Cyborg. Does Cyborg have capabilities to manage volumes? How
> > > does it interact with Nova – how does it provide storage to compute nodes?
> > >
> > > Thanks,
> > > Maciek
> > >
> > > From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of Luse, Paul E
> > > Sent: Thursday, June 14, 2018 5:27 AM
> > > To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
> > > Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
> > >
> > > Hi Helloway,
> > >
> > > No problem.  Cinder is the block storage provisioning project for
> > > Openstack. A VM needing block storage interacts with Cinder to request
> > > what it needs and storage providers that have written provider (aka
> > > drivers) for Cinders are matched up based on requests. Here’s the overview
> > > from the project: https://docs.openstack.org/newton/install-guide-ubuntu/c
> > > ommon/get-started-block-storage.html and this is by far the most common
> > > way that I’m aware of to enable a new block storage application in
> > > OpenStack.
> > >
> > > I haven’t worked in that world in a few years such that Cyborg and OpenSDS
> > > have been introduced since I was active but as far as I know Cinder is
> > > still the best place to start introducing SPDK based block storage into
> > > the Openstack cloud. I do have some OpenSDS contacts, and still some
> > > friends who work on Cinder.  Let me ask around a little, note that I will
> > > be out of the office for 2 weeks of “disconnected” vacation after this
> > > Friday but I’ll try and get a bit more info before then and get back to
> > > ya.
> > >
> > > Anyone else out there feel free to chime in if you have more info on
> > > Cyborg+OpenSDS vs Cinder as it pertains to this discussion. Once we figure
> > > that out, the questions below are still very relevant for next steps:
> > >
> > > - what else is required to be pushed into OpenStack for this to work
> > > - is anything required in the SPDK repo for this to work
> > > - how will the necessary SPDK components be associated with the VM in
> > > question and subsequently configured
> > >
> > > Thanks!
> > > Paul
> > >
> > > From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of helloway
> > > Sent: Wednesday, June 13, 2018 7:47 PM
> > > To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
> > > Cc: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
> > > Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
> > >
> > > Hi Paul,
> > >
> > > I am sorry for that I'm not an expert on Cinder, I‘ll try to tell you what
> > > I think. If I was wrong, please correct me. From my perspective, Cinder
> > > cares more about the capacity of the pool whose pool capabilities
> > > reporting interface is fixed, whereas, in addition to the capacity, Cyborg
> > > also cares about the fine-grained accelerator capabilities (e.g. iops,
> > > queue, etc.). These capabilities, reported  from the Cyborg, can be
> > > dynamic configured and handled through the OpenSDS’ profile. For this
> > > reason, it provides a more flexible and simple configuration which can be
> > > called conveniently.
> > >
> > > Thx,
> > > Helloway
> > >
> > > On 06/13/2018 22:46,Luse, Paul E<paul.e.luse(a)intel.com<mailto:paul.e.luse(a)intel.com>> wrote:
> > > > Hi Helloway,
> > > >
> > > > That’s a great start but I still have the same open questions below,
> > > > maybe you can try and address those directly? Also, below is the link
> > > > for adding SPDK based NVMeOF as a Cinder plug-in.  In addition to the
> > > > question below can you please explain for everyone how you see the
> > > > approach of using Cyborg and OpenSDS compares with the seemingly simpler
> > > > approach of providing a Cinder plug-in?
> > > >
> > > > https://review.openstack.org/#/c/564229/
> > > >
> > > > Thanks!!
> > > > Paul
> > > >
> > > >
> > > > From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of helloway
> > > > Sent: Tuesday, June 12, 2018 5:22 PM
> > > > To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
> > > > Cc: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
> > > > Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
> > > >
> > > > Hi Paul,
> > > > Thank you for your response, I have submitted a trello [0] titled
> > > >  “Integrate OpenStack/Cyborg into SPDK Architecture”. I am trying to
> > > > answer your questions from this trello, do I make sense? I really hope
> > > > to receive your feedback.
> > > >
> > > > [0]https://trello.com/c/QfSAkLSS/121-integrate-openstack-cyborg-into-spdk-architecture
> > > >
> > > > Thx,
> > > > Helloway
> > > > On 06/12/2018 08:16,Luse, Paul E<paul.e.luse(a)intel.com<mailto:paul.e.luse(a)intel.com>> wrote:
> > > > > Hi Helloway,
> > > > >
> > > > > I was actually just wondering what had happened with this.  Looking at
> > > > > the OpenStack patch it looks like it’s close to landing. Somewhere out
> > > > > there we have a Cinder driver that’s also getting fairly close I
> > > > > believe so for sure integration with OpenStack is interesting to many
> > > > > in the community.
> > > > >
> > > > > Would you be able to summarize more specifically how your patch would
> > > > > work once it lands? I of course see your high level description below
> > > > > that some questions that I have, I’m assuming others as well, include:
> > > > >
> > > > > - what else is required to be pushed into OpenStack for this to work
> > > > > - is anything required in the SPDK repo for this to work
> > > > > - how will the necessary SPDK components be associated with the VM in
> > > > > question and subsequently configured
> > > > >
> > > > > Thanks for continuing to work on this!
> > > > >
> > > > > -Paul
> > > > >
> > > > > From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of helloway
> > > > > Sent: Monday, June 11, 2018 1:35 AM
> > > > > To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
> > > > > Subject: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
> > > > >
> > > > > Hi Jim and all,
> > > > >
> > > > > Do you know OpenStack/Cyborg? It is OpenStack acceleration service
> > > > > which provides a management framework for accelerator devices (e.g.
> > > > > FPGA, GPU, NVMe SSD). There is a strong demand for OpenStack to
> > > > > support hardware accelerated devices in a dynamic model(mentioned in
> > > > > OpenStack Summit Vancouver 2018 [0]).
> > > > >
> > > > > For this reason, we can use Cyborg to interactive with nvmf_tgt to
> > > > > realize the management of the user space accelerator NVMe SSD device,
> > > > > which can badly promote the efficiency. It is worth mentioning that
> > > > > the Cyborg_SPDK_Driver I summitted has been merged into the OpenStack
> > > > > version Q [1]. The driver can report the detailed information of the
> > > > > device to the Cyborg agent. When user requests a vm with a user space
> > > > > NVMe SSD, Cyborg agent will update the Nova/Placement inventory on
> > > > > available NVMe devices. This is a complete process to describe the
> > > > > connection of Cyborg and SPDK.
> > > > >
> > > > > I wonder whether you guys are interested in integrating
> > > > > OpenStack/Cyborg into SPDK architecture? Do I make sense? Please let
> > > > > me know what your thoughts.
> > > > >
> > > > > [0]https://www.openstack.org/videos/vancouver-2018/optimized-hpcai-cloud-with-openstack-acceleration-service-and-composable-hardware
> > > > > [1]https://review.openstack.org/#/c/538164/
> > > > >
> > > > >
> > > > > Thx,
> > > > > Helloway
> > >
> > > _______________________________________________
> > > SPDK mailing list
> > > SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
> > > https://lists.01.org/mailman/listinfo/spdk
> >
> >
> > --
> > Zhipeng (Howard) Huang
> >
> > Standard Engineer
> > IT Standard & Patent/IT Product Line
> > Huawei Technologies Co,. Ltd
> > Email: huangzhipeng(a)huawei.com<mailto:huangzhipeng(a)huawei.com>
> > Office: Huawei Industrial Base, Longgang, Shenzhen
> >
> > (Previous)
> > Research Assistant
> > Mobile Ad-Hoc Network Lab, Calit2
> > University of California, Irvine
> > Email: zhipengh(a)uci.edu<mailto:zhipengh(a)uci.edu>
> > Office: Calit2 Building Room 2402
> >
> > OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
> > _______________________________________________
> > SPDK mailing list
> > SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
> > https://lists.01.org/mailman/listinfo/spdk
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
> https://lists.01.org/mailman/listinfo/spdk
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhipeng(a)huawei.com<mailto:huangzhipeng(a)huawei.com>
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh(a)uci.edu<mailto:zhipengh(a)uci.edu>
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 29999 bytes --]


* Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
@ 2018-06-20  6:48 Bob Chen
  0 siblings, 0 replies; 15+ messages in thread
From: Bob Chen @ 2018-06-20  6:48 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 17369 bytes --]

The OpenStack guys are always obsessed with integrating things; that's how
they've made that project a mess.

It's just like opening up a supermarket and trying to provide as many
products as you can, while failing to realize that some products have their
own customers and direct-selling strategies, and thus don't need an agency.

2018-06-20 0:42 GMT+08:00 Walker, Benjamin <benjamin.walker(a)intel.com>:

> I've been mulling over this discussion for the past few days. I'm certainly
> always supportive of projects that integrate SPDK, and this situation is no
> different. However, I can't wrap my head around exactly how "NVMe" or
> "NVMe-oF"
> can be thought of as an accelerator. Because of that, I'm not in a
> position to
> know when to point people at the Cyborg project as opposed to Cinder if
> they're
> interested in using SPDK with OpenStack. So I'd like to gain a deeper
> understanding here, if you'll all help me do that.
>
> My understanding is that Cyborg is a framework for discovering available
> "accelerators". To me, an accelerator is some code library or specialized
> piece
> of hardware that performs some computation more quickly than a simple
> algorithm
> running on the CPU. This could range from FPGAs to GPUs to even specialized
> implementations of algorithms that still run on the CPU, but take
> advantage of
> instructions or designs that the compilers are unlikely to emit from plain
> old C
> code (like ISA-L).
>
> To me, traditional NVMe devices do not fall into this category. They don't
> accelerate some computation, but rather are simply a block storage device.
> Further, accelerators are stateless - they take some input and produce some
> output in a repeatable fashion. NVMe devices, in contrast, are stateful -
> their
> entire purpose is to persistently store state. So what, specifically, is
> Cyborg
> exposing about the device that Cinder cannot?
>
> Note that there are "NVMe" devices on the market that really are
> accelerators -
> they contain no persistent storage and are only using NVMe as a convenient
> interface. They act as accelerators by allowing the user to write to a
> certain
> range of blocks, then read the data back from that location and some
> operation
> such as encryption or compression will have been performed on it. These
> devices
> are indeed accelerators and should be exposed through Cyborg. If you're
> only
> talking specifically about these devices (which are a very niche market as
> of
> this writing), then I understand fully what you're doing here and we can
> move
> forward.
>
> Thanks,
> Ben
>
> On Sun, 2018-06-17 at 08:20 +0800, Zhipeng Huang wrote:
> > Thanks Paul,
> >
> > Will there is no extra features per se at the moment, but we will
> gradually
> > rolls that out hopefully. The current spdk driver code could be found at
> > https://github.com/openstack/cyborg/blob/master/cyborg/accelerator/drivers/spdk/nvmf/nvmf.py
> >
> > Well the purpose of the discussion is actually quite simple, we suggest
> to add
> > Cyborg as another project like Cinder which supports spdk in the SPDK
> > architecture figure :) I think our work in Cyborg is good for SPDK's
> ecosystem
> > and would be great to acknowledge that :)
> >
> > Sorry for the many lengthy emails :)
> >
> >
> > On Sat, Jun 16, 2018 at 11:15 PM Luse, Paul E <paul.e.luse(a)intel.com>
> wrote:
> > > Thanks for the information! Are there currently any Cyborg drivers (is
> that
> > > the right term?) that implement NVMeOF? It would be great to take a
> look at
> > > one to help clear things up a bit – what I’m still not seeing is
> exactly
> > > what “extra capabilities” you mention below that could be
> exposed/exploited
> > > via Cyborg & SPDK vs Cinder & SPDK.
> > >
> > > Also, assuming that nothing else is required from the SPDK
> > > codebase/community, is the purpose of this email chain just to
> > > inform/educate and help enlist the SPDK community in promoting its use
> in
> > > Cyborg?
> > >
> > > Note that I’m on holiday from now until July 2 and starting Mon
> afternoon
> > > will not have access to email until I get back so when you don’t get
> any
> kind of response from me, that's why ☺ Hopefully others will carry the
> > > conversation forward as I know others are interested as well and fully
> > > appreciate all of Helloway’s interest in SPDK thus far.  I’ll
> definitely
> > > touch base when I return, thanks again!
> > >
> > > -Paul
> > >
> > > From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Zhipeng
> Huang
> > > Sent: Friday, June 15, 2018 6:27 PM
> > > To: Storage Performance Development Kit <spdk(a)lists.01.org>
> > > Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
> > >
> > > Hi Maciek and Paul,
> > >
> > > I'm the current project lead of Cyborg so let me try to answer your
> > > questions, maybe hellowway did not explain well before.
> > >
> > > Cyborg[0] is a general management framework for accelerators (GPU.
> FPGA,
> > > ASIC, NVMe/NOF, ...). We actually worked with Moshe's team in the
> project's
> > > early days :) . The support of spdk driver has already landed in cyborg
> > > (although still very premature).
> > >
> > > It is important to notice that Cyborg does not provide volume
> management
> > > which is the job of Cinder. Cyborg provide management on NVMe/NVMe Over
> > > Fabric SSDs, meaning leveraging information that could be attained via
> the
> > > Bus (and the NVMe/NOF protocol). A similar comparison could be found in
> > > Cyborg management of FPGA/GPU. This could enable many great features
> > > developed in NVMe protocol could be exposed via Cyborg to support more
> > > granulized scheduling if the user want, instead of just being treated
> as a
> > > block device without difference to traditional SSDs.
> > >
> > > Therefore Cyborg is working alongside of Nova and Cinder. An example
> of how
> > > Cyborg work with Nova on NVMe devices could be found in [1] starting
> 27:13,
> > > which is also proposed from Intel developers. In essence, Cyborg helps
> Nova
> > > get knowledge of NVMe SSDs for its various extra capabilities compared
> to
> > > normal SSD (i.e, as an accelerator), Nova scheduler then will select a
> node
> > > with such capability if the user desires to spawn a VM, and then
> Cinder will
> > > just do its job on volume management.
> > >
> > > So on this note, Cyborg is working with Nova and Cinder, the
> relationship is
> > > complimentary. Cyborg will interact with Nova through Placement, there
> is no
> > > need at the moment for Cyborg to interact with Cinder. It could still
> work
> > > without Cyborg in the picture for sure, but NVMe SSD will be seen just
> as a
> > > normal block device by Nova and Cinder, and no advanced scheduling
> could be
> > > performed.
> > >
> > > Re Paul's question:
> > > - Just a good Cyborg SPDK driver could make it work in OpenStack
> > > - No additional requirement on SPDK community itself
> > > - No additional tweaks in SPDK needed specifically for Cyborg to work
> with
> > > it.
> > >
> > > OpenSDS is just another option, since it supports capability report
> > > functionality, it could gather information from Cyborg to make more
> > > granulized scheduling, unlike Cinder which does not get input from
> Cyborg
> > > and just perform regular volume management. There is no good or bad to
> > > compare two solutions. It depends on user's requirement.
> > >
> > > I hope this writing could clear things up :)
> > >
> > > [0] https://wiki.openstack.org/wiki/Cyborg
> > > [1] https://www.youtube.com/watch?v=q84Q-6FSXts
> > >
> > >
> > > On Fri, Jun 15, 2018 at 10:28 PM Szwed, Maciej <maciej.szwed(a)intel.com
> >
> > > wrote:
> > > > Hi Helloway,
> > > > I’ve been working on Cinder drivers (volume and target) for SPDK.
> I’m not
> > > > familiar with Cyborg. Does Cyborg have capabilities to manage
> volumes? How
> > > > does it interact with Nova – how does it provide storage to compute
> nodes?
> > > >
> > > > Thanks,
> > > > Maciek
> > > >
> > > > From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Luse,
> Paul E
> > > > Sent: Thursday, June 14, 2018 5:27 AM
> > > > To: Storage Performance Development Kit <spdk(a)lists.01.org>
> > > > Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
> > > >
> > > > Hi Helloway,
> > > >
> > > > No problem.  Cinder is the block storage provisioning project for
> > > > Openstack. A VM needing block storage interacts with Cinder to
> request
> > > > what it needs and storage providers that have written provider (aka
> > > > drivers) for Cinders are matched up based on requests. Here’s the
> overview
> > > > from the project: https://docs.openstack.org/newton/install-guide-ubuntu/common/get-started-block-storage.html
> > > > and this is by far the most common
> > > > way that I’m aware of to enable a new block storage application in
> > > > OpenStack.
> > > >
> > > > I haven’t worked in that world in a few years such that Cyborg and
> OpenSDS
> > > > have been introduced since I was active but as far as I know Cinder
> is
> > > > still the best place to start introducing SPDK based block storage
> into
> > > > the Openstack cloud. I do have some OpenSDS contacts, and still some
> > > > friends who work on Cinder.  Let me ask around a little, note that I
> will
> > > > be out of the office for 2 weeks of “disconnected” vacation after
> this
> > > > Friday but I’ll try and get a bit more info before then and get back
> to
> > > > ya.
> > > >
> > > > Anyone else out there feel free to chime in if you have more info on
> > > > Cyborg+OpenSDS vs Cinder as it pertains to this discussion. Once we
> figure
> > > > that out, the questions below are still very relevant for next steps:
> > > >
> > > > - what else is required to be pushed into OpenStack for this to work
> > > > - is anything required in the SPDK repo for this to work
> > > > - how will the necessary SPDK components be associated with the VM in
> > > > question and subsequently configured
> > > >
> > > > Thanks!
> > > > Paul
> > > >
> > > > From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of helloway
> > > > Sent: Wednesday, June 13, 2018 7:47 PM
> > > > To: Storage Performance Development Kit <spdk(a)lists.01.org>
> > > > Cc: Storage Performance Development Kit <spdk(a)lists.01.org>
> > > > Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
> > > >
> > > > Hi Paul,
> > > >
> > > > I am sorry for that I'm not an expert on Cinder, I‘ll try to tell
> you what
> > > > I think. If I was wrong, please correct me. From my perspective,
> Cinder
> > > > cares more about the capacity of the pool whose pool capabilities
> > > > reporting interface is fixed, whereas, in addition to the capacity,
> Cyborg
> > > > also cares about the fine-grained accelerator capabilities (e.g.
> iops,
> > > > queue, etc.). These capabilities, reported  from the Cyborg, can be
> > > > dynamic configured and handled through the OpenSDS’ profile. For this
> > > > reason, it provides a more flexible and simple configuration which
> can be
> > > > called conveniently.
> > > >
> > > > Thx,
> > > > Helloway
> > > >
> > > > On 06/13/2018 22:46,Luse, Paul E<paul.e.luse(a)intel.com> wrote:
> > > > > Hi Helloway,
> > > > >
> > > > > That’s a great start but I still have the same open questions
> below,
> > > > > maybe you can try and address those directly? Also, below is the
> link
> > > > > for adding SPDK based NVMeOF as a Cinder plug-in.  In addition to
> the
> > > > > question below can you please explain for everyone how you see the
> > > > > approach of using Cyborg and OpenSDS compares with the seemingly
> simpler
> > > > > approach of providing a Cinder plug-in?
> > > > >
> > > > > https://review.openstack.org/#/c/564229/
> > > > >
> > > > > Thanks!!
> > > > > Paul
> > > > >
> > > > >
> > > > > From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of
> helloway
> > > > > Sent: Tuesday, June 12, 2018 5:22 PM
> > > > > To: Storage Performance Development Kit <spdk(a)lists.01.org>
> > > > > Cc: Storage Performance Development Kit <spdk(a)lists.01.org>
> > > > > Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK
> Architecture
> > > > >
> > > > > Hi Paul,
> > > > > Thank you for your response, I have submitted a trello [0] titled
> > > > >  “Integrate OpenStack/Cyborg into SPDK Architecture”. I am trying
> to
> > > > > answer your questions from this trello, do I make sense? I really
> hope
> > > > > to receive your feedback.
> > > > >
> > > > > [0]https://trello.com/c/QfSAkLSS/121-integrate-openstack-cyborg-into-spdk-architecture
> > > > >
> > > > > Thx,
> > > > > Helloway
> > > > > On 06/12/2018 08:16,Luse, Paul E<paul.e.luse(a)intel.com> wrote:
> > > > > > Hi Helloway,
> > > > > >
> > > > > > I was actually just wondering what had happened with this.
> Looking at
> > > > > > the OpenStack patch it looks like it’s close to landing.
> Somewhere out
> > > > > > there we have a Cinder driver that’s also getting fairly close I
> > > > > > believe so for sure integration with OpenStack is interesting to
> many
> > > > > > in the community.
> > > > > >
> > > > > > Would you be able to summarize more specifically how your patch
> would
> > > > > > work once it lands? I of course see your high level description
> below
> > > > > > that some questions that I have, I’m assuming others as well,
> include:
> > > > > >
> > > > > > - what else is required to be pushed into OpenStack for this to
> work
> > > > > > - is anything required in the SPDK repo for this to work
> > > > > > - how will the necessary SPDK components be associated with the
> VM in
> > > > > > question and subsequently configured
> > > > > >
> > > > > > Thanks for continuing to work on this!
> > > > > >
> > > > > > -Paul
> > > > > >
> > > > > > From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of
> helloway
> > > > > > Sent: Monday, June 11, 2018 1:35 AM
> > > > > > To: Storage Performance Development Kit <spdk(a)lists.01.org>
> > > > > > Subject: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
> > > > > >
> > > > > > Hi Jim and all,
> > > > > >
> > > > > > Do you know OpenStack/Cyborg? It is OpenStack acceleration
> service
> > > > > > which provides a management framework for accelerator devices
> (e.g.
> > > > > > FPGA, GPU, NVMe SSD). There is a strong demand for OpenStack to
> > > > > > support hardware accelerated devices in a dynamic
> model(mentioned in
> > > > > > OpenStack Summit Vancouver 2018 [0]).
> > > > > >
> > > > > > For this reason, we can use Cyborg to interactive with nvmf_tgt
> to
> > > > > > realize the management of the user space accelerator NVMe SSD
> device,
> > > > > > which can badly promote the efficiency. It is worth mentioning
> that
> > > > > > the Cyborg_SPDK_Driver I summitted has been merged into the
> OpenStack
> > > > > > version Q [1]. The driver can report the detailed information of
> the
> > > > > > device to the Cyborg agent. When user requests a vm with a user
> space
> > > > > > NVMe SSD, Cyborg agent will update the Nova/Placement inventory
> on
> > > > > > available NVMe devices. This is a complete process to describe
> the
> > > > > > connection of Cyborg and SPDK.
> > > > > >
> > > > > > I wonder whether you guys are interested in integrating
> > > > > > OpenStack/Cyborg into SPDK architecture? Do I make sense? Please
> let
> > > > > > me know what your thoughts.
> > > > > >
> > > > > > [0]https://www.openstack.org/videos/vancouver-2018/optimized-hpcai-cloud-with-openstack-acceleration-service-and-composable-hardware
> > > > > > [1]https://review.openstack.org/#/c/538164/
> > > > > >
> > > > > >
> > > > > > Thx,
> > > > > > Helloway
> > > >
> > > > _______________________________________________
> > > > SPDK mailing list
> > > > SPDK(a)lists.01.org
> > > > https://lists.01.org/mailman/listinfo/spdk
> > >
> > >
> > > --
> > > Zhipeng (Howard) Huang
> > >
> > > Standard Engineer
> > > IT Standard & Patent/IT Product Line
> > > Huawei Technologies Co,. Ltd
> > > Email: huangzhipeng(a)huawei.com
> > > Office: Huawei Industrial Base, Longgang, Shenzhen
> > >
> > > (Previous)
> > > Research Assistant
> > > Mobile Ad-Hoc Network Lab, Calit2
> > > University of California, Irvine
> > > Email: zhipengh(a)uci.edu
> > > Office: Calit2 Building Room 2402
> > >
> > > OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
> > > _______________________________________________
> > > SPDK mailing list
> > > SPDK(a)lists.01.org
> > > https://lists.01.org/mailman/listinfo/spdk
> >
> >
> > _______________________________________________
> > SPDK mailing list
> > SPDK(a)lists.01.org
> > https://lists.01.org/mailman/listinfo/spdk
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 22990 bytes --]


* Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
@ 2018-06-20  4:53 Zhipeng Huang
  0 siblings, 0 replies; 15+ messages in thread
From: Zhipeng Huang @ 2018-06-20  4:53 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 18742 bytes --]

Hi Ben,

When Cyborg approaches NVMe devices, we view them as storage accelerators,
which means users will use them to accelerate the storage part of their
service. So the latter niche use case you mentioned is definitely covered.
But it is more general than that: the definition of an accelerator in
Cyborg's sense depends not upon its functionality but upon its usage. For
example, if a user employs NVMe SSDs only for certain performance-demanding
services, the user could view them as accelerators and choose to manage
them via Cyborg, alongside Cinder.
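
To make that concrete: as described earlier in the thread, the Cyborg agent
reports such devices to Nova through Placement. Here is a minimal sketch of
what that inventory update could look like against the Placement REST API;
the endpoint, resource-provider UUID, token, and the
CUSTOM_ACCELERATOR_SPDK_NVMF resource class are illustrative placeholders,
not names taken from the actual Cyborg driver:

    import requests

    PLACEMENT = "http://controller:8778"                 # placeholder endpoint
    RP_UUID = "00000000-0000-0000-0000-000000000000"     # placeholder provider
    HEADERS = {
        "X-Auth-Token": "<keystone-token>",              # placeholder token
        "OpenStack-API-Version": "placement 1.17",
    }

    def report_nvmf_accelerators(total, generation):
        # Advertise SPDK-managed NVMe-oF devices under a custom resource
        # class, so the Nova scheduler can pick nodes that actually have them.
        body = {
            "resource_provider_generation": generation,
            "inventories": {
                "CUSTOM_ACCELERATOR_SPDK_NVMF": {
                    "total": total,
                    "reserved": 0,
                    "min_unit": 1,
                    "max_unit": total,
                    "step_size": 1,
                    "allocation_ratio": 1.0,
                },
            },
        }
        resp = requests.put("%s/resource_providers/%s/inventories"
                            % (PLACEMENT, RP_UUID),
                            json=body, headers=HEADERS)
        resp.raise_for_status()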

In some practices, vendors produce "accelerator racks" into which they put
many heterogeneous devices, including NVMe-oF SSDs, remote FPGAs, remote
GPUs, and so on.

As for what management Cyborg provides that differs from Cinder: since it
concentrates on, again, acceleration management, in theory Cyborg could be
used to manage things like the QoS of different streams on an NVMe SSD, or
to perform sanitization as part of life-cycle management, none of which is
in the scope of volume management.
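
As one concrete illustration of the sanitization idea, such a life-cycle
hook could be as small as the sketch below. The hook name and device path
are hypothetical, and it assumes a reasonably recent nvme-cli with sanitize
support (--sanact=4 requests a cryptographic erase):

    import subprocess

    def sanitize_on_release(dev="/dev/nvme0"):
        # Hypothetical life-cycle hook: crypto-erase an NVMe controller
        # when its accelerator allocation is released back to the pool.
        subprocess.run(["nvme", "sanitize", dev, "--sanact=4"], check=True)
        # The erase runs in the background; the sanitize status log page
        # can be polled to watch progress.
        log = subprocess.run(["nvme", "sanitize-log", dev],
                             capture_output=True, text=True, check=True)
        print(log.stdout)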

I just think this is a good collaboration between two open source
communities, one with many new possibilities. But it is OK if the SPDK
community does not consider it a correct usage.

On Wed, Jun 20, 2018 at 12:42 AM Walker, Benjamin <benjamin.walker(a)intel.com>
wrote:

> I've been mulling over this discussion for the past few days. I'm certainly
> always supportive of projects that integrate SPDK, and this situation is no
> different. However, I can't wrap my head around exactly how "NVMe" or
> "NVMe-oF"
> can be thought of as an accelerator. Because of that, I'm not in a
> position to
> know when to point people at the Cyborg project as opposed to Cinder if
> they're
> interested in using SPDK with OpenStack. So I'd like to gain a deeper
> understanding here, if you'll all help me do that.
>
> My understanding is that Cyborg is a framework for discovering available
> "accelerators". To me, an accelerator is some code library or specialized
> piece
> of hardware that performs some computation more quickly than a simple
> algorithm
> running on the CPU. This could range from FPGAs to GPUs to even specialized
> implementations of algorithms that still run on the CPU, but take
> advantage of
> instructions or designs that the compilers are unlikely to emit from plain
> old C
> code (like ISA-L).
>
> To me, traditional NVMe devices do not fall into this category. They don't
> accelerate some computation, but rather are simply a block storage device.
> Further, accelerators are stateless - they take some input and produce some
> output in a repeatable fashion. NVMe devices, in contrast, are stateful -
> their
> entire purpose is to persistently store state. So what, specifically, is
> Cyborg
> exposing about the device that Cinder cannot?
>
> Note that there are "NVMe" devices on the market that really are
> accelerators -
> they contain no persistent storage and are only using NVMe as a convenient
> interface. They act as accelerators by allowing the user to write to a
> certain
> range of blocks, then read the data back from that location and some
> operation
> such as encryption or compression will have been performed on it. These
> devices
> are indeed accelerators and should be exposed through Cyborg. If you're
> only
> talking specifically about these devices (which are a very niche market as
> of
> this writing), then I understand fully what you're doing here and we can
> move
> forward.
>
> Thanks,
> Ben
>
> On Sun, 2018-06-17 at 08:20 +0800, Zhipeng Huang wrote:
> > Thanks Paul,
> >
> > Will there is no extra features per se at the moment, but we will
> gradually
> > rolls that out hopefully. The current spdk driver code could be found at
> > https://github.com/openstack/cyborg/blob/master/cyborg/accelerator/drivers/spdk/nvmf/nvmf.py
> >
> > Well the purpose of the discussion is actually quite simple, we suggest
> to add
> > Cyborg as another project like Cinder which supports spdk in the SPDK
> > architecture figure :) I think our work in Cyborg is good for SPDK's
> ecosystem
> > and would be great to acknowledge that :)
> >
> > Sorry for the many lengthy emails :)
> >
> >
> > On Sat, Jun 16, 2018 at 11:15 PM Luse, Paul E <paul.e.luse(a)intel.com>
> wrote:
> > > Thanks for the information! Are there currently any Cyborg drivers (is
> that
> > > the right term?) that implement NVMeOF? It would be great to take a
> look at
> > > one to help clear things up a bit – what I’m still not seeing is
> exactly
> > > what “extra capabilities” you mention below that could be
> exposed/exploited
> > > via Cyborg & SPDK vs Cinder & SPDK.
> > >
> > > Also, assuming that nothing else is required from the SPDK
> > > codebase/community, is the purpose of this email chain just to
> > > inform/educate and help enlist the SPDK community in promoting its use
> in
> > > Cyborg?
> > >
> > > Note that I’m on holiday from now until July 2 and starting Mon
> afternoon
> > > will not have access to email until I get back so when you don’t get
> any
> > > kind of response from me, that's why ☺ Hopefully others will carry the
> > > conversation forward as I know others are interested as well and fully
> > > appreciate all of Helloway’s interest in SPDK thus far.  I’ll
> definitely
> > > touch base when I return, thanks again!
> > >
> > > -Paul
> > >
> > > From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Zhipeng
> Huang
> > > Sent: Friday, June 15, 2018 6:27 PM
> > > To: Storage Performance Development Kit <spdk(a)lists.01.org>
> > > Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
> > >
> > > Hi Maciek and Paul,
> > >
> > > I'm the current project lead of Cyborg so let me try to answer your
> > > questions, maybe hellowway did not explain well before.
> > >
> > > Cyborg[0] is a general management framework for accelerators (GPU.
> FPGA,
> > > ASIC, NVMe/NOF, ...). We actually worked with Moshe's team in the
> project's
> > > early days :) . The support of spdk driver has already landed in cyborg
> > > (although still very premature).
> > >
> > > It is important to notice that Cyborg does not provide volume
> management
> > > which is the job of Cinder. Cyborg provide management on NVMe/NVMe Over
> > > Fabric SSDs, meaning leveraging information that could be attained via
> the
> > > Bus (and the NVMe/NOF protocol). A similar comparison could be found in
> > > Cyborg management of FPGA/GPU. This could enable many great features
> > > developed in NVMe protocol could be exposed via Cyborg to support more
> > > granulized scheduling if the user want, instead of just being treated
> as a
> > > block device without difference to traditional SSDs.
> > >
> > > Therefore Cyborg is working alongside of Nova and Cinder. An example
> of how
> > > Cyborg work with Nova on NVMe devices could be found in [1] starting
> 27:13,
> > > which is also proposed from Intel developers. In essence, Cyborg helps
> Nova
> > > get knowledge of NVMe SSDs for its various extra capabilities compared
> to
> > > normal SSD (i.e, as an accelerator), Nova scheduler then will select a
> node
> > > with such capability if the user desires to spawn a VM, and then
> Cinder will
> > > just do its job on volume management.
> > >
> > > So on this note, Cyborg is working with Nova and Cinder, the
> relationship is
> > > complimentary. Cyborg will interact with Nova through Placement, there
> is no
> > > need at the moment for Cyborg to interact with Cinder. It could still
> work
> > > without Cyborg in the picture for sure, but NVMe SSD will be seen just
> as a
> > > normal block device by Nova and Cinder, and no advanced scheduling
> could be
> > > performed.
> > >
> > > Re Paul's question:
> > > - Just a good Cyborg SPDK driver could make it work in OpenStack
> > > - No additional requirement on SPDK community itself
> > > - No additional tweaks in SPDK needed specifically for Cyborg to work
> with
> > > it.
> > >
> > > OpenSDS is just another option, since it supports capability report
> > > functionality, it could gather information from Cyborg to make more
> > > granulized scheduling, unlike Cinder which does not get input from
> Cyborg
> > > and just perform regular volume management. There is no good or bad to
> > > compare two solutions. It depends on user's requirement.
> > >
> > > I hope this writing could clear things up :)
> > >
> > > [0] https://wiki.openstack.org/wiki/Cyborg
> > > [1] https://www.youtube.com/watch?v=q84Q-6FSXts
> > >
> > >
> > > On Fri, Jun 15, 2018 at 10:28 PM Szwed, Maciej <maciej.szwed(a)intel.com
> >
> > > wrote:
> > > > Hi Helloway,
> > > > I’ve been working on Cinder drivers (volume and target) for SPDK.
> I’m not
> > > > familiar with Cyborg. Does Cyborg have capabilities to manage
> volumes? How
> > > > does it interact with Nova – how does it provide storage to compute
> nodes?
> > > >
> > > > Thanks,
> > > > Maciek
> > > >
> > > > From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Luse,
> Paul E
> > > > Sent: Thursday, June 14, 2018 5:27 AM
> > > > To: Storage Performance Development Kit <spdk(a)lists.01.org>
> > > > Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
> > > >
> > > > Hi Helloway,
> > > >
> > > > No problem.  Cinder is the block storage provisioning project for
> > > > Openstack. A VM needing block storage interacts with Cinder to
> request
> > > > what it needs and storage providers that have written provider (aka
> > > > drivers) for Cinders are matched up based on requests. Here’s the
> overview
> > > > from the project: https://docs.openstack.org/newton/install-guide-ubuntu/common/get-started-block-storage.html
> > > > and this is by far the most common
> > > > way that I’m aware of to enable a new block storage application in
> > > > OpenStack.
> > > >
> > > > I haven’t worked in that world in a few years such that Cyborg and
> OpenSDS
> > > > have been introduced since I was active but as far as I know Cinder
> is
> > > > still the best place to start introducing SPDK based block storage
> into
> > > > the Openstack cloud. I do have some OpenSDS contacts, and still some
> > > > friends who work on Cinder.  Let me ask around a little, note that I
> will
> > > > be out of the office for 2 weeks of “disconnected” vacation after
> this
> > > > Friday but I’ll try and get a bit more info before then and get back
> to
> > > > ya.
> > > >
> > > > Anyone else out there feel free to chime in if you have more info on
> > > > Cyborg+OpenSDS vs Cinder as it pertains to this discussion. Once we
> figure
> > > > that out, the questions below are still very relevant for next steps:
> > > >
> > > > - what else is required to be pushed into OpenStack for this to work
> > > > - is anything required in the SPDK repo for this to work
> > > > - how will the necessary SPDK components be associated with the VM in
> > > > question and subsequently configured
> > > >
> > > > Thanks!
> > > > Paul
> > > >
> > > > From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of helloway
> > > > Sent: Wednesday, June 13, 2018 7:47 PM
> > > > To: Storage Performance Development Kit <spdk(a)lists.01.org>
> > > > Cc: Storage Performance Development Kit <spdk(a)lists.01.org>
> > > > Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
> > > >
> > > > Hi Paul,
> > > >
> > > > I am sorry for that I'm not an expert on Cinder, I‘ll try to tell
> you what
> > > > I think. If I was wrong, please correct me. From my perspective,
> Cinder
> > > > cares more about the capacity of the pool whose pool capabilities
> > > > reporting interface is fixed, whereas, in addition to the capacity,
> Cyborg
> > > > also cares about the fine-grained accelerator capabilities (e.g.
> iops,
> > > > queue, etc.). These capabilities, reported  from the Cyborg, can be
> > > > dynamic configured and handled through the OpenSDS’ profile. For this
> > > > reason, it provides a more flexible and simple configuration which
> can be
> > > > called conveniently.
> > > >
> > > > Thx,
> > > > Helloway
> > > >
> > > > On 06/13/2018 22:46,Luse, Paul E<paul.e.luse(a)intel.com> wrote:
> > > > > Hi Helloway,
> > > > >
> > > > > That’s a great start but I still have the same open questions
> below,
> > > > > maybe you can try and address those directly? Also, below is the
> link
> > > > > for adding SPDK based NVMeOF as a Cinder plug-in.  In addition to
> the
> > > > > question below can you please explain for everyone how you see the
> > > > > approach of using Cyborg and OpenSDS compares with the seemingly
> simpler
> > > > > approach of providing a Cinder plug-in?
> > > > >
> > > > > https://review.openstack.org/#/c/564229/
> > > > >
> > > > > Thanks!!
> > > > > Paul
> > > > >
> > > > >
> > > > > From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of
> helloway
> > > > > Sent: Tuesday, June 12, 2018 5:22 PM
> > > > > To: Storage Performance Development Kit <spdk(a)lists.01.org>
> > > > > Cc: Storage Performance Development Kit <spdk(a)lists.01.org>
> > > > > Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK
> Architecture
> > > > >
> > > > > Hi Paul,
> > > > > Thank you for your response, I have submitted a trello [0] titled
> > > > >  “Integrate OpenStack/Cyborg into SPDK Architecture”. I am trying
> to
> > > > > answer your questions from this trello, do I make sense? I really
> hope
> > > > > to receive your feedback.
> > > > >
> > > > > [0]https://trello.com/c/QfSAkLSS/121-integrate-openstack-cyborg-into-spdk-architecture
> > > > >
> > > > > Thx,
> > > > > Helloway
> > > > > On 06/12/2018 08:16,Luse, Paul E<paul.e.luse(a)intel.com> wrote:
> > > > > > Hi Helloway,
> > > > > >
> > > > > > I was actually just wondering what had happened with this.
> Looking at
> > > > > > the OpenStack patch it looks like it’s close to landing.
> Somewhere out
> > > > > > there we have a Cinder driver that’s also getting fairly close I
> > > > > > believe so for sure integration with OpenStack is interesting to
> many
> > > > > > in the community.
> > > > > >
> > > > > > Would you be able to summarize more specifically how your patch
> would
> > > > > > work once it lands? I of course see your high level description
> below
> > > > > > that some questions that I have, I’m assuming others as well,
> include:
> > > > > >
> > > > > > - what else is required to be pushed into OpenStack for this to
> work
> > > > > > - is anything required in the SPDK repo for this to work
> > > > > > - how will the necessary SPDK components be associated with the
> VM in
> > > > > > question and subsequently configured
> > > > > >
> > > > > > Thanks for continuing to work on this!
> > > > > >
> > > > > > -Paul
> > > > > >
> > > > > > From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of
> helloway
> > > > > > Sent: Monday, June 11, 2018 1:35 AM
> > > > > > To: Storage Performance Development Kit <spdk(a)lists.01.org>
> > > > > > Subject: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
> > > > > >
> > > > > > Hi Jim and all,
> > > > > >
> > > > > > Do you know OpenStack/Cyborg? It is OpenStack acceleration
> service
> > > > > > which provides a management framework for accelerator devices
> (e.g.
> > > > > > FPGA, GPU, NVMe SSD). There is a strong demand for OpenStack to
> > > > > > support hardware accelerated devices in a dynamic
> model(mentioned in
> > > > > > OpenStack Summit Vancouver 2018 [0]).
> > > > > >
> > > > > > For this reason, we can use Cyborg to interactive with nvmf_tgt
> to
> > > > > > realize the management of the user space accelerator NVMe SSD
> device,
> > > > > > which can badly promote the efficiency. It is worth mentioning
> that
> > > > > > the Cyborg_SPDK_Driver I summitted has been merged into the
> OpenStack
> > > > > > version Q [1]. The driver can report the detailed information of
> the
> > > > > > device to the Cyborg agent. When user requests a vm with a user
> space
> > > > > > NVMe SSD, Cyborg agent will update the Nova/Placement inventory
> on
> > > > > > available NVMe devices. This is a complete process to describe
> the
> > > > > > connection of Cyborg and SPDK.
> > > > > >
> > > > > > I wonder whether you guys are interested in integrating
> > > > > > OpenStack/Cyborg into SPDK architecture? Do I make sense? Please
> let
> > > > > > me know what your thoughts.
> > > > > >
> > > > > > [0]https://www.openstack.org/videos/vancouver-2018/optimized-hpcai-cloud-with-openstack-acceleration-service-and-composable-hardware
> > > > > > [1]https://review.openstack.org/#/c/538164/
> > > > > >
> > > > > >
> > > > > > Thx,
> > > > > > Helloway
> > > >
> > > > _______________________________________________
> > > > SPDK mailing list
> > > > SPDK(a)lists.01.org
> > > > https://lists.01.org/mailman/listinfo/spdk
> > >
> > >
> > > --
> > > Zhipeng (Howard) Huang
> > >
> > > Standard Engineer
> > > IT Standard & Patent/IT Product Line
> > > Huawei Technologies Co,. Ltd
> > > Email: huangzhipeng(a)huawei.com
> > > Office: Huawei Industrial Base, Longgang, Shenzhen
> > >
> > > (Previous)
> > > Research Assistant
> > > Mobile Ad-Hoc Network Lab, Calit2
> > > University of California, Irvine
> > > Email: zhipengh(a)uci.edu
> > > Office: Calit2 Building Room 2402
> > >
> > > OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
> > > _______________________________________________
> > > SPDK mailing list
> > > SPDK(a)lists.01.org
> > > https://lists.01.org/mailman/listinfo/spdk
> >
> >
> > _______________________________________________
> > SPDK mailing list
> > SPDK(a)lists.01.org
> > https://lists.01.org/mailman/listinfo/spdk
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>


-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhipeng(a)huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh(a)uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 24973 bytes --]


* Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
@ 2018-06-19 16:42 Walker, Benjamin
  0 siblings, 0 replies; 15+ messages in thread
From: Walker, Benjamin @ 2018-06-19 16:42 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 15991 bytes --]

I've been mulling over this discussion for the past few days. I'm certainly
always supportive of projects that integrate SPDK, and this situation is no
different. However, I can't wrap my head around exactly how "NVMe" or "NVMe-oF"
can be thought of as an accelerator. Because of that, I'm not in a position to
know when to point people at the Cyborg project as opposed to Cinder if they're
interested in using SPDK with OpenStack. So I'd like to gain a deeper
understanding here, if you'll all help me do that.

My understanding is that Cyborg is a framework for discovering available
"accelerators". To me, an accelerator is some code library or specialized piece
of hardware that performs some computation more quickly than a simple algorithm
running on the CPU. This could range from FPGAs to GPUs to even specialized
implementations of algorithms that still run on the CPU, but take advantage of
instructions or designs that the compilers are unlikely to emit from plain old C
code (like ISA-L).
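
For that last category, here is a quick sketch of what the difference looks
like in practice, assuming the third-party python-isal bindings for ISA-L
are installed (timings will of course vary by machine):

    import gzip
    import timeit

    from isal import igzip  # python-isal: ISA-L bindings with a gzip-like API

    payload = b"some repetitive data " * 50000

    for name, mod in (("zlib-based gzip", gzip), ("ISA-L igzip", igzip)):
        # Identical API shape, very different CPU cost per call.
        secs = timeit.timeit(lambda: mod.compress(payload), number=20)
        print("%s: %.3f s for 20 rounds" % (name, secs))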

To me, traditional NVMe devices do not fall into this category. They don't
accelerate some computation, but rather are simply a block storage device.
Further, accelerators are stateless - they take some input and produce some
output in a repeatable fashion. NVMe devices, in contrast, are stateful - their
entire purpose is to persistently store state. So what, specifically, is Cyborg
exposing about the device that Cinder cannot?

Note that there are "NVMe" devices on the market that really are accelerators -
they contain no persistent storage and are only using NVMe as a convenient
interface. They act as accelerators by allowing the user to write to a certain
range of blocks, then read the data back from that location and some operation
such as encryption or compression will have been performed on it. These devices
are indeed accelerators and should be exposed through Cyborg. If you're only
talking specifically about these devices (which are a very niche market as of
this writing), then I understand fully what you're doing here and we can move
forward.
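
The access pattern for that kind of device might look like the sketch below.
The device path and the transform semantics are entirely hypothetical; on an
ordinary SSD the read would simply return the original bytes:

    import os

    def offload_transform(dev_path, data, offset=0, block=4096):
        # Write one block of input to the hypothetical compute device, then
        # read the same range back to fetch the transformed (for example
        # compressed or encrypted) output.
        fd = os.open(dev_path, os.O_RDWR)
        try:
            os.pwrite(fd, data.ljust(block, b"\0"), offset)
            return os.pread(fd, block, offset)
        finally:
            os.close(fd)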

Thanks,
Ben

On Sun, 2018-06-17 at 08:20 +0800, Zhipeng Huang wrote:
> Thanks Paul,
> 
> Will there is no extra features per se at the moment, but we will gradually
> rolls that out hopefully. The current spdk driver code could be found at
> https://github.com/openstack/cyborg/blob/master/cyborg/accelerator/drivers/spdk/nvmf/nvmf.py
> 
> Well the purpose of the discussion is actually quite simple, we suggest to add
> Cyborg as another project like Cinder which supports spdk in the SPDK
> architecture figure :) I think our work in Cyborg is good for SPDK's ecosystem
> and would be great to acknowledge that :)
> 
> Sorry for the many lengthy emails :)
> 
> 
> On Sat, Jun 16, 2018 at 11:15 PM Luse, Paul E <paul.e.luse(a)intel.com> wrote:
> > Thanks for the information! Are there currently any Cyborg drivers (is that
> > the right term?) that implement NVMeOF? It would be great to take a look at
> > one to help clear things up a bit – what I’m still not seeing is exactly
> > what “extra capabilities” you mention below that could be exposed/exploited
> > via Cyborg & SPDK vs Cinder & SPDK.
> >  
> > Also, assuming that nothing else is required from the SPDK
> > codebase/community, is the purpose of this email chain just to
> > inform/educate and help enlist the SPDK community in promoting its use in
> > Cyborg?
> >  
> > Note that I’m on holiday from now until July 2 and starting Mon afternoon
> > will not have access to email until I get back so when you don’t get any
> > kind of response from me, that's why ☺ Hopefully others will carry the
> > conversation forward as I know others are interested as well and fully
> > appreciate all of Helloway’s interest in SPDK thus far.  I’ll definitely
> > touch base when I return, thanks again!
> >  
> > -Paul
> >  
> > From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Zhipeng Huang
> > Sent: Friday, June 15, 2018 6:27 PM
> > To: Storage Performance Development Kit <spdk(a)lists.01.org>
> > Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
> >  
> > Hi Maciek and Paul,
> >  
> > I'm the current project lead of Cyborg so let me try to answer your
> > questions, maybe hellowway did not explain well before.
> >  
> > Cyborg[0] is a general management framework for accelerators (GPU. FPGA,
> > ASIC, NVMe/NOF, ...). We actually worked with Moshe's team in the project's
> > early days :) . The support of spdk driver has already landed in cyborg
> > (although still very premature).
> >  
> > It is important to notice that Cyborg does not provide volume management
> > which is the job of Cinder. Cyborg provide management on NVMe/NVMe Over
> > Fabric SSDs, meaning leveraging information that could be attained via the
> > Bus (and the NVMe/NOF protocol). A similar comparison could be found in
> > Cyborg management of FPGA/GPU. This could enable many great features
> > developed in NVMe protocol could be exposed via Cyborg to support more
> > granulized scheduling if the user want, instead of just being treated as a
> > block device without difference to traditional SSDs.
> >  
> > Therefore Cyborg is working alongside of Nova and Cinder. An example of how
> > Cyborg work with Nova on NVMe devices could be found in [1] starting 27:13,
> > which is also proposed from Intel developers. In essence, Cyborg helps Nova
> > get knowledge of NVMe SSDs for its various extra capabilities compared to
> > normal SSD (i.e, as an accelerator), Nova scheduler then will select a node
> > with such capability if the user desires to spawn a VM, and then Cinder will
> > just do its job on volume management.
> >  
> > So on this note, Cyborg is working with Nova and Cinder, the relationship is
> > complimentary. Cyborg will interact with Nova through Placement, there is no
> > need at the moment for Cyborg to interact with Cinder. It could still work
> > without Cyborg in the picture for sure, but NVMe SSD will be seen just as a
> > normal block device by Nova and Cinder, and no advanced scheduling could be
> > performed.
> >  
> > Re Paul's question:
> > - Just a good Cyborg SPDK driver could make it work in OpenStack
> > - No additional requirement on SPDK community itself
> > - No additional tweaks in SPDK needed specifically for Cyborg to work with
> > it.
> >  
> > OpenSDS is just another option, since it supports capability report
> > functionality, it could gather information from Cyborg to make more
> > granulized scheduling, unlike Cinder which does not get input from Cyborg
> > and just perform regular volume management. There is no good or bad to
> > compare two solutions. It depends on user's requirement.
> >  
> > I hope this writing could clear things up :)
> >  
> > [0] https://wiki.openstack.org/wiki/Cyborg
> > [1] https://www.youtube.com/watch?v=q84Q-6FSXts 
> >  
> >  
> > On Fri, Jun 15, 2018 at 10:28 PM Szwed, Maciej <maciej.szwed(a)intel.com>
> > wrote:
> > > Hi Helloway,
> > > I’ve been working on Cinder drivers (volume and target) for SPDK. I’m not
> > > familiar with Cyborg. Does Cyborg have capabilities to manage volumes? How
> > > does it interact with Nova – how does it provide storage to compute nodes?
> > >  
> > > Thanks,
> > > Maciek
> > >  
> > > From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Luse, Paul E
> > > Sent: Thursday, June 14, 2018 5:27 AM
> > > To: Storage Performance Development Kit <spdk(a)lists.01.org>
> > > Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
> > >  
> > > Hi Helloway,
> > >  
> > > No problem.  Cinder is the block storage provisioning project for
> > > Openstack. A VM needing block storage interacts with Cinder to request
> > > what it needs and storage providers that have written provider (aka
> > > drivers) for Cinders are matched up based on requests. Here’s the overview
> > > from the project: https://docs.openstack.org/newton/install-guide-ubuntu/common/get-started-block-storage.html
> > > and this is by far the most common
> > > way that I’m aware of to enable a new block storage application in
> > > OpenStack.
> > >  
> > > I haven’t worked in that world in a few years such that Cyborg and OpenSDS
> > > have been introduced since I was active but as far as I know Cinder is
> > > still the best place to start introducing SPDK based block storage into
> > > the Openstack cloud. I do have some OpenSDS contacts, and still some
> > > friends who work on Cinder.  Let me ask around a little, note that I will
> > > be out of the office for 2 weeks of “disconnected” vacation after this
> > > Friday but I’ll try and get a bit more info before then and get back to
> > > ya.
> > >  
> > > Anyone else out there feel free to chime in if you have more info on
> > > Cyborg+OpenSDS vs Cinder as it pertains to this discussion. Once we figure
> > > that out, the questions below are still very relevant for next steps:
> > >  
> > > - what else is required to be pushed into OpenStack for this to work
> > > - is anything required in the SPDK repo for this to work
> > > - how will the necessary SPDK components be associated with the VM in
> > > question and subsequently configured
> > >  
> > > Thanks!
> > > Paul
> > >  
> > > From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of helloway
> > > Sent: Wednesday, June 13, 2018 7:47 PM
> > > To: Storage Performance Development Kit <spdk(a)lists.01.org>
> > > Cc: Storage Performance Development Kit <spdk(a)lists.01.org>
> > > Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
> > >  
> > > Hi Paul,
> > >  
> > > I am sorry for that I'm not an expert on Cinder, I‘ll try to tell you what
> > > I think. If I was wrong, please correct me. From my perspective, Cinder
> > > cares more about the capacity of the pool whose pool capabilities
> > > reporting interface is fixed, whereas, in addition to the capacity, Cyborg
> > > also cares about the fine-grained accelerator capabilities (e.g. iops,
> > > queue, etc.). These capabilities, reported  from the Cyborg, can be
> > > dynamic configured and handled through the OpenSDS’ profile. For this
> > > reason, it provides a more flexible and simple configuration which can be
> > > called conveniently.
> > >  
> > > Thx,
> > > Helloway
> > >  
> > > On 06/13/2018 22:46,Luse, Paul E<paul.e.luse(a)intel.com> wrote:
> > > > Hi Helloway,
> > > >  
> > > > That’s a great start but I still have the same open questions below,
> > > > maybe you can try and address those directly? Also, below is the link
> > > > for adding SPDK based NVMeOF as a Cinder plug-in.  In addition to the
> > > > question below can you please explain for everyone how you see the
> > > > approach of using Cyborg and OpenSDS compares with the seemingly simpler
> > > > approach of providing a Cinder plug-in?
> > > >  
> > > > https://review.openstack.org/#/c/564229/
> > > >  
> > > > Thanks!!
> > > > Paul
> > > >  
> > > >  
> > > > From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of helloway
> > > > Sent: Tuesday, June 12, 2018 5:22 PM
> > > > To: Storage Performance Development Kit <spdk(a)lists.01.org>
> > > > Cc: Storage Performance Development Kit <spdk(a)lists.01.org>
> > > > Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
> > > >  
> > > > Hi Paul,
> > > > Thank you for your response, I have submitted a trello [0] titled
> > > >  “Integrate OpenStack/Cyborg into SPDK Architecture”. I am trying to
> > > > answer your questions from this trello, do I make sense? I really hope
> > > > to receive your feedback.
> > > >  
> > > > [0]https://trello.com/c/QfSAkLSS/121-integrate-openstack-cyborg-into-spdk-architecture
> > > >  
> > > > Thx,
> > > > Helloway
> > > > On 06/12/2018 08:16,Luse, Paul E<paul.e.luse(a)intel.com> wrote:
> > > > > Hi Helloway,
> > > > >  
> > > > > I was actually just wondering what had happened with this.  Looking at
> > > > > the OpenStack patch it looks like it’s close to landing. Somewhere out
> > > > > there we have a Cinder driver that’s also getting fairly close I
> > > > > believe so for sure integration with OpenStack is interesting to many
> > > > > in the community.
> > > > >  
> > > > > Would you be able to summarize more specifically how your patch would
> > > > > work once it lands? I of course see your high level description below
> > > > > that some questions that I have, I’m assuming others as well, include:
> > > > >  
> > > > > - what else is required to be pushed into OpenStack for this to work
> > > > > - is anything required in the SPDK repo for this to work
> > > > > - how will the necessary SPDK components be associated with the VM in
> > > > > question and subsequently configured
> > > > >  
> > > > > Thanks for continuing to work on this!
> > > > >  
> > > > > -Paul
> > > > >  
> > > > > From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of helloway
> > > > > Sent: Monday, June 11, 2018 1:35 AM
> > > > > To: Storage Performance Development Kit <spdk(a)lists.01.org>
> > > > > Subject: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
> > > > >  
> > > > > Hi Jim and all,
> > > > >  
> > > > > Do you know OpenStack/Cyborg? It is OpenStack acceleration service
> > > > > which provides a management framework for accelerator devices (e.g.
> > > > > FPGA, GPU, NVMe SSD). There is a strong demand for OpenStack to
> > > > > support hardware accelerated devices in a dynamic model(mentioned in
> > > > > OpenStack Summit Vancouver 2018 [0]). 
> > > > >  
> > > > > For this reason, we can use Cyborg to interactive with nvmf_tgt to
> > > > > realize the management of the user space accelerator NVMe SSD device,
> > > > > which can badly promote the efficiency. It is worth mentioning that
> > > > > the Cyborg_SPDK_Driver I summitted has been merged into the OpenStack
> > > > > version Q [1]. The driver can report the detailed information of the
> > > > > device to the Cyborg agent. When user requests a vm with a user space
> > > > > NVMe SSD, Cyborg agent will update the Nova/Placement inventory on
> > > > > available NVMe devices. This is a complete process to describe the
> > > > > connection of Cyborg and SPDK.
> > > > >  
> > > > > I wonder whether you guys are interested in integrating
> > > > > OpenStack/Cyborg into SPDK architecture? Do I make sense? Please let
> > > > > me know what your thoughts.
> > > > >  
> > > > > [0]https://www.openstack.org/videos/vancouver-2018/optimized-hpcai-cloud-with-openstack-acceleration-service-and-composable-hardware
> > > > > [1]https://review.openstack.org/#/c/538164/
> > > > >  
> > > > >  
> > > > > Thx,
> > > > > Helloway
> > > 
> > > _______________________________________________
> > > SPDK mailing list
> > > SPDK(a)lists.01.org
> > > https://lists.01.org/mailman/listinfo/spdk
> > 
> >  
> > --
> > Zhipeng (Howard) Huang
> >  
> > Standard Engineer
> > IT Standard & Patent/IT Product Line
> > Huawei Technologies Co,. Ltd
> > Email: huangzhipeng(a)huawei.com
> > Office: Huawei Industrial Base, Longgang, Shenzhen
> >  
> > (Previous)
> > Research Assistant
> > Mobile Ad-Hoc Network Lab, Calit2
> > University of California, Irvine
> > Email: zhipengh(a)uci.edu
> > Office: Calit2 Building Room 2402
> >  
> > OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
> > _______________________________________________
> > SPDK mailing list
> > SPDK(a)lists.01.org
> > https://lists.01.org/mailman/listinfo/spdk
> 
> 
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
@ 2018-06-16 15:15 Luse, Paul E
  0 siblings, 0 replies; 15+ messages in thread
From: Luse, Paul E @ 2018-06-16 15:15 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 11485 bytes --]

Thanks for the information! Are there currently any Cyborg drivers (is that the right term?) that implement NVMeOF? It would be great to take a look at one to help clear things up a bit – what I’m still not seeing is exactly what “extra capabilities” you mention below that could be exposed/exploited via Cyborg & SPDK vs Cinder & SPDK.

Also, assuming that nothing else is required from the SPDK codebase/community, is the purpose of this email chain just to inform/educate and help enlist the SPDK community in promoting its use in Cyborg?

Note that I’m on holiday from now until July 2 and starting Mon afternoon will not have access to email until I get back so when you don’t get any kind of response from me, that’s why ☺ Hopefully others will carry the conversation forward as I know others are interested as well and fully appreciate all of Helloway’s interest in SPDK thus far.  I’ll definitely touch base when I return, thanks again!

-Paul

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Zhipeng Huang
Sent: Friday, June 15, 2018 6:27 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture

Hi Maciek and Paul,

I'm the current project lead of Cyborg, so let me try to answer your questions; perhaps helloway did not explain them well before.

Cyborg [0] is a general management framework for accelerators (GPU, FPGA, ASIC, NVMe/NVMe-oF, ...). We actually worked with Moshe's team in the project's early days :) . An SPDK driver has already landed in Cyborg (although it is still very premature).

It is important to note that Cyborg does not provide volume management; that is the job of Cinder. Cyborg provides management of NVMe and NVMe over Fabrics SSDs, leveraging information that can be obtained via the bus (and the NVMe/NVMe-oF protocol). A similar pattern can be found in Cyborg's management of FPGAs/GPUs. This enables many of the great features developed in the NVMe protocol to be exposed via Cyborg to support more granular scheduling if the user wants it, instead of the device just being treated as a block device indistinguishable from a traditional SSD.

Therefore Cyborg works alongside Nova and Cinder. An example of how Cyborg works with Nova on NVMe devices can be found in [1] starting at 27:13, which was also proposed by Intel developers. In essence, Cyborg helps Nova learn about NVMe SSDs and their various extra capabilities compared to normal SSDs (i.e., as accelerators); the Nova scheduler will then select a node with such a capability when the user wants to spawn a VM, and Cinder will just do its job of volume management.

So on this note, Cyborg works with Nova and Cinder; the relationship is complementary. Cyborg interacts with Nova through Placement, and there is no need at the moment for Cyborg to interact with Cinder. It can still work without Cyborg in the picture for sure, but the NVMe SSD will be seen as just a normal block device by Nova and Cinder, and no advanced scheduling can be performed.

Re Paul's questions:
- A good Cyborg SPDK driver is all that is needed to make this work in OpenStack
- There are no additional requirements on the SPDK community itself
- No additional tweaks are needed in SPDK specifically for Cyborg to work with it.

OpenSDS is just another option: since it supports capability-reporting functionality, it can gather information from Cyborg to make more granular scheduling decisions, unlike Cinder, which does not take input from Cyborg and just performs regular volume management. Neither solution is good or bad in comparison; it depends on the user's requirements.

I hope this clears things up :)

[0] https://wiki.openstack.org/wiki/Cyborg
[1] https://www.youtube.com/watch?v=q84Q-6FSXts


On Fri, Jun 15, 2018 at 10:28 PM Szwed, Maciej <maciej.szwed(a)intel.com<mailto:maciej.szwed(a)intel.com>> wrote:
Hi Helloway,
I’ve been working on Cinder drivers (volume and target) for SPDK. I’m not familiar with Cyborg. Does Cyborg have capabilities to manage volumes? How does it interact with Nova – how does it provide storage to compute nodes?

Thanks,
Maciek

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of Luse, Paul E
Sent: Thursday, June 14, 2018 5:27 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture

Hi Helloway,

No problem.  Cinder is the block storage provisioning project for OpenStack. A VM needing block storage interacts with Cinder to request what it needs, and storage providers that have written providers (aka drivers) for Cinder are matched up based on those requests. Here’s the overview from the project: https://docs.openstack.org/newton/install-guide-ubuntu/common/get-started-block-storage.html and this is by far the most common way that I’m aware of to enable a new block storage application in OpenStack.

I haven’t worked in that world in a few years (Cyborg and OpenSDS were introduced after I was active), but as far as I know Cinder is still the best place to start introducing SPDK-based block storage into the OpenStack cloud. I do have some OpenSDS contacts, and still some friends who work on Cinder.  Let me ask around a little; note that I will be out of the office for 2 weeks of “disconnected” vacation after this Friday, but I’ll try to get a bit more info before then and get back to ya.

Anyone else out there feel free to chime in if you have more info on Cyborg+OpenSDS vs Cinder as it pertains to this discussion. Once we figure that out, the questions below are still very relevant for next steps:

- what else is required to be pushed into OpenStack for this to work
- is anything required in the SPDK repo for this to work
- how will the necessary SPDK components be associated with the VM in question and subsequently configured

Thanks!
Paul

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of helloway
Sent: Wednesday, June 13, 2018 7:47 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Cc: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture

Hi Paul,

I am sorry, but I'm not an expert on Cinder; I’ll try to tell you what I think, and if I am wrong, please correct me. From my perspective, Cinder cares more about the capacity of the pool, whose pool-capabilities reporting interface is fixed, whereas, in addition to capacity, Cyborg also cares about fine-grained accelerator capabilities (e.g. IOPS, queues, etc.). These capabilities, reported by Cyborg, can be dynamically configured and handled through the OpenSDS profile. For this reason, it provides a more flexible and simpler configuration that can be invoked conveniently.
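
To illustrate the contrast, here is a hypothetical sketch; the field names below are invented for clarity and are not an actual Cinder or Cyborg schema:

# A fixed pool-capacity report, roughly what Cinder-style scheduling keys on:
pool_capacity_report = {
    "pool_name": "spdk_pool_1",
    "total_capacity_gb": 4096,
    "free_capacity_gb": 2048,
}

# A finer-grained accelerator capability report of the kind described above:
accelerator_capability_report = {
    "device": "nvme0",
    "type": "NVMe-oF target",
    "capabilities": {
        "read_iops": 700000,      # assumed per-device figures
        "write_iops": 550000,
        "num_io_queues": 64,
        "max_queue_depth": 1024,
    },
}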

Thx,
Helloway

On 06/13/2018 22:46,Luse, Paul E<paul.e.luse(a)intel.com><mailto:paul.e.luse(a)intel.com> wrote:
Hi Helloway,

That’s a great start, but I still have the same open questions below; maybe you can try to address those directly? Also, below is the link for adding SPDK-based NVMe-oF as a Cinder plug-in.  In addition to the questions below, can you please explain for everyone how the approach of using Cyborg and OpenSDS compares with the seemingly simpler approach of providing a Cinder plug-in?

https://review.openstack.org/#/c/564229/

Thanks!!
Paul


From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of helloway
Sent: Tuesday, June 12, 2018 5:22 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Cc: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture

Hi Paul,
Thank you for your response. I have submitted a Trello card [0] titled “Integrate OpenStack/Cyborg into SPDK Architecture”, and I have tried to answer your questions on that card. Does that make sense? I really hope to receive your feedback.

[0]https://trello.com/c/QfSAkLSS/121-integrate-openstack-cyborg-into-spdk-architecture

Thx,
Helloway
On 06/12/2018 08:16,Luse, Paul E<paul.e.luse(a)intel.com><mailto:paul.e.luse(a)intel.com> wrote:
Hi Helloway,

I was actually just wondering what had happened with this.  Looking at the OpenStack patch it looks like it’s close to landing. Somewhere out there we have a Cinder driver that’s also getting fairly close, I believe, so integration with OpenStack is certainly interesting to many in the community.

Would you be able to summarize more specifically how your patch would work once it lands? I of course see your high-level description below, but some questions that I have (and I’m assuming others do as well) include:

- what else is required to be pushed into OpenStack for this to work
- is anything required in the SPDK repo for this to work
- how will the necessary SPDK components be associated with the VM in question and subsequently configured

Thanks for continuing to work on this!

-Paul

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of helloway
Sent: Monday, June 11, 2018 1:35 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture

Hi Jim and all,

Do you know OpenStack/Cyborg? It is the OpenStack acceleration service, which provides a management framework for accelerator devices (e.g. FPGA, GPU, NVMe SSD). There is a strong demand for OpenStack to support hardware-accelerated devices in a dynamic model (mentioned at OpenStack Summit Vancouver 2018 [0]).

For this reason, we can use Cyborg to interact with nvmf_tgt to manage user-space NVMe SSD accelerator devices, which can greatly improve efficiency. It is worth mentioning that the Cyborg_SPDK_Driver I submitted has been merged into the OpenStack Queens release [1]. The driver reports detailed information about the device to the Cyborg agent. When a user requests a VM with a user-space NVMe SSD, the Cyborg agent will update the Nova/Placement inventory of available NVMe devices. This is the complete process describing the connection between Cyborg and SPDK.

I wonder whether you are interested in integrating OpenStack/Cyborg into the SPDK architecture. Does that make sense? Please let me know your thoughts.

[0]https://www.openstack.org/videos/vancouver-2018/optimized-hpcai-cloud-with-openstack-acceleration-service-and-composable-hardware
[1]https://review.openstack.org/#/c/538164/


Thx,
Helloway
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhipeng(a)huawei.com<mailto:huangzhipeng(a)huawei.com>
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh(a)uci.edu<mailto:zhipengh(a)uci.edu>
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 38780 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
@ 2018-06-16  1:27 Zhipeng Huang
  0 siblings, 0 replies; 15+ messages in thread
From: Zhipeng Huang @ 2018-06-16  1:27 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 10552 bytes --]

Hi Maciek and Paul,

I'm the current project lead of Cyborg, so let me try to answer your
questions; perhaps helloway did not explain them well before.

Cyborg [0] is a general management framework for accelerators (GPU, FPGA,
ASIC, NVMe/NVMe-oF, ...). We actually worked with Moshe's team in the
project's early days :) . An SPDK driver has already landed in Cyborg
(although it is still very premature).

It is important to note that Cyborg does not provide volume management;
that is the job of Cinder. Cyborg provides management of NVMe and NVMe
over Fabrics SSDs, leveraging information that can be obtained via the bus
(and the NVMe/NVMe-oF protocol). A similar pattern can be found in
Cyborg's management of FPGAs/GPUs. This enables many of the great features
developed in the NVMe protocol to be exposed via Cyborg to support more
granular scheduling if the user wants it, instead of the device just being
treated as a block device indistinguishable from a traditional SSD.
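
To make "information attained via the bus" concrete, here is a rough
sketch, illustrative only and not the actual Cyborg driver code, of how a
discovery agent might collect per-controller details from the standard
Linux NVMe sysfs layout (attribute availability can vary by kernel):

import os

SYSFS_NVME = "/sys/class/nvme"

def read_attr(dev_path, name):
    # Return a sysfs attribute value, or None if it is not exposed.
    try:
        with open(os.path.join(dev_path, name)) as f:
            return f.read().strip()
    except OSError:
        return None

def discover_nvme_controllers():
    # Collect identity details for each NVMe controller on this host.
    devices = []
    if not os.path.isdir(SYSFS_NVME):
        return devices
    for ctrl in sorted(os.listdir(SYSFS_NVME)):
        dev_path = os.path.join(SYSFS_NVME, ctrl)
        devices.append({
            "name": ctrl,
            "model": read_attr(dev_path, "model"),
            "serial": read_attr(dev_path, "serial"),
            "firmware": read_attr(dev_path, "firmware_rev"),
            # NUMA locality is one example of a scheduling-relevant detail.
            "numa_node": read_attr(os.path.join(dev_path, "device"),
                                   "numa_node"),
        })
    return devices

for dev in discover_nvme_controllers():
    print(dev)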

Therefore Cyborg works alongside Nova and Cinder. An example of how Cyborg
works with Nova on NVMe devices can be found in [1] starting at 27:13,
which was also proposed by Intel developers. In essence, Cyborg helps Nova
learn about NVMe SSDs and their various extra capabilities compared to
normal SSDs (i.e., as accelerators); the Nova scheduler will then select a
node with such a capability when the user wants to spawn a VM, and Cinder
will just do its job of volume management.

So on this note, Cyborg works with Nova and Cinder; the relationship is
complementary. Cyborg interacts with Nova through Placement, and there is
no need at the moment for Cyborg to interact with Cinder. It can still
work without Cyborg in the picture for sure, but the NVMe SSD will be seen
as just a normal block device by Nova and Cinder, and no advanced
scheduling can be performed.
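
To make the Placement interaction concrete, here is a rough sketch; the
field names follow the Placement inventory schema as I understand it, the
resource class and endpoint are invented for illustration, and this is not
code from Cyborg itself:

import json
import urllib.request

PLACEMENT_URL = "http://placement.example.com"  # hypothetical endpoint
PROVIDER_UUID = "REPLACE-WITH-PROVIDER-UUID"    # this host's provider UUID

def build_nvme_inventory(num_devices):
    # A custom resource class counting attachable user-space NVMe SSDs.
    return {
        "inventories": {
            "CUSTOM_ACCELERATOR_NVME": {
                "total": num_devices,
                "reserved": 0,
                "min_unit": 1,
                "max_unit": num_devices,
                "step_size": 1,
                "allocation_ratio": 1.0,
            }
        }
    }

def report_inventory(num_devices):
    # PUT the inventory to Placement. A real call also needs an auth token
    # and the current resource-provider generation; both are omitted here.
    body = json.dumps(build_nvme_inventory(num_devices)).encode()
    req = urllib.request.Request(
        PLACEMENT_URL + "/resource_providers/" + PROVIDER_UUID +
        "/inventories",
        data=body, method="PUT",
        headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)

With inventory like this in place, a flavor can request the custom resource
class, and the scheduler will only consider hosts that actually report such
devices.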

Re Paul's questions:
- A good Cyborg SPDK driver is all that is needed to make this work in
OpenStack
- There are no additional requirements on the SPDK community itself
- No additional tweaks are needed in SPDK specifically for Cyborg to work
with it.

OpenSDS is just another option: since it supports capability-reporting
functionality, it can gather information from Cyborg to make more granular
scheduling decisions, unlike Cinder, which does not take input from Cyborg
and just performs regular volume management. Neither solution is good or
bad in comparison; it depends on the user's requirements.

I hope this clears things up :)

[0] https://wiki.openstack.org/wiki/Cyborg
[1] https://www.youtube.com/watch?v=q84Q-6FSXts


On Fri, Jun 15, 2018 at 10:28 PM Szwed, Maciej <maciej.szwed(a)intel.com>
wrote:

> Hi Helloway,
>
> I’ve been working on Cinder drivers (volume and target) for SPDK. I’m not
> familiar with Cyborg. Does Cyborg have capabilities to manage volumes? How
> does it interact with Nova – how does it provide storage to compute nodes?
>
>
>
> Thanks,
>
> Maciek
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Luse, Paul
> E
> *Sent:* Thursday, June 14, 2018 5:27 AM
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
>
>
>
> Hi Helloway,
>
>
>
> No problem.  Cinder is the block storage provisioning project for
> OpenStack. A VM needing block storage interacts with Cinder to request
> what it needs, and storage providers that have written providers (aka
> drivers) for Cinder are matched up based on those requests. Here’s the
> overview from the project:
> https://docs.openstack.org/newton/install-guide-ubuntu/common/get-started-block-storage.html
> and this is by far the most common way that I’m aware of to enable a new
> block storage application in OpenStack.
>
>
>
> I haven’t worked in that world in a few years (Cyborg and OpenSDS were
> introduced after I was active), but as far as I know Cinder is still the
> best place to start introducing SPDK-based block storage into the
> OpenStack cloud. I do have some OpenSDS contacts, and still some friends
> who work on Cinder.  Let me ask around a little; note that I will be out
> of the office for 2 weeks of “disconnected” vacation after this Friday,
> but I’ll try to get a bit more info before then and get back to ya.
>
>
>
> Anyone else out there feel free to chime in if you have more info on
> Cyborg+OpenSDS vs Cinder as it pertains to this discussion. Once we figure
> that out, the questions below are still very relevant for next steps:
>
>
>
> - what else is required to be pushed into OpenStack for this to work
>
> - is anything required in the SPDK repo for this to work
>
> - how will the necessary SPDK components be associated with the VM in
> question and subsequently configured
>
>
>
> Thanks!
>
> Paul
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org <spdk-bounces(a)lists.01.org>]
> *On Behalf Of *helloway
> *Sent:* Wednesday, June 13, 2018 7:47 PM
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Cc:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
>
>
>
> Hi Paul,
>
>
>
> I am sorry, but I'm not an expert on Cinder; I’ll try to tell you what I
> think, and if I am wrong, please correct me. From my perspective, Cinder
> cares more about the capacity of the pool, whose pool-capabilities
> reporting interface is fixed, whereas, in addition to capacity, Cyborg
> also cares about fine-grained accelerator capabilities (e.g. IOPS,
> queues, etc.). These capabilities, reported by Cyborg, can be dynamically
> configured and handled through the OpenSDS profile. For this reason, it
> provides a more flexible and simpler configuration that can be invoked
> conveniently.
>
>
>
> Thx,
>
> Helloway
>
>
>
> On 06/13/2018 22:46,Luse, Paul E<paul.e.luse(a)intel.com>
> <paul.e.luse(a)intel.com> wrote:
>
> Hi Helloway,
>
>
>
> That’s a great start, but I still have the same open questions below;
> maybe you can try to address those directly? Also, below is the link for
> adding SPDK-based NVMe-oF as a Cinder plug-in.  In addition to the
> questions below, can you please explain for everyone how the approach of
> using Cyborg and OpenSDS compares with the seemingly simpler approach of
> providing a Cinder plug-in?
>
>
>
> https://review.openstack.org/#/c/564229/
>
>
>
> Thanks!!
>
> Paul
>
>
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *helloway
> *Sent:* Tuesday, June 12, 2018 5:22 PM
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Cc:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
>
>
>
> Hi Paul,
>
> Thank you for your response. I have submitted a Trello card [0] titled
> “Integrate OpenStack/Cyborg into SPDK Architecture”, and I have tried to
> answer your questions on that card. Does that make sense? I really hope
> to receive your feedback.
>
>
>
> [0]
> https://trello.com/c/QfSAkLSS/121-integrate-openstack-cyborg-into-spdk-architecture
>
>
>
>
> Thx,
>
> Helloway
>
> On 06/12/2018 08:16,Luse, Paul E<paul.e.luse(a)intel.com>
> <paul.e.luse(a)intel.com> wrote:
>
> Hi Helloway,
>
>
>
> I was actually just wondering what had happened with this.  Looking at the
> OpenStack patch it looks like it’s close to landing. Somewhere out there we
> have a Cinder driver that’s also getting fairly close, I believe, so
> integration with OpenStack is certainly interesting to many in the community.
>
>
>
> Would you be able to summarize more specifically how your patch would work
> once it lands? I of course see your high-level description below, but some
> questions that I have (and I’m assuming others do as well) include:
>
>
>
> - what else is required to be pushed into OpenStack for this to work
>
> - is anything required in the SPDK repo for this to work
>
> - how will the necessary SPDK components be associated with the VM in
> question and subsequently configured
>
>
>
> Thanks for continuing to work on this!
>
>
>
> -Paul
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *helloway
> *Sent:* Monday, June 11, 2018 1:35 AM
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
>
>
>
> Hi Jim and all,
>
>
>
> Do you know OpenStack/Cyborg? It is the OpenStack acceleration service,
> which provides a management framework for accelerator devices (e.g. FPGA,
> GPU, NVMe SSD). There is a strong demand for OpenStack to support
> hardware-accelerated devices in a dynamic model (mentioned at OpenStack
> Summit Vancouver 2018 [0]).
>
>
>
> For this reason, we can use Cyborg to interact with nvmf_tgt to manage
> user-space NVMe SSD accelerator devices, which can greatly improve
> efficiency. It is worth mentioning that the Cyborg_SPDK_Driver I submitted
> has been merged into the OpenStack Queens release [1]. The driver reports
> detailed information about the device to the Cyborg agent. When a user
> requests a VM with a user-space NVMe SSD, the Cyborg agent will update the
> Nova/Placement inventory of available NVMe devices. This is the complete
> process describing the connection between Cyborg and SPDK.
>
>
>
> I wonder whether you are interested in integrating OpenStack/Cyborg into
> the SPDK architecture. Does that make sense? Please let me know your
> thoughts.
>
>
>
> [0]
> https://www.openstack.org/videos/vancouver-2018/optimized-hpcai-cloud-with-openstack-acceleration-service-and-composable-hardware
>
> [1]https://review.openstack.org/#/c/538164/
>
>
>
>
>
> Thx,
>
> Helloway
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>


-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhipeng(a)huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh(a)uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 27908 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
@ 2018-06-15 14:28 Szwed, Maciej
  0 siblings, 0 replies; 15+ messages in thread
From: Szwed, Maciej @ 2018-06-15 14:28 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 6854 bytes --]

Hi Helloway,
I’ve been working on Cinder drivers (volume and target) for SPDK. I’m not familiar with Cyborg. Does Cyborg have capabilities to manage volumes? How does it interact with Nova – how does it provide storage to compute nodes?

Thanks,
Maciek

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Luse, Paul E
Sent: Thursday, June 14, 2018 5:27 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture

Hi Helloway,

No problem.  Cinder is the block storage provisioning project for OpenStack. A VM needing block storage interacts with Cinder to request what it needs, and storage providers that have written providers (aka drivers) for Cinder are matched up based on those requests. Here’s the overview from the project: https://docs.openstack.org/newton/install-guide-ubuntu/common/get-started-block-storage.html and this is by far the most common way that I’m aware of to enable a new block storage application in OpenStack.

I haven’t worked in that world in a few years (Cyborg and OpenSDS were introduced after I was active), but as far as I know Cinder is still the best place to start introducing SPDK-based block storage into the OpenStack cloud. I do have some OpenSDS contacts, and still some friends who work on Cinder.  Let me ask around a little; note that I will be out of the office for 2 weeks of “disconnected” vacation after this Friday, but I’ll try to get a bit more info before then and get back to ya.

Anyone else out there feel free to chime in if you have more info on Cyborg+OpenSDS vs Cinder as it pertains to this discussion. Once we figure that out, the questions below are still very relevant for next steps:

- what else is required to be pushed into OpenStack for this to work
- is anything required in the SPDK repo for this to work
- how will the necessary SPDK components be associated with the VM in question and subsequently configured

Thanks!
Paul

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of helloway
Sent: Wednesday, June 13, 2018 7:47 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Cc: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture

Hi Paul,

I am sorry, but I'm not an expert on Cinder; I’ll try to tell you what I think, and if I am wrong, please correct me. From my perspective, Cinder cares more about the capacity of the pool, whose pool-capabilities reporting interface is fixed, whereas, in addition to capacity, Cyborg also cares about fine-grained accelerator capabilities (e.g. IOPS, queues, etc.). These capabilities, reported by Cyborg, can be dynamically configured and handled through the OpenSDS profile. For this reason, it provides a more flexible and simpler configuration that can be invoked conveniently.

Thx,
Helloway

On 06/13/2018 22:46,Luse, Paul E<paul.e.luse(a)intel.com><mailto:paul.e.luse(a)intel.com> wrote:
Hi Helloway,

That’s a great start, but I still have the same open questions below; maybe you can try to address those directly? Also, below is the link for adding SPDK-based NVMe-oF as a Cinder plug-in.  In addition to the questions below, can you please explain for everyone how the approach of using Cyborg and OpenSDS compares with the seemingly simpler approach of providing a Cinder plug-in?

https://review.openstack.org/#/c/564229/

Thanks!!
Paul


From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of helloway
Sent: Tuesday, June 12, 2018 5:22 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Cc: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture

Hi Paul,
Thank you for your response. I have submitted a Trello card [0] titled “Integrate OpenStack/Cyborg into SPDK Architecture”, and I have tried to answer your questions on that card. Does that make sense? I really hope to receive your feedback.

[0]https://trello.com/c/QfSAkLSS/121-integrate-openstack-cyborg-into-spdk-architecture

Thx,
Helloway
On 06/12/2018 08:16,Luse, Paul E<paul.e.luse(a)intel.com><mailto:paul.e.luse(a)intel.com> wrote:
Hi Helloway,

I was actually just wondering what had happened with this.  Looking at the OpenStack patch it looks like it’s close to landing. Somewhere out there we have a Cinder driver that’s also getting fairly close, I believe, so integration with OpenStack is certainly interesting to many in the community.

Would you be able to summarize more specifically how your patch would work once it lands? I of course see your high-level description below, but some questions that I have (and I’m assuming others do as well) include:

- what else is required to be pushed into OpenStack for this to work
- is anything required in the SPDK repo for this to work
- how will the necessary SPDK components be associated with the VM in question and subsequently configured

Thanks for continuing to work on this!

-Paul

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of helloway
Sent: Monday, June 11, 2018 1:35 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture

Hi Jim and all,

Do you know OpenStack/Cyborg? It is the OpenStack acceleration service, which provides a management framework for accelerator devices (e.g. FPGA, GPU, NVMe SSD). There is a strong demand for OpenStack to support hardware-accelerated devices in a dynamic model (mentioned at OpenStack Summit Vancouver 2018 [0]).

For this reason, we can use Cyborg to interact with nvmf_tgt to manage user-space NVMe SSD accelerator devices, which can greatly improve efficiency. It is worth mentioning that the Cyborg_SPDK_Driver I submitted has been merged into the OpenStack Queens release [1]. The driver reports detailed information about the device to the Cyborg agent. When a user requests a VM with a user-space NVMe SSD, the Cyborg agent will update the Nova/Placement inventory of available NVMe devices. This is the complete process describing the connection between Cyborg and SPDK.

I wonder whether you are interested in integrating OpenStack/Cyborg into the SPDK architecture. Does that make sense? Please let me know your thoughts.

[0]https://www.openstack.org/videos/vancouver-2018/optimized-hpcai-cloud-with-openstack-acceleration-service-and-composable-hardware
[1]https://review.openstack.org/#/c/538164/


Thx,
Helloway

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 28258 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
@ 2018-06-14  3:26 Luse, Paul E
  0 siblings, 0 replies; 15+ messages in thread
From: Luse, Paul E @ 2018-06-14  3:26 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 6280 bytes --]

Hi Helloway,

No problem.  Cinder is the block storage provisioning project for OpenStack. A VM needing block storage interacts with Cinder to request what it needs, and storage providers that have written providers (aka drivers) for Cinder are matched up based on those requests. Here’s the overview from the project: https://docs.openstack.org/newton/install-guide-ubuntu/common/get-started-block-storage.html and this is by far the most common way that I’m aware of to enable a new block storage application in OpenStack.
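
To sketch what that provider contract looks like, here is an outline; a real driver would subclass cinder.volume.driver.VolumeDriver, and the bodies below are placeholders rather than a working SPDK driver:

class SketchSPDKVolumeDriver(object):
    # Outline of a Cinder block-storage provider (illustrative only).

    def get_volume_stats(self, refresh=False):
        # Cinder's scheduler matches requests against what is reported here.
        return {"volume_backend_name": "SPDK",
                "total_capacity_gb": 1024,
                "free_capacity_gb": 1024}

    def create_volume(self, volume):
        # Would, for example, create an SPDK logical volume via JSON-RPC.
        raise NotImplementedError

    def delete_volume(self, volume):
        raise NotImplementedError

    def initialize_connection(self, volume, connector):
        # Returns what a compute node needs to attach the volume, e.g. an
        # NVMe-oF target address and subsystem NQN.
        raise NotImplementedError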

I haven’t worked in that world in a few years (Cyborg and OpenSDS were introduced after I was active), but as far as I know Cinder is still the best place to start introducing SPDK-based block storage into the OpenStack cloud. I do have some OpenSDS contacts, and still some friends who work on Cinder.  Let me ask around a little; note that I will be out of the office for 2 weeks of “disconnected” vacation after this Friday, but I’ll try to get a bit more info before then and get back to ya.

Anyone else out there feel free to chime in if you have more info on Cyborg+OpenSDS vs Cinder as it pertains to this discussion. Once we figure that out, the questions below are still very relevant for next steps:

- what else is required to be pushed into OpenStack for this to work
- is anything required in the SPDK repo for this to work
- how will the necessary SPDK components be associated with the VM in question and subsequently configured

Thanks!
Paul

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of helloway
Sent: Wednesday, June 13, 2018 7:47 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Cc: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture

Hi Paul,

I am sorry, but I'm not an expert on Cinder; I’ll try to tell you what I think, and if I am wrong, please correct me. From my perspective, Cinder cares more about the capacity of the pool, whose pool-capabilities reporting interface is fixed, whereas, in addition to capacity, Cyborg also cares about fine-grained accelerator capabilities (e.g. IOPS, queues, etc.). These capabilities, reported by Cyborg, can be dynamically configured and handled through the OpenSDS profile. For this reason, it provides a more flexible and simpler configuration that can be invoked conveniently.

Thx,
Helloway

On 06/13/2018 22:46,Luse, Paul E<paul.e.luse(a)intel.com><mailto:paul.e.luse(a)intel.com> wrote:
Hi Helloway,

That’s a great start, but I still have the same open questions below; maybe you can try to address those directly? Also, below is the link for adding SPDK-based NVMe-oF as a Cinder plug-in.  In addition to the questions below, can you please explain for everyone how the approach of using Cyborg and OpenSDS compares with the seemingly simpler approach of providing a Cinder plug-in?

https://review.openstack.org/#/c/564229/

Thanks!!
Paul


From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of helloway
Sent: Tuesday, June 12, 2018 5:22 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Cc: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture

Hi Paul,
Thank you for your response. I have submitted a Trello card [0] titled “Integrate OpenStack/Cyborg into SPDK Architecture”, and I have tried to answer your questions on that card. Does that make sense? I really hope to receive your feedback.

[0]https://trello.com/c/QfSAkLSS/121-integrate-openstack-cyborg-into-spdk-architecture

Thx,
Helloway
On 06/12/2018 08:16,Luse, Paul E<paul.e.luse(a)intel.com><mailto:paul.e.luse(a)intel.com> wrote:
Hi Helloway,

I was actually just wondering what had happened with this.  Looking at the OpenStack patch it looks like it’s close to landing. Somewhere out there we have a Cinder driver that’s also getting fairly close, I believe, so integration with OpenStack is certainly interesting to many in the community.

Would you be able to summarize more specifically how your patch would work once it lands? I of course see your high-level description below, but some questions that I have (and I’m assuming others do as well) include:

- what else is required to be pushed into OpenStack for this to work
- is anything required in the SPDK repo for this to work
- how will the necessary SPDK components be associated with the VM in question and subsequently configured

Thanks for continuing to work on this!

-Paul

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of helloway
Sent: Monday, June 11, 2018 1:35 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture

Hi Jim and all,

Do you know OpenStack/Cyborg? It is the OpenStack acceleration service, which provides a management framework for accelerator devices (e.g. FPGA, GPU, NVMe SSD). There is a strong demand for OpenStack to support hardware-accelerated devices in a dynamic model (mentioned at OpenStack Summit Vancouver 2018 [0]).

For this reason, we can use Cyborg to interact with nvmf_tgt to manage user-space NVMe SSD accelerator devices, which can greatly improve efficiency. It is worth mentioning that the Cyborg_SPDK_Driver I submitted has been merged into the OpenStack Queens release [1]. The driver reports detailed information about the device to the Cyborg agent. When a user requests a VM with a user-space NVMe SSD, the Cyborg agent will update the Nova/Placement inventory of available NVMe devices. This is the complete process describing the connection between Cyborg and SPDK.

I wonder whether you are interested in integrating OpenStack/Cyborg into the SPDK architecture. Does that make sense? Please let me know your thoughts.

[0]https://www.openstack.org/videos/vancouver-2018/optimized-hpcai-cloud-with-openstack-acceleration-service-and-composable-hardware
[1]https://review.openstack.org/#/c/538164/


Thx,
Helloway

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 23598 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
@ 2018-06-13 14:46 Luse, Paul E
  0 siblings, 0 replies; 15+ messages in thread
From: Luse, Paul E @ 2018-06-13 14:46 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 3631 bytes --]

Hi Helloway,

That’s a great start, but I still have the same open questions below; maybe you can try to address those directly? Also, below is the link for adding SPDK-based NVMe-oF as a Cinder plug-in.  In addition to the questions below, can you please explain for everyone how the approach of using Cyborg and OpenSDS compares with the seemingly simpler approach of providing a Cinder plug-in?

https://review.openstack.org/#/c/564229/

Thanks!!
Paul


From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of helloway
Sent: Tuesday, June 12, 2018 5:22 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Cc: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture

Hi Paul,
Thank you for your response. I have submitted a Trello card [0] titled “Integrate OpenStack/Cyborg into SPDK Architecture”, and I have tried to answer your questions on that card. Does that make sense? I really hope to receive your feedback.

[0]https://trello.com/c/QfSAkLSS/121-integrate-openstack-cyborg-into-spdk-architecture

Thx,
Helloway
On 06/12/2018 08:16,Luse, Paul E<paul.e.luse(a)intel.com><mailto:paul.e.luse(a)intel.com> wrote:
Hi Helloway,

I was actually just wondering what had happened with this.  Looking at the OpenStack patch it looks like it’s close to landing. Somewhere out there we have a Cinder driver that’s also getting fairly close, I believe, so integration with OpenStack is certainly interesting to many in the community.

Would you be able to summarize more specifically how your patch would work once it lands? I of course see your high-level description below, but some questions that I have (and I’m assuming others do as well) include:

- what else is required to be pushed into OpenStack for this to work
- is anything required in the SPDK repo for this to work
- how will the necessary SPDK components be associated with the VM in question and subsequently configured

Thanks for continuing to work on this!

-Paul

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of helloway
Sent: Monday, June 11, 2018 1:35 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture

Hi Jim and all,

Do you know OpenStack/Cyborg? It is the OpenStack acceleration service, which provides a management framework for accelerator devices (e.g. FPGA, GPU, NVMe SSD). There is a strong demand for OpenStack to support hardware-accelerated devices in a dynamic model (mentioned at OpenStack Summit Vancouver 2018 [0]).

For this reason, we can use Cyborg to interact with nvmf_tgt to manage user-space NVMe SSD accelerator devices, which can greatly improve efficiency. It is worth mentioning that the Cyborg_SPDK_Driver I submitted has been merged into the OpenStack Queens release [1]. The driver reports detailed information about the device to the Cyborg agent. When a user requests a VM with a user-space NVMe SSD, the Cyborg agent will update the Nova/Placement inventory of available NVMe devices. This is the complete process describing the connection between Cyborg and SPDK.

I wonder whether you are interested in integrating OpenStack/Cyborg into the SPDK architecture. Does that make sense? Please let me know your thoughts.

[0]https://www.openstack.org/videos/vancouver-2018/optimized-hpcai-cloud-with-openstack-acceleration-service-and-composable-hardware
[1]https://review.openstack.org/#/c/538164/


Thx,
Helloway

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 13531 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
@ 2018-06-12  0:16 Luse, Paul E
  0 siblings, 0 replies; 15+ messages in thread
From: Luse, Paul E @ 2018-06-12  0:16 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 2328 bytes --]

Hi Helloway,

I was actually just wondering what had happened with this.  Looking at the OpenStack patch it looks like it’s close to landing. Somewhere out there we have a Cinder driver that’s also getting fairly close, I believe, so integration with OpenStack is certainly interesting to many in the community.

Would you be able to summarize more specifically how your patch would work once it lands? I of course see your high-level description below, but some questions that I have (and I’m assuming others do as well) include:

- what else is required to be pushed into OpenStack for this to work
- is anything required in the SPDK repo for this to work
- how will the necessary SPDK components be associated with the VM in question and subsequently configured

Thanks for continuing to work on this!

-Paul

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of helloway
Sent: Monday, June 11, 2018 1:35 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture

Hi Jim and all,

Do you know OpenStack/Cyborg? It is the OpenStack acceleration service, which provides a management framework for accelerator devices (e.g. FPGA, GPU, NVMe SSD). There is a strong demand for OpenStack to support hardware-accelerated devices in a dynamic model (mentioned at OpenStack Summit Vancouver 2018 [0]).

For this reason, we can use Cyborg to interact with nvmf_tgt to manage user-space NVMe SSD accelerator devices, which can greatly improve efficiency. It is worth mentioning that the Cyborg_SPDK_Driver I submitted has been merged into the OpenStack Queens release [1]. The driver reports detailed information about the device to the Cyborg agent. When a user requests a VM with a user-space NVMe SSD, the Cyborg agent will update the Nova/Placement inventory of available NVMe devices. This is the complete process describing the connection between Cyborg and SPDK.
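
To sketch the discovery half of that process, here is an illustrative snippet, not the actual Cyborg_SPDK_Driver code: it queries a running nvmf_tgt over SPDK's JSON-RPC Unix socket. The method name "get_nvmf_subsystems" follows SPDK 18.x (later releases renamed it "nvmf_get_subsystems"), and error handling is omitted for brevity:

import json
import socket

SPDK_SOCK = "/var/tmp/spdk.sock"  # nvmf_tgt's default RPC socket

def spdk_rpc(method, params=None):
    # Send one JSON-RPC 2.0 request to the SPDK app and return the result.
    request = {"jsonrpc": "2.0", "id": 1, "method": method}
    if params is not None:
        request["params"] = params
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SPDK_SOCK)
        sock.sendall(json.dumps(request).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before full response")
            buf += chunk
            try:
                return json.loads(buf.decode())["result"]
            except ValueError:
                continue  # partial JSON, keep reading

for subsystem in spdk_rpc("get_nvmf_subsystems"):
    print(subsystem.get("nqn"))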

I wonder whether you are interested in integrating OpenStack/Cyborg into the SPDK architecture. Does that make sense? Please let me know your thoughts.

[0]https://www.openstack.org/videos/vancouver-2018/optimized-hpcai-cloud-with-openstack-acceleration-service-and-composable-hardware
[1]https://review.openstack.org/#/c/538164/


Thx,
Helloway

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 8240 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2019-09-28  2:06 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-06-17  0:20 [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture Zhipeng Huang
  -- strict thread matches above, loose matches on Subject: below --
2019-09-28  2:06 helloway
2019-09-28  2:06 helloway
2019-09-28  2:06 helloway
2018-06-21  0:41 Zhipeng Huang
2018-06-20 15:43 Harris, James R
2018-06-20  6:48 Bob Chen
2018-06-20  4:53 Zhipeng Huang
2018-06-19 16:42 Walker, Benjamin
2018-06-16 15:15 Luse, Paul E
2018-06-16  1:27 Zhipeng Huang
2018-06-15 14:28 Szwed, Maciej
2018-06-14  3:26 Luse, Paul E
2018-06-13 14:46 Luse, Paul E
2018-06-12  0:16 Luse, Paul E
