Thanks for the information! Are there currently any Cyborg drivers (is that the right term?) that implement NVMeOF? It would be great to take a look at one to
help clear things up a bit. What I’m still not seeing is exactly which “extra capabilities” you mention below could be exposed/exploited via Cyborg & SPDK vs. Cinder & SPDK.
Also, assuming that nothing else is required from the SPDK codebase/community, is the purpose of this email chain just to inform/educate and help enlist the SPDK
community in promoting its use in Cyborg?
Note that I’m on holiday from now until July 2, and starting Monday afternoon I will not have access to email until I get back, so if you don’t get any kind of response
from me, that’s why :) Hopefully others will carry the conversation forward, as I know others are interested
as well, and I fully appreciate all of Helloway’s interest in SPDK thus far. I’ll definitely touch base when I return, thanks again!
-Paul
From: SPDK [mailto:spdk-bounces@lists.01.org]
On Behalf Of Zhipeng Huang
Sent: Friday, June 15, 2018 6:27 PM
To: Storage Performance Development Kit <spdk@lists.01.org>
Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
Hi Maciek and Paul,
I'm the current project lead of Cyborg, so let me try to answer your questions; maybe Helloway did not explain it well before.
Cyborg [0] is a general management framework for accelerators (GPU, FPGA, ASIC, NVMe/NOF, ...). We actually worked with Moshe's team in the project's early days :). Support for an SPDK driver has already landed in Cyborg (although it is still
very premature).
It is important to note that Cyborg does not provide volume management; that is the job of Cinder. Cyborg provides management of NVMe/NVMe Over Fabric SSDs, leveraging information that can be obtained via the bus (and the NVMe/NOF
protocol). A similar pattern can be found in Cyborg's management of FPGAs/GPUs. This allows many great features of the NVMe protocol to be exposed via Cyborg to support more granular scheduling if the user wants it, instead of the device just being treated
as a block device indistinguishable from a traditional SSD.
Therefore Cyborg works alongside Nova and Cinder. An example of how Cyborg works with Nova on NVMe devices can be found in [1] starting at 27:13, which was also proposed by Intel developers. In essence, Cyborg gives Nova knowledge
of an NVMe SSD's various extra capabilities compared to a normal SSD (i.e., as an accelerator); the Nova scheduler then selects a node with that capability if the user desires it when spawning a VM, and Cinder just does its job of volume management.
So on this note, Cyborg works with Nova and Cinder; the relationship is complementary. Cyborg interacts with Nova through Placement; there is no need at the moment for Cyborg to interact with Cinder. Everything could still work without
Cyborg in the picture, for sure, but the NVMe SSD would be seen as just a normal block device by Nova and Cinder, and no advanced scheduling could be performed.
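To make the "extra capabilities" idea concrete, here is a minimal, purely illustrative sketch of how a driver might translate NVMe device attributes into Placement-style custom traits for Nova scheduling. The names (`NVMeDevice`, `device_to_traits`, the `CUSTOM_*` trait strings) are assumptions for illustration, not the real Cyborg or Placement API.

```python
# Hypothetical sketch: mapping NVMe device attributes to scheduler-visible
# traits. All names here are illustrative, not real Cyborg/Placement code.
from dataclasses import dataclass


@dataclass
class NVMeDevice:
    """Minimal device record a driver could build from NVMe Identify data."""
    name: str
    num_queues: int
    supports_nvmeof: bool


def device_to_traits(dev: NVMeDevice) -> list:
    """Map device capabilities to custom trait strings a scheduler can match."""
    traits = ["CUSTOM_NVME_SSD"]
    if dev.supports_nvmeof:
        traits.append("CUSTOM_NVME_OVER_FABRICS")
    if dev.num_queues >= 32:
        traits.append("CUSTOM_NVME_MULTI_QUEUE")
    return traits


print(device_to_traits(NVMeDevice("nvme0", 64, True)))
```

The point is only the shape of the flow: a driver reports traits, and the scheduler filters hosts on them, rather than seeing an undifferentiated block device.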
Re Paul's question:
- A good Cyborg SPDK driver is all that is needed to make it work in OpenStack
- There are no additional requirements on the SPDK community itself
- No additional tweaks in SPDK are needed specifically for Cyborg to work with it.
OpenSDS is just another option. Since it supports capability-reporting functionality, it can gather information from Cyborg to enable more granular scheduling, unlike Cinder, which does not take input from Cyborg and just performs regular volume
management. Neither solution is inherently better or worse; it depends on the user's requirements.
I hope this clears things up :)
On Fri, Jun 15, 2018 at 10:28 PM Szwed, Maciej <maciej.szwed@intel.com> wrote:
Hi Helloway,
I’ve been working on Cinder drivers (volume and target) for SPDK. I’m not familiar with Cyborg. Does Cyborg have capabilities to manage volumes? How does it interact with Nova – how does it provide storage to compute nodes?
Thanks,
Maciek
From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Luse, Paul E
Sent: Thursday, June 14, 2018 5:27 AM
To: Storage Performance Development Kit <spdk@lists.01.org>
Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
Hi Helloway,
No problem. Cinder is the block storage provisioning project for OpenStack. A VM needing block storage interacts with Cinder to request what it needs, and storage providers that have written providers (aka drivers) for Cinder are matched up based on those requests. Here’s the overview from the project: https://docs.openstack.org/newton/install-guide-ubuntu/common/get-started-block-storage.html and this is by far the most common way that I’m aware of to enable a new block storage application in OpenStack.
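To make "writing a provider" concrete, here is a minimal, purely illustrative sketch of the shape of a Cinder-style volume driver. The method names `create_volume`/`delete_volume` mirror hooks in Cinder's driver interface, but the class name, signatures, and in-memory body are simplified stand-ins, not the real API.

```python
# Illustrative only: the rough shape of a backend volume driver that a
# block-storage service calls to provision storage. Not real Cinder code.
class SketchVolumeDriver:
    """Backend driver: the service calls these hooks to manage volumes."""

    def __init__(self):
        self._volumes = {}  # stand-in for real backend state

    def create_volume(self, name, size_gb):
        """Allocate backend storage and return volume metadata."""
        self._volumes[name] = size_gb
        return {"name": name, "size_gb": size_gb}

    def delete_volume(self, name):
        """Release the backend storage for a volume, if it exists."""
        self._volumes.pop(name, None)


drv = SketchVolumeDriver()
print(drv.create_volume("vol1", 10))
```

A real driver would translate these calls into backend operations (for SPDK, e.g. creating bdevs and exporting them), but the matching of VM requests to drivers happens in Cinder itself.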
I haven’t worked in that world in a few years, so Cyborg and OpenSDS have been introduced since I was active, but as far as I know Cinder is still the best place to start introducing SPDK-based block storage into the OpenStack cloud. I do have some OpenSDS contacts, and still some friends who work on Cinder. Let me ask around a little; note that I will be out of the office for 2 weeks of “disconnected” vacation after this Friday, but I’ll try and get a bit more info before then and get back to ya.
Anyone else out there feel free to chime in if you have more info on Cyborg+OpenSDS vs Cinder as it pertains to this discussion. Once we figure that out, the questions below are still very relevant for next steps:
- what else is required to be pushed into OpenStack for this to work
- is anything required in the SPDK repo for this to work
- how will the necessary SPDK components be associated with the VM in question and subsequently configured
Thanks!
Paul
From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of helloway
Sent: Wednesday, June 13, 2018 7:47 PM
To: Storage Performance Development Kit <spdk@lists.01.org>
Cc: Storage Performance Development Kit <spdk@lists.01.org>
Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
Hi Paul,
I’m sorry, I’m not an expert on Cinder, but I’ll try to explain what I think; if I’m wrong, please correct me. From my perspective, Cinder cares more about the capacity of the pool, whose pool-capabilities reporting interface is fixed, whereas, in addition to capacity, Cyborg also cares about fine-grained accelerator capabilities (e.g. IOPS, queues, etc.). These capabilities, reported by Cyborg, can be dynamically configured and handled through the OpenSDS profile. For this reason, it provides a more flexible and simpler configuration that can be invoked conveniently.
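The contrast being described might be sketched as follows. The field names and the profile-matching rule are assumptions for illustration, not the real Cinder pool-report or OpenSDS profile schemas.

```python
# Illustrative contrast (assumed field names, not real Cinder/OpenSDS schemas):
# a coarse capacity-oriented pool report vs. a fine-grained capability report.
cinder_pool_report = {
    "pool": "pool0",
    "total_capacity_gb": 1024,
    "free_capacity_gb": 512,
}

cyborg_capability_report = {
    "device": "nvme0",
    "capacity_gb": 1024,
    "iops": 500_000,   # fine-grained attributes a profile could match on
    "num_queues": 64,
}


def matches_profile(report, profile):
    """A profile matches if the report meets every requested minimum."""
    return all(report.get(key, 0) >= minimum for key, minimum in profile.items())


print(matches_profile(cyborg_capability_report, {"iops": 400_000}))  # prints True
```

The fixed pool report can only answer "is there enough space?", while the capability report lets a profile also ask for performance attributes.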
Thx,
Helloway
On 06/13/2018 22:46, Luse, Paul E <paul.e.luse@intel.com> wrote:
Hi Helloway,
That’s a great start, but I still have the same open questions below; maybe you can try and address those directly? Also, below is the link for adding SPDK-based NVMeOF as a Cinder plug-in. In addition to the questions below, can you please explain for everyone how you see the approach of using Cyborg and OpenSDS comparing with the seemingly simpler approach of providing a Cinder plug-in?
https://review.openstack.org/#/c/564229/
Thanks!!
Paul
From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of helloway
Sent: Tuesday, June 12, 2018 5:22 PM
To: Storage Performance Development Kit <spdk@lists.01.org>
Cc: Storage Performance Development Kit <spdk@lists.01.org>
Subject: Re: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
Hi Paul,
Thank you for your response. I have submitted a Trello card [0] titled “Integrate OpenStack/Cyborg into SPDK Architecture”. I have tried to answer your questions in that card; does it make sense? I really hope to receive your feedback.
Thx,
Helloway
On 06/12/2018 08:16, Luse, Paul E <paul.e.luse@intel.com> wrote:
Hi Helloway,
I was actually just wondering what had happened with this. Looking at the OpenStack patch, it looks like it’s close to landing. Somewhere out there we also have a Cinder driver that’s getting fairly close, I believe, so for sure integration with OpenStack is interesting to many in the community.
Would you be able to summarize more specifically how your patch will work once it lands? I of course see your high-level description below, but some questions that I have (and I assume others do as well) include:
- what else is required to be pushed into OpenStack for this to work
- is anything required in the SPDK repo for this to work
- how will the necessary SPDK components be associated with the VM in question and subsequently configured
Thanks for continuing to work on this!
-Paul
From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of helloway
Sent: Monday, June 11, 2018 1:35 AM
To: Storage Performance Development Kit <spdk@lists.01.org>
Subject: [SPDK] Integrate OpenStack/Cyborg into SPDK Architecture
Hi Jim and all,
Do you know OpenStack/Cyborg? It is the OpenStack acceleration service, which provides a management framework for accelerator devices (e.g. FPGA, GPU, NVMe SSD). There is a strong demand for OpenStack to support hardware-accelerated devices in a dynamic model (mentioned at the OpenStack Summit Vancouver 2018 [0]).
For this reason, we can use Cyborg to interact with nvmf_tgt to manage user-space accelerated NVMe SSD devices, which can greatly improve efficiency. It is worth mentioning that the Cyborg_SPDK_Driver I submitted has been merged into the OpenStack Queens release [1]. The driver can report detailed device information to the Cyborg agent. When a user requests a VM with a user-space NVMe SSD, the Cyborg agent will update the Nova/Placement inventory of available NVMe devices. This is the complete process describing the connection between Cyborg and SPDK.
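The reporting step described above could be sketched roughly as below. The real driver talks to nvmf_tgt over SPDK's JSON-RPC interface (`bdev_get_bdevs` is a real SPDK RPC method); here the RPC call is stubbed with canned data so the flow is self-contained, and the `discover()` shape is an illustration, not the actual Cyborg driver code.

```python
# Illustrative sketch of a discover()-style reporting hook. The SPDK
# JSON-RPC call is stubbed; a real driver would query nvmf_tgt's socket.
import json


def fake_spdk_rpc(method):
    """Stand-in for an SPDK JSON-RPC call over nvmf_tgt's RPC socket."""
    if method == "bdev_get_bdevs":
        return json.dumps([
            {"name": "Nvme0n1", "block_size": 512, "num_blocks": 1_000_000},
        ])
    raise ValueError("unknown RPC method: %s" % method)


def discover():
    """Report user-space NVMe devices an agent could feed to Placement."""
    bdevs = json.loads(fake_spdk_rpc("bdev_get_bdevs"))
    return [
        {"device": b["name"],
         "capacity_bytes": b["block_size"] * b["num_blocks"]}
        for b in bdevs
    ]


print(discover())
```

The agent would then turn these records into Placement inventory updates so the Nova scheduler can find hosts with matching devices.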
I wonder whether you guys are interested in integrating OpenStack/Cyborg into the SPDK architecture? Please let me know your thoughts.
Thx,
Helloway
_______________________________________________
SPDK mailing list
SPDK@lists.01.org
https://lists.01.org/mailman/listinfo/spdk
--
Zhipeng (Howard) Huang
Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd.
Email: huangzhipeng@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen
(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh@uci.edu
Office: Calit2 Building Room 2402
OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado