* [SPDK] NVMeF online namespace management
@ 2017-11-09 10:16 shahar.salzman
  0 siblings, 0 replies; 3+ messages in thread
From: shahar.salzman @ 2017-11-09 10:16 UTC (permalink / raw)
  To: spdk


Hi guys!

New to this mailing list, awesome project!

We have been integrating the NVMeF target into our appliance. In our 
appliance, the user can add/remove volumes on the fly, and these volumes 
need to be exposed through the NVMeF target.

We are looking for a way to avoid static namespace allocation in the 
configuration file. I noticed that there is an RPC mechanism which does 
this for iSCSI LUNs, but for NVMeF it seems to only create a subsystem 
with a fixed number of namespaces.

The RPC interface is ideal for integration into our management layer as 
it is Python. Would this be the correct place to add namespaces to a 
subsystem on the fly?
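
[Editor's note: a minimal sketch of what such a call could look like from a Python management layer, assuming a JSON-RPC method named nvmf_subsystem_add_ns exposed over SPDK's Unix-domain RPC socket. The method name, parameter shape, and socket path are illustrative; SPDK's actual RPC set has evolved since this thread.]

```python
import json
import socket

def build_rpc_request(method, params, req_id=1):
    """Build a JSON-RPC 2.0 request body, as SPDK's rpc.py script does."""
    return json.dumps({"jsonrpc": "2.0", "id": req_id,
                       "method": method, "params": params})

def spdk_rpc(method, params, sock_path="/var/tmp/spdk.sock"):
    """Send one request to the SPDK app's Unix-domain RPC socket."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(build_rpc_request(method, params).encode())
        return json.loads(s.recv(65536))

# Hypothetical call: expose bdev "Malloc0" as a new namespace of an
# existing subsystem at runtime, instead of listing it in the static
# configuration file.
# spdk_rpc("nvmf_subsystem_add_ns",
#          {"nqn": "nqn.2016-06.io.spdk:cnode1",
#           "namespace": {"bdev_name": "Malloc0"}})
```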

Thanks for any help,
Shahar Salzman



* Re: [SPDK] NVMeF online namespace management
@ 2017-11-10 16:52 Marushak, Nathan
  0 siblings, 0 replies; 3+ messages in thread
From: Marushak, Nathan @ 2017-11-10 16:52 UTC (permalink / raw)
  To: spdk


Adding to Cunyin's response a bit: the community is planning to transition more of the configuration (as much as is sensible) to RPC, so there will be active changes in this area over the next couple of releases.

These changes would definitely benefit from your participation in the development process.


> -----Original Message-----
> From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Chang, Cunyin
> Sent: Thursday, November 09, 2017 4:56 PM
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Cc: Eran Mann <eran.mann(a)kaminario.com>; Yael Shavit
> <yael.shavit(a)kaminario.com>; open-source-contrib(a)kaminario.com; Ilan
> Steinberg <ilan.steinberg(a)kaminario.com>
> Subject: Re: [SPDK] NVMeF online namespace management
> 
> Hi,
> 
> Please see my embedded comment:
> 
> > -----Original Message-----
> > From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of
> > shahar.salzman
> > Sent: Thursday, November 9, 2017 6:16 PM
> > To: spdk(a)lists.01.org
> > Cc: Yael Shavit <yael.shavit(a)kaminario.com>; Eran Mann
> > <eran.mann(a)kaminario.com>; open-source-contrib(a)kaminario.com; Ilan
> > Steinberg <ilan.steinberg(a)kaminario.com>
> > Subject: [SPDK] NVMeF online namespace management
> >
> > Hi guys!
> >
> > New to this mailing list, awesome project!
> >
> > We have been integrating the NVMeF target into our appliance. In our
> > appliance, the user can add/remove volumes on the fly, these volumes
> > need to be exposed to the NVMeF target.
> >
> > We are looking for a way to avoid the static configuration file
> > namespace allocation, I noticed that there is an RPC mechanism which
> > does this for iSCSI LUNs, but for NVMeF seems to only create a
> > subsystem with a fixed # of namespaces.
> >
> > The RPC is ideal for integration into our management as it is python,
> > would this be the correct place to add namespaces to a subsystem on the
> fly?
> 
> RPC method is the correct interface to start the process, but the detail
> work should be done by nvmf library, For example we need attach the bdev to
> NS accordingly, update the identify information of the controller and
> namespace.
> 
> >
> > Thanks for any help,
> > Shahar Salzman
> >
> > _______________________________________________
> > SPDK mailing list
> > SPDK(a)lists.01.org
> > https://lists.01.org/mailman/listinfo/spdk
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk


* Re: [SPDK] NVMeF online namespace management
@ 2017-11-09 23:56 Chang, Cunyin
  0 siblings, 0 replies; 3+ messages in thread
From: Chang, Cunyin @ 2017-11-09 23:56 UTC (permalink / raw)
  To: spdk


Hi,

Please see my embedded comment:

> -----Original Message-----
> From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of shahar.salzman
> Sent: Thursday, November 9, 2017 6:16 PM
> To: spdk(a)lists.01.org
> Cc: Yael Shavit <yael.shavit(a)kaminario.com>; Eran Mann
> <eran.mann(a)kaminario.com>; open-source-contrib(a)kaminario.com; Ilan
> Steinberg <ilan.steinberg(a)kaminario.com>
> Subject: [SPDK] NVMeF online namespace management
> 
> Hi guys!
> 
> New to this mailing list, awesome project!
> 
> We have been integrating the NVMeF target into our appliance. In our
> appliance, the user can add/remove volumes on the fly, these volumes need
> to be exposed to the NVMeF target.
> 
> We are looking for a way to avoid the static configuration file namespace
> allocation, I noticed that there is an RPC mechanism which does this for iSCSI
> LUNs, but for NVMeF seems to only create a subsystem with a fixed # of
> namespaces.
> 
> The RPC is ideal for integration into our management as it is python, would
> this be the correct place to add namespaces to a subsystem on the fly?

The RPC method is the correct interface to start the process, but the detailed work should be done by the nvmf library. For example, we need to attach the bdev to the namespace accordingly, and update the identify information of the controller and namespace.
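
[Editor's note: a deliberately simplified Python model of the library-side bookkeeping described above, not SPDK's actual C implementation. The class and method names are hypothetical; the point is that the RPC only triggers the change, while the library must assign a namespace ID, attach the bdev, and keep the Identify data that connected hosts see in sync.]

```python
class Subsystem:
    """Toy model of an NVMeF subsystem's namespace bookkeeping."""

    def __init__(self, nqn, max_namespaces=32):
        self.nqn = nqn
        self.max_namespaces = max_namespaces
        self.namespaces = {}  # nsid -> attached bdev name

    def add_ns(self, bdev_name):
        # Attach the bdev under the lowest free namespace ID
        # (valid NVMe NSIDs start at 1).
        for nsid in range(1, self.max_namespaces + 1):
            if nsid not in self.namespaces:
                self.namespaces[nsid] = bdev_name
                return nsid
        raise RuntimeError("no free namespace IDs")

    def identify_active_ns_list(self):
        # What an Identify Active Namespace ID list would report;
        # it must be updated whenever a namespace is added or removed
        # so that hosts discover the new volume.
        return sorted(self.namespaces)

ss = Subsystem("nqn.2016-06.io.spdk:cnode1")
ss.add_ns("Malloc0")  # -> 1
ss.add_ns("Malloc1")  # -> 2
print(ss.identify_active_ns_list())  # [1, 2]
```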

> 
> Thanks for any help,
> Shahar Salzman
> 
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk

