* Re: [SPDK] nvmf target and host in the same spdk instance
@ 2019-01-18  6:02 Yang, Ziye
  0 siblings, 0 replies; 6+ messages in thread
From: Yang, Ziye @ 2019-01-18  6:02 UTC (permalink / raw)
  To: spdk

I think the same issue could also happen with the NVMe-oF RDMA driver (when getting the ACK event related info from the target), since the target does not get a chance to execute the code.

Best Regards
Ziye Yang 

-----Original Message-----
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of wuzhouhui
Sent: Friday, January 18, 2019 9:12 AM
To: storage performance development kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] nvmf target and host in the same spdk instance

Issue submitted in https://github.com/spdk/spdk/issues/587

> -----Original Messages-----
> From: "Harris, James R" <james.r.harris(a)intel.com>
> Sent Time: 2019-01-17 23:56:26 (Thursday)
> To: "Storage Performance Development Kit" <spdk(a)lists.01.org>
> Cc: 
> Subject: Re: [SPDK] nvmf target and host in the same spdk instance
> 
> Hi Zhouhui,
> 
> I think it's OK to file this as an issue.  This loop isn't exiting, probably because the target needs to process the connection on the same core.  We've recently made changes to the NVMe PCIe driver to remove these types of while loops and make it completely asynchronous.  I haven't looked to see what that would look like in the TCP driver.
> 
> -Jim
> 
> On 1/17/19, 3:25 AM, "SPDK on behalf of wuzhouhui" <spdk-bounces(a)lists.01.org on behalf of wuzhouhui14(a)mails.ucas.ac.cn> wrote:
> 
>     Hi,
>     
>     I have an interesting but perhaps meaningless use case for the spdk nvmf target,
>     i.e. running one spdk instance as both nvmf target and host at the same time.
>     
>     For example:
>     1. create a malloc bdev
>     2. export malloc bdev via nvmf/tcp
>     3. construct an nvme bdev on the same host; however, the rpc method construct_nvme_bdev
>     times out, and spdk does not break out of the following loop in nvme_tcp_qpair_icreq_send():
>     
>             while (tqpair->state == NVME_TCP_QPAIR_STATE_INVALID) {
>                     nvme_tcp_qpair_process_completions(&tqpair->qpair, 0);
>             }
>     
>     Is it an issue?
>     _______________________________________________
>     SPDK mailing list
>     SPDK(a)lists.01.org
>     https://lists.01.org/mailman/listinfo/spdk
>     
> 
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://lists.01.org/mailman/listinfo/spdk

* [SPDK] nvmf target and host in the same spdk instance
@ 2019-09-28  2:06 wuzhouhui
  0 siblings, 0 replies; 6+ messages in thread
From: wuzhouhui @ 2019-09-28  2:06 UTC (permalink / raw)
  To: spdk

Hi,

I have an interesting but perhaps meaningless use case for the spdk nvmf target,
i.e. running one spdk instance as both nvmf target and host at the same time.

For example:
1. create a malloc bdev
2. export malloc bdev via nvmf/tcp
3. construct an nvme bdev on the same host; however, the rpc method construct_nvme_bdev
times out, and spdk does not break out of the following loop in nvme_tcp_qpair_icreq_send():

        while (tqpair->state == NVME_TCP_QPAIR_STATE_INVALID) {
                nvme_tcp_qpair_process_completions(&tqpair->qpair, 0);
        }

Is it an issue?
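
For illustration, a minimal single-threaded sketch of why this hangs (plain C with hypothetical names, not SPDK code): the busy-wait and the only code that could satisfy it are muxed on the same thread, so the condition can never become true.

        /* hang_sketch.c -- a hypothetical analogy, not SPDK code. */
        #include <stdbool.h>
        #include <stdio.h>

        static bool icresp_ready = false;

        /* Target side: would parse the ICReq and produce the ICResp.  On a
         * shared reactor thread it only runs when the initiator yields --
         * which the loop below never does. */
        static void target_poller(void)
        {
                icresp_ready = true;
        }

        /* Initiator side, mirroring nvme_tcp_qpair_icreq_send(): busy-waits
         * for the ICResp without ever yielding the thread to target_poller(). */
        static void initiator_wait_for_icresp(void)
        {
                while (!icresp_ready) {
                        /* Polling completions here drains only the initiator's
                         * own socket; nothing ever runs the target, so
                         * icresp_ready never becomes true. */
                }
        }

        int main(void)
        {
                initiator_wait_for_icresp();    /* spins forever */
                printf("never reached\n");
                return 0;
        }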

* Re: [SPDK] nvmf target and host in the same spdk instance
@ 2019-09-28  2:06 wuzhouhui
  0 siblings, 0 replies; 6+ messages in thread
From: wuzhouhui @ 2019-09-28  2:06 UTC (permalink / raw)
  To: spdk

Issue submitted in https://github.com/spdk/spdk/issues/587

> -----Original Messages-----
> From: "Harris, James R" <james.r.harris(a)intel.com>
> Sent Time: 2019-01-17 23:56:26 (Thursday)
> To: "Storage Performance Development Kit" <spdk(a)lists.01.org>
> Cc: 
> Subject: Re: [SPDK] nvmf target and host in the same spdk instance
> 
> Hi Zhouhui,
> 
> I think it's OK to file this as an issue.  This loop isn't exiting, probably because the target needs to process the connection on the same core.  We've recently made changes to the NVMe PCIe driver to remove these types of while loops and make it completely asynchronous.  I haven't looked to see what that would look like in the TCP driver.
> 
> -Jim
> 
> On 1/17/19, 3:25 AM, "SPDK on behalf of wuzhouhui" <spdk-bounces(a)lists.01.org on behalf of wuzhouhui14(a)mails.ucas.ac.cn> wrote:
> 
>     Hi,
>     
>     I have an interesting but perhaps meaningless use case for the spdk nvmf target,
>     i.e. running one spdk instance as both nvmf target and host at the same time.
>     
>     For example:
>     1. create a malloc bdev
>     2. export malloc bdev via nvmf/tcp
>     3. construct an nvme bdev on the same host; however, the rpc method construct_nvme_bdev
>     times out, and spdk does not break out of the following loop in nvme_tcp_qpair_icreq_send():
>     
>             while (tqpair->state == NVME_TCP_QPAIR_STATE_INVALID) {
>                     nvme_tcp_qpair_process_completions(&tqpair->qpair, 0);
>             }
>     
>     Is it an issue?
>     _______________________________________________
>     SPDK mailing list
>     SPDK(a)lists.01.org
>     https://lists.01.org/mailman/listinfo/spdk
>     
> 
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk

* Re: [SPDK] nvmf target and host in the same spdk instance
@ 2019-01-18 16:06 Walker, Benjamin
  0 siblings, 0 replies; 6+ messages in thread
From: Walker, Benjamin @ 2019-01-18 16:06 UTC (permalink / raw)
  To: spdk

On Fri, 2019-01-18 at 11:11 +0200, Sasha Kotchubievsky wrote:
> Hi,
> 
> If I understand correctly, dynamically adding/exporting namespaces to an nvmf
> target instance that is also a host is not supported.  Is that correct?
> 
> Is it a bug or a design issue?

I think we just haven't thought through all of the problems that running an
initiator connected to a target, with both muxed on the same thread, would
cause. A bug report is fine for tracking it, but you're right that it is more
of a design issue.
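
As a conceptual illustration only (plain pthreads and hypothetical names, not SPDK's threading model), the same busy-wait completes once the two sides are no longer muxed on one thread, because another thread keeps making progress on the target side:

        /* two_threads.c -- conceptual only, not SPDK code.
         * Build: cc two_threads.c -lpthread */
        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdio.h>

        static atomic_bool icresp_ready;

        /* Target poller on its own thread: free to accept the connection,
         * parse the ICReq, and produce the ICResp. */
        static void *target_poller_thread(void *arg)
        {
                (void)arg;
                atomic_store(&icresp_ready, true);
                return NULL;
        }

        int main(void)
        {
                pthread_t t;

                pthread_create(&t, NULL, target_poller_thread, NULL);

                /* The same busy-wait shape as in the TCP driver, but it now
                 * terminates because progress happens on another thread. */
                while (!atomic_load(&icresp_ready)) {
                }

                pthread_join(t, NULL);
                printf("icresp received\n");
                return 0;
        }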


> 
> Thanks
> 
> Sasha
> 
> On 1/18/2019 8:02 AM, Yang, Ziye wrote:
> > I think the same issue could also happen with the NVMe-oF RDMA driver (when
> > getting the ACK event related info from the target), since the target does
> > not get a chance to execute the code.
> >
> > Best Regards
> > Ziye Yang
> > 
> > -----Original Message-----
> > From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of wuzhouhui
> > Sent: Friday, January 18, 2019 9:12 AM
> > To: storage performance development kit <spdk(a)lists.01.org>
> > Subject: Re: [SPDK] nvmf target and host in the same spdk instance
> > 
> > Issue submitted in https://github.com/spdk/spdk/issues/587
> > 
> > > -----Original Messages-----
> > > From: "Harris, James R" <james.r.harris(a)intel.com>
> > > Sent Time: 2019-01-17 23:56:26 (Thursday)
> > > To: "Storage Performance Development Kit" <spdk(a)lists.01.org>
> > > Cc:
> > > Subject: Re: [SPDK] nvmf target and host in the same spdk instance
> > > 
> > > Hi Zhouhui,
> > > 
> > > I think it's OK to file this as an issue.  This loop isn't exiting,
> > > probably because the target needs to process the connection on the same
> > > core.  We've recently made changes to the NVMe PCIe driver to remove these
> > > types of while loops and make it completely asynchronous.  I haven't
> > > looked to see what that would look like in the TCP driver.
> > > 
> > > -Jim
> > > 
> > > On 1/17/19, 3:25 AM, "SPDK on behalf of wuzhouhui" <spdk-bounces(a)lists.01.org
> > > on behalf of wuzhouhui14(a)mails.ucas.ac.cn> wrote:
> > > 
> > >      Hi,
> > >      
> > >      I have an interesting but perhaps meaningless use case for the spdk
> > >      nvmf target, i.e. running one spdk instance as both nvmf target and
> > >      host at the same time.
> > >      
> > >      For example:
> > >      1. create a malloc bdev
> > >      2. export malloc bdev via nvmf/tcp
> > >      3. construct an nvme bdev on the same host; however, the rpc method
> > >      construct_nvme_bdev times out, and spdk does not break out of the
> > >      following loop in nvme_tcp_qpair_icreq_send():
> > >      
> > >              while (tqpair->state == NVME_TCP_QPAIR_STATE_INVALID) {
> > >                      nvme_tcp_qpair_process_completions(&tqpair->qpair, 0);
> > >              }
> > >      
> > >      Is it an issue?
> > >      _______________________________________________
> > >      SPDK mailing list
> > >      SPDK(a)lists.01.org
> > >      https://lists.01.org/mailman/listinfo/spdk
> > >      
> > > 
> > > _______________________________________________
> > > SPDK mailing list
> > > SPDK(a)lists.01.org
> > > https://lists.01.org/mailman/listinfo/spdk
> > 
> > _______________________________________________
> > SPDK mailing list
> > SPDK(a)lists.01.org
> > https://lists.01.org/mailman/listinfo/spdk
> > _______________________________________________
> > SPDK mailing list
> > SPDK(a)lists.01.org
> > https://lists.01.org/mailman/listinfo/spdk
> 
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk


* Re: [SPDK] nvmf target and host in the same spdk instance
@ 2019-01-18  9:11 Sasha Kotchubievsky
  0 siblings, 0 replies; 6+ messages in thread
From: Sasha Kotchubievsky @ 2019-01-18  9:11 UTC (permalink / raw)
  To: spdk

Hi,

If I understand correctly, dynamically adding/exporting namespaces to an nvmf
target instance that is also a host is not supported.  Is that correct?

Is it a bug or a design issue?

Thanks

Sasha

On 1/18/2019 8:02 AM, Yang, Ziye wrote:
> I think the same issue could also happen with the NVMe-oF RDMA driver (when getting the ACK event related info from the target), since the target does not get a chance to execute the code.
>
> Best Regards
> Ziye Yang
>
> -----Original Message-----
> From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of wuzhouhui
> Sent: Friday, January 18, 2019 9:12 AM
> To: storage performance development kit <spdk(a)lists.01.org>
> Subject: Re: [SPDK] nvmf target and host in the same spdk instance
>
> Issue submitted in https://github.com/spdk/spdk/issues/587
>
>> -----Original Messages-----
>> From: "Harris, James R" <james.r.harris(a)intel.com>
>> Sent Time: 2019-01-17 23:56:26 (Thursday)
>> To: "Storage Performance Development Kit" <spdk(a)lists.01.org>
>> Cc:
>> Subject: Re: [SPDK] nvmf target and host in the same spdk instance
>>
>> Hi Zhouhui,
>>
>> I think it's OK to file this as an issue.  This loop isn't exiting, probably because the target needs to process the connection on the same core.  We've recently made changes to the NVMe PCIe driver to remove these types of while loops and make it completely asynchronous.  I haven't looked to see what that would look like in the TCP driver.
>>
>> -Jim
>>
>> On 1/17/19, 3:25 AM, "SPDK on behalf of wuzhouhui" <spdk-bounces(a)lists.01.org on behalf of wuzhouhui14(a)mails.ucas.ac.cn> wrote:
>>
>>      Hi,
>>      
>>      I have an interesting but perhaps meaningless use case for the spdk nvmf target,
>>      i.e. running one spdk instance as both nvmf target and host at the same time.
>>      
>>      For example:
>>      1. create a malloc bdev
>>      2. export malloc bdev via nvmf/tcp
>>      3. construct an nvme bdev on the same host; however, the rpc method construct_nvme_bdev
>>      times out, and spdk does not break out of the following loop in nvme_tcp_qpair_icreq_send():
>>      
>>              while (tqpair->state == NVME_TCP_QPAIR_STATE_INVALID) {
>>                      nvme_tcp_qpair_process_completions(&tqpair->qpair, 0);
>>              }
>>      
>>      Is it an issue?
>>      _______________________________________________
>>      SPDK mailing list
>>      SPDK(a)lists.01.org
>>      https://lists.01.org/mailman/listinfo/spdk
>>      
>>
>> _______________________________________________
>> SPDK mailing list
>> SPDK(a)lists.01.org
>> https://lists.01.org/mailman/listinfo/spdk
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk

* Re: [SPDK] nvmf target and host in the same spdk instance
@ 2019-01-17 15:56 Harris, James R
  0 siblings, 0 replies; 6+ messages in thread
From: Harris, James R @ 2019-01-17 15:56 UTC (permalink / raw)
  To: spdk

Hi Zhouhui,

I think it's OK to file this as an issue.  This loop isn't exiting, probably because the target needs to process the connection on the same core.  We've recently made changes to the NVMe PCIe driver to remove these types of while loops and make it completely asynchronous.  I haven't looked to see what that would look like in the TCP driver.

-Jim

On 1/17/19, 3:25 AM, "SPDK on behalf of wuzhouhui" <spdk-bounces(a)lists.01.org on behalf of wuzhouhui14(a)mails.ucas.ac.cn> wrote:

    Hi,
    
    I have an interesting but perhaps meaningless use case for the spdk nvmf target,
    i.e. running one spdk instance as both nvmf target and host at the same time.
    
    For example:
    1. create a malloc bdev
    2. export malloc bdev via nvmf/tcp
    3. construct an nvme bdev on the same host; however, the rpc method construct_nvme_bdev
    times out, and spdk does not break out of the following loop in nvme_tcp_qpair_icreq_send():
    
            while (tqpair->state == NVME_TCP_QPAIR_STATE_INVALID) {
                    nvme_tcp_qpair_process_completions(&tqpair->qpair, 0);
            }
    
    Is it an issue?
    _______________________________________________
    SPDK mailing list
    SPDK(a)lists.01.org
    https://lists.01.org/mailman/listinfo/spdk
    


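For context on the asynchronous direction Jim describes, a rough sketch of that shape (hypothetical names, not the actual SPDK change): the send path records that an ICResp is pending and returns immediately, and the check moves into the normal completion-polling path, so the reactor regains control and can run the target's poller between polls.

        /* Hypothetical async shape, not the actual SPDK patch. */
        #include <stdbool.h>

        enum icreq_state { ICREQ_IDLE, ICREQ_SENT, ICREQ_DONE };

        struct tcp_qpair {
                enum icreq_state icreq_state;
        };

        /* Stub standing in for "an ICResp PDU has arrived on the socket". */
        static bool icresp_pdu_received(struct tcp_qpair *tq)
        {
                (void)tq;
                return false;
        }

        /* Send path: queue the ICReq, note that we are waiting, and return
         * to the reactor instead of spinning. */
        static int tcp_qpair_icreq_send(struct tcp_qpair *tq)
        {
                /* ...build and queue the ICReq PDU... */
                tq->icreq_state = ICREQ_SENT;
                return 0;
        }

        /* Polling path, called repeatedly by the reactor: finish the ICReq
         * handshake when the ICResp shows up, then handle normal completions. */
        static int tcp_qpair_process_completions(struct tcp_qpair *tq)
        {
                if (tq->icreq_state == ICREQ_SENT && icresp_pdu_received(tq)) {
                        tq->icreq_state = ICREQ_DONE;
                }
                /* ...process I/O completions once ICREQ_DONE... */
                return 0;
        }
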
Thread overview: 6+ messages
2019-01-18  6:02 [SPDK] nvmf target and host in the same spdk instance Yang, Ziye
  -- strict thread matches above, loose matches on Subject: below --
2019-09-28  2:06 wuzhouhui
2019-09-28  2:06 wuzhouhui
2019-01-18 16:06 Walker, Benjamin
2019-01-18  9:11 Sasha Kotchubievsky
2019-01-17 15:56 Harris, James R
