* nvme-cli: connect-all failed with EALREADY but exit code is 0
@ 2018-11-06 15:44 Eyal BenDavid
  2018-11-06 18:50 ` James Smart
  0 siblings, 1 reply; 3+ messages in thread
From: Eyal BenDavid @ 2018-11-06 15:44 UTC (permalink / raw)


Hi All,

Scenario:
1. An NVMe controller already exists for the FC traddr and host_traddr
2. Running connect-all with those parameters again fails
3. nvme-cli prints:
   "Failed to write to /dev/nvme-fabrics: Operation already in progress"
4. The exit code of nvme-cli is 0

Is it intended that the exit code is 0?

Thanks,
Eyal


More info:
==========
# nvme connect-all \
--host-traddr=nn-0x20000090fa94845f:pn-0x10000090fa94845f \
--hostid=d8266f50-5f24-4c55-8959-23d500e4e450 \
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:d8266f50-5f24-4c55-8959-23d500e4e450 \
--traddr=nn-0x500507680c00002b:pn-0x500507680c2a002b \
--transport=fc
Failed to write to /dev/nvme-fabrics: Operation already in progress

# echo $?
0
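Since the reported exit status is 0, the failure is only visible on stderr. A minimal caller-side sketch of working around that, where `fake_connect_all` is a hypothetical stand-in reproducing the behavior reported above (the real command would be `nvme connect-all ...`):

```shell
# fake_connect_all is a hypothetical stand-in for `nvme connect-all`:
# it prints the error to stderr but still exits 0, as reported above.
fake_connect_all() {
  echo "Failed to write to /dev/nvme-fabrics: Operation already in progress" >&2
  return 0
}

# Because the exit code cannot be trusted, capture stderr and look for
# the failure string instead. `2>&1 >/dev/null` captures stderr into the
# command substitution while discarding stdout.
err=$(fake_connect_all 2>&1 >/dev/null)
if printf '%s\n' "$err" | grep -q '^Failed to write to /dev/nvme-fabrics'; then
  echo "connect-all reported an error: $err"
fi
```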

From the strace output we can see that the discovery controller creation and the discovery
log page commands succeeded, but the creation of the NVMe (I/O) controller failed with EALREADY

== strace ==

open("/dev/nvme-fabrics", O_RDWR)       = 3
write(3, "nqn=nqn.2014-08.org.nvmexpress.d"..., 281) = 281
read(3, "instance=17,cntlid=4099\n", 4096) = 24
close(3)                                = 0
open("/dev/nvme17", O_RDWR)             = 3
ioctl(3, NVME_IOCTL_ADMIN_CMD, 0x7ffe3790a260) = 0
ioctl(3, NVME_IOCTL_ADMIN_CMD, 0x7ffe3790a260) = 0
ioctl(3, NVME_IOCTL_ADMIN_CMD, 0x7ffe3790a260) = 0
close(3)                                = 0
open("/sys/class/nvme/nvme17/delete_controller", O_WRONLY) = 3
write(3, "1", 1)                        = 1
close(3)                                = 0
open("/dev/nvme-fabrics", O_RDWR)       = 3
write(3, "nqn=nqn.1986-03.com.ibm:nvme:214"..., 300) = -1 EALREADY
(Operation already in progress)

=======


* nvme-cli: connect-all failed with EALREADY but exit code is 0
  2018-11-06 15:44 nvme-cli: connect-all failed with EALREADY but exit code is 0 Eyal BenDavid
@ 2018-11-06 18:50 ` James Smart
  2018-11-06 18:59   ` Eyal BenDavid
  0 siblings, 1 reply; 3+ messages in thread
From: James Smart @ 2018-11-06 18:50 UTC (permalink / raw)


It is the way it was written.

And the fact that it's already in progress is intended and is not necessarily 
an error, unless you are trying to create 2 controllers using the same 
<host_traddr, traddr, hostnqn, hostid, subnqn>. If that was what you 
intended, there's the "duplicate_connect" argument, granted it has to be 
used on a connect, not connect-all, basis.
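For illustration, a hedged sketch of what James describes: in nvme-cli versions that expose the kernel's duplicate_connect option, a deliberate second controller to the same target would be requested on a per-connect basis, roughly like the following (addresses are the ones from the report; the subsystem NQN placeholder is hypothetical, since the real one was truncated in the strace output):

```shell
# Sketch only: requires an nvme-cli build that supports --duplicate-connect
# and real FC hardware; <subsystem-nqn> must come from the discovery log page.
nvme connect \
  --transport=fc \
  --traddr=nn-0x500507680c00002b:pn-0x500507680c2a002b \
  --host-traddr=nn-0x20000090fa94845f:pn-0x10000090fa94845f \
  --nqn=<subsystem-nqn> \
  --duplicate-connect
```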

-- james


On 11/6/2018 7:44 AM, Eyal BenDavid wrote:
> Hi All,
>
> Scenario:
> 1. A nvme controller already exists for FC traddr and host_traddr
> 2. Running connect all with those parameters again failed
> 3. nvme-cli result is:
>     "Failed to write to /dev/nvme-fabrics: Operation already in progress"
> 4. Exit code for nvme cli is 0
>
> Is it intended that exit code is 0?
>
> Thanks,
> Eyal
>
>
> More info:
> ==========
> # nvme connect-all \
> --host-traddr=nn-0x20000090fa94845f:pn-0x10000090fa94845f \
> --hostid=d8266f50-5f24-4c55-8959-23d500e4e450 \
> --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d8266f50-5f24-4c55-8959-23d500e4e450 \
> --traddr=nn-0x500507680c00002b:pn-0x500507680c2a002b \
> --transport=fc
> Failed to write to /dev/nvme-fabrics: Operation already in progress
>
> # echo $?
> 0
>
>  From strace we can see that discovery controller and discovery log page commands
> were ok but the creation of the nvme (io) controller failed with EALREADY
>
> == strace ==
>
> open("/dev/nvme-fabrics", O_RDWR)       = 3
> write(3, "nqn=nqn.2014-08.org.nvmexpress.d"..., 281) = 281
> read(3, "instance=17,cntlid=4099\n", 4096) = 24
> close(3)                                = 0
> open("/dev/nvme17", O_RDWR)             = 3
> ioctl(3, NVME_IOCTL_ADMIN_CMD, 0x7ffe3790a260) = 0
> ioctl(3, NVME_IOCTL_ADMIN_CMD, 0x7ffe3790a260) = 0
> ioctl(3, NVME_IOCTL_ADMIN_CMD, 0x7ffe3790a260) = 0
> close(3)                                = 0
> open("/sys/class/nvme/nvme17/delete_controller", O_WRONLY) = 3
> write(3, "1", 1)                        = 1
> close(3)                                = 0
> open("/dev/nvme-fabrics", O_RDWR)       = 3
> write(3, "nqn=nqn.1986-03.com.ibm:nvme:214"..., 300) = -1 EALREADY
> (Operation already in progress)
>
> =======
>
> _______________________________________________
> Linux-nvme mailing list
> Linux-nvme@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-nvme


* nvme-cli: connect-all failed with EALREADY but exit code is 0
  2018-11-06 18:50 ` James Smart
@ 2018-11-06 18:59   ` Eyal BenDavid
  0 siblings, 0 replies; 3+ messages in thread
From: Eyal BenDavid @ 2018-11-06 18:59 UTC (permalink / raw)


On Tue, Nov 6, 2018 at 8:50 PM James Smart <james.smart@broadcom.com> wrote:
>
> it is the way it was written.
>
> And the fact it's already in progress is intended and is not necessarily
> an error unless you are trying to create 2 controllers using the same
> <host_traddr, traddr, hostnqn, hostid, subnqn>. And if that was what you
> intended, there's the "duplicate_connect" argument, granted it has to be
> used on a connect, not connect-all, basis.
>
> -- james
>

Thanks,

That's reasonable, but only if the error is EALREADY.

For any other errno the exit code would still be 0.
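Eyal's point can be sketched as a caller-side wrapper: tolerate only the EALREADY message and treat any other write failure as a real error. The `fake_connect_all` helper below is a hypothetical stand-in for `nvme connect-all`, which per the report exits 0 either way:

```shell
# Hypothetical stand-in: prints a given fabrics write error to stderr
# but still exits 0, mirroring the reported nvme-cli behavior.
fake_connect_all() {
  echo "Failed to write to /dev/nvme-fabrics: $1" >&2
  return 0
}

# Succeed only when no error is printed or when the error is the benign
# EALREADY case; fail for any other errno's strerror text.
check_connect() {
  err=$(fake_connect_all "$1" 2>&1 >/dev/null)
  case "$err" in
    *'Operation already in progress'*) return 0 ;;  # EALREADY: benign
    'Failed to write'*)                return 1 ;;  # any other errno: fail
    *)                                 return 0 ;;  # nothing printed
  esac
}

check_connect 'Operation already in progress' && echo benign
check_connect 'Invalid argument' || echo 'real failure'
```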

Eyal


end of thread, other threads:[~2018-11-06 18:59 UTC | newest]

2018-11-06 15:44 nvme-cli: connect-all failed with EALREADY but exit code is 0 Eyal BenDavid
2018-11-06 18:50 ` James Smart
2018-11-06 18:59   ` Eyal BenDavid
