All of lore.kernel.org
* Re: [SPDK] Bug report: core dump when `nvme disconnect`
@ 2018-02-07 18:53 Meng Wang
  0 siblings, 0 replies; 5+ messages in thread
From: Meng Wang @ 2018-02-07 18:53 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 9358 bytes --]

On Tue, Feb 6, 2018 at 9:24 PM, Wan, Qun <qun.wan(a)intel.com> wrote:

> Addition:
>
>                 3. Did you use an Intel SSD?
>
>                 4. You may also use the following steps to get more debug
> info from the core dump in the nvmf target:
>
>                                 a. gdb ./app/nvmf_tgt/nvmf_tgt -c <corefile>
>
>                                 b. bt (at the gdb prompt)
>
>
>
> Best Regards,
>
> Anna
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Wan, Qun
> *Sent:* Wednesday, February 7, 2018 11:50 AM
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] Bug report: core dump when `nvme disconnect`
>
>
>
> Hi, Meng
>
>                 We can’t reproduce this issue on our machine with SPDK
> v18.01 and an AIO backend.
>
> Target: Fedora release 27 (Twenty Seven), kernel 4.14.0
>
>
>
>                 Can you provide the following info in more detail?
>
> 1.       What’s the configuration file you are using?
>
> 2.       Did you disconnect two or more times?
>
>
>
> Best Regards,
>
> Anna
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org]
> *On Behalf Of *Meng Wang
> *Sent:* Wednesday, February 7, 2018 7:15 AM
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* [SPDK] Bug report: core dump when `nvme disconnect`
>
>
>
> Hello all,
>
>
>
> I configured SPDK (v18.01) with AIO backends, then used 'nvme connect'
> to attach the remote volume as /dev/nvme4n1 on the client.
>
>
>
> When I ran 'nvme disconnect -d /dev/nvme4n1', the target server dumped
> core:
>
>
>
> *** Error in `app/nvmf_tgt/nvmf_tgt': double free or corruption (fasttop):
> 0x00000000020399e0 ***
>
> *** Error in `app/nvmf_tgt/nvmf_tgt': double free or corruption (fasttop):
> 0x00000000020399e0 ***
>
> *** Error in `app/nvmf_tgt/nvmf_tgt': double free or corruption (fasttop):
> 0x00000000020399e0 ***
>
> ======= Backtrace: =========
>
> ======= Backtrace: =========
>
> /lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7f48e95387e5]
>
> /lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7f48e95387e5]
>
> /lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7f48e95387e5]
>
> /lib/x86_64-linux-gnu/libc.so.6(+0x8037a)[0x7f48e954137a]
>
> /lib/x86_64-linux-gnu/libc.so.6(+0x8037a)[0x7f48e954137a]
>
> /lib/x86_64-linux-gnu/libc.so.6(+0x8037a)[0x7f48e954137a]
>
> /lib/x86_64-linux-gnu/libc.so.6(cfree+0x4c)[0x7f48e954553c]
>
> /lib/x86_64-linux-gnu/libc.so.6(cfree+0x4c)[0x7f48e954553c]
>
> /lib/x86_64-linux-gnu/libc.so.6(cfree+0x4c)[0x7f48e954553c]
>
> app/nvmf_tgt/nvmf_tgt[0x45384c]
>
> app/nvmf_tgt/nvmf_tgtapp/nvmf_tgt/nvmf_tgt[0x454011]
>
> app/nvmf_tgt/nvmf_tgt[0xapp/nvmf_tgt/nvmf_tgt[0x45468d]
>
> app/nvmf_tgt/nvmf_tgt[0x45384c]
>
> app/nvmf_tgt/nvmf_tgt[0x405e41]
>
> app/nvmf_tgt/nvmf_tgt[0x44e0fb]
>
> app/nvmf_tgt/nvmf_tgt[0x449c11]
>
> app/nvmf_tgt/nvmf_tgt[0x454011]
>
> /lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba)[0x7f48e98926ba]
>
> /lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba)[0x7f48e98926ba]
>
> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7f48e94e1830]
>
> /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f48e95c83dd]
>
> ======= Memory map: ========
>
> /lib/x86_64-linux-gnu/libc.so.6(======= Memory map: ========
>
> 00400000-004ce000 r-xp 00000000 08:01 795228
>  /home/hcd/wm/spdk/app/nvmf_tgt/nvmf_tgt
>
> 006cd000-006d0000 r--p 000cd000 08:01 795228
>  /home/hcd/wm/spdk/app/nvmf_tgt/nvmf_tgt
>
> 006d0000-006d3000 rw-p 000d0000 08:01 795228
>  /home/hcd/wm/spdk/app/nvmf_tgt/nvmf_tgt
>
> 006d3000-00711000 rw-p 00000000 00:00 0
>
> 016db000-024e1000 rw-p 00000000 00:00 0
> [heap]
>
> 7f4844000000-7f4844021000 rw-p 00000000 00:00 0
>
> 7f4844021000-7f4848000000 ---p 00000000 00:00 0
>
> 7f484c000000-7f484c035000 rw-p 00000000 00:00 0
>
> 7f484c035000-7f4850000000 ---p 00000000 00:00 0
>
> 7f4850000000-7f4850035000 rw-p 00000000 00:00 0
>
> 7f4850035000-7f4854000000 ---p 00000000 00:00 0
>
> 7f4854000000-7f4854035000 rw-p 00000000 00:00 0
>
> 7f4854035000-7f4858000000 ---p 00000000 00:00 0
>
> 7f48594ec000-7f48615fd000 rw-s 00000000 00:16 1412956
> /dev/shm/nvmf_trace.pid23576
>
> 7f48615fd000-7f48615fe000 ---p 00000000 00:00 0
>
> 7f48615fe000-7f4861df00400000-004ce000 r-xp 00000000 08:01 795228
>                      /home/hcd/wm/spdk/app/nvmf_tgt/nvmf_tgt
>
> 006cd000-006d0000 r--p 000cd000 08:01 795228
>  /home/hcd/wm/spdk/app/nvmf_tgt/nvmf_tgt
>
> 006d0000-006d3000 rw-p 000d0000 08:01 795228
>  /home/hcd/wm/spdk/app/nvmf_tgt/nvmf_tgt
>
> 006d3000-00711000 rw-p 00000000 00:00 0
>
> 016db000-024e1000 rw-p 00000000 00:00 0
> [heap]
>
> 7f4844000000-7f4844021000 rw-p 00000000 00:00 0
>
> 7f4844021000-7f4848000000 ---p 00000000 00:00 0
>
> 7f484c000000-7f484c035000 rw-p 00000000 00:00 0
>
> 7f484c035000-7f4850000000 ---p 00000000 00:00 0
>
> 7f4850000000-7f4850035000 rw-p 00000000 00:00 0
>
> 7f4850035000-7f4854000000 ---p 00000000 00:00 0
>
> 7f4854000000-7f4854035000 rw-p 00000000 00:00 0
>
> 7f4854035000-7f4858000000 ---p 00000000 00:00 0
>
> 7f48594ec000-7f48615fd000 rw-s 00000000 00:16 1412956
> /dev/shm/nvmf_trace.pid23576
>
> 7f48615fd000-7f48615fe000 ---p 00000000 00:00 0
>
> 7f48615fe000-7f4861dfe000 rw-p 00000000 00:00 0
>
> 7f4861dfe000-7f4861dff000 ---p 00000000 00:00 0
>
> 7f4861dff000-7f48625ff000 rw-p 00000000 00:00 0
>
> 7f48625ff000-7f4862600000 ---p 00000000 00:00 0
>
> 7f4862600000-7f4862e00000 rw-p 00000000 00:00 0
>
> 7f4862e00000-7f4863000000 rw-s 00000000 00:27 1457854
> /dev/hugepages/spdk_pid23576map_535
>
> 7f4863200000-7f4863400000 rw-s 00000000 00:27 1457903
> /dev/hugepages/spdk_pid23576map_584
>
> 7f4863400000-7f4863600000 rw-s 00000000 00:27 1457904
> /dev/hugepages/spdk_pid23576map_585
>
> 7f4863600000-7f4863800000 rw-s 00000000 00:27 1457901
> /dev/hugepages/spdk_pid23576map_582
>
> 7f4863800000-7f4863a00000 rw-s 00000000 00:27 1457902
> /dev/hugepages/spdk_pid23576map_583
>
> 7f4863a00000-7f4863c00000 rw-s 00000000 00:27 1457899
> /dev/hugepages/spdk_pid23576map_580
>
> 7f4863c00000-7f4863e00000 rw-s 00000000 00:27 1457900
> /dev/hugepages/spdk_pid23576map_581
>
> 7f4863e00000-7f4864000000 rw-s 000000Aborted (core dumped)
>
>
>
>
>
> --
>
> Meng Wang
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
Hello Anna,
Thanks for the follow-up. Here is my config file; only the changed parts are
shown, and the omitted options keep their defaults.

[global]
ReactorMask 0x3C0000000

[AIO]
  AIO /dev/nvme0n1 AIO0
  AIO /dev/nvme1n1 AIO1
  AIO /dev/nvme2n1 AIO2
  AIO /dev/nvme3n1 AIO3

[Subsystem1]
  NQN nqn.2017-12.spdk:1
  Core 30
  Listen RDMA 192.168.6.21:4421
  AllowAnyHost Yes
  Host nqn.2017-12.spdk:1
  SN SPDK00000000000001
  Namespace AIO0 1

# Multiple subsystems are allowed.
# Namespaces backed by non-NVMe devices
[Subsystem2]
  NQN nqn.2017-12.spdk:2
  Core 31
  Listen RDMA 192.168.6.21:4422
  AllowAnyHost Yes
  Host nqn.2017-12.spdk:2
  SN SPDK00000000000002
  Namespace AIO1 1

[Subsystem3]
  NQN nqn.2017-12.spdk:3
  Core 32
  Listen RDMA 192.168.6.21:4423
  AllowAnyHost Yes
  Host nqn.2017-12.spdk:3
  SN SPDK00000000000003
  Namespace AIO2 1

[Subsystem4]
  NQN nqn.2017-12.spdk:4
  Core 33
  Listen RDMA 192.168.6.21:4424
  AllowAnyHost Yes
  Host nqn.2017-12.spdk:4
  SN SPDK00000000000004
  Namespace AIO3 1

I issued `nvme disconnect -d /dev/nvme4n1` on the client, and the target dumped core.

GDB bt:

#0  0x00007f48e94f6428 in __GI_raise (sig=sig(a)entry=6)
    at ../sysdeps/unix/sysv/linux/raise.c:54
#1  0x00007f48e94f802a in __GI_abort () at abort.c:89
#2  0x00007f48e95387ea in __libc_message (do_abort=do_abort(a)entry=2,
    fmt=fmt(a)entry=0x7f48e9651e98 "*** Error in `%s': %s: 0x%s ***\n")
    at ../sysdeps/posix/libc_fatal.c:175
#3  0x00007f48e954137a in malloc_printerr (ar_ptr=<optimized out>,
ptr=<optimized out>,
    str=0x7f48e9651f60 "double free or corruption (fasttop)", action=3) at
malloc.c:5006
#4  _int_free (av=<optimized out>, p=<optimized out>, have_lock=0) at
malloc.c:3867
#5  0x00007f48e954553c in __GI___libc_free (mem=<optimized out>) at
malloc.c:2968
#6  0x000000000044d389 in ctrlr_destruct (ctrlr=0x20399e0) at ctrlr.c:123
#7  0x000000000044e0fb in spdk_nvmf_ctrlr_disconnect (qpair=0x1fbe9a0) at
ctrlr.c:337
#8  0x0000000000449c11 in nvmf_rdma_handle_disconnect (ctx=0x1fbe9a0) at
rdma.c:726
#9  0x000000000045384c in _spdk_reactor_msg_passed (
    arg1=0x449bc8 <nvmf_rdma_handle_disconnect>, arg2=0x1fbe9a0) at
reactor.c:209
#10 0x00000000004537f9 in _spdk_event_queue_run_batch (reactor=0x1f50600)
at reactor.c:196
#11 0x0000000000454011 in _spdk_reactor_run (arg=0x1f50600) at reactor.c:438
#12 0x0000000000477726 in eal_thread_loop (arg=0x0)
    at /home/hcd/wm/spdk/dpdk/lib/librte_eal/linuxapp/eal/eal_thread.c:182
#13 0x00007f48e98926ba in start_thread (arg=0x7f4861dfd700) at
pthread_create.c:333
#14 0x00007f48e95c83dd in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:109






-- 
Meng Wang



* Re: [SPDK] Bug report: core dump when `nvme disconnect`
@ 2018-02-08  7:39 Wan, Qun
  0 siblings, 0 replies; 5+ messages in thread
From: Wan, Qun @ 2018-02-08  7:39 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 9823 bytes --]

Hi, Meng
                Thanks for the reply. We can reproduce the issue with your config and have filed an issue on GitHub; you can follow its status at the link below.
https://github.com/spdk/spdk/issues/235


Best Regards,
Anna

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Meng Wang
Sent: Thursday, February 8, 2018 2:54 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Bug report: core dump when `nvme disconnect`

On Tue, Feb 6, 2018 at 9:24 PM, Wan, Qun <qun.wan(a)intel.com> wrote:
Addition:
                3. Did you use an Intel SSD?
                4. You may also use the following steps to get more debug info from the core dump in the nvmf target:
                                a. gdb ./app/nvmf_tgt/nvmf_tgt -c <corefile>
                                b. bt (at the gdb prompt)

Best Regards,
Anna

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Wan, Qun
Sent: Wednesday, February 7, 2018 11:50 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Bug report: core dump when `nvme disconnect`

Hi, Meng
                We can’t reproduce this issue on our machine with SPDK v18.01 and an AIO backend.
Target: Fedora release 27 (Twenty Seven), kernel 4.14.0

                Can you provide the following info in more detail?

1.       What’s the configuration file you are using?

2.       Did you disconnect two or more times?

Best Regards,
Anna

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Meng Wang
Sent: Wednesday, February 7, 2018 7:15 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] Bug report: core dump when `nvme disconnect`

Hello all,

I configured SPDK (v18.01) with AIO backends, then used 'nvme connect' to attach the remote volume as /dev/nvme4n1 on the client.

When I ran 'nvme disconnect -d /dev/nvme4n1', the target server dumped core:

[Backtrace and memory map identical to the report quoted earlier in the thread; snipped.]


--
Meng Wang

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://lists.01.org/mailman/listinfo/spdk

Hello Anna,
Thanks for the follow-up. Here is my config file; only the changed parts are shown, and the omitted options keep their defaults.

[Config file, disconnect step, and GDB backtrace identical to the reply quoted earlier in the thread; snipped.]






--
Meng Wang



* Re: [SPDK] Bug report: core dump when `nvme disconnect`
@ 2018-02-07  5:24 Wan, Qun
  0 siblings, 0 replies; 5+ messages in thread
From: Wan, Qun @ 2018-02-07  5:24 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 6157 bytes --]

Addition:
                3. Did you use an Intel SSD?
                4. You may also use the following steps to get more debug info from the core dump in the nvmf target:
                                a. gdb ./app/nvmf_tgt/nvmf_tgt -c <corefile>
                                b. bt (at the gdb prompt)

Best Regards,
Anna

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Wan, Qun
Sent: Wednesday, February 7, 2018 11:50 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Bug report: core dump when `nvme disconnect`

Hi, Meng
                We can’t reproduce this issue on our machine with SPDK v18.01 and an AIO backend.
Target: Fedora release 27 (Twenty Seven), kernel 4.14.0

                Can you provide the following info in more detail?

1.       What’s the configuration file you are using?

2.       Did you disconnect two or more times?

Best Regards,
Anna

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Meng Wang
Sent: Wednesday, February 7, 2018 7:15 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] Bug report: core dump when `nvme disconnect`

Hello all,

I configured SPDK (v18.01) with AIO backends, then used 'nvme connect' to attach the remote volume as /dev/nvme4n1 on the client.

When I ran 'nvme disconnect -d /dev/nvme4n1', the target server dumped core:

[Backtrace and memory map identical to the report quoted earlier in the thread; snipped.]


--
Meng Wang



* Re: [SPDK] Bug report: core dump when `nvme disconnect`
@ 2018-02-07  3:49 Wan, Qun
  0 siblings, 0 replies; 5+ messages in thread
From: Wan, Qun @ 2018-02-07  3:49 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 5526 bytes --]

Hi, Meng
                We can’t reproduce this issue on our machine with SPDK v18.01 and an AIO backend.
Target: Fedora release 27 (Twenty Seven), kernel 4.14.0

                Can you provide the following info in more detail?

1.       What’s the configuration file you are using?

2.       Did you disconnect two or more times?

Best Regards,
Anna

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Meng Wang
Sent: Wednesday, February 7, 2018 7:15 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] Bug report: core dump when `nvme disconnect`

Hello all,

I configured SPDK (v18.01) with AIO backends, then used 'nvme connect' to attach the remote volume as /dev/nvme4n1 on the client.

When I ran 'nvme disconnect -d /dev/nvme4n1', the target server dumped core:

[Backtrace and memory map identical to the report quoted earlier in the thread; snipped.]


--
Meng Wang



* [SPDK] Bug report: core dump when `nvme disconnect`
@ 2018-02-06 23:15 Meng Wang
  0 siblings, 0 replies; 5+ messages in thread
From: Meng Wang @ 2018-02-06 23:15 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 4559 bytes --]

Hello all,

I configured SPDK (v18.01) with AIO backends. Then I used 'nvme connect' to
attach the remote volume as /dev/nvme4n1 on the client.

When I ran 'nvme disconnect -d /dev/nvme4n1', the target server dumped core:

*** Error in `app/nvmf_tgt/nvmf_tgt': double free or corruption (fasttop):
0x00000000020399e0 ***
*** Error in `app/nvmf_tgt/nvmf_tgt': double free or corruption (fasttop):
0x00000000020399e0 ***
*** Error in `app/nvmf_tgt/nvmf_tgt': double free or corruption (fasttop):
0x00000000020399e0 ***
======= Backtrace: =========
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7f48e95387e5]
/lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7f48e95387e5]
/lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7f48e95387e5]
/lib/x86_64-linux-gnu/libc.so.6(+0x8037a)[0x7f48e954137a]
/lib/x86_64-linux-gnu/libc.so.6(+0x8037a)[0x7f48e954137a]
/lib/x86_64-linux-gnu/libc.so.6(+0x8037a)[0x7f48e954137a]
/lib/x86_64-linux-gnu/libc.so.6(cfree+0x4c)[0x7f48e954553c]
/lib/x86_64-linux-gnu/libc.so.6(cfree+0x4c)[0x7f48e954553c]
/lib/x86_64-linux-gnu/libc.so.6(cfree+0x4c)[0x7f48e954553c]
app/nvmf_tgt/nvmf_tgt[0x45384c]
app/nvmf_tgt/nvmf_tgtapp/nvmf_tgt/nvmf_tgt[0x454011]
app/nvmf_tgt/nvmf_tgt[0xapp/nvmf_tgt/nvmf_tgt[0x45468d]
app/nvmf_tgt/nvmf_tgt[0x45384c]
app/nvmf_tgt/nvmf_tgt[0x405e41]
app/nvmf_tgt/nvmf_tgt[0x44e0fb]
app/nvmf_tgt/nvmf_tgt[0x449c11]
app/nvmf_tgt/nvmf_tgt[0x454011]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba)[0x7f48e98926ba]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba)[0x7f48e98926ba]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7f48e94e1830]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f48e95c83dd]
======= Memory map: ========
/lib/x86_64-linux-gnu/libc.so.6(======= Memory map: ========
00400000-004ce000 r-xp 00000000 08:01 795228                    /home/hcd/wm/spdk/app/nvmf_tgt/nvmf_tgt
006cd000-006d0000 r--p 000cd000 08:01 795228                    /home/hcd/wm/spdk/app/nvmf_tgt/nvmf_tgt
006d0000-006d3000 rw-p 000d0000 08:01 795228                    /home/hcd/wm/spdk/app/nvmf_tgt/nvmf_tgt
006d3000-00711000 rw-p 00000000 00:00 0
016db000-024e1000 rw-p 00000000 00:00 0                         [heap]
7f4844000000-7f4844021000 rw-p 00000000 00:00 0
7f4844021000-7f4848000000 ---p 00000000 00:00 0
7f484c000000-7f484c035000 rw-p 00000000 00:00 0
7f484c035000-7f4850000000 ---p 00000000 00:00 0
7f4850000000-7f4850035000 rw-p 00000000 00:00 0
7f4850035000-7f4854000000 ---p 00000000 00:00 0
7f4854000000-7f4854035000 rw-p 00000000 00:00 0
7f4854035000-7f4858000000 ---p 00000000 00:00 0
7f48594ec000-7f48615fd000 rw-s 00000000 00:16 1412956           /dev/shm/nvmf_trace.pid23576
7f48615fd000-7f48615fe000 ---p 00000000 00:00 0
7f48615fe000-7f4861df00400000-004ce000 r-xp 00000000 08:01 795228                    /home/hcd/wm/spdk/app/nvmf_tgt/nvmf_tgt
006cd000-006d0000 r--p 000cd000 08:01 795228                    /home/hcd/wm/spdk/app/nvmf_tgt/nvmf_tgt
006d0000-006d3000 rw-p 000d0000 08:01 795228                    /home/hcd/wm/spdk/app/nvmf_tgt/nvmf_tgt
006d3000-00711000 rw-p 00000000 00:00 0
016db000-024e1000 rw-p 00000000 00:00 0                         [heap]
7f4844000000-7f4844021000 rw-p 00000000 00:00 0
7f4844021000-7f4848000000 ---p 00000000 00:00 0
7f484c000000-7f484c035000 rw-p 00000000 00:00 0
7f484c035000-7f4850000000 ---p 00000000 00:00 0
7f4850000000-7f4850035000 rw-p 00000000 00:00 0
7f4850035000-7f4854000000 ---p 00000000 00:00 0
7f4854000000-7f4854035000 rw-p 00000000 00:00 0
7f4854035000-7f4858000000 ---p 00000000 00:00 0
7f48594ec000-7f48615fd000 rw-s 00000000 00:16 1412956           /dev/shm/nvmf_trace.pid23576
7f48615fd000-7f48615fe000 ---p 00000000 00:00 0
7f48615fe000-7f4861dfe000 rw-p 00000000 00:00 0
7f4861dfe000-7f4861dff000 ---p 00000000 00:00 0
7f4861dff000-7f48625ff000 rw-p 00000000 00:00 0
7f48625ff000-7f4862600000 ---p 00000000 00:00 0
7f4862600000-7f4862e00000 rw-p 00000000 00:00 0
7f4862e00000-7f4863000000 rw-s 00000000 00:27 1457854           /dev/hugepages/spdk_pid23576map_535
7f4863200000-7f4863400000 rw-s 00000000 00:27 1457903           /dev/hugepages/spdk_pid23576map_584
7f4863400000-7f4863600000 rw-s 00000000 00:27 1457904           /dev/hugepages/spdk_pid23576map_585
7f4863600000-7f4863800000 rw-s 00000000 00:27 1457901           /dev/hugepages/spdk_pid23576map_582
7f4863800000-7f4863a00000 rw-s 00000000 00:27 1457902           /dev/hugepages/spdk_pid23576map_583
7f4863a00000-7f4863c00000 rw-s 00000000 00:27 1457899           /dev/hugepages/spdk_pid23576map_580
7f4863c00000-7f4863e00000 rw-s 00000000 00:27 1457900           /dev/hugepages/spdk_pid23576map_581
7f4863e00000-7f4864000000 rw-s 000000Aborted (core dumped)


-- 
Meng Wang
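
[Editor's note: the report above configures SPDK v18.01 with an AIO backend;
the thread asks what configuration file was used. The original file was not
posted, but an SPDK v18.01-era target used an INI-style config, and a minimal
AIO-backed NVMe-oF subsystem might look roughly like the sketch below. The
device path, listen address, serial number, and NQN are illustrative
placeholders, not values taken from this report.]

```ini
# Hypothetical nvmf.conf sketch for an SPDK v18.01-era target with an
# AIO backend. All concrete values (device, address, NQN, SN) are
# placeholders for illustration only.
[Nvmf]
  AcceptorPollRate 10000

[AIO]
  # Expose a kernel block device as bdev "AIO0"
  AIO /dev/sdb AIO0

[Subsystem1]
  NQN nqn.2016-06.io.spdk:cnode1
  Listen RDMA 192.168.1.10:4420
  AllowAnyHost Yes
  SN SPDK00000000000001
  Namespace AIO0
```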


end of thread, other threads:[~2018-02-08  7:39 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-02-07 18:53 [SPDK] Bug report: core dump when `nvme disconnect` Meng Wang
  -- strict thread matches above, loose matches on Subject: below --
2018-02-08  7:39 Wan, Qun
2018-02-07  5:24 Wan, Qun
2018-02-07  3:49 Wan, Qun
2018-02-06 23:15 Meng Wang
