From: Sasha Kotchubievsky
Subject: Re: [SPDK] nvmf_tgt seg fault
Date: Tue, 27 Nov 2018 11:07:16 +0200
Message-ID: <6aa6b055-cdbb-63ba-86f6-193bc14ac555@dev.mellanox.co.il>
In-Reply-To: 93F1E6B8-15C2-4587-89FC-983C56CE9E9F@intel.com
To: spdk@lists.01.org

Great. We'll retest our use case with this patch.

Sasha

On 11/27/2018 2:19 AM, Harris, James R wrote:
> Here's a patch I'd like to throw out for discussion:
>
> https://review.gerrithub.io/#/c/spdk/spdk/+/434895/
>
> It sort of reverts GerritHub #428716 - at least the part about registering a separate MR for each hugepage. It tells DPDK to *not* free memory that has been dynamically allocated - that way we don't have to worry about DPDK freeing the memory in different units than it was allocated in. It fixes all of the MR-spanning issues and makes it much less likely that we bump up against the maximum number of MRs supported by a NIC.
>
> The downside is that if a user does a big allocation and then later frees it, the application retains the memory. But the vast majority of SPDK use cases allocate all of their memory up front (via mempools) and don't free it until the application shuts down.
>
> I'm driving for simplicity here. This would ensure that all buffers allocated via the SPDK malloc routines never span an MR boundary, and we could avoid a bunch of complexity in both the initiator and the target.
>
> -Jim
>
> On 11/22/18, 10:55 AM, "SPDK on behalf of Evgenii Kochetov" wrote:
>
> Hi,
>
> To follow up on Sasha's email with more details and add some food for thought.
>
> First of all, here is an easy way to reproduce the problem:
> 1. Disable all hugepages except 2MB:
>    echo 0 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
> 2. Create a simple NVMf target config:
>    [Transport]
>      Type RDMA
>    [Null]
>      Dev Null0 4096 4096
>    [Subsystem0]
>      NQN nqn.2016-06.io.spdk:cnode0
>      SN SPDK000DEADBEAF00
>      Namespace Null0
>      Listen RDMA 1.1.1.1:4420
>      AllowAnyHost yes
> 3. Start the NVMf target app:
>    ./app/nvmf_tgt/nvmf_tgt -c nvmf.conf -m 0x01 -L rdma
> 4. Start the initiator (perf tool):
>    ./examples/nvme/perf/perf -q 16 -o 131072 -w read -t 10 -r 'trtype:RDMA adrfam:IPv4 traddr:1.1.1.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0'
> 5. Check for errors on the target:
>    mlx5: host: got completion with error:
>    00000000 00000000 00000000 00000000
>    00000000 00000000 00000000 00000000
>    00000001 00000000 00000000 00000000
>    00000000 9d005304 0800032d 0002bfd2
>    rdma.c:2584:spdk_nvmf_rdma_poller_poll: *DEBUG*: CQ error on CQ 0xc27560, Request 0x13392248 (4): local protection error
>
> The root cause, as noted already, is the dynamic memory allocation feature - more precisely, the part that splits allocated memory regions into hugepage-sized segments in the function memory_hotplug_cb in lib/env_dpdk/memory.c. This code was added in https://review.gerrithub.io/c/spdk/spdk/+/428716. As a result, a separate MR is registered for each 2MB hugepage. When we create the memory pool for data buffers (data_buf_pool) in lib/nvmf/rdma.c, some buffers cross a hugepage (and MR) boundary. When such a buffer is used for an RDMA operation, a local protection error is generated.
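
As an illustration of the boundary condition described above, here is a small standalone sketch in plain C (not SPDK code; the base address and the 128K buffer size are made up for the example). Once each 2MB hugepage is registered as its own MR, a buffer that starts in one hugepage and ends in the next has no single lkey covering it, and posting it as one SGE produces exactly the local protection error shown above.

    /* Standalone sketch (not SPDK code): a buffer carved out of a larger
     * allocation can straddle a 2MB hugepage boundary.  With one MR per
     * hugepage, no single lkey covers such a buffer. */
    #include <stdint.h>
    #include <stdio.h>

    #define HUGEPAGE_SIZE (2ULL * 1024 * 1024)  /* 2MB hugepages */
    #define BUF_SIZE      (128 * 1024)          /* 128K I/O size from the report */

    /* Returns 1 if [buf, buf + len) crosses a 2MB hugepage boundary. */
    static int crosses_hugepage(uintptr_t buf, size_t len)
    {
            uintptr_t first_page = buf & ~(uintptr_t)(HUGEPAGE_SIZE - 1);
            uintptr_t last_page  = (buf + len - 1) & ~(uintptr_t)(HUGEPAGE_SIZE - 1);

            return first_page != last_page;
    }

    int main(void)
    {
            /* Hypothetical buffer that starts 64K before a hugepage boundary. */
            uintptr_t buf = 16 * HUGEPAGE_SIZE - 64 * 1024;

            printf("crosses hugepage boundary: %s\n",
                   crosses_hugepage(buf, BUF_SIZE) ? "yes (two MRs needed)" : "no");
            return 0;
    }
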
> The change itself looks reasonable, and it's not clear at the moment how this problem can be fixed.
>
> As Ben said, the -s parameter can be used as a workaround. In our setup, when we add '-s 1024' to the target parameters, no errors occur.
>
> BR, Evgeniy.
>
> -----Original Message-----
> From: SPDK On Behalf Of Sasha Kotchubievsky
> Sent: Thursday, November 22, 2018 3:21 PM
> To: Storage Performance Development Kit; Walker, Benjamin
> Subject: Re: [SPDK] nvmf_tgt seg fault
>
> Hi,
>
> It looks like some allocations cross the hugepage boundary, and the MR boundary as well.
>
> After switching to bigger hugepages (1GB instead of 2MB) we don't see the problem.
>
> The crash after hitting "local protection error" is solved in master. In 18.10, the crash is the result of wrong processing of the opcode in the RDMA completion.
>
> Sasha
>
> On 11/21/2018 7:03 PM, Walker, Benjamin wrote:
> > On Tue, 2018-11-20 at 09:57 +0200, Sasha Kotchubievsky wrote:
> >> We see a similar issue on an ARM platform with SPDK 18.10 plus a couple of our ARM-related patches.
> >>
> >> The target crashes after receiving a completion in error state:
> >>
> >> rdma.c:2699:spdk_nvmf_rdma_poller_poll: *WARNING*: CQ error on CQ 0x7ff220, Request 0x13586616 (4): local protection error
> >>
> >> The target crashes after the "local protection error" followed by flush errors. I see the same pattern in the logs reported in this email thread.
> > Joe, Sasha - can you both try to reproduce the issue after having the NVMe-oF target pre-allocate its memory? This is the '-s' option to the target. Set it to at least 4GB to be safe. I'm concerned that this is a problem introduced by the patches that enable dynamic memory allocation (which creates multiple ibv_mrs and requires smarter splitting code that doesn't exist yet).
> >
> >> 90b4bd6cf9bb5805c0c6d8df982ac5f2e3d90cce improves error handling, so I'd like to check whether it solves the issue.
> >>
> >> Best regards
> >>
> >> Sasha
> >>
> >> On 11/18/2018 9:19 PM, Luse, Paul E wrote:
> >>> FYI, the test I ran was on master as of Friday... I can check versions if you tell me the steps to get exactly what you're looking for.
> >>>
> >>> Thx
> >>> Paul
> >>>
> >>> -----Original Message-----
> >>> From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Sasha Kotchubievsky
> >>> Sent: Sunday, November 18, 2018 9:52 AM
> >>> To: spdk(a)lists.01.org
> >>> Subject: Re: [SPDK] nvmf_tgt seg fault
> >>>
> >>> Hi,
> >>>
> >>> Can you check the issue on the latest master?
> >>>
> >>> Does https://github.com/spdk/spdk/commit/90b4bd6cf9bb5805c0c6d8df982ac5f2e3d90cce, merged recently, change the behavior?
> >>>
> >>> Do you use upstream OFED or Mellanox MOFED? Which version?
> >>>
> >>> Best regards
> >>>
> >>> Sasha
> >>>
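
For context on the registration trade-off Ben and Jim discuss above, here is an illustrative sketch in plain libibverbs C (the helper names are invented for the example; this is not SPDK code and it omits error cleanup). One MR per 2MB hugepage means a buffer spanning two hugepages needs two lkeys, while a single MR over a contiguous, pre-allocated region covers every buffer carved out of it - which is essentially what pre-allocating memory, or never returning it to the allocator, buys you.

    /* Illustrative only (not SPDK code): registering one MR per 2MB
     * hugepage vs. one MR over the whole contiguous region. */
    #include <stddef.h>
    #include <infiniband/verbs.h>

    #define HUGEPAGE_SIZE (2UL * 1024 * 1024)
    #define MR_ACCESS (IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_READ | \
                       IBV_ACCESS_REMOTE_WRITE)

    /* One MR per hugepage: n_pages lkeys, and a buffer that straddles a
     * hugepage boundary is not covered by any single MR. */
    static int register_per_hugepage(struct ibv_pd *pd, void *base,
                                     size_t n_pages, struct ibv_mr **mrs)
    {
            for (size_t i = 0; i < n_pages; i++) {
                    mrs[i] = ibv_reg_mr(pd, (char *)base + i * HUGEPAGE_SIZE,
                                        HUGEPAGE_SIZE, MR_ACCESS);
                    if (mrs[i] == NULL) {
                            return -1;
                    }
            }
            return 0;
    }

    /* One MR over the whole pre-allocated region: a single lkey covers
     * every buffer inside it, so nothing can span an MR boundary. */
    static struct ibv_mr *register_whole_region(struct ibv_pd *pd, void *base,
                                                size_t n_pages)
    {
            return ibv_reg_mr(pd, base, n_pages * HUGEPAGE_SIZE, MR_ACCESS);
    }
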
> >>> On 11/14/2018 9:26 PM, Gruher, Joseph R wrote:
> >>>> Sure, done. Issue #500, do I win a prize? :)
> >>>>
> >>>> https://github.com/spdk/spdk/issues/500
> >>>>
> >>>> -Joe
> >>>>
> >>>>> -----Original Message-----
> >>>>> From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Harris, James R
> >>>>> Sent: Wednesday, November 14, 2018 11:16 AM
> >>>>> To: Storage Performance Development Kit
> >>>>> Subject: Re: [SPDK] nvmf_tgt seg fault
> >>>>>
> >>>>> Thanks for the report, Joe. Could you file an issue in GitHub for this?
> >>>>>
> >>>>> https://github.com/spdk/spdk/issues
> >>>>>
> >>>>> Thanks,
> >>>>>
> >>>>> -Jim
> >>>>>
> >>>>> On 11/14/18, 12:14 PM, "SPDK on behalf of Gruher, Joseph R" wrote:
> >>>>>
> >>>>>     Hi everyone-
> >>>>>
> >>>>>     I'm running a dual-socket Skylake server with P4510 NVMe drives and a 100Gb Mellanox CX4 NIC. The OS is Ubuntu 18.04 with kernel 4.18.16. The SPDK version is 18.10, the FIO version is 3.12. I'm running the SPDK NVMeoF target and exercising it from an initiator system (similar config to the target, but with a 50Gb NIC) using FIO with the bdev plugin. I find that 128K sequential workloads reliably and immediately seg fault nvmf_tgt. I can run 4KB random workloads without experiencing the seg fault, so the problem seems tied to the block size and/or IO pattern. I can run the same IO pattern against a local PCIe device using SPDK without a problem; I only see the failure when running the NVMeoF target with FIO driving the IO pattern from an SPDK initiator system.
> >>>>>
> >>>>>     Steps to reproduce and seg fault output follow below.
> >>>>>
> >>>>>     Start the target:
> >>>>>     sudo ~/install/spdk/app/nvmf_tgt/nvmf_tgt -m 0x0000F0 -r /var/tmp/spdk1.sock
> >>>>>
> >>>>>     Configure the target:
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock construct_nvme_bdev -b d1 -t pcie -a 0000:1a:00.0
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock construct_nvme_bdev -b d2 -t pcie -a 0000:1b:00.0
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock construct_nvme_bdev -b d3 -t pcie -a 0000:1c:00.0
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock construct_nvme_bdev -b d4 -t pcie -a 0000:1d:00.0
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock construct_nvme_bdev -b d5 -t pcie -a 0000:3d:00.0
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock construct_nvme_bdev -b d6 -t pcie -a 0000:3e:00.0
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock construct_nvme_bdev -b d7 -t pcie -a 0000:3f:00.0
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock construct_nvme_bdev -b d8 -t pcie -a 0000:40:00.0
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock construct_raid_bdev -n raid1 -s 4 -r 0 -b "d1n1 d2n1 d3n1 d4n1 d5n1 d6n1 d7n1 d8n1"
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_store raid1 store1
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l1 1200000
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l2 1200000
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l3 1200000
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l4 1200000
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l5 1200000
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l6 1200000
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l7 1200000
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l8 1200000
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l9 1200000
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l10 1200000
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l11 1200000
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l12 1200000
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn1 -a
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn2 -a
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn3 -a
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn4 -a
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn5 -a
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn6 -a
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn7 -a
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn8 -a
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn9 -a
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn10 -a
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn11 -a
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn12 -a
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn1 store1/l1
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn2 store1/l2
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn3 store1/l3
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn4 store1/l4
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn5 store1/l5
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn6 store1/l6
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn7 store1/l7
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn8 store1/l8
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn9 store1/l9
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn10 store1/l10
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn11 store1/l11
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn12 store1/l12
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn1 -t rdma -a 10.5.0.202 -s 4420
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn2 -t rdma -a 10.5.0.202 -s 4420
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn3 -t rdma -a 10.5.0.202 -s 4420
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn4 -t rdma -a 10.5.0.202 -s 4420
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn5 -t rdma -a 10.5.0.202 -s 4420
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn6 -t rdma -a 10.5.0.202 -s 4420
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn7 -t rdma -a 10.5.0.202 -s 4420
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn8 -t rdma -a 10.5.0.202 -s 4420
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn9 -t rdma -a 10.5.0.202 -s 4420
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn10 -t rdma -a 10.5.0.202 -s 4420
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn11 -t rdma -a 10.5.0.202 -s 4420
> >>>>>     sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn12 -t rdma -a 10.5.0.202 -s 4420
> >>>>>
> >>>>>     FIO file on initiator:
> >>>>>     [global]
> >>>>>     rw=rw
> >>>>>     rwmixread=100
> >>>>>     numjobs=1
> >>>>>     iodepth=32
> >>>>>     bs=128k
> >>>>>     direct=1
> >>>>>     thread=1
> >>>>>     time_based=1
> >>>>>     ramp_time=10
> >>>>>     runtime=10
> >>>>>     ioengine=spdk_bdev
> >>>>>     spdk_conf=/home/don/fio/nvmeof.conf
> >>>>>     group_reporting=1
> >>>>>     unified_rw_reporting=1
> >>>>>     exitall=1
> >>>>>     randrepeat=0
> >>>>>     norandommap=1
> >>>>>     cpus_allowed_policy=split
> >>>>>     cpus_allowed=1-2
> >>>>>     [job1]
> >>>>>     filename=b0n1
> >>>>>
> >>>>>     Config file on initiator:
> >>>>>     [Nvme]
> >>>>>     TransportID "trtype:RDMA traddr:10.5.0.202 trsvcid:4420 subnqn:nqn.2018-11.io.spdk:nqn1 adrfam:IPv4" b0
> >>>>>
> >>>>>     Run FIO on the initiator and nvmf_tgt seg faults immediately:
> >>>>>     sudo LD_PRELOAD=/home/don/install/spdk/examples/bdev/fio_plugin/fio_plugin fio sr.ini
> >>>>>
> >>>>>     The seg fault looks like this:
> >>>>>     mlx5: donsl202: got completion with error:
> >>>>>     00000000 00000000 00000000 00000000
> >>>>>     00000000 00000000 00000000 00000000
> >>>>>     00000001 00000000 00000000 00000000
> >>>>>     00000000 9d005304 0800011b 0008d0d2
> >>>>>     rdma.c:2698:spdk_nvmf_rdma_poller_poll: *WARNING*: CQ error on CQ 0x7f079c01d170, Request 0x139670660105216 (4): local protection error
> >>>>>     rdma.c: 501:spdk_nvmf_rdma_set_ibv_state: *NOTICE*: IBV QP#1 changed to: IBV_QPS_ERR
> >>>>>     rdma.c:2698:spdk_nvmf_rdma_poller_poll: *WARNING*: CQ error on CQ 0x7f079c01d170, Request 0x139670660105216 (5): Work Request Flushed Error
> >>>>>     rdma.c: 501:spdk_nvmf_rdma_set_ibv_state: *NOTICE*: IBV QP#1 changed to: IBV_QPS_ERR
> >>>>>     rdma.c:2698:spdk_nvmf_rdma_poller_poll: *WARNING*: CQ error on CQ 0x7f079c01d170, Request 0x139670660106280 (5): Work Request Flushed Error
> >>>>>     rdma.c: 501:spdk_nvmf_rdma_set_ibv_state: *NOTICE*: IBV QP#1 changed to: IBV_QPS_ERR
> >>>>>     rdma.c:2698:spdk_nvmf_rdma_poller_poll: *WARNING*: CQ error on CQ 0x7f079c01d170, Request 0x139670660106280 (5): Work Request Flushed Error
> >>>>>     Segmentation fault
> >>>>>
> >>>>>     Adds this to dmesg:
> >>>>>     [71561.859644] nvme nvme1: Connect rejected: status 8 (invalid service ID).
> >>>>>     [71561.866466] nvme nvme1: rdma connection establishment failed (-104)
> >>>>>     [71567.805288] reactor_7[9166]: segfault at 88 ip 00005630621e6580 sp 00007f07af5fc400 error 4 in nvmf_tgt[563062194000+df000]
> >>>>>     [71567.805293] Code: 48 8b 30 e8 82 f7 ff ff e9 7d fe ff ff 0f 1f 44 00 00 41 81 f9 80 00 00 00 75 37 49 8b 07 4c 8b 70 40 48 c7 40 50 00 00 00 00 <49> 8b 96 88 00 00 00 48 89 50 58 49 8b 96 88 00 00 00 48 89 02 48
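
A side note on the crash itself, tying back to Sasha's observation earlier in the thread that the 18.10 segfault follows from mishandling the opcode of an error completion: per the ibv_poll_cq man page, when wc.status is not IBV_WC_SUCCESS only wr_id, status, qp_num and vendor_err are valid, so a poller should not branch on wc.opcode for error completions. The sketch below is illustrative only; it is not the actual SPDK fix.

    /* Illustrative only (not the SPDK fix): handle error completions
     * before ever looking at wc.opcode, which is undefined on error. */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    static void poll_once(struct ibv_cq *cq)
    {
            struct ibv_wc wc[32];
            int n = ibv_poll_cq(cq, 32, wc);

            for (int i = 0; i < n; i++) {
                    if (wc[i].status != IBV_WC_SUCCESS) {
                            /* Only wr_id/status/qp_num/vendor_err are valid here;
                             * recover the request from wr_id and tear down the QP. */
                            fprintf(stderr, "CQ error, wr_id %llu: %s\n",
                                    (unsigned long long)wc[i].wr_id,
                                    ibv_wc_status_str(wc[i].status));
                            continue;
                    }
                    switch (wc[i].opcode) {
                    case IBV_WC_SEND:
                    case IBV_WC_RECV:
                    case IBV_WC_RDMA_WRITE:
                    case IBV_WC_RDMA_READ:
                            /* Normal completion handling goes here. */
                            break;
                    default:
                            break;
                    }
            }
    }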