Great. We'll retest our use case with this patch.

Sasha

On 11/27/2018 2:19 AM, Harris, James R wrote:
> Here's a patch I'd like to throw out for discussion:
>
> https://review.gerrithub.io/#/c/spdk/spdk/+/434895/
>
> It sort of reverts GerritHub #428716 - at least the part about registering a separate MR for each hugepage. It tells DPDK to *not* free memory that has been dynamically allocated - that way we don't have to worry about DPDK freeing the memory in different units than it was allocated. This fixes all of the MR-spanning issues and significantly reduces the risk of bumping up against the maximum number of MRs supported by a NIC.
>
> The downside is that if a user does a big allocation and then later frees it, the application still retains the memory. But the vast majority of SPDK use cases allocate all of their memory up front (via mempools) and don't free it until the application shuts down.
>
> I'm driving for simplicity here. This would ensure that all buffers allocated via SPDK malloc routines never span an MR boundary, and we could avoid a bunch of complexity in both the initiator and target.
>
> -Jim
>
> On 11/22/18, 10:55 AM, "SPDK on behalf of Evgenii Kochetov" wrote:
>
> Hi,
>
> To follow up Sasha's email with more details and add some food for thought.
>
> First of all, here is an easy way to reproduce the problem:
> 1. Disable all hugepages except 2MB:
>    echo 0 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
> 2. Create a simple NVMf target config:
>    [Transport]
>      Type RDMA
>    [Null]
>      Dev Null0 4096 4096
>    [Subsystem0]
>      NQN nqn.2016-06.io.spdk:cnode0
>      SN SPDK000DEADBEAF00
>      Namespace Null0
>      Listen RDMA 1.1.1.1:4420
>      AllowAnyHost yes
> 3. Start the NVMf target app:
>    ./app/nvmf_tgt/nvmf_tgt -c nvmf.conf -m 0x01 -L rdma
> 4. Start the initiator (perf tool):
>    ./examples/nvme/perf/perf -q 16 -o 131072 -w read -t 10 -r 'trtype:RDMA adrfam:IPv4 traddr:1.1.1.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0'
> 5. Check for errors on the target:
>    mlx5: host: got completion with error:
>    00000000 00000000 00000000 00000000
>    00000000 00000000 00000000 00000000
>    00000001 00000000 00000000 00000000
>    00000000 9d005304 0800032d 0002bfd2
>    rdma.c:2584:spdk_nvmf_rdma_poller_poll: *DEBUG*: CQ error on CQ 0xc27560, Request 0x13392248 (4): local protection error
>
> The root cause, as was noted already, is the dynamic memory allocation feature - more precisely, the part that splits allocated memory regions into hugepage-sized segments in the function memory_hotplug_cb in lib/env_dpdk/memory.c. This code was added in https://review.gerrithub.io/c/spdk/spdk/+/428716. As a result, a separate MR is registered for each 2MB hugepage. When we create the memory pool for data buffers (data_buf_pool) in lib/nvmf/rdma.c, some buffers cross a hugepage (and MR) boundary. When such a buffer is used for an RDMA operation, a local protection error is generated.
> The change itself looks reasonable, and it's not clear at the moment how this problem can be fixed.
>
> As Ben said, the -s parameter can be used as a workaround. In our setup, when we add '-s 1024' to the target parameters, no errors occur.
>
> BR, Evgeniy.
>
> -----Original Message-----
> From: SPDK On Behalf Of Sasha Kotchubievsky
> Sent: Thursday, November 22, 2018 3:21 PM
> To: Storage Performance Development Kit ; Walker, Benjamin
> Subject: Re: [SPDK] nvmf_tgt seg fault
>
> Hi,
>
> It looks like some allocations cross a huge-page boundary, and an MR boundary as well.
>
> After switching to bigger huge-pages (1GB instead of 2MB) we don't see the problem.
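The boundary condition being described is easy to picture: with one MR registered per 2MB hugepage, a data buffer can be posted as a single RDMA SGE only if its first and last bytes fall inside the same hugepage. Here is a minimal standalone sketch of that check - the spans_mr_boundary() helper and the example buffer offsets are made up for illustration, not code taken from SPDK:

#include <stdint.h>
#include <stdio.h>

/*
 * Illustration only: with one MR per 2MB hugepage, a buffer that straddles
 * a hugepage boundary cannot be covered by a single memory key, and an RDMA
 * operation on it fails with a local protection error.
 */
#define HUGEPAGE_SIZE (2ULL * 1024 * 1024)

static int spans_mr_boundary(uintptr_t buf, size_t len)
{
    /* true if the first and last byte land in different 2MB hugepages */
    return (buf / HUGEPAGE_SIZE) != ((buf + len - 1) / HUGEPAGE_SIZE);
}

int main(void)
{
    /* a 128KB buffer that starts 64KB before the end of a hugepage,
     * similar to a mempool element carved out near a hugepage edge */
    uintptr_t buf = 0x200000 + HUGEPAGE_SIZE - 0x10000;
    size_t len = 128 * 1024;

    printf("buffer %s a 2MB MR boundary\n",
           spans_mr_boundary(buf, len) ? "crosses" : "stays within");
    return 0;
}

Both approaches discussed in this thread - pre-allocating memory with '-s', and Jim's patch, which effectively goes back to registering MRs per allocated region rather than per hugepage - avoid ever handing out a buffer for which this condition is true.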
>
> The crash after hitting "local protection error" is solved in "master". In 18.10, the crash is a result of wrong processing of the op code in the RDMA completion.
>
> Sasha
>
> On 11/21/2018 7:03 PM, Walker, Benjamin wrote:
> > On Tue, 2018-11-20 at 09:57 +0200, Sasha Kotchubievsky wrote:
> >> We see a similar issue on an ARM platform with SPDK 18.10 plus a couple of our ARM-related patches.
> >>
> >> The target crashes after receiving a completion in error state:
> >>
> >> rdma.c:2699:spdk_nvmf_rdma_poller_poll: *WARNING*: CQ error on CQ 0x7ff220, Request 0x13586616 (4): local protection error
> >>
> >> The target crashes after the "local protection error" followed by flush errors. I see the same pattern in the logs reported in this email thread.
> > Joe, Sasha - can you both try to reproduce the issue after having the NVMe-oF target pre-allocate its memory? This is the '-s' option to the target. Set it to at least 4GB to be safe. I'm concerned that this is a problem introduced by the patches that enable dynamic memory allocation (which creates multiple ibv_mrs and requires smarter splitting code that doesn't exist yet).
> >
> >> 90b4bd6cf9bb5805c0c6d8df982ac5f2e3d90cce improves error handling, so I'd like to check if it solves the issue.
> >>
> >> Best regards
> >>
> >> Sasha
> >>
> >> On 11/18/2018 9:19 PM, Luse, Paul E wrote:
> >>> FYI the test I ran was on master as of Fri... I can check versions if you tell me the steps to get exactly what you're looking for.
> >>>
> >>> Thx
> >>> Paul
> >>>
> >>> -----Original Message-----
> >>> From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Sasha Kotchubievsky
> >>> Sent: Sunday, November 18, 2018 9:52 AM
> >>> To: spdk(a)lists.01.org
> >>> Subject: Re: [SPDK] nvmf_tgt seg fault
> >>>
> >>> Hi,
> >>>
> >>> Can you check the issue on the latest master?
> >>>
> >>> Does https://github.com/spdk/spdk/commit/90b4bd6cf9bb5805c0c6d8df982ac5f2e3d90cce, merged recently, change the behavior?
> >>>
> >>> Do you use upstream OFED or Mellanox MOFED? Which version?
> >>>
> >>> Best regards
> >>>
> >>> Sasha
> >>>
> >>> On 11/14/2018 9:26 PM, Gruher, Joseph R wrote:
> >>>> Sure, done. Issue #500, do I win a prize? :)
> >>>>
> >>>> https://github.com/spdk/spdk/issues/500
> >>>>
> >>>> -Joe
> >>>>
> >>>>> -----Original Message-----
> >>>>> From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Harris, James R
> >>>>> Sent: Wednesday, November 14, 2018 11:16 AM
> >>>>> To: Storage Performance Development Kit
> >>>>> Subject: Re: [SPDK] nvmf_tgt seg fault
> >>>>>
> >>>>> Thanks for the report, Joe. Could you file an issue in GitHub for this?
> >>>>>
> >>>>> https://github.com/spdk/spdk/issues
> >>>>>
> >>>>> Thanks,
> >>>>>
> >>>>> -Jim
> >>>>>
> >>>>> On 11/14/18, 12:14 PM, "SPDK on behalf of Gruher, Joseph R" wrote:
> >>>>>
> >>>>> Hi everyone-
> >>>>>
> >>>>> I'm running a dual-socket Skylake server with P4510 NVMe and a 100Gb Mellanox CX4 NIC. The OS is Ubuntu 18.04 with kernel 4.18.16. The SPDK version is 18.10, the FIO version is 3.12. I'm running the SPDK NVMeoF target and exercising it from an initiator system (similar config to the target but with a 50Gb NIC) using FIO with the bdev plugin. I find 128K sequential workloads reliably and immediately seg fault nvmf_tgt. I can run 4KB random workloads without experiencing the seg fault, so the problem seems tied to the block size and/or IO pattern. I can run the same IO pattern against a local PCIe device using SPDK without a problem; I only see the failure when running the NVMeoF target with FIO running the IO pattern from an SPDK initiator system.
> >>>>>
> >>>>> Steps to reproduce and seg fault output follow below.
> >>>>>
> >>>>> Start the target:
> >>>>> sudo ~/install/spdk/app/nvmf_tgt/nvmf_tgt -m 0x0000F0 -r /var/tmp/spdk1.sock
> >>>>>
> >>>>> Configure the target:
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock construct_nvme_bdev -b d1 -t pcie -a 0000:1a:00.0
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock construct_nvme_bdev -b d2 -t pcie -a 0000:1b:00.0
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock construct_nvme_bdev -b d3 -t pcie -a 0000:1c:00.0
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock construct_nvme_bdev -b d4 -t pcie -a 0000:1d:00.0
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock construct_nvme_bdev -b d5 -t pcie -a 0000:3d:00.0
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock construct_nvme_bdev -b d6 -t pcie -a 0000:3e:00.0
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock construct_nvme_bdev -b d7 -t pcie -a 0000:3f:00.0
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock construct_nvme_bdev -b d8 -t pcie -a 0000:40:00.0
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock construct_raid_bdev -n raid1 -s 4 -r 0 -b "d1n1 d2n1 d3n1 d4n1 d5n1 d6n1 d7n1 d8n1"
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_store raid1 store1
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l1 1200000
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l2 1200000
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l3 1200000
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l4 1200000
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l5 1200000
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l6 1200000
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l7 1200000
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l8 1200000
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l9 1200000
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l10 1200000
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l11 1200000
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l12 1200000
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn1 -a
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn2 -a
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn3 -a
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn4 -a
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn5 -a
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn6 -a
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn7 -a
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn8 -a
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn9 -a
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn10 -a
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn11 -a
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn12 -a
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn1 store1/l1
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn2 store1/l2
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn3 store1/l3
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn4 store1/l4
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn5 store1/l5
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn6 store1/l6
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn7 store1/l7
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn8 store1/l8
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn9 store1/l9
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn10 store1/l10
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn11 store1/l11
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn12 store1/l12
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn1 -t rdma -a 10.5.0.202 -s 4420
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn2 -t rdma -a 10.5.0.202 -s 4420
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn3 -t rdma -a 10.5.0.202 -s 4420
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn4 -t rdma -a 10.5.0.202 -s 4420
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn5 -t rdma -a 10.5.0.202 -s 4420
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn6 -t rdma -a 10.5.0.202 -s 4420
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn7 -t rdma -a 10.5.0.202 -s 4420
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn8 -t rdma -a 10.5.0.202 -s 4420
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn9 -t rdma -a 10.5.0.202 -s 4420
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn10 -t rdma -a 10.5.0.202 -s 4420
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn11 -t rdma -a 10.5.0.202 -s 4420
> >>>>> sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn12 -t rdma -a 10.5.0.202 -s 4420
> >>>>>
> >>>>> FIO file on initiator:
> >>>>> [global]
> >>>>> rw=rw
> >>>>> rwmixread=100
> >>>>> numjobs=1
> >>>>> iodepth=32
> >>>>> bs=128k
> >>>>> direct=1
> >>>>> thread=1
> >>>>> time_based=1
> >>>>> ramp_time=10
> >>>>> runtime=10
> >>>>> ioengine=spdk_bdev
> >>>>> spdk_conf=/home/don/fio/nvmeof.conf
> >>>>> group_reporting=1
> >>>>> unified_rw_reporting=1
> >>>>> exitall=1
> >>>>> randrepeat=0
> >>>>> norandommap=1
> >>>>> cpus_allowed_policy=split
> >>>>> cpus_allowed=1-2
> >>>>> [job1]
> >>>>> filename=b0n1
> >>>>>
> >>>>> Config file on initiator:
> >>>>> [Nvme]
> >>>>> TransportID "trtype:RDMA traddr:10.5.0.202 trsvcid:4420 subnqn:nqn.2018-11.io.spdk:nqn1 adrfam:IPv4" b0
> >>>>>
> >>>>> Run FIO on the initiator and nvmf_tgt seg faults immediately:
> >>>>> sudo LD_PRELOAD=/home/don/install/spdk/examples/bdev/fio_plugin/fio_plugin fio sr.ini
> >>>>>
> >>>>> The seg fault looks like this:
> >>>>> mlx5: donsl202: got completion with error:
> >>>>> 00000000 00000000 00000000 00000000
> >>>>> 00000000 00000000 00000000 00000000
> >>>>> 00000001 00000000 00000000 00000000
> >>>>> 00000000 9d005304 0800011b 0008d0d2
> >>>>> rdma.c:2698:spdk_nvmf_rdma_poller_poll: *WARNING*: CQ error on CQ 0x7f079c01d170, Request 0x139670660105216 (4): local protection error
> >>>>> rdma.c: 501:spdk_nvmf_rdma_set_ibv_state: *NOTICE*: IBV QP#1 changed to: IBV_QPS_ERR
> >>>>> rdma.c:2698:spdk_nvmf_rdma_poller_poll: *WARNING*: CQ error on CQ 0x7f079c01d170, Request 0x139670660105216 (5): Work Request Flushed Error
> >>>>> rdma.c: 501:spdk_nvmf_rdma_set_ibv_state: *NOTICE*: IBV QP#1 changed to: IBV_QPS_ERR
> >>>>> rdma.c:2698:spdk_nvmf_rdma_poller_poll: *WARNING*: CQ error on CQ 0x7f079c01d170, Request 0x139670660106280 (5): Work Request Flushed Error
> >>>>> rdma.c: 501:spdk_nvmf_rdma_set_ibv_state: *NOTICE*: IBV QP#1 changed to: IBV_QPS_ERR
> >>>>> rdma.c:2698:spdk_nvmf_rdma_poller_poll: *WARNING*: CQ error on CQ 0x7f079c01d170, Request 0x139670660106280 (5): Work Request Flushed Error
> >>>>> Segmentation fault
> >>>>>
> >>>>> It adds this to dmesg:
> >>>>> [71561.859644] nvme nvme1: Connect rejected: status 8 (invalid service ID).
> >>>>> [71561.866466] nvme nvme1: rdma connection establishment failed (-104)
> >>>>> [71567.805288] reactor_7[9166]: segfault at 88 ip 00005630621e6580 sp 00007f07af5fc400 error 4 in nvmf_tgt[563062194000+df000]
> >>>>> [71567.805293] Code: 48 8b 30 e8 82 f7 ff ff e9 7d fe ff ff 0f 1f 44 00 00 41 81 f9 80 00 00 00 75 37 49 8b 07 4c 8b 70 40 48 c7 40 50 00 00 00 00 <49> 8b 96 88 00 00 00 48 89 50 58 49 8b 96 88 00 00 00 48 89 02 48
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk