I tried this in my setup; below is my configuration. I have 4 CPUs, 1 core per socket, and 1 NUMA node:

    CPU(s):              4
    On-line CPU(s) list: 0-3
    Thread(s) per core:  1
    Core(s) per socket:  1
    Socket(s):           4
    NUMA node(s):        1

Hugepage information:

    HugePages_Total:   3824
    HugePages_Free:    3312
    HugePages_Rsvd:       0
    HugePages_Surp:       0
    Hugepagesize:      2048 kB

I start nvmf_tgt with the following options:

    -r /var/run/spdk.sock -m 0xc -s 1024

Since I am using -m 0xc (binary 1100, i.e. bits 2 and 3 set), the reactors start on cores 2 and 3. Here is the output:

    [ DPDK EAL parameters: nvmf -c 0xc -m 1024 --file-prefix=spdk_pid14924 ]
    EAL: Detected 4 lcore(s)
    EAL: No free hugepages reported in hugepages-1048576kB
    EAL: Probing VFIO support...
    app.c: 377:spdk_app_start: *NOTICE*: Total cores available: 2
    reactor.c: 654:spdk_reactors_init: *NOTICE*: Occupied cpu socket mask is 0x1
    reactor.c: 426:_spdk_reactor_run: *NOTICE*: Reactor started on core 3 on socket 0
    reactor.c: 426:_spdk_reactor_run: *NOTICE*: Reactor started on core 2 on socket 0

I then run the initiator and force it to run with core mask 0x1, i.e. core 0 (see the sketch after my signature). The host I am ssh'd into gets disconnected. Here is the target output at the time of the disconnect:

    rdma.c:1458:spdk_nvmf_rdma_accept: *INFO*: Acceptor Event: RDMA_CM_EVENT_CONNECT_REQUEST
    rdma.c: 654:nvmf_rdma_connect: *INFO*: Connect Recv on fabric intf name rxe0, dev_name uverbs0
    rdma.c: 658:nvmf_rdma_connect: *INFO*: Listen Id was 0x22a4b10 with verbs 0x22a3630. ListenAddr: 0x22a48d0
    rdma.c: 664:nvmf_rdma_connect: *INFO*: Calculating Queue Depth
    rdma.c: 669:nvmf_rdma_connect: *INFO*: Target Max Queue Depth: 128
    rdma.c: 674:nvmf_rdma_connect: *INFO*: Local NIC Max Send/Recv Queue Depth: 16384 Max Read/Write Queue Depth: 128
    rdma.c: 681:nvmf_rdma_connect: *INFO*: Host (Initiator) NIC Max Incoming RDMA R/W operations: 32 Max Outgoing RDMA R/W operations: 0
    rdma.c: 690:nvmf_rdma_connect: *INFO*: Host Receive Queue Size: 32
    rdma.c: 691:nvmf_rdma_connect: *INFO*: Host Send Queue Size: 31
    rdma.c: 697:nvmf_rdma_connect: *INFO*: Final Negotiated Queue Depth: 32 R/W Depth: 32
    rdma.c: 371:spdk_nvmf_rdma_qpair_initialize: *INFO*: New RDMA Connection: 0x26b0720
    rdma.c: 405:spdk_nvmf_rdma_qpair_initialize: *INFO*: Command Array: 0x7f8477a07000 Length: 800 LKey: 8bc0
    rdma.c: 407:spdk_nvmf_rdma_qpair_initialize: *INFO*: Completion Array: 0x7f8477a06000 Length: 200 LKey: 8c80
    rdma.c: 409:spdk_nvmf_rdma_qpair_initialize: *INFO*: In Capsule Data Array: 0x7f84777fe000 Length: 20000 LKey: 8d01
    rdma.c: 604:spdk_nvmf_rdma_event_accept: *INFO*: Sent back the accept
    rdma.c:1458:spdk_nvmf_rdma_accept: *INFO*: Acceptor Event: RDMA_CM_EVENT_ESTABLISHED
    Connection to 172.22.4.152 closed by remote host.
    Connection to 172.22.4.152 closed.

Note: I don't see any kernel panic; only the login session to the machine where I am trying this gets disconnected.

Has anyone tried this in their setup? If so, any help is highly appreciated.

--
Regards
Sudheendra Sampath
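P.S. For reference, a sketch of how I pin the initiator to core 0 (mask 0x1). This assumes SPDK's perf example app as the initiator; the queue depth, I/O size, workload, and runtime below are illustrative values rather than my exact ones, and traddr assumes the target is the 172.22.4.152 machine from the logs above:

    # Sketch only: -c 0x1 pins the perf initiator to core 0.
    # -q (queue depth), -o (I/O size), -w (workload), and -t (runtime)
    # are placeholder values; adjust for your test.
    ./examples/nvme/perf/perf -c 0x1 -q 32 -o 4096 -w randread -t 10 \
        -r 'trtype:RDMA adrfam:IPv4 traddr:172.22.4.152 trsvcid:4420'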