Hi,

Our testing environment is:

SW:
  CentOS Linux release 8.0.1905 (Core)
  Kernel: 4.18.0-80.el8.x86_64
  MLNX_OFED_LINUX-4.7-1.0.0.1
HW:
  NIC: Mellanox Technologies MT27800 Family [ConnectX-5]
  CPU: Intel(R) Xeon(R) Gold 6136 CPU @ 3.00GHz

We used a null device in the target. On both sides (target and initiator) we ran the same SPDK version.

Best regards,
Sasha

-----Original Message-----
From: Walker, Benjamin
Sent: Tuesday, November 5, 2019 8:08 PM
To: Storage Performance Development Kit
Subject: [SPDK] Re: SPDK socket abstraction layer

Can you outline your system setup so I can reproduce this? On my system the network stack is currently electing to do a deferred copy, so the performance is not good. I'd love to reproduce your exact setup.

I'm not surprised there are bugs at higher queue depth. The patch is definitely still a work in progress.

On Nov 4, 2019 9:26 PM, allenz(a)mellanox.com wrote:

Hi,

At Sasha's suggestion, we synced the code to the patch (https://review.gerrithub.io/c/spdk/spdk/+/471752). We ran tests on two x86-64 servers connected with Mellanox ConnectX-5 100G NICs.

For perf with queue depth 16 and 4K I/O, we saw a 14% improvement with the zero-copy patch when 1 or 2 cores were used, and a 6% improvement when more cores (e.g., 8) were used.

Unfortunately, when we tried a queue depth greater than 16, or I/O larger than 4K, perf hung or got a CQ error. Without the zero-copy patch, queue depth 64 or 64K I/O worked fine.

Best regards,
Allen

_______________________________________________
SPDK mailing list -- spdk(a)lists.01.org
To unsubscribe send an email to spdk-leave(a)lists.01.org
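
[Editor's sketch] For anyone trying to reproduce the setup described in the thread (an NVMe-oF/TCP target backed by a null bdev, driven from the initiator with the SPDK perf tool at QD 16 / 4K I/O), here is a rough outline. The paths, IP address, NQN, serial number, and bdev size below are placeholders, not values taken from the thread, and the RPC method names follow recent SPDK releases (older releases used names such as construct_null_bdev and nvmf_subsystem_create):

#!/usr/bin/env python3
# Rough sketch: configure an SPDK NVMe-oF/TCP target with a null bdev, then
# run the SPDK perf example at queue depth 16 with 4 KiB I/O.
import subprocess

SPDK_DIR = "/path/to/spdk"          # placeholder: SPDK source tree on each host
TARGET_IP = "192.168.1.10"          # placeholder: target address on the 100G link
NQN = "nqn.2016-06.io.spdk:cnode1"  # placeholder subsystem NQN

def rpc(*args):
    """Issue one JSON-RPC call to a running nvmf_tgt via scripts/rpc.py."""
    subprocess.run([f"{SPDK_DIR}/scripts/rpc.py", *args], check=True)

def configure_target():
    # Assumes the nvmf_tgt application is already running on the target host.
    rpc("nvmf_create_transport", "-t", "TCP")
    rpc("bdev_null_create", "Null0", "8192", "512")   # 8 GiB null bdev, 512 B blocks
    rpc("nvmf_create_subsystem", NQN, "-a", "-s", "SPDK00000000000001")
    rpc("nvmf_subsystem_add_ns", NQN, "Null0")
    rpc("nvmf_subsystem_add_listener", NQN, "-t", "tcp", "-a", TARGET_IP, "-s", "4420")

def run_perf():
    # Initiator side: 4 KiB random reads, queue depth 16, 60 s, over NVMe/TCP.
    subprocess.run([
        f"{SPDK_DIR}/build/examples/perf",  # placeholder; older trees build examples/nvme/perf/perf
        "-q", "16", "-o", "4096", "-w", "randread", "-t", "60",
        "-r", f"trtype:TCP adrfam:IPv4 traddr:{TARGET_IP} trsvcid:4420",
    ], check=True)

if __name__ == "__main__":
    configure_target()   # run on the target host
    # run_perf()         # run on the initiator host

To try larger queue depths or I/O sizes, as Allen did, the "-q" and "-o" values in run_perf() are the only knobs that need to change (e.g., -q 64 or -o 65536).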