Hi, Ziye:

I modified the buffer pool sizes as follows:

//#define BUF_SMALL_POOL_SIZE 8191
#define BUF_SMALL_POOL_SIZE 16381
//#define BUF_LARGE_POOL_SIZE 1023
#define BUF_LARGE_POOL_SIZE 4095

But I still get bad performance with SeqRead BS=128K + QD=32 on the SPDK NULL block device. Any other suggestions?

Thanks

On Wed, Dec 23, 2020 at 10:16 AM Yang, Ziye wrote:
> Hi Lego,
>
> If you want to test without changing the code, you may try the following patch:
>
> https://review.spdk.io/gerrit/c/spdk/spdk/+/5664
>
> It supports leveraging RPC to configure the small and large buffer pool sizes.
>
> Best Regards,
> Ziye Yang
>
> -----Original Message-----
> From: Yang, Ziye
> Sent: Tuesday, December 22, 2020 1:46 PM
> To: Storage Performance Development Kit
> Subject: RE: [SPDK] iSCSI Sequential Read with BS 128K Performance
>
> Hi Lego,
>
> "Focus on BS 128K
> [OK]  QD1:  read: IOPS=5543, BW=693MiB/s (727MB/s)
> [OK]  QD8:  read: IOPS=21.1k, BW=2634MiB/s (2762MB/s)
> [NOK] QD16: read: IOPS=301, BW=37.7MiB/s (39.5MB/s)"
>
> How about changing
>
> #define BUF_LARGE_POOL_SIZE 1023
>
> in lib/bdev/bdev.c? This macro may have an impact on performance when you test with big I/O sizes. We currently define the total size as 1023 * 64 KB. Would you change the value to something bigger and see whether it helps the test? (Currently we have not yet exported it as configurable.)
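As a side note on what these pool sizes cost in memory: a quick back-of-the-envelope sketch, assuming 64 KiB per large buffer (per the "1023 * 64 KB" figure quoted above) and 8 KiB per small buffer (an assumption, not stated in the thread):

```python
# Rough memory footprint of the bdev buffer pools.
# Buffer sizes are assumptions: 8 KiB small, 64 KiB large
# (the large size matches the 1023 * 64 KB figure in the email).
SMALL_BUF_SIZE = 8 * 1024
LARGE_BUF_SIZE = 64 * 1024

def pool_bytes(count, buf_size):
    """Total bytes consumed by `count` buffers of `buf_size` bytes each."""
    return count * buf_size

# Default vs. the modified values from the email above.
default_large = pool_bytes(1023, LARGE_BUF_SIZE)
tuned_large = pool_bytes(4095, LARGE_BUF_SIZE)

print(f"default large pool: {default_large / 2**20:.1f} MiB")  # ~63.9 MiB
print(f"tuned   large pool: {tuned_large / 2**20:.1f} MiB")    # ~255.9 MiB
```

For comparison, 32 outstanding 128 KiB reads only keep about 4 MiB in flight, so the default pool is not obviously exhausted by queue depth alone; the calculation just bounds how much memory the tuning trades away.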
> Best Regards,
> Ziye Yang
>
> -----Original Message-----
> From: Lego Lin
> Sent: Friday, December 18, 2020 4:09 PM
> To: Storage Performance Development Kit
> Subject: [SPDK] iSCSI Sequential Read with BS 128K Performance
>
> Hi, all:
>
> I just tested SPDK 20.10.x iSCSI performance and got the following data (all tests with FIO QD 32):
>
> [OK]  (100%RndR_4K, 100%RndW_4K):     read: IOPS=155k, BW=605MiB/s (634MB/s); write: IOPS=140k, BW=547MiB/s (573MB/s)
> [OK]  (100%RndR_8K, 100%RndW_8K):     read: IOPS=147k, BW=1152MiB/s (1208MB/s); write: IOPS=128k, BW=1003MiB/s (1051MB/s)
> [NOK] (100%SeqR_128K, 100%SeqW_128K): read: IOPS=210, BW=26.3MiB/s (27.5MB/s); write: IOPS=14.6k, BW=1831MiB/s (1920MB/s) => read bad, write OK
> [NOK] (100%SeqR_32K, 100%SeqR_16K):   read: IOPS=9418, BW=294MiB/s (309MB/s); read: IOPS=105k, BW=1641MiB/s (1721MB/s)
>       => [NOK] BS_32K + QD_32
>       => [OK]  BS_16K + QD_32
> [OK]  (100%SeqR_8K, 100%SeqR_4K):     read: IOPS=149k, BW=1160MiB/s (1217MB/s); read: IOPS=157k, BW=612MiB/s (642MB/s)
>
> Focus on BS 128K:
> [OK]  QD1:  read: IOPS=5543, BW=693MiB/s (727MB/s)
> [OK]  QD8:  read: IOPS=21.1k, BW=2634MiB/s (2762MB/s)
> [NOK] QD16: read: IOPS=301, BW=37.7MiB/s (39.5MB/s)
>
> FIO configuration:
> ioengine=libaio
> direct=1
> numjobs=1
>
> I also checked this document:
>
> https://ci.spdk.io/download/performance-reports/SPDK_tcp_perf_report_2007.pdf
>
> Inside this document, the sequential read test is also suggested with BS=128K + QD8.
>
> I think the low performance with BS=128K + QD32 should not be related to iSCSI, but can anyone share experience with tuning iSCSI sequential read performance? It is weird that performance drops at high QD. Any suggestions are welcome.
>
> Thanks
>
> My test configuration follows:
> 1. Network bandwidth: 40GB
> 2.
TCP settings at both target and client:
>    tcp_timestamps: "1"
>    tcp_sack: "0"
>    tcp_rmem: "4096 87380 134217728"
>    tcp_wmem: "4096 87380 134217728"
>    tcp_mem: "4096 87380 134217728"
>    rmem_default: "524287"
>    wmem_default: "524287"
>    rmem_max: "268435456"
>    wmem_max: "268435456"
>    optmem_max: "268435456"
>    netdev_max_backlog: "300000"
> 3. Number of CPU cores at target and client: 48 vcores
>    Intel(R) Xeon(R) Silver 4215 CPU @ 2.50GHz * 2
> 4. Disable irqbalance / enable CPU power governor
> 5. Run SPDK with: ./iscsi_tgt -m 0x08007C08007C
>
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org
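[Editor's note: the settings listed above can be applied with `sysctl`. A sketch follows; the mapping of the short key names to the standard Linux `net.ipv4.*` / `net.core.*` sysctls is an assumption, and the values are copied verbatim from the list. Run as root on both the target and the initiator, and persist in /etc/sysctl.d/ if desired.]

```shell
# Apply the TCP/network tuning from the test configuration above.
# Key names assume the standard Linux sysctl namespaces (net.ipv4 / net.core).
sysctl -w net.ipv4.tcp_timestamps=1
sysctl -w net.ipv4.tcp_sack=0
sysctl -w net.ipv4.tcp_rmem="4096 87380 134217728"
sysctl -w net.ipv4.tcp_wmem="4096 87380 134217728"
sysctl -w net.ipv4.tcp_mem="4096 87380 134217728"
sysctl -w net.core.rmem_default=524287
sysctl -w net.core.wmem_default=524287
sysctl -w net.core.rmem_max=268435456
sysctl -w net.core.wmem_max=268435456
sysctl -w net.core.optmem_max=268435456
sysctl -w net.core.netdev_max_backlog=300000
```

Note that `tcp_mem` is interpreted by the kernel in pages rather than bytes, so its values are not directly comparable to `tcp_rmem`/`tcp_wmem`; the figures here are simply the ones reported in the thread.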