From: Yang, Ziye
Subject: [SPDK] Re: iSCSI Sequential Read with BS 128K Performance
Date: Mon, 28 Dec 2020 03:03:20 +0000
To: spdk@lists.01.org

Hi Lego Lin,

When you use the iscsi_set_options RPC to configure the iSCSI subsystem, could you use '-x' or '--max-large-datain-per-connection' to enlarge the max data-in per connection value? The default value is 64.

See ./test/iscsi_tgt/ext4test/ext4test.sh for an example of using iscsi_set_options. If you do not want to use the RPC, you can set it in a JSON config file instead.

I believe this value can affect the performance of the iSCSI target with large IO_SIZE + large QD. Thanks.

Best Regards,
Ziye Yang

-----Original Message-----
From: Lego Lin
Sent: Wednesday, December 23, 2020 10:19 PM
To: Storage Performance Development Kit
Subject: [SPDK] Re: iSCSI Sequential Read with BS 128K Performance

Hi, Ziye:
I modified the buffer sizes as follows:

//#define BUF_SMALL_POOL_SIZE 8191
#define BUF_SMALL_POOL_SIZE 16381
//#define BUF_LARGE_POOL_SIZE 1023
#define BUF_LARGE_POOL_SIZE 4095

But I still got bad performance with SeqRead BS=128K + QD=32 on an SPDK null block device.
Any other suggestions? Thanks

On Wed, Dec 23, 2020 at 10:16 AM Yang, Ziye wrote:

> Hi Lego,
>
> If you want to test without changing the code, you may try the following patch:
>
> https://review.spdk.io/gerrit/c/spdk/spdk/+/5664
> It adds support for configuring the small and large buffer pool sizes via RPC.
>
> Best Regards,
> Ziye Yang
>
> -----Original Message-----
> From: Yang, Ziye
> Sent: Tuesday, December 22, 2020 1:46 PM
> To: Storage Performance Development Kit
> Subject: RE: [SPDK] iSCSI Sequential Read with BS 128K Performance
>
> Hi Lego,
>
> "Focus on BS 128K
> [OK]  QD1:  read: IOPS=5543, BW=693MiB/s (727MB/s)
> [OK]  QD8:  read: IOPS=21.1k, BW=2634MiB/s (2762MB/s)
> [NOK] QD16: read: IOPS=301, BW=37.7MiB/s (39.5MB/s)"
>
> How about changing
>
> #define BUF_LARGE_POOL_SIZE 1023 in lib/bdev/bdev.c
>
> This macro can affect performance when you test with a big I/O size. The pool currently totals 1023 * 64 KB. Would you change it to a bigger value and see whether it helps the test? (Currently we have not exported it as a configurable option.)
>
> Best Regards,
> Ziye Yang
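
To illustrate the iscsi_set_options suggestion at the top of this thread, a minimal sketch follows. The '-x'/'--max-large-datain-per-connection' flag, the default of 64, and the core mask come from the thread itself; the value 128 is only an example, and the --wait-for-rpc / framework_start_init sequence assumes the option must be set before subsystem initialization, so check it against the SPDK release in use.

  # Start the target in the pre-init state so iSCSI options can still be changed
  # (core mask taken from the test configuration later in this thread).
  ./iscsi_tgt -m 0x08007C08007C --wait-for-rpc &

  # Raise the max data-in per connection from the default of 64 (128 is only an example value).
  ./scripts/rpc.py iscsi_set_options -x 128   # equivalently: --max-large-datain-per-connection 128

  # Continue normal initialization.
  ./scripts/rpc.py framework_start_init

If the RPC path is not wanted, the same option can be supplied through a JSON config file instead, as the reply above notes.
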
>
> -----Original Message-----
> From: Lego Lin
> Sent: Friday, December 18, 2020 4:09 PM
> To: Storage Performance Development Kit
> Subject: [SPDK] iSCSI Sequential Read with BS 128K Performance
>
> Hi, all:
> I just tested SPDK 20.10.x iSCSI performance and got the following data (all testing with FIO QD 32):
> [OK]  (100%RndR_4K, 100%RndW_4K):     (read: IOPS=155k, BW=605MiB/s (634MB/s), write: IOPS=140k, BW=547MiB/s (573MB/s))
> [OK]  (100%RndR_8K, 100%RndW_8K):     (read: IOPS=147k, BW=1152MiB/s (1208MB/s), write: IOPS=128k, BW=1003MiB/s (1051MB/s))
> [NOK] (100%SeqR_128K, 100%SeqW_128K): (read: IOPS=210, BW=26.3MiB/s (27.5MB/s), write: IOPS=14.6k, BW=1831MiB/s (1920MB/s)) => Read Bad, Write OK
> [NOK] (100%SeqR_32K, 100%SeqR_16K):   (read: IOPS=9418, BW=294MiB/s (309MB/s), read: IOPS=105k, BW=1641MiB/s (1721MB/s))
>       => [NOK] BS_32K + QD_32
>       => [OK]  BS_16K + QD_32
> [OK]  (100%SeqR_8K, 100%SeqR_4K):     (read: IOPS=149k, BW=1160MiB/s (1217MB/s), read: IOPS=157k, BW=612MiB/s (642MB/s))
>
> Focus on BS 128K
> [OK]  QD1:  read: IOPS=5543, BW=693MiB/s (727MB/s)
> [OK]  QD8:  read: IOPS=21.1k, BW=2634MiB/s (2762MB/s)
> [NOK] QD16: read: IOPS=301, BW=37.7MiB/s (39.5MB/s)
>
> FIO configuration:
> ioengine=libaio
> direct=1
> numjobs=1
>
> I also checked the document
> https://ci.spdk.io/download/performance-reports/SPDK_tcp_perf_report_2007.pdf
> Inside this document, it also suggests a sequential read test with BS=128K + QD8.
>
> I think the low performance with BS=128K + QD32 should not be related to iSCSI, but can anyone share experience with tuning iSCSI sequential read performance? It is weird that performance drops at high QD. Any suggestions are welcome.
>
> Thanks
>
> Following is my test configuration:
> 1. Network bandwidth: 40GB
> 2. TCP settings at both target and client:
>     tcp_timestamps: "1"
>     tcp_sack: "0"
>     tcp_rmem: "4096 87380 134217728"
>     tcp_wmem: "4096 87380 134217728"
>     tcp_mem: "4096 87380 134217728"
>     rmem_default: "524287"
>     wmem_default: "524287"
>     rmem_max: "268435456"
>     wmem_max: "268435456"
>     optmem_max: "268435456"
>     netdev_max_backlog: "300000"
> 3. Number of CPU cores at target and client: 48 vcores
>     Intel(R) Xeon(R) Silver 4215 CPU @ 2.50GHz * 2
> 4. Disable irqbalance / enable CPU power governor
> 5. Run SPDK with: ./iscsi_tgt -m 0x08007C08007C
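
For anyone trying to reproduce the failing case above, a minimal initiator-side sketch assembled from the parameters in this thread. The sysctl key names, the fio command-line form, and the /dev/sdX placeholder for the iSCSI-attached disk are assumptions; only the values and the ioengine/direct/numjobs/bs/iodepth settings come from the thread.

  # Apply the TCP settings listed above (sysctl names assumed for those values).
  sysctl -w net.ipv4.tcp_timestamps=1
  sysctl -w net.ipv4.tcp_sack=0
  sysctl -w net.ipv4.tcp_rmem="4096 87380 134217728"
  sysctl -w net.ipv4.tcp_wmem="4096 87380 134217728"
  sysctl -w net.ipv4.tcp_mem="4096 87380 134217728"
  sysctl -w net.core.rmem_default=524287
  sysctl -w net.core.wmem_default=524287
  sysctl -w net.core.rmem_max=268435456
  sysctl -w net.core.wmem_max=268435456
  sysctl -w net.core.optmem_max=268435456
  sysctl -w net.core.netdev_max_backlog=300000

  # The problematic workload: 128K sequential read at QD32 against the
  # iSCSI-attached device (/dev/sdX is a placeholder).
  fio --name=seqread_128k_qd32 --filename=/dev/sdX \
      --ioengine=libaio --direct=1 --numjobs=1 \
      --rw=read --bs=128k --iodepth=32
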