From: Yang, Ziye <ziye.yang at intel.com>
To: spdk@lists.01.org
Subject: [SPDK] Re: iSCSI Sequential Read with BS 128K Performance
Date: Mon, 28 Dec 2020 03:03:20 +0000	[thread overview]
Message-ID: <BN6PR11MB4081C4A5B0AA03BF41917FDCFFD90@BN6PR11MB4081.namprd11.prod.outlook.com> (raw)
In-Reply-To: CALR1AzLQuaLFoftiaejSz89C41HzBM+XfubUETg6mP3x_-6wGg@mail.gmail.com

Hi Lego Lin,

When you use RPC to configure the iSCSI subsystem via iscsi_set_options, could you use '-x' or '--max-large-datain-per-connection' to enlarge the max data-in per connection value? The default value is 64.

See ./test/iscsi_tgt/ext4test/ext4test.sh for an example of using iscsi_set_options. If you do not want to use RPC, you can set the option in a JSON configuration file instead.
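A minimal sketch of such a JSON configuration, assuming SPDK's standard subsystem/config layout; 256 is purely an example value, and the parameter name mirrors the long option above:

```json
{
  "subsystems": [
    {
      "subsystem": "iscsi",
      "config": [
        {
          "method": "iscsi_set_options",
          "params": {
            "max_large_datain_per_connection": 256
          }
        }
      ]
    }
  ]
}
```

The file would then be passed to the target at startup, e.g. ./iscsi_tgt -c iscsi.json.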

I believe this value can affect the performance of the iSCSI target under a large IO size with a large queue depth.

Thanks.

Best Regards,
Ziye Yang

-----Original Message-----
From: Lego Lin <lego.lin(a)gmail.com> 
Sent: Wednesday, December 23, 2020 10:19 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] Re: iSCSI Sequential Read with BS 128K Performance

Hi, Ziye:

I modified the buffer sizes as follows:
//#define BUF_SMALL_POOL_SIZE                   8191
#define BUF_SMALL_POOL_SIZE                   16381
//#define BUF_LARGE_POOL_SIZE                   1023
#define BUF_LARGE_POOL_SIZE                   4095
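For scale, assuming the 64 KB per-buffer size mentioned later in this thread for the large pool, those counts imply roughly the following pool footprints:

```shell
# Large buffer pool footprint = buffer count * 64 KiB, reported in whole MiB
echo "old: $((1023 * 64 / 1024)) MiB, new: $((4095 * 64 / 1024)) MiB"
# → old: 63 MiB, new: 255 MiB
```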

But I still got bad performance with sequential read BS=128K + QD=32 on an SPDK null block device.

Any other suggestions?

Thanks


On Wed, Dec 23, 2020 at 10:16 AM Yang, Ziye <ziye.yang(a)intel.com> wrote:

> Hi Lego,
>
> If you want to test without changing the code, you may try the
> following patch:
>
> https://review.spdk.io/gerrit/c/spdk/spdk/+/5664
>
> It supports leveraging RPC to configure the small and large buffer
> pool sizes.
>
>
>
>
> Best Regards,
> Ziye Yang
>
> -----Original Message-----
> From: Yang, Ziye
> Sent: Tuesday, December 22, 2020 1:46 PM
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Subject: RE: [SPDK] iSCSI Sequential Read with BS 128K Performance
>
> Hi Lego,
>
> " Focus on BS 128K
> [OK] QD1: read: IOPS=5543, BW=693MiB/s (727MB/s) [OK] QD8: read:
> IOPS=21.1k, BW=2634MiB/s (2762MB/s) [NOK] QD16: read: IOPS=301, 
> BW=37.7MiB/s (39.5MB/s) "
>
> How about changing
>
> #define BUF_LARGE_POOL_SIZE                     1023   in lib/bdev/bdev.c
>
> This macro may impact performance when you are testing with a big I/O
> size. We currently define the total pool size as 1023 * 64 KB. Would
> you change the value to a bigger one and see whether it helps the
> test? (Currently we have not exported it as a configurable option yet.)
>
>
> Best Regards,
> Ziye Yang
>
> -----Original Message-----
> From: Lego Lin <lego.lin(a)gmail.com>
> Sent: Friday, December 18, 2020 4:09 PM
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Subject: [SPDK] iSCSI Sequential Read with BS 128K Performance
>
> Hi, all:
> I just tested SPDK 20.10.x iSCSI performance and I got the following
> performance data (all testing with FIO QD 32):
> [OK]  (100%RndR_4K,   100%RndW_4K)   : (read: IOPS=155k, BW=605MiB/s (634MB/s),   write: IOPS=140k, BW=547MiB/s (573MB/s))
> [OK]  (100%RndR_8K,   100%RndW_8K)   : (read: IOPS=147k, BW=1152MiB/s (1208MB/s), write: IOPS=128k, BW=1003MiB/s (1051MB/s))
> [NOK] (100%SeqR_128K, 100%SeqW_128K) : (read: IOPS=210,  BW=26.3MiB/s (27.5MB/s), write: IOPS=14.6k, BW=1831MiB/s (1920MB/s)) => Read Bad, Write OK
> [NOK] (100%SeqR_32K,  100%SeqR_16K)  : (read: IOPS=9418, BW=294MiB/s (309MB/s),   read: IOPS=105k, BW=1641MiB/s (1721MB/s))
> => [NOK] BS_32K + QD_32
> => [OK]  BS_16K + QD_32
> [OK]  (100%SeqR_8K,   100%SeqR_4K)   : (read: IOPS=149k, BW=1160MiB/s (1217MB/s), read: IOPS=157k, BW=612MiB/s (642MB/s))
>
> Focus on BS 128K
> [OK]  QD1:  read: IOPS=5543,  BW=693MiB/s (727MB/s)
> [OK]  QD8:  read: IOPS=21.1k, BW=2634MiB/s (2762MB/s)
> [NOK] QD16: read: IOPS=301,   BW=37.7MiB/s (39.5MB/s)
>
> FIO Configuration:
> ioengine=libaio
> direct=1
> numjobs=1
>
> I also checked this document:
>
> https://ci.spdk.io/download/performance-reports/SPDK_tcp_perf_report_2007.pdf
>
> Inside this document, it also suggests a sequential read test with
> BS=128K + QD8.
>
> I think the low performance with BS=128K + QD32 should not be related
> to iSCSI itself, but can anyone share experience with tuning iSCSI
> sequential read performance? It's weird that performance drops at high
> QD. Any suggestions are welcome.
>
> Thanks
>
> Following are my test configuration
> 1. Network bandwidth: 40GB
> 2. TCP settings at both target and client:
>     tcp_timestamps: "1"
>     tcp_sack: "0"
>     tcp_rmem: "4096 87380 134217728"
>     tcp_wmem: "4096 87380 134217728"
>     tcp_mem: "4096 87380 134217728"
>     rmem_default: "524287"
>     wmem_default: "524287"
>     rmem_max: "268435456"
>     wmem_max: "268435456"
>     optmem_max: "268435456"
>     netdev_max_backlog: "300000"
> 3. Number of CPU cores at target and client: 48 vcores
>      Intel(R) Xeon(R) Silver 4215 CPU @ 2.50GHz * 2
> 4. Disable irqbalance / enable the CPU power governor
> 5. Run SPDK with: ./iscsi_tgt -m 0x08007C08007C
>
_______________________________________________
SPDK mailing list -- spdk(a)lists.01.org
To unsubscribe send an email to spdk-leave(a)lists.01.org

