* [SPDK] Re: iSCSI Sequential Read with BS 128K Performance
@ 2020-12-28  3:03 Yang, Ziye
  0 siblings, 0 replies; 5+ messages in thread
From: Yang, Ziye @ 2020-12-28  3:03 UTC (permalink / raw)
  To: spdk


Hi Lego Lin,

When you use RPC to configure the iSCSI subsystem, could you use iscsi_set_options with '-x' or '--max-large-datain-per-connection' to enlarge the max data-in per connection value? The default value is 64.

The use of iscsi_set_options can be seen in ./test/iscsi_tgt/ext4test/ext4test.sh as an example. If you do not want to use RPC, you can use a JSON-format configuration file.
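If it helps, here is a minimal sketch of such a JSON config fragment. The parameter key below mirrors the long CLI flag name; please verify the exact spelling and structure against your SPDK version:

```json
{
  "subsystems": [
    {
      "subsystem": "iscsi",
      "config": [
        {
          "method": "iscsi_set_options",
          "params": {
            "max_large_datain_per_connection": 256
          }
        }
      ]
    }
  ]
}
```

This would be loaded at target startup via the app's config-file option.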

I believe this value can affect the performance of the iSCSI target with a large IO_SIZE + large QD.

Thanks.




Best Regards,
Ziye Yang

-----Original Message-----
From: Lego Lin <lego.lin(a)gmail.com> 
Sent: Wednesday, December 23, 2020 10:19 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] Re: iSCSI Sequential Read with BS 128K Performance

Hi, Ziye:

I modify the buffer size as following:
//#define BUF_SMALL_POOL_SIZE                   8191
#define BUF_SMALL_POOL_SIZE                   16381
//#define BUF_LARGE_POOL_SIZE                   1023
#define BUF_LARGE_POOL_SIZE                   4095

But I still got bad performance with SeqRead BS=128K + QD=32 on an SPDK null block device.

Any other suggestions?

Thanks


On Wed, Dec 23, 2020 at 10:16 AM Yang, Ziye <ziye.yang(a)intel.com> wrote:

> Hi Lego,
>
> If you want to test without changing the code, you may try the following
> patch:
>
> https://review.spdk.io/gerrit/c/spdk/spdk/+/5664
> It supports leveraging RPC to configure the small and large buffer pool
> sizes.
>
>
>
>
> Best Regards,
> Ziye Yang
>
> -----Original Message-----
> From: Yang, Ziye
> Sent: Tuesday, December 22, 2020 1:46 PM
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Subject: RE: [SPDK] iSCSI Sequential Read with BS 128K Performance
>
> Hi Lego,
>
> " Focus on BS 128K
> [OK] QD1: read: IOPS=5543, BW=693MiB/s (727MB/s) [OK] QD8: read:
> IOPS=21.1k, BW=2634MiB/s (2762MB/s) [NOK] QD16: read: IOPS=301, 
> BW=37.7MiB/s (39.5MB/s) "
>
> How about changing
>
> #define BUF_LARGE_POOL_SIZE                     1023   in lib/bdev/bdev.c
>
> This macro may have an impact on performance when you test large I/O
> sizes. We currently define the total size as 1023 * 64 KB. Would you
> change it to a bigger value and see whether that helps the test?
> // We have not made it configurable yet.
>
>
> Best Regards,
> Ziye Yang
>
> -----Original Message-----
> From: Lego Lin <lego.lin(a)gmail.com>
> Sent: Friday, December 18, 2020 4:09 PM
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Subject: [SPDK] iSCSI Sequential Read with BS 128K Performance
>
> Hi, all:
> I just tested SPDK 20.10.x iSCSI performance and I got the following
> performance data (all testing with FIO QD 32):
> [OK]  (100%RndR_4K,   100%RndW_4K)   : (read: IOPS=155k, BW=605MiB/s (634MB/s),   write: IOPS=140k, BW=547MiB/s (573MB/s))
> [OK]  (100%RndR_8K,   100%RndW_8K)   : (read: IOPS=147k, BW=1152MiB/s (1208MB/s), write: IOPS=128k, BW=1003MiB/s (1051MB/s))
> [NOK] (100%SeqR_128K, 100%SeqW_128K) : (read: IOPS=210,  BW=26.3MiB/s (27.5MB/s),  write: IOPS=14.6k, BW=1831MiB/s (1920MB/s)) => Read Bad, Write OK
> [NOK] (100%SeqR_32K,  100%SeqR_16K)  : (read: IOPS=9418, BW=294MiB/s (309MB/s),   read: IOPS=105k, BW=1641MiB/s (1721MB/s))
> => [NOK] BS_32K + QD_32
> => [OK]  BS_16K + QD_32
> [OK]  (100%SeqR_8K,   100%SeqR_4K)   : (read: IOPS=149k, BW=1160MiB/s (1217MB/s), read: IOPS=157k, BW=612MiB/s (642MB/s))
>
> Focus on BS 128K
> [OK] QD1: read: IOPS=5543, BW=693MiB/s (727MB/s)
> [OK] QD8: read: IOPS=21.1k, BW=2634MiB/s (2762MB/s)
> [NOK] QD16: read: IOPS=301, BW=37.7MiB/s (39.5MB/s)
>
> FIO Configuration:
> ioengine=libaio
> direct=1
> numjobs=1
>
> I also checked this document:
>
> https://ci.spdk.io/download/performance-reports/SPDK_tcp_perf_report_2007.pdf
> It also suggests testing Sequential Read with BS=128K + QD8.
>
> I think the low performance with BS=128K + QD32 should not be related to
> iSCSI, but can anyone share experience with tuning iSCSI sequential read
> performance? It's weird that performance drops at high QD. Any
> suggestions are welcome.
>
> Thanks
>
> Following are my test configuration
> 1. Network bandwidth: 40GB
> 2. TCP Setting at both target and client
>     tcp_timestamps: "1"
>     tcp_sack: "0",
>     tcp_rmem: "4096 87380 134217728"
>     tcp_wmem: "4096 87380 134217728",
>     tcp_mem: "4096 87380 134217728",
>     rmem_default: "524287",
>     wmem_default: "524287",
>     rmem_max: "268435456",
>     wmem_max: "268435456",
>     optmem_max: "268435456",
>     netdev_max_backlog: "300000"}
> 3. number of CPU cores at target and client: 48 vcores
>      Intel(R) Xeon(R) Silver 4215 CPU @ 2.50GHz * 2
> 4. disable irqbalance / enable CPU power governor
> 5. run SPDK with: ./iscsi_tgt -m 0x08007C08007C
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org
>
_______________________________________________
SPDK mailing list -- spdk(a)lists.01.org
To unsubscribe send an email to spdk-leave(a)lists.01.org


* [SPDK] Re: iSCSI Sequential Read with BS 128K Performance
@ 2020-12-23 14:19 Lego Lin
  0 siblings, 0 replies; 5+ messages in thread
From: Lego Lin @ 2020-12-23 14:19 UTC (permalink / raw)
  To: spdk


Hi, Ziye:

I modify the buffer size as following:
//#define BUF_SMALL_POOL_SIZE                   8191
#define BUF_SMALL_POOL_SIZE                   16381
//#define BUF_LARGE_POOL_SIZE                   1023
#define BUF_LARGE_POOL_SIZE                   4095

But I still got bad performance with SeqRead BS=128K + QD=32 on an SPDK
null block device.

Any other suggestions?

Thanks


On Wed, Dec 23, 2020 at 10:16 AM Yang, Ziye <ziye.yang(a)intel.com> wrote:

> Hi Lego,
>
> If you want to test without changing the code, you may try the following
> patch:
>
> https://review.spdk.io/gerrit/c/spdk/spdk/+/5664
> It supports leveraging RPC to configure the small and large buffer pool
> sizes.
>
>
>
>
> Best Regards,
> Ziye Yang
>
> -----Original Message-----
> From: Yang, Ziye
> Sent: Tuesday, December 22, 2020 1:46 PM
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Subject: RE: [SPDK] iSCSI Sequential Read with BS 128K Performance
>
> Hi Lego,
>
> " Focus on BS 128K
> [OK] QD1: read: IOPS=5543, BW=693MiB/s (727MB/s) [OK] QD8: read:
> IOPS=21.1k, BW=2634MiB/s (2762MB/s) [NOK] QD16: read: IOPS=301,
> BW=37.7MiB/s (39.5MB/s) "
>
> How about changing
>
> #define BUF_LARGE_POOL_SIZE                     1023   in lib/bdev/bdev.c
>
> This macro may have an impact on performance when you test large I/O
> sizes. We currently define the total size as 1023 * 64 KB. Would you
> change it to a bigger value and see whether that helps the test?
> // We have not made it configurable yet.
>
>
> Best Regards,
> Ziye Yang
>
> -----Original Message-----
> From: Lego Lin <lego.lin(a)gmail.com>
> Sent: Friday, December 18, 2020 4:09 PM
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Subject: [SPDK] iSCSI Sequential Read with BS 128K Performance
>
> Hi, all:
> I just tested SPDK 20.10.x iSCSI performance and I got the following
> performance data (all testing with FIO QD 32):
> [OK]  (100%RndR_4K,   100%RndW_4K)   : (read: IOPS=155k, BW=605MiB/s (634MB/s),   write: IOPS=140k, BW=547MiB/s (573MB/s))
> [OK]  (100%RndR_8K,   100%RndW_8K)   : (read: IOPS=147k, BW=1152MiB/s (1208MB/s), write: IOPS=128k, BW=1003MiB/s (1051MB/s))
> [NOK] (100%SeqR_128K, 100%SeqW_128K) : (read: IOPS=210,  BW=26.3MiB/s (27.5MB/s),  write: IOPS=14.6k, BW=1831MiB/s (1920MB/s)) => Read Bad, Write OK
> [NOK] (100%SeqR_32K,  100%SeqR_16K)  : (read: IOPS=9418, BW=294MiB/s (309MB/s),   read: IOPS=105k, BW=1641MiB/s (1721MB/s))
> => [NOK] BS_32K + QD_32
> => [OK]  BS_16K + QD_32
> [OK]  (100%SeqR_8K,   100%SeqR_4K)   : (read: IOPS=149k, BW=1160MiB/s (1217MB/s), read: IOPS=157k, BW=612MiB/s (642MB/s))
>
> Focus on BS 128K
> [OK] QD1: read: IOPS=5543, BW=693MiB/s (727MB/s)
> [OK] QD8: read: IOPS=21.1k, BW=2634MiB/s (2762MB/s)
> [NOK] QD16: read: IOPS=301, BW=37.7MiB/s (39.5MB/s)
>
> FIO Configuration:
> ioengine=libaio
> direct=1
> numjobs=1
>
> I also checked this document:
>
> https://ci.spdk.io/download/performance-reports/SPDK_tcp_perf_report_2007.pdf
> It also suggests testing Sequential Read with BS=128K + QD8.
>
> I think the low performance with BS=128K + QD32 should not be related to
> iSCSI, but can anyone share experience with tuning iSCSI sequential read
> performance? It's weird that performance drops at high QD. Any
> suggestions are welcome.
>
> Thanks
>
> Following are my test configuration
> 1. Network bandwidth: 40GB
> 2. TCP Setting at both target and client
>     tcp_timestamps: "1"
>     tcp_sack: "0",
>     tcp_rmem: "4096 87380 134217728"
>     tcp_wmem: "4096 87380 134217728",
>     tcp_mem: "4096 87380 134217728",
>     rmem_default: "524287",
>     wmem_default: "524287",
>     rmem_max: "268435456",
>     wmem_max: "268435456",
>     optmem_max: "268435456",
>     netdev_max_backlog: "300000"}
> 3. number of CPU cores at target and client: 48 vcores
>      Intel(R) Xeon(R) Silver 4215 CPU @ 2.50GHz * 2
> 4. disable irqbalance / enable CPU power governor
> 5. run SPDK with: ./iscsi_tgt -m 0x08007C08007C
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org
>


* [SPDK] Re: iSCSI Sequential Read with BS 128K Performance
@ 2020-12-23  2:16 Yang, Ziye
  0 siblings, 0 replies; 5+ messages in thread
From: Yang, Ziye @ 2020-12-23  2:16 UTC (permalink / raw)
  To: spdk


Hi Lego,

If you want to test without changing the code, you may try the following patch:

https://review.spdk.io/gerrit/c/spdk/spdk/+/5664
It supports leveraging RPC to configure the small and large buffer pool sizes.
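Once that patch is in, I would expect the same pool sizes to be settable from the startup JSON config as well. A sketch of what that might look like; the method and parameter names here are my guesses, so please check the patch itself for the exact spelling:

```json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_set_options",
          "params": {
            "small_buf_pool_size": 16383,
            "large_buf_pool_size": 4095
          }
        }
      ]
    }
  ]
}
```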




Best Regards,
Ziye Yang

-----Original Message-----
From: Yang, Ziye 
Sent: Tuesday, December 22, 2020 1:46 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: RE: [SPDK] iSCSI Sequential Read with BS 128K Performance

Hi Lego,

" Focus on BS 128K
[OK] QD1: read: IOPS=5543, BW=693MiB/s (727MB/s) [OK] QD8: read: IOPS=21.1k, BW=2634MiB/s (2762MB/s) [NOK] QD16: read: IOPS=301, BW=37.7MiB/s (39.5MB/s) "

How about changing

#define BUF_LARGE_POOL_SIZE			1023   in lib/bdev/bdev.c

This macro may have an impact on performance when you test large I/O sizes. We currently define the total size as 1023 * 64 KB. Would you change it to a bigger value and see whether that helps the test? // We have not made it configurable yet.
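As a rough sanity check on the sizes involved (a sketch that ignores per-buffer metadata overhead), the pool values discussed in this thread translate to:

```python
# Rough sketch: data memory held by the bdev large buffer pool.
# Each large buffer is 64 KB, per the discussion above.
BUF_SIZE = 64 * 1024

def pool_bytes(count, buf_size=BUF_SIZE):
    """Total buffer memory for a pool of `count` buffers."""
    return count * buf_size

default_pool = pool_bytes(1023)   # default BUF_LARGE_POOL_SIZE
enlarged_pool = pool_bytes(4095)  # the enlarged value tried in this thread

print(default_pool // (1024 * 1024), "MiB")   # 63 MiB
print(enlarged_pool // (1024 * 1024), "MiB")  # 255 MiB
```

So the default large pool holds under 64 MiB of data, which presumably can be drained quickly by 128K I/O at high QD.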


Best Regards,
Ziye Yang

-----Original Message-----
From: Lego Lin <lego.lin(a)gmail.com>
Sent: Friday, December 18, 2020 4:09 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] iSCSI Sequential Read with BS 128K Performance

Hi, all:
I just tested SPDK 20.10.x iSCSI performance and I got the following performance data (all testing with FIO QD 32):
[OK]  (100%RndR_4K,   100%RndW_4K)   : (read: IOPS=155k, BW=605MiB/s (634MB/s),   write: IOPS=140k, BW=547MiB/s (573MB/s))
[OK]  (100%RndR_8K,   100%RndW_8K)   : (read: IOPS=147k, BW=1152MiB/s (1208MB/s), write: IOPS=128k, BW=1003MiB/s (1051MB/s))
[NOK] (100%SeqR_128K, 100%SeqW_128K) : (read: IOPS=210,  BW=26.3MiB/s (27.5MB/s),  write: IOPS=14.6k, BW=1831MiB/s (1920MB/s)) => Read Bad, Write OK
[NOK] (100%SeqR_32K,  100%SeqR_16K)  : (read: IOPS=9418, BW=294MiB/s (309MB/s),   read: IOPS=105k, BW=1641MiB/s (1721MB/s))
=> [NOK] BS_32K + QD_32
=> [OK]  BS_16K + QD_32
[OK]  (100%SeqR_8K,   100%SeqR_4K)   : (read: IOPS=149k, BW=1160MiB/s (1217MB/s), read: IOPS=157k, BW=612MiB/s (642MB/s))

Focus on BS 128K
[OK] QD1: read: IOPS=5543, BW=693MiB/s (727MB/s)
[OK] QD8: read: IOPS=21.1k, BW=2634MiB/s (2762MB/s)
[NOK] QD16: read: IOPS=301, BW=37.7MiB/s (39.5MB/s)

FIO Configuration:
ioengine=libaio
direct=1
numjobs=1
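For completeness, the failing case can be written out as a full fio job file. This is a sketch; the filename is a placeholder for wherever the iSCSI LUN shows up on the initiator:

```ini
; Sketch of the failing case: sequential read, BS=128K, QD=32.
; /dev/sdX is a placeholder for the iSCSI-attached block device.
[seqread-128k-qd32]
ioengine=libaio
direct=1
numjobs=1
rw=read
bs=128k
iodepth=32
filename=/dev/sdX
```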

I also checked this document:
https://ci.spdk.io/download/performance-reports/SPDK_tcp_perf_report_2007.pdf
It also suggests testing Sequential Read with BS=128K + QD8.

I think the low performance with BS=128K + QD32 should not be related to iSCSI, but can anyone share experience with tuning iSCSI sequential read performance? It's weird that performance drops at high QD. Any suggestions are welcome.

Thanks

Following are my test configuration
1. Network bandwidth: 40GB
2. TCP Setting at both target and client
    tcp_timestamps: "1"
    tcp_sack: "0",
    tcp_rmem: "4096 87380 134217728"
    tcp_wmem: "4096 87380 134217728",
    tcp_mem: "4096 87380 134217728",
    rmem_default: "524287",
    wmem_default: "524287",
    rmem_max: "268435456",
    wmem_max: "268435456",
    optmem_max: "268435456",
    netdev_max_backlog: "300000"}
3. number of CPU cores at target and client: 48 vcores
     Intel(R) Xeon(R) Silver 4215 CPU @ 2.50GHz * 2
4. disable irqbalance / enable CPU power governor
5. run SPDK with: ./iscsi_tgt -m 0x08007C08007C
_______________________________________________
SPDK mailing list -- spdk(a)lists.01.org
To unsubscribe send an email to spdk-leave(a)lists.01.org


* [SPDK] Re: iSCSI Sequential Read with BS 128K Performance
@ 2020-12-22  5:46 Yang, Ziye
  0 siblings, 0 replies; 5+ messages in thread
From: Yang, Ziye @ 2020-12-22  5:46 UTC (permalink / raw)
  To: spdk


Hi Lego,

" Focus on BS 128K
[OK] QD1: read: IOPS=5543, BW=693MiB/s (727MB/s)
[OK] QD8: read: IOPS=21.1k, BW=2634MiB/s (2762MB/s)
[NOK] QD16: read: IOPS=301, BW=37.7MiB/s (39.5MB/s)
"

How about changing

#define BUF_LARGE_POOL_SIZE			1023   in lib/bdev/bdev.c

This macro may have an impact on performance when you test large I/O sizes. We currently define the total size as 1023 * 64 KB. Would you change it to a bigger value and see whether that helps the test? // We have not made it configurable yet.


Best Regards,
Ziye Yang

-----Original Message-----
From: Lego Lin <lego.lin(a)gmail.com> 
Sent: Friday, December 18, 2020 4:09 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] iSCSI Sequential Read with BS 128K Performance

Hi, all:
I just tested SPDK 20.10.x iSCSI performance and I got the following performance data (all testing with FIO QD 32):
[OK]  (100%RndR_4K,   100%RndW_4K)   : (read: IOPS=155k, BW=605MiB/s (634MB/s),   write: IOPS=140k, BW=547MiB/s (573MB/s))
[OK]  (100%RndR_8K,   100%RndW_8K)   : (read: IOPS=147k, BW=1152MiB/s (1208MB/s), write: IOPS=128k, BW=1003MiB/s (1051MB/s))
[NOK] (100%SeqR_128K, 100%SeqW_128K) : (read: IOPS=210,  BW=26.3MiB/s (27.5MB/s),  write: IOPS=14.6k, BW=1831MiB/s (1920MB/s)) => Read Bad, Write OK
[NOK] (100%SeqR_32K,  100%SeqR_16K)  : (read: IOPS=9418, BW=294MiB/s (309MB/s),   read: IOPS=105k, BW=1641MiB/s (1721MB/s))
=> [NOK] BS_32K + QD_32
=> [OK]  BS_16K + QD_32
[OK]  (100%SeqR_8K,   100%SeqR_4K)   : (read: IOPS=149k, BW=1160MiB/s (1217MB/s), read: IOPS=157k, BW=612MiB/s (642MB/s))

Focus on BS 128K
[OK] QD1: read: IOPS=5543, BW=693MiB/s (727MB/s)
[OK] QD8: read: IOPS=21.1k, BW=2634MiB/s (2762MB/s)
[NOK] QD16: read: IOPS=301, BW=37.7MiB/s (39.5MB/s)

FIO Configuration:
ioengine=libaio
direct=1
numjobs=1

I also checked this document:
https://ci.spdk.io/download/performance-reports/SPDK_tcp_perf_report_2007.pdf
It also suggests testing Sequential Read with BS=128K + QD8.

I think the low performance with BS=128K + QD32 should not be related to iSCSI, but can anyone share experience with tuning iSCSI sequential read performance? It's weird that performance drops at high QD. Any suggestions are welcome.

Thanks

Following are my test configuration
1. Network bandwidth: 40GB
2. TCP Setting at both target and client
    tcp_timestamps: "1"
    tcp_sack: "0",
    tcp_rmem: "4096 87380 134217728"
    tcp_wmem: "4096 87380 134217728",
    tcp_mem: "4096 87380 134217728",
    rmem_default: "524287",
    wmem_default: "524287",
    rmem_max: "268435456",
    wmem_max: "268435456",
    optmem_max: "268435456",
    netdev_max_backlog: "300000"}
3. number of CPU cores at target and client: 48 vcores
     Intel(R) Xeon(R) Silver 4215 CPU @ 2.50GHz * 2
4. disable irqbalance / enable CPU power governor
5. run SPDK with: ./iscsi_tgt -m 0x08007C08007C
_______________________________________________
SPDK mailing list -- spdk(a)lists.01.org
To unsubscribe send an email to spdk-leave(a)lists.01.org


* [SPDK] Re: iSCSI Sequential Read with BS 128K Performance
@ 2020-12-18  8:11 Lego Lin
  0 siblings, 0 replies; 5+ messages in thread
From: Lego Lin @ 2020-12-18  8:11 UTC (permalink / raw)
  To: spdk


Hi, all:

Sorry, my previous email missed one thing: I tested performance with an SPDK
null block device.

On Fri, Dec 18, 2020 at 4:09 PM Lego Lin <lego.lin(a)gmail.com> wrote:

> Hi, all:
> I just tested SPDK 20.10.x iSCSI performance and I got the following
> performance data (all testing with FIO QD 32):
> [OK] (100%RndR_4K,   100%RndW_4K)   : (read: IOPS=155k, BW=605MiB/s
> (634MB/s),   write: IOPS=140k, BW=547MiB/s (573MB/s))
> [OK] (100%RndR_8K,   100%RndW_8K)   : (read: IOPS=147k, BW=1152MiB/s
> (1208MB/s), write: IOPS=128k, BW=1003MiB/s (1051MB/s))
> [NOK] (100%SeqR_128K, 100%SeqW_128K) : (read: IOPS=210, BW=26.3MiB/s
> (27.5MB/s),  write: IOPS=14.6k, BW=1831MiB/s (1920MB/s))
> => Read Bad, Write OK
> [NOK] (100%SeqR_32K,  100%SeqR_16K)  : (read: IOPS=9418, BW=294MiB/s
> (309MB/s),   read: IOPS=105k, BW=1641MiB/s (1721MB/s))
> => [NOK] BS_32K + QD_32
> => [OK]  BS_16K + QD_32
> [OK] (100%SeqR_8K,   100%SeqR_4K)   : (read: IOPS=149k, BW=1160MiB/s
> (1217MB/s), read: IOPS=157k, BW=612MiB/s (642MB/s))
>
> Focus on BS 128K
> [OK] QD1: read: IOPS=5543, BW=693MiB/s (727MB/s)
> [OK] QD8: read: IOPS=21.1k, BW=2634MiB/s (2762MB/s)
> [NOK] QD16: read: IOPS=301, BW=37.7MiB/s (39.5MB/s)
>
> FIO Configuration:
> ioengine=libaio
> direct=1
> numjobs=1
>
> I also checked this document:
> https://ci.spdk.io/download/performance-reports/SPDK_tcp_perf_report_2007.pdf
> It also suggests testing Sequential Read with BS=128K + QD8.
>
> I think the low performance with BS=128K + QD32 should not be related to
> iSCSI, but can anyone share experience with tuning iSCSI sequential read
> performance? It's weird that performance drops at high QD. Any
> suggestions are welcome.
>
> Thanks
>
> Following are my test configuration
> 1. Network bandwidth: 40GB
> 2. TCP Setting at both target and client
>     tcp_timestamps: "1"
>     tcp_sack: "0",
>     tcp_rmem: "4096 87380 134217728"
>     tcp_wmem: "4096 87380 134217728",
>     tcp_mem: "4096 87380 134217728",
>     rmem_default: "524287",
>     wmem_default: "524287",
>     rmem_max: "268435456",
>     wmem_max: "268435456",
>     optmem_max: "268435456",
>     netdev_max_backlog: "300000"}
> 3. number of CPU cores at target and client: 48 vcores
>      Intel(R) Xeon(R) Silver 4215 CPU @ 2.50GHz * 2
> 4. disable irqbalance / enable CPU power governor
> 5. run SPDK with: ./iscsi_tgt -m 0x08007C08007C
>
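As an aside, the core mask quoted above can be decoded with a few lines (a quick sketch):

```python
# Decode the reactor core mask passed to iscsi_tgt above.
mask = 0x08007C08007C

# The cores used are the set bit positions in the mask.
cores = [i for i in range(mask.bit_length()) if (mask >> i) & 1]

print("cores:", cores)   # [2, 3, 4, 5, 6, 19, 26, 27, 28, 29, 30, 43]
print("count:", len(cores))
```

So 12 of the 48 vcores are dedicated to the target's reactors.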


end of thread, other threads:[~2020-12-28  3:03 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-12-28  3:03 [SPDK] Re: iSCSI Sequential Read with BS 128K Performance Yang, Ziye
  -- strict thread matches above, loose matches on Subject: below --
2020-12-23 14:19 Lego Lin
2020-12-23  2:16 Yang, Ziye
2020-12-22  5:46 Yang, Ziye
2020-12-18  8:11 Lego Lin
