* [SPDK] Re: Single iSCSI max performance with SPDK NULL Block Device
@ 2020-12-09 12:36 Andrey Kuzmin
  0 siblings, 0 replies; 6+ messages in thread
From: Andrey Kuzmin @ 2020-12-09 12:36 UTC (permalink / raw)
  To: spdk


On Wed, Dec 9, 2020, 15:14 Lego Lin <lego.lin(a)gmail.com> wrote:

> Hi,
>
> Currently, I am using SPDK 19.04 to evaluate SPDK iSCSI performance. As far
> as I know, the current SPDK iSCSI target uses only 1 reactor to handle a
> single target's iSCSI IO.
> When I test with the SPDK NULL block device, the maximum performance with
> 100% RandWrite / 40G NIC / BS = 4K is about 155K IOPS.
>

At 155K random write IOPS, the bottleneck can easily be the actual SSD being
used. Have you tested without the SPDK iSCSI target involved, with the same
SSD directly attached?

Also, 19.04 is 1.5 years old, so you might want to upgrade to 20.10.

Regards,
Andrey

> I tried completing IO immediately at different layers: the SPDK bdev layer
> and the SCSI layer.
> It looks like iSCSI protocol handling becomes the bottleneck for the SPDK
> NULL block device.
> Is there any way in SPDK to let multiple reactors handle one iSCSI target's
> IOs?
>
> Current test topology is as follows:


> Client: FIO <--> /dev/sdb (Test 1: 1 thread, with/without blk-mq;
> Test 2: 4 threads, with/without blk-mq)
> Target: 1 SPDK iSCSI target <--> 1 LUN <--> SPDK NULL Block Device
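
For reference, a target-side topology like the one above (NULL bdev -> LUN 0 ->
one iSCSI target node) can be scripted against SPDK's rpc.py roughly as in the
minimal sketch below. It assumes SPDK 20.10-era RPC names; the bdev size, block
size, portal address, group tags, and queue depth are illustrative placeholders
rather than values taken from this test:

    # Sketch: build the "1 iSCSI target <--> 1 LUN <--> NULL bdev" topology by
    # shelling out to SPDK's JSON-RPC helper. Run from an SPDK source tree with
    # the iscsi_tgt application already started.
    import subprocess

    def rpc(*args):
        # One JSON-RPC call via scripts/rpc.py against the default local socket.
        subprocess.run(["scripts/rpc.py", *args], check=True)

    # 8 GiB NULL bdev with a 4 KiB block size (capacity is nominal for a null device).
    rpc("bdev_null_create", "Null0", "8192", "4096")

    # Portal group 1 on the data NIC; initiator group 2 admitting the test client subnet.
    rpc("iscsi_create_portal_group", "1", "10.0.0.1:3260")
    rpc("iscsi_create_initiator_group", "2", "ANY", "10.0.0.0/24")

    # Target node exposing Null0 as LUN 0, mapped to pg 1 / ig 2, queue depth 64,
    # CHAP disabled (-d), as in the SPDK iSCSI getting-started example.
    rpc("iscsi_create_target_node", "Target0", "Target0_alias", "Null0:0", "1:2", "64", "-d")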


> Thanks
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org
>


* [SPDK] Re: Single iSCSI max performance with SPDK NULL Block Device
@ 2020-12-09 16:12 Andrey Kuzmin
  0 siblings, 0 replies; 6+ messages in thread
From: Andrey Kuzmin @ 2020-12-09 16:12 UTC (permalink / raw)
  To: spdk


On Wed, Dec 9, 2020, 18:06 Harris, James R <james.r.harris(a)intel.com> wrote:

> Hi,
>
> If you test with just 1 target node, but use BS=512, do you get higher
> IOPs?  If so, then it means that TCP is more the limiter, not the iSCSI
> layer.
>

I'd also suggest using multiple connections per session. This should show up
with a single target if TCP is the bottleneck.

Regards,
Andrey

>
> If you are trying to get maximum performance on just one connection, you
> may try configuring rmem_max and wmem_max to a higher value.  The SPDK
> performance report has a section on how we configure the TCP stack for
> testing (
> https://ci.spdk.io/download/performance-reports/SPDK_tcp_perf_report_2007.pdf).
> See section "TCP Configuration".  Note that you'll want to do this on both
> the initiator and the target.
>
> -Jim
>
>
> On 12/9/20, 8:03 AM, "Lego Lin" <lego.lin(a)gmail.com> wrote:
>
>     Hi, Andrey:
>
>     Current target server settings (Linux):
>     1. irqbalance disabled; CPU scaling governor set to performance mode
>     2. TCP configuration:
>             tcp_configs = [{"file": "/proc/sys/net/ipv4/tcp_timestamps", "value": "1"},
>                            {"file": "/proc/sys/net/ipv4/tcp_sack", "value": "0"},
>                            {"file": "/proc/sys/net/ipv4/tcp_rmem", "value": "10000000 10000000 10000000"},
>                            {"file": "/proc/sys/net/ipv4/tcp_wmem", "value": "10000000 10000000 10000000"},
>                            {"file": "/proc/sys/net/ipv4/tcp_mem", "value": "10000000 10000000 10000000"},
>                            {"file": "/proc/sys/net/core/rmem_default", "value": "524287"},
>                            {"file": "/proc/sys/net/core/wmem_default", "value": "524287"},
>                            {"file": "/proc/sys/net/core/rmem_max", "value": "524287"},
>                            {"file": "/proc/sys/net/core/wmem_max", "value": "524287"},
>                            {"file": "/proc/sys/net/core/optmem_max", "value": "524287"},
>                            {"file": "/proc/sys/net/core/netdev_max_backlog", "value": "300000"}]
>     3. NIC card IRQ affinity: vcores 1, 24, 25
>         Storage target server: Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz
>         Number of CPUs: 2 (48 vcores total)
>     4. NIC rps_sock_flow_entries enabled and rps_cpus set to 1, 24, 25 as well
>     5. SPDK iSCSI reactors: QPDefaultCPUMask 0x08007C08007C
>     6. I can get line rate (1180K IOPS, BS=4K, 100% RndW or 100% RndR) if I use
>     >= 8 SPDK NULL block devices with >= 8 SPDK iSCSI targets.
>
>     On Wed, Dec 9, 2020 at 9:58 PM Andrey Kuzmin <andrey.v.kuzmin(a)gmail.com> wrote:
>
>     > Ah, you're using a null bdev, so my reference to the SSD being an
>     > issue is irrelevant. At 40 Gig, you should be running much faster
>     > than 155 KIOPS, and the target threading may be an issue. OTOH, a
>     > single poller with a null bdev has been shown to reach 10 MIOPS, so
>     > I'd also look into your network setup.
>     >
>     > Regards,
>     > Andrey
>     >
>     > On Wed, Dec 9, 2020, 16:51 Lego Lin <lego.lin(a)gmail.com> wrote:
>     >
>     > > Hi, Andrey
>     > >
>     > > Thanks for the quick reply. I will try 20.10.
>     > > Sorry for providing wrong information. With a single target and an
>     > > SPDK NULL bdev over a 40G NIC, I got:
>     > > 100% RandWrite 4K: 135K
>     > > 100% RandRead 4K: 155K
>     > >
>     > > Although 135K is enough for a single SSD's random writes, it is not
>     > > enough for reads. And if I compose 4 SSDs into 1 RAID-0 bdev, 135K
>     > > is not enough either.
>     > >
>     > > Anyway, I will try 20.10 first. Thanks again.
>     > >
>     > > On Wed, Dec 9, 2020 at 8:36 PM Andrey Kuzmin <andrey.v.kuzmin(a)gmail.com> wrote:
>     > >
>     > > > On Wed, Dec 9, 2020, 15:14 Lego Lin <lego.lin(a)gmail.com> wrote:
>     > > >
>     > > > > Hi,
>     > > > >
>     > > > > Currently, I am using SPDK 19.04 to evaluate SPDK iSCSI
>     > > > > performance. As far as I know, the current SPDK iSCSI target
>     > > > > uses only 1 reactor to handle a single target's iSCSI IO.
>     > > > > When I test with the SPDK NULL block device, the maximum
>     > > > > performance with 100% RandWrite / 40G NIC / BS = 4K is about
>     > > > > 155K IOPS.
>     > > > >
>     > > >
>     > > > At 155K random write IOPS, the bottleneck can easily be the
>     > > > actual SSD being used. Have you tested without the SPDK iSCSI
>     > > > target involved, with the same SSD directly attached?
>     > > >
>     > > > Also, 19.04 is 1.5 years old, so you might want to upgrade to
>     > > > 20.10.
>     > > >
>     > > > Regards,
>     > > > Andrey
>     > > >
>     > > > > I tried completing IO immediately at different layers: the SPDK
>     > > > > bdev layer and the SCSI layer.
>     > > > > It looks like iSCSI protocol handling becomes the bottleneck
>     > > > > for the SPDK NULL block device.
>     > > > > Is there any way in SPDK to let multiple reactors handle one
>     > > > > iSCSI target's IOs?
>     > > > >
>     > > > > Current test topology is as follows:
>     > > > >
>     > > > > Client: FIO <--> /dev/sdb (Test 1: 1 thread, with/without
>     > > > > blk-mq; Test 2: 4 threads, with/without blk-mq)
>     > > > > Target: 1 SPDK iSCSI target <--> 1 LUN <--> SPDK NULL Block
>     > > > > Device
>     > > > >
>     > > > > Thanks
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org
>


* [SPDK] Re: Single iSCSI max performance with SPDK NULL Block Device
@ 2020-12-09 15:06 Harris, James R
  0 siblings, 0 replies; 6+ messages in thread
From: Harris, James R @ 2020-12-09 15:06 UTC (permalink / raw)
  To: spdk


Hi,

If you test with just 1 target node, but use BS=512, do you get higher IOPs?  If so, then it means that TCP is more the limiter, not the iSCSI layer.

If you are trying to get maximum performance on just one connection, you may try configuring rmem_max and wmem_max to a higher value.  The SPDK performance report has a section on how we configure the TCP stack for testing (https://ci.spdk.io/download/performance-reports/SPDK_tcp_perf_report_2007.pdf).  See section "TCP Configuration".  Note that you'll want to do this on both the initiator and the target.
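
As a concrete illustration of the rmem_max/wmem_max suggestion, a minimal
sketch is below; the 64 MiB figure is only a placeholder to experiment with
(the exact values SPDK uses are in the report's "TCP Configuration" section),
and the same change has to be made on both the initiator and the target:

    # Sketch: raise the kernel's per-socket buffer ceilings by writing to /proc/sys.
    # Equivalent to `sysctl -w net.core.rmem_max=... net.core.wmem_max=...`; needs root.
    LIMITS = {
        "/proc/sys/net/core/rmem_max": "67108864",  # placeholder: 64 MiB
        "/proc/sys/net/core/wmem_max": "67108864",  # placeholder: 64 MiB
    }

    for path, value in LIMITS.items():
        with open(path, "w") as f:
            f.write(value)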

-Jim


On 12/9/20, 8:03 AM, "Lego Lin" <lego.lin(a)gmail.com> wrote:

    Hi, Andrey:

    Current target server settings (Linux):
    1. irqbalance disabled; CPU scaling governor set to performance mode
    2. TCP configuration:
            tcp_configs = [{"file": "/proc/sys/net/ipv4/tcp_timestamps", "value": "1"},
                           {"file": "/proc/sys/net/ipv4/tcp_sack", "value": "0"},
                           {"file": "/proc/sys/net/ipv4/tcp_rmem", "value": "10000000 10000000 10000000"},
                           {"file": "/proc/sys/net/ipv4/tcp_wmem", "value": "10000000 10000000 10000000"},
                           {"file": "/proc/sys/net/ipv4/tcp_mem", "value": "10000000 10000000 10000000"},
                           {"file": "/proc/sys/net/core/rmem_default", "value": "524287"},
                           {"file": "/proc/sys/net/core/wmem_default", "value": "524287"},
                           {"file": "/proc/sys/net/core/rmem_max", "value": "524287"},
                           {"file": "/proc/sys/net/core/wmem_max", "value": "524287"},
                           {"file": "/proc/sys/net/core/optmem_max", "value": "524287"},
                           {"file": "/proc/sys/net/core/netdev_max_backlog", "value": "300000"}]
    3. NIC card IRQ affinity: vcores 1, 24, 25
        Storage target server: Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz
        Number of CPUs: 2 (48 vcores total)
    4. NIC rps_sock_flow_entries enabled and rps_cpus set to 1, 24, 25 as well
    5. SPDK iSCSI reactors: QPDefaultCPUMask 0x08007C08007C
    6. I can get line rate (1180K IOPS, BS=4K, 100% RndW or 100% RndR) if I use
       >= 8 SPDK NULL block devices with >= 8 SPDK iSCSI targets.

    On Wed, Dec 9, 2020 at 9:58 PM Andrey Kuzmin <andrey.v.kuzmin(a)gmail.com>
    wrote:

    > Ah, you're using a null bdev, so my reference to the SSD being an issue
    > is irrelevant. At 40 Gig, you should be running much faster than 155
    > KIOPS, and the target threading may be an issue. OTOH, a single poller
    > with a null bdev has been shown to reach 10 MIOPS, so I'd also look into
    > your network setup.
    >
    > Regards,
    > Andrey
    >
    > On Wed, Dec 9, 2020, 16:51 Lego Lin <lego.lin(a)gmail.com> wrote:
    >
    > > Hi, Andrey
    > >
    > > Thanks for the quick reply. I will try 20.10.
    > > Sorry for providing wrong information. With a single target and an SPDK
    > > NULL bdev over a 40G NIC, I got:
    > > 100% RandWrite 4K: 135K
    > > 100% RandRead 4K: 155K
    > >
    > > Although 135K is enough for a single SSD's random writes, it is not
    > > enough for reads. And if I compose 4 SSDs into 1 RAID-0 bdev, 135K is
    > > not enough either.
    > >
    > > Anyway, I will try 20.10 first. Thanks again.
    > >
    > > On Wed, Dec 9, 2020 at 8:36 PM Andrey Kuzmin <andrey.v.kuzmin(a)gmail.com>
    > > wrote:
    > >
    > > > On Wed, Dec 9, 2020, 15:14 Lego Lin <lego.lin(a)gmail.com> wrote:
    > > >
    > > > > Hi,
    > > > >
    > > > > Currently, I am using SPDK 19.04 to evaluate SPDK iSCSI
    > > > > performance. As far as I know, the current SPDK iSCSI target uses
    > > > > only 1 reactor to handle a single target's iSCSI IO.
    > > > > When I test with the SPDK NULL block device, the maximum
    > > > > performance with 100% RandWrite / 40G NIC / BS = 4K is about 155K
    > > > > IOPS.
    > > > >
    > > >
    > > > At 155K random write IOPS, the bottleneck can easily be the actual
    > > > SSD being used. Have you tested without the SPDK iSCSI target
    > > > involved, with the same SSD directly attached?
    > > >
    > > > Also, 19.04 is 1.5 years old, so you might want to upgrade to 20.10.
    > > >
    > > > Regards,
    > > > Andrey
    > > >
    > > > > I tried completing IO immediately at different layers: the SPDK
    > > > > bdev layer and the SCSI layer.
    > > > > It looks like iSCSI protocol handling becomes the bottleneck for
    > > > > the SPDK NULL block device.
    > > > > Is there any way in SPDK to let multiple reactors handle one iSCSI
    > > > > target's IOs?
    > > > >
    > > > > Current test topology is as follows:
    > > > >
    > > > > Client: FIO <--> /dev/sdb (Test 1: 1 thread, with/without blk-mq;
    > > > > Test 2: 4 threads, with/without blk-mq)
    > > > > Target: 1 SPDK iSCSI target <--> 1 LUN <--> SPDK NULL Block Device
    > > > >
    > > > > Thanks
    _______________________________________________
    SPDK mailing list -- spdk(a)lists.01.org
    To unsubscribe send an email to spdk-leave(a)lists.01.org



* [SPDK] Re: Single iSCSI max performance with SPDK NULL Block Device
@ 2020-12-09 15:02 Lego Lin
  0 siblings, 0 replies; 6+ messages in thread
From: Lego Lin @ 2020-12-09 15:02 UTC (permalink / raw)
  To: spdk


Hi, Andrey:

Current target server settings (Linux):
1. irqbalance disabled; CPU scaling governor set to performance mode
2. TCP configuration (applied as in the sketch after this list):
        tcp_configs = [{"file": "/proc/sys/net/ipv4/tcp_timestamps", "value": "1"},
                       {"file": "/proc/sys/net/ipv4/tcp_sack", "value": "0"},
                       {"file": "/proc/sys/net/ipv4/tcp_rmem", "value": "10000000 10000000 10000000"},
                       {"file": "/proc/sys/net/ipv4/tcp_wmem", "value": "10000000 10000000 10000000"},
                       {"file": "/proc/sys/net/ipv4/tcp_mem", "value": "10000000 10000000 10000000"},
                       {"file": "/proc/sys/net/core/rmem_default", "value": "524287"},
                       {"file": "/proc/sys/net/core/wmem_default", "value": "524287"},
                       {"file": "/proc/sys/net/core/rmem_max", "value": "524287"},
                       {"file": "/proc/sys/net/core/wmem_max", "value": "524287"},
                       {"file": "/proc/sys/net/core/optmem_max", "value": "524287"},
                       {"file": "/proc/sys/net/core/netdev_max_backlog", "value": "300000"}]
3. NIC card IRQ affinity: vcores 1, 24, 25
    Storage target server: Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz
    Number of CPUs: 2 (48 vcores total)
4. NIC rps_sock_flow_entries enabled and rps_cpus set to 1, 24, 25 as well
5. SPDK iSCSI reactors: QPDefaultCPUMask 0x08007C08007C
6. I can get line rate (1180K IOPS, BS=4K, 100% RndW or 100% RndR) if I use
   >= 8 SPDK NULL block devices with >= 8 SPDK iSCSI targets.
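
A minimal sketch of how a tcp_configs list like the one in item 2 gets applied:
each value is simply written into its /proc entry. This needs root on the
target host and is shown without any error handling or persistence:

    # Sketch: push every tcp_configs entry into the running kernel via /proc.
    # Assumes the tcp_configs list defined in item 2 above.
    for cfg in tcp_configs:
        with open(cfg["file"], "w") as f:
            f.write(cfg["value"])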

On Wed, Dec 9, 2020 at 9:58 PM Andrey Kuzmin <andrey.v.kuzmin(a)gmail.com>
wrote:

> Ah, you're using a null bdev, so my reference to the SSD being an issue is
> irrelevant. At 40 Gig, you should be running much faster than 155 KIOPS,
> and the target threading may be an issue. OTOH, a single poller with a null
> bdev has been shown to reach 10 MIOPS, so I'd also look into your network
> setup.
>
> Regards,
> Andrey
>
> On Wed, Dec 9, 2020, 16:51 Lego Lin <lego.lin(a)gmail.com> wrote:
>
> > Hi, Andrey
> >
> > Thanks for the quick reply. I will try 20.10.
> > Sorry for providing wrong information. With a single target and an SPDK
> > NULL bdev over a 40G NIC, I got:
> > 100% RandWrite 4K: 135K
> > 100% RandRead 4K: 155K
> >
> > Although 135K is enough for a single SSD's random writes, it is not enough
> > for reads. And if I compose 4 SSDs into 1 RAID-0 bdev, 135K is not enough
> > either.
> >
> > Anyway, I will try 20.10 first. Thanks again.
> >
> > On Wed, Dec 9, 2020 at 8:36 PM Andrey Kuzmin <andrey.v.kuzmin(a)gmail.com>
> > wrote:
> >
> > > On Wed, Dec 9, 2020, 15:14 Lego Lin <lego.lin(a)gmail.com> wrote:
> > >
> > > > Hi,
> > > >
> > > > Currently, I am using SPDK 19.04 to evaluate SPDK iSCSI performance.
> > > > As far as I know, the current SPDK iSCSI target uses only 1 reactor to
> > > > handle a single target's iSCSI IO.
> > > > When I test with the SPDK NULL block device, the maximum performance
> > > > with 100% RandWrite / 40G NIC / BS = 4K is about 155K IOPS.
> > > >
> > >
> > > At 155K random write IOPS, the bottleneck can easily be the actual SSD
> > > being used. Have you tested without the SPDK iSCSI target involved, with
> > > the same SSD directly attached?
> > >
> > > Also, 19.04 is 1.5 years old, so you might want to upgrade to 20.10.
> > >
> > > Regards,
> > > Andrey
> > >
> > > > I tried completing IO immediately at different layers: the SPDK bdev
> > > > layer and the SCSI layer.
> > > > It looks like iSCSI protocol handling becomes the bottleneck for the
> > > > SPDK NULL block device.
> > > > Is there any way in SPDK to let multiple reactors handle one iSCSI
> > > > target's IOs?
> > > >
> > > > Current test topology is as follows:
> > > >
> > > > Client: FIO <--> /dev/sdb (Test 1: 1 thread, with/without blk-mq;
> > > > Test 2: 4 threads, with/without blk-mq)
> > > > Target: 1 SPDK iSCSI target <--> 1 LUN <--> SPDK NULL Block Device
> > > >
> > > > Thanks
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org
>


* [SPDK] Re: Single iSCSI max performance with SPDK NULL Block Device
@ 2020-12-09 13:58 Andrey Kuzmin
  0 siblings, 0 replies; 6+ messages in thread
From: Andrey Kuzmin @ 2020-12-09 13:58 UTC (permalink / raw)
  To: spdk


Ah, you're using a null bdev, so my reference to the SSD being an issue is
irrelevant. At 40 Gig, you should be running much faster than 155 KIOPS, and
the target threading may be an issue. OTOH, a single poller with a null bdev
has been shown to reach 10 MIOPS, so I'd also look into your network setup.

Regards,
Andrey

On Wed, Dec 9, 2020, 16:51 Lego Lin <lego.lin(a)gmail.com> wrote:

> Hi, Andrey
>
> Thanks for the quick reply. I will try 20.10.
> Sorry for providing wrong information. With a single target and an SPDK NULL
> bdev over a 40G NIC, I got:
> 100% RandWrite 4K: 135K
> 100% RandRead 4K: 155K
>
> Although 135K is enough for a single SSD's random writes, it is not enough
> for reads. And if I compose 4 SSDs into 1 RAID-0 bdev, 135K is not enough
> either.
>
> Anyway, I will try 20.10 first. Thanks again.
>
> On Wed, Dec 9, 2020 at 8:36 PM Andrey Kuzmin <andrey.v.kuzmin(a)gmail.com>
> wrote:
>
> > On Wed, Dec 9, 2020, 15:14 Lego Lin <lego.lin(a)gmail.com> wrote:
> >
> > > Hi,
> > >
> > > Currently, I am using SPDK 19.04 to evaluate SPDK iSCSI performance. As
> > > far as I know, the current SPDK iSCSI target uses only 1 reactor to
> > > handle a single target's iSCSI IO.
> > > When I test with the SPDK NULL block device, the maximum performance
> > > with 100% RandWrite / 40G NIC / BS = 4K is about 155K IOPS.
> > >
> >
> > At 155K random write IOPS, the bottleneck can easily be the actual SSD
> > being used. Have you tested without the SPDK iSCSI target involved, with
> > the same SSD directly attached?
> >
> > Also, 19.04 is 1.5 years old, so you might want to upgrade to 20.10.
> >
> > Regards,
> > Andrey
> >
> > > I tried completing IO immediately at different layers: the SPDK bdev
> > > layer and the SCSI layer.
> > > It looks like iSCSI protocol handling becomes the bottleneck for the
> > > SPDK NULL block device.
> > > Is there any way in SPDK to let multiple reactors handle one iSCSI
> > > target's IOs?
> > >
> > > Current test topology is as follows:
> > >
> > > Client: FIO <--> /dev/sdb (Test 1: 1 thread, with/without blk-mq;
> > > Test 2: 4 threads, with/without blk-mq)
> > > Target: 1 SPDK iSCSI target <--> 1 LUN <--> SPDK NULL Block Device
> > >
> > > Thanks
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org
>


* [SPDK] Re: Single iSCSI max performance with SPDK NULL Block Device
@ 2020-12-09 13:52 Lego Lin
  0 siblings, 0 replies; 6+ messages in thread
From: Lego Lin @ 2020-12-09 13:52 UTC (permalink / raw)
  To: spdk


Hi, Andrey

Thanks for the quick reply. I will try 20.10.
Sorry for providing wrong information. With a single target and an SPDK NULL
bdev over a 40G NIC, I got:
100% RandWrite 4K: 135K
100% RandRead 4K: 155K

Although 135K is enough for a single SSD's random writes, it is not enough for
reads. And if I compose 4 SSDs into 1 RAID-0 bdev, 135K is not enough either.

Anyway, I will try 20.10 first. Thanks again.

On Wed, Dec 9, 2020 at 8:36 PM Andrey Kuzmin <andrey.v.kuzmin(a)gmail.com>
wrote:

> On Wed, Dec 9, 2020, 15:14 Lego Lin <lego.lin(a)gmail.com> wrote:
>
> > Hi,
> >
> > Currently, I am using SPDK 19.04 to evaluate SPDK iSCSI performance. As
> > far as I know, the current SPDK iSCSI target uses only 1 reactor to handle
> > a single target's iSCSI IO.
> > When I test with the SPDK NULL block device, the maximum performance with
> > 100% RandWrite / 40G NIC / BS = 4K is about 155K IOPS.
> >
>
> At 155K random write IOPS, the bottleneck can easily be the actual SSD
> being used. Have you tested without the SPDK iSCSI target involved, with
> the same SSD directly attached?
>
> Also, 19.04 is 1.5 years old, so you might want to upgrade to 20.10.
>
> Regards,
> Andrey
>
> > I tried completing IO immediately at different layers: the SPDK bdev
> > layer and the SCSI layer.
> > It looks like iSCSI protocol handling becomes the bottleneck for the SPDK
> > NULL block device.
> > Is there any way in SPDK to let multiple reactors handle one iSCSI
> > target's IOs?
> >
> > Current test topology is as follows:
> >
> > Client: FIO <--> /dev/sdb (Test 1: 1 thread, with/without blk-mq;
> > Test 2: 4 threads, with/without blk-mq)
> > Target: 1 SPDK iSCSI target <--> 1 LUN <--> SPDK NULL Block Device
> >
> > Thanks
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org
>



Thread overview: 6 messages
2020-12-09 12:36 [SPDK] Re: Single iSCSI max performance with SPDK NULL Block Device Andrey Kuzmin
2020-12-09 13:52 Lego Lin
2020-12-09 13:58 Andrey Kuzmin
2020-12-09 15:02 Lego Lin
2020-12-09 15:06 Harris, James R
2020-12-09 16:12 Andrey Kuzmin
