spdk.lists.linux.dev archive mirror
* [SPDK] Re: Performance of spdk_trace
@ 2022-08-30  1:46 Jinhao Fan
  0 siblings, 0 replies; 2+ messages in thread
From: Jinhao Fan @ 2022-08-30  1:46 UTC (permalink / raw)
  To: spdk


at 5:44 AM, Harris, James R <james.r.harris(a)intel.com> wrote:

> Hi Jinhao,
> 
> I’m not aware of any detailed benchmarking, but it can certainly support the trace event rates that you describe.
> 
> You can use the null bdev with the bdevperf application to get a rough approximation of the maximum trace event rate.  Each bdev IO generates two trace events (one for IO start, one for IO end), so the null bdev is an easy way to generate as many of those trace events as possible.
> 
> First put the following in a file named null.json:
> 
> {
>  "subsystems": [
>    {
>      "subsystem": "bdev",
>      "config": [
>        {
>          "method": "bdev_null_create",
>          "params": {
>            "name": "null0",
>            "num_blocks": 2097152,
>            "block_size": 512
>          }
>        }
>      ]
>    }
>  ]
> }
> 
> Then run the bdevperf app without trace events enabled:
> 
> test/bdev/bdevperf/bdevperf -q 4 -o 1024 -w randread -t 5 -c null.json
> 
> In one of my development VMs, I see 20M IO/s.  This is roughly 50ns per IO.
> 
> Next run it with bdev trace events enabled by adding the “-e bdev” command line option:
> 
> test/bdev/bdevperf/bdevperf -q 4 -o 1024 -w randread -t 5 -c null.json -e bdev
> 
> In the same VM, I see 14M IO/s.  This is roughly 70ns per IO.  That’s 20ns extra overhead for 2 trace events.
> 
> It scales to multiple cores as well: add “-m 0xF -C” to both bdevperf command lines and it will run the same test on 4 cores at once, with roughly the same 20M vs. 14M IO/s on each of the 4 cores.

That’s awesome performance. Thanks!

> 
> Of course your platform will have different hardware, so results will vary somewhat.  The achievable event rate will also depend on the number and size of the arguments attached to each trace event.
> 
> Best regards,
> 
> Jim Harris
> 
> 
> 
> 
> From: Jinhao Fan <fanjinhao21s(a)ict.ac.cn>
> Date: Sunday, August 28, 2022 at 6:14 PM
> To: spdk(a)lists.01.org <spdk(a)lists.01.org>
> Subject: [SPDK] Performance of spdk_trace
> Hi there,
> 
> During my daily use I have found the SPDK tracing system and spdk_trace_record
> to have very low overhead, so I’m planning to port it to other applications,
> where it might generate several hundred thousand events per second. I think
> such a rate should be normal for SPDK applications. Have you ever measured how
> many events per second spdk_trace is able to handle?
> 
> Thanks!
> 
> Jinhao Fan
> _______________________________________________
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org




* [SPDK] Re: Performance of spdk_trace
@ 2022-08-29 21:44 Harris, James R
  0 siblings, 0 replies; 2+ messages in thread
From: Harris, James R @ 2022-08-29 21:44 UTC (permalink / raw)
  To: spdk


Hi Jinhao,

I’m not aware of any detailed benchmarking, but it can certainly support the trace event rates that you describe.

You can use the null bdev with the bdevperf application to get a rough approximation of the maximum trace event rate.  Each bdev IO generates two trace events (one for IO start, one for IO end), so the null bdev is an easy way to generate as many of those trace events as possible.

First put the following in a file named null.json:

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_null_create",
          "params": {
            "name": "null0",
            "num_blocks": 2097152,
            "block_size": 512
          }
        }
      ]
    }
  ]
}
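
(For reference, 2097152 blocks x 512 bytes gives a 1GiB null bdev; the exact size doesn't matter much for this test, since the null bdev completes I/O immediately without touching any backing storage.)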

Then run the bdevperf app without trace events enabled:

test/bdev/bdevperf/bdevperf -q 4 -o 1024 -w randread -t 5 -c null.json

In one of my development VMs, I see 20M IO/s.  This is roughly 50ns per IO.

Next run it with bdev trace events enabled by adding the “-e bdev” command line option:

test/bdev/bdevperf/bdevperf -q 4 -o 1024 -w randread -t 5 -c null.json -e bdev

In the same VM, I see 14M IO/s.  This is roughly 70ns per IO.  That’s 20ns extra overhead for 2 trace events.
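
As a rough back-of-the-envelope check against the event rate in your original question (just an estimate derived from the numbers above, not a separate measurement):

20ns / 2 trace events = ~10ns per trace event
A few hundred thousand events per second x ~10ns = a few milliseconds of CPU time per second, i.e. well under 1% of one core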

It scales to multiple cores as well: add “-m 0xF -C” to both bdevperf command lines and it will run the same test on 4 cores at once, with roughly the same 20M vs. 14M IO/s on each of the 4 cores.
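
For reference, the two multi-core command lines are just the ones above with those options appended:

test/bdev/bdevperf/bdevperf -q 4 -o 1024 -w randread -t 5 -c null.json -m 0xF -C
test/bdev/bdevperf/bdevperf -q 4 -o 1024 -w randread -t 5 -c null.json -e bdev -m 0xF -C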

Of course your platform will have different hardware, so results will vary somewhat.  The achievable event rate will also depend on the number and size of the arguments attached to each trace event.
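
If you also want to capture the events while the test runs and inspect them afterwards, the usual flow is to attach spdk_trace_record to the running app and then dump the resulting file with the spdk_trace tool. Roughly like this (just a sketch: the binary paths, the -s app name, and the pid/output file are placeholders, so check the tools' usage output on your build):

build/bin/spdk_trace_record -q -s bdevperf -p <bdevperf pid> -f /tmp/bdevperf.trace
build/bin/spdk_trace -f /tmp/bdevperf.trace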

Best regards,

Jim Harris




From: Jinhao Fan <fanjinhao21s(a)ict.ac.cn>
Date: Sunday, August 28, 2022 at 6:14 PM
To: spdk(a)lists.01.org <spdk(a)lists.01.org>
Subject: [SPDK] Performance of spdk_trace
Hi there,

During my daily use I have found the SPDK tracing system and spdk_trace_record
to have very low overhead, so I’m planning to port it to other applications,
where it might generate several hundred thousand events per second. I think
such a rate should be normal for SPDK applications. Have you ever measured how
many events per second spdk_trace is able to handle?

Thanks!

Jinhao Fan
_______________________________________________
SPDK mailing list -- spdk(a)lists.01.org
To unsubscribe send an email to spdk-leave(a)lists.01.org


end of thread

Thread overview: 2+ messages
2022-08-30  1:46 [SPDK] Re: Performance of spdk_trace Jinhao Fan
2022-08-29 21:44 [SPDK] Re: Performance of spdk_trace Harris, James R