From: Kariuki, John K <john.k.kariuki at intel.com>
To: spdk@lists.01.org
Subject: Re: [SPDK] spdk peer2peer dma fio latency
Date: Tue, 25 Apr 2017 20:59:28 +0000	[thread overview]
Message-ID: <C4CE0E59D8C78F49A2AB85096897D2DB715D5831@fmsmsx116.amr.corp.intel.com> (raw)
In-Reply-To: CABwN-bEp4Ox82NzWks_BGTrOkpsb0HzADc1jELASEKz1TL0Jcw@mail.gmail.com


Hello
Can you provide some additional information?

1) Have you pre-conditioned the NVMe SSDs? (A typical preconditioning job is sketched below for reference.)

2) Which Intel Data Center NVMe SSDs are you using? I would like to look at the device spec to see the expected QD 1 latencies.

3) Are you doing random or sequential 4K reads from the device?
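
A minimal sketch of such a preconditioning job (assumptions: /dev/nvme0n1 is a placeholder device path, and the exact block size and number of passes depend on your preconditioning procedure; for random-read latency tests a sequential fill is the usual minimum):

  ; precondition.fio -- sketch only
  [precondition]
  ioengine=libaio   ; the kernel driver path is fine for preconditioning
  direct=1
  rw=write          ; sequential fill of the whole device
  bs=128k
  iodepth=32
  numjobs=1
  loops=2           ; two full passes to reach steady state
  filename=/dev/nvme0n1
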
Thanks.

From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of PR PR
Sent: Tuesday, April 18, 2017 7:05 PM
To: spdk@lists.01.org
Subject: [SPDK] spdk peer2peer dma fio latency

Hi, I am running some experiments to evaluate the performance of peer-to-peer DMA. I am using SPDK to control the NVMe drives, with the fio plugin compiled against SPDK. I am seeing some odd behavior: when I run 4K IOs at an IO depth of 1, peer-to-peer DMA from an NVMe drive to a PCIe device (which exposes memory via BAR1) in a different NUMA node has a 50th-percentile latency of 17 usecs, while the same experiment with the NVMe drive and the PCIe device in the same NUMA node (node 1) has a latency of 38 usecs. In both cases fio was running on a node 0 CPU core and the PCIe device that exposes the BAR1 memory is attached to node 1. DMA from the NVMe device to host memory also takes 38 usecs.
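
(For context, a minimal SPDK fio-plugin job for a QD 1 4K random-read run might look like the sketch below; the PCIe address, the pinned core, and the plugin path are illustrative placeholders, not values from this setup.)

  ; qd1-randread.fio -- sketch only
  [global]
  ioengine=spdk        ; ioengine exported by SPDK's fio plugin
  thread=1
  direct=1
  rw=randread
  bs=4k
  iodepth=1
  cpus_allowed=0       ; pin fio to a NUMA node 0 core

  [qd1]
  ; colons in the PCI address are replaced with dots for fio's parser
  filename=trtype=PCIe traddr=0000.01.00.0 ns=1

Run with the plugin preloaded, for example:

  LD_PRELOAD=<spdk>/examples/nvme/fio_plugin/fio_plugin fio qd1-randread.fio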

To summarize the cases:

1. NVMe (NUMA node 0) -> PCIe device (NUMA node 1)   --- 18 usecs
2. NVMe (NUMA node 1) -> PCIe device (NUMA node 1)   --- 38 usecs
3. NVMe (NUMA node 0) -> host memory                 --- 38 usecs

fio runs on a NUMA node 0 CPU core in all cases; a quick way to double-check device locality is sketched below.
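
(A minimal sketch of that locality check; the PCI addresses are placeholders for the actual NVMe drive and BAR1 device.)

  # NUMA node each PCI function is attached to (-1 if the platform reports none)
  cat /sys/bus/pci/devices/0000:01:00.0/numa_node   # NVMe drive (placeholder address)
  cat /sys/bus/pci/devices/0000:81:00.0/numa_node   # BAR1 peer device (placeholder address)

  # CPUs local to node 0, useful for cpus_allowed/taskset pinning
  cat /sys/devices/system/node/node0/cpulist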

At higher IO depth values, latency in the cross-NUMA case (case 1 above) increases steeply and ends up worse than in cases 2 and 3.

Any pointers on why this could be happening?

The NVMe devices used are identical 400 GB Intel Data Center SSDs.

Thanks


