From: Robin Murphy <robin.murphy@arm.com>
To: John Garry <john.garry@huawei.com>,
	Ming Lei <ming.lei@redhat.com>,
	linux-nvme@lists.infradead.org, Will Deacon <will@kernel.org>,
	linux-arm-kernel@lists.infradead.org,
	iommu@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org
Subject: Re: [bug report] iommu_dma_unmap_sg() is very slow when running IO from remote numa node
Date: Fri, 9 Jul 2021 13:34:12 +0100	[thread overview]
Message-ID: <1789d6b2-8332-0f26-2c5e-40181d09ebd6@arm.com> (raw)
In-Reply-To: <a44e8a68-d789-e3db-4fbb-404defb431f6@huawei.com>

On 2021-07-09 12:04, John Garry wrote:
> On 09/07/2021 11:26, Robin Murphy wrote:
>> On 2021-07-09 09:38, Ming Lei wrote:
>>> Hello,
>>>
>>> I observed that NVMe performance is very bad when running fio on one
>>> CPU (aarch64) in the remote NUMA node, compared with running it on
>>> the NVMe device's local NUMA node.
>>>
>>> Please see the test results[1]: 327K vs. 34.9K IOPS.
>>>
>>> The latency trace shows that one big difference is in
>>> iommu_dma_unmap_sg(): 1111 nsecs vs. 25437 nsecs.
>>
>> Are you able to dig down further into that? iommu_dma_unmap_sg() 
>> itself doesn't do anything particularly special, so whatever makes a 
>> difference is probably happening at a lower level, and I suspect 
>> there's probably an SMMU involved. If for instance it turns out to go 
>> all the way down to __arm_smmu_cmdq_poll_until_consumed() because 
>> polling MMIO from the wrong node is slow, there's unlikely to be much 
>> you can do about that other than the global "go faster" knobs 
>> (iommu.strict and iommu.passthrough) with their associated compromises.
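
(For reference, both of those are ordinary kernel command-line
parameters, so a quick experiment is just a matter of editing the boot
arguments and rebooting - a sketch, assuming a grubby-managed
bootloader as on typical RHEL/Fedora installs; adjust for whatever this
box actually uses:

  # relax strict per-unmap TLB invalidation (deferred/lazy flushing)
  grubby --update-kernel=ALL --args="iommu.strict=0"
  # or bypass DMA translation entirely
  grubby --update-kernel=ALL --args="iommu.passthrough=1"
  reboot

The compromises being that iommu.strict=0 leaves stale TLB entries live
for longer after unmap, and iommu.passthrough=1 gives up DMA isolation
altogether.)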
> 
> There was also the disable_msipolling option:
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c#n42 
> 
> 
> But I am not sure whether that platform even supports MSI polling (or
> has an SMMUv3).
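
(If it does, disable_msipolling is a module parameter, so with the
driver built in it would presumably be passed on the kernel command
line prefixed with the module name, something like:

  arm_smmu_v3.disable_msipolling=1

- untested here, and only relevant if the SMMU actually advertises
MSI-based CMD_SYNC completion in the first place.)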

Hmm, I suppose in principle the MSI polling path could lead to a bit of 
cacheline ping-pong with the SMMU fetching and writing back to the sync 
command, but I'd rather find out more details of where exactly the extra 
time is being spent in this particular situation than speculate much 
further.
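
If you do want to dig, the function_graph tracer is probably the
quickest way to see where that ~24us per unmap actually goes - a rough
sketch (the function names assume an SMMUv3 system; if a static
function has been inlined it won't appear, so check
available_filter_functions for nearby symbols):

  cd /sys/kernel/debug/tracing
  echo iommu_dma_unmap_sg > set_graph_function
  echo function_graph > current_tracer
  echo 1 > tracing_on
  # run a short burst of the remote-node fio case, then:
  echo 0 > tracing_on
  less trace

That should at least show whether the time goes on the IOVA/flush side
or down in arm_smmu_cmdq_issue_cmdlist() waiting for CMD_SYNC to
complete.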

> You could also try the iommu.forcedac=1 cmdline option, but I doubt it
> will help since the issue appears to be NUMA-related.

Plus that shouldn't make any difference to unmaps anyway - forcedac only
affects where IOVAs get allocated on the map path.

>>> [1] fio test & results
>>>
>>> 1) fio test result:
>>>
>>> - run fio on local CPU
>>> taskset -c 0 ~/git/tools/test/nvme/io_uring 10 1 /dev/nvme1n1 4k
>>> + fio --bs=4k --ioengine=io_uring --fixedbufs --registerfiles --hipri 
>>> --iodepth=64 --iodepth_batch_submit=16 
>>> --iodepth_batch_complete_min=16 --filename=/dev/nvme1n1 --direct=1 
>>> --runtime=10 --numjobs=1 --rw=randread --name=test --group_reporting
>>>
>>> IOPS: 327K
>>> avg latency of iommu_dma_unmap_sg(): 1111 nsecs
>>>
>>>
>>> - run fio on remote CPU
>>> taskset -c 80 ~/git/tools/test/nvme/io_uring 10 1 /dev/nvme1n1 4k
>>> + fio --bs=4k --ioengine=io_uring --fixedbufs --registerfiles --hipri 
>>> --iodepth=64 --iodepth_batch_submit=16 
>>> --iodepth_batch_complete_min=16 --filename=/dev/nvme1n1 --direct=1 
>>> --runtime=10 --numjobs=1 --rw=randread --name=test --group_reporting
>>>
>>> IOPS: 34.9K
>>> avg latency of iommu_dma_unmap_sg(): 25437 nsecs
>>>
>>> 2) system info
>>> [root@ampere-mtjade-04 ~]# lscpu | grep NUMA
>>> NUMA node(s):                    2
>>> NUMA node0 CPU(s):               0-79
>>> NUMA node1 CPU(s):               80-159
>>>
>>> lspci | grep NVMe
>>> 0003:01:00.0 Non-Volatile memory controller: Samsung Electronics Co 
>>> Ltd NVMe SSD Controller SM981/PM981/PM983
>>>
>>> [root@ampere-mtjade-04 ~]# cat 
>>> /sys/block/nvme1n1/device/device/numa_node 
> 
> Since it's an Ampere system, I guess it's SMMUv3.
> 
> BTW, if you remember, I did raise a performance issue with SMMUv3 and
> NVMe before:
> https://lore.kernel.org/linux-iommu/b2a6e26d-6d0d-7f0d-f222-589812f701d2@huawei.com/ 

It doesn't seem like the best-case throughput is of concern in this case 
though, and my hunch is that a ~23x discrepancy in SMMU unmap 
performance depending on locality probably isn't specific to NVMe.

Robin.

> I did have this series to improve performance for systems with lots of
> CPUs, like the one above, but it was not accepted:
> https://lore.kernel.org/linux-iommu/1598018062-175608-1-git-send-email-john.garry@huawei.com/ 
> 
> 
> Thanks,
> John
> 
