From: John Garry <john.garry@huawei.com>
To: Will Deacon <will@kernel.org>
Cc: Jean-Philippe Brucker <jean-philippe@linaro.org>,
Marc Zyngier <maz@kernel.org>, Ming Lei <ming.lei@redhat.com>,
iommu@lists.linux-foundation.org,
Robin Murphy <robin.murphy@arm.com>
Subject: Re: arm-smmu-v3 high cpu usage for NVMe
Date: Mon, 6 Apr 2020 16:11:43 +0100 [thread overview]
Message-ID: <30664ea8-548d-b0a4-81bc-e7f311f84b5f@huawei.com> (raw)
In-Reply-To: <482c00d5-8e6d-1484-820e-1e89851ad5aa@huawei.com>
On 02/04/2020 13:10, John Garry wrote:
> On 18/03/2020 20:53, Will Deacon wrote:
>>> As for arm_smmu_cmdq_issue_cmdlist(), I do note that during the
>>> testing our batch size is 1, so we're not seeing the real benefit of
>>> the batching. I can't help but think that we could improve this code
>>> to try to combine CMD_SYNCs for small batches.
>>>
>>> Anyway, let me know your thoughts or any questions. I'll have a look
>>> if I get a chance for other possible bottlenecks.
>> Did you ever get any more information on this? I don't have any SMMUv3
>> hardware any more, so I can't really dig into this myself.
>>
>
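As a toy illustration of the CMD_SYNC-combining idea quoted above (purely illustrative Python -- the function name and model are mine, not the kernel's code): issuing one CMD_SYNC per batch rather than per command amortises the sync cost by the batch size.

```python
# Toy model of command-queue batching: count the CMD_SYNCs issued when
# syncing once per batch of up to batch_size commands. Illustrative
# only -- not the SMMUv3 driver's actual structure.

def syncs_issued(n_cmds, batch_size):
    """One CMD_SYNC per batch of up to batch_size commands."""
    return -(-n_cmds // batch_size)  # ceiling division

# With batch size 1 (what the NVMe testing below observes), every
# command pays for its own CMD_SYNC:
per_cmd = syncs_issued(1000, 1)    # 1000 syncs
# Combining small batches would amortise that cost:
combined = syncs_issued(1000, 8)   # 125 syncs
print(per_cmd, combined)
```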
Hi Will,
JFYI, I added some debug to arm_smmu_cmdq_issue_cmdlist() to get some
idea of what is going on; perf annotate did not tell me much.
I tested NVMe performance with and without Marc's patchset to spread
LPIs for managed interrupts.
Average duration of arm_smmu_cmdq_issue_cmdlist() [all results are
approximate]:

mainline:
- owner: 6ms
- non-owner: 4ms

mainline + LPI spreading patchset:
- owner: 25ms
- non-owner: 22ms
For this, a list would be a TLBI command + CMD_SYNC.
Please note that the LPI spreading patchset still gives a circa 25%
NVMe throughput increase. What happens there is that we get many more
cpus involved, which creates more inter-cpu contention, but the overall
gain comes from alleviating pressure on the previously overloaded cpus.
I also notice that, with the LPI spreading patchset, on average a cpu
is the "owner" in arm_smmu_cmdq_issue_cmdlist() 1 time in 8, as opposed
to 1 in 3 for mainline. This means that we're creating longer chains of
lists to be published.
But I found that for a non-owner, the average MSI-based CMD_SYNC
polling time is 12ms with the LPI spreading patchset. As such, it seems
to be taking approx (12*2/(8-1) ≈) 3.4ms to consume a single list,
which is consistent with my finding that an owner also polls
consumption for ~3ms. Without the LPI spreading patchset, polling time
is approx 2 and 3ms for owner and non-owner, respectively.
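Making that estimate explicit (my reading of the expression is 12 * 2 / (8 - 1); the parenthesisation is an assumption on my part):

```python
# Working the polling figures through. Assumption: the average
# non-owner waits for roughly half of the (chain_len - 1) lists queued
# ahead of it, so per-list consumption time is about
# poll_time * 2 / (chain_len - 1). Illustrative arithmetic only.

avg_nonowner_poll_ms = 12
avg_chain_len = 8  # 1 owner in 8 cpus => ~8 lists per published chain

per_list_ms = avg_nonowner_poll_ms * 2 / (avg_chain_len - 1)
print(round(per_list_ms, 1))  # ~3.4ms, in line with the ~3ms owner figure
```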
As an experiment, I did try to hack the code to use a spinlock again
for protecting the command queue, instead of the current solution, and
always saw a performance drop there. That is to be expected. But maybe
we can avoid a spinlock yet still serialise production+consumption, to
alleviate the long polling periods.
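One hypothetical direction, sketched as a toy Python model (the ticket scheme and every name here are mine, not a proposal for the actual kernel code): serialise production and consumption with tickets, so each producer knows exactly when its list has been consumed instead of repeatedly polling a shared consumer pointer.

```python
# Toy ticket-based serialisation sketch. Each producer takes a ticket;
# the consumer advances a "serving" counter and wakes waiters, so a
# waiter sleeps until its ticket is reached rather than spinning.
# Hypothetical model only -- not the SMMUv3 driver's design.

import itertools
import threading

class TicketQueue:
    def __init__(self):
        self._next = itertools.count()      # ticket dispenser
        self._serving = 0                   # tickets consumed so far
        self._cond = threading.Condition()

    def produce(self):
        """Take a ticket for a newly published list."""
        return next(self._next)

    def wait_consumed(self, ticket):
        """Sleep until every list up to and including `ticket` is consumed."""
        with self._cond:
            self._cond.wait_for(lambda: self._serving > ticket)

    def consume(self, ticket):
        """Mark all lists up to and including `ticket` as consumed."""
        with self._cond:
            self._serving = max(self._serving, ticket + 1)
            self._cond.notify_all()

q = TicketQueue()
t0 = q.produce()
t1 = q.produce()
q.consume(t1)          # consuming up to t1 covers t0 as well
q.wait_consumed(t0)    # returns immediately, no polling loop
print(t0, t1)
```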
Let me know your thoughts.
Cheers,
John
Thread overview: 32+ messages
2019-08-21 15:17 [PATCH v2 0/8] Sort out SMMUv3 ATC invalidation and locking Will Deacon
2019-08-21 15:17 ` [PATCH v2 1/8] iommu/arm-smmu-v3: Document ordering guarantees of command insertion Will Deacon
2019-08-21 15:17 ` [PATCH v2 2/8] iommu/arm-smmu-v3: Disable detection of ATS and PRI Will Deacon
2019-08-21 15:36 ` Robin Murphy
2019-08-21 15:17 ` [PATCH v2 3/8] iommu/arm-smmu-v3: Remove boolean bitfield for 'ats_enabled' flag Will Deacon
2019-08-21 15:17 ` [PATCH v2 4/8] iommu/arm-smmu-v3: Don't issue CMD_SYNC for zero-length invalidations Will Deacon
2019-08-21 15:17 ` [PATCH v2 5/8] iommu/arm-smmu-v3: Rework enabling/disabling of ATS for PCI masters Will Deacon
2019-08-21 15:50 ` Robin Murphy
2019-08-21 15:17 ` [PATCH v2 6/8] iommu/arm-smmu-v3: Fix ATC invalidation ordering wrt main TLBs Will Deacon
2019-08-21 16:25 ` Robin Murphy
2019-08-21 15:17 ` [PATCH v2 7/8] iommu/arm-smmu-v3: Avoid locking on invalidation path when not using ATS Will Deacon
2019-08-22 12:36 ` Robin Murphy
2019-08-21 15:17 ` [PATCH v2 8/8] Revert "iommu/arm-smmu-v3: Disable detection of ATS and PRI" Will Deacon
2020-01-02 17:44 ` arm-smmu-v3 high cpu usage for NVMe John Garry
2020-03-18 20:53 ` Will Deacon
2020-03-19 12:54 ` John Garry
2020-03-19 18:43 ` Jean-Philippe Brucker
2020-03-20 10:41 ` John Garry
2020-03-20 11:18 ` Jean-Philippe Brucker
2020-03-20 16:20 ` John Garry
2020-03-20 16:33 ` Marc Zyngier
2020-03-23 9:03 ` John Garry
2020-03-23 9:16 ` Marc Zyngier
2020-03-24 9:18 ` John Garry
2020-03-24 10:43 ` Marc Zyngier
2020-03-24 11:55 ` John Garry
2020-03-24 12:07 ` Robin Murphy
2020-03-24 12:37 ` John Garry
2020-03-25 15:31 ` John Garry
2020-05-22 14:52 ` John Garry
2020-05-25 5:57 ` Song Bao Hua (Barry Song)
[not found] ` <482c00d5-8e6d-1484-820e-1e89851ad5aa@huawei.com>
2020-04-06 15:11 ` John Garry [this message]