From: John Garry <john.garry@huawei.com>
To: Will Deacon <will@kernel.org>, Robin Murphy <robin.murphy@arm.com>
Cc: Jean-Philippe Brucker <jean-philippe@linaro.org>,
	Marc Zyngier <maz@kernel.org>, Ming Lei <ming.lei@redhat.com>,
	iommu@lists.linux-foundation.org, alexandru.elisei@arm.com
Subject: Re: arm-smmu-v3 high cpu usage for NVMe
Date: Fri, 22 May 2020 15:52:30 +0100
Message-ID: <4aacbd65-f1fd-55e0-c8bb-468badc9f600@huawei.com>
In-Reply-To: <c9ebe17d-66b8-1b8c-cc2c-5be0bd1501a7@huawei.com>

On 20/03/2020 10:41, John Garry wrote:

+ Barry, Alexandru

>>>>>     PerfTop:   85864 irqs/sec  kernel:89.6%  exact:  0.0% lost: 0/34434 drop: 0/40116 [4000Hz cycles],  (all, 96 CPUs)
>>>>> ------------------------------------------------------------------------------
>>>>>
>>>>>       27.43%  [kernel]          [k] arm_smmu_cmdq_issue_cmdlist
>>>>>       11.71%  [kernel]          [k] _raw_spin_unlock_irqrestore
>>>>>        6.35%  [kernel]          [k] _raw_spin_unlock_irq
>>>>>        2.65%  [kernel]          [k] get_user_pages_fast
>>>>>        2.03%  [kernel]          [k] __slab_free
>>>>>        1.55%  [kernel]          [k] tick_nohz_idle_exit
>>>>>        1.47%  [kernel]          [k] arm_lpae_map
>>>>>        1.39%  [kernel]          [k] __fget
>>>>>        1.14%  [kernel]          [k] __lock_text_start
>>>>>        1.09%  [kernel]          [k] _raw_spin_lock
>>>>>        1.08%  [kernel]          [k] bio_release_pages.part.42
>>>>>        1.03%  [kernel]          [k] __sbitmap_get_word
>>>>>        0.97%  [kernel]          [k] arm_smmu_atc_inv_domain.constprop.42
>>>>>        0.91%  [kernel]          [k] fput_many
>>>>>        0.88%  [kernel]          [k] __arm_lpae_map
>>>>>

Hi Will, Robin,

I'm just getting around to looking at this topic again. Here's the current
picture for my NVMe test:

perf top -C 0 *

  Samples: 808 of event 'cycles:ppp', Event count (approx.): 469909024
  Overhead  Shared Object  Symbol
    75.91%  [kernel]       [k] arm_smmu_cmdq_issue_cmdlist
     3.28%  [kernel]       [k] arm_smmu_tlb_inv_range
     2.42%  [kernel]       [k] arm_smmu_atc_inv_domain.constprop.49
     2.35%  [kernel]       [k] _raw_spin_unlock_irqrestore
     1.32%  [kernel]       [k] __arm_smmu_cmdq_poll_set_valid_map.isra.41
     1.20%  [kernel]       [k] aio_complete_rw
     0.96%  [kernel]       [k] enqueue_task_fair
     0.93%  [kernel]       [k] gic_handle_irq
     0.86%  [kernel]       [k] _raw_spin_lock_irqsave
     0.72%  [kernel]       [k] put_reqs_available
     0.72%  [kernel]       [k] sbitmap_queue_clear

* Only certain CPUs run the DMA unmap in my scenario, cpu0 being one of
them.

My colleague Barry has similar findings in some other scenarios.

So we tried the latest perf NMI support WIP patches and noticed a few
hotspots (see
https://raw.githubusercontent.com/hisilicon/kernel-dev/fee69c8ca3784b9dd3912703cfcd4985a00f6bbb/perf%20annotate
and
https://raw.githubusercontent.com/hisilicon/kernel-dev/fee69c8ca3784b9dd3912703cfcd4985a00f6bbb/report.txt)
when running some NVMe traffic:

- the initial cmpxchg to get a place in the queue (sketched after this list)
	- as more CPUs get involved, we start failing at an exponential rate

  arm_smmu_cmdq_issue_cmdlist():
       0.00 :   ffff8000107a3500:       cas     x4, x2, [x27]
      26.52 :   ffff8000107a3504:       mov     x0, x4

- the queue locking
- polling cmd_sync
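
For context, the contended step is the lock-free reservation of
command-queue space. Below is a stripped-down userspace sketch of the
pattern only (the types, helper names and packing details are invented
for illustration; the real logic lives in arm_smmu_cmdq_issue_cmdlist()):

    #include <stdatomic.h>
    #include <stdint.h>

    /* Illustrative type: prod and cons packed into one 64-bit word,
     * in the spirit of the driver's llq word. */
    struct cmdq_llq { _Atomic uint64_t val; };

    /* Hypothetical stand-ins for the real space/prod helpers. */
    static uint64_t inc_prod(uint64_t old, unsigned int n) { return old + n; }
    static int has_space(uint64_t val, unsigned int n)
    { (void)val; (void)n; return 1; }

    /* Every CPU races on the same cacheline here: one winner per
     * round, so the retry rate climbs steeply as more CPUs join,
     * which matches the 'cas' hotspot in the annotate output above. */
    static uint64_t reserve_space(struct cmdq_llq *llq, unsigned int n)
    {
    	uint64_t old = atomic_load_explicit(&llq->val, memory_order_relaxed);

    	for (;;) {
    		while (!has_space(old, n))
    			old = atomic_load_explicit(&llq->val,
    						   memory_order_relaxed);

    		uint64_t new = inc_prod(old, n);
    		if (atomic_compare_exchange_weak_explicit(&llq->val,
    							  &old, new,
    							  memory_order_relaxed,
    							  memory_order_relaxed))
    			return new;	/* we own the reserved slots */
    		/* Failure refreshed 'old'; loop and try again. */
    	}
    }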

Some ideas to optimise:

a. initial cmpxchg
This cmpxchg could be considered unfair. In addition, with all the
contention on arm_smmu_cmdq.q, that cacheline gets pinged around the
system constantly.
Maybe we could implement something similar to queued/ticketed spinlocks:
after the initial cmpxchg fails, a CPU spins on its own copy of
arm_smmu_cmdq.q, is released by its leader, and in turn releases
subsequent followers. A rough sketch of that queueing idea follows.
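
As a very rough illustration of the handover, here is a minimal
MCS-style queue in userspace C. This is just the classic queued-lock
shape, not SMMU code; all names are invented, and integrating it with
the llq reservation is the hard part left unaddressed:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* Each CPU spins on its own node (its own cacheline), so the
     * shared tail pointer is only touched once per waiter. */
    struct cmdq_waiter {
    	_Atomic(struct cmdq_waiter *) next;
    	_Atomic bool granted;
    } __attribute__((aligned(64)));

    static _Atomic(struct cmdq_waiter *) tail;

    static void wait_turn(struct cmdq_waiter *me)
    {
    	atomic_store_explicit(&me->next, NULL, memory_order_relaxed);
    	atomic_store_explicit(&me->granted, false, memory_order_relaxed);

    	struct cmdq_waiter *prev =
    		atomic_exchange_explicit(&tail, me, memory_order_acq_rel);
    	if (!prev)
    		return;	/* queue was empty: we are the leader */

    	atomic_store_explicit(&prev->next, me, memory_order_release);
    	while (!atomic_load_explicit(&me->granted, memory_order_acquire))
    		;	/* spin on our own cacheline only */
    }

    static void pass_turn(struct cmdq_waiter *me)
    {
    	struct cmdq_waiter *next =
    		atomic_load_explicit(&me->next, memory_order_acquire);
    	struct cmdq_waiter *expect = me;

    	if (!next) {
    		/* No successor visible: try to reset tail to empty. */
    		if (atomic_compare_exchange_strong_explicit(&tail,
    						&expect, NULL,
    						memory_order_acq_rel,
    						memory_order_acquire))
    			return;
    		/* A successor is enqueueing; wait for it to link in. */
    		while (!(next = atomic_load_explicit(&me->next,
    						memory_order_acquire)))
    			;
    	}
    	atomic_store_explicit(&next->granted, true, memory_order_release);
    }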

b. Drop the queue-full checking in certain circumstances
If the queue can never fill in theory, then stop checking for queue-full
(or similar). This should also help with problem a.: the less time spent
between cmpxchg attempts, the less chance of failing (since we check
available queue space between cmpxchg attempts).

So if cmdq depth > nr_available_cpus * (max batch size + 1) AND we
always issue a cmd_sync for a batch (regardless of whether one was
requested), then we should never fill the queue (I think). A sketch of
the guard is below.
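
Something like the following one-off check at probe time (names are
hypothetical, purely to pin down the arithmetic):

    #include <stdbool.h>

    /* Hypothetical probe-time check: if every CPU issuing a maximal
     * batch plus its forced CMD_SYNC still fits in the command queue,
     * queue-full can never happen, so the hot path may skip the
     * space check entirely. */
    static bool cmdq_can_skip_full_check(unsigned int cmdq_depth,
    				     unsigned int nr_cpus,
    				     unsigned int max_batch)
    {
    	/* +1 accounts for the CMD_SYNC we would now always append. */
    	return cmdq_depth > nr_cpus * (max_batch + 1);
    }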

c. Don't do queue locking in certain circumstances
If we implement (and support) b. and support MSI polling, then I don't 
think that this is required.

d. A couple of more minor ideas: move forward as soon as the "owner"
stops gathering, to reduce the time spent advancing prod and hopefully
the cmd_sync polling time; and use a smaller word size for the
valid-bitmap operations, as 32-bit atomic operations may be more
efficient overall than 64-bit ones (the valid range to check is mostly
< 16 bits from my observation). A rough sketch of the 32-bit polling
follows.
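
Illustrative userspace sketch only (not the driver's actual
__arm_smmu_cmdq_poll_set_valid_map(); the map layout and names are
assumptions for the example):

    #include <stdatomic.h>
    #include <stdint.h>

    /* Poll a span of the valid bitmap in 32-bit words. If the span is
     * usually < 16 bits, a single 32-bit atomic load often covers it,
     * where 64-bit operations would be used today. */
    static void poll_valid_map32(const _Atomic uint32_t *map,
    			     unsigned int start, unsigned int end)
    {
    	while (start < end) {
    		unsigned int word = start / 32;
    		unsigned int bit  = start % 32;
    		unsigned int span = (end - start < 32 - bit) ?
    				    end - start : 32 - bit;
    		uint32_t mask = (span == 32 ? ~0u
    					    : ((1u << span) - 1)) << bit;

    		/* Wait until every bit in this slice reads as valid. */
    		while ((atomic_load_explicit(&map[word],
    					     memory_order_acquire)
    			& mask) != mask)
    			;
    		start += span;
    	}
    }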

Let me know your thoughts or any other ideas.

Thanks,
John

