From: Lu Baolu <baolu.lu@linux.intel.com>
To: Chuck Lever <chuck.lever@oracle.com>,
	Robin Murphy <robin.murphy@arm.com>
Cc: "isaacm@codeaurora.org" <isaacm@codeaurora.org>,
	"iommu@lists.linux-foundation.org"
	<iommu@lists.linux-foundation.org>,
	"will@kernel.org" <will@kernel.org>
Subject: Re: [PATCH RFC 0/9] Possible set of VT-d optimizations
Date: Fri, 29 Jan 2021 19:54:51 +0800
Message-ID: <bc2e60cf-b08c-bced-aef0-5ecc470328c7@linux.intel.com>
In-Reply-To: <C8B06FC7-F426-4008-8A71-E61895368771@oracle.com>

On 2021/1/28 22:52, Chuck Lever wrote:
> 
> 
>> On Jan 28, 2021, at 8:59 AM, Robin Murphy <robin.murphy@arm.com> wrote:
>>
>> On 2021-01-27 20:00, Chuck Lever wrote:
>>> Hi-
>>> This collection of patches seems to get the best throughput results
>>> so far. The NFS WRITE result is fully restored, and the NFS READ
>>> result is very close to fully restored.
>>> 	Children see throughput for 12 initial writers  = 5008474.03 kB/sec
>>> 	Parent sees throughput for 12 initial writers   = 4996927.80 kB/sec
>>> 	Min throughput per process                      = 416956.88 kB/sec
>>> 	Max throughput per process                      = 417910.22 kB/sec
>>> 	Avg throughput per process                      = 417372.84 kB/sec
>>> 	Min xfer                                        = 1046272.00 kB
>>> 	CPU Utilization: Wall time    2.515    CPU time    1.996    CPU utilization  79.37 %
>>> 	Children see throughput for 12 rewriters        = 5020584.59 kB/sec
>>> 	Parent sees throughput for 12 rewriters         = 5012539.29 kB/sec
>>> 	Min throughput per process                      = 417799.00 kB/sec
>>> 	Max throughput per process                      = 419082.22 kB/sec
>>> 	Avg throughput per process                      = 418382.05 kB/sec
>>> 	Min xfer                                        = 1046528.00 kB
>>> 	CPU utilization: Wall time    2.507    CPU time    2.024    CPU utilization  80.73 %
>>> 	Children see throughput for 12 readers          = 5805484.25 kB/sec
>>> 	Parent sees throughput for 12 readers           = 5799535.68 kB/sec
>>> 	Min throughput per process                      = 482888.16 kB/sec
>>> 	Max throughput per process                      = 484444.16 kB/sec
>>> 	Avg throughput per process                      = 483790.35 kB/sec
>>> 	Min xfer                                        = 1045760.00 kB
>>> 	CPU utilization: Wall time    2.167    CPU time    1.964    CPU utilization  90.63 %
>>> 	Children see throughput for 12 re-readers       = 5812227.16 kB/sec
>>> 	Parent sees throughput for 12 re-readers        = 5803793.06 kB/sec
>>> 	Min throughput per process                      = 483242.97 kB/sec
>>> 	Max throughput per process                      = 485724.41 kB/sec
>>> 	Avg throughput per process                      = 484352.26 kB/sec
>>> 	Min xfer                                        = 1043456.00 kB
>>> 	CPU utilization: Wall time    2.161    CPU time    1.976    CPU utilization  91.45 %
>>> I've included a simple-minded implementation of a map_sg op for
>>> the Intel IOMMU. This is nothing more than a copy of the loop in
>>> __iommu_map_sg() with the call to __iommu_map() replaced with a
>>> call to intel_iommu_map().
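>>> Roughly, the op amounts to the following (a hand-waved sketch, not the
>>> actual patch 9/9; unlike the real __iommu_map_sg() loop it also skips
>>> the merging of physically contiguous segments):
>>>
>>> 	static size_t intel_iommu_map_sg(struct iommu_domain *domain,
>>> 					 unsigned long iova,
>>> 					 struct scatterlist *sglist,
>>> 					 unsigned int nents, int prot,
>>> 					 gfp_t gfp)
>>> 	{
>>> 		struct scatterlist *sg;
>>> 		size_t mapped = 0;
>>> 		unsigned int i;
>>>
>>> 		for_each_sg(sglist, sg, nents, i) {
>>> 			/* Call the driver directly per segment instead
>>> 			 * of going through __iommu_map(). */
>>> 			if (intel_iommu_map(domain, iova + mapped,
>>> 					    sg_phys(sg), sg->length,
>>> 					    prot, gfp))
>>> 				break;
>>> 			mapped += sg->length;
>>> 		}
>>> 		return mapped;
>>> 	}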
>>
>> ...which is the main reason I continue to strongly dislike patches #4-#9 (#3 definitely seems to make sense either way, now that #1 and #2 are going to land). If a common operation is worth optimising anywhere, then it deserves optimising everywhere, otherwise we end up with a dozen diverging copies of essentially the same code - particularly when the driver-specific functionality *is* already in the drivers, so what gets duplicated is solely the "generic" part.
> 
> I don't disagree with that assessment, but I don't immediately see an
> alternative API arrangement that would be more successful in the short
> term. If 4/9 - 9/9 are not acceptable, then the responsible thing to
> do would be to revert:
> 
>   - 58a8bb39490d ("iommu/vt-d: Cleanup after converting to dma-iommu ops")
>   - c588072bba6b ("iommu/vt-d: Convert intel iommu driver to the iommu ops")
> 
> for v5.11, work out the proper API design, and then try the VT-d conversion
> again.

Probably we could introduce an iommu_ops->map/unmap_range() callback and
let the IOMMU driver either keep the "one call per page" map() or simply
map a whole range in its own loop.
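
Something along these lines, just to sketch the idea (the callback name
and signature below are invented for illustration; no such member exists
in iommu_ops today):

	/* Hypothetical per-driver hook: map one physically contiguous range. */
	int (*map_range)(struct iommu_domain *domain, unsigned long iova,
			 phys_addr_t paddr, size_t size, int prot, gfp_t gfp);

	/* Core-side helper: prefer the range callback when the driver
	 * provides one, otherwise fall back to the existing page-by-page
	 * __iommu_map() path. */
	static int __iommu_map_range(struct iommu_domain *domain,
				     unsigned long iova, phys_addr_t paddr,
				     size_t size, int prot, gfp_t gfp)
	{
		const struct iommu_ops *ops = domain->ops;

		if (ops->map_range)
			return ops->map_range(domain, iova, paddr, size,
					      prot, gfp);

		return __iommu_map(domain, iova, paddr, size, prot, gfp);
	}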

Best regards,
baolu

> 
> IMHO.
> 
> 
>> And if there's justification for pushing iommu_map_sg() entirely into drivers, then it's verging on self-contradictory not to do the same for iommu_map() and iommu_unmap(). Some IOMMU drivers - mainly intel-iommu, as it happens - are already implementing hacks around the "one call per page" interface being inherently inefficient, so the logical thing to do here is take a step back and reconsider the fundamental design of the whole map/unmap interface. Implementing hacks on top of hacks to make particular things faster on particular systems that particular people care about is not going to do us any favours in the long run.
>>
>> As it stands, I can easily see a weird anti-pattern emerging where people start adding code to fake up scatterlists in random drivers because they see dma_map_sg() performing paradoxically better than dma_map_page().
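>>
>> The sort of thing I mean, purely as an illustration: a driver wrapping
>> a single page in a one-entry scatterlist just to reach dma_map_sg(),
>> where dma_map_page() is the natural call:
>>
>> 	struct scatterlist sg;
>> 	dma_addr_t dma_addr;
>>
>> 	sg_init_table(&sg, 1);
>> 	sg_set_page(&sg, page, size, offset);
>> 	if (!dma_map_sg(dev, &sg, 1, DMA_TO_DEVICE))
>> 		goto err;		/* mapping failed */
>> 	dma_addr = sg_dma_address(&sg);
>>
>> 	/* versus the straightforward: */
>> 	dma_addr = dma_map_page(dev, page, offset, size, DMA_TO_DEVICE);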
>>
>> Robin.
>>
>>> ---
>>> Chuck Lever (1):
>>>        iommu/vt-d: Introduce map_sg() for Intel IOMMUs
>>> Isaac J. Manjarres (5):
>>>        iommu/io-pgtable: Introduce map_sg() as a page table op
>>>        iommu/io-pgtable-arm: Hook up map_sg()
>>>        iommu/io-pgtable-arm-v7s: Hook up map_sg()
>>>        iommu: Introduce map_sg() as an IOMMU op for IOMMU drivers
>>>        iommu/arm-smmu: Hook up map_sg()
>>> Lu Baolu (1):
>>>        iommu/vt-d: Add iotlb_sync_map callback
>>> Yong Wu (2):
>>>        iommu: Move iotlb_sync_map out from __iommu_map
>>>        iommu: Add iova and size as parameters in iotlb_sync_map
>>>   drivers/iommu/arm/arm-smmu/arm-smmu.c |  19 ++++
>>>   drivers/iommu/intel/iommu.c           | 131 ++++++++++++++++++++------
>>>   drivers/iommu/io-pgtable-arm-v7s.c    |  90 ++++++++++++++++++
>>>   drivers/iommu/io-pgtable-arm.c        |  86 +++++++++++++++++
>>>   drivers/iommu/iommu.c                 |  47 +++++++--
>>>   drivers/iommu/tegra-gart.c            |   7 +-
>>>   include/linux/iommu.h                 |  16 +++-
>>>   7 files changed, 353 insertions(+), 43 deletions(-)
>>> --
>>> Chuck Lever
> 
> --
> Chuck Lever
> 
> 
> 
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu

Thread overview: 22+ messages
2021-01-27 20:00 [PATCH RFC 0/9] Possible set of VT-d optimizations Chuck Lever
2021-01-27 20:00 ` [PATCH RFC 1/9] iommu: Move iotlb_sync_map out from __iommu_map Chuck Lever
2021-01-28  2:40   ` Lu Baolu
2021-01-27 20:00 ` [PATCH RFC 2/9] iommu: Add iova and size as parameters in iotlb_sync_map Chuck Lever
2021-01-28  2:44   ` Lu Baolu
2021-01-28 12:51   ` Joerg Roedel
2021-01-28 13:19     ` Will Deacon
2021-01-28 13:26       ` Joerg Roedel
2021-01-27 20:00 ` [PATCH RFC 3/9] iommu/vt-d: Add iotlb_sync_map callback Chuck Lever
2021-01-27 20:00 ` [PATCH RFC 4/9] iommu/io-pgtable: Introduce map_sg() as a page table op Chuck Lever
2021-01-27 20:00 ` [PATCH RFC 5/9] iommu/io-pgtable-arm: Hook up map_sg() Chuck Lever
2021-01-28 13:53   ` Will Deacon
2021-01-27 20:01 ` [PATCH RFC 6/9] iommu/io-pgtable-arm-v7s: " Chuck Lever
2021-01-27 20:01 ` [PATCH RFC 7/9] iommu: Introduce map_sg() as an IOMMU op for IOMMU drivers Chuck Lever
2021-01-28  2:58   ` Lu Baolu
2021-01-27 20:01 ` [PATCH RFC 8/9] iommu/arm-smmu: Hook up map_sg() Chuck Lever
2021-01-27 20:01 ` [PATCH RFC 9/9] iommu/vt-d: Introduce map_sg() for Intel IOMMUs Chuck Lever
2021-01-28  2:28 ` [PATCH RFC 0/9] Possible set of VT-d optimizations Lu Baolu
2021-01-28 13:59 ` Robin Murphy
2021-01-28 14:52   ` Chuck Lever
2021-01-29 11:54     ` Lu Baolu [this message]
2021-02-01 18:08     ` Chuck Lever
