From: Niklas Schnelle <schnelle@linux.ibm.com>
To: Robin Murphy <robin.murphy@arm.com>,
	Pierre Morel <pmorel@linux.ibm.com>,
	Matthew Rosato <mjrosato@linux.ibm.com>,
	iommu@lists.linux.dev
Cc: linux-s390@vger.kernel.org, borntraeger@linux.ibm.com,
	hca@linux.ibm.com, gor@linux.ibm.com,
	gerald.schaefer@linux.ibm.com, agordeev@linux.ibm.com,
	svens@linux.ibm.com, joro@8bytes.org, will@kernel.org,
	jgg@nvidia.com, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v4 1/2] iommu/s390: Fix race with release_device ops
Date: Thu, 01 Sep 2022 16:17:05 +0200
Message-ID: <ec7cbb9963f26c4462f58c25f2c17c99a45ad766.camel@linux.ibm.com>
In-Reply-To: <8b561ad3023fc146ba0779cbd8fff14d6409c6aa.camel@linux.ibm.com>

---8<---
> > > 
> > > I do have a working prototype of using the common implementation but
> > > the big problem that I'm still searching a solution for is its
> > > performance with a virtualized IOMMU where IOTLB flushes (RPCIT on
> > > s390) are used for shadowing and are expensive and serialized. The
> > > optimization we used so far for unmap, only doing one global IOTLB
> > > flush once we run out of IOVA space, is just too much better in that
> > > scenario to just ignore. As one data point, on an NVMe I get about
> > > _twice_ the IOPS when using our existing scheme compared to strict
> > > mode. Which makes sense as IOTLB flushes are known as the bottleneck
> > > and optimizing unmap like that reduces them by almost half. Queued
> > > flushing is still much worse likely due to serialization of the
> > > shadowing, though again it works great on LPAR. To make sure it's not
> > > due to some bug in the IOMMU driver I even tried converting our
> > > existing DMA driver to layer on top of the IOMMU driver with the same
> > > result.
> > 
> > FWIW, can you approximate the same behaviour by just making IOVA_FQ_SIZE 
> > and IOVA_FQ_TIMEOUT really big, and deferring your zpci_refresh_trans() 
> > hook from .unmap to .flush_iotlb_all when in non-strict mode?
> > 
> > I'm not against the idea of trying to support this mode of operation 
> > better in the common code, since it seems like it could potentially be 
> > useful for *any* virtualised scenario where trapping to invalidate is 
> > expensive and the user is happy to trade off the additional address 
> > space/memory overhead (and even greater loss of memory protection) 
> > against that.
> > 
> > Robin.
> 
> Ah thanks for reminding me. I had tried that earlier but quickly ran
> into the size limit of per-CPU allocations. This time I turned the
> "struct iova_fq_entry entries" member into a pointer and allocated
> that with vmalloc(). Also, thankfully, ops->flush_iotlb_all(),
> iommu_iotlb_sync(), and iommu_iotlb_sync_map() already perfectly
> match our needs.
> 
> Okay, this is _very_ interesting. With the above, cranking
> IOVA_FQ_SIZE all the way up to 32768 and IOVA_FQ_TIMEOUT to 4000 ms,
> I can get to about 91% of the performance of our scheme (layered on
> the IOMMU API). That also seems to be the limit. I guess there is
> also more overhead than with our bitmap IOVA allocation, which needs
> no bookkeeping besides a "lazily unmapped" bit per page. With a more
> sane IOVA_FQ_SIZE of 8192 and a 100 ms timeout I still get about 76%
> of the performance.
> 
> Interestingly, with the above changes but the default values for
> IOVA_FQ_SIZE/IOVA_FQ_TIMEOUT, things are much worse than even strict
> mode (~50%): I get less than 8% of the IOPS with this NVMe.
> 
> So yeah, it seems you're right: one can largely emulate our scheme
> with this. I do wonder if we could go further and add a "flush on
> running out of IOVAs" domain type with acceptable changes. My rough
> idea would be to collect lazily freed IOVAs in the same data
> structure as the free IOVAs; once we run out of those, a single
> global IOTLB flush turns the lazily freed IOVAs into the new free
> IOVAs. With that, the global reset would be even cheaper than with
> our bitmaps.

Ok disregard the last part, that's obviously not how the IOVA
allocation works. Will have to take an actual look.

> For a generic case one would of course also need to track the
> gather->freelist, which we don't use on s390; but e.g. virtio-iommu
> doesn't seem to use it either. What do you think?
> 
Thread overview: 24+ messages
2022-08-31 20:12 [PATCH v4 0/2] iommu/s390: fixes related to repeat attach_dev calls Matthew Rosato
2022-08-31 20:12 ` [PATCH v4 1/2] iommu/s390: Fix race with release_device ops Matthew Rosato
2022-09-01  7:56   ` Pierre Morel
2022-09-01  9:37     ` Niklas Schnelle
2022-09-01 11:01       ` Robin Murphy
2022-09-01 13:42         ` Niklas Schnelle
2022-09-01 14:17           ` Niklas Schnelle [this message]
2022-09-01 14:29           ` Robin Murphy
2022-09-01 14:34             ` Jason Gunthorpe
2022-09-01 15:03               ` Robin Murphy
2022-09-01 15:49                 ` Jason Gunthorpe
2022-09-01 17:00                   ` Robin Murphy
2022-09-01 20:28       ` Matthew Rosato
2022-09-02  7:49         ` Niklas Schnelle
2022-09-01 10:25   ` Robin Murphy
2022-09-01 16:14     ` Matthew Rosato
2022-09-01 20:37       ` Jason Gunthorpe
2022-09-02 17:11         ` Matthew Rosato
2022-09-02 17:21           ` Jason Gunthorpe
2022-09-02 18:20             ` Matthew Rosato
2022-09-05  9:46             ` Robin Murphy
2022-09-06 13:36               ` Jason Gunthorpe
2022-09-02 10:48       ` Robin Murphy
2022-08-31 20:12 ` [PATCH v4 2/2] iommu/s390: fix leak of s390_domain_device Matthew Rosato
