From: Peter Xu <peterx@redhat.com>
To: Jason Wang <jasowang@redhat.com>
Cc: "Peter Maydell" <peter.maydell@linaro.org>,
	"Yan Zhao" <yan.y.zhao@intel.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"libvir-list@redhat.com" <libvir-list@redhat.com>,
	"Juan Quintela" <quintela@redhat.com>,
	qemu-devel@nongnu.org, "Eugenio Pérez" <eperezma@redhat.com>,
	"Eric Auger" <eric.auger@redhat.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>
Subject: Re: [RFC v2 1/1] memory: Delete assertion in memory_region_unregister_iommu_notifier
Date: Wed, 15 Jul 2020 21:00:05 -0400
Message-ID: <20200716010005.GA535743@xz-x1>
In-Reply-To: <05bb512c-ca0a-e80e-1eed-446e918ad729@redhat.com>

On Mon, Jul 13, 2020 at 12:04:16PM +0800, Jason Wang wrote:
> 
> On 2020/7/10 下午9:30, Peter Xu wrote:
> > On Fri, Jul 10, 2020 at 02:34:11PM +0800, Jason Wang wrote:
> > > On 2020/7/9 下午10:10, Peter Xu wrote:
> > > > On Thu, Jul 09, 2020 at 01:58:33PM +0800, Jason Wang wrote:
> > > > > > > - If we care about performance, it's better to implement the MAP event for
> > > > > > > vhost; otherwise there could be a lot of IOTLB misses.
> > > > > > I feel like these are two things.
> > > > > > 
> > > > > > So far what we are talking about is whether vt-d should have knowledge about
> > > > > > what kind of events one iommu notifier is interested in.  I still think we
> > > > > > should keep this as answered in question 1.
> > > > > > 
> > > > > > The other question is whether we want to switch vhost from UNMAP to MAP/UNMAP
> > > > > > events even without vDMA, so that vhost can establish the mapping even before
> > > > > > IO starts.  IMHO it's doable, but only if the guest runs DPDK workloads.  When
> > > > > > the guest is using dynamic iommu page mappings, I feel like that can be even
> > > > > > slower, because then in the worst case each IO will need two vmexits:
> > > > > > 
> > > > > >      - The first vmexit caused by an invalidation to MAP the page tables, so vhost
> > > > > >        will setup the page table before IO starts
> > > > > > 
> > > > > >      - IO/DMA triggers and completes
> > > > > > 
> > > > > >      - The second vmexit caused by another invalidation to UNMAP the page tables
> > > > > > 
> > > > > > So it seems to be worse than today, when vhost only uses UNMAP: at
> > > > > > least then we only have one vmexit (for the UNMAP).  We'll have a vhost translate()
> > > > > > request from kernel to userspace, but IMHO that's cheaper than the vmexit.
> > > > > Right, but then I would still prefer to have another notifier.
> > > > > 
> > > > > Since vtd_page_walk has nothing to do with the device IOTLB (the IOMMU has a
> > > > > dedicated command for flushing the device IOTLB), while the check for
> > > > > vtd_as_has_map_notifier is used to skip devices which can do demand
> > > > > paging via ATS or in a device-specific way. If we have two different notifiers,
> > > > > vhost will be on the device IOTLB notifier, so we don't need the check at all?
> > > > But we can still have an iommu notifier that only registers for UNMAP even after we
> > > > introduce the dev-iotlb notifier?  We don't want to do the page walk for those either.
> > > > TCG should be the only one so far, but I don't know... maybe there can still be
> > > > new ones?
> > > 
> > > I think you're right. But looking at the code, it looks like the check of
> > > vtd_as_has_map_notifier() is only used in:
> > > 
> > > 1) vtd_iommu_replay()
> > > 2) vtd_iotlb_page_invalidate_notify() (PSI)
> > > 
> > > For the replay, it's expensive anyhow. For PSI, I think it's just about one
> > > or a few mappings, so I'm not sure it will have an obvious performance impact.
> > > 
> > > And I had two questions:
> > > 
> > > 1) The code doesn't check for a MAP notifier on DSI or GI; does this match
> > > what the spec says? (It looks to me like the spec is unclear on this part.)
> > Both DSI/GI should cover maps too?  E.g. vtd_sync_shadow_page_table() in
> > vtd_iotlb_domain_invalidate().
> 
> 
> I meant the code doesn't check whether there's a MAP notifier :)

It's actually checked, because it loops over vtd_as_with_notifiers, and only
MAP notifiers register to that. :)

But I agree with you that it should be cleaner to introduce the dev-iotlb
notifier type.
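
For context, the current filter is just a walk over the notifier flags on the
vt-d address space's IOMMU memory region, roughly like below (paraphrased from
memory, so the exact shape may differ); a dedicated dev-iotlb flag would slot
into the same scheme.  Note that IOMMU_NOTIFIER_DEVIOTLB below is hypothetical,
nothing like that exists yet:

    static bool vtd_as_has_map_notifier(VTDAddressSpace *as)
    {
        IOMMUNotifier *n;

        /* Report whether any notifier on this AS asked for MAP events */
        IOMMU_NOTIFIER_FOREACH(n, &as->iommu) {
            if (n->notifier_flags & IOMMU_NOTIFIER_MAP) {
                return true;
            }
        }

        return false;
    }

    /*
     * With a (hypothetical) IOMMU_NOTIFIER_DEVIOTLB flag, vhost would
     * register with that instead of IOMMU_NOTIFIER_UNMAP, and the page
     * walk could then skip it by flag alone rather than relying on the
     * helper above.
     */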

> 
> 
> > 
> > > 2) For replay(), I don't see other implementations (either spapr or the
> > > generic one) that do unmap (actually they skip unmap explicitly); any
> > > reason for doing this in the Intel IOMMU?
> > I could be wrong, but I'd guess it's because vt-d implemented the caching mode
> > by leveraging the same invalidation structure, so it's harder to make all
> > things right (IOW, we can't clearly distinguish MAP from UNMAP when we receive an
> > invalidation request, because MAP/UNMAP requests look the same).
> > 
> > I didn't check others, but I believe spapr is doing it differently by using
> > some hypercalls to deliver IOMMU map/unmap requests, which seems somewhat close to
> > what virtio-iommu is doing.  Anyway, the point is that if we have explicit MAP/UNMAP
> > from the guest, logically the replay indeed does not need to do any unmap,
> > because we don't need to call replay() on an already existing device but only
> > for e.g. hot plug.
> 
> 
> But this looks like it conflicts with what memory_region_iommu_replay() does: for an
> IOMMU that doesn't have a replay method, it skips UNMAP requests:
> 
>     for (addr = 0; addr < memory_region_size(mr); addr += granularity) {
>         iotlb = imrc->translate(iommu_mr, addr, IOMMU_NONE, n->iommu_idx);
>         if (iotlb.perm != IOMMU_NONE) {
>             n->notify(n, &iotlb);
>         }
>         /* ... rest of the loop elided ... */
>     }
> 
> I guess this generic code has no knowledge of whether the guest has explicit
> MAP/UNMAP. Or does replay imply that the guest doesn't have explicit
> MAP/UNMAP?

I think it matches the hot plug case exactly?  Note that IOMMU_NONE could also
mean the translation does not exist.  So it's actually trying to map everything
that can be translated and then notify().
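
To put it another way: with the generic replay, the notifier only ever sees
entries whose perm is not IOMMU_NONE, so a handler shaped like the sketch below
(illustrative only, not vhost's actual callback) would only ever take its MAP
branch during replay:

    /*
     * Illustrative notifier callback: it tells MAP from UNMAP purely by the
     * permission bits carried in the IOMMUTLBEntry.
     */
    static void example_iommu_notify(IOMMUNotifier *n, IOMMUTLBEntry *entry)
    {
        if (entry->perm == IOMMU_NONE) {
            /* UNMAP: [iova, iova + addr_mask] is no longer valid */
        } else {
            /* MAP: a valid translation to entry->translated_addr */
        }
    }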

> 
> (btw, the code shortcuts memory_region_notify_one(); not sure of the reason)

I think it's simply because memory_region_notify_one() came later. :)
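
(If I remember its behavior correctly, the loop could presumably go through it
today, something like the untested snippet below, since
memory_region_notify_one() already skips entries outside a notifier's
registered range and checks the notifier flags before delivering:)

    /* Hypothetical tweak to the quoted replay loop: go through the common
     * helper instead of calling n->notify() directly */
    if (iotlb.perm != IOMMU_NONE) {
        memory_region_notify_one(n, &iotlb);
    }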

> 
> 
> >   VT-d does not have that clear interface, so VT-d needs to
> > maintain its own mapping structures, and also vt-d is using the same replay &
> > page_walk operations to sync all these structures, which complicates the vt-d
> > replay a bit.  With that, we assume replay() can be called anytime on a device,
> > and we won't notify duplicated MAPs to lower layers like vfio if a range is
> > already mapped.  At the same time, since we'll compare the latest mapping with
> > the one we cached in the iova tree, UNMAP becomes possible too.
> 
> 
> AFAIK vtd_iommu_replay() does a complete UNMAP:
> 
>     /*
>      * The replay can be triggered by either an invalidation or a newly
>      * created entry. No matter what, we release existing mappings
>      * (it means flushing caches for UNMAP-only registers).
>      */
>     vtd_address_space_unmap(vtd_as, n);
> 
> Since it doesn't do any comparison with the iova tree, will this cause
> unnecessary UNMAPs to be sent to VFIO?

I feel like that can be removed now, but it needs some testing...
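
The rough idea would be something like the sketch below.  This is self-contained
pseudo-C rather than the real iova-tree API; CachedMap and replay_one_entry are
made up for illustration, just to show diff-based notification instead of the
unconditional unmap:

    /*
     * Compare the latest guest mapping for a range against what we cached
     * earlier, and only notify the delta.
     */
    typedef struct CachedMap {
        hwaddr iova;
        hwaddr size;
        hwaddr translated_addr;
    } CachedMap;

    static void replay_one_entry(IOMMUNotifier *n, IOMMUTLBEntry *latest,
                                 CachedMap *cached /* NULL if not cached */)
    {
        bool mapped = latest->perm != IOMMU_NONE;

        if (cached && mapped &&
            cached->translated_addr == latest->translated_addr) {
            return; /* Unchanged: don't re-send the MAP to e.g. vfio */
        }

        if (cached) {
            /* Stale or removed: send an UNMAP for the cached range first */
            IOMMUTLBEntry unmap = {
                .iova = cached->iova,
                .addr_mask = cached->size - 1,
                .perm = IOMMU_NONE,
            };
            memory_region_notify_one(n, &unmap);
        }

        if (mapped) {
            /* New or changed translation: send the MAP */
            memory_region_notify_one(n, latest);
        }
    }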

Thanks,

-- 
Peter Xu




Thread overview: 68+ messages
2020-06-26  6:41 [RFC v2 0/1] memory: Delete assertion in memory_region_unregister_iommu_notifier Eugenio Pérez
2020-06-26  6:41 ` [RFC v2 1/1] " Eugenio Pérez
2020-06-26 21:29   ` Peter Xu
2020-06-27  7:26     ` Yan Zhao
2020-06-27 12:57       ` Peter Xu
2020-06-28  1:36         ` Yan Zhao
2020-06-28  7:03     ` Jason Wang
2020-06-28 14:47       ` Peter Xu
2020-06-29  5:51         ` Jason Wang
2020-06-29 13:34           ` Peter Xu
2020-06-30  2:41             ` Jason Wang
2020-06-30  8:29               ` Jason Wang
2020-06-30  9:21                 ` Michael S. Tsirkin
2020-06-30  9:23                   ` Jason Wang
2020-06-30 15:20                     ` Peter Xu
2020-07-01  8:11                       ` Jason Wang
2020-07-01 12:16                         ` Peter Xu
2020-07-01 12:30                           ` Jason Wang
2020-07-01 12:41                             ` Peter Xu
2020-07-02  3:00                               ` Jason Wang
2020-06-30 15:39               ` Peter Xu
2020-07-01  8:09                 ` Jason Wang
2020-07-02  3:01                   ` Jason Wang
2020-07-02 15:45                     ` Peter Xu
2020-07-03  7:24                       ` Jason Wang
2020-07-03 13:03                         ` Peter Xu
2020-07-07  8:03                           ` Jason Wang
2020-07-07 19:54                             ` Peter Xu
2020-07-08  5:42                               ` Jason Wang
2020-07-08 14:16                                 ` Peter Xu
2020-07-09  5:58                                   ` Jason Wang
2020-07-09 14:10                                     ` Peter Xu
2020-07-10  6:34                                       ` Jason Wang
2020-07-10 13:30                                         ` Peter Xu
2020-07-13  4:04                                           ` Jason Wang
2020-07-16  1:00                                             ` Peter Xu [this message]
2020-07-16  2:54                                               ` Jason Wang
2020-07-17 14:18                                                 ` Peter Xu
2020-07-20  4:02                                                   ` Jason Wang
2020-07-20 13:03                                                     ` Peter Xu
2020-07-21  6:20                                                       ` Jason Wang
2020-07-21 15:10                                                         ` Peter Xu
2020-08-03 16:00                         ` Eugenio Pérez
2020-08-04 20:30                           ` Peter Xu
2020-08-05  5:45                             ` Jason Wang
2020-08-11 17:01     ` Eugenio Perez Martin
2020-08-11 17:10       ` Eugenio Perez Martin
2020-06-29 15:05 ` [RFC v2 0/1] " Paolo Bonzini
2020-07-03  7:39   ` Eugenio Perez Martin
2020-07-03 10:10     ` Paolo Bonzini
2020-08-11 17:55 ` [RFC v3 " Eugenio Pérez
2020-08-11 17:55   ` [RFC v3 1/1] memory: Skip bad range assertion if notifier supports arbitrary masks Eugenio Pérez
2020-08-12  2:24     ` Jason Wang
2020-08-12  8:49       ` Eugenio Perez Martin
2020-08-18 14:24         ` Eugenio Perez Martin
2020-08-19  7:15           ` Jason Wang
2020-08-19  8:22             ` Eugenio Perez Martin
2020-08-19  9:36               ` Jason Wang
2020-08-19 15:50             ` Peter Xu
2020-08-20  2:28               ` Jason Wang
2020-08-21 14:12                 ` Peter Xu
2020-09-01  3:05                   ` Jason Wang
2020-09-01 19:35                     ` Peter Xu
2020-09-02  5:13                       ` Jason Wang
2020-08-11 18:10   ` [RFC v3 0/1] memory: Delete assertion in memory_region_unregister_iommu_notifier Eugenio Perez Martin
2020-08-11 19:27     ` Peter Xu
2020-08-12 14:33       ` Eugenio Perez Martin
2020-08-12 21:12         ` Peter Xu
