QEMU-Devel Archive on lore.kernel.org
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Jason Wang <jasowang@redhat.com>
Cc: "Peter Maydell" <peter.maydell@linaro.org>,
	"Yan Zhao" <yan.y.zhao@intel.com>,
	"Juan Quintela" <quintela@redhat.com>,
	qemu-devel@nongnu.org, "Peter Xu" <peterx@redhat.com>,
	"Eugenio Pérez" <eperezma@redhat.com>,
	"Eric Auger" <eric.auger@redhat.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>
Subject: Re: [RFC v2 1/1] memory: Delete assertion in memory_region_unregister_iommu_notifier
Date: Tue, 30 Jun 2020 05:21:58 -0400
Message-ID: <20200630052148-mutt-send-email-mst@kernel.org>
In-Reply-To: <1b4eaaaf-c2ab-0da8-afb4-1b7b4221e6cf@redhat.com>

On Tue, Jun 30, 2020 at 04:29:19PM +0800, Jason Wang wrote:
> 
> > On 2020/6/30 10:41 AM, Jason Wang wrote:
> > 
> > > On 2020/6/29 9:34 PM, Peter Xu wrote:
> > > On Mon, Jun 29, 2020 at 01:51:47PM +0800, Jason Wang wrote:
> > > > > On 2020/6/28 10:47 PM, Peter Xu wrote:
> > > > > On Sun, Jun 28, 2020 at 03:03:41PM +0800, Jason Wang wrote:
> > > > > > On 2020/6/27 5:29 AM, Peter Xu wrote:
> > > > > > > Hi, Eugenio,
> > > > > > > 
> > > > > > > (CCing Eric, Yan and Michael too)
> > > > > > > 
> > > > > > > On Fri, Jun 26, 2020 at 08:41:22AM +0200, Eugenio Pérez wrote:
> > > > > > > > diff --git a/memory.c b/memory.c
> > > > > > > > index 2f15a4b250..7f789710d2 100644
> > > > > > > > --- a/memory.c
> > > > > > > > +++ b/memory.c
> > > > > > > > @@ -1915,8 +1915,6 @@ void memory_region_notify_one(IOMMUNotifier *notifier,
> > > > > > > >             return;
> > > > > > > >         }
> > > > > > > > -    assert(entry->iova >= notifier->start && entry_end <= notifier->end);
> > > > > > > I can understand that removing the assertion should solve
> > > > > > > the issue, however imho the major issue is not this single
> > > > > > > assertion but the whole addr_mask issue behind it with
> > > > > > > virtio...
> > > > > > I don't get it here; it looks to me the range comes from the
> > > > > > guest IOMMU drivers.
> > > > > Yes.  Note that I didn't mean that it's a problem in virtio;
> > > > > it's just that virtio is the only one I know of that would like
> > > > > to support an arbitrary address range for the translated region.
> > > > > I don't know about tcg, but vfio should still need some kind of
> > > > > page alignment in both the address and the addr_mask.  We have
> > > > > that assumption too across the memory core when we do
> > > > > translations.
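
As a concrete illustration of the alignment assumption mentioned above -
both the iova and the addr_mask are expected to describe a size-aligned,
power-of-two range of at least one page - here is a minimal standalone
sketch (illustrative types and page size, not actual QEMU code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t hwaddr;

#define TARGET_PAGE_BITS 12  /* illustrative: 4K pages */

/*
 * Sketch of the implicit assumption: an IOTLB entry covers a
 * power-of-two sized, size-aligned range at least one page big.
 * addr_mask is the low mask, i.e. size - 1.
 */
static bool iotlb_entry_is_page_aligned(hwaddr iova, hwaddr addr_mask)
{
    hwaddr size = addr_mask + 1;

    /* size must be a power of two... */
    if (size == 0 || (size & addr_mask) != 0) {
        return false;
    }
    /* ...at least a page... */
    if (size < ((hwaddr)1 << TARGET_PAGE_BITS)) {
        return false;
    }
    /* ...and iova must be aligned to that size. */
    return (iova & addr_mask) == 0;
}
```

With a 4K page size, an entry like iova=0x1000/addr_mask=0xfff passes,
while a sub-page mask such as 0x7ff - the kind of arbitrary range virtio
may produce - does not.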
> > > > 
> > > > Right but it looks to me the issue is not the alignment.
> > > > 
> > > > 
> > > > > A further cause of the issue is the MSI region when the vIOMMU
> > > > > is enabled - currently we implement the interrupt region using
> > > > > another memory region, so it splits the whole DMA region into
> > > > > two parts.  That's really a clean approach to the IR
> > > > > implementation, however it's also a burden for the invalidation
> > > > > part, because then we'll need to handle things like this when
> > > > > the listened range is not page aligned at all (neither
> > > > > 0-0xfedfffff nor 0xfef00000-MAX).  Without the IR region (so the
> > > > > whole iommu address range would be a single FlatRange),
> > > > 
> > > > Is this a bug? I remember that at least for vtd, it won't do any
> > > > DMAR on the interrupt address range
> > > I don't think it's a bug; at least it's working as I understand
> > > it...  that interrupt range is using an IR region, which is why I
> > > said the IR region splits the DMAR region into two pieces, so we
> > > have two FlatRanges for the same IOMMUMemoryRegion.
> > 
> > 
> > I didn't check the qemu code, but if "a single FlatRange" means
> > 0xFEEx_xxxx is subject to DMA remapping, the OS needs to set up a
> > passthrough mapping for that range in order to get MSI to work. This
> > is not what the vtd spec says:
> > 
> > """
> > 
> > 3.14 Handling Requests to Interrupt Address Range
> > 
> > Requests without PASID to address range 0xFEEx_xxxx are treated as
> > potential interrupt requests and are not subjected to DMA remapping
> > (even if translation structures specify a mapping for this
> > range). Instead, remapping hardware can be enabled to subject such
> > interrupt requests to interrupt remapping.
> > 
> > """
> > 
> > My understanding is vtd won't do any DMA translation on 0xFEEx_xxxx even
> > if IR is not enabled.
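
For reference, carving out that 0xFEEx_xxxx range boils down to a simple
range check; a standalone sketch (the constants follow the spec's
interrupt address range, the helper name mirrors the one quoted later in
this message - treat it as illustrative rather than the exact vtd code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t hwaddr;

/* The 0xFEEx_xxxx interrupt address range from the VT-d spec. */
#define VTD_INTERRUPT_ADDR_FIRST 0xfee00000ULL
#define VTD_INTERRUPT_ADDR_LAST  0xfeefffffULL

/* Sketch of the check that carves out the interrupt address range. */
static bool vtd_is_interrupt_addr(hwaddr addr)
{
    return addr >= VTD_INTERRUPT_ADDR_FIRST &&
           addr <= VTD_INTERRUPT_ADDR_LAST;
}
```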
> 
> 
> Ok, we already have a dedicated mr for interrupts:
> 
> memory_region_add_subregion_overlap(MEMORY_REGION(&vtd_dev_as->iommu),
>                                     VTD_INTERRUPT_ADDR_FIRST,
>                                     &vtd_dev_as->iommu_ir, 1);
> 
> So it should be fine. I guess the reason I'm asking is that I thought
> "IR" meant "Interrupt Remapping", but in fact it means "Interrupt Region"?
> 
> But I'm still not clear about the invalidation part for the interrupt
> region; maybe you can elaborate a little more on this.
> 
> Btw, I think a guest can trigger the assert in vtd_do_iommu_translate()
> if we teach vhost to DMA to that region:


Why would we want to?

> 
>     /*
>      * We have standalone memory region for interrupt addresses, we
>      * should never receive translation requests in this region.
>      */
>     assert(!vtd_is_interrupt_addr(addr));
> 
> Would it be better to return false here? (We can work on the fix for
> vhost, but it will not be trivial.)
> 
> Thanks
> 
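
For what it's worth, returning a failed translation instead of asserting
could look roughly like the following - a sketch with simplified stand-in
types (the IOMMUTLBEntry here is a pared-down illustration, not QEMU's
real struct), where perm == IOMMU_NONE tells the caller the access is
refused:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

typedef uint64_t hwaddr;

/* Simplified stand-ins for QEMU's types, for illustration only. */
typedef enum {
    IOMMU_NONE = 0,
    IOMMU_RO   = 1,
    IOMMU_WO   = 2,
    IOMMU_RW   = 3,
} IOMMUAccessFlags;

typedef struct {
    hwaddr iova;
    hwaddr translated_addr;
    hwaddr addr_mask;
    IOMMUAccessFlags perm;
} IOMMUTLBEntry;

#define VTD_INTERRUPT_ADDR_FIRST 0xfee00000ULL
#define VTD_INTERRUPT_ADDR_LAST  0xfeefffffULL

static bool vtd_is_interrupt_addr(hwaddr addr)
{
    return addr >= VTD_INTERRUPT_ADDR_FIRST &&
           addr <= VTD_INTERRUPT_ADDR_LAST;
}

/*
 * Sketch: fail the translation gracefully rather than assert(), so a
 * misbehaving guest (or a vhost taught to DMA there) cannot crash QEMU.
 */
static bool vtd_translate_checked(hwaddr addr, IOMMUTLBEntry *entry)
{
    memset(entry, 0, sizeof(*entry));
    if (vtd_is_interrupt_addr(addr)) {
        entry->perm = IOMMU_NONE;   /* translation refused */
        return false;
    }
    /* ... the real page-table walk would go here ... */
    entry->iova = addr & ~0xfffULL;
    entry->translated_addr = addr & ~0xfffULL;  /* identity, for the sketch */
    entry->addr_mask = 0xfff;
    entry->perm = IOMMU_RW;
    return true;
}
```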


Thread overview: 48+ messages
2020-06-26  6:41 [RFC v2 0/1] " Eugenio Pérez
2020-06-26  6:41 ` [RFC v2 1/1] " Eugenio Pérez
2020-06-26 21:29   ` Peter Xu
2020-06-27  7:26     ` Yan Zhao
2020-06-27 12:57       ` Peter Xu
2020-06-28  1:36         ` Yan Zhao
2020-06-28  7:03     ` Jason Wang
2020-06-28 14:47       ` Peter Xu
2020-06-29  5:51         ` Jason Wang
2020-06-29 13:34           ` Peter Xu
2020-06-30  2:41             ` Jason Wang
2020-06-30  8:29               ` Jason Wang
2020-06-30  9:21                 ` Michael S. Tsirkin [this message]
2020-06-30  9:23                   ` Jason Wang
2020-06-30 15:20                     ` Peter Xu
2020-07-01  8:11                       ` Jason Wang
2020-07-01 12:16                         ` Peter Xu
2020-07-01 12:30                           ` Jason Wang
2020-07-01 12:41                             ` Peter Xu
2020-07-02  3:00                               ` Jason Wang
2020-06-30 15:39               ` Peter Xu
2020-07-01  8:09                 ` Jason Wang
2020-07-02  3:01                   ` Jason Wang
2020-07-02 15:45                     ` Peter Xu
2020-07-03  7:24                       ` Jason Wang
2020-07-03 13:03                         ` Peter Xu
2020-07-07  8:03                           ` Jason Wang
2020-07-07 19:54                             ` Peter Xu
2020-07-08  5:42                               ` Jason Wang
2020-07-08 14:16                                 ` Peter Xu
2020-07-09  5:58                                   ` Jason Wang
2020-07-09 14:10                                     ` Peter Xu
2020-07-10  6:34                                       ` Jason Wang
2020-07-10 13:30                                         ` Peter Xu
2020-07-13  4:04                                           ` Jason Wang
2020-07-16  1:00                                             ` Peter Xu
2020-07-16  2:54                                               ` Jason Wang
2020-07-17 14:18                                                 ` Peter Xu
2020-07-20  4:02                                                   ` Jason Wang
2020-07-20 13:03                                                     ` Peter Xu
2020-07-21  6:20                                                       ` Jason Wang
2020-07-21 15:10                                                         ` Peter Xu
2020-08-03 16:00                         ` Eugenio Pérez
2020-08-04 20:30                           ` Peter Xu
2020-08-05  5:45                             ` Jason Wang
2020-06-29 15:05 ` [RFC v2 0/1] " Paolo Bonzini
2020-07-03  7:39   ` Eugenio Perez Martin
2020-07-03 10:10     ` Paolo Bonzini
