Date: Mon, 6 Jun 2016 15:38:25 +0800
From: Peter Xu
Message-ID: <20160606073825.GH21254@pxdev.xzpeter.org>
References: <1463847590-22782-1-git-send-email-bd.aviv@gmail.com>
 <1463847590-22782-4-git-send-email-bd.aviv@gmail.com>
 <20160523115342.636a5164@ul30vt.home>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20160523115342.636a5164@ul30vt.home>
Subject: Re: [Qemu-devel] [PATCH v3 3/3] IOMMU: Integrate between VFIO and vIOMMU to support device assignment
To: Alex Williamson
Cc: "Aviv B.D", qemu-devel@nongnu.org, "Michael S. Tsirkin", Jan Kiszka

Some questions not quite related to this patch's content, but to vfio...

On Mon, May 23, 2016 at 11:53:42AM -0600, Alex Williamson wrote:
> On Sat, 21 May 2016 19:19:50 +0300
> "Aviv B.D" wrote:

[...]

> > +#if 0
> >  static hwaddr vfio_container_granularity(VFIOContainer *container)
> >  {
> >      return (hwaddr)1 << ctz64(container->iova_pgsizes);
> >  }
> > -
> > +#endif

Here we are fetching the smallest page size that the host IOMMU supports,
so even if the host IOMMU supports large pages, they will not be used as
long as the guest has vIOMMU enabled, right?

>
> Clearly this is unacceptable, the code has a purpose.
>
> >  static void vfio_listener_region_add(MemoryListener *listener,
> >                                       MemoryRegionSection *section)
> >  {
> > @@ -384,11 +387,13 @@ static void vfio_listener_region_add(MemoryListener *listener,
> >          giommu->n.notify = vfio_iommu_map_notify;
> >          QLIST_INSERT_HEAD(&container->giommu_list, giommu, giommu_next);
> >
> > +        vtd_register_giommu(giommu);
>
> vfio will not assume VT-d, this is why we register the notifier below.
>
> >          memory_region_register_iommu_notifier(giommu->iommu, &giommu->n);
> > +#if 0
> >          memory_region_iommu_replay(giommu->iommu, &giommu->n,
> >                                     vfio_container_granularity(container),
> >                                     false);

For memory_region_iommu_replay(), we are using vfio_container_granularity()
as the granularity, which is the smallest host IOMMU page size. However,
inside it:

void memory_region_iommu_replay(MemoryRegion *mr, Notifier *n,
                                hwaddr granularity, bool is_write)
{
    hwaddr addr;
    IOMMUTLBEntry iotlb;

    for (addr = 0; addr < memory_region_size(mr); addr += granularity) {
        iotlb = mr->iommu_ops->translate(mr, addr, is_write);
        if (iotlb.perm != IOMMU_NONE) {
            n->notify(n, &iotlb);
        }

        /* if (2^64 - MR size) < granularity, it's possible to get an
         * infinite loop here.  This should catch such a wraparound */
        if ((addr + granularity) < addr) {
            break;
        }
    }
}

Is it possible that the iotlb is mapped to a large page (or any page size
that differs from granularity)? The code above seems to assume that the
host and guest IOMMUs use the same page size, i.e. page size == granularity?

Thanks,

-- peterx
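
P.S. To illustrate the first question for myself: a minimal standalone
sketch (plain C outside of QEMU, with a hand-rolled ctz64() standing in
for the helper from qemu/host-utils.h and a made-up iova_pgsizes mask)
of why "(hwaddr)1 << ctz64(iova_pgsizes)" always picks the smallest
supported page size:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for QEMU's ctz64() from qemu/host-utils.h: number of
 * trailing zero bits in a 64-bit value (64 for an all-zero input). */
static int ctz64(uint64_t val)
{
    return val ? __builtin_ctzll(val) : 64;
}

int main(void)
{
    /* Hypothetical iova_pgsizes bitmap: pretend the host IOMMU
     * supports 4K, 2M and 1G pages (one bit set per page size). */
    uint64_t iova_pgsizes = (1ULL << 12) | (1ULL << 21) | (1ULL << 30);

    /* Same expression as vfio_container_granularity(): the lowest
     * set bit wins, i.e. the smallest supported page size. */
    uint64_t granularity = (uint64_t)1 << ctz64(iova_pgsizes);

    printf("granularity = 0x%" PRIx64 "\n", granularity); /* 0x1000 */
    return 0;
}

So large host pages are simply never selected for the mapping
granularity, whatever else the hardware advertises.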
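
And a back-of-the-envelope illustration of the second question: again
standalone C, with a fake translate() standing in for the vIOMMU (an
assumed single 2M guest mapping at IOVA 0, nothing real about it).
Walking that region at 4K granularity, a replay-style loop fires 512
notifications that all describe the same large page:

#include <stdint.h>
#include <stdio.h>

#define KIB (1ULL << 10)
#define MIB (1ULL << 20)

/* Fake vIOMMU translate(): a single 2M mapping at IOVA 0, everything
 * else unmapped. Returns the 2M-aligned base, UINT64_MAX if unmapped. */
static uint64_t fake_translate(uint64_t addr)
{
    if (addr < 2 * MIB) {
        return addr & ~(2 * MIB - 1);
    }
    return UINT64_MAX;
}

int main(void)
{
    uint64_t granularity = 4 * KIB;  /* smallest host IOMMU page size */
    uint64_t region_size = 4 * MIB;  /* size of the IOMMU memory region */
    unsigned notifications = 0;
    uint64_t addr;

    /* Same walk shape as memory_region_iommu_replay(). */
    for (addr = 0; addr < region_size; addr += granularity) {
        if (fake_translate(addr) != UINT64_MAX) {
            notifications++;         /* n->notify() would fire here */
        }
    }

    printf("notifications = %u\n", notifications); /* 512 */
    return 0;
}

So either the notifiee has to cope with many overlapping entries for one
large guest mapping, or the walk really does rely on the guest page size
matching the granularity.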