* Re: [Qemu-devel] QEMU and vIOMMU support for emulated VF passthrough to nested (L2) VM
From: Elijah Shakkour @ 2019-04-01  9:12 UTC
  To: Peter Xu
  Cc: Knut Omang, Michael S. Tsirkin, Alex Williamson,
	Marcel Apfelbaum, Stefan Hajnoczi, qemu-devel



> -----Original Message-----
> From: Peter Xu <peterx@redhat.com>
> Sent: Monday, April 1, 2019 5:47 AM
> 
> On Sun, Mar 31, 2019 at 11:15:00AM +0000, Elijah Shakkour wrote:
> 
> [...]
> 
> > I didn't have DMA nor MMIO read/write working with my old command line.
> > But, when I removed all CPU flags and only provided "-cpu host", I see that MMIO works.
> > Still, DMA read/write from emulated device doesn't work for VF. For example:
> > Driver provides me a buffer pointer through MMIO write, this address (pointer) is GPA of L2, and when I try to call pci_dma_read() with this address I get:
> > "
> > Unassigned mem read 0000000000000000
> > "
> 
> I don't know where this error log was dumped but if it's during DMA then I
> agree it can probably be related to vIOMMU.
> 

This log is dumped from:
memory.c: unassigned_mem_read()
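
For context, a minimal sketch of the failing call (hypothetical device code;
pdev and guest_buf_gpa stand in for the real VF state):

  /* pdev is the VF's PCIDevice; guest_buf_gpa is the L2 GPA the driver
   * wrote via MMIO.  pci_dma_read() resolves the address through the
   * device's DMA address space (the vIOMMU's address space when one is
   * present); when translation fails, the access falls through to
   * unassigned_mem_read() in memory.c and prints the error above. */
  uint64_t desc;
  pci_dma_read(pdev, guest_buf_gpa, &desc, sizeof(desc));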

> > As I said, my problem now is in translation of L2 GPA provided by driver, when I call DMA read/write for this address from VF.
> > Any insights?
> 
> I just noticed that you were using QEMU 2.12 [1].  If that's the case, please
> rebase to the latest QEMU, at least >=3.0, because there was a major refactor
> of the shadow logic during the 3.0 devel cycle AFAICT.
> 

Rebased to QEMU 3.1.
Now I see the address I'm trying to read from in the log, but still the same error:
"
Unassigned mem read 00000000f0481000
"
What do you suggest?

> > > > > > > > I'm using Knut Omang SRIOV patches rebased to QEMU v2.12.
> 
> [1]

* Re: [Qemu-devel] QEMU and vIOMMU support for emulated VF passthrough to nested (L2) VM
From: Peter Xu @ 2019-04-01 10:25 UTC
  To: Elijah Shakkour
  Cc: Knut Omang, Michael S. Tsirkin, Alex Williamson,
	Marcel Apfelbaum, Stefan Hajnoczi, qemu-devel

On Mon, Apr 01, 2019 at 09:12:38AM +0000, Elijah Shakkour wrote:
> 
> [...]
> 
> Rebased to QEMU 3.1.
> Now I see the address I'm trying to read from in the log, but still the same error:
> "
> Unassigned mem read 00000000f0481000
> "
> What do you suggest?

Would you please answer the questions that Knut asked?  Is it working
for L1 guest?  How about PF?

You can also try to enable VT-d device log by appending:

  -trace enable="vtd_*"

In case it dumps anything useful for you.

-- 
Peter Xu


* Re: [Qemu-devel] QEMU and vIOMMU support for emulated VF passthrough to nested (L2) VM
From: Elijah Shakkour @ 2019-04-01 14:01 UTC
  To: Peter Xu
  Cc: Knut Omang, Michael S. Tsirkin, Alex Williamson,
	Marcel Apfelbaum, Stefan Hajnoczi, qemu-devel



> -----Original Message-----
> From: Peter Xu <peterx@redhat.com>
> Sent: Monday, April 1, 2019 1:25 PM
> 
> [...]
> 
> Would you please answer the questions that Knut asked?  Is it working for L1
> guest?  How about PF?

Both VF and PF are working for the L1 guest.
I don't know how to pass a PF through to a nested VM in Hyper-V.
I don't invoke the VF manually in Hyper-V and pass it through to the nested VM; I use Hyper-V Manager to configure and provide a VF for the nested VM (I can see the VF only in the nested VM).

Did anyone try to run an emulated device in Linux (RHEL) as nested L2 where L1 is Windows Hyper-V? Does DMA read/write work for the emulated device in that case?

> 
> You can also try to enable VT-d device log by appending:
> 
>   -trace enable="vtd_*"
> 
> In case it dumps anything useful for you.

Is there a way to have those traces dumped to stdout/stderr on the fly, instead of going through dtrace?


* Re: [Qemu-devel] QEMU and vIOMMU support for emulated VF passthrough to nested (L2) VM
From: Knut Omang @ 2019-04-01 14:24 UTC
  To: Elijah Shakkour, Peter Xu
  Cc: Michael S. Tsirkin, Alex Williamson, Marcel Apfelbaum,
	Stefan Hajnoczi, qemu-devel

On Mon, 2019-04-01 at 14:01 +0000, Elijah Shakkour wrote:
> 
> [...]
> 
> Both VF and PF are working for the L1 guest.
> I don't know how to pass a PF through to a nested VM in Hyper-V.

On Linux, passing through VFs and PFs works the same way.
Maybe you can try passthrough with an all-Linux setup first (first PF, then VF)?

> I don't invoke the VF manually in Hyper-V and pass it through to the nested VM. I use
> Hyper-V Manager to configure and provide a VF for the nested VM (I can see the VF only
> in the nested VM).
> 
> Did anyone try to run an emulated device in Linux (RHEL) as nested L2 where L1 is
> Windows Hyper-V? Does DMA read/write work for the emulated device in that case?

I have never tried that; I have only used Linux as L2. Windows might be pickier about
what it expects, so starting with Linux to rule that out is probably a good idea.

> > 
> > You can also try to enable VT-d device log by appending:
> > 
> >   -trace enable="vtd_*"
> > 
> > In case it dumps anything useful for you.
> 
> Is there a way to have those traces dumped to stdout/stderr on the fly, instead of
> going through dtrace?

It's up to you which tracer(s) to configure when you build QEMU - check out
docs/devel/tracing.txt. There are a few trace events defined in the SR/IOV patch set;
you might want to enable them as well.
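
For example (a sketch, assuming the stderr-based "log" trace backend; the
"sriov_*" pattern is hypothetical - substitute whatever event names the patch
set declares in its trace-events file):

  ./configure --enable-trace-backends=log ...
  qemu-system-x86_64 ... -trace enable="vtd_*" -trace enable="sriov_*" \
      -D /tmp/qemu.log    # optional: write to a file instead of stderr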

Knut


* Re: [Qemu-devel] QEMU and vIOMMU support for emulated VF passthrough to nested (L2) VM
From: Elijah Shakkour @ 2019-04-02 15:41 UTC
  To: Knut Omang, Peter Xu
  Cc: Michael S. Tsirkin, Alex Williamson, Marcel Apfelbaum,
	Stefan Hajnoczi, qemu-devel



> -----Original Message-----
> From: Knut Omang <knut.omang@oracle.com>
> Sent: Monday, April 1, 2019 5:24 PM
> 
> [...]
> 
> I have never tried that; I have only used Linux as L2. Windows might be
> pickier about what it expects, so starting with Linux to rule that out is
> probably a good idea.

I will move to that solution once I/we give up 😊

> 
> > >
> > > You can also try to enable VT-d device log by appending:
> > >
> > >   -trace enable="vtd_*"
> > >
> > > In case it dumps anything useful for you.

Here is the relevant dump (dev 01:00.01 is my VF):
"
vtd_inv_desc_cc_device context invalidate device 01:00.01
vtd_ce_not_present Context entry bus 1 devfn 1 not present
vtd_switch_address_space Device 01:00.1 switching address space (iommu enabled=1)
vtd_ce_not_present Context entry bus 1 devfn 1 not present
vtd_err Detected invalid context entry when trying to sync shadow page table
vtd_iotlb_cc_update IOTLB context update bus 0x1 devfn 0x1 high 0x102 low 0x2d007003 gen 0 -> gen 2
vtd_err_dmar_slpte_resv_error iova 0xf08e7000 level 2 slpte 0x2a54008f7
vtd_fault_disabled Fault processing disabled for context entry
vtd_err_dmar_translate dev 01:00.01 iova 0x0
Unassigned mem read 00000000f08e7000
"
What do you conclude from this dump?


* Re: [Qemu-devel] QEMU and vIOMMU support for emulated VF passthrough to nested (L2) VM
From: Peter Xu @ 2019-04-03  2:40 UTC
  To: Elijah Shakkour
  Cc: Knut Omang, Michael S. Tsirkin, Alex Williamson,
	Marcel Apfelbaum, Stefan Hajnoczi, qemu-devel

On Tue, Apr 02, 2019 at 03:41:10PM +0000, Elijah Shakkour wrote:
> 
> [...]
> 
> Here is the relevant dump (dev 01:00.01 is my VF):
> "
> vtd_inv_desc_cc_device context invalidate device 01:00.01
> vtd_ce_not_present Context entry bus 1 devfn 1 not present
> vtd_switch_address_space Device 01:00.1 switching address space (iommu enabled=1)
> vtd_ce_not_present Context entry bus 1 devfn 1 not present
> vtd_err Detected invalid context entry when trying to sync shadow page table

These lines mean that the guest sent a device invalidation to your VF
but the IOMMU found that the device context entry is missing.

> vtd_iotlb_cc_update IOTLB context update bus 0x1 devfn 0x1 high 0x102 low 0x2d007003 gen 0 -> gen 2
> vtd_err_dmar_slpte_resv_error iova 0xf08e7000 level 2 slpte 0x2a54008f7

This line should not exist in the latest QEMU.  Are you sure you're using
the latest QEMU?

I agree with Knut: I would suggest you use Linux in both host and guests
first to triage the issue.

Regards,

-- 
Peter Xu


* Re: [Qemu-devel] QEMU and vIOMMU support for emulated VF passthrough to nested (L2) VM
From: Elijah Shakkour @ 2019-04-03 21:57 UTC
  To: Peter Xu
  Cc: Knut Omang, Michael S. Tsirkin, Alex Williamson,
	Marcel Apfelbaum, Stefan Hajnoczi, qemu-devel



> -----Original Message-----
> From: Peter Xu <peterx@redhat.com>
> Sent: Wednesday, April 3, 2019 5:40 AM
> 
> [...]
> 
> > vtd_iotlb_cc_update IOTLB context update bus 0x1 devfn 0x1 high 0x102 low 0x2d007003 gen 0 -> gen 2
> > vtd_err_dmar_slpte_resv_error iova 0xf08e7000 level 2 slpte 0x2a54008f7
> 
> This line should not exist in the latest QEMU.  Are you sure you're using
> the latest QEMU?

I moved now to QEMU 4.0 RC2.
This is what I get now:
vtd_iotlb_cc_update IOTLB context update bus 0x1 devfn 0x1 high 0x102 low 0x2f007003 gen 0 -> gen 1
qemu-system-x86_64: vtd_iova_to_slpte: detected splte reserve non-zero iova=0xf0d29000, level=0x2slpte=0x29f6008f7)
vtd_fault_disabled Fault processing disabled for context entry
qemu-system-x86_64: vtd_iommu_translate: detected translation failure (dev=01:00:01, iova=0xf0d29000)
Unassigned mem read 00000000f0d29000

I'm not familiar with vIOMMU registers, but I noticed that I must report snoop control support to Hyper-V (i.e., bit 7 in the extended capability register of the vIOMMU) in order to satisfy Hyper-V's IOMMU requirements for SRIOV.
vIOMMU.ecap before: 0xf00f5e
vIOMMU.ecap after:  0xf00fde
But I see that the vIOMMU doesn't really support snoop control.
Could this be the problem that fails the IOVA range check in vtd_iova_range_check()?
If so, is there a way to work around this issue?
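
(For reference, a hypothetical sketch of the ecap tweak described above,
assuming the value is assembled in vtd_init() in hw/i386/intel_iommu.c;
VTD_ECAP_SC is an illustrative name, not an existing macro here:)

  #define VTD_ECAP_SC  (1ULL << 7)   /* ECAP bit 7: Snoop Control */

  /* in vtd_init(): advertise SC to the guest (not actually implemented) */
  s->ecap |= VTD_ECAP_SC;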


* Re: [Qemu-devel] QEMU and vIOMMU support for emulated VF passthrough to nested (L2) VM
From: Elijah Shakkour @ 2019-04-03 22:10 UTC
  To: Peter Xu
  Cc: Knut Omang, Michael S. Tsirkin, Alex Williamson,
	Marcel Apfelbaum, Stefan Hajnoczi, qemu-devel



> -----Original Message-----
> From: Elijah Shakkour
> Sent: Thursday, April 4, 2019 12:57 AM
> 
> [...]
> 
> Could this be the problem that fails the IOVA range check in
> vtd_iova_range_check()?

Sorry, I meant the SLPTE reserved non-zero check failure in vtd_slpte_nonzero_rsvd(),
and NOT an IOVA range check failure (the range check didn't fail).


* Re: [Qemu-devel] QEMU and vIOMMU support for emulated VF passthrough to nested (L2) VM
From: Peter Xu @ 2019-04-04  6:59 UTC
  To: Elijah Shakkour
  Cc: Knut Omang, Michael S. Tsirkin, Alex Williamson,
	Marcel Apfelbaum, Stefan Hajnoczi, qemu-devel, Tian, Kevin

On Wed, Apr 03, 2019 at 10:10:35PM +0000, Elijah Shakkour wrote:

[...]

> > >
> > > This line should not exist in the latest QEMU.  Are you sure you're using
> > > the latest QEMU?
> > 
> > I moved now to QEMU 4.0 RC2.
> > This is what I get now:
> > vtd_iotlb_cc_update IOTLB context update bus 0x1 devfn 0x1 high 0x102 low 0x2f007003 gen 0 -> gen 1
> > qemu-system-x86_64: vtd_iova_to_slpte: detected splte reserve non-zero iova=0xf0d29000, level=0x2slpte=0x29f6008f7)
> > vtd_fault_disabled Fault processing disabled for context entry
> > qemu-system-x86_64: vtd_iommu_translate: detected translation failure (dev=01:00:01, iova=0xf0d29000)
> > Unassigned mem read 00000000f0d29000
> > 
> > I'm not familiar with vIOMMU registers, but I noticed that I must report
> > snoop control support to Hyper-V (i.e., bit 7 in the extended capability
> > register of the vIOMMU) in order to satisfy Hyper-V's IOMMU requirements
> > for SRIOV.
> > vIOMMU.ecap before: 0xf00f5e
> > vIOMMU.ecap after:  0xf00fde
> > But I see that the vIOMMU doesn't really support snoop control.
> > Could this be the problem that fails the IOVA range check in
> > vtd_iova_range_check()?
> 
> Sorry, I meant the SLPTE reserved non-zero check failure in vtd_slpte_nonzero_rsvd(),
> and NOT an IOVA range check failure (the range check didn't fail).

Probably.  Currently the VT-d emulation does not support snooping
control, and if you modify only that ecap bit you will probably hit this
problem: the guest kernel will set the SNP bit in the IOMMU page table
entries, which violates the reserved bits in the emulation code, and
then you see these errors.
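
(Schematically, the check that fires is shaped like this - a simplified
sketch of hw/i386/intel_iommu.c, not the exact code:)

  /* a PTE faults if it sets any bit of the per-level reserved mask,
   * and the masks currently include bit 11, which is SNP */
  static bool vtd_slpte_nonzero_rsvd(uint64_t slpte, uint32_t level)
  {
      return slpte & vtd_paging_entry_rsvd_field[level];
  }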

Now talking about implementing Snoop Control for the Intel IOMMU for
real (which corresponds to VT-d ecap bit 7) - I'll confess I'm not 100%
clear on what "snooping" means and what we need to do as an
emulator.  I'm quoting from the spec:

  "Snoop behavior for a memory access (to a translation structure
  entry or access to the mapped page) specifies if the access is
  coherent (snoops the processor caches) or not."

If it is only a capability showing whether the hardware is capable of
snooping processor caches, then I don't think we need to do much here
as an emulator of VT-d, simply because when we access the data we are
still on the processor's side (we are only emulating the IOMMU
behavior), so the cache should always be coherent from the POV of the
guest vCPUs, just like how the processors provide cache coherence
between two cores (so IMHO the VT-d emulation code can run on one
core/thread, and the vCPU which runs the guest iommu driver can run on
another core/thread).  If so, maybe we can simply declare support for
it, but we would at least also need to remove the SNP bit from the
vtd_paging_entry_rsvd_field[] array to reflect that we understand that
bit.
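
(A rough sketch of that last step, assuming the masks are filled in
vtd_init(); VTD_SL_SNP_BIT is an illustrative name, not an existing
macro:)

  #define VTD_SL_SNP_BIT  (1ULL << 11)   /* SNP bit in second-level PTEs */

  /* after the existing reserved-mask initialization in vtd_init(): */
  for (unsigned i = 1; i < ARRAY_SIZE(vtd_paging_entry_rsvd_field); i++) {
      vtd_paging_entry_rsvd_field[i] &= ~VTD_SL_SNP_BIT;
  }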

CCing Alex and Kevin to see whether I'm misunderstanding or in case of
any further input on the snooping support.

Regards,

-- 
Peter Xu


* Re: [Qemu-devel] QEMU and vIOMMU support for emulated VF passthrough to nested (L2) VM
From: Tian, Kevin @ 2019-04-04  7:58 UTC
  To: Peter Xu, Elijah Shakkour
  Cc: Knut Omang, Michael S. Tsirkin, Alex Williamson,
	Marcel Apfelbaum, Stefan Hajnoczi, qemu-devel

> From: Peter Xu [mailto:peterx@redhat.com]
> Sent: Thursday, April 4, 2019 3:00 PM
> 
> 
> [...]
> 
> If it is only a capability showing whether the hardware is capable of
> snooping processor caches, then I don't think we need to do much here
> as an emulator of VT-d, simply because when we access the data we are
> still on the processor's side (we are only emulating the IOMMU
> behavior), so the cache should always be coherent from the POV of the
> guest vCPUs, just like how the processors provide cache coherence
> between two cores (so IMHO the VT-d emulation code can run on one
> core/thread, and the vCPU which runs the guest iommu driver can run on
> another core/thread).  If so, maybe we can simply declare support for
> it, but we would at least also need to remove the SNP bit from the
> vtd_paging_entry_rsvd_field[] array to reflect that we understand that
> bit.
> 
> CCing Alex and Kevin to see whether I'm misunderstanding or in case of
> any further input on the snooping support.
> 

For software DMA, yes, snoop is guaranteed, since it's just a CPU access.

However, for a VFIO device, i.e. hardware DMA, snoop should be reported
based on the physical IOMMU capability. It's fine to report no snoop
control on the vIOMMU (the current state) even when it's physically
supported; it just means the L1 VMM must honor guest cache attributes
instead of forcing WB in the L1 EPT when doing nested passthrough.
However, it's incorrect to report snoop control on the vIOMMU when it's
not physically supported, because the L1 VMM may then force WB in the L1
EPT and enable the snoop field in the vIOMMU second-level PTEs on the
assumption that hardware snoop is guaranteed (when it isn't). Then it
becomes a correctness issue.

Things get a bit tricky with two VFIO devices under two pIOMMUs, where
one supports snoop and the other doesn't, which leaves us two options:

1) Create one vIOMMU without snoop control. The current state, and safe.
2) Create two vIOMMUs, one supporting snoop and the other not, and
   report each VFIO device to the vIOMMU with the matching snoop
   capability. This matches the hardware topology but adds extra config
   burden and footprint.

Thanks,
Kevin


* Re: [Qemu-devel] QEMU and vIOMMU support for emulated VF passthrough to nested (L2) VM
From: Elijah Shakkour @ 2019-04-07 13:47 UTC
  To: Tian, Kevin, Peter Xu
  Cc: Knut Omang, Michael S. Tsirkin, Alex Williamson,
	Marcel Apfelbaum, Stefan Hajnoczi, qemu-devel



> -----Original Message-----
> From: Tian, Kevin <kevin.tian@intel.com>
> Sent: Thursday, April 4, 2019 10:58 AM
> To: Peter Xu <peterx@redhat.com>; Elijah Shakkour
> <elijahsh@mellanox.com>
> Cc: Knut Omang <knut.omang@oracle.com>; Michael S. Tsirkin
> <mst@redhat.com>; Alex Williamson <alex.williamson@redhat.com>;
> Marcel Apfelbaum <marcel.apfelbaum@gmail.com>; Stefan Hajnoczi
> <stefanha@gmail.com>; qemu-devel@nongnu.org
> Subject: RE: QEMU and vIOMMU support for emulated VF passthrough to
> nested (L2) VM
> 
> > From: Peter Xu [mailto:peterx@redhat.com]
> > Sent: Thursday, April 4, 2019 3:00 PM
> >
> > On Wed, Apr 03, 2019 at 10:10:35PM +0000, Elijah Shakkour wrote:
> >
> > [...]
> >
> > > > > > > > > You can also try to enable VT-d device log by appending:
> > > > > > > > >
> > > > > > > > >   -trace enable="vtd_*"
> > > > > > > > >
> > > > > > > > > In case it dumps anything useful for you.
> > > > > >
> > > > > > Here is the relevant dump (dev 01:00.01 is my VF):
> > > > > > "
> > > > > > vtd_inv_desc_cc_device context invalidate device 01:00.01
> > > > > > vtd_ce_not_present Context entry bus 1 devfn 1 not present
> > > > > > vtd_switch_address_space Device 01:00.1 switching address
> > > > > > space (iommu
> > > > > > enabled=1) vtd_ce_not_present Context entry bus 1 devfn 1 not
> > > > > > present vtd_err Detected invalid context entry when trying to
> > > > > > sync shadow page table
> > > > >
> > > > > These lines mean that the guest sent a device invalidation to
> > > > > your VF but the IOMMU found that the device context entry is
> missing.
> > > > >
> > > > > > vtd_iotlb_cc_update IOTLB context update bus 0x1 devfn 0x1
> > > > > > high
> > > > > > 0x102 low 0x2d007003 gen 0 -> gen 2
> > > > > > vtd_err_dmar_slpte_resv_error iova
> > > > > > 0xf08e7000 level 2 slpte 0x2a54008f7
> > > > >
> > > > > This line should not exist in latest QEMU.  Are you sure you're
> > > > > using the latest QEMU?
> > > >
> > > > I moved now to QEMU 4.0 RC2.
> > > > This is the what I get now:
> > > > vtd_iotlb_cc_update IOTLB context update bus 0x1 devfn 0x1 high
> > > > 0x102
> > low
> > > > 0x2f007003 gen 0 -> gen 1
> > > > qemu-system-x86_64: vtd_iova_to_slpte: detected splte reserve
> > > > non-zero iova=0xf0d29000, level=0x2slpte=0x29f6008f7)
> > > > vtd_fault_disabled Fault processing disabled for context entry
> > > > qemu-system-x86_64: vtd_iommu_translate: detected translation
> > > > failure (dev=01:00:01, iova=0xf0d29000) Unassigned mem read
> > 00000000f0d29000
> > > >
> > > > I'm not familiar with vIOMMU registers, but I noticed that I must
> > > > report snoop control support to Hyper-V (i.e. bit 7 in extended
> > > > capability register
> > of
> > > > vIOMMU) in-order to satisfy IOMMU support for SRIOV.
> > > > vIOMMU.ecap before    0xf00f5e
> > > > vIOMMU.ecap after       0xf00fde
> > > > But I see that vIOMMU doesn't really support snoop control.
> > > > Could this be the problem that fails IOVA range check in this
> > > > function vtd_iova_range_check()?
> > >
> > > Sorry, I meant the SLPTE reserved non-zero check failure in
> > vtd_slpte_nonzero_rsvd()
> > > And NOT IOVA range check failure (since range check didn't fail)
> >
> > Probably.  Currently VT-d emulation does not support snooping control,
> > and if you modify only that ecap you will probably encounter this
> > problem: the guest kernel will set the SNP bit in the IOMMU page
> > table entries, which violates the reserved-bit checks in the
> > emulation code, and then you see these errors.
> >
> > Now talking about implementing Snoop Control for the Intel IOMMU for
> > real (which corresponds to VT-d ecap bit 7) - I'll confess I'm not 100%
> > clear on what "snooping" means and what we need to do as an
> > emulator. I'm quoting from the spec:
> >
> >   "Snoop behavior for a memory access (to a translation structure
> >   entry or access to the mapped page) specifies if the access is
> >   coherent (snoops the processor caches) or not."
> >
> > If it is only a capability showing whether the hardware is
> > capable of snooping processor caches, then I don't think we need to do
> > much here as an emulator of VT-d, simply because when we access the
> > data we still do so from the processor's side (we're emulating
> > the IOMMU behavior only), so the cache should always be coherent from
> > the POV of the guest vCPUs, just like how the processors provide cache
> > coherence between two cores (so IMHO the VT-d emulation code can
> > run on one core/thread, and the vCPU which runs the guest IOMMU
> > driver can run on another core/thread).  If so, maybe we can simply
> > declare support for it, but we at least also need to remove the SNP
> > bit from the vtd_paging_entry_rsvd_field[] array to reflect that we
> > understand that bit.
> >
> > CCing Alex and Kevin to see whether I'm misunderstanding or in case of
> > any further input on the snooping support.
> >
> 
> For software DMA, yes, snoop is guaranteed since it's just CPU access.
> 
> However, for a VFIO device, i.e. hardware DMA, snoop should be reported
> based on the physical IOMMU capability. It's fine to report no snoop
> control on the vIOMMU (the current state) even when it's physically
> supported; it just means the L1 VMM must favor guest cache attributes
> instead of forcing WB in the L1 EPT when doing nested passthrough.
> However, it's incorrect to report snoop control on the vIOMMU when it's
> not physically supported, because then the L1 VMM may force WB in the
> L1 EPT and enable the snoop field in the vIOMMU 2nd-level PTE on the
> assumption that hardware snoop is guaranteed (when it isn't). Then it
> becomes a correctness issue.
> 

If my device is fully emulated, can I ignore the SNP bit in the SLPTE? What is the cost of ignoring it in such a case? What could go wrong?
(I tried to ignore it and it seems that translations work for me now).

> The thing is a bit tricky regarding two VFIO devices which are under two
> pIOMMUs, one supporting snoop and the other not, which leaves us
> two options:
> 
> 1). Create one vIOMMU without snoop control. Current state and safe.
> 2). Create two vIOMMUs, one supporting snoop and the other not, then
> report each VFIO device to the vIOMMU with the matching snoop capability.
> This matches the hardware topology but adds extra config burden and footprint.
> 
> Thanks,
> Kevin

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [Qemu-devel] QEMU and vIOMMU support for emulated VF passthrough to nested (L2) VM
@ 2019-04-08  0:32                                           ` Tian, Kevin
  0 siblings, 0 replies; 16+ messages in thread
From: Tian, Kevin @ 2019-04-08  0:32 UTC (permalink / raw)
  To: Elijah Shakkour, Peter Xu
  Cc: Knut Omang, Michael S. Tsirkin, Alex Williamson,
	Marcel Apfelbaum, Stefan Hajnoczi, qemu-devel

> From: Elijah Shakkour [mailto:elijahsh@mellanox.com]
> Sent: Sunday, April 7, 2019 9:47 PM
> 
> 
> > -----Original Message-----
> > From: Tian, Kevin <kevin.tian@intel.com>
> > Sent: Thursday, April 4, 2019 10:58 AM
> > To: Peter Xu <peterx@redhat.com>; Elijah Shakkour
> > <elijahsh@mellanox.com>
> > Cc: Knut Omang <knut.omang@oracle.com>; Michael S. Tsirkin
> > <mst@redhat.com>; Alex Williamson <alex.williamson@redhat.com>;
> > Marcel Apfelbaum <marcel.apfelbaum@gmail.com>; Stefan Hajnoczi
> > <stefanha@gmail.com>; qemu-devel@nongnu.org
> > Subject: RE: QEMU and vIOMMU support for emulated VF passthrough to
> > nested (L2) VM
> >
> > > From: Peter Xu [mailto:peterx@redhat.com]
> > > Sent: Thursday, April 4, 2019 3:00 PM
> > >
> > > On Wed, Apr 03, 2019 at 10:10:35PM +0000, Elijah Shakkour wrote:
> > >
> > > [...]
> > >
> > > > > > > > > > You can also try to enable VT-d device log by appending:
> > > > > > > > > >
> > > > > > > > > >   -trace enable="vtd_*"
> > > > > > > > > >
> > > > > > > > > > In case it dumps anything useful for you.
> > > > > > >
> > > > > > > Here is the relevant dump (dev 01:00.01 is my VF):
> > > > > > > "
> > > > > > > vtd_inv_desc_cc_device context invalidate device 01:00.01
> > > > > > > vtd_ce_not_present Context entry bus 1 devfn 1 not present
> > > > > > > vtd_switch_address_space Device 01:00.1 switching address
> > > > > > > space (iommu
> > > > > > > enabled=1) vtd_ce_not_present Context entry bus 1 devfn 1 not
> > > > > > > present vtd_err Detected invalid context entry when trying to
> > > > > > > sync shadow page table
> > > > > >
> > > > > > These lines mean that the guest sent a device invalidation to
> > > > > > your VF but the IOMMU found that the device context entry is
> > missing.
> > > > > >
> > > > > > > vtd_iotlb_cc_update IOTLB context update bus 0x1 devfn 0x1
> > > > > > > high
> > > > > > > 0x102 low 0x2d007003 gen 0 -> gen 2
> > > > > > > vtd_err_dmar_slpte_resv_error iova
> > > > > > > 0xf08e7000 level 2 slpte 0x2a54008f7
> > > > > >
> > > > > > This line should not exist in latest QEMU.  Are you sure you're
> > > > > > using the latest QEMU?
> > > > >
> > > > > I have now moved to QEMU 4.0 RC2.
> > > > > This is what I get now:
> > > > > vtd_iotlb_cc_update IOTLB context update bus 0x1 devfn 0x1 high
> > > > > 0x102
> > > low
> > > > > 0x2f007003 gen 0 -> gen 1
> > > > > qemu-system-x86_64: vtd_iova_to_slpte: detected splte reserve
> > > > > non-zero iova=0xf0d29000, level=0x2slpte=0x29f6008f7)
> > > > > vtd_fault_disabled Fault processing disabled for context entry
> > > > > qemu-system-x86_64: vtd_iommu_translate: detected translation
> > > > > failure (dev=01:00:01, iova=0xf0d29000) Unassigned mem read
> > > 00000000f0d29000
> > > > >
> > > > > I'm not familiar with vIOMMU registers, but I noticed that I must
> > > > > report snoop control support to Hyper-V (i.e. bit 7 in extended
> > > > > capability register
> > > of
> > > > > vIOMMU) in order to satisfy IOMMU support for SRIOV.
> > > > > vIOMMU.ecap before    0xf00f5e
> > > > > vIOMMU.ecap after       0xf00fde
> > > > > But I see that vIOMMU doesn't really support snoop control.
> > > > > Could this be the problem that fails IOVA range check in this
> > > > > function vtd_iova_range_check()?
> > > >
> > > > Sorry, I meant the SLPTE reserved non-zero check failure in
> > > vtd_slpte_nonzero_rsvd()
> > > > And NOT IOVA range check failure (since range check didn't fail)
> > >
> > > Probably.  Currently VT-d emulation does not support snooping control,
> > > and if you modify only that ecap you will probably encounter this
> > > problem: the guest kernel will set the SNP bit in the IOMMU page
> > > table entries, which violates the reserved-bit checks in the
> > > emulation code, and then you see these errors.
> > >
> > > Now talking about implementing Snoop Control for the Intel IOMMU for
> > > real (which corresponds to VT-d ecap bit 7) - I'll confess I'm not 100%
> > > clear on what "snooping" means and what we need to do as an
> > > emulator. I'm quoting from the spec:
> > >
> > >   "Snoop behavior for a memory access (to a translation structure
> > >   entry or access to the mapped page) specifies if the access is
> > >   coherent (snoops the processor caches) or not."
> > >
> > > If it is only a capability showing whether the hardware is
> > > capable of snooping processor caches, then I don't think we need to do
> > > much here as an emulator of VT-d, simply because when we access the
> > > data we still do so from the processor's side (we're emulating
> > > the IOMMU behavior only), so the cache should always be coherent from
> > > the POV of the guest vCPUs, just like how the processors provide cache
> > > coherence between two cores (so IMHO the VT-d emulation code can
> > > run on one core/thread, and the vCPU which runs the guest IOMMU
> > > driver can run on another core/thread).  If so, maybe we can simply
> > > declare support for it, but we at least also need to remove the SNP
> > > bit from the vtd_paging_entry_rsvd_field[] array to reflect that we
> > > understand that bit.
> > >
> > > CCing Alex and Kevin to see whether I'm misunderstanding or in case of
> > > any further input on the snooping support.
> > >
> >
> > For software DMA, yes, snoop is guaranteed since it's just CPU access.
> >
> > However, for a VFIO device, i.e. hardware DMA, snoop should be reported
> > based on the physical IOMMU capability. It's fine to report no snoop
> > control on the vIOMMU (the current state) even when it's physically
> > supported; it just means the L1 VMM must favor guest cache attributes
> > instead of forcing WB in the L1 EPT when doing nested passthrough.
> > However, it's incorrect to report snoop control on the vIOMMU when it's
> > not physically supported, because then the L1 VMM may force WB in the
> > L1 EPT and enable the snoop field in the vIOMMU 2nd-level PTE on the
> > assumption that hardware snoop is guaranteed (when it isn't). Then it
> > becomes a correctness issue.
> >
> 
> If my device is fully emulated, can I ignore the SNP bit in the SLPTE? What is
> the cost of ignoring it in such a case? What could go wrong?
> (I tried to ignore it and it seems that translations work for me now).
> 

I'm not sure what you meant by 'ignore' here. But as earlier pointed
out by Peter, for emulated devices you don't need to do anything special
here. You can just report the snoop capability and then remove the SNP
bit from the reserved-bit check on SLPTEs.
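
As a rough sketch of that change in hw/i386/intel_iommu.c (not a tested
patch; VTD_ECAP_SC below is an assumed name for ecap bit 7, and bit 11
is the second-level PTE SNP bit per the VT-d spec):

  /* In vtd_init(): advertise Snoop Control in the extended
   * capability register. */
  s->ecap |= VTD_ECAP_SC;                /* assumed: 1ULL << 7 */

  /* Clear SNP from the reserved masks so vtd_slpte_nonzero_rsvd()
   * no longer rejects SLPTEs that have it set. */
  for (i = 0; i < ARRAY_SIZE(vtd_paging_entry_rsvd_field); i++) {
      vtd_paging_entry_rsvd_field[i] &= ~(1ULL << 11);   /* SNP */
  }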

Thanks
Kevin

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [Qemu-devel] QEMU and vIOMMU support for emulated VF passthrough to nested (L2) VM
@ 2019-04-08  5:56                                             ` Peter Xu
  0 siblings, 0 replies; 16+ messages in thread
From: Peter Xu @ 2019-04-08  5:56 UTC (permalink / raw)
  To: Tian, Kevin
  Cc: Elijah Shakkour, Knut Omang, Michael S. Tsirkin, Alex Williamson,
	Marcel Apfelbaum, Stefan Hajnoczi, qemu-devel

On Mon, Apr 08, 2019 at 12:32:12AM +0000, Tian, Kevin wrote:

[...]

> > > > Probably.  Currently VT-d emulation does not support snooping control,
> > > > and if you modify only that ecap you will probably encounter this
> > > > problem: the guest kernel will set the SNP bit in the IOMMU page
> > > > table entries, which violates the reserved-bit checks in the
> > > > emulation code, and then you see these errors.
> > > >
> > > > Now talking about implementing Snoop Control for the Intel IOMMU for
> > > > real (which corresponds to VT-d ecap bit 7) - I'll confess I'm not 100%
> > > > clear on what "snooping" means and what we need to do as an
> > > > emulator. I'm quoting from the spec:
> > > >
> > > >   "Snoop behavior for a memory access (to a translation structure
> > > >   entry or access to the mapped page) specifies if the access is
> > > >   coherent (snoops the processor caches) or not."
> > > >
> > > > If it is only a capability showing whether the hardware is
> > > > capable of snooping processor caches, then I don't think we need to do
> > > > much here as an emulator of VT-d, simply because when we access the
> > > > data we still do so from the processor's side (we're emulating
> > > > the IOMMU behavior only), so the cache should always be coherent from
> > > > the POV of the guest vCPUs, just like how the processors provide cache
> > > > coherence between two cores (so IMHO the VT-d emulation code can
> > > > run on one core/thread, and the vCPU which runs the guest IOMMU
> > > > driver can run on another core/thread).  If so, maybe we can simply
> > > > declare support for it, but we at least also need to remove the SNP
> > > > bit from the vtd_paging_entry_rsvd_field[] array to reflect that we
> > > > understand that bit.
> > > >
> > > > CCing Alex and Kevin to see whether I'm misunderstanding or in case of
> > > > any further input on the snooping support.
> > > >
> > >
> > > For software DMA, yes, snoop is guaranteed since it's just CPU access.
> > >
> > > However, for a VFIO device, i.e. hardware DMA, snoop should be reported
> > > based on the physical IOMMU capability. It's fine to report no snoop
> > > control on the vIOMMU (the current state) even when it's physically
> > > supported; it just means the L1 VMM must favor guest cache attributes
> > > instead of forcing WB in the L1 EPT when doing nested passthrough.
> > > However, it's incorrect to report snoop control on the vIOMMU when it's
> > > not physically supported, because then the L1 VMM may force WB in the
> > > L1 EPT and enable the snoop field in the vIOMMU 2nd-level PTE on the
> > > assumption that hardware snoop is guaranteed (when it isn't). Then it
> > > becomes a correctness issue.
> > >
> > 
> > If my device is fully emulated, can I ignore the SNP bit in the SLPTE? What is
> > the cost of ignoring it in such a case? What could go wrong?
> > (I tried to ignore it and it seems that translations work for me now).
> > 
> 
> I'm not sure what you meant by 'ignore' here. But as earlier pointed
> out by Peter, for emulated devices you don't need to do anything special
> here. You can just report the snoop capability and then remove the SNP
> bit from the reserved-bit check on SLPTEs.

Yes.  For simplicity, you can add a new patch for a new property
"x-snooping" into vtd_properties and make it false by default, then
allow the user to turn it on manually, considering that the user should
be clear on the consequences of this knob.
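
A minimal sketch of that property (names are illustrative; "snooping"
is an assumed new bool field in IntelIOMMUState):

  static Property vtd_properties[] = {
      ...
      /* Off by default: enabling it advertises Snoop Control, so the
       * SLPTE SNP bit must then be accepted instead of reserved. */
      DEFINE_PROP_BOOL("x-snooping", IntelIOMMUState, snooping, false),
      DEFINE_PROP_END_OF_LIST(),
  };

  /* In vtd_init(): */
  if (s->snooping) {
      s->ecap |= VTD_ECAP_SC;            /* assumed macro, ecap bit 7 */
  }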

Later on we can consider enriching this property by checking the host
configuration when assigned devices are detected (I feel like it can be
a VFIO_DMA_CC_IOMMU check upon every assigned device, or container), or
more.
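
For reference, such a check could look roughly like this against the
VFIO container fd (a sketch; error handling omitted):

  #include <stdbool.h>
  #include <sys/ioctl.h>
  #include <linux/vfio.h>

  /* Returns true when the host IOMMU backing this container enforces
   * DMA cache coherence, i.e. hardware snoop is guaranteed. */
  static bool host_iommu_coherent(int container_fd)
  {
      return ioctl(container_fd, VFIO_CHECK_EXTENSION,
                   VFIO_DMA_CC_IOMMU) > 0;
  }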

Regards,

-- 
Peter Xu

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2019-04-08  6:02 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <AM6PR05MB6616EE3D161F72A121F5FFEDBD5D0@AM6PR05MB6616.eurprd05.prod.outlook.com>
     [not found] ` <20190324221319-mutt-send-email-mst@kernel.org>
     [not found]   ` <AM6PR05MB66166ED74E2718150F0CBF62BD5F0@AM6PR05MB6616.eurprd05.prod.outlook.com>
     [not found]     ` <20190326085309-mutt-send-email-mst@kernel.org>
     [not found]       ` <AM6PR05MB661611FE4DD494AF9EFB6B8ABD5F0@AM6PR05MB6616.eurprd05.prod.outlook.com>
     [not found]         ` <20190327064143.GP9149@xz-x1>
     [not found]           ` <060498b58287775ce6bd7dd704f13b6d6e185f91.camel@oracle.com>
     [not found]             ` <20190327084255.GR9149@xz-x1>
     [not found]               ` <AM6PR05MB6616E9FDE038EA2B0665DD9CBD540@AM6PR05MB6616.eurprd05.prod.outlook.com>
     [not found]                 ` <20190401024710.GB8853@xz-x1>
2019-04-01  9:12                   ` [Qemu-devel] QEMU and vIOMMU support for emulated VF passthrough to nested (L2) VM Elijah Shakkour
2019-04-01 10:25                     ` Peter Xu
2019-04-01 14:01                       ` Elijah Shakkour
2019-04-01 14:24                         ` Knut Omang
2019-04-02 15:41                           ` Elijah Shakkour
2019-04-03  2:40                             ` Peter Xu
2019-04-03 21:57                               ` Elijah Shakkour
2019-04-03 22:10                                 ` Elijah Shakkour
2019-04-04  6:59                                   ` Peter Xu
2019-04-04  7:58                                     ` Tian, Kevin
2019-04-07 13:47                                       ` Elijah Shakkour
2019-04-08  0:32                                         ` Tian, Kevin
2019-04-08  5:56                                           ` Peter Xu
