qemu-devel.nongnu.org archive mirror
* Using virtual IOMMU in guest hypervisors other than KVM and Xen?
@ 2019-10-14 20:28 Jintack Lim
  2019-10-15  2:49 ` Peter Xu
  0 siblings, 1 reply; 7+ messages in thread
From: Jintack Lim @ 2019-10-14 20:28 UTC (permalink / raw)
  To: QEMU Devel Mailing List; +Cc: Peter Xu

Hi,

I'm trying to pass through a physical network device to a nested VM
using a virtual IOMMU. While I was able to do this successfully with
KVM and Xen guest hypervisors running in a VM, I couldn't do it with
Hyper-V, as described below. Has anyone successfully used a virtual
IOMMU with guest hypervisors other than KVM and Xen (e.g. Hyper-V or
VMware)?

The issue with Hyper-V is that it reports the underlying hardware as
not capable of passthrough. The exact error message is as follows.

Windows PowerShell > (Get-VMHost).IovSupportReasons
The chipset on the system does not do DMA remapping, without which
SR-IOV cannot be supported.

I'm pretty sure that Hyper-V recognizes the virtual IOMMU, though; I
have enabled the IOMMU in the Windows boot loader [1], and I see
differences when booting a Windows VM with and without the virtual
IOMMU. I also checked that virtual IOMMU traces are printed.
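
In case it's useful for anyone trying to reproduce this, here is
roughly what those two steps look like (just a sketch: I'm assuming
the hypervisoriommupolicy boot-entry option is the knob meant in [1],
and the trace pattern assumes QEMU's vtd_* trace events, whose exact
names vary across versions):

  # inside the Windows guest, from an elevated prompt (then reboot):
  bcdedit /set {default} hypervisoriommupolicy enable

  # on the host, appended to the QEMU command line to log vIOMMU activity:
  -trace "vtd_*"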

I have tried multiple KVM/QEMU versions including the latest ones
(kernel v5.3, QEMU 4.1.0) as well as two different Windows servers
(2016 and 2019), but I see the same result. [4]

I'd love to hear whether somebody is using a virtual IOMMU in Hyper-V
or VMware successfully, especially for passthrough. I'd also
appreciate it if somebody could point out any configuration errors on
my side.

Here's the QEMU command line I use, based mostly on the QEMU VT-d
page [2] and the Hyper-V on KVM presentation from KVM Forum [3].

./qemu/x86_64-softmmu/qemu-system-x86_64 \
    -device intel-iommu,intremap=on,caching-mode=on \
    -smp 6 -m 24G \
    -M q35,accel=kvm,kernel-irqchip=split \
    -cpu host,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time \
    -drive if=none,file=/vm/guest0.img,id=vda,cache=none,format=raw \
    -device virtio-blk-pci,drive=vda \
    --nographic \
    -qmp unix:/var/run/qmp,server,nowait \
    -serial telnet:127.0.0.1:4444,server,nowait \
    -netdev user,id=net0,hostfwd=tcp::2222-:22 \
    -device virtio-net-pci,netdev=net0,mac=de:ad:be:ef:f2:12 \
    -netdev tap,id=net1,vhost=on,helper=/srv/vm/qemu/qemu-bridge-helper \
    -device virtio-net-pci,netdev=net1,disable-modern=off,disable-legacy=on,mac=de:ad:be:ef:f2:11 \
    -device vfio-pci,host=0000:06:10.0,id=net2 \
    -monitor stdio \
    -usb -device usb-tablet \
    -rtc base=localtime,clock=host \
    -vnc 127.0.0.1:4 \
    --cdrom win19.iso \
    --drive file=virtio-win.iso,index=3,media=cdrom

Thanks,
Jintack

[1] https://social.technet.microsoft.com/Forums/en-US/a7c2940a-af32-4dab-8b31-7a605e8cf075/a-hypervisor-feature-is-not-available-to-the-user?forum=WinServerPreview
[2] https://wiki.qemu.org/Features/VT-d
[3] https://www.linux-kvm.org/images/6/6a/HyperV-KVM.pdf
[4] https://www.mail-archive.com/qemu-devel@nongnu.org/msg568963.html



* Re: Using virtual IOMMU in guest hypervisors other than KVM and Xen?
  2019-10-14 20:28 Using virtual IOMMU in guest hypervisors other than KVM and Xen? Jintack Lim
@ 2019-10-15  2:49 ` Peter Xu
  2019-10-16 22:01   ` Jintack Lim
  0 siblings, 1 reply; 7+ messages in thread
From: Peter Xu @ 2019-10-15  2:49 UTC (permalink / raw)
  To: Jintack Lim; +Cc: QEMU Devel Mailing List

On Mon, Oct 14, 2019 at 01:28:49PM -0700, Jintack Lim wrote:
> Hi,

Hello, Jintack,

> 
> I'm trying to pass through a physical network device to a nested VM
> using virtual IOMMU. While I was able to do it successfully using KVM
> and Xen guest hypervisors running in a VM respectively, I couldn't do
> it with Hyper-V as I described below. I wonder if anyone have
> successfully used virtual IOMMU in other hypervisors other than KVM
> and Xen? (like Hyper-V or VMware)
> 
> The issue I have with Hyper-V is that Hyper-V gives an error that the
> underlying hardware is not capable of doing passthrough. The exact
> error message is as follows.
> 
> Windows Power-shell > (Get-VMHost).IovSupportReasons
> The chipset on the system does not do DMA remapping, without which
> SR-IOV cannot be supported.
> 
> I'm pretty sure that Hyper-V recognizes virtual IOMMU, though; I have
> enabled iommu in windows boot loader[1], and I see differences when
> booing a Windows VM with and without virtual IOMMU. I also checked
> that virtual IOMMU traces are printed.

What traces have you checked?  More explicitly, have you seen DMAR
being enabled and the page tables being set up for the specific
device that is to be passed through?

> 
> I have tried multiple KVM/QEMU versions including the latest ones
> (kernel v5.3, QEMU 4.1.0) as well as two different Windows servers
> (2016 and 2019), but I see the same result. [4]
> 
> I'd love to hear if somebody is using virtual IOMMU in Hyper-V or
> VMware successfully, especially for passthrough. I also appreciate if
> somebody can point out any configuration errors I have.
> 
> Here's the qemu command line I use, basically from the QEMU vt-d
> page[2] and Hyper-v on KVM from kvmforum [3].
> 
> ./qemu/x86_64-softmmu/qemu-system-x86_64 -device
> intel-iommu,intremap=on,caching-mode=on -smp 6 -m 24G -M

Have you tried using a 4-level IOMMU page table (aw-bits=48 on the
latest QEMU, or x-aw-bits=48 on some older versions)?  IIRC we
encountered issues with this when trying to pass the SVVP Windows
test, which requires 4-level tables.  I'm not sure whether that is
also required for general use of the vIOMMU in Windows.
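
For example, something along these lines (only a sketch of the device
option; x-aw-bits is the older, experimental spelling of the same
property):

  -device intel-iommu,intremap=on,caching-mode=on,aw-bits=48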

> q35,accel=kvm,kernel-irqchip=split -cpu
> host,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time -drive
> if=none,file=/vm/guest0.img,id=vda,cache=none,format=raw -device
> virtio-blk-pci,drive=vda --nographic -qmp
> unix:/var/run/qmp,server,nowait -serial
> telnet:127.0.0.1:4444,server,nowait -netdev
> user,id=net0,hostfwd=tcp::2222-:22 -device
> virtio-net-pci,netdev=net0,mac=de:ad:be:ef:f2:12 -netdev
> tap,id=net1,vhost=on,helper=/srv/vm/qemu/qemu-bridge-helper -device
> virtio-net-pci,netdev=net1,disable-modern=off,disable-legacy=on,mac=de:ad:be:ef:f2:11
> -device vfio-pci,host=0000:06:10.0,id=net2 -monitor stdio -usb -device
> usb-tablet -rtc base=localtime,clock=host -vnc 127.0.0.1:4 --cdrom
> win19.iso --drive file=virtio-win.iso,index=3,media=cdrom

-- 
Peter Xu




* Re: Using virtual IOMMU in guest hypervisors other than KVM and Xen?
  2019-10-15  2:49 ` Peter Xu
@ 2019-10-16 22:01   ` Jintack Lim
  2019-10-19  3:36     ` Peter Xu
  0 siblings, 1 reply; 7+ messages in thread
From: Jintack Lim @ 2019-10-16 22:01 UTC (permalink / raw)
  To: Peter Xu; +Cc: QEMU Devel Mailing List

On Mon, Oct 14, 2019 at 7:50 PM Peter Xu <peterx@redhat.com> wrote:
>
> On Mon, Oct 14, 2019 at 01:28:49PM -0700, Jintack Lim wrote:
> > Hi,
>
> Hello, Jintack,
>
Hi Peter,

> >
> > I'm trying to pass through a physical network device to a nested VM
> > using virtual IOMMU. While I was able to do it successfully using KVM
> > and Xen guest hypervisors running in a VM respectively, I couldn't do
> > it with Hyper-V as I described below. I wonder if anyone have
> > successfully used virtual IOMMU in other hypervisors other than KVM
> > and Xen? (like Hyper-V or VMware)
> >
> > The issue I have with Hyper-V is that Hyper-V gives an error that the
> > underlying hardware is not capable of doing passthrough. The exact
> > error message is as follows.
> >
> > Windows Power-shell > (Get-VMHost).IovSupportReasons
> > The chipset on the system does not do DMA remapping, without which
> > SR-IOV cannot be supported.
> >
> > I'm pretty sure that Hyper-V recognizes virtual IOMMU, though; I have
> > enabled iommu in windows boot loader[1], and I see differences when
> > booing a Windows VM with and without virtual IOMMU. I also checked
> > that virtual IOMMU traces are printed.
>
> What traces have you checked?  More explicitly, have you seen DMAR
> enabled and page table setup for that specific device to be
> pass-throughed?

Thanks for the pointers. I checked that DMAR is NOT enabled. The only
registers the Windows guest accessed were the Version Register, the
Capability Register, and the Extended Capability Register. A Linux
guest, on the other hand, accessed other registers and enabled DMAR.
Here's a link to the trace I got using QEMU 4.1.0. Do you see anything
interesting there?
http://paste.ubuntu.com/p/YcSyxG9Z3x/

>
> >
> > I have tried multiple KVM/QEMU versions including the latest ones
> > (kernel v5.3, QEMU 4.1.0) as well as two different Windows servers
> > (2016 and 2019), but I see the same result. [4]
> >
> > I'd love to hear if somebody is using virtual IOMMU in Hyper-V or
> > VMware successfully, especially for passthrough. I also appreciate if
> > somebody can point out any configuration errors I have.
> >
> > Here's the qemu command line I use, basically from the QEMU vt-d
> > page[2] and Hyper-v on KVM from kvmforum [3].
> >
> > ./qemu/x86_64-softmmu/qemu-system-x86_64 -device
> > intel-iommu,intremap=on,caching-mode=on -smp 6 -m 24G -M
>
> Have you tried to use 4-level IOMMU page table (aw-bits=48 on latest
> QEMU, or x-aw-bits=48 on some old ones)?  IIRC we've encountered
> issues when trying to pass the SVVP Windows test with this, in which
> 4-level is required.  I'm not sure whether whether that is required in
> general usages of vIOMMU in Windows.

I just tried the option you mentioned, but it didn't change anything.
BTW, what version of Windows was it?

>
> > q35,accel=kvm,kernel-irqchip=split -cpu
> > host,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time -drive
> > if=none,file=/vm/guest0.img,id=vda,cache=none,format=raw -device
> > virtio-blk-pci,drive=vda --nographic -qmp
> > unix:/var/run/qmp,server,nowait -serial
> > telnet:127.0.0.1:4444,server,nowait -netdev
> > user,id=net0,hostfwd=tcp::2222-:22 -device
> > virtio-net-pci,netdev=net0,mac=de:ad:be:ef:f2:12 -netdev
> > tap,id=net1,vhost=on,helper=/srv/vm/qemu/qemu-bridge-helper -device
> > virtio-net-pci,netdev=net1,disable-modern=off,disable-legacy=on,mac=de:ad:be:ef:f2:11
> > -device vfio-pci,host=0000:06:10.0,id=net2 -monitor stdio -usb -device
> > usb-tablet -rtc base=localtime,clock=host -vnc 127.0.0.1:4 --cdrom
> > win19.iso --drive file=virtio-win.iso,index=3,media=cdrom
>
> --
> Peter Xu
>



* Re: Using virtual IOMMU in guest hypervisors other than KVM and Xen?
  2019-10-16 22:01   ` Jintack Lim
@ 2019-10-19  3:36     ` Peter Xu
  2019-10-19  6:19       ` Jintack Lim
  0 siblings, 1 reply; 7+ messages in thread
From: Peter Xu @ 2019-10-19  3:36 UTC (permalink / raw)
  To: Jintack Lim; +Cc: QEMU Devel Mailing List

On Wed, Oct 16, 2019 at 03:01:22PM -0700, Jintack Lim wrote:
> On Mon, Oct 14, 2019 at 7:50 PM Peter Xu <peterx@redhat.com> wrote:
> >
> > On Mon, Oct 14, 2019 at 01:28:49PM -0700, Jintack Lim wrote:
> > > Hi,
> >
> > Hello, Jintack,
> >
> Hi Peter,
> 
> > >
> > > I'm trying to pass through a physical network device to a nested VM
> > > using virtual IOMMU. While I was able to do it successfully using KVM
> > > and Xen guest hypervisors running in a VM respectively, I couldn't do
> > > it with Hyper-V as I described below. I wonder if anyone have
> > > successfully used virtual IOMMU in other hypervisors other than KVM
> > > and Xen? (like Hyper-V or VMware)
> > >
> > > The issue I have with Hyper-V is that Hyper-V gives an error that the
> > > underlying hardware is not capable of doing passthrough. The exact
> > > error message is as follows.
> > >
> > > Windows Power-shell > (Get-VMHost).IovSupportReasons
> > > The chipset on the system does not do DMA remapping, without which
> > > SR-IOV cannot be supported.
> > >
> > > I'm pretty sure that Hyper-V recognizes virtual IOMMU, though; I have
> > > enabled iommu in windows boot loader[1], and I see differences when
> > > booing a Windows VM with and without virtual IOMMU. I also checked
> > > that virtual IOMMU traces are printed.
> >
> > What traces have you checked?  More explicitly, have you seen DMAR
> > enabled and page table setup for that specific device to be
> > pass-throughed?
> 
> Thanks for the pointers. I checked that DMAR is NOT enabled. The only
> registers that Windows guest accessed were Version Register,
> Capability Register, and Extended Capability Register. On the other
> hand, a Linux guest accessed other registers and enabled DMAR.
> Here's a link to the trace I got using QEMU 4.1.0. Do you see anything
> interesting there?
> http://paste.ubuntu.com/p/YcSyxG9Z3x/

Then I feel like Windows is reluctant to enable DMAR because it is
missing some capabilities it expects.

> 
> >
> > >
> > > I have tried multiple KVM/QEMU versions including the latest ones
> > > (kernel v5.3, QEMU 4.1.0) as well as two different Windows servers
> > > (2016 and 2019), but I see the same result. [4]
> > >
> > > I'd love to hear if somebody is using virtual IOMMU in Hyper-V or
> > > VMware successfully, especially for passthrough. I also appreciate if
> > > somebody can point out any configuration errors I have.
> > >
> > > Here's the qemu command line I use, basically from the QEMU vt-d
> > > page[2] and Hyper-v on KVM from kvmforum [3].
> > >
> > > ./qemu/x86_64-softmmu/qemu-system-x86_64 -device
> > > intel-iommu,intremap=on,caching-mode=on -smp 6 -m 24G -M
> >
> > Have you tried to use 4-level IOMMU page table (aw-bits=48 on latest
> > QEMU, or x-aw-bits=48 on some old ones)?  IIRC we've encountered
> > issues when trying to pass the SVVP Windows test with this, in which
> > 4-level is required.  I'm not sure whether whether that is required in
> > general usages of vIOMMU in Windows.
> 
> I just tried the option you mentioned, but it didn't change anything.
> BTW, what version of Windows was it?

Sorry, I don't remember. I didn't run the test myself; I was only told
that the test passed with it.  I assume you're using the latest QEMU
here, because I know Windows can require another capability (DMA
draining), and that should be on by default in the latest QEMU master.
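
(If it were not on by default in the QEMU version under test, my
understanding is that the corresponding knob is the dma-drain property
of the vIOMMU, roughly:

  -device intel-iommu,intremap=on,dma-drain=on

though I'd double-check the exact property name against the version
in use.)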

At that time the complete cmdline to pass the test should be:

  -device intel-iommu,intremap=on,aw-bits=48,caching-mode=off,eim=on

I also don't remember why caching-mode needed to be off at the time
(otherwise SVVP would fail too).

-- 
Peter Xu




* Re: Using virtual IOMMU in guest hypervisors other than KVM and Xen?
  2019-10-19  3:36     ` Peter Xu
@ 2019-10-19  6:19       ` Jintack Lim
  2019-10-21  0:44         ` Peter Xu
  0 siblings, 1 reply; 7+ messages in thread
From: Jintack Lim @ 2019-10-19  6:19 UTC (permalink / raw)
  To: Peter Xu; +Cc: QEMU Devel Mailing List

On Fri, Oct 18, 2019 at 8:37 PM Peter Xu <peterx@redhat.com> wrote:
>
> On Wed, Oct 16, 2019 at 03:01:22PM -0700, Jintack Lim wrote:
> > On Mon, Oct 14, 2019 at 7:50 PM Peter Xu <peterx@redhat.com> wrote:
> > >
> > > On Mon, Oct 14, 2019 at 01:28:49PM -0700, Jintack Lim wrote:
> > > > Hi,
> > >
> > > Hello, Jintack,
> > >
> > Hi Peter,
> >
> > > >
> > > > I'm trying to pass through a physical network device to a nested VM
> > > > using virtual IOMMU. While I was able to do it successfully using KVM
> > > > and Xen guest hypervisors running in a VM respectively, I couldn't do
> > > > it with Hyper-V as I described below. I wonder if anyone have
> > > > successfully used virtual IOMMU in other hypervisors other than KVM
> > > > and Xen? (like Hyper-V or VMware)
> > > >
> > > > The issue I have with Hyper-V is that Hyper-V gives an error that the
> > > > underlying hardware is not capable of doing passthrough. The exact
> > > > error message is as follows.
> > > >
> > > > Windows Power-shell > (Get-VMHost).IovSupportReasons
> > > > The chipset on the system does not do DMA remapping, without which
> > > > SR-IOV cannot be supported.
> > > >
> > > > I'm pretty sure that Hyper-V recognizes virtual IOMMU, though; I have
> > > > enabled iommu in windows boot loader[1], and I see differences when
> > > > booing a Windows VM with and without virtual IOMMU. I also checked
> > > > that virtual IOMMU traces are printed.
> > >
> > > What traces have you checked?  More explicitly, have you seen DMAR
> > > enabled and page table setup for that specific device to be
> > > pass-throughed?
> >
> > Thanks for the pointers. I checked that DMAR is NOT enabled. The only
> > registers that Windows guest accessed were Version Register,
> > Capability Register, and Extended Capability Register. On the other
> > hand, a Linux guest accessed other registers and enabled DMAR.
> > Here's a link to the trace I got using QEMU 4.1.0. Do you see anything
> > interesting there?
> > http://paste.ubuntu.com/p/YcSyxG9Z3x/
>
> Then I feel like Windows is reluctant to enable DMAR due to lacking of
> some caps.
>
> >
> > >
> > > >
> > > > I have tried multiple KVM/QEMU versions including the latest ones
> > > > (kernel v5.3, QEMU 4.1.0) as well as two different Windows servers
> > > > (2016 and 2019), but I see the same result. [4]
> > > >
> > > > I'd love to hear if somebody is using virtual IOMMU in Hyper-V or
> > > > VMware successfully, especially for passthrough. I also appreciate if
> > > > somebody can point out any configuration errors I have.
> > > >
> > > > Here's the qemu command line I use, basically from the QEMU vt-d
> > > > page[2] and Hyper-v on KVM from kvmforum [3].
> > > >
> > > > ./qemu/x86_64-softmmu/qemu-system-x86_64 -device
> > > > intel-iommu,intremap=on,caching-mode=on -smp 6 -m 24G -M
> > >
> > > Have you tried to use 4-level IOMMU page table (aw-bits=48 on latest
> > > QEMU, or x-aw-bits=48 on some old ones)?  IIRC we've encountered
> > > issues when trying to pass the SVVP Windows test with this, in which
> > > 4-level is required.  I'm not sure whether whether that is required in
> > > general usages of vIOMMU in Windows.
> >
> > I just tried the option you mentioned, but it didn't change anything.
> > BTW, what version of Windows was it?
>
> Sorry I don't remember that. I didn't do the test but I was just
> acknowledged that with it the test passed.  I assume you're using the
> latest QEMU here because I know Windows could require another
> capability (DMA draining) and it should be on by default in latest
> qemu master.

Thanks. Yes, I plan to use v2.11.0 eventually, but I'm trying to make
things work with the latest version first.

>
> At that time the complete cmdline to pass the test should be:
>
>   -device intel-iommu,intremap=on,aw-bits=48,caching-mode=off,eim=on
>
> I also don't remember on why caching-mode needs to be off at that
> time (otherwise SVVP fails too).

Thanks for providing the cmdline. However, turning off caching-mode
with an assigned device resulted in the following error on VM boot:
"We need to set caching-mode=on for intel-iommu to enable device assignment."
Does this mean that we can't assign a physical device all the way to a
nested VM with a Windows L1 hypervisor as of now?

Without assigning a device, I was able to boot a Windows VM with the
cmdline above and I see that DMAR in vIOMMU is enabled. Windows still
complains about DMA remapping, though. I'll investigate further.
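
For the record, the two variants I compared were roughly these (a
sketch based on the full command line I posted earlier, with
everything else unchanged):

  # boots and Windows enables DMAR, but QEMU rejects vfio-pci with this:
  -device intel-iommu,intremap=on,aw-bits=48,caching-mode=off,eim=on

  # accepted with vfio-pci, but Windows did not enable DMAR in my tests:
  -device intel-iommu,intremap=on,aw-bits=48,caching-mode=on,eim=on \
  -device vfio-pci,host=0000:06:10.0,id=net2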

>
> --
> Peter Xu
>



* Re: Using virtual IOMMU in guest hypervisors other than KVM and Xen?
  2019-10-19  6:19       ` Jintack Lim
@ 2019-10-21  0:44         ` Peter Xu
  2019-10-21 11:33           ` Vitaly Kuznetsov
  0 siblings, 1 reply; 7+ messages in thread
From: Peter Xu @ 2019-10-21  0:44 UTC (permalink / raw)
  To: Jintack Lim; +Cc: Vitaly Kuznetsov, QEMU Devel Mailing List

On Fri, Oct 18, 2019 at 11:19:55PM -0700, Jintack Lim wrote:
> On Fri, Oct 18, 2019 at 8:37 PM Peter Xu <peterx@redhat.com> wrote:
> >
> > On Wed, Oct 16, 2019 at 03:01:22PM -0700, Jintack Lim wrote:
> > > On Mon, Oct 14, 2019 at 7:50 PM Peter Xu <peterx@redhat.com> wrote:
> > > >
> > > > On Mon, Oct 14, 2019 at 01:28:49PM -0700, Jintack Lim wrote:
> > > > > Hi,
> > > >
> > > > Hello, Jintack,
> > > >
> > > Hi Peter,
> > >
> > > > >
> > > > > I'm trying to pass through a physical network device to a nested VM
> > > > > using virtual IOMMU. While I was able to do it successfully using KVM
> > > > > and Xen guest hypervisors running in a VM respectively, I couldn't do
> > > > > it with Hyper-V as I described below. I wonder if anyone have
> > > > > successfully used virtual IOMMU in other hypervisors other than KVM
> > > > > and Xen? (like Hyper-V or VMware)
> > > > >
> > > > > The issue I have with Hyper-V is that Hyper-V gives an error that the
> > > > > underlying hardware is not capable of doing passthrough. The exact
> > > > > error message is as follows.
> > > > >
> > > > > Windows Power-shell > (Get-VMHost).IovSupportReasons
> > > > > The chipset on the system does not do DMA remapping, without which
> > > > > SR-IOV cannot be supported.
> > > > >
> > > > > I'm pretty sure that Hyper-V recognizes virtual IOMMU, though; I have
> > > > > enabled iommu in windows boot loader[1], and I see differences when
> > > > > booing a Windows VM with and without virtual IOMMU. I also checked
> > > > > that virtual IOMMU traces are printed.
> > > >
> > > > What traces have you checked?  More explicitly, have you seen DMAR
> > > > enabled and page table setup for that specific device to be
> > > > pass-throughed?
> > >
> > > Thanks for the pointers. I checked that DMAR is NOT enabled. The only
> > > registers that Windows guest accessed were Version Register,
> > > Capability Register, and Extended Capability Register. On the other
> > > hand, a Linux guest accessed other registers and enabled DMAR.
> > > Here's a link to the trace I got using QEMU 4.1.0. Do you see anything
> > > interesting there?
> > > http://paste.ubuntu.com/p/YcSyxG9Z3x/
> >
> > Then I feel like Windows is reluctant to enable DMAR due to lacking of
> > some caps.
> >
> > >
> > > >
> > > > >
> > > > > I have tried multiple KVM/QEMU versions including the latest ones
> > > > > (kernel v5.3, QEMU 4.1.0) as well as two different Windows servers
> > > > > (2016 and 2019), but I see the same result. [4]
> > > > >
> > > > > I'd love to hear if somebody is using virtual IOMMU in Hyper-V or
> > > > > VMware successfully, especially for passthrough. I also appreciate if
> > > > > somebody can point out any configuration errors I have.
> > > > >
> > > > > Here's the qemu command line I use, basically from the QEMU vt-d
> > > > > page[2] and Hyper-v on KVM from kvmforum [3].
> > > > >
> > > > > ./qemu/x86_64-softmmu/qemu-system-x86_64 -device
> > > > > intel-iommu,intremap=on,caching-mode=on -smp 6 -m 24G -M
> > > >
> > > > Have you tried to use 4-level IOMMU page table (aw-bits=48 on latest
> > > > QEMU, or x-aw-bits=48 on some old ones)?  IIRC we've encountered
> > > > issues when trying to pass the SVVP Windows test with this, in which
> > > > 4-level is required.  I'm not sure whether whether that is required in
> > > > general usages of vIOMMU in Windows.
> > >
> > > I just tried the option you mentioned, but it didn't change anything.
> > > BTW, what version of Windows was it?
> >
> > Sorry I don't remember that. I didn't do the test but I was just
> > acknowledged that with it the test passed.  I assume you're using the
> > latest QEMU here because I know Windows could require another
> > capability (DMA draining) and it should be on by default in latest
> > qemu master.
> 
> Thanks. Yes, I plan to use v2.11.0 eventually, but I'm trying to make
> things work with the latest version first.
> 
> >
> > At that time the complete cmdline to pass the test should be:
> >
> >   -device intel-iommu,intremap=on,aw-bits=48,caching-mode=off,eim=on
> >
> > I also don't remember on why caching-mode needs to be off at that
> > time (otherwise SVVP fails too).
> 
> Thanks for providing the cmdline. However, turning off the
> caching-mode with an assigned device resulted in the following error
> on VM boot.
> "We need to set caching-mode=on for intel-iommu to enable device assignment."
> Does this mean that we can't assign a physical device all the way to a
> nested VM with a Windows L1 hypervisor as of now?
> 
> Without assigning a device, I was able to boot a Windows VM with the
> cmdline above and I see that DMAR in vIOMMU is enabled. Windows still
> complains about DMA remapping, though. I'll investigate further.

We're going to have other ways to do device assignment in the future,
leveraging the coming nested device page tables just as on ARM, but
it's still a long way until even the hardware is ready...  And we also
don't know whether Microsoft will be unhappy again about these new
bits. :)

So, I would consider bouncing that question to Microsoft, because we
first need to understand why Windows does not allow caching-mode to be
set.

My wild guess is that caching-mode requires some extra work in the
guest OS IOMMU layer that Windows currently lacks, while they might
have some other way of their own to do device assignment to L2
(e.g., does Microsoft allow assigning devices to an L2 guest on the
Hyper-V cloud?).

CCing Vitaly, our Hyper-V/Windows expert, in case he has further input.

-- 
Peter Xu



* Re: Using virtual IOMMU in guest hypervisors other than KVM and Xen?
  2019-10-21  0:44         ` Peter Xu
@ 2019-10-21 11:33           ` Vitaly Kuznetsov
  0 siblings, 0 replies; 7+ messages in thread
From: Vitaly Kuznetsov @ 2019-10-21 11:33 UTC (permalink / raw)
  To: Peter Xu; +Cc: Jintack Lim, QEMU Devel Mailing List

Peter Xu <peterx@redhat.com> writes:

> On Fri, Oct 18, 2019 at 11:19:55PM -0700, Jintack Lim wrote:
>> On Fri, Oct 18, 2019 at 8:37 PM Peter Xu <peterx@redhat.com> wrote:
>> >
>> > On Wed, Oct 16, 2019 at 03:01:22PM -0700, Jintack Lim wrote:
>> > > On Mon, Oct 14, 2019 at 7:50 PM Peter Xu <peterx@redhat.com> wrote:
>> > > >
>> > > > On Mon, Oct 14, 2019 at 01:28:49PM -0700, Jintack Lim wrote:
>> > > > > Hi,
>> > > >
>> > > > Hello, Jintack,
>> > > >
>> > > Hi Peter,
>> > >
>> > > > >
>> > > > > I'm trying to pass through a physical network device to a nested VM
>> > > > > using virtual IOMMU. While I was able to do it successfully using KVM
>> > > > > and Xen guest hypervisors running in a VM respectively, I couldn't do
>> > > > > it with Hyper-V as I described below. I wonder if anyone have
>> > > > > successfully used virtual IOMMU in other hypervisors other than KVM
>> > > > > and Xen? (like Hyper-V or VMware)
>> > > > >
>> > > > > The issue I have with Hyper-V is that Hyper-V gives an error that the
>> > > > > underlying hardware is not capable of doing passthrough. The exact
>> > > > > error message is as follows.
>> > > > >
>> > > > > Windows Power-shell > (Get-VMHost).IovSupportReasons
>> > > > > The chipset on the system does not do DMA remapping, without which
>> > > > > SR-IOV cannot be supported.
>> > > > >
>> > > > > I'm pretty sure that Hyper-V recognizes virtual IOMMU, though; I have
>> > > > > enabled iommu in windows boot loader[1], and I see differences when
>> > > > > booing a Windows VM with and without virtual IOMMU. I also checked
>> > > > > that virtual IOMMU traces are printed.
>> > > >
>> > > > What traces have you checked?  More explicitly, have you seen DMAR
>> > > > enabled and page table setup for that specific device to be
>> > > > pass-throughed?
>> > >
>> > > Thanks for the pointers. I checked that DMAR is NOT enabled. The only
>> > > registers that Windows guest accessed were Version Register,
>> > > Capability Register, and Extended Capability Register. On the other
>> > > hand, a Linux guest accessed other registers and enabled DMAR.
>> > > Here's a link to the trace I got using QEMU 4.1.0. Do you see anything
>> > > interesting there?
>> > > http://paste.ubuntu.com/p/YcSyxG9Z3x/
>> >
>> > Then I feel like Windows is reluctant to enable DMAR due to lacking of
>> > some caps.
>> >
>> > >
>> > > >
>> > > > >
>> > > > > I have tried multiple KVM/QEMU versions including the latest ones
>> > > > > (kernel v5.3, QEMU 4.1.0) as well as two different Windows servers
>> > > > > (2016 and 2019), but I see the same result. [4]
>> > > > >
>> > > > > I'd love to hear if somebody is using virtual IOMMU in Hyper-V or
>> > > > > VMware successfully, especially for passthrough. I also appreciate if
>> > > > > somebody can point out any configuration errors I have.
>> > > > >
>> > > > > Here's the qemu command line I use, basically from the QEMU vt-d
>> > > > > page[2] and Hyper-v on KVM from kvmforum [3].
>> > > > >
>> > > > > ./qemu/x86_64-softmmu/qemu-system-x86_64 -device
>> > > > > intel-iommu,intremap=on,caching-mode=on -smp 6 -m 24G -M
>> > > >
>> > > > Have you tried to use 4-level IOMMU page table (aw-bits=48 on latest
>> > > > QEMU, or x-aw-bits=48 on some old ones)?  IIRC we've encountered
>> > > > issues when trying to pass the SVVP Windows test with this, in which
>> > > > 4-level is required.  I'm not sure whether whether that is required in
>> > > > general usages of vIOMMU in Windows.
>> > >
>> > > I just tried the option you mentioned, but it didn't change anything.
>> > > BTW, what version of Windows was it?
>> >
>> > Sorry I don't remember that. I didn't do the test but I was just
>> > acknowledged that with it the test passed.  I assume you're using the
>> > latest QEMU here because I know Windows could require another
>> > capability (DMA draining) and it should be on by default in latest
>> > qemu master.
>> 
>> Thanks. Yes, I plan to use v2.11.0 eventually, but I'm trying to make
>> things work with the latest version first.
>> 
>> >
>> > At that time the complete cmdline to pass the test should be:
>> >
>> >   -device intel-iommu,intremap=on,aw-bits=48,caching-mode=off,eim=on
>> >
>> > I also don't remember on why caching-mode needs to be off at that
>> > time (otherwise SVVP fails too).
>> 
>> Thanks for providing the cmdline. However, turning off the
>> caching-mode with an assigned device resulted in the following error
>> on VM boot.
>> "We need to set caching-mode=on for intel-iommu to enable device assignment."
>> Does this mean that we can't assign a physical device all the way to a
>> nested VM with a Windows L1 hypervisor as of now?
>> 
>> Without assigning a device, I was able to boot a Windows VM with the
>> cmdline above and I see that DMAR in vIOMMU is enabled. Windows still
>> complains about DMA remapping, though. I'll investigate further.
>
> We're going to have other ways to do device assignment in the future
> leveraging the coming nested device page tables just like ARM, but
> it's still a long way until even the hardware is ready...  And we also
> don't know whether Microsoft will be unhappy again on these new
> bits. :)
>
> So, I would consider bouncing that question to Microsoft, because we
> first need to understand why Windows do not allow caching-mode to be
> set..
>
> My wild guess is that caching-mode will require some extra overhead in
> guest OS IOMMU layer so Windows is currently lacking of, while they
> might have some other way to do device assignment to L2 on their own
> (e.g., do Microsoft allow to assign devices to L2 guest if on Hyperv
> cloud)?

AFAIU, currently not: they don't have vIOMMU support at all. 

>
> CCing Vitaly as the Hyperv/Windows expert in case for further input.

I don't know if this is useful or not, but the Hyper-V TLFS has the
following bits defined:

CPUID 0x40000004.EAX:

Bit 6: Recommend using DMA remapping.
Bit 7: Recommend using interrupt remapping.
Bit 8: Recommend using x2APIC MSRs.

AFAIR we don't set these bits in KVM/QEMU with the currently implemented
Hyper-V enlightenments. I'd suggest hacking things up to set them and
seeing what happens :-) My guess is that it may even work; looking at
Linux commit 29217a474683, it doesn't seem that we need a new hypercall
or anything specific.
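
If it helps as a starting point for such a hack: the leaf in question
is the Hyper-V enlightenment recommendations leaf that QEMU builds for
the guest (a sketch; the identifier below is from my recollection of
recent QEMU sources and may differ in the tree being used):

  # find where CPUID leaf 0x40000004 is populated,
  # then OR bits 6 and 7 into EAX there:
  grep -rn "HV_CPUID_ENLIGHTMENT_INFO\|0x40000004" target/i386/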

-- 
Vitaly



