From: Leo Yan <leo.yan@linaro.org>
To: Auger Eric <eric.auger@redhat.com>
Cc: Daniel Thompson <daniel.thompson@linaro.org>,
	Robin Murphy <robin.murphy@arm.com>,
	kvmarm@lists.cs.columbia.edu
Subject: Re: Question: KVM: Failed to bind vfio with PCI-e / SMMU on Juno-r2
Date: Wed, 13 Mar 2019 19:35:49 +0800
Message-ID: <20190313113549.GK13422@leoy-ThinkPad-X240s>
In-Reply-To: <35c22d0c-7da5-4e68-effb-05c8571d8b63@redhat.com>

Hi Eric,

On Wed, Mar 13, 2019 at 11:01:33AM +0100, Auger Eric wrote:

[...]

> >   I want to confirm: is it the recommended mode for a passed-through
> >   PCI-e device to use MSI in both the host OS and the guest OS?  Or
> >   is it fine for the host OS to use MSI while the guest OS uses INTx
> >   mode?
> 
> If the NIC supports MSIs they are normally used. This can easily be
> checked on the host by issuing "cat /proc/interrupts | grep vfio". Can
> you check whether the guest receives any interrupts? I remember Robin
> saying in the past that on Juno the MSI doorbell is in the PCI host
> bridge window, so transactions towards the doorbell possibly cannot
> reach it since they are considered peer-to-peer. Using GICv2M should
> not bring any performance issue; I tested that in the past with a
> Seattle board.

I can see the info below on the host after launching KVM:

root@debian:~# cat /proc/interrupts | grep vfio
 46:          0          0          0          0          0          0       MSI 4194304 Edge      vfio-msi[0](0000:08:00.0)

And below are the interrupts in the guest:

# cat /proc/interrupts
           CPU0       CPU1       CPU2       CPU3       CPU4       CPU5
  3:        506        400        281        403        298        330     GIC-0  27 Level     arch_timer
  5:        768          0          0          0          0          0     GIC-0 101 Edge      virtio0
  6:        246          0          0          0          0          0     GIC-0 102 Edge      virtio1
  7:          2          0          0          0          0          0     GIC-0 103 Edge      virtio2
  8:        210          0          0          0          0          0     GIC-0  97 Level     ttyS0
 13:          0          0          0          0          0          0       MSI   0 Edge      eth1

> > - The second question is about GICv2m.  If I understand correctly, when
> >   passing a PCI-e device through to the guest OS, the data path below
> >   should be created for the device in the guest OS:
> >                                                             +--------+
> >                                                          -> | Memory |
> >     +-----------+    +------------------+    +-------+  /   +--------+
> >     | Net card  | -> | PCI-e controller | -> | IOMMU | -
> >     +-----------+    +------------------+    +-------+  \   +--------+
> >                                                          -> | MSI    |
> >                                                             | frame  |
> >                                                             +--------+
> > 
> >   Since the master is now the network card/PCI-e controller rather
> >   than the CPU, there are no two stages of memory access
> >   (VA->IPA->PA).  In this case, do we configure the IOMMU (SMMU) for
> >   the guest OS's address translation before switching from host to
> >   guest?  Or does the SMMU also have two-stage memory mapping?
> 
> In your use case you don't have any virtual IOMMU. So the guest programs
> the assigned device with guest physical addresses, and the virtualizer
> uses the physical IOMMU to translate each GPA into the host physical
> address backing the guest RAM and the MSI frame. A single stage of the
> physical IOMMU is used (stage 1).

Thanks a lot for the explanation.
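
To make sure I follow, here is a minimal sketch of how I understand the
VMM sets up that single-stage mapping with the physical SMMU through
VFIO's type1 IOMMU backend.  The container fd, addresses and sizes are
made up purely for illustration, and error handling is omitted:

  #include <linux/vfio.h>
  #include <sys/ioctl.h>
  #include <stdint.h>

  /*
   * Sketch: the VMM maps guest RAM into the physical IOMMU so that the
   * assigned device's DMA (programmed by the guest with GPAs) is
   * translated GPA -> HPA by the SMMU.  'container' is an already-opened
   * /dev/vfio/vfio fd with the group attached and VFIO_TYPE1_IOMMU
   * selected; 'guest_ram' is the host mmap backing guest memory.
   */
  static int map_guest_ram(int container, void *guest_ram,
                           uint64_t gpa, uint64_t size)
  {
          struct vfio_iommu_type1_dma_map dma_map = {
                  .argsz = sizeof(dma_map),
                  .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
                  .vaddr = (uintptr_t)guest_ram, /* host VA backing guest RAM */
                  .iova  = gpa,                  /* IOVA == guest PA here     */
                  .size  = size,
          };

          /* Installs the GPA -> HPA translation in the SMMU (stage 1). */
          return ioctl(container, VFIO_IOMMU_MAP_DMA, &dma_map);
  }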

> >   Another thing that confuses me: I can see the MSI frame is mapped
> >   to the GIC's physical address in the host OS, so the PCI-e device
> >   can send messages correctly to the MSI frame.  But for the guest OS,
> >   the MSI frame is mapped to an IPA memory region, and this region is
> >   used to emulate the GICv2 MSI frame rather than the hardware MSI
> >   frame; so will any access from the PCI-e device to this region trap
> >   to the hypervisor on the CPU side, so that the KVM hypervisor can
> >   help emulate (and inject) the interrupt for the guest OS?
> 
> When the device sends an MSI it uses a host-allocated IOVA for the
> physical MSI doorbell. This gets translated by the physical IOMMU and
> reaches the physical doorbell. The physical GICv2m triggers the
> associated physical SPI -> kvm irqfd -> virtual IRQ.
> With GICv2M we have direct GSI mapping on the guest.

Just to confirm: in the flow you described, the virtual IRQ is injected
by QEMU (or kvmtool) every time, but it does not need to be involved in
the IRQ's deactivation, right?
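
For my own reference, below is a rough sketch of how I understand that
wiring is done on the VMM side, assuming the VFIO MSI trigger eventfd is
handed to KVM with KVM_IRQFD so the physical SPI is forwarded to the
guest GSI in the kernel.  The fds and the GSI value are illustrative
only, and error handling is omitted:

  #include <linux/vfio.h>
  #include <linux/kvm.h>
  #include <sys/ioctl.h>
  #include <sys/eventfd.h>
  #include <string.h>
  #include <stdint.h>

  /*
   * Sketch: tie the device's MSI vector 0 to an eventfd via VFIO, then
   * hand the same eventfd to KVM with KVM_IRQFD so the host interrupt is
   * forwarded to the guest GSI inside the kernel.  'device' is the VFIO
   * device fd, 'vm' the KVM VM fd, 'gsi' the guest GSI (direct GSI
   * mapping with GICv2m).
   */
  static int wire_msi_to_guest(int device, int vm, uint32_t gsi)
  {
          int efd = eventfd(0, 0);

          /* VFIO_DEVICE_SET_IRQS with one eventfd for MSI index 0. */
          char buf[sizeof(struct vfio_irq_set) + sizeof(int32_t)];
          struct vfio_irq_set *irq_set = (struct vfio_irq_set *)buf;

          irq_set->argsz = sizeof(buf);
          irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
                           VFIO_IRQ_SET_ACTION_TRIGGER;
          irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
          irq_set->start = 0;
          irq_set->count = 1;
          memcpy(irq_set->data, &efd, sizeof(int32_t));

          if (ioctl(device, VFIO_DEVICE_SET_IRQS, irq_set) < 0)
                  return -1;

          /* Route the eventfd to the guest GSI: MSI -> irqfd -> vIRQ. */
          struct kvm_irqfd irqfd = {
                  .fd  = efd,
                  .gsi = gsi,
          };
          return ioctl(vm, KVM_IRQFD, &irqfd);
  }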

> >   Essentially, I want to check what the expected behaviour is for the
> >   GICv2 MSI frame when we pass a PCI-e device through to the guest OS
> >   and the device has one static MSI frame assigned to it.
> 
> Your config was tested in the past with Seattle (not with the sky2 NIC,
> though). Adding Robin for the potential peer-to-peer concern.

I very much appreciate your help.

Thanks,
Leo Yan
