From: Alex Williamson <alex.williamson@redhat.com>
To: Jike Song <jike.song@intel.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	"Tian, Kevin" <kevin.tian@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"igvt-g@ml01.01.org" <igvt-g@ml01.01.org>,
	"intel-gfx@lists.freedesktop.org"
	<intel-gfx@lists.freedesktop.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"White, Michael L" <michael.l.white@intel.com>,
	"Dong, Eddie" <eddie.dong@intel.com>,
	"Li, Susie" <susie.li@intel.com>,
	"Cowperthwaite, David J" <david.j.cowperthwaite@intel.com>,
	"Reddy, Raghuveer" <raghuveer.reddy@intel.com>,
	"Zhu, Libo" <libo.zhu@intel.com>,
	"Zhou, Chao" <chao.zhou@intel.com>,
	"Wang, Hongbo" <hongbo.wang@intel.com>,
	"Lv, Zhiyuan" <zhiyuan.lv@intel.com>,
	qemu-devel <qemu-devel@nongnu.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>
Subject: Re: [Intel-gfx] [Announcement] 2015-Q3 release of XenGT - a Mediated Graphics Passthrough Solution from Intel
Date: Fri, 20 Nov 2015 09:40:40 -0700
Message-ID: <1448037640.4697.266.camel@redhat.com>
In-Reply-To: <564EB4F2.9080605@intel.com>

On Fri, 2015-11-20 at 13:51 +0800, Jike Song wrote:
> On 11/20/2015 12:22 PM, Alex Williamson wrote:
> > On Fri, 2015-11-20 at 10:58 +0800, Jike Song wrote:
> >> On 11/19/2015 11:52 PM, Alex Williamson wrote:
> >>> On Thu, 2015-11-19 at 15:32 +0000, Stefano Stabellini wrote:
> >>>> On Thu, 19 Nov 2015, Jike Song wrote:
> >>>>> Hi Alex, thanks for the discussion.
> >>>>>
> >>>>> In addition to Kevin's replies, I have a high-level question: can VFIO
> >>>>> be used by QEMU for both KVM and Xen?
> >>>>
> >>>> No. VFIO cannot be used with Xen today. When running on Xen, the IOMMU
> >>>> is owned by Xen.
> >>>
> >>> Right, but in this case we're talking about device MMUs, which are
> >>> owned by the device driver that, I believe, runs in dom0, right?  This
> >>> proposal doesn't require support of the system IOMMU; the dom0 driver
> >>> maps IOVA translations just as it would for itself.  We're largely
> >>> proposing use of the VFIO API to provide a common interface to expose a
> >>> PCI(e) device to QEMU, but what happens in the vGPU vendor device and
> >>> IOMMU backends is specific to the device and perhaps even specific to
> >>> the hypervisor.  Thanks,
> >>
> >> Let me summarize, and please correct me if I've misread anything: the
> >> vGPU interface between the kernel and QEMU will be through VFIO, with a
> >> new VFIO IOMMU backend (instead of the existing type1), for both KVMGT
> >> and XenGT?
> >
> > My primary concern is KVM and QEMU upstream; the proposal is not
> > specifically directed at XenGT, but it does not exclude it either.  Xen
> > is welcome to adopt this proposal as well; it simply defines the channel
> > through which vGPUs are exposed to QEMU as the VFIO API.  The core VFIO
> > code in the Linux kernel is just as available for use in Xen dom0 as it
> > is for a KVM host.  VFIO in QEMU certainly knows about some
> > accelerations for KVM, but these are almost entirely around allowing
> > eventfd-based interrupts to be injected through KVM, which is something
> > I'm sure Xen could provide as well.  These accelerations are also not
> > required; VFIO-based device assignment in QEMU works with or without
> > KVM.  Likewise, the VFIO kernel interface knows nothing about KVM and
> > has no dependencies on it.
> >
> > There are two components to the VFIO API: one is the type1-compliant
> > IOMMU interface, which for this proposal is really doing nothing more
> > than tracking the HVA-to-GPA mappings for the VM.  That much seems
> > entirely common regardless of the hypervisor.  The other part is the
> > device interface.  The lifecycle of the virtual device seems like it
> > would be entirely shared, as does much of the device emulation.  When
> > we get to pinning pages, providing direct access to memory ranges for
> > a VM, and accelerating interrupts, the vGPU drivers will likely need
> > some per-hypervisor branches, but those are areas where that's true no
> > matter what the interface is.  I'm probably oversimplifying, but
> > hopefully not too much; correct me if I'm wrong.
> >
> 
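
For concreteness, the type1 part described above amounts to little more
than an ioctl on the VFIO container.  A minimal sketch in C, assuming
the standard uAPI from <linux/vfio.h> (the helper name is mine;
container/group setup and error handling are elided):

#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Track an HVA->GPA mapping for the VM through the type1 backend:
 * vaddr is the address in the QEMU process, iova the guest-physical
 * address it backs. */
int map_guest_ram(int container, void *hva, __u64 gpa, __u64 size)
{
	struct vfio_iommu_type1_dma_map map = {
		.argsz = sizeof(map),
		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
		.vaddr = (__u64)(unsigned long)hva,
		.iova  = gpa,
		.size  = size,
	};

	/* container is /dev/vfio/vfio with a group attached and
	 * VFIO_SET_IOMMU done with VFIO_TYPE1_IOMMU; both elided. */
	return ioctl(container, VFIO_IOMMU_MAP_DMA, &map);
}
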
> Thanks for the confirmation. For QEMU/KVM, I totally agree with your
> point; however, if we take XenGT into account, it becomes a bit more
> complex: with the Xen hypervisor and the Dom0 kernel running at
> different levels, it isn't straightforward for QEMU to do something
> like mapping a portion of an MMIO BAR via VFIO in the Dom0 kernel,
> instead of calling hypercalls directly.

This would need to be part of the support added for Xen.  To directly
map a device's MMIO space into the VM, VFIO provides an mmap of the
region, and QEMU registers that mapping with KVM, or Xen.  It's all
just MemoryRegions in QEMU.  Perhaps it's even already supported by
Xen.
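
Roughly, the mmap side looks like this, assuming the standard vfio-pci
region uAPI (the helper name is mine; the QEMU MemoryRegion
registration is elided):

#include <sys/mman.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Get a direct userspace mapping of a device BAR from the VFIO
 * device fd; QEMU can then wrap the pointer in a MemoryRegion and
 * register it with KVM, or Xen.  Caller checks for MAP_FAILED. */
void *map_bar(int device_fd, int bar)	/* bar: 0..5 */
{
	struct vfio_region_info info = {
		.argsz = sizeof(info),
		.index = VFIO_PCI_BAR0_REGION_INDEX + bar,
	};

	if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &info) < 0 ||
	    !(info.flags & VFIO_REGION_INFO_FLAG_MMAP))
		return MAP_FAILED;	/* region not mmap-capable */

	/* For vfio-pci, the mmap offset encodes the region index. */
	return mmap(NULL, info.size, PROT_READ | PROT_WRITE,
		    MAP_SHARED, device_fd, info.offset);
}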

> I don't know if there is a better way to handle this.  But I do agree
> that a channel between the kernel and QEMU via VFIO is a good idea,
> even though we may have to split KVMGT/XenGT in QEMU a bit.  We are
> currently working on moving all of the PCI config space emulation from
> the kernel to QEMU; hopefully we can release it by the end of this year
> and work with you guys to adjust it for the agreed method.

Well, moving PCI config space emulation from the kernel to QEMU is
exactly the wrong direction to take for this proposal.  Config space
access to the vGPU would occur through the VFIO API.  So if you already
have config space emulation in the kernel, that's already one less
piece of work for a VFIO model; it just needs to be "wired up" through
the VFIO API.
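
To illustrate: with vfio-pci, config space is just another region on
the device fd, so QEMU's accesses land in whatever emulation backs it.
A minimal sketch, assuming the standard uAPI (the helper name is mine;
error handling trimmed):

#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Read the PCI vendor ID of the (v)GPU through the VFIO config
 * region; the kernel-side emulation answers the access. */
int read_vendor_id(int device_fd, unsigned short *vendor)
{
	struct vfio_region_info info = {
		.argsz = sizeof(info),
		.index = VFIO_PCI_CONFIG_REGION_INDEX,
	};

	if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &info) < 0)
		return -1;

	/* Offset 0 of config space holds the vendor ID. */
	return pread(device_fd, vendor, sizeof(*vendor),
		     info.offset) == sizeof(*vendor) ? 0 : -1;
}

Thanks,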

Alex

