From: "Tian, Kevin" <kevin.tian@intel.com>
To: Alex Williamson <alex.williamson@redhat.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>
Cc: "Song, Jike" <jike.song@intel.com>,
	"laine@redhat.com" <laine@redhat.com>,
	"eric.auger@linaro.org" <eric.auger@linaro.org>
Subject: Re: [Qemu-devel] [RFC PATCH] vfio: Add sysfsdev property for pci & platform
Date: Mon, 25 Jan 2016 19:27:40 +0000	[thread overview]
Message-ID: <AADFC41AFE54684AB9EE6CBC0274A5D15F78CBA3@SHSMSX101.ccr.corp.intel.com> (raw)
In-Reply-To: <1453389294.32741.360.camel@redhat.com>

> From: Alex Williamson [mailto:alex.williamson@redhat.com]
> Sent: Thursday, January 21, 2016 11:15 PM
> 
> On Thu, 2016-01-21 at 07:51 +0000, Tian, Kevin wrote:
> > > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > > Sent: Thursday, January 21, 2016 2:07 AM
> > >
> > > vfio-pci currently requires a host= parameter, which comes in the
> > > form of a PCI address in [domain:]<bus>:<slot>.<function> notation.  We
> > > expect to find a matching entry in sysfs for that under
> > > /sys/bus/pci/devices/.  vfio-platform takes a similar approach, but
> > > defines the host= parameter to be a string, which can be matched
> > > directly under /sys/bus/platform/devices/.  On the PCI side, we have
> > > some interest in using vfio to expose vGPU devices.  These are not
> > > actual discrete PCI devices, so they don't have a compatible host PCI
> > > bus address or a device link where QEMU wants to look for it.  There's
> > > also really no requirement that vfio can only be used to expose
> > > physical devices, a new vfio bus and iommu driver could expose a
> > > completely emulated device.  To fit within the vfio framework, it
> > > would need a kernel struct device and associated IOMMU group, but
> > > those are easy constraints to manage.
> > >
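A rough sketch of the host= to sysfs mapping described above (the type and
helper names here are illustrative, not the actual QEMU code): resolving a
parsed host= address to its canonical entry under /sys/bus/pci/devices/
amounts to formatting the DDDD:BB:SS.F form of the address.

    /* Illustrative only: derive the sysfs path for a host= PCI address. */
    #include <stdio.h>

    struct pci_host_addr {                 /* assumed parsed form of host= */
        unsigned int domain, bus, slot, function;
    };

    void host_to_sysfsdev(const struct pci_host_addr *a, char *buf, size_t len)
    {
        /* QEMU looks for a matching sysfs entry keyed by this notation. */
        snprintf(buf, len, "/sys/bus/pci/devices/%04x:%02x:%02x.%01x",
                 a->domain, a->bus, a->slot, a->function);
    }
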
> > > To support such devices, which would include vGPUs, that honor the
> > > VFIO PCI programming API, but are not necessarily backed by a unique
> > > PCI address, add support for specifying any device in sysfs.  The
> > > vfio API already has support for probing the device type to ensure
> > > compatibility with either vfio-pci or vfio-platform.
> > >
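The compatibility probe mentioned above can be sketched as below, assuming a
VFIO device fd has already been obtained; this only illustrates the
VFIO_DEVICE_GET_INFO flags and is not the QEMU implementation.

    /* Ask the VFIO device which API it implements (error handling trimmed). */
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    void report_device_type(int device_fd)
    {
        struct vfio_device_info info = { .argsz = sizeof(info) };

        if (ioctl(device_fd, VFIO_DEVICE_GET_INFO, &info))
            return;

        if (info.flags & VFIO_DEVICE_FLAGS_PCI)
            printf("vfio-pci compatible device\n");
        else if (info.flags & VFIO_DEVICE_FLAGS_PLATFORM)
            printf("vfio-platform compatible device\n");
    }
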
> > > With this, a vfio-pci device could either be specified as:
> > >
> > > -device vfio-pci,host=02:00.0
> > >
> > > or
> > >
> > > -device vfio-pci,sysfsdev=/sys/devices/pci0000:00/0000:00:1c.0/0000:02:00.0
> > >
> > > or even
> > >
> > > -device vfio-pci,sysfsdev=/sys/bus/pci/devices/0000:02:00.0
> > >
> > > When vGPU support comes along, this might look something more like:
> > >
> > > -device vfio-pci,sysfsdev=/sys/devices/virtual/intel-vgpu/vgpu0@0000:00:02.0
> > >
> > > NB - This is only a made up example path, but it should be noted that
> > > the device namespace is global for vfio, a virtual device cannot
> > > overlap with existing namespaces and should not create a name prone to
> > > conflict, such as a simple instance number.
> > >
> >
> > Thanks Alex! It's a good improvement to support the coming vGPU feature.
> > Just curious: does the virtual device name have to include a BDF format,
> > or can it be an arbitrary string (e.g. just "vgpu0")? In the latter case
> > the chance of overlap would be small.
> 
> Hi Kevin,
> 
> Yeah, looking at the vfio code again (vfio_device_get_from_name), as
> long as the name is unique within the IOMMU group, I think we'll be
> fine.  I expect that vGPUs will create singleton groups, so the
> namespace constraints I mention above are maybe not a concern.  For
> vendors that can support multiple GPUs, each with vGPUs, userspace will
> probably want some way to determine the source of a vGPU for load
> > balancing and locality purposes, but that's better handled through
> parent/child device links in sysfs rather than embedding it in the
> device name.  Thanks,
> 

Agree. It's clear now.

Thanks
Kevin
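
A userspace approximation of the per-group lookup Alex describes above (a
sketch only; the real check is done by vfio_device_get_from_name in the
kernel's vfio core): group members appear as directory entries under
/sys/kernel/iommu_groups/<group>/devices/, so uniqueness of a name within a
group can be verified by scanning them.

    /* Illustrative only: check whether a device name exists in an IOMMU group. */
    #include <stdio.h>
    #include <string.h>
    #include <dirent.h>

    int name_in_group(const char *group, const char *name)
    {
        char path[256];
        struct dirent *d;
        DIR *dir;
        int found = 0;

        snprintf(path, sizeof(path),
                 "/sys/kernel/iommu_groups/%s/devices", group);
        dir = opendir(path);
        if (!dir)
            return 0;

        while ((d = readdir(dir)))
            if (!strcmp(d->d_name, name))
                found = 1;

        closedir(dir);
        return found;
    }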


Thread overview: 10 messages
2016-01-20 18:06 [Qemu-devel] [RFC PATCH] vfio: Add sysfsdev property for pci & platform Alex Williamson
2016-01-20 18:11 ` Daniel P. Berrange
2016-01-20 18:28   ` Alex Williamson
2016-01-21  7:09     ` P J P
2016-01-21  7:51 ` Tian, Kevin
2016-01-21 15:14   ` Alex Williamson
2016-01-25 19:27     ` Tian, Kevin [this message]
2016-01-26 15:03 ` Eric Auger
2016-01-26 17:08   ` Alex Williamson
2016-02-01 17:32     ` Eric Auger
