KVM Archive on lore.kernel.org
From: Parav Pandit <parav@mellanox.com>
To: Zhenyu Wang <zhenyuw@linux.intel.com>
Cc: "kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"alex.williamson@redhat.com" <alex.williamson@redhat.com>,
	"kwankhede@nvidia.com" <kwankhede@nvidia.com>,
	"kevin.tian@intel.com" <kevin.tian@intel.com>,
	"cohuck@redhat.com" <cohuck@redhat.com>,
	Jiri Pirko <jiri@mellanox.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	Jason Wang <jasowang@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>
Subject: RE: [PATCH 0/6] VFIO mdev aggregated resources handling
Date: Wed, 4 Dec 2019 17:36:12 +0000
Message-ID: <AM0PR05MB4866757033043CC007B5C9CBD15D0@AM0PR05MB4866.eurprd05.prod.outlook.com> (raw)
In-Reply-To: <20191108081925.GH4196@zhen-hp.sh.intel.com>

+ Jiri + netdev, since you mentioned netdev queues.

+ Jason Wang and Michael, as we had a similar discussion in the vdpa discussion thread.

> From: Zhenyu Wang <zhenyuw@linux.intel.com>
> Sent: Friday, November 8, 2019 2:19 AM
> To: Parav Pandit <parav@mellanox.com>
> 

My apologies for replying late.
Something went wrong with my email client, due to which I only found this patch in my spam folder today.
More comments below.

> On 2019.11.07 20:37:49 +0000, Parav Pandit wrote:
> > Hi,
> >
> > > -----Original Message-----
> > > From: kvm-owner@vger.kernel.org <kvm-owner@vger.kernel.org> On
> > > Behalf Of Zhenyu Wang
> > > Sent: Thursday, October 24, 2019 12:08 AM
> > > To: kvm@vger.kernel.org
> > > Cc: alex.williamson@redhat.com; kwankhede@nvidia.com;
> > > kevin.tian@intel.com; cohuck@redhat.com
> > > Subject: [PATCH 0/6] VFIO mdev aggregated resources handling
> > >
> > > Hi,
> > >
> > > This is a refresh of a previous send of this series. I got the
> > > impression that some SIOV drivers would still deploy their own create
> > > and config methods, so I stopped the effort on this. But it seems this
> > > would still be useful for other SIOV drivers which may simply want the
> > > capability to aggregate resources. So here's the refreshed series.
> > >
> > > The current mdev device create interface depends on a fixed mdev
> > > type, which gets a uuid from the user to create an instance of the
> > > mdev device. If the user wants a customized amount of resources for
> > > an mdev device, then the only option is to create a new
> > Can you please give an example of a 'resource'?
> > When I grep [1], [2] and [3], I couldn't find anything related to 'aggregate'.
> 
> The resource is vendor-device specific; in the SIOV spec there's the ADI
> (Assignable Device Interface) definition, which could be e.g. a queue for
> a net device, a context for a GPU, etc. I just named this interface
> 'aggregate' for the aggregation purpose; it's not used in the spec doc.
> 

Some 'unknown/undefined' vendor-specific resource just doesn't work.
An orchestration tool doesn't know which resource to configure, or what and how to configure it, for which vendor.
It has to be well defined.

You can also find such a discussion in the recent lgpu DRM cgroup patch series (v4).

Exposing networking resource configuration through non-net-namespace-aware mdev sysfs at the PCI device level is a no-go.
Adding per-file NET_ADMIN or other checks is not the approach we follow in the kernel.

devlink is a subsystem which, though it lives under net, has a very rich interface for resource management, device health reporting, and much more.
Even though it is used by net drivers today, it is written for generic device management at the bus/device level.

Yuval has posted patches to manage PCI sub-devices [1], and an updated version addressing the review comments will be posted soon.

For any device-slice resource management (mdev, sub-function, etc.), we should be using a single kernel interface: devlink [2], [3].

[1] https://lore.kernel.org/netdev/1573229926-30040-1-git-send-email-yuvalav@mellanox.com/
[2] http://man7.org/linux/man-pages/man8/devlink-dev.8.html
[3] http://man7.org/linux/man-pages/man8/devlink-resource.8.html

Most modern device configuration that I am aware of is done either via well-defined ioctl()s of the subsystem (vhost, virtio, vfio, rdma, nvme, and more) or via netlink commands (net, devlink, rdma, and more), not via sysfs.
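For illustration, a devlink-based flow would look roughly like the sketch below. The device handle pci/0000:03:00.0 and the resource path /kvd/linear are placeholders, not from this thread; see devlink-resource(8) [3] for the actual semantics:

```shell
# Sketch: inspecting and resizing a device resource via the devlink tool.
# The device handle and resource path below are illustrative placeholders.
DEV=pci/0000:03:00.0
if command -v devlink >/dev/null 2>&1; then
    # Show the resource tree exposed by the device's driver.
    devlink resource show "$DEV" 2>/dev/null || echo "device $DEV not present"
    # Request a new size for one resource (takes effect on devlink reload):
    #   devlink resource set "$DEV" path /kvd/linear size 98304
else
    echo "devlink tool not available on this system"
fi
```

The point is that the resource names, sizes, and limits are part of a well-defined, vendor-neutral interface that an orchestration tool can query, rather than ad-hoc per-vendor sysfs attributes.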

> Thanks
> 
> >
> > > mdev type for that, which may not be flexible. This requirement comes
> > > not only from being able to allocate flexible resources for KVMGT,
> > > but also from Intel Scalable I/O Virtualization, which would use
> > > vfio/mdev to allocate arbitrary resources on an mdev instance.
> More info in [1] [2] [3] below.
> > >
> > > To allow creating user-defined resources for an mdev, this tries to
> > > extend the mdev create interface by adding a new "aggregate=xxx"
> > > parameter following the UUID. For a target mdev type that supports
> > > aggregation, it can create a new mdev device which combines the
> > > resources of that number of instances, e.g
> > >
> > >     echo "<uuid>,aggregate=10" > create
> > >
> > > A VM manager, e.g. libvirt, can check the mdev type for an
> > > "aggregation" attribute indicating that this setting is supported. If
> > > no "aggregation" attribute is found for the mdev type, the previous
> > > behavior of single-instance allocation is kept. A new sysfs attribute
> > > "aggregated_instances" is created for each mdev device to show the
> > > allocated number.
> > >
> > > References:
> > > [1] https://software.intel.com/en-us/download/intel-virtualization-technology-for-directed-io-architecture-specification
> > > [2] https://software.intel.com/en-us/download/intel-scalable-io-virtualization-technical-specification
> > > [3] https://schd.ws/hosted_files/lc32018/00/LC3-SIOV-final.pdf
> > >
> > > Zhenyu Wang (6):
> > >   vfio/mdev: Add new "aggregate" parameter for mdev create
> > >   vfio/mdev: Add "aggregation" attribute for supported mdev type
> > >   vfio/mdev: Add "aggregated_instances" attribute for supported mdev
> > >     device
> > >   Documentation/driver-api/vfio-mediated-device.rst: Update for
> > >     vfio/mdev aggregation support
> > >   Documentation/ABI/testing/sysfs-bus-vfio-mdev: Update for vfio/mdev
> > >     aggregation support
> > >   drm/i915/gvt: Add new type with aggregation support
> > >
> > >  Documentation/ABI/testing/sysfs-bus-vfio-mdev | 24 ++++++
> > >  .../driver-api/vfio-mediated-device.rst       | 23 ++++++
> > >  drivers/gpu/drm/i915/gvt/gvt.c                |  4 +-
> > >  drivers/gpu/drm/i915/gvt/gvt.h                | 11 ++-
> > >  drivers/gpu/drm/i915/gvt/kvmgt.c              | 53 ++++++++++++-
> > >  drivers/gpu/drm/i915/gvt/vgpu.c               | 56 ++++++++++++-
> > >  drivers/vfio/mdev/mdev_core.c                 | 36 ++++++++-
> > >  drivers/vfio/mdev/mdev_private.h              |  6 +-
> > >  drivers/vfio/mdev/mdev_sysfs.c                | 79 ++++++++++++++++++-
> > >  include/linux/mdev.h                          | 19 +++++
> > >  10 files changed, 294 insertions(+), 17 deletions(-)
> > >
> > > --
> > > 2.24.0.rc0
> >
> 
> --
> Open Source Technology Center, Intel ltd.
> 
> $gpg --keyserver wwwkeys.pgp.net --recv-keys 4D781827
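For reference, the create flow described in the quoted cover letter boils down to a couple of sysfs writes. A rough sketch follows; the i915-GVTg type name, parent device address, and UUID are illustrative, while the aggregation/create/aggregated_instances attribute names are as proposed in the series:

```shell
# Sketch of the proposed aggregated-create flow (attribute names per the
# cover letter; the parent device, type name, and UUID are placeholders).
TYPE=/sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_8
UUID=83b8f4f2-509f-4f0b-8e1e-27f2fa2a0f6f

if [ -e "$TYPE/aggregation" ]; then
    # Create one mdev backed by 10 aggregated instances of this type.
    echo "$UUID,aggregate=10" > "$TYPE/create"
    # Query how many instances were actually aggregated into this mdev.
    cat "/sys/bus/mdev/devices/$UUID/aggregated_instances"
else
    echo "mdev type does not advertise aggregation support"
fi
```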

Thread overview: 30+ messages
2019-10-24  5:08 Zhenyu Wang
2019-10-24  5:08 ` [PATCH 1/6] vfio/mdev: Add new "aggregate" parameter for mdev create Zhenyu Wang
2019-10-24  5:08 ` [PATCH 2/6] vfio/mdev: Add "aggregation" attribute for supported mdev type Zhenyu Wang
2019-10-27  6:24   ` kbuild test robot
2019-10-27  6:24   ` [RFC PATCH] vfio/mdev: mdev_type_attr_aggregation can be static kbuild test robot
2019-10-24  5:08 ` [PATCH 3/6] vfio/mdev: Add "aggregated_instances" attribute for supported mdev device Zhenyu Wang
2019-10-24  5:08 ` [PATCH 4/6] Documentation/driver-api/vfio-mediated-device.rst: Update for vfio/mdev aggregation support Zhenyu Wang
2019-10-24  5:08 ` [PATCH 5/6] Documentation/ABI/testing/sysfs-bus-vfio-mdev: " Zhenyu Wang
2019-10-24  5:08 ` [PATCH 6/6] drm/i915/gvt: Add new type with " Zhenyu Wang
2019-11-05 21:10 ` [PATCH 0/6] VFIO mdev aggregated resources handling Alex Williamson
2019-11-06  4:20   ` Zhenyu Wang
2019-11-06 18:44     ` Alex Williamson
2019-11-07 13:02       ` Cornelia Huck
2019-11-15  4:24       ` Tian, Kevin
2019-11-19 22:58         ` Alex Williamson
2019-11-20  0:46           ` Tian, Kevin
2019-11-07 20:37 ` Parav Pandit
2019-11-08  8:19   ` Zhenyu Wang
2019-12-04 17:36     ` Parav Pandit [this message]
2019-12-05  6:06       ` Zhenyu Wang
2019-12-05  6:40         ` Jason Wang
2019-12-05 19:02           ` Parav Pandit
2019-12-05 18:59         ` Parav Pandit
2019-12-06  8:03           ` Zhenyu Wang
2019-12-06 17:33             ` Parav Pandit
2019-12-10  3:33               ` Tian, Kevin
2019-12-10 19:07                 ` Alex Williamson
2019-12-10 21:08                   ` Parav Pandit
2019-12-10 22:08                     ` Alex Williamson
2019-12-10 22:40                       ` Parav Pandit
