From: "Tian, Kevin" <kevin.tian@intel.com>
To: Parav Pandit <parav@mellanox.com>, Zhenyu Wang <zhenyuw@linux.intel.com>
Cc: "kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"alex.williamson@redhat.com" <alex.williamson@redhat.com>,
	"kwankhede@nvidia.com" <kwankhede@nvidia.com>,
	"cohuck@redhat.com" <cohuck@redhat.com>,
	Jiri Pirko <jiri@mellanox.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	Jason Wang <jasowang@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>
Subject: RE: [PATCH 0/6] VFIO mdev aggregated resources handling
Date: Tue, 10 Dec 2019 03:33:23 +0000	[thread overview]
Message-ID: <AADFC41AFE54684AB9EE6CBC0274A5D19D636944@SHSMSX104.ccr.corp.intel.com> (raw)
In-Reply-To: <79d0ca87-c6c7-18d5-6429-bb20041646ff@mellanox.com>

> From: Parav Pandit <parav@mellanox.com>
> Sent: Saturday, December 7, 2019 1:34 AM
> 
> On 12/6/2019 2:03 AM, Zhenyu Wang wrote:
> > On 2019.12.05 18:59:36 +0000, Parav Pandit wrote:
> >>>>
> >>>>> On 2019.11.07 20:37:49 +0000, Parav Pandit wrote:
> >>>>>> Hi,
> >>>>>>
> >>>>>>> -----Original Message-----
> >>>>>>> From: kvm-owner@vger.kernel.org <kvm-owner@vger.kernel.org>
> >>>>>>> On Behalf Of Zhenyu Wang
> >>>>>>> Sent: Thursday, October 24, 2019 12:08 AM
> >>>>>>> To: kvm@vger.kernel.org
> >>>>>>> Cc: alex.williamson@redhat.com; kwankhede@nvidia.com;
> >>>>>>> kevin.tian@intel.com; cohuck@redhat.com
> >>>>>>> Subject: [PATCH 0/6] VFIO mdev aggregated resources handling
> >>>>>>>
> >>>>>>> Hi,
> >>>>>>>
> >>>>>>> This is a refresh of a previous send of this series. I got the
> >>>>>>> impression that some SIOV drivers would still deploy their own
> >>>>>>> create and config methods, so I stopped effort on this. But it
> >>>>>>> seems this would still be useful for some other SIOV drivers which
> >>>>>>> may simply want the capability to aggregate resources. So here's
> >>>>>>> the refreshed series.
> >>>>>>>
> >>>>>>> The current mdev device create interface depends on a fixed mdev
> >>>>>>> type, which gets a uuid from the user to create an instance of the
> >>>>>>> mdev device. If the user wants to use a customized number of
> >>>>>>> resources for the mdev device, then one can only create new
> >>>>>> Can you please give an example of 'resource'?
> >>>>>> When I grep [1], [2] and [3], I couldn't find anything related to
> >>>>>> 'aggregate'.
> >>>>>
> >>>>> The resource is vendor-device specific; in the SIOV spec there's the
> >>>>> ADI (Assignable Device Interface) definition, which could be e.g. a
> >>>>> queue for a net device, a context for a gpu, etc. I just named this
> >>>>> interface 'aggregate' for aggregation purposes; it's not used in the
> >>>>> spec doc.
> >>>>>
> >>>>
> >>>> Some 'unknown/undefined' vendor-specific resource just doesn't work.
> >>>> An orchestration tool doesn't know which resource, and what/how to
> >>>> configure, for which vendor.
> >>>> It has to be well defined.
> >>>>
> >>>> You can also find such discussion in the recent lgpu DRM cgroup
> >>>> patch series v4.
> >>>>
> >>>> Exposing networking resource configuration in non-net-namespace-aware
> >>>> mdev sysfs at the PCI device level is a no-go.
> >>>> Adding per-file NET_ADMIN or other checks is not the approach we
> >>>> follow in the kernel.
> >>>>
> >>>> devlink is a subsystem, though under net, that has a very rich
> >>>> interface for syscalls, device health, resource management and more.
> >>>> Even though it is used by net drivers today, it's written for generic
> >>>> device management at the bus/device level.
> >>>>
> >>>> Yuval has posted patches to manage PCI sub-devices [1], and an
> >>>> updated version that addresses comments will be posted soon.
> >>>>
> >>>> For any device-slice resource management (mdev, sub-function, etc.),
> >>>> we should be using a single kernel interface such as devlink [2], [3].
> >>>>
> >>>> [1] https://lore.kernel.org/netdev/1573229926-30040-1-git-send-email-yuvalav@mellanox.com/
> >>>> [2] http://man7.org/linux/man-pages/man8/devlink-dev.8.html
> >>>> [3] http://man7.org/linux/man-pages/man8/devlink-resource.8.html
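For reference, the resource-management interface in the man pages linked above looks roughly like this (illustrative only: these commands need a devlink-capable device, and the device name and resource path below are placeholders, not real values from this thread):

```shell
# Illustrative devlink CLI usage per the linked man pages; the device
# "pci/0000:03:00.0" and resource path "/kvd/linear" are placeholders.
devlink dev show                                  # enumerate devlink devices
devlink resource show pci/0000:03:00.0            # show the resource tree and sizes
devlink resource set pci/0000:03:00.0 path /kvd/linear size 98304
devlink dev reload pci/0000:03:00.0               # apply pending resource changes
```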
> >>>>
> >>>> Most modern device configuration that I am aware of is usually done
> >>>> via well-defined ioctl()s of the subsystem (vhost, virtio, vfio,
> >>>> rdma, nvme and more) or via netlink commands (net, devlink, rdma and
> >>>> more), not via sysfs.
> >>>>
> >>>
> >>> Current vfio/mdev configuration is via a documented sysfs ABI instead
> >>> of other ways. So this adheres to that way to introduce a more
> >>> configurable method on mdev devices as standard; it's optional and not
> >>> actually vendor specific, e.g. vfio-ap.
> >>>
> >> Some unknown/undefined resource such as 'aggregate' is just not an ABI.
> >> It has to be well defined, as 'hardware_address', 'num_netdev_sqs' or
> >> something similar appropriate to that mdev device class.
> >> If a user wants to set a parameter for an mdev regardless of vendor,
> >> they must have a single way to do so.
> >
> > The idea is not specific to some device class, but applies to each mdev
> > type's resources, and is optional for each vendor. If a more
> > device-class-specific way is preferred, then we might have very
> > different ways for different vendors. Better to avoid that; the point
> > here is to aggregate a number of an mdev type's resources for the target
> > instance, instead of defining many kinds of mdev types for those numbers
> > of resources.
> >
> A parameter or attribute certainly can be optional.
> But the way to aggregate them should not be vendor specific.
> Look at some excellent existing examples across subsystems; for example,
> how you create an aggregated netdev or block device does not depend on
> the vendor or the underlying device type.

I'd like to hear Alex's opinion on this. Today VFIO mdev supports two styles
of "types", imo: fixed resource definition (most cases) and dynamic resource
definition (vfio-ap). In the fixed style, a type has a fixed association to a
set of vendor-specific resources (resourceX=M, resourceY=N, ...). In the
dynamic case, the user is allowed to specify the actual resources X/Y/...
backing the mdev instance after its creation. In either case, the way to
identify such an association or the configurable knobs is vendor specific,
maybe contained in optional attributes (name and description) plus
additional info in vendor documents.

Then the user is assumed to clearly understand the implication of the resource
allocation under a given type, when creating a new mdev under this type.
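As a concrete sketch of that assumption, the documented create flow is just a uuid written to the type's create node. Below it is mocked with a temp directory so the commands run without hardware; a real system would use /sys/class/mdev_bus/<parent>/mdev_supported_types/<type>/create, and the type name here is hypothetical:

```shell
# Sketch of the documented mdev sysfs create ABI, against a mock tree.
# Real path: /sys/class/mdev_bus/<parent>/mdev_supported_types/<type>/create
SYSFS=$(mktemp -d)
TYPE="$SYSFS/mdev_supported_types/i915-GVTg_V5_4"   # hypothetical type name
mkdir -p "$TYPE"

# Fixed-style create: the uuid alone selects the type's fixed resource set.
UUID="83b8f4f2-509f-382f-3c1e-e6bfe0fa1001"
echo "$UUID" > "$TYPE/create"
cat "$TYPE/create"
```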

If this assumption holds true, the aggregated attribute simply provides an
extension in the same direction as fixed-style types, but allowing for more
flexible, linearly-increasing resource allocation. E.g. using aggregate=2
means creating an instance with resourceX=2M, resourceY=2N, ... under the
specified type. Along this direction I don't see the need for well-defined
vendor-specific attributes here. When those are actually required, I suppose
the dynamic style would be a better fit. Or, if the vendor driver thinks
implementing such an aggregate feature would confuse its type definition,
it's optional not to do so anyway.
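If I'm reading the series' proposal correctly (the exact syntax belongs to PATCH 1/6, not to this sketch, and the type name below is hypothetical), the create node would simply take an extra "aggregate=N" parameter after the uuid; a mock tree makes the shape concrete:

```shell
# Sketch of the proposed aggregate extension (my reading of PATCH 1/6):
# the create node accepts "uuid,aggregate=N", scaling the type's fixed
# resources linearly. Mock tree only; type name is hypothetical.
MOCK=$(mktemp -d)
TYPEDIR="$MOCK/mdev_supported_types/i915-GVTg_V5_4"
mkdir -p "$TYPEDIR"

echo "83b8f4f2-509f-382f-3c1e-e6bfe0fa1002,aggregate=2" > "$TYPEDIR/create"
cat "$TYPEDIR/create"
```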

Thanks
Kevin

Thread overview: 30+ messages
2019-10-24  5:08 [PATCH 0/6] VFIO mdev aggregated resources handling Zhenyu Wang
2019-10-24  5:08 ` [PATCH 1/6] vfio/mdev: Add new "aggregate" parameter for mdev create Zhenyu Wang
2019-10-24  5:08 ` [PATCH 2/6] vfio/mdev: Add "aggregation" attribute for supported mdev type Zhenyu Wang
2019-10-27  6:24   ` kbuild test robot
2019-10-27  6:24   ` [RFC PATCH] vfio/mdev: mdev_type_attr_aggregation can be static kbuild test robot
2019-10-24  5:08 ` [PATCH 3/6] vfio/mdev: Add "aggregated_instances" attribute for supported mdev device Zhenyu Wang
2019-10-24  5:08 ` [PATCH 4/6] Documentation/driver-api/vfio-mediated-device.rst: Update for vfio/mdev aggregation support Zhenyu Wang
2019-10-24  5:08 ` [PATCH 5/6] Documentation/ABI/testing/sysfs-bus-vfio-mdev: " Zhenyu Wang
2019-10-24  5:08 ` [PATCH 6/6] drm/i915/gvt: Add new type with " Zhenyu Wang
2019-11-05 21:10 ` [PATCH 0/6] VFIO mdev aggregated resources handling Alex Williamson
2019-11-06  4:20   ` Zhenyu Wang
2019-11-06 18:44     ` Alex Williamson
2019-11-07 13:02       ` Cornelia Huck
2019-11-15  4:24       ` Tian, Kevin
2019-11-19 22:58         ` Alex Williamson
2019-11-20  0:46           ` Tian, Kevin
2019-11-07 20:37 ` Parav Pandit
2019-11-08  8:19   ` Zhenyu Wang
2019-12-04 17:36     ` Parav Pandit
2019-12-05  6:06       ` Zhenyu Wang
2019-12-05  6:40         ` Jason Wang
2019-12-05 19:02           ` Parav Pandit
2019-12-05 18:59         ` Parav Pandit
2019-12-06  8:03           ` Zhenyu Wang
2019-12-06 17:33             ` Parav Pandit
2019-12-10  3:33               ` Tian, Kevin [this message]
2019-12-10 19:07                 ` Alex Williamson
2019-12-10 21:08                   ` Parav Pandit
2019-12-10 22:08                     ` Alex Williamson
2019-12-10 22:40                       ` Parav Pandit
