* [libvirt] [RFC PATCH v1 1/1] Add attribute multiple_mdev_support for mdev type-id
@ 2018-09-27  6:22 Kirti Wankhede
  2018-09-27  6:48 ` Tian, Kevin
  0 siblings, 1 reply; 7+ messages in thread
From: Kirti Wankhede @ 2018-09-27  6:22 UTC
  To: alex.williamson, eskultet, kvm, libvir-list; +Cc: Kirti Wankhede, cjia

Generally, a single mdev device instance, which is a share of a physical
device, is assigned to a user space application or a VM. There are cases
where a user space application or a VM requires multiple mdev device
instances of the same or different types. For example, in the vGPU case,
multiple mdev devices of a type that represents the whole GPU can be
assigned to one instance of an application or a VM.

Not all mdev types support assigning multiple mdev devices to a single user
space application. In that case the vendor driver can fail the open() call
on the mdev device, but the user space application has no way of knowing in
advance which configurations the vendor driver supports.

To expose the supported configuration, the vendor driver should add a
'multiple_mdev_support' attribute to the type-id directory if it supports
assigning multiple mdev devices of that particular type-id to one instance
of a user space application or a VM.

A user space application should check whether the 'multiple_mdev_support'
attribute is present in the type-id directory of every mdev device it is
going to use. If all of them read 1, then the user space application can
proceed with multiple mdev devices.

This is an optional, read-only attribute.
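
For illustration only (not part of the patch itself), a minimal userspace
check along these lines might look like the sketch below. The
/sys/class/mdev_bus/<parent> path and the helper name are examples, and a
missing attribute is simply treated as "no support advertised", matching
the "proceed only if all read 1" rule above.

#include <stdbool.h>
#include <stdio.h>

/* Sketch: does <type-id> on this parent advertise multiple_mdev_support? */
static bool type_supports_multiple(const char *parent, const char *type_id)
{
	char path[512];
	char val = '0';
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/class/mdev_bus/%s/mdev_supported_types/%s/multiple_mdev_support",
		 parent, type_id);

	f = fopen(path, "r");
	if (!f)
		return false;	/* optional attribute not provided */
	if (fread(&val, 1, 1, f) != 1)
		val = '0';
	fclose(f);

	return val == '1';
}

A management tool would run this check for every <type-id> it plans to use
more than once per application instance, and otherwise fall back to a single
device (or to "try it and see").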

Signed-off-by: Neo Jia <cjia@nvidia.com>
Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
---
 Documentation/ABI/testing/sysfs-bus-vfio-mdev | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/Documentation/ABI/testing/sysfs-bus-vfio-mdev b/Documentation/ABI/testing/sysfs-bus-vfio-mdev
index 452dbe39270e..69e1291479ce 100644
--- a/Documentation/ABI/testing/sysfs-bus-vfio-mdev
+++ b/Documentation/ABI/testing/sysfs-bus-vfio-mdev
@@ -85,6 +85,19 @@ Users:
 		a particular <type-id> that can help in understanding the
 		features provided by that type of mediated device.
 
+What:           /sys/.../mdev_supported_types/<type-id>/multiple_mdev_support
+Date:           September 2018
+Contact:        Kirti Wankhede <kwankhede@nvidia.com>
+Description:
+		Reading this attribute returns 0 or 1. A value of 1 indicates
+		that the vendor driver supports assigning multiple mdev devices
+		of a particular <type-id> to one user space application. This
+		is an optional, read-only attribute.
+Users:
+		Userspace applications interested in knowing whether multiple
+		mdev devices of a particular <type-id> can be assigned to one
+		application instance.
+
 What:           /sys/.../<device>/<UUID>/
 Date:           October 2016
 Contact:        Kirti Wankhede <kwankhede@nvidia.com>
-- 
2.7.0


* Re: [libvirt] [RFC PATCH v1 1/1] Add attribute multiple_mdev_support for mdev type-id
  2018-09-27  6:22 [libvirt] [RFC PATCH v1 1/1] Add attribute multiple_mdev_support for mdev type-id Kirti Wankhede
@ 2018-09-27  6:48 ` Tian, Kevin
  2018-09-27 15:59   ` Alex Williamson
  0 siblings, 1 reply; 7+ messages in thread
From: Tian, Kevin @ 2018-09-27  6:48 UTC
  To: Kirti Wankhede, alex.williamson, eskultet, kvm, libvir-list; +Cc: cjia

> From: Kirti Wankhede
> Sent: Thursday, September 27, 2018 2:22 PM
> 
> Generally a single instance of mdev device, a share of physical device, is
> assigned to user space application or a VM. There are cases when multiple
> instances of mdev devices of same or different types are required by User
> space application or VM. For example in case of vGPU, multiple mdev
> devices
> of type which represents whole GPU can be assigned to one instance of
> application or VM.
> 
> All types of mdev devices may not support assigning multiple mdev devices
> to a user space application. In that case vendor driver can fail open()
> call of mdev device. But there is no way to know User space application
> about the configuration supported by vendor driver.
> 
> To expose supported configuration, vendor driver should add
> 'multiple_mdev_support' attribute to type-id directory if vendor driver
> supports assigning multiple mdev devices of a particular type-id to one
> instance of user space application or a VM.
> 
> User space application should read if 'multiple_mdev_support' attibute is
> present in type-id directory of all mdev devices which are going to be
> used. If all read 1 then user space application can proceed with multiple
> mdev devices.
> 
> This is optional and readonly attribute.

I don't get what the exact problem is from the above description. Isn't it the
basic point behind the mdev concept that the parent driver can create multiple
mdev instances on a physical device? VFIO usually doesn't care whether
multiple devices (PCI or mdev) are assigned to the same process/VM or not.
Can you give some example of the actual problem behind this change?

Thanks
Kevin

> 
> Signed-off-by: Neo Jia <cjia@nvidia.com>
> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> ---
>  Documentation/ABI/testing/sysfs-bus-vfio-mdev | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
> 
> diff --git a/Documentation/ABI/testing/sysfs-bus-vfio-mdev
> b/Documentation/ABI/testing/sysfs-bus-vfio-mdev
> index 452dbe39270e..69e1291479ce 100644
> --- a/Documentation/ABI/testing/sysfs-bus-vfio-mdev
> +++ b/Documentation/ABI/testing/sysfs-bus-vfio-mdev
> @@ -85,6 +85,19 @@ Users:
>  		a particular <type-id> that can help in understanding the
>  		features provided by that type of mediated device.
> 
> +What:           /sys/.../mdev_supported_types/<type-
> id>/multiple_mdev_support
> +Date:           September 2018
> +Contact:        Kirti Wankhede <kwankhede@nvidia.com>
> +Description:
> +		Reading this attribute will return 0 or 1. Returning 1
> indicates
> +		multiple mdev devices of a particular <type-id> assigned to
> one
> +		User space application is supported by vendor driver. This is
> +		optional and readonly attribute.
> +Users:
> +		Userspace applications interested in knowing if multiple
> mdev
> +		devices of a particular <type-id> can be assigned to one
> +		instance of application.
> +
>  What:           /sys/.../<device>/<UUID>/
>  Date:           October 2016
>  Contact:        Kirti Wankhede <kwankhede@nvidia.com>
> --
> 2.7.0


* Re: [libvirt] [RFC PATCH v1 1/1] Add attribute multiple_mdev_support for mdev type-id
  2018-09-27  6:48 ` Tian, Kevin
@ 2018-09-27 15:59   ` Alex Williamson
  2018-09-27 19:23     ` Kirti Wankhede
  0 siblings, 1 reply; 7+ messages in thread
From: Alex Williamson @ 2018-09-27 15:59 UTC
  To: Tian, Kevin; +Cc: libvir-list, Kirti Wankhede, cjia, kvm, eskultet

On Thu, 27 Sep 2018 06:48:27 +0000
"Tian, Kevin" <kevin.tian@intel.com> wrote:

> > From: Kirti Wankhede
> > Sent: Thursday, September 27, 2018 2:22 PM
> > 
> > Generally a single instance of mdev device, a share of physical device, is
> > assigned to user space application or a VM. There are cases when multiple
> > instances of mdev devices of same or different types are required by User
> > space application or VM. For example in case of vGPU, multiple mdev
> > devices
> > of type which represents whole GPU can be assigned to one instance of
> > application or VM.
> > 
> > All types of mdev devices may not support assigning multiple mdev devices
> > to a user space application. In that case vendor driver can fail open()
> > call of mdev device. But there is no way to know User space application
> > about the configuration supported by vendor driver.
> > 
> > To expose supported configuration, vendor driver should add
> > 'multiple_mdev_support' attribute to type-id directory if vendor driver
> > supports assigning multiple mdev devices of a particular type-id to one
> > instance of user space application or a VM.
> > 
> > User space application should read if 'multiple_mdev_support' attibute is
> > present in type-id directory of all mdev devices which are going to be
> > used. If all read 1 then user space application can proceed with multiple
> > mdev devices.
> > 
> > This is optional and readonly attribute.  
> 
> I didn't get what is the exact problem from above description. Isn't it the
> basic point behind mdev concept that parent driver can create multiple
> mdev instances on a physical device? VFIO usually doesn't care whether
> multiple devices (pci or mdev) are assigned to same process/VM or not.
> Can you give some example of actual problem behind this change?

The situation here is that mdev imposes no restrictions regarding how
mdev devices can be used by the user.  Multiple mdevs, regardless of
type, can be combined and used in any way the user sees fit, at least
that's the theory.  However, in practice, certain vendor drivers impose
a limitation that prevents multiple mdev devices from being used in the
same VM.  This is done by returning an error when the user attempts to
open those additional devices.  Now we're in the very predictable
situation that we want to consider that some of the vendor's mdev types
might be combined in the same userspace instance, while others still
impose a restriction, and how can we optionally expose such per mdev
type restrictions to the userspace so we have something better than
"try it and see if it works".

The below proposal assumes that a single instance per VM is the norm
and that we add something to the API to indicate multiple instances are
supported.  However, that's really never been the position of the mdev
core, where there's no concept of limiting devices per user instance;
it's a vendor driver restriction.  So I think the question is then why
should we burden vendor drivers who do not impose a restriction with an
additional field?  Does the below extension provide a better backwards
compatibility scenario?

With the current code, I think it's reasonable for userspace to assume
there are no mdev limits per userspace instance, those that exist are
vendor specific.  The below proposal is for an optional field, what
does the management tool assume when it's not supplied?  We have both
cases currently, mdev devices that allow multiple instances per VM and
those that do not, so I don't see that the user is more informed with
this addition and no attempt is made here to synchronously update all
drivers to expose this new attribute.

Does it make more sense then to add an attribute to expose the
restriction rather than the lack of restriction?  Perhaps something
like "singleton_usage_restriction".  Also, if the attribute is meant to
be a boolean, I think we have Y/N support for that in sysfs rather than
using integers, which raises the question of what >1 implies.
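
(As a rough sketch only, a Y/N type attribute along these lines could be
wired up as below, keeping the proposed attribute name for concreteness and
assuming the mdev type-attribute helpers of this kernel era; the per-type
decision helper is hypothetical.)

#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/mdev.h>
#include <linux/string.h>
#include <linux/types.h>

/* Hypothetical: does the driver allow several mdevs of the <type-id>
 * represented by @kobj in one user instance? */
static bool my_type_allows_multiple(struct kobject *kobj)
{
	return strcmp(kobj->name, "my-whole-gpu") != 0;
}

static ssize_t multiple_mdev_support_show(struct kobject *kobj,
					  struct device *dev, char *buf)
{
	return sprintf(buf, "%c\n", my_type_allows_multiple(kobj) ? 'Y' : 'N');
}
static MDEV_TYPE_ATTR_RO(multiple_mdev_support);

static struct attribute *my_mdev_type_attrs[] = {
	&mdev_type_attr_multiple_mdev_support.attr,
	NULL,	/* added to the type's attribute_group alongside name/device_api */
};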

Is there a more compelling backwards compatibility story for the below
proposal than what I'm thinking?  Thanks,

Alex

> > Signed-off-by: Neo Jia <cjia@nvidia.com>
> > Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> > ---
> >  Documentation/ABI/testing/sysfs-bus-vfio-mdev | 13 +++++++++++++
> >  1 file changed, 13 insertions(+)
> > 
> > diff --git a/Documentation/ABI/testing/sysfs-bus-vfio-mdev
> > b/Documentation/ABI/testing/sysfs-bus-vfio-mdev
> > index 452dbe39270e..69e1291479ce 100644
> > --- a/Documentation/ABI/testing/sysfs-bus-vfio-mdev
> > +++ b/Documentation/ABI/testing/sysfs-bus-vfio-mdev
> > @@ -85,6 +85,19 @@ Users:
> >  		a particular <type-id> that can help in understanding the
> >  		features provided by that type of mediated device.
> > 
> > +What:           /sys/.../mdev_supported_types/<type-  
> > id>/multiple_mdev_support  
> > +Date:           September 2018
> > +Contact:        Kirti Wankhede <kwankhede@nvidia.com>
> > +Description:
> > +		Reading this attribute will return 0 or 1. Returning 1
> > indicates
> > +		multiple mdev devices of a particular <type-id> assigned to
> > one
> > +		User space application is supported by vendor driver. This is
> > +		optional and readonly attribute.
> > +Users:
> > +		Userspace applications interested in knowing if multiple
> > mdev
> > +		devices of a particular <type-id> can be assigned to one
> > +		instance of application.
> > +
> >  What:           /sys/.../<device>/<UUID>/
> >  Date:           October 2016
> >  Contact:        Kirti Wankhede <kwankhede@nvidia.com>
> > --
> > 2.7.0  
> 


* Re: [libvirt] [RFC PATCH v1 1/1] Add attribute multiple_mdev_support for mdev type-id
  2018-09-27 15:59   ` Alex Williamson
@ 2018-09-27 19:23     ` Kirti Wankhede
  2018-09-27 20:01       ` Alex Williamson
  0 siblings, 1 reply; 7+ messages in thread
From: Kirti Wankhede @ 2018-09-27 19:23 UTC
  To: Alex Williamson, Tian, Kevin; +Cc: libvir-list, cjia, kvm, eskultet


On 9/27/2018 9:29 PM, Alex Williamson wrote:
> On Thu, 27 Sep 2018 06:48:27 +0000
> "Tian, Kevin" <kevin.tian@intel.com> wrote:
> 
>>> From: Kirti Wankhede
>>> Sent: Thursday, September 27, 2018 2:22 PM
>>>
>>> Generally a single instance of mdev device, a share of physical device, is
>>> assigned to user space application or a VM. There are cases when multiple
>>> instances of mdev devices of same or different types are required by User
>>> space application or VM. For example in case of vGPU, multiple mdev
>>> devices
>>> of type which represents whole GPU can be assigned to one instance of
>>> application or VM.
>>>
>>> All types of mdev devices may not support assigning multiple mdev devices
>>> to a user space application. In that case vendor driver can fail open()
>>> call of mdev device. But there is no way to know User space application
>>> about the configuration supported by vendor driver.
>>>
>>> To expose supported configuration, vendor driver should add
>>> 'multiple_mdev_support' attribute to type-id directory if vendor driver
>>> supports assigning multiple mdev devices of a particular type-id to one
>>> instance of user space application or a VM.
>>>
>>> User space application should read if 'multiple_mdev_support' attibute is
>>> present in type-id directory of all mdev devices which are going to be
>>> used. If all read 1 then user space application can proceed with multiple
>>> mdev devices.
>>>
>>> This is optional and readonly attribute.  
>>
>> I didn't get what is the exact problem from above description. Isn't it the
>> basic point behind mdev concept that parent driver can create multiple
>> mdev instances on a physical device? VFIO usually doesn't care whether
>> multiple devices (pci or mdev) are assigned to same process/VM or not.
>> Can you give some example of actual problem behind this change?
> 
> The situation here is that mdev imposes no restrictions regarding how
> mdev devices can be used by the user.  Multiple mdevs, regardless of
> type, can be combined and used in any way the user sees fit, at least
> that's the theory.  However, in practice, certain vendor drivers impose
> a limitation that prevents multiple mdev devices from being used in the
> same VM.  This is done by returning an error when the user attempts to
> open those additional devices.  Now we're in the very predictable
> situation that we want to consider that some of the vendor's mdev types
> might be combined in the same userspace instance, while others still
> impose a restriction, and how can we optionally expose such per mdev
> type restrictions to the userspace so we have something better than
> "try it and see if it works".
> 
> The below proposal assumes that a single instance per VM is the norm
> and that we add something to the API to indicate multiple instances are
> supported.  However, that's really never been the position of the mdev
> core, where there's no concept of limiting devices per user instance;
> it's a vendor driver restriction.  So I think the question is then why
> should we burden vendor drivers who do not impose a restriction with an
> additional field?  Does the below extension provide a better backwards
> compatibility scenario?

A vendor driver that does not want to impose a restriction doesn't need to
provide this attribute. In that case, the behavior remains the same as it
is now, i.e. multiple mdev instances can be used by one instance of an
application.


> With the current code, I think it's reasonable for userspace to assume
> there are no mdev limits per userspace instance, those that exist are
> vendor specific.  The below proposal is for an optional field, what
> does the management tool assume when it's not supplied?  We have both
> cases currently, mdev devices that allow multiple instances per VM and
> those that do not, so I don't see that the user is more informed with
> this addition and no attempt is made here to synchronously update all
> drivers to expose this new attribute.
> 

When this attribute is not provided, the behavior should remain the same as
it is now. But if this attribute is provided, then the management tool should
read it to verify that such a combination is supported by the vendor driver.

> Does it make more sense then to add an attribute to expose the
> restriction rather than the lack of restriction.  Perhaps something
> like "singleton_usage_restriction".

I would prefer to expose what is supported rather than what is restricted.

>  Also if the type is meant to be a
> boolean, I think we have Y/N support for that in sysfs rather than
> using integers which raise the question what >1 implies.
> 

OK, makes sense. I'll change it to Y/N.

Thanks,
Kirti

> Is there a more compelling backwards compatibility store for the below
> proposal than what I'm thinking?  Thanks,
> 
> Alex
> 
>>> Signed-off-by: Neo Jia <cjia@nvidia.com>
>>> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
>>> ---
>>>  Documentation/ABI/testing/sysfs-bus-vfio-mdev | 13 +++++++++++++
>>>  1 file changed, 13 insertions(+)
>>>
>>> diff --git a/Documentation/ABI/testing/sysfs-bus-vfio-mdev
>>> b/Documentation/ABI/testing/sysfs-bus-vfio-mdev
>>> index 452dbe39270e..69e1291479ce 100644
>>> --- a/Documentation/ABI/testing/sysfs-bus-vfio-mdev
>>> +++ b/Documentation/ABI/testing/sysfs-bus-vfio-mdev
>>> @@ -85,6 +85,19 @@ Users:
>>>  		a particular <type-id> that can help in understanding the
>>>  		features provided by that type of mediated device.
>>>
>>> +What:           /sys/.../mdev_supported_types/<type-  
>>> id>/multiple_mdev_support  
>>> +Date:           September 2018
>>> +Contact:        Kirti Wankhede <kwankhede@nvidia.com>
>>> +Description:
>>> +		Reading this attribute will return 0 or 1. Returning 1
>>> indicates
>>> +		multiple mdev devices of a particular <type-id> assigned to
>>> one
>>> +		User space application is supported by vendor driver. This is
>>> +		optional and readonly attribute.
>>> +Users:
>>> +		Userspace applications interested in knowing if multiple
>>> mdev
>>> +		devices of a particular <type-id> can be assigned to one
>>> +		instance of application.
>>> +
>>>  What:           /sys/.../<device>/<UUID>/
>>>  Date:           October 2016
>>>  Contact:        Kirti Wankhede <kwankhede@nvidia.com>
>>> --
>>> 2.7.0  
>>
> 


* Re: [libvirt] [RFC PATCH v1 1/1] Add attribute multiple_mdev_support for mdev type-id
  2018-09-27 19:23     ` Kirti Wankhede
@ 2018-09-27 20:01       ` Alex Williamson
  2018-09-28  1:27         ` Tian, Kevin
  2018-09-28  1:29         ` Tian, Kevin
  0 siblings, 2 replies; 7+ messages in thread
From: Alex Williamson @ 2018-09-27 20:01 UTC
  To: Kirti Wankhede; +Cc: libvir-list, Tian, Kevin, cjia, kvm, eskultet

On Fri, 28 Sep 2018 00:53:00 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> On 9/27/2018 9:29 PM, Alex Williamson wrote:
> > On Thu, 27 Sep 2018 06:48:27 +0000
> > "Tian, Kevin" <kevin.tian@intel.com> wrote:
> >   
> >>> From: Kirti Wankhede
> >>> Sent: Thursday, September 27, 2018 2:22 PM
> >>>
> >>> Generally a single instance of mdev device, a share of physical device, is
> >>> assigned to user space application or a VM. There are cases when multiple
> >>> instances of mdev devices of same or different types are required by User
> >>> space application or VM. For example in case of vGPU, multiple mdev
> >>> devices
> >>> of type which represents whole GPU can be assigned to one instance of
> >>> application or VM.
> >>>
> >>> All types of mdev devices may not support assigning multiple mdev devices
> >>> to a user space application. In that case vendor driver can fail open()
> >>> call of mdev device. But there is no way to know User space application
> >>> about the configuration supported by vendor driver.
> >>>
> >>> To expose supported configuration, vendor driver should add
> >>> 'multiple_mdev_support' attribute to type-id directory if vendor driver
> >>> supports assigning multiple mdev devices of a particular type-id to one
> >>> instance of user space application or a VM.
> >>>
> >>> User space application should read if 'multiple_mdev_support' attibute is
> >>> present in type-id directory of all mdev devices which are going to be
> >>> used. If all read 1 then user space application can proceed with multiple
> >>> mdev devices.
> >>>
> >>> This is optional and readonly attribute.    
> >>
> >> I didn't get what is the exact problem from above description. Isn't it the
> >> basic point behind mdev concept that parent driver can create multiple
> >> mdev instances on a physical device? VFIO usually doesn't care whether
> >> multiple devices (pci or mdev) are assigned to same process/VM or not.
> >> Can you give some example of actual problem behind this change?  
> > 
> > The situation here is that mdev imposes no restrictions regarding how
> > mdev devices can be used by the user.  Multiple mdevs, regardless of
> > type, can be combined and used in any way the user sees fit, at least
> > that's the theory.  However, in practice, certain vendor drivers impose
> > a limitation that prevents multiple mdev devices from being used in the
> > same VM.  This is done by returning an error when the user attempts to
> > open those additional devices.  Now we're in the very predictable
> > situation that we want to consider that some of the vendor's mdev types
> > might be combined in the same userspace instance, while others still
> > impose a restriction, and how can we optionally expose such per mdev
> > type restrictions to the userspace so we have something better than
> > "try it and see if it works".
> > 
> > The below proposal assumes that a single instance per VM is the norm
> > and that we add something to the API to indicate multiple instances are
> > supported.  However, that's really never been the position of the mdev
> > core, where there's no concept of limiting devices per user instance;
> > it's a vendor driver restriction.  So I think the question is then why
> > should we burden vendor drivers who do not impose a restriction with an
> > additional field?  Does the below extension provide a better backwards
> > compatibility scenario?  
> 
> The vendor driver who do not want to impose restriction, doesn't need to
> provide this attribute. In that case, behavior would remain same as it
> is now, i.e. multiple mdev instances can be used by one instance of
> application.
> 
> 
> > With the current code, I think it's reasonable for userspace to assume
> > there are no mdev limits per userspace instance, those that exist are
> > vendor specific.  The below proposal is for an optional field, what
> > does the management tool assume when it's not supplied?  We have both
> > cases currently, mdev devices that allow multiple instances per VM and
> > those that do not, so I don't see that the user is more informed with
> > this addition and no attempt is made here to synchronously update all
> > drivers to expose this new attribute.
> >   
> 
> When this attribute is not provided, behavior should remain same as it
> is now. But if this attribute is provided then management tool should
> read this attribute to verify that such combination is supported by
> vendor driver.
> 
> > Does it make more sense then to add an attribute to expose the
> > restriction rather than the lack of restriction.  Perhaps something
> > like "singleton_usage_restriction".  
> 
> I would prefer to expose what is supported than what is restricted.

Above, it's stated that vendors who do not impose a restriction do not
need to implement this.  So it's designed to expose a restriction.
We're stating that exposing multiple_mdev_support=1/Y is the equivalent
of not reporting the attribute at all, so the only reason to implement
it would be because there is a restriction.  So why masquerade as a
feature?  Thanks,

Alex

> >  Also if the type is meant to be a
> > boolean, I think we have Y/N support for that in sysfs rather than
> > using integers which raise the question what >1 implies.
> >   
> 
> Ok. Makes sense. I'll change it to Y/N
> 
> Thanks,
> Kirti
> 
> > Is there a more compelling backwards compatibility store for the below
> > proposal than what I'm thinking?  Thanks,
> > 
> > Alex
> >   
> >>> Signed-off-by: Neo Jia <cjia@nvidia.com>
> >>> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> >>> ---
> >>>  Documentation/ABI/testing/sysfs-bus-vfio-mdev | 13 +++++++++++++
> >>>  1 file changed, 13 insertions(+)
> >>>
> >>> diff --git a/Documentation/ABI/testing/sysfs-bus-vfio-mdev
> >>> b/Documentation/ABI/testing/sysfs-bus-vfio-mdev
> >>> index 452dbe39270e..69e1291479ce 100644
> >>> --- a/Documentation/ABI/testing/sysfs-bus-vfio-mdev
> >>> +++ b/Documentation/ABI/testing/sysfs-bus-vfio-mdev
> >>> @@ -85,6 +85,19 @@ Users:
> >>>  		a particular <type-id> that can help in understanding the
> >>>  		features provided by that type of mediated device.
> >>>
> >>> +What:           /sys/.../mdev_supported_types/<type-    
> >>> id>/multiple_mdev_support    
> >>> +Date:           September 2018
> >>> +Contact:        Kirti Wankhede <kwankhede@nvidia.com>
> >>> +Description:
> >>> +		Reading this attribute will return 0 or 1. Returning 1
> >>> indicates
> >>> +		multiple mdev devices of a particular <type-id> assigned to
> >>> one
> >>> +		User space application is supported by vendor driver. This is
> >>> +		optional and readonly attribute.
> >>> +Users:
> >>> +		Userspace applications interested in knowing if multiple
> >>> mdev
> >>> +		devices of a particular <type-id> can be assigned to one
> >>> +		instance of application.
> >>> +
> >>>  What:           /sys/.../<device>/<UUID>/
> >>>  Date:           October 2016
> >>>  Contact:        Kirti Wankhede <kwankhede@nvidia.com>
> >>> --
> >>> 2.7.0    
> >>  
> >   


* Re: [libvirt] [RFC PATCH v1 1/1] Add attribute multiple_mdev_support for mdev type-id
  2018-09-27 20:01       ` Alex Williamson
@ 2018-09-28  1:27         ` Tian, Kevin
  2018-09-28  1:29         ` Tian, Kevin
  1 sibling, 0 replies; 7+ messages in thread
From: Tian, Kevin @ 2018-09-28  1:27 UTC
  To: Alex Williamson, Kirti Wankhede; +Cc: libvir-list, cjia, kvm, eskultet

> From: Alex Williamson [mailto:alex.williamson@redhat.com]
> Sent: Friday, September 28, 2018 4:01 AM
> 
> On Fri, 28 Sep 2018 00:53:00 +0530
> Kirti Wankhede <kwankhede@nvidia.com> wrote:
> 
> > On 9/27/2018 9:29 PM, Alex Williamson wrote:
> > > On Thu, 27 Sep 2018 06:48:27 +0000
> > > "Tian, Kevin" <kevin.tian@intel.com> wrote:
> > >
> > >>> From: Kirti Wankhede
> > >>> Sent: Thursday, September 27, 2018 2:22 PM
> > >>>
> > >>> Generally a single instance of mdev device, a share of physical device,
> is
> > >>> assigned to user space application or a VM. There are cases when
> multiple
> > >>> instances of mdev devices of same or different types are required by
> User
> > >>> space application or VM. For example in case of vGPU, multiple mdev
> > >>> devices
> > >>> of type which represents whole GPU can be assigned to one instance
> of
> > >>> application or VM.
> > >>>
> > >>> All types of mdev devices may not support assigning multiple mdev
> devices
> > >>> to a user space application. In that case vendor driver can fail open()
> > >>> call of mdev device. But there is no way to know User space
> application
> > >>> about the configuration supported by vendor driver.
> > >>>
> > >>> To expose supported configuration, vendor driver should add
> > >>> 'multiple_mdev_support' attribute to type-id directory if vendor
> driver
> > >>> supports assigning multiple mdev devices of a particular type-id to
> one
> > >>> instance of user space application or a VM.
> > >>>
> > >>> User space application should read if 'multiple_mdev_support'
> attibute is
> > >>> present in type-id directory of all mdev devices which are going to be
> > >>> used. If all read 1 then user space application can proceed with
> multiple
> > >>> mdev devices.
> > >>>
> > >>> This is optional and readonly attribute.
> > >>
> > >> I didn't get what is the exact problem from above description. Isn't it
> the
> > >> basic point behind mdev concept that parent driver can create multiple
> > >> mdev instances on a physical device? VFIO usually doesn't care
> whether
> > >> multiple devices (pci or mdev) are assigned to same process/VM or not.
> > >> Can you give some example of actual problem behind this change?
> > >
> > > The situation here is that mdev imposes no restrictions regarding how
> > > mdev devices can be used by the user.  Multiple mdevs, regardless of
> > > type, can be combined and used in any way the user sees fit, at least
> > > that's the theory.  However, in practice, certain vendor drivers impose
> > > a limitation that prevents multiple mdev devices from being used in the
> > > same VM.  This is done by returning an error when the user attempts to
> > > open those additional devices.  Now we're in the very predictable
> > > situation that we want to consider that some of the vendor's mdev
> types
> > > might be combined in the same userspace instance, while others still
> > > impose a restriction, and how can we optionally expose such per mdev
> > > type restrictions to the userspace so we have something better than
> > > "try it and see if it works".
> > >
> > > The below proposal assumes that a single instance per VM is the norm
> > > and that we add something to the API to indicate multiple instances are
> > > supported.  However, that's really never been the position of the mdev
> > > core, where there's no concept of limiting devices per user instance;
> > > it's a vendor driver restriction.  So I think the question is then why
> > > should we burden vendor drivers who do not impose a restriction with
> an
> > > additional field?  Does the below extension provide a better backwards
> > > compatibility scenario?
> >
> > The vendor driver who do not want to impose restriction, doesn't need to
> > provide this attribute. In that case, behavior would remain same as it
> > is now, i.e. multiple mdev instances can be used by one instance of
> > application.
> >
> >
> > > With the current code, I think it's reasonable for userspace to assume
> > > there are no mdev limits per userspace instance, those that exist are
> > > vendor specific.  The below proposal is for an optional field, what
> > > does the management tool assume when it's not supplied?  We have
> both
> > > cases currently, mdev devices that allow multiple instances per VM and
> > > those that do not, so I don't see that the user is more informed with
> > > this addition and no attempt is made here to synchronously update all
> > > drivers to expose this new attribute.
> > >
> >
> > When this attribute is not provided, behavior should remain same as it
> > is now. But if this attribute is provided then management tool should
> > read this attribute to verify that such combination is supported by
> > vendor driver.
> >
> > > Does it make more sense then to add an attribute to expose the
> > > restriction rather than the lack of restriction.  Perhaps something
> > > like "singleton_usage_restriction".
> >
> > I would prefer to expose what is supported than what is restricted.
> 
> Above, it's stated that vendors who do not impose a restriction do not
> need to implement this.  So it's designed to expose a restriction.
> We're stating that exposing multiple_mdev_support=1/Y is the equivalent
> of not reporting the attribute at all, so the only reason to implement
> it would be because there is a restriction.  So why masquerade as a
> feature?  Thanks,
> 

Agreed, it looks more like a restriction than a feature. In particular, the
natural assumption is that a missing feature knob means the feature is not
supported, which doesn't fit the intention here. Also, what default behavior
should a high-level management stack assume if the attribute is missing?
Some vendor drivers allow multiple instances today while others don't. Do we
then ask the management stack to carry vendor-specific knowledge just to
"keep the same behavior"?

Thanks
Kevin


* Re: [libvirt] [RFC PATCH v1 1/1] Add attribute multiple_mdev_support for mdev type-id
  2018-09-27 20:01       ` Alex Williamson
  2018-09-28  1:27         ` Tian, Kevin
@ 2018-09-28  1:29         ` Tian, Kevin
  1 sibling, 0 replies; 7+ messages in thread
From: Tian, Kevin @ 2018-09-28  1:29 UTC
  To: Alex Williamson, Kirti Wankhede; +Cc: libvir-list, cjia, kvm, eskultet

> From: Tian, Kevin
> Sent: Friday, September 28, 2018 9:27 AM
> 
> > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > Sent: Friday, September 28, 2018 4:01 AM
> >
> > On Fri, 28 Sep 2018 00:53:00 +0530
> > Kirti Wankhede <kwankhede@nvidia.com> wrote:
> >
> > > On 9/27/2018 9:29 PM, Alex Williamson wrote:
> > > > On Thu, 27 Sep 2018 06:48:27 +0000
> > > > "Tian, Kevin" <kevin.tian@intel.com> wrote:
> > > >
> > > >>> From: Kirti Wankhede
> > > >>> Sent: Thursday, September 27, 2018 2:22 PM
> > > >>>
> > > >>> Generally a single instance of mdev device, a share of physical
> device,
> > is
> > > >>> assigned to user space application or a VM. There are cases when
> > multiple
> > > >>> instances of mdev devices of same or different types are required
> by
> > User
> > > >>> space application or VM. For example in case of vGPU, multiple
> mdev
> > > >>> devices
> > > >>> of type which represents whole GPU can be assigned to one
> instance
> > of
> > > >>> application or VM.
> > > >>>
> > > >>> All types of mdev devices may not support assigning multiple mdev
> > devices
> > > >>> to a user space application. In that case vendor driver can fail open()
> > > >>> call of mdev device. But there is no way to know User space
> > application
> > > >>> about the configuration supported by vendor driver.
> > > >>>
> > > >>> To expose supported configuration, vendor driver should add
> > > >>> 'multiple_mdev_support' attribute to type-id directory if vendor
> > driver
> > > >>> supports assigning multiple mdev devices of a particular type-id to
> > one
> > > >>> instance of user space application or a VM.
> > > >>>
> > > >>> User space application should read if 'multiple_mdev_support'
> > attibute is
> > > >>> present in type-id directory of all mdev devices which are going to
> be
> > > >>> used. If all read 1 then user space application can proceed with
> > multiple
> > > >>> mdev devices.
> > > >>>
> > > >>> This is optional and readonly attribute.
> > > >>
> > > >> I didn't get what is the exact problem from above description. Isn't it
> > the
> > > >> basic point behind mdev concept that parent driver can create
> multiple
> > > >> mdev instances on a physical device? VFIO usually doesn't care
> > whether
> > > >> multiple devices (pci or mdev) are assigned to same process/VM or
> not.
> > > >> Can you give some example of actual problem behind this change?
> > > >
> > > > The situation here is that mdev imposes no restrictions regarding how
> > > > mdev devices can be used by the user.  Multiple mdevs, regardless of
> > > > type, can be combined and used in any way the user sees fit, at least
> > > > that's the theory.  However, in practice, certain vendor drivers impose
> > > > a limitation that prevents multiple mdev devices from being used in
> the
> > > > same VM.  This is done by returning an error when the user attempts
> to
> > > > open those additional devices.  Now we're in the very predictable
> > > > situation that we want to consider that some of the vendor's mdev
> > types
> > > > might be combined in the same userspace instance, while others still
> > > > impose a restriction, and how can we optionally expose such per
> mdev
> > > > type restrictions to the userspace so we have something better than
> > > > "try it and see if it works".
> > > >
> > > > The below proposal assumes that a single instance per VM is the
> norm
> > > > and that we add something to the API to indicate multiple instances
> are
> > > > supported.  However, that's really never been the position of the
> mdev
> > > > core, where there's no concept of limiting devices per user instance;
> > > > it's a vendor driver restriction.  So I think the question is then why
> > > > should we burden vendor drivers who do not impose a restriction
> with
> > an
> > > > additional field?  Does the below extension provide a better
> backwards
> > > > compatibility scenario?
> > >
> > > The vendor driver who do not want to impose restriction, doesn't need
> to
> > > provide this attribute. In that case, behavior would remain same as it
> > > is now, i.e. multiple mdev instances can be used by one instance of
> > > application.
> > >
> > >
> > > > With the current code, I think it's reasonable for userspace to assume
> > > > there are no mdev limits per userspace instance, those that exist are
> > > > vendor specific.  The below proposal is for an optional field, what
> > > > does the management tool assume when it's not supplied?  We have
> > both
> > > > cases currently, mdev devices that allow multiple instances per VM
> and
> > > > those that do not, so I don't see that the user is more informed with
> > > > this addition and no attempt is made here to synchronously update all
> > > > drivers to expose this new attribute.
> > > >
> > >
> > > When this attribute is not provided, behavior should remain same as it
> > > is now. But if this attribute is provided then management tool should
> > > read this attribute to verify that such combination is supported by
> > > vendor driver.
> > >
> > > > Does it make more sense then to add an attribute to expose the
> > > > restriction rather than the lack of restriction.  Perhaps something
> > > > like "singleton_usage_restriction".
> > >
> > > I would prefer to expose what is supported than what is restricted.
> >
> > Above, it's stated that vendors who do not impose a restriction do not
> > need to implement this.  So it's designed to expose a restriction.
> > We're stating that exposing multiple_mdev_support=1/Y is the equivalent
> > of not reporting the attribute at all, so the only reason to implement
> > it would be because there is a restriction.  So why masquerade as a
> > feature?  Thanks,
> >
> 
> Agree it looks more like a restriction than feature. Especially a natural
> thought
> of missing a feature knob means no support of the feature, which doesn't
> fit the intention here. Also what default behavior should high level mgmt..
> stack assume if the attribute is missing? some vendor drivers allows while
> other doesn't today. Then do we ask to include vendor-specific knowledge
> for "remain same behavior"?

I forgot the latter part. I just realized Alex already explained the current
behavior: "try it and see if it works". :-)

Thanks
Kevin


Thread overview: 7+ messages
2018-09-27  6:22 [libvirt] [RFC PATCH v1 1/1] Add attribute multiple_mdev_support for mdev type-id Kirti Wankhede
2018-09-27  6:48 ` Tian, Kevin
2018-09-27 15:59   ` Alex Williamson
2018-09-27 19:23     ` Kirti Wankhede
2018-09-27 20:01       ` Alex Williamson
2018-09-28  1:27         ` Tian, Kevin
2018-09-28  1:29         ` Tian, Kevin
