From: Yishai Hadas via Virtualization <virtualization@lists.linux-foundation.org>
To: Alex Williamson <alex.williamson@redhat.com>
Cc: kvm@vger.kernel.org, mst@redhat.com, maorg@nvidia.com,
	virtualization@lists.linux-foundation.org, jgg@nvidia.com,
	jiri@nvidia.com, leonro@nvidia.com
Subject: Re: [PATCH V1 vfio 9/9] vfio/virtio: Introduce a vfio driver over virtio devices
Date: Sun, 29 Oct 2023 18:13:34 +0200	[thread overview]
Message-ID: <144f8eaa-635b-4791-b64d-5c3a4681806e@nvidia.com> (raw)
In-Reply-To: <20231026115539.72c01af9.alex.williamson@redhat.com>

On 26/10/2023 20:55, Alex Williamson wrote:
> On Thu, 26 Oct 2023 15:08:12 +0300
> Yishai Hadas <yishaih@nvidia.com> wrote:
>
>> On 25/10/2023 22:13, Alex Williamson wrote:
>>> On Wed, 25 Oct 2023 17:35:51 +0300
>>> Yishai Hadas <yishaih@nvidia.com> wrote:
>>>   
>>>> On 24/10/2023 22:57, Alex Williamson wrote:
>>>>> On Tue, 17 Oct 2023 16:42:17 +0300
>>>>> Yishai Hadas <yishaih@nvidia.com> wrote:
>     
>>>>>> +		if (copy_to_user(buf + copy_offset, &val32, copy_count))
>>>>>> +			return -EFAULT;
>>>>>> +	}
>>>>>> +
>>>>>> +	if (range_intersect_range(pos, count, PCI_SUBSYSTEM_ID, sizeof(val16),
>>>>>> +				  &copy_offset, &copy_count, NULL)) {
>>>>>> +		/*
>>>>>> +		 * Transitional devices use the PCI subsystem device id as
>>>>>> +		 * virtio device id, same as legacy driver always did.
>>>>> Where did we require the subsystem vendor ID to be 0x1af4?  This
>>>>> subsystem device ID really only makes sense given that subsystem
>>>>> vendor ID, right?  Otherwise I don't see that non-transitional devices,
>>>>> such as the VF, have a hard requirement per the spec for the subsystem
>>>>> vendor ID.
>>>>>
>>>>> Do we want to make this only probe the correct subsystem vendor ID or do
>>>>> we want to emulate the subsystem vendor ID as well?  I don't see how
>>>>> this is correct without one of those options.
>>>> Looking at the 1.x spec, we can see the following.
>>>>
>>>> Legacy Interfaces: A Note on PCI Device Discovery
>>>>
>>>> "Transitional devices MUST have the PCI Subsystem
>>>> Device ID matching the Virtio Device ID, as indicated in section 5 ...
>>>> This is to match legacy drivers."
>>>>
>>>> However, there is no need to enforce the Subsystem Vendor ID.
>>>>
>>>> This is what we followed here.
>>>>
>>>> Makes sense?
>>> So do I understand correctly that virtio dictates the subsystem device
>>> ID for all subsystem vendor IDs that implement a legacy virtio
>>> interface?  Ok, but this device didn't actually implement a legacy
>>> virtio interface.  The device itself is not transitional; we're imposing
>>> an emulated transitional interface onto it.  So did the subsystem vendor
>>> agree to have their subsystem device ID managed by the virtio committee
>>> or might we create conflicts?  I imagine we know we don't have a
>>> conflict if we also virtualize the subsystem vendor ID.
>>>   
>> The non-transitional net device is defined in the virtio spec as the
>> following tuple:
>> T_A: VID=0x1AF4, DID=0x1040, Subsys_VID=FOO, Subsys_DID=0x40.
>>
>> And the transitional net device for a vendor FOO is defined in the
>> virtio spec as:
>> T_B: VID=0x1AF4, DID=0x1000, Subsys_VID=FOO, Subsys_DID=0x1
>>
>> This driver converts T_A to T_B, both of which are defined by the
>> virtio spec.
>> Hence there is no conflict for the subsystem vendor; it is fine.
> Surprising to me that the virtio spec dictates subsystem device ID in
> all cases.  The further discussion in this thread seems to indicate we
> need to virtualize subsystem vendor ID for broader driver compatibility
> anyway.
>
>>> BTW, it would be a lot easier for all of the config space emulation here
>>> if we could make use of the existing field virtualization in
>>> vfio-pci-core.  In fact you'll see in vfio_config_init() that
>>> PCI_DEVICE_ID is already virtualized for VFs, so it would be enough to
>>> simply do the following to report the desired device ID:
>>>
>>> 	*(__le16 *)&vconfig[PCI_DEVICE_ID] = cpu_to_le16(0x1000);
>> I would prefer to keep things simple and have a single place/flow that
>> handles all the fields, as the driver does now.
> That's the same argument I'd make for re-using the core code, we don't
> need multiple implementations handling merging physical and virtual
> bits within config space.
>
>> In any case, I'll look further at that option for managing the DEVICE_ID
>> towards V2.
>>
>>> It appears everything in this function could be handled similarly by
>>> vfio-pci-core if the right fields in the perm_bits.virt and .write
>>> bits could be manipulated and vconfig modified appropriately.  I'd look
>>> for a way that a variant driver could provide an alternate set of
>>> permissions structures for various capabilities.  Thanks,
>> OK
>>
>> However, let's not block V2 and acceptance of the series on that.
>>
>> It can always be done as future refactoring, as part of another series
>> that brings in the infrastructure needed for it.
> We're already on the verge of the v6.7 merge window, so this looks like
> v6.8 material anyway.  We have time.  Thanks,

OK

I sent V2 with all the other notes handled, to share it and get feedback
from both you and Michael.

Let's continue from there to see what is needed towards v6.8.

Thanks,
Yishai

>
> Alex
>

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
