On Thu, Dec 14, 2017 at 01:53:16PM +0800, Wei Wang wrote:
> On 12/13/2017 08:35 PM, Stefan Hajnoczi wrote:
> > On Wed, Dec 13, 2017 at 04:11:45PM +0800, Wei Wang wrote:
> >
> > I think the current approach is fine for a prototype but is not
> > suitable for wider use by the community because it:
> > 1. Does not scale to multiple device types (net, scsi, blk, etc)
> > 2. Does not scale as the vhost-user protocol changes
> > 3. It is hard to make slaves run in both host userspace and the guest
> >
> > It would be good to solve these problems so that vhost-pci can
> > become successful.  It's very hard to fix these things after the
> > code is merged because guests will depend on the device interface.
> >
> > Here are the points in detail (in order of importance):
> >
> > 1. Does not scale to multiple device types (net, scsi, blk, etc)
> >
> > vhost-user is being applied to new device types beyond virtio-net.
> > There will be demand for supporting other device types besides
> > virtio-net with vhost-pci.
> >
> > This patch series requires defining a new virtio device type for
> > each vhost-user device type.  It is a lot of work to design a new
> > virtio device.  Additionally, the new virtio device type should
> > become part of the VIRTIO standard, which can also take some time
> > and requires writing a standards document.
> >
> > 2. Does not scale as the vhost-user protocol changes
> >
> > When the vhost-user protocol changes it will be necessary to update
> > the vhost-pci device interface to reflect those changes.  Each
> > protocol change requires thinking about how the virtio devices need
> > to look in order to support the new behavior.  Changes to the
> > vhost-user protocol will result in changes to the VIRTIO
> > specification for the vhost-pci virtio devices.
> >
> > 3. It is hard to make slaves run in both host userspace and the guest
> >
> > If a vhost-user slave wishes to support running in host userspace
> > and the guest, then not much code can be shared between these two
> > modes since the interfaces are so different.
> >
> > How would you solve these issues?
>
> 1st one: I think we can factor out a common vhost-pci device layer in
> QEMU.  Specific device (net, scsi, etc.) emulation comes on top of
> it.  The vhost-user protocol sets up VhostPCIDev only.  So we will
> have something like this:
>
> struct VhostPCINet {
>     struct VhostPCIDev vp_dev;
>     u8 mac[6];
>     ....
> }

Defining VhostPCIDev is an important step to making it easy to
implement other device types.  I'm interested in seeing how this would
look, either in code or in a more detailed outline.  (I've sketched my
own guess further down in this mail.)

I wonder what the device-specific parts will be.  This patch series
does not implement a fully functional vhost-user-net device, so I'm
not sure.

> 2nd one: I think we need to view it the other way around: if there is
> a demand to change the protocol, then where does the demand come
> from?  I think mostly it is because there are new features for the
> device/driver.  That is, we first think about what the virtio device
> will look like with the new feature, and then we add the support to
> the protocol.

The vhost-user protocol will change when people using host userspace
slaves decide to change it.  They may not know or care about
vhost-pci, so the virtio changes will be an afterthought that falls on
whoever wants to support vhost-pci.

This is why I think it makes a lot more sense to stick to the
vhost-user protocol as the vhost-pci slave interface instead of
inventing a new interface on top of it.
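Going back to the 1st point for a moment, here is the kind of outline
I had in mind when asking how VhostPCIDev would look.  This is purely
my own guess -- every field name here is invented, not taken from the
patch series -- but it shows the split I would expect between the
common layer and the device-specific layer:

    #include <stdint.h>

    /*
     * Speculative sketch only: the common part holds everything the
     * vhost-user negotiation with the master produces, independent of
     * the device type.
     */
    struct VhostPCIDev {
        uint64_t features;            /* negotiated virtio features */
        uint64_t protocol_features;   /* negotiated vhost-user protocol
                                         features */

        /* Memory regions of the other guest, received via
         * VHOST_USER_SET_MEM_TABLE and exposed through a BAR. */
        struct {
            uint64_t guest_phys_addr;
            uint64_t size;
            uint64_t bar_offset;
        } mem_regions[8];
        uint32_t nregions;

        /* Plus vring addresses/state from VHOST_USER_SET_VRING_*, and
         * the doorbell/interrupt resources for cross-VM kicks and
         * calls. */
    };

    /* The device-specific part would be little more than the virtio
     * config space of the device type. */
    struct VhostPCINet {
        struct VhostPCIDev vp_dev;
        uint8_t mac[6];
        uint16_t max_virtqueue_pairs;
        /* ... */
    };

The open question is still which vhost-user messages the common layer
can consume by itself and which ones the device-specific layer needs
to see.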
> I'm not sure how it would cause scaling problems, and how using
> another GuestSlave-to-QemuMaster connection changes the story (we
> will also need to patch the GuestSlave inside the VM to support the
> vhost-user negotiation of the new feature), in comparison to the
> standard virtio feature negotiation.

Plus the VIRTIO specification needs to be updated.  And if the
vhost-user protocol change affects all device types then it may be
necessary to change multiple virtio devices!  In other words, a
protocol change costs O(1) updates when the vhost-user protocol itself
is the slave interface, but O(N) virtio device specification updates
when there is one vhost-pci device type per vhost-user device type.

> 3rd one: I'm not able to solve this one, as discussed, there are too
> many differences and it's too complex.  I prefer the direction of
> simply gating the vhost-user protocol and delivering to the guest
> what it should see (just what this patch series shows).  You would
> need to solve this issue to show this direction is simpler :)

#3 is nice to have but not critical.  In the approach I suggested it
would be done by implementing a vfio vhost-pci transport for
libvhost-user or DPDK.
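To make that concrete, the rough shape would be a pluggable transport
inside the slave library.  None of the names below exist in
libvhost-user or DPDK today -- this is an invented sketch of the
direction, not their API:

    #include <stdint.h>

    /* A decoded vhost-user message (simplified; the real payload is a
     * union, as per the vhost-user specification). */
    typedef struct VuMsg {
        uint32_t request;
        uint32_t flags;
        uint32_t size;
        uint64_t payload;
        int fds[8];      /* fds only exist on the socket transport */
        int num_fds;
    } VuMsg;

    /* Everything transport-specific goes behind this table. */
    typedef struct VuTransportOps {
        /* Host userspace: recvmsg(2)/sendmsg(2) on the AF_UNIX
         * socket, including fd passing.  Guest: access the vhost-pci
         * device through vfio instead. */
        int (*recv)(void *opaque, VuMsg *msg);
        int (*send)(void *opaque, const VuMsg *msg);

        /* Host: mmap(2) a passed fd.  Guest: return a window into the
         * vhost-pci device's BAR where QEMU already mapped the master
         * guest's memory. */
        void *(*map_region)(void *opaque, const VuMsg *msg,
                            uint64_t size);
    } VuTransportOps;

    /* The protocol logic is written once against VuTransportOps, so a
     * vhost-user protocol change lands in one place and the same
     * slave code runs on the host and inside the guest. */
    int vu_dispatch_one(const VuTransportOps *ops, void *opaque)
    {
        VuMsg msg;

        if (ops->recv(opaque, &msg) < 0) {
            return -1;
        }
        /* ...handle msg.request as a vhost-user slave does today... */
        return 0;
    }

Stefan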