On Mon, May 24, 2021 at 04:59:28PM +0200, Christoph Hellwig wrote:
> On Thu, May 20, 2021 at 03:13:05PM +0100, Stefan Hajnoczi wrote:
> > Possible drawbacks of this approach:
> >
> > - Hardware virtio_blk implementations may find virtqueue_disable_cb()
> >   expensive since it requires DMA. If such devices become popular then
> >   the virtio_blk driver could use a similar approach to NVMe when
> >   VIRTIO_F_ACCESS_PLATFORM is detected in the future.
> >
> > - If a blk_poll() thread is descheduled it not only hurts polling
> >   performance but also delays completion of non-REQ_HIPRI requests on
> >   that virtqueue since vq notifications are disabled.
>
> Yes, I think this is a dangerous configuration. What argument exists
> against just using dedicated poll queues?

Aside from what Paolo described (the lack of a hardware interface to
designate polling queues), the poll_queues=N parameter would need to be
added to the full guest and host software stack, and users would need
to learn about it so they can enable it everywhere it is required. That
is much more hassle to configure. The dynamic polling mode approach
requires no configuration.

Paolo's suggestion is to notify the driver when irqs need to be
re-enabled because the polling thread has been descheduled. I actually
have a prototype that achieves something similar here:

https://gitlab.com/stefanha/linux/-/commits/cpuidle-haltpoll-virtqueue

It takes a different approach from this patch series: the cpuidle
driver provides poll_start/stop() callbacks and virtio_blk participates
in cpuidle-haltpoll (a rough sketch of the callback shape is in the
P.S. below). That means all virtio-blk devices temporarily use polling
mode while the vCPU is doing cpuidle-haltpoll polling.

The neat things about the cpuidle approach are:

1. Applications don't need to set the RWF_HIPRI flag! (The P.S. shows
   what that per-request opt-in looks like today.)

2. Other drivers besides virtio_blk can participate in polling too.

Maybe we should go with cpuidle polling instead?

Stefan
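
P.S. To make the poll_start/stop() idea concrete, here is a rough
sketch of the callback shape I have in mind. The names and signatures
below are just shorthand for this email, not necessarily what the
branch above actually implements:

  /*
   * Hypothetical interface: cpuidle-haltpoll notifies registered
   * drivers when the vCPU enters/leaves its idle polling loop so they
   * can switch their queues between polling mode and irq mode.
   */
  struct haltpoll_notifier_ops {
          /* vCPU began idle polling: disable vq irqs, poll instead */
          void (*poll_start)(void *data);
          /* vCPU stopped idle polling: re-enable vq irqs */
          void (*poll_stop)(void *data);
  };

  /* hypothetical registration helper */
  int haltpoll_register_notifier(const struct haltpoll_notifier_ops *ops,
                                 void *data);

virtio_blk would register a notifier per device and flip its virtqueues
with virtqueue_disable_cb()/virtqueue_enable_cb() from the callbacks.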
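
For context on point 1: today an application only gets polled
completions if it opts in per request, e.g. preadv2() with RWF_HIPRI on
an O_DIRECT file descriptor. A minimal example of that opt-in, which
the cpuidle approach would make unnecessary:

  #define _GNU_SOURCE
  #include <sys/uio.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
          struct iovec iov;
          ssize_t n;
          void *buf;
          int fd;

          if (argc != 2)
                  return 1;

          /* polled I/O requires direct I/O */
          fd = open(argv[1], O_RDONLY | O_DIRECT);
          if (fd < 0 || posix_memalign(&buf, 4096, 4096))
                  return 1;

          iov.iov_base = buf;
          iov.iov_len = 4096;

          /* RWF_HIPRI is the per-request opt-in mentioned in point 1 */
          n = preadv2(fd, &iov, 1, 0, RWF_HIPRI);
          if (n < 0) {
                  perror("preadv2");
                  return 1;
          }

          printf("read %zd bytes\n", n);
          close(fd);
          return 0;
  }

With cpuidle-haltpoll polling the plain read path would get the same
benefit while the vCPU is in its polling window, with no application
changes.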