From: Martin Wilck
Subject: Re: [PATCH] virtio-rng: return available data with O_NONBLOCK
Date: Wed, 15 Jul 2020 09:05:56 +0200
Message-ID: <4b9cc7d60fdbd8fa41686fb94ed55e354fe6fa20.camel@suse.com>
In-Reply-To: <20200714220019.10854-1-mwilck@suse.com>
To: "Michael S. Tsirkin", Jason Wang
Cc: qemu-devel@nongnu.org, virtualization@lists.linux-foundation.org

On Wed, 2020-07-15 at 00:00 +0200, mwilck@suse.com wrote:
> From: Martin Wilck
>
> If a program opens /dev/hwrng with O_NONBLOCK and uses poll() and
> non-blocking read() to retrieve random data, it ends up in a tight
> loop with poll() always returning POLLIN and read() returning EAGAIN.
> This repeats forever until some process makes a blocking read() call.
> The reason is that virtio_read() always returns 0 in non-blocking
> mode, even if data is available.
>
> The following test program illustrates the behavior.
...
> This can be observed in the real world e.g. with nested qemu/KVM
> virtual machines, if both the "outer" and "inner" VMs have a
> virtio-rng device. If the "inner" VM requests random data, qemu
> running in the "outer" VM uses this device in a non-blocking manner,
> like the test program above.
>
> Fix it by returning available data if it exists.
>
> Signed-off-by: Martin Wilck
> ---
>  drivers/char/hw_random/virtio-rng.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/drivers/char/hw_random/virtio-rng.c b/drivers/char/hw_random/virtio-rng.c
> index 79a6e47b5fbc..94806308d814 100644
> --- a/drivers/char/hw_random/virtio-rng.c
> +++ b/drivers/char/hw_random/virtio-rng.c
> @@ -59,6 +59,9 @@ static int virtio_read(struct hwrng *rng, void *buf, size_t size, bool wait)
>  	if (vi->hwrng_removed)
>  		return -ENODEV;
>
> +	if (vi->data_avail >= size || (vi->data_avail && !wait))
> +		return vi->data_avail;
> +
>  	if (!vi->busy) {
>  		vi->busy = true;
>  		reinit_completion(&vi->have_data);

This patch was nonsense. I'm sorry. Looking into it again.

Martin
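
For context, the failure mode described in the quoted patch can be demonstrated
with a small userspace loop along the following lines. This is only an
illustrative sketch of the poll()/non-blocking read() pattern on /dev/hwrng,
not the test program that was snipped from the quote; the buffer size and
error handling are arbitrary choices.

#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[64];
	struct pollfd pfd;
	int fd = open("/dev/hwrng", O_RDONLY | O_NONBLOCK);

	if (fd < 0) {
		perror("open /dev/hwrng");
		return 1;
	}
	pfd.fd = fd;
	pfd.events = POLLIN;

	for (;;) {
		/* On an affected kernel, poll() keeps reporting POLLIN ... */
		if (poll(&pfd, 1, -1) < 0) {
			perror("poll");
			break;
		}
		/*
		 * ... while the non-blocking read() keeps failing with
		 * EAGAIN, so this loop spins until some other process
		 * performs a blocking read on the device.
		 */
		ssize_t n = read(fd, buf, sizeof(buf));
		if (n > 0) {
			printf("read %zd bytes\n", n);
			break;
		}
		if (n < 0 && errno != EAGAIN) {
			perror("read");
			break;
		}
	}
	close(fd);
	return 0;
}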