From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:59278)
	by lists.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1aQwfv-0001Sg-0v for qemu-devel@nongnu.org;
	Wed, 03 Feb 2016 07:36:47 -0500
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from ) id 1aQwfq-0003kI-VL for qemu-devel@nongnu.org;
	Wed, 03 Feb 2016 07:36:46 -0500
Received: from mx1.redhat.com ([209.132.183.28]:50051)
	by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1aQwfq-0003kE-Q0 for qemu-devel@nongnu.org;
	Wed, 03 Feb 2016 07:36:42 -0500
Received: from int-mx14.intmail.prod.int.phx2.redhat.com
	(int-mx14.intmail.prod.int.phx2.redhat.com [10.5.11.27])
	by mx1.redhat.com (Postfix) with ESMTPS id DC8AB3B718
	for ; Wed, 3 Feb 2016 12:36:41 +0000 (UTC)
Date: Wed, 3 Feb 2016 18:06:39 +0530
From: Amit Shah
Message-ID: <20160203123639.GA20527@grmbl.mre>
References: <1453465198-11000-1-git-send-email-lprosek@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1453465198-11000-1-git-send-email-lprosek@redhat.com>
Subject: Re: [Qemu-devel] [PATCH] rng-random: implement request queue
List-Id:
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
To: Ladi Prosek
Cc: pagupta@redhat.com, qemu-devel@nongnu.org

Hi Ladi,

Adding Pankaj to CC, he too looked at this recently.

On (Fri) 22 Jan 2016 [13:19:58], Ladi Prosek wrote:
> If the guest adds a buffer to the virtio queue while another buffer
> is still pending and hasn't been filled and returned by the rng
> device, rng-random internally discards the pending request, which
> leads to the second buffer getting stuck in the queue.  For the guest
> this manifests as delayed completion of reads from virtio-rng, i.e.
> a read is completed only after another read is issued.
>
> This patch adds an internal queue of requests, analogous to what
> rng-egd uses, to make sure that requests and responses are balanced
> and correctly ordered.

... and this can lead to breaking migration (the queue of requests on
the host needs to be migrated, else the new host will have no idea of
the queue).

I think we should limit the queue size to 1 instead.  Multiple rng
requests should not be common, because if we did have entropy, we'd
just service the guest request and be done with it.  If we haven't
replied to the guest, it just means that the host itself is waiting
for more entropy, or is waiting for the timeout before the guest's
ratelimit is lifted.

So, instead of fixing this using a queue, how about limiting the size
of the vq to have just one element at a time?

Thanks,

		Amit