From: Ladi Prosek <lprosek@redhat.com>
To: Amit Shah <amit.shah@redhat.com>
Cc: pagupta@redhat.com, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH] rng-random: implement request queue
Date: Wed, 3 Feb 2016 13:02:34 -0500 (EST)
Message-ID: <831310576.31418757.1454522554452.JavaMail.zimbra@redhat.com>
In-Reply-To: <20160203123639.GA20527@grmbl.mre>

Hi Amit,

----- Original Message -----
> Hi Ladi,
> 
> Adding Pankaj to CC, he too looked at this recently.
> 
> On (Fri) 22 Jan 2016 [13:19:58], Ladi Prosek wrote:
> > If the guest adds a buffer to the virtio queue while another buffer
> > is still pending and hasn't been filled and returned by the rng
> > device, rng-random internally discards the pending request, which
> > leads to the second buffer getting stuck in the queue. For the guest
> > this manifests as delayed completion of reads from virtio-rng, i.e.
> > a read is completed only after another read is issued.
> > 
> > This patch adds an internal queue of requests, analogous to what
> > rng-egd uses, to make sure that requests and responses are balanced
> > and correctly ordered.
> 
> ... and this can lead to breaking migration (the queue of requests on
> the host needs to be migrated, else the new host will have no idea of
> the queue).

I was under the impression that clearing the queue pre-migration, as
implemented by the RngBackendClass::cancel_requests callback, is enough.
If it weren't, the rng-egd backend would already be broken, as its
queueing logic is pretty much identical.
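
For reference, the queueing logic in question boils down to a per-backend
FIFO of outstanding requests, roughly like the self-contained sketch below
(type and function names are made up for illustration, not the actual QEMU
definitions):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Completion callback handed in by the virtio-rng device. */
typedef void (*entropy_receive_fn)(void *opaque, const void *data, size_t size);

/* One outstanding guest request for entropy. */
typedef struct rng_request {
    entropy_receive_fn receive_entropy;
    void *opaque;
    uint8_t *data;             /* buffer being filled */
    size_t offset;             /* bytes filled so far */
    size_t size;               /* bytes the guest asked for */
    struct rng_request *next;  /* singly linked FIFO */
} rng_request;

typedef struct rng_backend_sketch {
    rng_request *head;         /* oldest pending request */
    rng_request **tail;        /* points at head initially, then at the last ->next */
} rng_backend_sketch;

/* A new request is appended at the tail instead of overwriting the
 * currently pending one, which is what rng-random used to do. */
static void request_entropy(rng_backend_sketch *b, size_t size,
                            entropy_receive_fn cb, void *opaque)
{
    rng_request *req = calloc(1, sizeof(*req));

    req->receive_entropy = cb;
    req->opaque = opaque;
    req->data = malloc(size);
    req->size = size;
    *b->tail = req;
    b->tail = &req->next;
}

/* Entropy arriving from the host source fills the head request; once a
 * request is complete it is handed back to the guest and popped, so
 * requests and responses stay balanced and correctly ordered. */
static void entropy_available(rng_backend_sketch *b,
                              const uint8_t *buf, size_t len)
{
    while (len > 0 && b->head) {
        rng_request *req = b->head;
        size_t n = req->size - req->offset;

        n = n > len ? len : n;
        memcpy(req->data + req->offset, buf, n);
        req->offset += n;
        buf += n;
        len -= n;

        if (req->offset == req->size) {
            req->receive_entropy(req->opaque, req->data, req->size);
            b->head = req->next;
            if (!b->head) {
                b->tail = &b->head;
            }
            free(req->data);
            free(req);
        }
    }
}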

/**
 * rng_backend_cancel_requests:
 * @s: the backend to cancel all pending requests in
 *
 * Cancels all pending requests submitted by @rng_backend_request_entropy.  This
 * should be used by a device during reset or in preparation for live migration
 * to stop tracking any request.
 */
void rng_backend_cancel_requests(RngBackend *s);

Upon closer inspection though, this function appears to have no callers.
Either I'm missing something or there's another bug to be fixed.
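
Assuming it does get wired up, then in terms of the sketch above the
cancel path only has to drop whatever is still queued, so no per-request
state survives into reset or migration (again just a sketch, not the
actual rng-egd implementation):

/* Continuing the sketch above: drop every pending request so the
 * backend carries no per-request state across reset or migration. */
static void cancel_requests(rng_backend_sketch *b)
{
    while (b->head) {
        rng_request *req = b->head;

        b->head = req->next;
        free(req->data);
        free(req);
    }
    b->tail = &b->head;
}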

> I think we should limit the queue size to 1 instead.  Multiple rng
> requests should not be common, because if we did have entropy, we'd
> just service the guest request and be done with it.  If we haven't
> replied to the guest, it just means that the host itself is waiting
> for more entropy, or is waiting for the timeout before the guest's
> ratelimit is lifted.

The scenario I had in mind is multiple processes in the guest
requesting entropy at the same time, no rate limit, and a fast entropy
source on the host. Being able to queue up requests would definitely
help performance; I think I even benchmarked it, but I must have lost
the numbers. I can set it up again and rerun the benchmark if you're
interested.
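
Something along these lines, run inside the guest, is what I have in
mind (a rough sketch rather than the benchmark I actually ran; it
assumes the guest exposes virtio-rng as /dev/hwrng via the hw_random
framework):

/* Rough guest-side benchmark sketch: several processes reading from
 * /dev/hwrng in parallel.  Run under "time" with and without the
 * request queue in the backend to compare how long it takes. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define NPROC  8               /* concurrent readers */
#define NBYTES (64 * 1024)     /* bytes each reader pulls */

int main(void)
{
    for (int i = 0; i < NPROC; i++) {
        if (fork() == 0) {
            int fd = open("/dev/hwrng", O_RDONLY);
            char buf[4096];
            size_t total = 0;

            if (fd < 0) {
                perror("open /dev/hwrng");
                _exit(1);
            }
            while (total < NBYTES) {
                ssize_t n = read(fd, buf, sizeof(buf));
                if (n <= 0) {
                    break;
                }
                total += (size_t)n;
            }
            close(fd);
            _exit(0);
        }
    }
    while (wait(NULL) > 0) {
        /* reap all readers */
    }
    return 0;
}

Comparing the wall-clock time with and without the backend queue should
show whether queueing helps when several readers are outstanding at once.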
 
> So, instead of fixing this using a queue, how about limiting the size
> of the vq to have just one element at a time?

I don't believe that this is a good solution. Although a one-element
queue is perfectly valid spec-wise, I can see how it could confuse
less-than-perfect driver implementations. Additionally, the driver would
have to implement some kind of guest-side queueing logic and serialize
its requests, or else drop them when the virtqueue is full. Overall,
I don't think it's completely crazy to call this a breaking change.

> Thanks,
> 
> 		Amit
> 
> 

Thread overview: 12+ messages
2016-01-22 12:19 [Qemu-devel] [PATCH] rng-random: implement request queue Ladi Prosek
2016-02-03 12:36 ` Amit Shah
2016-02-03 18:02   ` Ladi Prosek [this message]
2016-02-03 18:44   ` Paolo Bonzini
2016-02-04  8:53     ` Pankaj Gupta
2016-02-04 17:36       ` Ladi Prosek
2016-02-05  5:31         ` Pankaj Gupta
2016-02-04 17:24     ` Ladi Prosek
2016-02-04 18:07       ` Ladi Prosek
2016-02-05  8:32         ` Paolo Bonzini
2016-03-03  5:05         ` Amit Shah
2016-03-03  9:30           ` Ladi Prosek
