From: Paolo Bonzini <pbonzini@redhat.com>
To: Ming Lin <mlin@kernel.org>,
	linux-nvme@lists.infradead.org, qemu-devel@nongnu.org
Cc: fes@google.com, axboe@fb.com, Rob Nelson <rlnelson@google.com>,
	virtualization@lists.linux-foundation.org, keith.busch@intel.com,
	tytso@mit.edu, Christoph Hellwig <hch@lst.de>,
	Mihai Rusu <dizzy@google.com>
Subject: Re: [Qemu-devel] [PATCH -qemu] nvme: support Google vendor extension
Date: Thu, 19 Nov 2015 11:37:54 +0100
Message-ID: <564DA682.8050706@redhat.com>
In-Reply-To: <1447825624-17011-3-git-send-email-mlin@kernel.org>



On 18/11/2015 06:47, Ming Lin wrote:
> @@ -726,7 +798,11 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val)
>          }
>  
>          start_sqs = nvme_cq_full(cq) ? 1 : 0;
> -        cq->head = new_head;
> +        /* When the mapped pointer memory area is setup, we don't rely on
> +         * the MMIO written values to update the head pointer. */
> +        if (!cq->db_addr) {
> +            cq->head = new_head;
> +        }

You are still checking

        if (new_head >= cq->size) {
            return;
        }

above.  I think that check is incorrect when the extension is present,
and furthermore it's the only place where the guest-written value (val)
is still used.
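
Roughly what I would expect instead (an untested sketch; nvme_update_cq_head()
is a made-up helper that would re-read the head from the guest's shadow
doorbell buffer at cq->db_addr, e.g. with pci_dma_read()):

        if (cq->db_addr) {
            /* Shadow doorbell in use: the MMIO write is only a kick, so
             * ignore the written value and fetch the head from guest
             * memory instead; the bounds check on new_head goes away. */
            nvme_update_cq_head(n, cq);
        } else {
            if (new_head >= cq->size) {
                return;
            }
            cq->head = new_head;
        }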

If you're not using val, you could use ioeventfd for the MMIO.  An
ioeventfd cuts the MMIO cost by at least 55% and up to 70%. Here are
quick and dirty measurements from kvm-unit-tests's vmexit.flat
benchmark, on two very different machines:

			Haswell-EP		Ivy Bridge i7
  MMIO memory write	5100 -> 2250 (55%)	7000 -> 3000 (58%)
  I/O port write	3800 -> 1150 (70%)	4100 -> 1800 (57%)

You would need to allocate two eventfds for each qid, one for the sq and
one for the cq.  Also, queue processing would then be bounced to the
QEMU iothread, so you can probably get rid of sq->timer and cq->timer.
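
Something along these lines for the cq side, for example (a completely
untested sketch: it assumes a new "notifier" EventNotifier field in
NvmeCQueue, the nvme_cq_notifier() and nvme_init_cq_eventfd() names are
made up, and the doorbell offset assumes the controller's CAP.DSTRD):

        static void nvme_cq_notifier(EventNotifier *e)
        {
            /* "notifier" is the assumed new field in NvmeCQueue */
            NvmeCQueue *cq = container_of(e, NvmeCQueue, notifier);

            event_notifier_test_and_clear(e);
            /* same work the cq->timer callback does today */
            nvme_post_cqes(cq);
        }

        static void nvme_init_cq_eventfd(NvmeCQueue *cq)
        {
            NvmeCtrl *n = cq->ctrl;
            uint16_t offset = (cq->cqid * 2 + 1) * (4 << NVME_CAP_DSTRD(n->bar.cap));

            event_notifier_init(&cq->notifier, 0);
            event_notifier_set_handler(&cq->notifier, nvme_cq_notifier);
            /* KVM then completes the doorbell write by signalling the
             * eventfd, without exiting to QEMU's MMIO emulation path. */
            memory_region_add_eventfd(&n->iomem,
                                      0x1000 + offset, 4, false, 0, &cq->notifier);
        }

The sq side would be symmetric: the even doorbell offset, and a handler
that calls nvme_process_sq() instead.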

Paolo
