From: Vitaly Kuznetsov <vkuznets@redhat.com>
To: Peter Xu <peterx@redhat.com>
Cc: kvm@vger.kernel.org, Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <sean.j.christopherson@intel.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>,
	Michael Tsirkin <mst@redhat.com>,
	Julia Suvorova <jsuvorov@redhat.com>,
	Andy Lutomirski <luto@kernel.org>,
	Andrew Jones <drjones@redhat.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 0/3] KVM: x86: KVM_MEM_PCI_HOLE memory
Date: Tue, 01 Sep 2020 16:43:25 +0200
Message-ID: <87eenlwoaa.fsf@vitty.brq.redhat.com>
In-Reply-To: <20200825212526.GC8235@xz-x1>

Peter Xu <peterx@redhat.com> writes:

> On Fri, Aug 07, 2020 at 04:12:29PM +0200, Vitaly Kuznetsov wrote:
>> When testing Linux kernel boot with a QEMU q35 VM and direct kernel boot,
>> I observed 8193 accesses to PCI hole memory. When such an exit is handled
>> in KVM without exiting to userspace, it takes roughly 0.000001 sec;
>> handling the same exit in userspace is six times slower (0.000006 sec), so
>> the overall difference is 0.04 sec. This may be significant for 'microvm'
>> ideas.
>
> Sorry to comment so late, but just curious... have you looked at what those
> 8000+ accesses to PCI holes are and what they're used for?  What I can think
> of is port I/O reads (e.g. of the vendor ID field) while the BIOS scans the
> attached devices.  Though those should be far fewer than 8000+, and they
> should also be PIO rather than MMIO.

And sorry for replying late as well!

We explicitly want MMIO instead of PIO to speed things up: AFAIU, PIO
requires two exits per device (and we exit all the way to QEMU), while an
ECAM read is a single access; see the sketch below. Julia/Michael know
better how large the space is.
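
To make the difference concrete, here is a freestanding sketch (my own
illustration, not code from the patch; it assumes ring-0 I/O privileges,
an identity-mapped ECAM window, and QEMU's usual Q35 MMCONFIG base of
0xB0000000): a legacy mechanism #1 config read is an outl to 0xCF8
followed by an inl from 0xCFC, i.e. two trapped accesses, while the same
read through ECAM is a single MMIO load, which KVM_MEM_PCI_HOLE can then
complete without going to userspace:

#include <stdint.h>

#define PCI_CONFIG_ADDR  0xCF8
#define PCI_CONFIG_DATA  0xCFC
#define ECAM_BASE        0xB0000000ULL  /* assumed Q35 MMCONFIG base */

static inline void outl(uint16_t port, uint32_t val)
{
    __asm__ volatile("outl %0, %1" : : "a"(val), "Nd"(port));
}

static inline uint32_t inl(uint16_t port)
{
    uint32_t val;

    __asm__ volatile("inl %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}

/* Mechanism #1: address write + data read, two VM exits per dword. */
static uint32_t pci_cfg_read_pio(uint8_t bus, uint8_t dev, uint8_t fn,
                                 uint8_t off)
{
    uint32_t addr = (1u << 31) | ((uint32_t)bus << 16) |
                    ((uint32_t)dev << 11) | ((uint32_t)fn << 8) |
                    (off & 0xFC);

    outl(PCI_CONFIG_ADDR, addr);    /* exit #1 */
    return inl(PCI_CONFIG_DATA);    /* exit #2 */
}

/* ECAM: the same dword is one MMIO load, i.e. one exit at most. */
static uint32_t pci_cfg_read_ecam(uint8_t bus, uint8_t dev, uint8_t fn,
                                  uint16_t off)
{
    volatile uint32_t *p = (volatile uint32_t *)(uintptr_t)
        (ECAM_BASE + ((uint64_t)bus << 20) +
         ((uint64_t)dev << 15) + ((uint64_t)fn << 12) + off);

    return *p;                      /* single trapped access */
}

So scanning the vendor ID of every possible device pays twice as many
exits over PIO as over ECAM, and with KVM_MEM_PCI_HOLE the ECAM exits for
non-existent devices never even reach QEMU.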

>
> If this is only an overhead for virt (since bare-metal MMIOs should be fast),
> I'm also wondering whether we can do even better and skip those PCI hole
> reads entirely.  Since we know we're running virtualized, couldn't we provide
> that information to the guest in a better way than having it read PCI holes?

That would mean inventing a PV interface, and if we decide to go down this
road, I'd even argue for abandoning PCI completely. E.g. we could do
something similar to Hyper-V's VMBus.
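
For the sake of argument, discovery over such an interface could be as
trivial as a table the guest reads once; the layout below is entirely
hypothetical, just to show the contrast with probing config space:

#include <stdint.h>

/* Hypothetical PV device table, populated by the VMM in guest memory. */
struct pv_dev_desc {
    uint32_t device_id;    /* what the device is */
    uint32_t irq;          /* pre-assigned interrupt */
    uint64_t mmio_base;    /* register window */
    uint64_t mmio_size;
};

struct pv_dev_table {
    uint32_t count;
    struct pv_dev_desc devs[];    /* 'count' entries, no probing needed */
};

One page of memory would replace thousands of config-space reads, which
is roughly what VMBus channel offers buy Hyper-V guests.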

-- 
Vitaly


