From: Ajay Kaher <akaher@vmware.com>
To: Nadav Amit <namit@vmware.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Cc: Matthew Wilcox <willy@infradead.org>,
	"bhelgaas@google.com" <bhelgaas@google.com>,
	"tglx@linutronix.de" <tglx@linutronix.de>,
	"mingo@redhat.com" <mingo@redhat.com>,
	"bp@alien8.de" <bp@alien8.de>,
	"dave.hansen@linux.intel.com" <dave.hansen@linux.intel.com>,
	"x86@kernel.org" <x86@kernel.org>,
	"hpa@zytor.com" <hpa@zytor.com>,
	"linux-pci@vger.kernel.org" <linux-pci@vger.kernel.org>,
	"rostedt@goodmis.org" <rostedt@goodmis.org>,
	Srivatsa Bhat <srivatsab@vmware.com>,
	"srivatsa@csail.mit.edu" <srivatsa@csail.mit.edu>,
	Alexey Makhalov <amakhalov@vmware.com>,
	Anish Swaminathan <anishs@vmware.com>,
	Vasavi Sirnapalli <vsirnapalli@vmware.com>,
	"er.ajay.kaher@gmail.com" <er.ajay.kaher@gmail.com>,
	Bjorn Helgaas <helgaas@kernel.org>
Subject: Re: [PATCH] MMIO should have more priority then IO
Date: Mon, 11 Jul 2022 17:53:54 +0000	[thread overview]
Message-ID: <F9E62470-71EA-40DD-875C-6B2B1831F3ED@vmware.com> (raw)
In-Reply-To: <83C436BD-E12E-420C-B651-B3788F1C4683@vmware.com>


On 11/07/22, 10:34 PM, "Nadav Amit" <namit@vmware.com> wrote:

> On Jul 10, 2022, at 11:31 PM, Ajay Kaher <akaher@vmware.com> wrote:
>
> During boot-time there are many PCI reads. Currently, when these reads are
> performed by a virtual machine, they all cause a VM-exit, and therefore each
> one of them induces a considerable overhead.
>
> When using MMIO (but not PIO), it is possible to map the PCI BARs of the
> virtual machine to some memory area that holds the values that the “emulated
> hardware” is supposed to return. The memory region is mapped as "read-only”
> in the NPT/EPT, so reads from these BAR regions would be treated as regular
> memory reads. Writes would still be trapped and emulated by the hypervisor.

I think there is a typo in the paragraph above: it should be the per-device
PCI config space, i.e. the 4KB ECAM region, not the PCI BARs. Please read
the paragraph above as:

When using MMIO (but not PIO), it is possible to map the PCI config space of the
virtual machine to some memory area that holds the values that the “emulated
hardware” is supposed to return. The memory region is mapped as “read-only”
in the NPT/EPT, so reads from these PCI config space regions would be
treated as regular memory reads. Writes would still be trapped and emulated
by the hypervisor.
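
To make the distinction concrete, here is a rough sketch (simplified, and not
the kernel's actual raw_pci_read() code) of the two access mechanisms being
compared, assuming an ecam pointer obtained from ioremap() of the base address
reported by the ACPI MCFG table, and the standard conf1/ECAM offset layouts:

#include <linux/types.h>
#include <linux/io.h>

/* Legacy "conf1" access (config offsets 0-255 only): two port I/O
 * instructions, each of which unconditionally traps to the hypervisor. */
static u32 pio_config_read(u8 bus, u8 devfn, u16 reg)
{
	outl(0x80000000 | (bus << 16) | (devfn << 8) | (reg & 0xfc), 0xCF8);
	return inl(0xCFC);
}

/* ECAM access: a single load from the device's 4KB config window.  If the
 * hypervisor backs this page with read-only guest memory in the EPT/NPT,
 * the load completes like an ordinary memory read, with no VM-exit;
 * writes still fault and are emulated. */
static u32 ecam_config_read(void __iomem *ecam, u8 bus, u8 devfn, u16 reg)
{
	return readl(ecam + ((u32)bus << 20) + ((u32)devfn << 12) + (reg & 0xffc));
}

That difference is why preferring the MMIO/ECAM path over port I/O for config
reads pays off so heavily in a guest.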

We will send a v2 or a new patch that will be VMware-specific.

> I have a vague recollection from some similar project that I had 10 years
> ago that this might not work for certain emulated device registers. For
> instance, some hardware registers, specifically those that report hardware
> events, are “clear-on-read”. Apparently, Ajay took that into consideration.
>
> That is the reason for this quite amazing difference - several orders of
> magnitude - between the overhead that is caused by raw_pci_read(): 120us for
> PIO and 100ns for MMIO. Admittedly, I do not understand why PIO access would
> take 120us (I would have expected it to be 10 times faster, at least), but
> the benefit is quite clear.
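
For context, the per-call latencies above could be gathered with something as
simple as the hypothetical sketch below, wrapping the x86 raw_pci_read()
helper; it is only an illustration, not the instrumentation actually used for
the 120us/100ns figures:

#include <linux/ktime.h>
#include <linux/pci.h>
#include <linux/printk.h>
#include <asm/pci_x86.h>	/* raw_pci_read() */

/* Time one 32-bit config read (the vendor/device ID dword) of bus:devfn
 * in PCI domain 0 and report the elapsed wall-clock time. */
static void time_one_config_read(unsigned int bus, unsigned int devfn)
{
	u32 id;
	ktime_t t0 = ktime_get();

	raw_pci_read(0, bus, devfn, PCI_VENDOR_ID, 4, &id);

	pr_info("config read of %02x:%02x.%x took %lld ns (id=%08x)\n",
		bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
		ktime_to_ns(ktime_sub(ktime_get(), t0)), id);
}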



Thread overview: 19+ messages
2022-06-28 16:29 [PATCH] MMIO should have more priority then IO Ajay Kaher
2022-06-28 18:09 ` Bjorn Helgaas
2022-07-08  5:56   ` Ajay Kaher
2022-07-08 12:56     ` Matthew Wilcox
2022-07-08 16:45       ` Nadav Amit
2022-07-08 17:55         ` Matthew Wilcox
2022-07-08 18:35           ` Nadav Amit
2022-07-08 18:43             ` Matthew Wilcox
2022-07-08 19:49               ` Nadav Amit
2022-07-11  6:31                 ` Ajay Kaher
2022-07-11 17:04                   ` Nadav Amit
2022-07-11 17:53                     ` Ajay Kaher [this message]
2022-07-11 18:18                       ` H. Peter Anvin
2022-07-11 19:43               ` Paolo Bonzini
2022-07-12 21:20               ` Shuah Khan
2022-08-31 21:44                 ` Matthew Wilcox
2022-08-31 22:23                   ` Shuah Khan
2022-06-29  6:12 ` Greg KH
2022-06-29  6:37   ` H. Peter Anvin
