From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: Christoffer Dall <christoffer.dall@linaro.org>,
	Peter Maydell <peter.maydell@linaro.org>,
	Marc Zyngier <marc.zyngier@arm.com>,
	Andrew Jones <drjones@redhat.com>,
	Laszlo Ersek <lersek@redhat.com>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	Alexander Graf <agraf@suse.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Subject: issues with emulated PCI MMIO backed by host memory under KVM
Date: Fri, 24 Jun 2016 16:04:45 +0200	[thread overview]
Message-ID: <CAKv+Gu8MWxUvySvsQW+DpohLLHSi8ZYm4o7gCE2a4q0iC=cuHA@mail.gmail.com> (raw)

Hi all,

This old subject came up again in a discussion related to PCIe support
for QEMU/KVM under Tianocore. The fact that we need to map PCI MMIO
regions as cacheable is preventing us from reusing a significant slice
of the PCIe support infrastructure, and so I'd like to bring this up
again, perhaps just to reiterate why we're simply out of luck.

To refresh your memories, the issue is that on ARM, PCI MMIO regions
for emulated devices may be backed by memory that is mapped cacheable
by the host. Note that this has nothing to do with the device being
DMA coherent or not: in this case, we are dealing with regions that
are not memory from the POV of the guest, and it is reasonable for the
guest to assume that accesses to such a region are not visible to the
device before they hit the actual PCI MMIO window and are translated
into cycles on the PCI bus. Mapping such a region cacheable is, in
fact, a strange thing to do, and patches implementing this against the
generic PCI stack in Tianocore are unlikely to be accepted by the
maintainers.

Note that this issue affects not only framebuffers on PCI cards, but
also emulated USB host controllers (perhaps Alex can remind us which
one exactly?) and likely other emulated generic PCI devices as well.

Since the issue exists only for emulated PCI devices whose MMIO
regions are backed by host memory, is there any way we can already
distinguish such memslots from ordinary ones? If we can, is there
anything we could do to treat these specially? Perhaps something like
using read-only memslots, so we can at least trap guest writes instead
of letting main memory go out of sync with the caches unnoticed? I am
just brainstorming here ...

In any case, it would be good to put this to bed one way or the other
(assuming it hasn't been put to bed already).

Thanks,
Ard.

Thread overview: 32+ messages
2016-06-24 14:04 Ard Biesheuvel [this message]
2016-06-24 14:57 ` issues with emulated PCI MMIO backed by host memory under KVM Andrew Jones
2016-06-27  8:17   ` Marc Zyngier
2016-06-24 18:16 ` Ard Biesheuvel
2016-06-25  7:15   ` Alexander Graf
2016-06-25  7:19 ` Alexander Graf
2016-06-27  8:11   ` Marc Zyngier
2016-06-27  9:16 ` Christoffer Dall
2016-06-27  9:47   ` Ard Biesheuvel
2016-06-27 10:34     ` Christoffer Dall
2016-06-27 12:30       ` Ard Biesheuvel
2016-06-27 13:35         ` Christoffer Dall
2016-06-27 13:57           ` Ard Biesheuvel
2016-06-27 14:29             ` Alexander Graf
2016-06-28 11:02               ` Laszlo Ersek
2016-06-28 10:04             ` Christoffer Dall
2016-06-28 11:06               ` Laszlo Ersek
2016-06-28 12:20                 ` Christoffer Dall
2016-06-28 13:10                   ` Catalin Marinas
2016-06-28 13:19                     ` Ard Biesheuvel
2016-06-28 13:25                       ` Catalin Marinas
2016-06-28 14:02                         ` Andrew Jones
2016-06-27 14:24       ` Alexander Graf
2016-06-28 10:55       ` Laszlo Ersek
2016-06-28 13:14         ` Ard Biesheuvel
2016-06-28 13:32           ` Laszlo Ersek
2016-06-29  7:12             ` Gerd Hoffmann
2016-06-28 15:23         ` Alexander Graf
2016-06-27 13:15     ` Peter Maydell
2016-06-27 13:49       ` Mark Rutland
2016-06-27 14:10         ` Peter Maydell
2016-06-28 10:05           ` Christoffer Dall
