From: Peter Xu <peterx@redhat.com>
To: David Hildenbrand <david@redhat.com>
Cc: Le Tan <tamlokveer@gmail.com>,
	Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	wei.huang2@amd.com, qemu-devel@nongnu.org,
	Luiz Capitulino <lcapitulino@redhat.com>,
	Auger Eric <eric.auger@redhat.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	Wei Yang <richardw.yang@linux.intel.com>,
	Igor Mammedov <imammedo@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Dr . David Alan Gilbert" <dgilbert@redhat.com>
Subject: Re: [PATCH PROTOTYPE 3/6] vfio: Implement support for sparse RAM memory regions
Date: Wed, 18 Nov 2020 12:01:43 -0500	[thread overview]
Message-ID: <20201118170143.GC29639@xz-x1> (raw)
In-Reply-To: <6141422c-1427-2a8d-b3ff-3c49ab1b59d2@redhat.com>

On Wed, Nov 18, 2020 at 05:14:22PM +0100, David Hildenbrand wrote:
> That did the trick! Thanks!!!

Great!  In the meantime, I have a few questions below, mostly about memory
unplugging.  They could be naive - I know little about that area, so please
bear with me. :)

> 
> virtio-mem + vfio + iommu seems to work. More testing to be done.
> 
> However, malicious guests can play nasty tricks like
> 
> a) Unplugging plugged virtio-mem blocks while they are mapped via an
>    IOMMU
> 
> 1. Guest: map memory location X located on a virtio-mem device inside a
>    plugged block into the IOMMU
>    -> QEMU IOMMU notifier: create vfio DMA mapping
>    -> VFIO pins memory of the plugged block (populating memory)
> 2. Guest: Request to unplug memory location X via virtio-mem device
>    -> QEMU virtio-mem: discards the memory.
>    -> VFIO still has the memory pinned

When unplugging some memory, does the user need to first do something to notify
the guest kernel that "this memory is going to be unplugged soon" (say, echoing
"offline" into some sysfs file)?  Then the kernel should be responsible for
preparing for that before it really happens, e.g., migrating anonymous pages
out of this memory block.  I don't know what would happen if some pages on the
memory block were used for DMA like this and we wanted to unplug it.  Ideally I
would expect the "echo offline" operation to fail with something like EBUSY if
the kernel can't notify the device driver about this, or if doing so is hard.

IMHO this question is not really related to the vIOMMU, but a general question
for unplugging.  Say, what would happen if we unplug some memory holding DMA
buffers without any vIOMMU at all?  The buffer will be invalid right after
unplugging, so the guest kernel should either fail the unplug operation, or at
least tell the device drivers about it somehow?

> 
> We consume more memory than intended. In case virtio-mem memory would get
> replugged and used, we would have an inconsistency. An IOMMU/device reset
> fixes it (whereby all VFIO mappings are removed via the IOMMU notifier).
> 
> 
> b) Mapping unplugged virtio-mem blocks via an IOMMU
> 
> 1. Guest: map memory location X located on a virtio-mem device inside an
>    unplugged block
>    -> QEMU IOMMU notifier: create vfio DMA mapping
>    -> VFIO pins memory of unplugged blocks (populating memory)

For this case, I would expect vfio_get_xlat_addr() to fail directly if the
guest driver forces mapping some IOVA onto an invalid (unplugged) range of the
virtio-mem device.  Even before that, since the guest should know that this
region of the virtio-mem device is not valid because it is unplugged,
shouldn't the guest kernel directly fail the dma_map() on such a region, even
before the mapping request reaches QEMU?

Thanks,

> 
> Memory that's supposed to be discarded now consumes memory. This is similar
> to a malicious guest simply writing to unplugged memory blocks (to be
> tackled with "protection of unplugged memory" in the future) - however
> memory will also get pinned.
> 
> 
> To prohibit b) from happening, we would have to disallow creating the VFIO
> mapping (fairly easy).
> 
> To prohibit a), there would have to be some notification to IOMMU
> implementations to unmap/refresh whenever an IOMMU entry still points at
> memory that is getting discarded (and the VM is doing something it's not
> supposed to do).

-- 
Peter Xu


