* Re: KVM guests reading/writing guest memory directly and accurately
       [not found] <994f40e1-2a5b-4b7a-a4aa-23f824881d8a@www.fastmail.com>
@ 2021-01-24 16:07 ` Peter Maydell
  2021-01-25 23:32   ` Berto Furth
  0 siblings, 1 reply; 2+ messages in thread
From: Peter Maydell @ 2021-01-24 16:07 UTC (permalink / raw)
  To: Berto Furth; +Cc: qemu-arm, QEMU Developers

On Sun, 24 Jan 2021 at 07:22, Berto Furth <bertofurth@sent.com> wrote:
> Can anyone give me some advice on how a machine or device can read and write KVM guest RAM and get a guaranteed up-to-date result? Can someone point me at an example in the latest QEMU source code? I'm working with an ARM-32 guest (-cpu host,aarch64=off) running on an ARM-64 host (Cortex-A72, Raspberry Pi 4B).
>
> I have a problem where if I write directly to my guest RAM (such as for a DMA transfer), the guest can't see the change straight away. Similarly, when the guest writes memory, the host doesn't see the change until much later.
>
> If during a KVM_EXIT_MMIO the QEMU host changes some values in guest RAM (via address_space_write() or cpu_physical_memory_rw() etc.), is there a way to make the guest able to accurately read that memory as soon as the exit is complete? Additionally, if a guest changes a value in RAM just before a KVM_EXIT_MMIO, is there a way to ensure that the QEMU host can then read the up-to-date, newly set values?

With KVM I think this is just the normal "multiple threads/CPUs both
accessing one in-memory data structure" effect, so you need a
memory barrier to ensure that what one thread has written is
visible to the other. I think the idea is that
include/sysemu/dma.h provides a dma_barrier() (which is
just a CPU memory barrier) and some wrapper functions which put the
barrier on the right side of a read or write operation. On the guest
side it should already be using the right barrier insns in order
to ensure that real hardware DMA sees the right view of RAM...
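
On the guest side that usually looks something like the sketch below:
publish the descriptor in RAM, then a dmb, then the doorbell write
that causes the exit. (Hypothetical fragment, not taken from any real
driver; the struct layout and the doorbell register are made up.)

  #include <stdint.h>

  struct desc { uint64_t buf; uint32_t len; uint32_t flags; };

  static void post_descriptor(volatile struct desc *d,
                              volatile uint32_t *doorbell,
                              uint64_t buf, uint32_t len)
  {
      d->buf = buf;
      d->len = len;
      d->flags = 1;                       /* descriptor now valid */
      /* order the descriptor stores before the MMIO doorbell write */
      __asm__ volatile("dmb ish" ::: "memory");
      *doorbell = 1;                      /* triggers KVM_EXIT_MMIO */
  }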

We're very inconsistent about using these -- I've never liked the way
we have separate 'dma' operations here rather than having the normal
functions Just Work. But I haven't ever looked very deeply into what
the requirements in this area are.
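
On the QEMU side, a device model that gets kicked like that would go
through the dma.h wrappers rather than raw cpu_physical_memory_rw().
Very roughly (untested sketch; the descriptor layout is invented,
endianness handling is ignored, and the exact dma_memory_read()/
dma_memory_write() prototypes differ between QEMU versions, so treat
the signatures as illustrative):

  #include "qemu/osdep.h"
  #include "sysemu/dma.h"
  #include "exec/address-spaces.h"

  struct my_desc {
      uint64_t buf;
      uint32_t len;
      uint32_t flags;
  };

  /* Called from the device's MMIO write handler, i.e. while handling
   * the KVM_EXIT_MMIO for the doorbell register. */
  static void mydev_doorbell(hwaddr desc_gpa)
  {
      struct my_desc d;

      /* dma_memory_read() issues dma_barrier() (a plain CPU memory
       * barrier) as well as doing the access, so the descriptor the
       * guest wrote just before ringing the doorbell is read
       * coherently by this thread. */
      dma_memory_read(&address_space_memory, desc_gpa, &d, sizeof(d));

      /* ... perform the transfer, writing data into guest RAM ... */

      d.flags = 0;     /* mark the descriptor completed */
      /* dma_memory_write() likewise barriers, so the guest sees the
       * transferred data before it sees the completion flag. */
      dma_memory_write(&address_space_memory, desc_gpa, &d, sizeof(d));
  }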

> I understand that the proper thing to do is to set up the memory region in question as MMIO, so that when the guest accesses this memory a KVM_EXIT_MMIO will occur, but the memory region in question has to be executable, and MMIO memory areas are not executable in QEMU. In addition, it's not easily possible to predict beforehand exactly what memory addresses are going to be involved in DMA, so I'd prefer to avoid having to dynamically construct little MMIO memory islands inside the main guest RAM space as the guest runs.

You only want to mark regions as MMIO if they need to actually come
out to QEMU for the guest memory access to be handled -- typically
these are devices' MMIO-mapped register banks. Normal RAM isn't
mapped as MMIO.
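
(In MemoryRegion terms the difference is just which init function the
board or device uses; a rough sketch, with invented names, addresses
and sizes, and with error handling and the MemoryRegionOps callbacks
omitted:)

  #include "qemu/osdep.h"
  #include "qemu/units.h"
  #include "exec/memory.h"
  #include "qapi/error.h"

  static MemoryRegion main_ram;    /* RAM: mapped into the guest by
                                      KVM, no exit on access */
  static MemoryRegion mydev_regs;  /* MMIO: every access exits to QEMU */

  static const MemoryRegionOps mydev_reg_ops; /* .read/.write omitted */

  static void hypothetical_board_init(MemoryRegion *sysmem, void *dev)
  {
      memory_region_init_ram(&main_ram, NULL, "main-ram", 512 * MiB,
                             &error_fatal);
      memory_region_add_subregion(sysmem, 0x40000000, &main_ram);

      memory_region_init_io(&mydev_regs, NULL, &mydev_reg_ops, dev,
                            "mydev-regs", 0x1000);
      memory_region_add_subregion(sysmem, 0x10000000, &mydev_regs);
  }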

> I'm assuming that the guest could be modified to disable d-caching (by modifying the ARM SCTLR register via cp15?), and that might help, but I'm desperately trying to avoid that if possible because I'm working with a proprietary "blob" on the guest that I don't have all the source code for.

With Arm KVM doing this wouldn't help; in fact it would make things
worse, because then the view of guest RAM that the guest sees has
the non-cacheable attribute, but the view of guest RAM that QEMU
has mapped is still cacheable, so the two get hopelessly mismatched
ideas of what the RAM contents are.

(Side note: if the guest wants to map RAM as non-cacheable, this
won't work with Arm KVM (unless the host CPU supports FEAT_S2FWB,
which is an Armv8.4 feature), for the same "QEMU and guest view
of the same block of RAM disagree about whether it's cached" reason.
The most commonly encountered case of this is that you can't use a
normal VGA PCI graphics device model with KVM, because the guest maps
the graphics RAM on the device non-cacheable.)

> I know it's not very professional of me to make an emotional plea, but I've been working on this for weeks and I am desperately hoping someone can point to a solution for me. I am not a KVM expert and so I'm hoping I'm just missing something simple and obvious that one of you can quickly point out for me.

Nah, this isn't obvious stuff -- a lot of QEMU's internals aren't
very well documented and often are inconsistent about whether they
do things correctly or not...

thanks
-- PMM



* Re: KVM guests reading/writing guest memory directly and accurately
  2021-01-24 16:07 ` KVM guests reading/writing guest memory directly and accurately Peter Maydell
@ 2021-01-25 23:32   ` Berto Furth
  0 siblings, 0 replies; 2+ messages in thread
From: Berto Furth @ 2021-01-25 23:32 UTC (permalink / raw)
  To: Peter Maydell; +Cc: qemu-arm, QEMU Developers

Thanks so much for your help and encouragement, Peter. I really appreciate it.

All the best!

On Mon, 25 Jan 2021, at 03:07, Peter Maydell wrote:
> [...]


