qemu-devel.nongnu.org archive mirror
* [Qemu-devel] About cpu_physical_memory_map()
@ 2018-05-30  0:24 Huaicheng Li
  2018-05-31 10:10 ` Peter Maydell
  0 siblings, 1 reply; 4+ messages in thread
From: Huaicheng Li @ 2018-05-30  0:24 UTC (permalink / raw)
  To: qemu-devel, kvm

Dear QEMU/KVM developers,

I was trying to map a buffer in the host QEMU process to a guest user-space
application. I tried to achieve this by allocating a buffer in the guest
application first, then mapping this buffer into the QEMU process's address
space via GVA -> GPA -> HVA (the GPA-to-HVA step is done via
cpu_physical_memory_map()). Finally, I wrote a host kernel driver to walk the
QEMU process's page table and change the corresponding page table entries so
that the HVA points to the HPA of the target buffer.
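
For concreteness, a minimal sketch of the GPA-to-HVA step as I use it
(gpa and len here are placeholder values):

    /* Map a guest-physical range into QEMU's address space. */
    hwaddr len = 4096;
    void *hva = cpu_physical_memory_map(gpa, &len, 1 /* is_write */);
    if (hva) {
        /* ... access the guest buffer through hva ... */
        cpu_physical_memory_unmap(hva, len, 1 /* is_write */, len);
    }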

Basically, the idea is to keep the mapping

GVA -> GPA -> HVA (this step maps the guest buffer into QEMU's address space)

and to change

HVA -> HPA1 (HPA1 is the physical base address of the buffer malloc'ed by the
guest application)

to

HVA -> HPA2 (HPA2 is the physical base address of the target buffer we want
to remap into the guest application's address space).

This change is made by my kernel module, which edits the relevant entries in
QEMU's page table on the host; a sketch follows below. With it in place, I
expect GVA to point to HPA2 instead of HPA1.
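
Roughly, the module does something like this (a simplified sketch for x86-64
Linux; locking and most error checks are omitted, and remap_hva is just an
illustrative name):

    /* Walk QEMU's page table and retarget the PTE for hva at hpa2. */
    static int remap_hva(struct mm_struct *mm, unsigned long hva,
                         unsigned long hpa2)
    {
        pgd_t *pgd = pgd_offset(mm, hva);
        p4d_t *p4d = p4d_offset(pgd, hva);
        pud_t *pud = pud_offset(p4d, hva);
        pmd_t *pmd = pmd_offset(pud, hva);
        pte_t *pte;

        if (pmd_none(*pmd) || pmd_bad(*pmd))
            return -EFAULT;
        pte = pte_offset_map(pmd, hva);
        /* Keep the old protection bits, swap in the new page frame. */
        set_pte(pte, pfn_pte(hpa2 >> PAGE_SHIFT, pte_pgprot(*pte)));
        pte_unmap(pte);
        flush_tlb_mm(mm);    /* flush the host-side TLB entries */
        return 0;
    }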

With the above change, when I access the HVA in the QEMU process, it indeed
points to HPA2. However, inside the guest OS, the application's GVA still
points to HPA1.

I have flushed the TLB entries for the QEMU process's page table (on the
host) as well as for the guest application's page table (in the guest OS),
but the guest application's GVA is still mapped to HPA1 instead of HPA2.

Does QEMU maintain a fixed GPA-to-HVA mapping? After going through the code
of cpu_physical_memory_map(), I think the HVA is calculated as
ramblock->host plus an offset, since guest RAM is an mmap'ed area in QEMU's
address space and the GPA is an offset within that area. Thus the GPA -> HVA
mapping is fixed at runtime. Is QEMU/KVM doing another layer of TLB caching,
such that the guest application picks up the old mapping to HPA1 instead of
HPA2?
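
In other words, conceptually something like this (not the exact QEMU code,
just the relationship I inferred; gpa_offset_within_block is illustrative):

    /* Within one RAMBlock, GPA -> HVA is a fixed linear offset into a
     * single mmap'ed region, so it never changes at runtime. */
    void *hva = ramblock->host + gpa_offset_within_block;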

Any comments are appreciated.

Thank you!

Best,
Huaicheng


* Re: [Qemu-devel] About cpu_physical_memory_map()
  2018-05-30  0:24 [Qemu-devel] About cpu_physical_memory_map() Huaicheng Li
@ 2018-05-31 10:10 ` Peter Maydell
  2018-06-01  8:00   ` Huaicheng Li
  0 siblings, 1 reply; 4+ messages in thread
From: Peter Maydell @ 2018-05-31 10:10 UTC (permalink / raw)
  To: Huaicheng Li; +Cc: QEMU Developers, kvm-devel

On 30 May 2018 at 01:24, Huaicheng Li <huaicheng@cs.uchicago.edu> wrote:
> Dear QEMU/KVM developers,
>
> I was trying to map a buffer in the host QEMU process to a guest user-space
> application. I tried to achieve this by allocating a buffer in the guest
> application first, then mapping this buffer into the QEMU process's address
> space via GVA -> GPA -> HVA (the GPA-to-HVA step is done via
> cpu_physical_memory_map()). Finally, I wrote a host kernel driver to walk
> the QEMU process's page table and change the corresponding page table
> entries so that the HVA points to the HPA of the target buffer.

This seems like the wrong way round to try to do this. As a rule
of thumb, you'll have an easier life if you have things behave
similarly to how they would in real hardware. So it'll be simpler
if you start with the buffer in the host QEMU process, map this
into the guest's physical address space at some GPA, tell the
guest kernel that that's the GPA to use, and have the guest kernel
map that GPA into the guest userspace process's virtual address space.
(Think of how you would map a framebuffer, for instance.)
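
As an illustrative sketch of the guest-kernel side (MYBUF_GPA and MYBUF_SIZE
are placeholders for whatever region you settle on), a small guest driver's
mmap handler could hand the range to userspace much as a framebuffer driver
does:

    #define MYBUF_GPA  0x80000000UL   /* example: the agreed-on GPA */
    #define MYBUF_SIZE 0x100000UL

    static int mybuf_mmap(struct file *filp, struct vm_area_struct *vma)
    {
        unsigned long len = vma->vm_end - vma->vm_start;

        if (len > MYBUF_SIZE)
            return -EINVAL;
        /* Map the guest-physical pages directly into the process. */
        return remap_pfn_range(vma, vma->vm_start,
                               MYBUF_GPA >> PAGE_SHIFT, len,
                               vma->vm_page_prot);
    }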

Changing the host page table entries for QEMU under its feet seems
like it's never going to work reliably.

(I think the specific problem you're running into is that guest memory
is both mapped into the QEMU host process and also exposed to the
guest VM. The former is controlled by the page tables for the
QEMU host process, but the latter is a different set of page tables,
which QEMU asks the kernel to configure, using KVM_SET_USER_MEMORY_REGION
ioctls.)
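
For reference, the second mapping has roughly this shape (a sketch of the
raw ioctl; inside QEMU it is driven by the memory listener rather than
open-coded like this, and slot/gpa/size/hva are placeholders):

    struct kvm_userspace_memory_region region = {
        .slot            = 1,                     /* a free memslot */
        .guest_phys_addr = gpa,                   /* where the guest sees it */
        .memory_size     = size,
        .userspace_addr  = (__u64)(uintptr_t)hva, /* QEMU-side buffer */
    };
    ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);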

thanks
-- PMM


* Re: [Qemu-devel] About cpu_physical_memory_map()
  2018-05-31 10:10 ` Peter Maydell
@ 2018-06-01  8:00   ` Huaicheng Li
  2018-06-06  4:22     ` Huaicheng Li
  0 siblings, 1 reply; 4+ messages in thread
From: Huaicheng Li @ 2018-06-01  8:00 UTC (permalink / raw)
  To: peter.maydell; +Cc: qemu-devel, kvm

Hi Peter,

Thanks a lot for the analysis!

> So it'll be simpler
> if you start with the buffer in the host QEMU process, map this
> into the guest's physical address space at some GPA, tell the
> guest kernel that that's the GPA to use, and have the guest kernel
> map that GPA into the guest userspace process's virtual address space.
> (Think of how you would map a framebuffer, for instance.)


This makes sense to me. Could you point me to similar implementations I can
refer to? Should I do something like this during system memory
initialization?

    memory_region_init_ram_ptr(my_mr, owner, "mybuf", buf_size, buf);
    /* buf is the buffer in QEMU's address space */
    memory_region_add_subregion(system_memory, GPA_OFFSET, my_mr);

If I give the guest 1GB of memory ("-m 1G"), can I place GPA_OFFSET beyond
1GB (e.g. at 2GB)? That way the guest OS won't treat my buffer as ordinary
RAM.
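
For completeness, the fuller shape of what I have in mind (a sketch only;
mybuf_init is my own name, buf is assumed page-aligned, and with "-m 1G" the
2GB mark lies outside guest RAM):

    static void mybuf_init(MemoryRegion *system_memory, Object *owner,
                           void *buf, uint64_t buf_size)
    {
        MemoryRegion *my_mr = g_new0(MemoryRegion, 1);

        memory_region_init_ram_ptr(my_mr, owner, "mybuf", buf_size, buf);
        /* Place it at 2GB, beyond guest RAM, so the guest OS won't
         * hand it out as ordinary memory. */
        memory_region_add_subregion(system_memory, 0x80000000ULL, my_mr);
    }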

Thanks!

Best,
Huaicheng

* Re: [Qemu-devel] About cpu_physical_memory_map()
  2018-06-01  8:00   ` Huaicheng Li
@ 2018-06-06  4:22     ` Huaicheng Li
  0 siblings, 0 replies; 4+ messages in thread
From: Huaicheng Li @ 2018-06-06  4:22 UTC (permalink / raw)
  To: Peter Maydell; +Cc: qemu-devel

Hi Peter,

Just a follow-up on my previous question: I figured it out by experimenting
with QEMU. Thank you again for your help! I really appreciate it.

Thank you!

Best,
Huaicheng
