Hi,
On 04.02.19 13:21, Julien Grall wrote:
> What I meant is the virtual address stays the same but the guest physical address may change. I don't see how this could be broken today, can you explain it?
I suppose the guest's mapping change is not quite atomic from the hypervisor's point of view, so the domain could be caught in the middle of it.
>
>> Moreover, having that buffer mapped to XEN will reduce context switch time as a side effect.
>
> I am still unsure whether we really want to keep that always mapped.
>
> Each guest can support up to 128 vCPUs. So we would have 128 runstates mapped. Each runstate would take up to 2 pages. This means that each guest would require up to 1MB of vmap.
Here, allocating the buffer on the XEN side might help; even aligning the runstate so it fits into a single page might work. But I understand this is undesirable and would require a lot of changes.
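To illustrate the single-page idea, here is a minimal sketch (not actual Xen code) of the check involved: whether a runstate area at a given offset within a 4K page crosses a page boundary. The struct mirrors the fields in Xen's public/vcpu.h, but the exact layout here is illustrative; the helper name is made up for this example.

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE 4096

/* Field names as in Xen's public/vcpu.h; layout here is illustrative. */
struct vcpu_runstate_info {
    int      state;             /* current runstate (RUNSTATE_*) */
    uint64_t state_entry_time;  /* time of the last state transition */
    uint64_t time[4];           /* cumulative time spent in each runstate */
};

/* Returns 1 if a runstate area starting at 'offset' within a page
 * stays inside that page, so only one page would need mapping. */
static int runstate_fits_one_page(size_t offset)
{
    return offset + sizeof(struct vcpu_runstate_info) <= PAGE_SIZE;
}
```

A guest that page-aligns (or merely keeps the structure away from the page boundary) would always pass this check, so the hypervisor would need one mapping per vCPU instead of two.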
> I thought more about it during the week-end. I would actually not implement get_gfn but implement a function similar to get_page_from_gva on x86. The reason behind this is the function on Arm is quite complex as it caters many different use case.
I'll look into this. But I have to get my Yocto setup in order first.
--
Sincerely,
Andrii Anisov.
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel