On Wed, Sep 24, 2014 at 11:24 PM, Tamas K Lengyel
<tamas.lengyel@zentific.com> wrote:
>
> On Wed, Sep 24, 2014 at 10:52 PM, Julien Grall wrote:
>
>> Hello Tamas,
>>
>> On 24/09/2014 18:13, Tamas K Lengyel wrote:
>>
>>> On Wed, Sep 24, 2014 at 6:51 PM, Julien Grall wrote:
>>>
>>>     >     > +             && hypercall_preempt_check() )
>>>     >     > +        {
>>>     >     > +            rc = progress;
>>>     >     > +            goto out;
>>>     >
>>>     >     Jumping directly to the label "out" will skip flushing the TLB
>>>     >     for the domain. While it wasn't critical until now (partial
>>>     >     redo during insertion/allocation, or hypercall preemption only
>>>     >     for relinquish), the guest may use the wrong permission because
>>>     >     the TLB hasn't been flushed.
>>>     >
>>>     >     At the same time, it looks like you never request a flush for
>>>     >     the MEMACCESS operation (see *flush = true). Does memaccess do
>>>     >     a TLB flush somewhere else?
>>>     >
>>>     > Yes, at the end of p2m_set_mem_access once all PTEs are updated
>>>     > successfully. I guess we could flush the TLB as we are progressing
>>>     > as well, it wouldn't hurt.
>>>
>>>     We should flush the TLB as we are progressing because the guest may
>>>     technically continue to run...
>>>
>>> Hm, I think the guest is always paused while mem_access is being set via
>>> memop, but sure, it can't hurt.
>>
>> I didn't find any domain pause call in either the hypervisor or
>> xen-access.
>>
>> Unless the pause is done by the hypervisor via the same hypercall, it's
>> safer to flush the TLB if it's necessary.
>
> Ack, and you are right, I don't know why I thought it was paused.
>
>>>     This case made me also think about another possible issue.
>>>     Permissions are checked in raw_copy_{from,to}_guest_helper during
>>>     virtual address translation to a physical address.
>>>
>>>     As you modified the attributes in the P2M, the copy may fail because
>>>     of the lack of permission.
>>>
>>> I'm not entirely sure what you mean. Can you elaborate?
>>
>> Xen has a bunch of raw_copy_{from,to}_guest helper functions which copy
>> data from/to the guest.
>>
>> Since XSA-98, Xen checks that the guest effectively has the right to
>> read/write (depending on the helper) a specific mapping before copying
>> the data.
>
> Ah yes, I remember seeing that; it's passed through get_page_from_gva and
> uses the MMU to do the translation directly.
>
>> If the guest page doesn't have the right permission, the helper will
>> fail, and therefore so does the hypercall.
>>
>> When memaccess is used, a RAM page may have its permissions lowered.
>> When the helpers are called, the code that checks memory access on a
>> permission violation will never be invoked... and the guest will receive
>> a hypercall failure.
>>
>> This is not the right behavior; the hypercall should only fail if the
>> permissions are effectively wrong after the mem_access check has been
>> called.
>
> Ack. I guess the straightforward solution here would be to forward the
> read/write event to the mem_access listener in the inline gvirt_to_maddr
> function. For mem_access, however, I should really include both the
> gvaddr and gpaddr values, which means the translation would happen twice:
> once with gva_to_ipa (which uses the MMU without the flag-based
> permission checks to translate), forward to mem_access, then do the check
> with the flag-based permission check again if mem_access is clear. Would
> that be an acceptable approach in your opinion?
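Regarding the flush-on-preemption point above: I'll restructure the loop
so that every exit path flushes first. Roughly like this (just a sketch of
the shape, not the actual patch - apply_p2m_changes is real, but
modify_one_entry, the 256-entry preemption interval and the exact use of
progress/flush_tlb_domain are placeholders here):

static int apply_p2m_changes(struct domain *d, enum p2m_operation op,
                             paddr_t start_gpaddr, paddr_t end_gpaddr)
{
    int rc = 0;
    bool_t flush = 0;
    unsigned long progress = 0;
    paddr_t addr;

    for ( addr = start_gpaddr; addr < end_gpaddr; addr += PAGE_SIZE )
    {
        /* Modify one PTE; any change must request a flush, the
         * MEMACCESS operation included. */
        flush |= modify_one_entry(d, op, addr);
        progress++;

        /* Check for preemption periodically rather than on every entry. */
        if ( op == MEMACCESS && !(progress % 256) &&
             hypercall_preempt_check() )
        {
            rc = progress;      /* let the caller restart from here */
            goto out;
        }
    }

out:
    /* Flush before returning on every path, including preemption: the
     * guest keeps running and must not keep using stale, more
     * permissive TLB entries. */
    if ( flush )
        flush_tlb_domain(d);

    return rc;
}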
As for the copy helpers: since it is the hypervisor that's doing the
memory access here, we can just fix up the mem_access permissions and
forgo sending mem_events. This is absolutely not the best solution, and I
would prefer to include forwarding these events as well in the future,
but based on the recent discussion about a similar topic with Andres and
Andrew Cooper (http://www.gossamer-threads.com/lists/xen/devel/347276)
this would be consistent with how such accesses are handled on x86 as
well.

To actually let the mem_access listener decide whether these accesses
should go forward or fail would require significantly more
infrastructure: regular EPT/ARM memory accesses are automatically retried
after the mem_access listener unpauses the domain, but here, no such
luck - we would need to have the hypervisor wait for a reply (maybe with
a time-out limit) before it could go forward.
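Concretely, the fixup could look something like this (again only a
sketch: p2m_mem_access_check_and_get_page and the exact signatures of
gva_to_ipa, p2m_get_mem_access and GV2M_READ/GV2M_WRITE as used below are
my assumptions, not committed code):

/* Called when the MMU-based permission check in get_page_from_gva has
 * failed: decide whether the fault was caused by a mem_access
 * restriction, and if so fix it up by taking the page anyway. */
static struct page_info *p2m_mem_access_check_and_get_page(struct vcpu *v,
                                                           vaddr_t gva,
                                                           unsigned long flag)
{
    paddr_t ipa;
    xenmem_access_t xma;
    struct page_info *page;

    /* Translate without the flag-based permission checks. */
    if ( gva_to_ipa(gva, &ipa) < 0 )
        return NULL;

    /* Fetch the access type the listener requested for this page. */
    if ( p2m_get_mem_access(v->domain, paddr_to_pfn(ipa), &xma) < 0 )
        return NULL;

    switch ( xma )
    {
    case XENMEM_access_rwx:
    case XENMEM_access_rw:
        /* mem_access places no r/w restriction here, so the original
         * fault was genuine: fail the hypercall. */
        return NULL;
    case XENMEM_access_n:
    case XENMEM_access_n2rwx:
    case XENMEM_access_x:
        /* No r/w is permitted by mem_access at all: the fault was
         * certainly mem_access's doing, fix it up. */
        break;
    case XENMEM_access_wx:
    case XENMEM_access_w:
        /* Only reads are blocked by mem_access; a faulting write was
         * a genuine violation. */
        if ( flag != GV2M_READ )
            return NULL;
        break;
    case XENMEM_access_rx:
    case XENMEM_access_rx2rw:
    case XENMEM_access_r:
        /* Only writes are blocked by mem_access. */
        if ( flag != GV2M_WRITE )
            return NULL;
        break;
    default:
        return NULL;
    }

    /* The fault was mem_access's doing: take a reference and let the
     * copy proceed without sending a mem_event. */
    page = mfn_to_page(paddr_to_pfn(ipa));
    if ( !get_page(page, v->domain) )
        return NULL;

    return page;
}

This keeps the hypercall from failing spuriously while still failing it
when the permissions are genuinely wrong.

Tamas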