* [PATCH] x86/hvm: finish IOREQ correctly on completion path
@ 2019-03-08 21:30 Igor Druzhinin
  2019-03-11 10:30 ` Paul Durrant
  0 siblings, 1 reply; 7+ messages in thread
From: Igor Druzhinin @ 2019-03-08 21:30 UTC (permalink / raw)
  To: xen-devel
  Cc: Igor Druzhinin, wei.liu2, andrew.cooper3, paul.durrant, jbeulich,
	roger.pau

Since the introduction of the linear_{read,write}() helpers in 3bdec530a5
("x86/HVM: split page straddling emulated accesses in more cases"), the
completion path for IOREQs has been broken: if there is an IOREQ in
progress but hvm_copy_{to,from}_guest_linear() returns HVMTRANS_okay
(e.g. when the P2M type of the source/destination has been changed by the
IOREQ handler), execution will never re-enter hvmemul_do_io(), where
IOREQs are completed. This usually results in a domain crash when the
next IOREQ enters hvmemul_do_io() and finds the remnants of the previous
IOREQ in the state machine.

This particular issue was discovered in relation to the p2m_ioreq_server
type, where an emulator changed the memory type between p2m_ioreq_server
and p2m_ram_rw while responding to an IOREQ, which made hvm_copy_..()
behave differently on the way back. The same applies to a case where
e.g. an emulator balloons memory to/from the guest in response to an
MMIO read/write, etc.

Fix it by checking whether IOREQ completion is required before trying to
finish the memory access immediately through hvm_copy_..(), and
re-entering hvmemul_do_io() otherwise.

Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
---
 xen/arch/x86/hvm/emulate.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 41aac28..36f8fee 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -1080,7 +1080,15 @@ static int linear_read(unsigned long addr, unsigned int bytes, void *p_data,
                        uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt)
 {
     pagefault_info_t pfinfo;
-    int rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
+    const struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
+    int rc = HVMTRANS_bad_gfn_to_mfn;
+
+    /*
+     * If the memory access can be handled immediately - do it,
+     * otherwise re-enter ioreq completion path to properly consume it.
+     */
+    if ( !hvm_ioreq_needs_completion(&vio->io_req) )
+        rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
 
     switch ( rc )
     {
@@ -1123,7 +1131,15 @@ static int linear_write(unsigned long addr, unsigned int bytes, void *p_data,
                         uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt)
 {
     pagefault_info_t pfinfo;
-    int rc = hvm_copy_to_guest_linear(addr, p_data, bytes, pfec, &pfinfo);
+    const struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
+    int rc = HVMTRANS_bad_gfn_to_mfn;
+
+    /*
+     * If the memory access can be handled immediately - do it,
+     * otherwise re-enter ioreq completion path to properly consume it.
+     */
+    if ( !hvm_ioreq_needs_completion(&vio->io_req) )
+        rc = hvm_copy_to_guest_linear(addr, p_data, bytes, pfec, &pfinfo);
 
     switch ( rc )
     {
-- 
2.7.4



* Re: [PATCH] x86/hvm: finish IOREQ correctly on completion path
  2019-03-08 21:30 [PATCH] x86/hvm: finish IOREQ correctly on completion path Igor Druzhinin
@ 2019-03-11 10:30 ` Paul Durrant
  2019-03-11 11:03   ` Jan Beulich
  0 siblings, 1 reply; 7+ messages in thread
From: Paul Durrant @ 2019-03-11 10:30 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Igor Druzhinin, Wei Liu, jbeulich, Roger Pau Monne

> -----Original Message-----
> From: Igor Druzhinin [mailto:igor.druzhinin@citrix.com]
> Sent: 08 March 2019 21:31
> To: xen-devel@lists.xenproject.org
> Cc: Paul Durrant <Paul.Durrant@citrix.com>; jbeulich@suse.com; Andrew Cooper
> <Andrew.Cooper3@citrix.com>; Wei Liu <wei.liu2@citrix.com>; Roger Pau Monne <roger.pau@citrix.com>;
> Igor Druzhinin <igor.druzhinin@citrix.com>
> Subject: [PATCH] x86/hvm: finish IOREQ correctly on completion path
> 
> Since the introduction of linear_{read,write}() helpers in 3bdec530a5
> (x86/HVM: split page straddling emulated accesses in more cases) the
> completion path for IOREQs has been broken: if there is an IOREQ in
> progress but hvm_copy_{to,from}_guest_linear() returns HVMTRANS_okay
> (e.g. when P2M type of source/destination has been changed by IOREQ
> handler) the execution will never re-enter hvmemul_do_io() where
> IOREQs are completed. This usually results in a domain crash upon
> the execution of the next IOREQ entering hvmemul_do_io() and finding
> the remnants of the previous IOREQ in the state machine.
> 
> This particular issue has been discovered in relation to p2m_ioreq_server
> type where an emulator changed the memory type between p2m_ioreq_server
> and p2m_ram_rw in process of responding to IOREQ which made hvm_copy_..()
> to behave differently on the way back. But could be also applied
> to a case where e.g. an emulator balloons memory to/from the guest in
> response to MMIO read/write, etc.
> 
> Fix it by checking if IOREQ completion is required before trying to
> finish a memory access immediately through hvm_copy_..(), re-enter
> hvmemul_do_io() otherwise.
> 
> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
> ---
>  xen/arch/x86/hvm/emulate.c | 20 ++++++++++++++++++--
>  1 file changed, 18 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> index 41aac28..36f8fee 100644
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -1080,7 +1080,15 @@ static int linear_read(unsigned long addr, unsigned int bytes, void *p_data,
>                         uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt)
>  {
>      pagefault_info_t pfinfo;
> -    int rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
> +    const struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
> +    int rc = HVMTRANS_bad_gfn_to_mfn;
> +
> +    /*
> +     * If the memory access can be handled immediately - do it,
> +     * otherwise re-enter ioreq completion path to properly consume it.
> +     */
> +    if ( !hvm_ioreq_needs_completion(&vio->io_req) )
> +        rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);

I think this is the right thing to do but can we change the text to something like:

"If there is pending ioreq then we must be re-issuing an access that was previously handed as MMIO. Thus it is imperative that we handle this access in the same way to guarantee completion and hence clean up any interim state."

  Paul

> 
>      switch ( rc )
>      {
> @@ -1123,7 +1131,15 @@ static int linear_write(unsigned long addr, unsigned int bytes, void *p_data,
>                          uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt)
>  {
>      pagefault_info_t pfinfo;
> -    int rc = hvm_copy_to_guest_linear(addr, p_data, bytes, pfec, &pfinfo);
> +    const struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
> +    int rc = HVMTRANS_bad_gfn_to_mfn;
> +
> +    /*
> +     * If the memory access can be handled immediately - do it,
> +     * otherwise re-enter ioreq completion path to properly consume it.
> +     */
> +    if ( !hvm_ioreq_needs_completion(&vio->io_req) )
> +        rc = hvm_copy_to_guest_linear(addr, p_data, bytes, pfec, &pfinfo);
> 
>      switch ( rc )
>      {
> --
> 2.7.4



* Re: [PATCH] x86/hvm: finish IOREQ correctly on completion path
  2019-03-11 10:30 ` Paul Durrant
@ 2019-03-11 11:03   ` Jan Beulich
  2019-03-11 11:09     ` Paul Durrant
  0 siblings, 1 reply; 7+ messages in thread
From: Jan Beulich @ 2019-03-11 11:03 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Andrew Cooper, Roger Pau Monne, Wei Liu, xen-devel, Igor Druzhinin

>>> On 11.03.19 at 11:30, <Paul.Durrant@citrix.com> wrote:
>> From: Igor Druzhinin [mailto:igor.druzhinin@citrix.com]
>> Sent: 08 March 2019 21:31
>> 
>> --- a/xen/arch/x86/hvm/emulate.c
>> +++ b/xen/arch/x86/hvm/emulate.c
>> @@ -1080,7 +1080,15 @@ static int linear_read(unsigned long addr, unsigned int bytes, void *p_data,
>>                         uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt)
>>  {
>>      pagefault_info_t pfinfo;
>> -    int rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
>> +    const struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
>> +    int rc = HVMTRANS_bad_gfn_to_mfn;
>> +
>> +    /*
>> +     * If the memory access can be handled immediately - do it,
>> +     * otherwise re-enter ioreq completion path to properly consume it.
>> +     */
>> +    if ( !hvm_ioreq_needs_completion(&vio->io_req) )
>> +        rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
> 
> I think this is the right thing to do

It's not, and that's why I had written that earlier explanation which
you then responded to saying that the issue needs to be dealt with
by enforcing a consistent view of MMIO (or not) during initial try and
replay. That's _not_ what the above does in the general case: It
simply forces _all_ accesses into the slow path, thus re-introducing
the problem of page straddling accesses not getting routed correctly.

Even worse, it forces _all_ memory accesses by the insn under
emulation into the MMIO path. While this would happen to be okay
for a PUSH from MMIO (because the read comes first, and hence the
write would no longer see a pending IOREQ), it's wrong for (among
many other cases) a POP to MMIO, as the read (from stack, i.e. RAM)
will be replayed first, while the IOREQ is still marked incomplete. I'd
expect this to trigger the very domain_crash() in hvmemul_do_io()
that was also triggering because of the P2M type change behind our
backs.
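
Schematically (illustration only, not actual code):

/*
 * PUSH <from MMIO>: 1) read from MMIO       -> IOREQ issued and completed
 *                   2) write to stack (RAM) -> no IOREQ pending any more,
 *                      so taking the hvm_copy_...() path on replay is fine
 *
 * POP <to MMIO>:    1) read from stack (RAM) -> replayed while the IOREQ
 *                      for the later MMIO write is still incomplete; with
 *                      the proposed check this RAM read would wrongly take
 *                      the MMIO path
 *                   2) write to MMIO         -> the access the pending
 *                      IOREQ actually belongs to
 */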

Together with Andrew's pointing out of the ballooning case as
another situation where machine state changes behind our backs
during insn emulation, I meanwhile think that we really need to
lock out machine state changes while insn emulation is in flight for
one of the guest's vCPU-s. As this may be rather bad for
performance, we'd need to strive to make the lock out window
as small as possible. Otoh my hope would be that (in the common
case) there aren't too frequent machine state changes in the first
place.

Jan




* Re: [PATCH] x86/hvm: finish IOREQ correctly on completion path
  2019-03-11 11:03   ` Jan Beulich
@ 2019-03-11 11:09     ` Paul Durrant
  2019-03-11 11:32       ` Jan Beulich
  0 siblings, 1 reply; 7+ messages in thread
From: Paul Durrant @ 2019-03-11 11:09 UTC (permalink / raw)
  To: 'Jan Beulich'
  Cc: Andrew Cooper, Roger Pau Monne, Wei Liu, xen-devel, Igor Druzhinin

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 11 March 2019 11:04
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Igor Druzhinin <igor.druzhinin@citrix.com>; Roger Pau
> Monne <roger.pau@citrix.com>; Wei Liu <wei.liu2@citrix.com>; xen-devel <xen-
> devel@lists.xenproject.org>
> Subject: RE: [PATCH] x86/hvm: finish IOREQ correctly on completion path
> 
> >>> On 11.03.19 at 11:30, <Paul.Durrant@citrix.com> wrote:
> >> From: Igor Druzhinin [mailto:igor.druzhinin@citrix.com]
> >> Sent: 08 March 2019 21:31
> >>
> >> --- a/xen/arch/x86/hvm/emulate.c
> >> +++ b/xen/arch/x86/hvm/emulate.c
> >> @@ -1080,7 +1080,15 @@ static int linear_read(unsigned long addr, unsigned int bytes, void *p_data,
> >>                         uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt)
> >>  {
> >>      pagefault_info_t pfinfo;
> >> -    int rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
> >> +    const struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
> >> +    int rc = HVMTRANS_bad_gfn_to_mfn;
> >> +
> >> +    /*
> >> +     * If the memory access can be handled immediately - do it,
> >> +     * otherwise re-enter ioreq completion path to properly consume it.
> >> +     */
> >> +    if ( !hvm_ioreq_needs_completion(&vio->io_req) )
> >> +        rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
> >
> > I think this is the right thing to do
> 
> It's not, and that's why I had written that earlier explanation which
> you then responded to saying that the issue needs to be dealt with
> by enforcing a consistent view of MMIO (or not) during initial try and
> replay. That's _not_ what the above does in the general case: It
> simply forces _all_ accesses into the slow path, thus re-introducing
> the problem of page straddling accesses not getting routed correctly.

Why? If there is no pending ioreq then why would the call to hvm_copy_from_guest_linear() not happen? AFAICT vio->io_req will only be updated when hvmemul_do_io() issues a new ioreq, so the test appears correct. How is it that _all_ accesses fail this test?

  Paul

> 
> Even worse, it forces _all_ memory accesses by the insn under
> emulation into the MMIO path. While this would happen to be okay
> for a PUSH from MMIO (because the read comes first, and hence the
> write would no longer see a pending IOREQ), it's wrong for (among
> many other cases) a POP to MMIO, as the read (from stack, i.e. RAM)
> will be replayed first, while the IOREQ is still marked incomplete. I'd
> expect this to trigger the very domain_crash() in hvmemul_do_io()
> that was also triggering because of the P2M type change behind our
> backs.
> 
> Together with Andrew's pointing out of the ballooning case as
> another situation where machine state changes behind our backs
> during insn emulation, I meanwhile think that we really need to
> lock out machine state changes while insn emulation is in flight for
> one of the guest's vCPU-s. As this may be rather bad for
> performance, we'd need to strive to make the lock out window
> as small as possible. Otoh my hope would be that (in the common
> case) there aren't too frequent machine state changes in the first
> place.
> 
> Jan
> 



* Re: [PATCH] x86/hvm: finish IOREQ correctly on completion path
  2019-03-11 11:09     ` Paul Durrant
@ 2019-03-11 11:32       ` Jan Beulich
  2019-03-11 11:39         ` Paul Durrant
  0 siblings, 1 reply; 7+ messages in thread
From: Jan Beulich @ 2019-03-11 11:32 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Andrew Cooper, Roger Pau Monne, Wei Liu, xen-devel, Igor Druzhinin

>>> On 11.03.19 at 12:09, <Paul.Durrant@citrix.com> wrote:
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 11 March 2019 11:04
>> 
>> >>> On 11.03.19 at 11:30, <Paul.Durrant@citrix.com> wrote:
>> >> From: Igor Druzhinin [mailto:igor.druzhinin@citrix.com]
>> >> Sent: 08 March 2019 21:31
>> >>
>> >> --- a/xen/arch/x86/hvm/emulate.c
>> >> +++ b/xen/arch/x86/hvm/emulate.c
>> >> @@ -1080,7 +1080,15 @@ static int linear_read(unsigned long addr, unsigned int bytes, void *p_data,
>> >>                         uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt)
>> >>  {
>> >>      pagefault_info_t pfinfo;
>> >> -    int rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
>> >> +    const struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
>> >> +    int rc = HVMTRANS_bad_gfn_to_mfn;
>> >> +
>> >> +    /*
>> >> +     * If the memory access can be handled immediately - do it,
>> >> +     * otherwise re-enter ioreq completion path to properly consume it.
>> >> +     */
>> >> +    if ( !hvm_ioreq_needs_completion(&vio->io_req) )
>> >> +        rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
>> >
>> > I think this is the right thing to do
>> 
>> It's not, and that's why I had written that earlier explanation which
>> you then responded to saying that the issue needs to be dealt with
>> by enforcing a consistent view of MMIO (or not) during initial try and
>> replay. That's _not_ what the above does in the general case: It
>> simply forces _all_ accesses into the slow path, thus re-introducing
>> the problem of page straddling accesses not getting routed correctly.
> 
> Why? If there is no pending ioreq then why would the call to 
> hvm_copy_from_guest_linear() not happen? AFAICT vio->io_req will only be 
> updated when hvmemul_do_io() issues a new ioreq, so the test appears correct. 
> How is that _all_ access fail this test?

"All" was too heavy, as per this discussion:

>> Even worse, it forces _all_ memory accesses by the insn under
>> emulation into the MMIO path. While this would happen to be okay
>> for a PUSH from MMIO (because the read comes first, and hence the
>> write would no longer see a pending IOREQ), it's wrong for (among
>> many other cases) a POP to MMIO, as the read (from stack, i.e. RAM)
>> will be replayed first, while the IOREQ is still marked incomplete. I'd
>> expect this to trigger the very domain_crash() in hvmemul_do_io()
>> that was also triggering because of the P2M type change behind our
>> backs.

I should have said "all accesses preceding the one really accessing
MMIO". Using the provided example of POP, the linear_read() invocation
during replay (to read the stack) will find a pending IOREQ, and wrongly
go down the MMIO path. In this example that would be correct only for
linear_write() to do. So the suggested change is correct only for insns
accessing no more than one memory location (if there's no memory access
then of course we won't make it here in the first place).

Jan




* Re: [PATCH] x86/hvm: finish IOREQ correctly on completion path
  2019-03-11 11:32       ` Jan Beulich
@ 2019-03-11 11:39         ` Paul Durrant
  2019-03-11 13:58           ` Jan Beulich
  0 siblings, 1 reply; 7+ messages in thread
From: Paul Durrant @ 2019-03-11 11:39 UTC (permalink / raw)
  To: 'Jan Beulich'
  Cc: Andrew Cooper, Roger Pau Monne, Wei Liu, xen-devel, Igor Druzhinin

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 11 March 2019 11:33
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Igor Druzhinin <igor.druzhinin@citrix.com>; Roger Pau
> Monne <roger.pau@citrix.com>; Wei Liu <wei.liu2@citrix.com>; xen-devel <xen-
> devel@lists.xenproject.org>
> Subject: RE: [PATCH] x86/hvm: finish IOREQ correctly on completion path
> 
> >>> On 11.03.19 at 12:09, <Paul.Durrant@citrix.com> wrote:
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: 11 March 2019 11:04
> >>
> >> >>> On 11.03.19 at 11:30, <Paul.Durrant@citrix.com> wrote:
> >> >> From: Igor Druzhinin [mailto:igor.druzhinin@citrix.com]
> >> >> Sent: 08 March 2019 21:31
> >> >>
> >> >> --- a/xen/arch/x86/hvm/emulate.c
> >> >> +++ b/xen/arch/x86/hvm/emulate.c
> >> >> @@ -1080,7 +1080,15 @@ static int linear_read(unsigned long addr, unsigned int bytes, void
> *p_data,
> >> >>                         uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt)
> >> >>  {
> >> >>      pagefault_info_t pfinfo;
> >> >> -    int rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
> >> >> +    const struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
> >> >> +    int rc = HVMTRANS_bad_gfn_to_mfn;
> >> >> +
> >> >> +    /*
> >> >> +     * If the memory access can be handled immediately - do it,
> >> >> +     * otherwise re-enter ioreq completion path to properly consume it.
> >> >> +     */
> >> >> +    if ( !hvm_ioreq_needs_completion(&vio->io_req) )
> >> >> +        rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
> >> >
> >> > I think this is the right thing to do
> >>
> >> It's not, and that's why I had written that earlier explanation which
> >> you then responded to saying that the issue needs to be dealt with
> >> by enforcing a consistent view of MMIO (or not) during initial try and
> >> replay. That's _not_ what the above does in the general case: It
> >> simply forces _all_ accesses into the slow path, thus re-introducing
> >> the problem of page straddling accesses not getting routed correctly.
> >
> > Why? If there is no pending ioreq then why would the call to
> > hvm_copy_from_guest_linear() not happen? AFAICT vio->io_req will only be
> > updated when hvmemul_do_io() issues a new ioreq, so the test appears correct.
> > How is that _all_ access fail this test?
> 
> "All" was too heavy, as per this discussion:
> 
> >> Even worse, it forces _all_ memory accesses by the insn under
> >> emulation into the MMIO path. While this would happen to be okay
> >> for a PUSH from MMIO (because the read comes first, and hence the
> >> write would no longer see a pending IOREQ), it's wrong for (among
> >> many other cases) a POP to MMIO, as the read (from stack, i.e. RAM)
> >> will be replayed first, while the IOREQ is still marked incomplete. I'd
> >> expect this to trigger the very domain_crash() in hvmemul_do_io()
> >> that was also triggering because of the P2M type change behind our
> >> backs.
> 
> I should have said "all accesses preceding the one really accessing
> MMIO". Using the provided example of POP, the linear_read() invocation
> during replay (to read the stack) will find a pending IOREQ, and wrongly
> go the MMIO path. This would, in this example, be correct only for
> linear_write() to do. So the suggested change is correct only for any
> insn accessing no more than one memory location (if there's no memory
> access then of course we won't make it here in the first place).

Ok, thanks for the clarification. So, the problem is that memory accesses are not stashed (understandably I guess) in the mmio_cache. If they were then forcing the code down the MMIO path would work. So, what we probably need is a cache of all accesses that an instruction has made to date, regardless of whether they hit RAM or I/O emulation.
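
Something along these lines, purely as a sketch (names invented for
illustration, not the existing mmio_cache layout):

struct hvmemul_access_entry {
    unsigned long gla;     /* linear address of the access */
    unsigned int  bytes;   /* size of the access */
    uint8_t       data[8]; /* value read/written on the first pass */
    bool          mmio;    /* satisfied via an IOREQ rather than RAM? */
};

On replay, linear_{read,write}() would first look the access up in such a
cache and return the recorded result, rather than deciding afresh between
hvm_copy_...() and the MMIO path.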

  Paul

> 
> Jan
> 



* Re: [PATCH] x86/hvm: finish IOREQ correctly on completion path
  2019-03-11 11:39         ` Paul Durrant
@ 2019-03-11 13:58           ` Jan Beulich
  0 siblings, 0 replies; 7+ messages in thread
From: Jan Beulich @ 2019-03-11 13:58 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Andrew Cooper, Roger Pau Monne, Wei Liu, xen-devel, Igor Druzhinin

>>> On 11.03.19 at 12:39, <Paul.Durrant@citrix.com> wrote:
> Ok, thanks for the clarification. So, the problem is that memory accesses 
> are not stashed (understandably I guess) in the mmio_cache. If they were then 
> forcing the code down the MMIO path would work. So, what we probably need is 
> a cache of all accesses that an instruction has made to date, regardless of 
> whether they hit RAM or I/O emulation.

Right, and that's the plan I have anyway when reworking my series
"x86/HVM: implement memory read caching" hopefully more to
Andrew's liking. But this is going to take some time, and hence the
hope was to have some short term fix addressing at least the specific
p2m_ioreq_server issue at hand.

Jan



