* XenGT is still regressed on master
From: Igor Druzhinin @ 2019-03-07 12:46 UTC
  To: xen-devel, Jan Beulich; +Cc: Andrew Cooper, Paul Durrant

Jan,

We've noticed that there is still a regression with the p2m_ioreq_server
P2M type on master. Since commit 940faf0279 (x86/HVM: split page
straddling emulated accesses in more cases) the behavior of write and
RMW instruction emulation has changed (possibly unintentionally): on
IOREQ completion the emulator might not re-enter hvmemul_do_io(), which
it must do to avoid breaking the IOREQ state machine. What we're seeing
instead is a domain crash here:

static int hvmemul_do_io(
    bool_t is_mmio, paddr_t addr, unsigned long *reps, unsigned int
...
    case STATE_IORESP_READY:
        vio->io_req.state = STATE_IOREQ_NONE;
        p = vio->io_req;

        /* Verify the emulation request has been correctly re-issued */
        if ( (p.type != (is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO)) ||
             (p.addr != addr) ||
             (p.size != size) ||
             (p.count > *reps) ||
             (p.dir != dir) ||
             (p.df != df) ||
             (p.data_is_ptr != data_is_addr) ||
             (data_is_addr && (p.data != data)) )
            domain_crash(currd);

This happens while processing the next IOREQ, after the previous one
wasn't completed properly because the p2m type was changed in the
IOREQ handler by the XenGT kernel module. So emulation hit the
HVMTRANS_okay case in the linear_write() helper on the way back and
didn't re-enter hvmemul_do_io().

The bug can be mitigated by the following patch, but since you
introduced this helper you might have better ideas on how to avoid
the problem in a clean way here.

--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -1139,13 +1139,11 @@ static int linear_write(unsigned long addr, unsigned int bytes, void *p_data,
     {
         unsigned int offset, part1;

-    case HVMTRANS_okay:
-        return X86EMUL_OKAY;
-
     case HVMTRANS_bad_linear_to_gfn:
         x86_emul_pagefault(pfinfo.ec, pfinfo.linear, &hvmemul_ctxt->ctxt);
         return X86EMUL_EXCEPTION;

+    case HVMTRANS_okay:
     case HVMTRANS_bad_gfn_to_mfn:
         offset = addr & ~PAGE_MASK;
         if ( offset + bytes <= PAGE_SIZE )

Igor


* Re: XenGT is still regressed on master
From: Jan Beulich @ 2019-03-08 11:55 UTC
  To: Igor Druzhinin; +Cc: Juergen Gross, Andrew Cooper, Paul Durrant, xen-devel

>>> On 07.03.19 at 13:46, <igor.druzhinin@citrix.com> wrote:
> We've noticed that there is still a regression with the p2m_ioreq_server
> P2M type on master. Since commit 940faf0279 (x86/HVM: split page

Afaics it's 3bdec530a5. I'm also slightly confused by the use of "still":
Iirc so far I've had only an informal indication of the problem, without
any details, from Andrew.

Also Cc-ing Jürgen so he's explicitly aware of the regression, though
I assume he has noticed the report already anyway.

> straddling emulated accesses in more cases) the behavior of write and
> RMW instruction emulation has changed (possibly unintentionally): on
> IOREQ completion the emulator might not re-enter hvmemul_do_io(), which
> it must do to avoid breaking the IOREQ state machine. What we're seeing
> instead is a domain crash here:
> 
> static int hvmemul_do_io(
>     bool_t is_mmio, paddr_t addr, unsigned long *reps, unsigned int
> ...
>     case STATE_IORESP_READY:
>         vio->io_req.state = STATE_IOREQ_NONE;
>         p = vio->io_req;
> 
>         /* Verify the emulation request has been correctly re-issued */
>         if ( (p.type != (is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO)) ||
>              (p.addr != addr) ||
>              (p.size != size) ||
>              (p.count > *reps) ||
>              (p.dir != dir) ||
>              (p.df != df) ||
>              (p.data_is_ptr != data_is_addr) ||
>              (data_is_addr && (p.data != data)) )
>             domain_crash(currd);
> 
> This happens while processing the next IOREQ, after the previous one
> wasn't completed properly because the p2m type was changed in the
> IOREQ handler by the XenGT kernel module. So emulation hit the
> HVMTRANS_okay case in the linear_write() helper on the way back and
> didn't re-enter hvmemul_do_io().

Am I to take this to mean that the first time round we take the
HVMTRANS_bad_gfn_to_mfn exit from __hvm_copy() due to finding
p2m_ioreq_server, but in the course of processing the request the
page's type gets changed and hence we don't take that same path
the second time? If so, my first reaction is to blame the kernel
module: Machine state (of the VM) may not change while processing
a write, other than to carry out the _direct_ effects of the write. I
don't think a p2m type change is supposed to be occurring as a side
effect.

> The bug can be mitigated by the following patch, but since you
> introduced this helper you might have better ideas on how to avoid
> the problem in a clean way here.
> 
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -1139,13 +1139,11 @@ static int linear_write(unsigned long addr, unsigned int bytes, void *p_data,
>      {
>          unsigned int offset, part1;
> 
> -    case HVMTRANS_okay:
> -        return X86EMUL_OKAY;
> -
>      case HVMTRANS_bad_linear_to_gfn:
>          x86_emul_pagefault(pfinfo.ec, pfinfo.linear, &hvmemul_ctxt->ctxt);
>          return X86EMUL_EXCEPTION;
> 
> +    case HVMTRANS_okay:
>      case HVMTRANS_bad_gfn_to_mfn:
>          offset = addr & ~PAGE_MASK;
>          if ( offset + bytes <= PAGE_SIZE )

This is (I'm inclined to say "of course") not an appropriate change in
the general case: Getting back HVMTRANS_okay means the write
was carried out, and hence it shouldn't be carried out a second time.

I take it that changing the kernel driver would at best be sub-optimal
though, so a hypervisor-only fix would be better.

Assuming the p2m type change arrives via XEN_DMOP_set_mem_type,
I think what we need to do is delay the actual change until no ioreq
is pending anymore, kind of like the VM event subsystem delays
certain CR and MSR writes until VM entry time. In this situation we'd
then further have to restrict the number of such pending changes,
because we need to record the request(s) somewhere. Question is
how many of such bufferable requests would be enough: Unless we
wanted to enforce that only ordinary aligned writes (and rmw-s) can
get us into this state, apart from writes crossing page boundaries we
may need to be able to cope with AVX512's scatter insns.
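
To sketch the shape of this (all names here are invented for
illustration, nothing like this exists in the tree):

/*
 * Hypothetical per-domain buffer of p2m type changes which arrived
 * via XEN_DMOP_set_mem_type while an ioreq was still in flight.
 * Entries would get applied through the normal p2m_change_type_one()
 * path once no ioreq is pending anymore, e.g. at VM entry time.
 */
#define NR_DEFERRED_TYPE_CHANGES 4 /* "how many is enough" is the question */

struct deferred_type_change {
    gfn_t      gfn;
    p2m_type_t nt;
};

static int defer_set_mem_type(struct domain *d, gfn_t gfn, p2m_type_t nt)
{
    if ( d->arch.nr_deferred_tc == NR_DEFERRED_TYPE_CHANGES )
        return -EBUSY; /* or stall the DMOP until the buffer drains */

    d->arch.deferred_tc[d->arch.nr_deferred_tc++] =
        (struct deferred_type_change){ .gfn = gfn, .nt = nt };

    return 0;
}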

The other alternative I can think of would be to record the specific
failure case (finding p2m_ioreq_server) in the first pass, and act upon
it (instead of your mitigation above) during the retry. At first glance
it would seem more cumbersome to do the recording at that
layer though.
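
Roughly, for this second variant (again only a sketch, with an invented
flag, and argument lists from memory):

static int linear_write(unsigned long addr, unsigned int bytes,
                        void *p_data, uint32_t pfec,
                        struct hvm_emulate_ctxt *hvmemul_ctxt)
{
    struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;

    /*
     * The first pass found p2m_ioreq_server and recorded that fact
     * (invented field - it would be set next to where __hvm_copy()
     * finds the type, and cleared when the ioreq completes).  On
     * re-execution, bypass hvm_copy_to_guest_linear() and take the
     * MMIO path again, no matter what the p2m says now.
     */
    if ( vio->mmio_saw_ioreq_server )
        return hvmemul_linear_mmio_write(addr, bytes, p_data, pfec,
                                         hvmemul_ctxt, false);
    ...
}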

Let me know what you think, and also whether you want me to try to
prototype something, or whether you want to give it a shot.

Jan



* Re: XenGT is still regressed on master
From: Andrew Cooper @ 2019-03-08 13:37 UTC
  To: Jan Beulich, Igor Druzhinin; +Cc: Juergen Gross, xen-devel, Paul Durrant

On 08/03/2019 11:55, Jan Beulich wrote:
>>>> On 07.03.19 at 13:46, <igor.druzhinin@citrix.com> wrote:
>> We've noticed that there is still a regression with the p2m_ioreq_server
>> P2M type on master. Since commit 940faf0279 (x86/HVM: split page
> Afaics it's 3bdec530a5. I'm also slightly confused by the use of "still":
> Iirc so far I've had only an informal indication of the problem, without
> any details, from Andrew.

I believe the "still" refers to the identified changeset breaking GVT-g,
and your followup series "[PATCH 0/3] x86/HVM: honor r/o p2m types, in
particular during emulation", while making things better, didn't
actually fix the originally reported bug.

I personally wouldn't have phrased the sentence quite like that, but it
is a correct statement.

> Also Cc-ing Jürgen to be explicitly aware of the regression, albeit I
> assume he has noticed the report already anyway.

Sadly, I suspect it is too late and too complicated to fix for 4.12
at this point.

>
>> straddling emulated accesses in more cases) the behavior of write and
>> RMW instruction emulation has changed (possibly unintentionally): on
>> IOREQ completion the emulator might not re-enter hvmemul_do_io(), which
>> it must do to avoid breaking the IOREQ state machine. What we're seeing
>> instead is a domain crash here:
>>
>> static int hvmemul_do_io(
>>     bool_t is_mmio, paddr_t addr, unsigned long *reps, unsigned int
>> ...
>>     case STATE_IORESP_READY:
>>         vio->io_req.state = STATE_IOREQ_NONE;
>>         p = vio->io_req;
>>
>>         /* Verify the emulation request has been correctly re-issued */
>>         if ( (p.type != (is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO)) ||
>>              (p.addr != addr) ||
>>              (p.size != size) ||
>>              (p.count > *reps) ||
>>              (p.dir != dir) ||
>>              (p.df != df) ||
>>              (p.data_is_ptr != data_is_addr) ||
>>              (data_is_addr && (p.data != data)) )
>>             domain_crash(currd);
>>
>> This happens while processing the next IOREQ, after the previous one
>> wasn't completed properly because the p2m type was changed in the
>> IOREQ handler by the XenGT kernel module. So emulation hit the
>> HVMTRANS_okay case in the linear_write() helper on the way back and
>> didn't re-enter hvmemul_do_io().
> Am I to take this to mean that the first time round we take the
> HVMTRANS_bad_gfn_to_mfn exit from __hvm_copy() due to finding
> p2m_ioreq_server, but in the course of processing the request the
> page's type gets changed and hence we don't take that same path
> the second time?

I believe so, yes.

> If so, my first reaction is to blame the kernel
> module: Machine state (of the VM) may not change while processing
> a write, other than to carry out the _direct_ effects of the write. I
> don't think a p2m type change is supposed to be occurring as a side
> effect.

This is an especially unhelpful point of view (and unreasonable IMO),
as you pushed for this interface over the alternatives which were
proposed originally.

Responding to an emulation request necessarily involves making state
changes in the VM.  When the state change in question is around the
tracking of shadow pagetables, the change is non-negotiable as far as
the higher level functionality is concerned.

>> The bug can be mitigated by the following patch, but since you
>> introduced this helper you might have better ideas on how to avoid
>> the problem in a clean way here.
>>
>> --- a/xen/arch/x86/hvm/emulate.c
>> +++ b/xen/arch/x86/hvm/emulate.c
>> @@ -1139,13 +1139,11 @@ static int linear_write(unsigned long addr, unsigned int bytes, void *p_data,
>>      {
>>          unsigned int offset, part1;
>>
>> -    case HVMTRANS_okay:
>> -        return X86EMUL_OKAY;
>> -
>>      case HVMTRANS_bad_linear_to_gfn:
>>          x86_emul_pagefault(pfinfo.ec, pfinfo.linear, &hvmemul_ctxt->ctxt);
>>          return X86EMUL_EXCEPTION;
>>
>> +    case HVMTRANS_okay:
>>      case HVMTRANS_bad_gfn_to_mfn:
>>          offset = addr & ~PAGE_MASK;
>>          if ( offset + bytes <= PAGE_SIZE )
> This is (I'm inclined to say "of course") not an appropriate change in
> the general case: Getting back HVMTRANS_okay means the write
> was carried out, and hence it shouldn't be carried out a second time.

I agree - this isn't a viable fix but it does help to pinpoint the problem.

> I take it that changing the kernel driver would at best be sub-optimal
> though, so a hypervisor-only fix would be better.

This problem isn't specific to p2m_ioreq_server.  A guest which balloons
in a frame which is currently the target of a pending MMIO emulation
will hit the same issue.

This is a general problem with the ioreq response state machine
handling.  My longterm plans for emulation changes would fix this, but
they definitely aren't a viable short term fix.

The only viable fix I see in the short term is to mark the ioreq
response as done before re-entering the emulation model, so that in the
cases where we do take a different path a stale ioreq isn't left in
place. I fully admit that I haven't spent too long thinking through the
implications of this, or whether it is possible in practice.

~Andrew


* Re: XenGT is still regressed on master
From: Igor Druzhinin @ 2019-03-08 14:25 UTC
  To: Jan Beulich; +Cc: Juergen Gross, Andrew Cooper, Paul Durrant, xen-devel

On 08/03/2019 11:55, Jan Beulich wrote:
> If so, my first reaction is to blame the kernel module:
> Machine state (of the VM) may not change while processing
> a write, other than to carry out the _direct_ effects of the write. I
> don't think a p2m type change is supposed to be occurring as a side
> effect.

I disagree. Anything may change while an IOREQ is being processed, and
it's the hypervisor that shouldn't make any assumptions about the
machine state before and after the request is sent.

>> The bug can be mitigated by the following patch, but since you
>> introduced this helper you might have better ideas on how to avoid
>> the problem in a clean way here.
>>
>> --- a/xen/arch/x86/hvm/emulate.c
>> +++ b/xen/arch/x86/hvm/emulate.c
>> @@ -1139,13 +1139,11 @@ static int linear_write(unsigned long addr, unsigned int bytes, void *p_data,
>>      {
>>          unsigned int offset, part1;
>>
>> -    case HVMTRANS_okay:
>> -        return X86EMUL_OKAY;
>> -
>>      case HVMTRANS_bad_linear_to_gfn:
>>          x86_emul_pagefault(pfinfo.ec, pfinfo.linear, &hvmemul_ctxt->ctxt);
>>          return X86EMUL_EXCEPTION;
>>
>> +    case HVMTRANS_okay:
>>      case HVMTRANS_bad_gfn_to_mfn:
>>          offset = addr & ~PAGE_MASK;
>>          if ( offset + bytes <= PAGE_SIZE )
> 
> This is (I'm inclined to say "of course") not an appropriate change in
> the general case: Getting back HVMTRANS_okay means the write
> was carried out, and hence it shouldn't be carried out a second time.
> 
> I take it that changing the kernel driver would at best be sub-optimal
> though, so a hypervisor-only fix would be better.

Yes, changing the kernel module is very undesirable for many reasons.

> Assuming the p2m type change arrives via XEN_DMOP_set_mem_type,
> I think what we need to do is delay the actual change until no ioreq
> is pending anymore, kind of like the VM event subsystem delays
> certain CR and MSR writes until VM entry time. In this situation we'd
> then further have to restrict the number of such pending changes,
> because we need to record the request(s) somewhere. Question is
> how many of such bufferable requests would be enough: Unless we
> wanted to enforce that only ordinary aligned writes (and rmw-s) can
> get us into this state, apart from writes crossing page boundaries we
> may need to be able to cope with AVX512's scatter insns.
> 
> The other alternative I can think of would be to record the specific
> failure case (finding p2m_ioreq_server) in the first pass, and act upon
> it (instead of your mitigation above) during the retry. At first glance
> it would seem more cumbersome to do the recording at that
> layer though.

I like the latter suggestion more. It seems less invasive and less
prone to regressions. I'd like to try to implement it, although I
think the hypervisor check should be more general: if an IOREQ is in
progress, don't go through the fast path and instead re-enter the
IOREQ completion path.

What if we just check !hvm_ioreq_needs_completion() before returning
X86EMUL_OKAY, i.e. fall through to the bad_gfn_to_mfn case if that
check fails, as Paul suggested?
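
I.e. something like this in linear_write()'s switch (sketch only,
untested):

    switch ( rc = hvm_copy_to_guest_linear(addr, p_data, bytes, pfec,
                                           &pfinfo) )
    {
    case HVMTRANS_okay:
        /*
         * An IOREQ awaiting completion means the copy above can't
         * have been the real access: fall through to the MMIO path
         * so that hvmemul_do_io() gets re-entered and the state
         * machine can finish.
         */
        if ( !hvm_ioreq_needs_completion(&vio->io_req) )
            return X86EMUL_OKAY;
        /* fall through */
    case HVMTRANS_bad_gfn_to_mfn:
        offset = addr & ~PAGE_MASK;
        ...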

Igor


* Re: XenGT is still regressed on master
From: Jan Beulich @ 2019-03-08 14:49 UTC
  To: Andrew Cooper; +Cc: Juergen Gross, Igor Druzhinin, Paul Durrant, xen-devel

>>> On 08.03.19 at 14:37, <andrew.cooper3@citrix.com> wrote:
> On 08/03/2019 11:55, Jan Beulich wrote:
>>>>> On 07.03.19 at 13:46, <igor.druzhinin@citrix.com> wrote:
>>> We've noticed that there is still a regression with the p2m_ioreq_server
>>> P2M type on master. Since commit 940faf0279 (x86/HVM: split page
>>> straddling emulated accesses in more cases) the behavior of write and
>>> RMW instruction emulation has changed (possibly unintentionally): on
>>> IOREQ completion the emulator might not re-enter hvmemul_do_io(), which
>>> it must do to avoid breaking the IOREQ state machine. What we're seeing
>>> instead is a domain crash here:
>>>
>>> static int hvmemul_do_io(
>>>     bool_t is_mmio, paddr_t addr, unsigned long *reps, unsigned int
>>> ...
>>>     case STATE_IORESP_READY:
>>>         vio->io_req.state = STATE_IOREQ_NONE;
>>>         p = vio->io_req;
>>>
>>>         /* Verify the emulation request has been correctly re-issued */
>>>         if ( (p.type != (is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO)) ||
>>>              (p.addr != addr) ||
>>>              (p.size != size) ||
>>>              (p.count > *reps) ||
>>>              (p.dir != dir) ||
>>>              (p.df != df) ||
>>>              (p.data_is_ptr != data_is_addr) ||
>>>              (data_is_addr && (p.data != data)) )
>>>             domain_crash(currd);
>>>
>>> This happens while processing the next IOREQ, after the previous one
>>> wasn't completed properly because the p2m type was changed in the
>>> IOREQ handler by the XenGT kernel module. So emulation hit the
>>> HVMTRANS_okay case in the linear_write() helper on the way back and
>>> didn't re-enter hvmemul_do_io().
>> Am I to take this to mean that the first time round we take the
>> HVMTRANS_bad_gfn_to_mfn exit from __hvm_copy() due to finding
>> p2m_ioreq_server, but in the course of processing the request the
>> page's type gets changed and hence we don't take that same path
>> the second time?
> 
> I believe so, yes.
> 
>> If so, my first reaction is to blame the kernel
>> module: Machine state (of the VM) may not change while processing
>> a write, other than to carry out the _direct_ effects of the write. I
>> don't think a p2m type change is supposed to be occurring as a side
>> effect.
> 
> This is an especially unhelpful point of view (and unreasonable IMO),
> as you pushed for this interface over the alternatives which were
> proposed originally.

I have to admit that I don't recall any details of that discussion, and
hence also whether all the implications (including the behind-our-
back change of p2m type) were actually both understood and put
on the table. Nor do I recall what the alternatives were.

> Responding to an emulation request necessarily involves making state
> changes in the VM.  When the state change in question is around the
> tracking of shadow pagetables, the change is non-negotiable as far as
> the higher level functionality is concerned.

Bare hardware doesn't know anything like p2m types, so whether a
change like this is acceptable (or even necessary) is questionable at
least. The "physical" properties of memory, after all, don't normally
change at all while a system is up. We're bending the rules anyway.

>>> The bug can be mitigated by the following patch, but since you
>>> introduced this helper you might have better ideas on how to avoid
>>> the problem in a clean way here.
>>>
>>> --- a/xen/arch/x86/hvm/emulate.c
>>> +++ b/xen/arch/x86/hvm/emulate.c
>>> @@ -1139,13 +1139,11 @@ static int linear_write(unsigned long addr, unsigned int bytes, void *p_data,
>>>      {
>>>          unsigned int offset, part1;
>>>
>>> -    case HVMTRANS_okay:
>>> -        return X86EMUL_OKAY;
>>> -
>>>      case HVMTRANS_bad_linear_to_gfn:
>>>          x86_emul_pagefault(pfinfo.ec, pfinfo.linear, &hvmemul_ctxt->ctxt);
>>>          return X86EMUL_EXCEPTION;
>>>
>>> +    case HVMTRANS_okay:
>>>      case HVMTRANS_bad_gfn_to_mfn:
>>>          offset = addr & ~PAGE_MASK;
>>>          if ( offset + bytes <= PAGE_SIZE )
>> This is (I'm inclined to say "of course") not an appropriate change in
>> the general case: Getting back HVMTRANS_okay means the write
>> was carried out, and hence it shouldn't be carried out a second time.
> 
> I agree - this isn't a viable fix but it does help to pinpoint the problem.
> 
>> I take it that changing the kernel driver would at best be sub-optimal
>> though, so a hypervisor-only fix would be better.
> 
> This problem isn't specific to p2m_ioreq_server.  A guest which balloons
> in a frame which is currently the target of a pending MMIO emulation
> will hit the same issue.

Hmm, good point.

> This is a general problem with the ioreq response state machine
> handling.  My longterm plans for emulation changes would fix this, but
> they definitely aren't a viable short term fix.
> 
> The only viable fix I see in the short term is to mark the ioreq
> response as done before re-entering the emulation model, so that in the
> cases where we do take a different path a stale ioreq isn't left in
> place. I fully admit that I haven't spent too long thinking through the
> implications of this, or whether it is possible in practice.

I can't see how this would help without also the buffering patches
of mine that you want me to re-write basically from scratch:
Re-execution may occur not only once, but multiple times, because
memory accesses wider than 8 bytes can't be sent to qemu. Our
view of the (virtual) machine has to remain consistent throughout
the emulation of a single insn, to guarantee that the same paths
get taken for every re-execution run (or more precisely the initial
parts of it that had been executed at least once already). Any
change we make (as requested by the guest from another vCPU
or by the controlling domain) has to leave unaffected any in-flight
insn emulation.

By doing what you suggest, you'd paper over one aspect of the
problem, but we'd be liable to find other similar issues down
the road.

Jan



* Re: XenGT is still regressed on master
From: Jan Beulich @ 2019-03-08 14:58 UTC
  To: Igor Druzhinin; +Cc: Juergen Gross, Andrew Cooper, Paul Durrant, xen-devel

>>> On 08.03.19 at 15:25, <igor.druzhinin@citrix.com> wrote:
> On 08/03/2019 11:55, Jan Beulich wrote:
>> Assuming the p2m type change arrives via XEN_DMOP_set_mem_type,
>> I think what we need to do is delay the actual change until no ioreq
>> is pending anymore, kind of like the VM event subsystem delays
>> certain CR and MSR writes until VM entry time. In this situation we'd
>> then further have to restrict the number of such pending changes,
>> because we need to record the request(s) somewhere. Question is
>> how many of such bufferable requests would be enough: Unless we
>> wanted to enforce that only ordinary aligned writes (and rmw-s) can
>> get us into this state, apart from writes crossing page boundaries we
>> may need to be able to cope with AVX512's scatter insns.
>> 
>> The other alternative I can think of would be to record the specific
>> failure case (finding p2m_ioreq_server) in the first pass, and act upon
>> it (instead of your mitigation above) during the retry. At first glance
>> it would seem more cumbersome to do the recording at that
>> layer though.
> 
> I like the latter suggestion more. It seems less invasive and less
> prone to regressions. I'd like to try to implement it, although I
> think the hypervisor check should be more general: if an IOREQ is in
> progress, don't go through the fast path and instead re-enter the
> IOREQ completion path.
> 
> What if we just check !hvm_ioreq_needs_completion() before returning
> X86EMUL_OKAY, i.e. fall through to the bad_gfn_to_mfn case if that
> check fails, as Paul suggested?

I didn't see such a suggestion, I think, and I'm afraid it would still
not be correct in the general case. As said before, getting back
HVMTRANS_okay means the write did actually complete, and no second
attempt to do the write should be done.

If anything I could see the code transition from STATE_IORESP_READY
to STATE_IOREQ_NONE in that case, with a comment thoroughly
explaining why that's needed.
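
I.e., purely as a sketch of where such a transition would sit:

    case HVMTRANS_okay:
        /*
         * <comment explaining why it is correct to consume a stale
         * response left behind by an earlier pass here>
         */
        if ( vio->io_req.state == STATE_IORESP_READY )
            vio->io_req.state = STATE_IOREQ_NONE;
        return X86EMUL_OKAY;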

Independent of the specific solution - any change to linear_write()
would imo want suitable mirroring in linear_read(), which implies that
anything that can't validly be done to the latter is also unlikely to be
valid for the former.

And then, following Andrew's response, wouldn't what you suggest
again take care of only the p2m_ioreq_server special case? (Since
the immediate issue is with just this type, such a special case fix
might still be acceptable, as long as it comes with a promise to deal
with the general case down the road.)

Jan




* Re: XenGT is still regressed on master
From: Igor Druzhinin @ 2019-03-08 15:18 UTC
  To: Jan Beulich; +Cc: Juergen Gross, Andrew Cooper, Paul Durrant, xen-devel

On 08/03/2019 14:58, Jan Beulich wrote:
>>>> On 08.03.19 at 15:25, <igor.druzhinin@citrix.com> wrote:
>> On 08/03/2019 11:55, Jan Beulich wrote:
>>
>> I like the latter suggestion more. It seems less invasive and less
>> prone to regressions. I'd like to try to implement it, although I
>> think the hypervisor check should be more general: if an IOREQ is in
>> progress, don't go through the fast path and instead re-enter the
>> IOREQ completion path.
>>
>> What if we just check !hvm_ioreq_needs_completion() before returning
>> X86EMUL_OKAY, i.e. fall through to the bad_gfn_to_mfn case if that
>> check fails, as Paul suggested?
> 
> I didn't see such a suggestion, I think, and I'm afraid it would still
> not be correct in the general case. As said before, getting back
> HVMTRANS_okay means the write did actually complete, and no second
> attempt to do the write should be done.

What if we don't do hvm_copy() in that case and instead go to the
slow path directly; would that be better?
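
I.e. at the top of linear_write(), before hvm_copy() gets a chance to
run (sketch only, untested):

    /*
     * An IOREQ is still in flight for this instruction: don't consult
     * the p2m at all, go straight to the slow (MMIO) path so that the
     * pending IOREQ can be picked up and completed.
     */
    if ( hvm_ioreq_needs_completion(&vio->io_req) )
        return hvmemul_linear_mmio_write(addr, bytes, p_data, pfec,
                                         hvmemul_ctxt, false);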

> If anything I could see the code transition from STATE_IORESP_READY
> to STATE_IOREQ_NONE in that case, with a comment thoroughly
> explaining why that's needed.

I don't think it's a good idea to transition IOREQ state in different
places: IOREQ consumption currently happens in hvmemul_do_io(), and we
need to re-enter it to finally finish IOREQ processing. If we want to
change that, the entire infrastructure needs to be re-architected.

> Independent of the specific solution - any change to linear_write()
> would imo want suitable mirroring in linear_read(), which implies that
> anything that can't validly be done to the latter is also unlikely to be
> valid for the former.

Seems reasonable; I'd also prefer the solution to be symmetric.

> And then, following Andrew's response, wouldn't what you suggest
> again take care of only the p2m_ioreq_server special case? (Since
> the immediate issue is with just this type, such a special case fix
> might still be acceptable, as long as it comes with a promise to deal
> with the general case down the road.)

I think the idea of completing an IOREQ if it's still in progress is
general enough (we could also look for other possibly latent places in
the code where we bail out prematurely).

Igor


* Re: XenGT is still regressed on master
From: Jan Beulich @ 2019-03-08 16:14 UTC
  To: Igor Druzhinin; +Cc: Juergen Gross, Andrew Cooper, Paul Durrant, xen-devel

>>> On 08.03.19 at 16:18, <igor.druzhinin@citrix.com> wrote:
> On 08/03/2019 14:58, Jan Beulich wrote:
>>>>> On 08.03.19 at 15:25, <igor.druzhinin@citrix.com> wrote:
>>> On 08/03/2019 11:55, Jan Beulich wrote:
>>>
>>> I like the latter suggestion more. It seems less invasive and less
>>> prone to regressions. I'd like to try to implement it, although I
>>> think the hypervisor check should be more general: if an IOREQ is in
>>> progress, don't go through the fast path and instead re-enter the
>>> IOREQ completion path.
>>>
>>> What if we just check !hvm_ioreq_needs_completion() before returning
>>> X86EMUL_OKAY, i.e. fall through to the bad_gfn_to_mfn case if that
>>> check fails, as Paul suggested?
>> 
>> I didn't see such a suggestion, I think, and I'm afraid it would still
>> not be correct in the general case. As said before, getting back
>> HVMTRANS_okay means the write did actually complete, and no second
>> attempt to do the write should be done.
> 
> What if we don't do hvm_copy() in that case and instead go to the
> slow path directly; would that be better?

Ah yes, that looks like a better approach (provided Paul also gives it
his okay). There being an ioreq in flight is a fair indication that we will
want to enter hvmemul_linear_mmio_{read,write}().

One caveat though: What do you suggest we do with page straddling
accesses? The commit introducing these functions was, after all, to
deal with this special case. The in-flight request we observe there
could be for the leading or trailing part of the access that's being
re-executed. While these could perhaps be told apart by looking at
the low address bits, similarly what do you do for multi-part (but
perhaps well aligned) accesses like cmps*, vgather*, or vscatter*?

>> If anything I could see the code transition from STATE_IORESP_READY
>> to STATE_IOREQ_NONE in that case, with a comment thoroughly
>> explaining why that's needed.
> 
> I don't think it's a good idea to transition IOREQ state in different
> places: IOREQ consumption currently happens in hvmemul_do_io(), and we
> need to re-enter it to finally finish IOREQ processing. If we want to
> change that, the entire infrastructure needs to be re-architected.

Sure - hence my reference to a justifying comment. If at all possible
we should indeed avoid this.

Jan




* Re: XenGT is still regressed on master
From: Igor Druzhinin @ 2019-03-08 18:37 UTC
  To: Jan Beulich; +Cc: Juergen Gross, Andrew Cooper, Paul Durrant, xen-devel

On 08/03/2019 16:14, Jan Beulich wrote:
>>>> On 08.03.19 at 16:18, <igor.druzhinin@citrix.com> wrote:
>> On 08/03/2019 14:58, Jan Beulich wrote:
>>>>>> On 08.03.19 at 15:25, <igor.druzhinin@citrix.com> wrote:
>>>> On 08/03/2019 11:55, Jan Beulich wrote:
>>>>
>>>> I like the latter suggestion more. It seems less invasive and less
>>>> prone to regressions. I'd like to try to implement it, although I
>>>> think the hypervisor check should be more general: if an IOREQ is in
>>>> progress, don't go through the fast path and instead re-enter the
>>>> IOREQ completion path.
>>>>
>>>> What if we just check !hvm_ioreq_needs_completion() before returning
>>>> X86EMUL_OKAY, i.e. fall through to the bad_gfn_to_mfn case if that
>>>> check fails, as Paul suggested?
>>> 
>>> I didn't see such a suggestion, I think, and I'm afraid it would still
>>> not be correct in the general case. As said before, getting back
>>> HVMTRANS_okay means the write did actually complete, and no second
>>> attempt to do the write should be done.
>>
>> What if we don't do hvm_copy() in that case and instead go to the
>> slow path directly; would that be better?
> 
> Ah yes, that looks like a better approach (provided Paul also gives it
> his okay). There being an ioreq in flight is a fair indication that we will
> want to enter hvmemul_linear_mmio_{read,write}().
> 
> One caveat though: What do you suggest we do with page straddling
> accesses? The commit introducing these functions was, after all, to
> deal with this special case. The in-flight request we observe there
> could be for the leading or trailing part of the access that's being
> re-executed. While these could perhaps be told apart by looking at
> the low address bits, similarly what do you do for multi-part (but
> perhaps well aligned) accesses like cmps*, vgather*, or vscatter*?

I don't think there is a problem here: IOREQs are issued sequentially
for each part of the access. hvmemul_linear_mmio_access() makes sure a
chunk of the access doesn't straddle a page boundary, and we finish the
requested part immediately after an IOREQ for it is issued. I don't
see how it could enter linear_{read,write}() for the wrong part unless the
emulation layer above is broken.

Igor


* Re: XenGT is still regressed on master
From: Juergen Gross @ 2019-03-11  7:14 UTC
  To: Igor Druzhinin, Jan Beulich; +Cc: Andrew Cooper, Paul Durrant, xen-devel

On 08/03/2019 19:37, Igor Druzhinin wrote:
> On 08/03/2019 16:14, Jan Beulich wrote:
>>>>> On 08.03.19 at 16:18, <igor.druzhinin@citrix.com> wrote:
>>> On 08/03/2019 14:58, Jan Beulich wrote:
>>>>>>> On 08.03.19 at 15:25, <igor.druzhinin@citrix.com> wrote:
>>>>> On 08/03/2019 11:55, Jan Beulich wrote:
>>>>>
>>>>> I like the latter suggestion more. It seems less invasive and less
>>>>> prone to regressions. I'd like to try to implement it, although I
>>>>> think the hypervisor check should be more general: if an IOREQ is in
>>>>> progress, don't go through the fast path and instead re-enter the
>>>>> IOREQ completion path.
>>>>>
>>>>> What if we just check !hvm_ioreq_needs_completion() before returning
>>>>> X86EMUL_OKAY, i.e. fall through to the bad_gfn_to_mfn case if that
>>>>> check fails, as Paul suggested?
>>>> 
>>>> I didn't see such a suggestion, I think, and I'm afraid it would still
>>>> not be correct in the general case. As said before, getting back
>>>> HVMTRANS_okay means the write did actually complete, and no second
>>>> attempt to do the write should be done.
>>>
>>> What if we don't do hvm_copy() in that case and instead go to the
>>> slow path directly; would that be better?
>>
>> Ah yes, that looks like a better approach (provided Paul also gives it
>> his okay). There being an ioreq in flight is a fair indication that we will
>> want to enter hvmemul_linear_mmio_{read,write}().
>>
>> One caveat though: What do you suggest we do with page straddling
>> accesses? The commit introducing these functions was, after all, to
>> deal with this special case. The in-flight request we observe there
>> could be for the leading or trailing part of the access that's being
>> re-executed. While these could perhaps be told apart by looking at
>> the low address bits, similarly what do you do for multi-part (but
>> perhaps well aligned) accesses like cmps*, vgather*, or vscatter*?
> 
> I don't think there is a problem here: IOREQs are issued sequentially
> for each part of the access. hvmemul_linear_mmio_access() makes sure a
> chunk of the access doesn't straddle a page boundary, and we finish the
> requested part immediately after an IOREQ for it is issued. I don't
> see how it could enter linear_{read,write}() for the wrong part unless the
> emulation layer above is broken.

Any estimate of when we can expect patches? The 4.12 release is pending and
this is the only remaining regression I'm aware of.

If you tell me there is no reasonable chance of anything acceptable
being posted this week, I'll go on with the release process and any fix
will be delayed until 4.13 / 4.12.1.


Juergen


* Re: XenGT is still regressed on master
From: Jan Beulich @ 2019-03-11  9:35 UTC
  To: Igor Druzhinin; +Cc: Juergen Gross, Andrew Cooper, Paul Durrant, xen-devel

>>> On 08.03.19 at 19:37, <igor.druzhinin@citrix.com> wrote:
> On 08/03/2019 16:14, Jan Beulich wrote:
>> One caveat though: What do you suggest we do with page straddling
>> accesses? The commit introducing these functions was, after all, to
>> deal with this special case. The in-flight request we observe there
>> could be for the leading or trailing part of the access that's being
>> re-executed. While these could perhaps be told apart by looking at
>> the low address bits, similarly what do you do for multi-part (but
>> perhaps well aligned) accesses like cmps*, vgather*, or vscatter*?
> 
> I don't think there is a problem here: IOREQs are issued sequentially
> for each part of the access. hvmemul_linear_mmio_access() makes sure a
> chunk of the access doesn't straddle a page boundary, and we finish the
> requested part immediately after an IOREQ for it is issued. I don't
> see how it could enter linear_{read,write}() for the wrong part unless the
> emulation layer above is broken.

First of all - there's no way to bypass linear_{read,write}(). The
question is what path is to be taken inside those functions.

Of the multiple memory accesses (besides the insn fetch) that an insn
may do, some may access RAM and some may access MMIO. So for
certain of the accesses we may _need_ to call
hvm_copy_{from,to}_guest_linear(), while for others we may want
(need) to bypass it as you have outlined.

Perhaps, besides the examples given before, a PUSH/POP from/to MMIO
is another relevant case to consider: In the common case the stack
would be in RAM.

And the page straddling change in particular was to deal with cases
where an individual access would have part of it map to RAM and
another part go to MMIO. I.e. we'd again go both routes within
linear_{read,write}() in the course of emulating a single insn.

The fact that we're issuing IOREQs sequentially doesn't help the
situation - upon re-execution all prior memory accesses will be re-
played. It's just that previously completed IOREQs won't be re-
issued themselves, due to their results having got recorded into the
struct hvm_mmio_cache instances hanging off of struct vcpu. See
hvmemul_phys_mmio_access(), hvmemul_find_mmio_cache(), and
hvmemul_linear_mmio_access().
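
For reference, the helper in question is roughly this (quoted from
memory, so details may differ slightly):

static struct hvm_mmio_cache *hvmemul_find_mmio_cache(
    struct hvm_vcpu_io *vio, unsigned long gla, uint8_t dir)
{
    unsigned int i;
    struct hvm_mmio_cache *cache;

    /* Re-executed accesses get satisfied from here, not re-issued. */
    for ( i = 0; i < vio->mmio_cache_count; i++ )
    {
        cache = &vio->mmio_cache[i];
        if ( gla == cache->gla && dir == cache->dir )
            return cache;
    }

    i = vio->mmio_cache_count++;
    if ( i == ARRAY_SIZE(vio->mmio_cache) )
    {
        /* The domain_crash() referred to below. */
        domain_crash(current->domain);
        return NULL;
    }

    cache = &vio->mmio_cache[i];
    memset(cache, 0, sizeof(*cache));
    cache->gla = gla;
    cache->dir = dir;

    return cache;
}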

Looking at hvmemul_find_mmio_cache() I now realize that commit
35a61c05ea ("x86emul: adjust handling of AVX2 gathers") didn't
have the intended effect, due to the domain_crash() there.
Apparently (judging by the commit's description) I looked only at
hvmemul_linear_mmio_access() back then.

Jan




* Re: XenGT is still regressed on master
From: Paul Durrant @ 2019-03-11 10:21 UTC
  To: 'Jan Beulich', Igor Druzhinin
  Cc: Juergen Gross, Andrew Cooper, xen-devel

> >>> On 08.03.19 at 19:37, <igor.druzhinin@citrix.com> wrote:
> > On 08/03/2019 16:14, Jan Beulich wrote:
> >> One caveat though: What do you suggest we do with page straddling
> >> accesses? The commit introducing these functions was, after all, to
> >> deal with this special case. The in-flight request we observe there
> >> could be for the leading or trailing part of the access that's being
> >> re-executed. While these could perhaps be told apart by looking at
> >> the low address bits, similarly what do you do for multi-part (but
> >> perhaps well aligned) accesses like cmps*, vgather*, or vscatter*?
> >
> > I don't think there is a problem here: IOREQs are issued sequentially
> > for each part of the access. hvmemul_linear_mmio_access() makes sure a
> > chunk of the access doesn't straddle a page boundary, and we finish the
> > requested part immediately after an IOREQ for it is issued. I don't
> > see how it could enter linear_{read,write}() for the wrong part unless the
> > emulation layer above is broken.
> 
> First of all - there's no way to bypass linear_{read,write}(). The
> question is what path is to be taken inside those functions.
> 
> Of the multiple memory accesses (besides the insn fetch) that an insn
> may do, some may access RAM and some may access MMIO. So for
> certain of the accesses we may _need_ to call
> hvm_copy_{from,to}_guest_linear(), while for others we may want
> (need) to bypass it as you have outlined.
> 
> Perhaps, besides the examples given before, a PUSH/POP from/to MMIO
> is another relevant case to consider: In the common case the stack
> would be in RAM.
> 
> And the page straddling change in particular was to deal with cases
> where an individual access would have part of it map to RAM and
> another part go to MMIO. I.e. we'd again go both routes within
> linear_{read,write}() in the course of emulating a single insn.
> 
> The fact that we're issuing IOREQs sequentially doesn't help the
> situation - upon re-execution all prior memory accesses will be re-
> played. It's just that previously completed IOREQs won't be re-
> issued themselves, due to their results having got recorded into the
> struct hvm_mmio_cache instances hanging off of struct vcpu. See
> hvmemul_phys_mmio_access(), hvmemul_find_mmio_cache(), and
> hvmemul_linear_mmio_access().
> 

AIUI the problem is not one of re-issuing or caching here. The problem
is that something has changed from MMIO to memory within the scope of a
single emulation, meaning that we take the wrong path in linear_read()
or linear_write(). The crucial thing is that if we start an MMIO for an
access then we need to make sure we treat re-issuing of the same access
as MMIO, even if RAM has magically appeared in the P2M in the meantime.
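
Something along these lines, perhaps (just a sketch - it assumes a
lookup-only variant of hvmemul_find_mmio_cache() which doesn't create
an entry, and which doesn't exist today):

    /*
     * An MMIO cache entry for this access means an earlier pass
     * already started handling it as MMIO.  Finish it as MMIO too,
     * regardless of what the P2M now says.
     */
    if ( hvmemul_lookup_mmio_cache(vio, addr, IOREQ_WRITE) /* invented */ )
        return hvmemul_linear_mmio_write(addr, bytes, p_data, pfec,
                                         hvmemul_ctxt, false);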

  Paul

> Looking at hvmemul_find_mmio_cache() I now realize that commit
> 35a61c05ea ("x86emul: adjust handling of AVX2 gathers") didn't
> have the intended effect, due to the domain_crash() there.
> Apparently (judging by the commit's description) I looked only at
> hvmemul_linear_mmio_access() back then.
> 
> Jan
> 



* Re: XenGT is still regressed on master
From: Jan Beulich @ 2019-03-11 10:39 UTC
  To: Paul Durrant; +Cc: Juergen Gross, Andrew Cooper, xen-devel, Igor Druzhinin

>>> On 11.03.19 at 11:21, <Paul.Durrant@citrix.com> wrote:
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 11 March 2019 09:35
>> 
>> >>> On 08.03.19 at 19:37, <igor.druzhinin@citrix.com> wrote:
>> > On 08/03/2019 16:14, Jan Beulich wrote:
>> >> One caveat though: What do you suggest we do with page straddling
>> >> accesses? The commit introducing these functions was, after all, to
>> >> deal with this special case. The in-flight request we observe there
>> >> could be for the leading or trailing part of the access that's being
>> >> re-executed. While these could perhaps be told apart by looking at
>> >> the low address bits, similarly what do you do for multi-part (but
>> >> perhaps well aligned) accesses like cmps*, vgather*, or vscatter*?
>> >
>> > I don't think there is a problem here: IOREQs are issued sequentially
>> > for each part of the access. hvmemul_linear_mmio_access() makes sure a
>> > chunk of the access doesn't straddle a page boundary, and we finish the
>> > requested part immediately after an IOREQ for it is issued. I don't
>> > see how it could enter linear_{read,write}() for the wrong part unless the
>> > emulation layer above is broken.
>> 
>> First of all - there's no way to bypass linear_{read,write}(). The
>> question is what path is to be taken inside those functions.
>> 
>> Of the multiple memory accesses (besides the insn fetch) that an insn
>> may do, some may access RAM and some may access MMIO. So for
>> certain of the accesses we may _need_ to call
>> hvm_copy_{from,to}_guest_linear(), while for others we may want
>> (need) to bypass it as you have outlined.
>> 
>> Perhaps, besides the examples given before, a PUSH/POP from/to MMIO
>> is another relevant case to consider: In the common case the stack
>> would be in RAM.
>> 
>> And the page straddling change in particular was to deal with cases
>> where an individual access would have part of it map to RAM and
>> another part go to MMIO. I.e. we'd again go both routes within
>> linear_{read,write}() in the course of emulating a single insn.
>> 
>> The fact that we're issuing IOREQs sequentially doesn't help the
>> situation - upon re-execution all prior memory accesses will be re-
>> played. It's just that previously completed IOREQs won't be re-
>> issued themselves, due to their results having got recorded into the
>> struct hvm_mmio_cache instances hanging off of struct vcpu. See
>> hvmemul_phys_mmio_access(), hvmemul_find_mmio_cache(), and
>> hvmemul_linear_mmio_access().
> 
> AIUI the problem is not one of re-issuing or caching here. The problem is 
> that something has changed from MMIO to memory within the scope of a single 
> emulation, meaning that we take the wrong path in linear_read() or
> linear_write(). The crucial thing is that if we start an MMIO for an access 
> then we need to make sure we treat re-issuing of the same access as MMIO, even 
> if RAM has magically appeared in the P2M in the meantime.

I agree. I've merely tried to explain to Igor what the overall picture
is, to make clear which cases need to continue to work with whatever
change might be considered.

Jan




* Re: XenGT is still regressed on master
From: Igor Druzhinin @ 2019-03-11 13:20 UTC
  To: Juergen Gross, Jan Beulich; +Cc: Andrew Cooper, Paul Durrant, xen-devel

On 11/03/2019 07:14, Juergen Gross wrote:
> Any estimate of when we can expect patches? The 4.12 release is pending and
> this is the only remaining regression I'm aware of.
> 
> If you tell me there is no reasonable chance of anything acceptable
> being posted this week, I'll go on with the release process and any fix
> will be delayed until 4.13 / 4.12.1.

As more problems were identified by Jan, I don't think it's reasonable
to delay 4.12 any further. I wouldn't expect a complete fix this week.

Igor

