* [Qemu-devel] [BUG] VM abort after migration
@ 2019-07-03 8:34 longpeng
2019-07-08 9:47 ` Dr. David Alan Gilbert
0 siblings, 1 reply; 8+ messages in thread
From: longpeng @ 2019-07-03 8:34 UTC (permalink / raw)
To: jasowang, quintela, dgilbert, v.maffione
Cc: longpeng, Gonglei (Arei), qemu-devel
Hi guys,
We found a QEMU core dump in our testing environment: the assertion
'assert(bus->irq_count[i] == 0)' in pcibus_reset() was triggered, with
bus->irq_count[i] equal to '-1'.
Through analysis we found it happened after VM migration, and we believe
it was caused by the following sequence:
*Migration Source*
1. Save the state of bus pci.0, including irq_count[x] ( = 0, old ).
2. Save the e1000:
   e1000_pre_save
     e1000_mit_timer
       set_interrupt_cause
         pci_set_irq --> updates pci_dev->irq_state to 1 and
                         bus->irq_count[x] to 1 ( new )
   This new irq_state is sent to the destination.
*Migration Dest*
1. The received irq_count[x] of pci.0 is 0, but the irq_state of the e1000 is 1.
2. If the e1000 needs to change its irq line, it calls pci_irq_handler();
   the irq_state may change back to 0, and bus->irq_count[x] then becomes
   -1 in this situation.
3. On the next VM reboot, the assertion is triggered.
Others have reported similar problems before:
[1] https://lists.gnu.org/archive/html/qemu-devel/2016-11/msg02525.html
[2] https://bugs.launchpad.net/qemu/+bug/1702621
Are there any patches that fix this problem?
Could we save the pcibus state after all the PCI devices are saved?
Thanks,
Longpeng(Mike)
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [Qemu-devel] [BUG] VM abort after migration
2019-07-03 8:34 [Qemu-devel] [BUG] VM abort after migration longpeng
@ 2019-07-08 9:47 ` Dr. David Alan Gilbert
2019-07-10 3:25 ` Jason Wang
0 siblings, 1 reply; 8+ messages in thread
From: Dr. David Alan Gilbert @ 2019-07-08 9:47 UTC (permalink / raw)
To: longpeng; +Cc: jasowang, Gonglei (Arei), qemu-devel, v.maffione, quintela
* longpeng (longpeng2@huawei.com) wrote:
> Hi guys,
>
> We found a qemu core in our testing environment, the assertion
> 'assert(bus->irq_count[i] == 0)' in pcibus_reset() was triggered and
> the bus->irq_count[i] is '-1'.
>
> Through analysis, it was happened after VM migration and we think
> it was caused by the following sequence:
>
> *Migration Source*
> 1. save bus pci.0 state, including irq_count[x] ( =0 , old )
> 2. save E1000:
> e1000_pre_save
> e1000_mit_timer
> set_interrupt_cause
> pci_set_irq --> update pci_dev->irq_state to 1 and
> update bus->irq_count[x] to 1 ( new )
> the irq_state sent to dest.
>
> *Migration Dest*
> 1. Receive the irq_count[x] of pci.0 is 0 , but the irq_state of e1000 is 1.
> 2. If the e1000 need change irqline , it would call to pci_irq_handler(),
> the irq_state maybe change to 0 and bus->irq_count[x] will become
> -1 in this situation.
> 3. do VM reboot then the assertion will be triggered.
>
> We also found some guys faced the similar problem:
> [1] https://lists.gnu.org/archive/html/qemu-devel/2016-11/msg02525.html
> [2] https://bugs.launchpad.net/qemu/+bug/1702621
>
> Is there some patches to fix this problem ?
I don't remember any.
> Can we save pcibus state after all the pci devs are saved ?
Does this problem only happen with e1000? I think so.
If it's only e1000 I think we should fix it - I think once the VM is
stopped for doing the device migration it shouldn't be raising
interrupts.
Dave
> Thanks,
> Longpeng(Mike)
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
* Re: [Qemu-devel] [BUG] VM abort after migration
2019-07-08 9:47 ` Dr. David Alan Gilbert
@ 2019-07-10 3:25 ` Jason Wang
2019-07-10 3:36 ` Longpeng (Mike)
0 siblings, 1 reply; 8+ messages in thread
From: Jason Wang @ 2019-07-10 3:25 UTC (permalink / raw)
To: Dr. David Alan Gilbert, longpeng
Cc: Gonglei (Arei), qemu-devel, v.maffione, quintela
On 2019/7/8 下午5:47, Dr. David Alan Gilbert wrote:
> * longpeng (longpeng2@huawei.com) wrote:
>> Hi guys,
>>
>> We found a qemu core in our testing environment, the assertion
>> 'assert(bus->irq_count[i] == 0)' in pcibus_reset() was triggered and
>> the bus->irq_count[i] is '-1'.
>>
>> Through analysis, it was happened after VM migration and we think
>> it was caused by the following sequence:
>>
>> *Migration Source*
>> 1. save bus pci.0 state, including irq_count[x] ( =0 , old )
>> 2. save E1000:
>> e1000_pre_save
>> e1000_mit_timer
>> set_interrupt_cause
>> pci_set_irq --> update pci_dev->irq_state to 1 and
>> update bus->irq_count[x] to 1 ( new )
>> the irq_state sent to dest.
>>
>> *Migration Dest*
>> 1. Receive the irq_count[x] of pci.0 is 0 , but the irq_state of e1000 is 1.
>> 2. If the e1000 need change irqline , it would call to pci_irq_handler(),
>> the irq_state maybe change to 0 and bus->irq_count[x] will become
>> -1 in this situation.
>> 3. do VM reboot then the assertion will be triggered.
>>
>> We also found some guys faced the similar problem:
>> [1] https://lists.gnu.org/archive/html/qemu-devel/2016-11/msg02525.html
>> [2] https://bugs.launchpad.net/qemu/+bug/1702621
>>
>> Is there some patches to fix this problem ?
> I don't remember any.
>
>> Can we save pcibus state after all the pci devs are saved ?
> Does this problem only happen with e1000? I think so.
> If it's only e1000 I think we should fix it - I think once the VM is
> stopped for doing the device migration it shouldn't be raising
> interrupts.
I wonder whether we can simply fix this by not setting ICS in pre_save(),
but instead scheduling the mitigation timer unconditionally in post_load().
Thanks
>
> Dave
>
>> Thanks,
>> Longpeng(Mike)
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
* Re: [Qemu-devel] [BUG] VM abort after migration
2019-07-10 3:25 ` Jason Wang
@ 2019-07-10 3:36 ` Longpeng (Mike)
2019-07-10 3:57 ` Jason Wang
0 siblings, 1 reply; 8+ messages in thread
From: Longpeng (Mike) @ 2019-07-10 3:36 UTC (permalink / raw)
To: Jason Wang, Dr. David Alan Gilbert
Cc: Gonglei (Arei), qemu-devel, v.maffione, quintela
在 2019/7/10 11:25, Jason Wang 写道:
>
> On 2019/7/8 下午5:47, Dr. David Alan Gilbert wrote:
>> * longpeng (longpeng2@huawei.com) wrote:
>>> Hi guys,
>>>
>>> We found a qemu core in our testing environment, the assertion
>>> 'assert(bus->irq_count[i] == 0)' in pcibus_reset() was triggered and
>>> the bus->irq_count[i] is '-1'.
>>>
>>> Through analysis, it was happened after VM migration and we think
>>> it was caused by the following sequence:
>>>
>>> *Migration Source*
>>> 1. save bus pci.0 state, including irq_count[x] ( =0 , old )
>>> 2. save E1000:
>>> e1000_pre_save
>>> e1000_mit_timer
>>> set_interrupt_cause
>>> pci_set_irq --> update pci_dev->irq_state to 1 and
>>> update bus->irq_count[x] to 1 ( new )
>>> the irq_state sent to dest.
>>>
>>> *Migration Dest*
>>> 1. Receive the irq_count[x] of pci.0 is 0 , but the irq_state of e1000 is 1.
>>> 2. If the e1000 need change irqline , it would call to pci_irq_handler(),
>>> the irq_state maybe change to 0 and bus->irq_count[x] will become
>>> -1 in this situation.
>>> 3. do VM reboot then the assertion will be triggered.
>>>
>>> We also found some guys faced the similar problem:
>>> [1] https://lists.gnu.org/archive/html/qemu-devel/2016-11/msg02525.html
>>> [2] https://bugs.launchpad.net/qemu/+bug/1702621
>>>
>>> Is there some patches to fix this problem ?
>> I don't remember any.
>>
>>> Can we save pcibus state after all the pci devs are saved ?
>> Does this problem only happen with e1000? I think so.
>> If it's only e1000 I think we should fix it - I think once the VM is
>> stopped for doing the device migration it shouldn't be raising
>> interrupts.
>
>
> I wonder maybe we can simply fix this by no setting ICS on pre_save() but
> scheduling mit timer unconditionally in post_load().
>
I also think this is an e1000 bug, because we have found more core dumps
with the same stack frame these days.
I'm not familiar with e1000, so I hope someone could fix it, thanks. :)
> Thanks
>
>
>>
>> Dave
>>
>>> Thanks,
>>> Longpeng(Mike)
>> --
>> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>
> .
>
--
Regards,
Longpeng(Mike)
* Re: [Qemu-devel] [BUG] VM abort after migration
2019-07-10 3:36 ` Longpeng (Mike)
@ 2019-07-10 3:57 ` Jason Wang
2019-07-10 8:27 ` Longpeng (Mike)
2019-07-27 6:10 ` Longpeng (Mike)
0 siblings, 2 replies; 8+ messages in thread
From: Jason Wang @ 2019-07-10 3:57 UTC (permalink / raw)
To: Longpeng (Mike), Dr. David Alan Gilbert
Cc: Gonglei (Arei), qemu-devel, v.maffione, quintela
[-- Attachment #1: Type: text/plain, Size: 2409 bytes --]
On 2019/7/10 上午11:36, Longpeng (Mike) wrote:
> 在 2019/7/10 11:25, Jason Wang 写道:
>> On 2019/7/8 下午5:47, Dr. David Alan Gilbert wrote:
>>> * longpeng (longpeng2@huawei.com) wrote:
>>>> Hi guys,
>>>>
>>>> We found a qemu core in our testing environment, the assertion
>>>> 'assert(bus->irq_count[i] == 0)' in pcibus_reset() was triggered and
>>>> the bus->irq_count[i] is '-1'.
>>>>
>>>> Through analysis, it was happened after VM migration and we think
>>>> it was caused by the following sequence:
>>>>
>>>> *Migration Source*
>>>> 1. save bus pci.0 state, including irq_count[x] ( =0 , old )
>>>> 2. save E1000:
>>>> e1000_pre_save
>>>> e1000_mit_timer
>>>> set_interrupt_cause
>>>> pci_set_irq --> update pci_dev->irq_state to 1 and
>>>> update bus->irq_count[x] to 1 ( new )
>>>> the irq_state sent to dest.
>>>>
>>>> *Migration Dest*
>>>> 1. Receive the irq_count[x] of pci.0 is 0 , but the irq_state of e1000 is 1.
>>>> 2. If the e1000 need change irqline , it would call to pci_irq_handler(),
>>>> the irq_state maybe change to 0 and bus->irq_count[x] will become
>>>> -1 in this situation.
>>>> 3. do VM reboot then the assertion will be triggered.
>>>>
>>>> We also found some guys faced the similar problem:
>>>> [1] https://lists.gnu.org/archive/html/qemu-devel/2016-11/msg02525.html
>>>> [2] https://bugs.launchpad.net/qemu/+bug/1702621
>>>>
>>>> Is there some patches to fix this problem ?
>>> I don't remember any.
>>>
>>>> Can we save pcibus state after all the pci devs are saved ?
>>> Does this problem only happen with e1000? I think so.
>>> If it's only e1000 I think we should fix it - I think once the VM is
>>> stopped for doing the device migration it shouldn't be raising
>>> interrupts.
>>
>> I wonder maybe we can simply fix this by no setting ICS on pre_save() but
>> scheduling mit timer unconditionally in post_load().
>>
> I also think this is a bug of e1000 because we find more cores with the same
> frame thease days.
>
> I'm not familiar with e1000 so hope someone could fix it, thanks. :)
>
Drafted a patch, see the attachment; please test.
Thanks
>> Thanks
>>
>>
>>> Dave
>>>
>>>> Thanks,
>>>> Longpeng(Mike)
>>> --
>>> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>> .
>>
[-- Attachment #2: 0001-e1000-don-t-raise-interrupt-in-pre_save.patch --]
[-- Type: text/x-patch, Size: 1547 bytes --]
From afe9258486672d76d7bf133ac9032a0d457bcd0b Mon Sep 17 00:00:00 2001
From: Jason Wang <jasowang@redhat.com>
Date: Wed, 10 Jul 2019 11:52:53 +0800
Subject: [PATCH] e1000: don't raise interrupt in pre_save()
We should not raise any interrupt after the VM has been stopped, but this
is what e1000 currently does when the mitigation timer is active in
pre_save(). Fix this by scheduling the timer in post_load() instead, which
makes sure the interrupt is only raised while the VM is running.
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
hw/net/e1000.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/hw/net/e1000.c b/hw/net/e1000.c
index 1dc1466332..a023ceb27c 100644
--- a/hw/net/e1000.c
+++ b/hw/net/e1000.c
@@ -1381,11 +1381,6 @@ static int e1000_pre_save(void *opaque)
     E1000State *s = opaque;
     NetClientState *nc = qemu_get_queue(s->nic);
 
-    /* If the mitigation timer is active, emulate a timeout now. */
-    if (s->mit_timer_on) {
-        e1000_mit_timer(s);
-    }
-
     /*
      * If link is down and auto-negotiation is supported and ongoing,
      * complete auto-negotiation immediately. This allows us to look
@@ -1423,7 +1418,8 @@ static int e1000_post_load(void *opaque, int version_id)
         s->mit_irq_level = false;
     }
     s->mit_ide = 0;
-    s->mit_timer_on = false;
+    s->mit_timer_on = true;
+    timer_mod(s->mit_timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 1);
 
     /* nc.link_down can't be migrated, so infer link_down according
      * to link status bit in mac_reg[STATUS].
--
2.19.1
* Re: [Qemu-devel] [BUG] VM abort after migration
2019-07-10 3:57 ` Jason Wang
@ 2019-07-10 8:27 ` Longpeng (Mike)
2019-07-27 6:10 ` Longpeng (Mike)
1 sibling, 0 replies; 8+ messages in thread
From: Longpeng (Mike) @ 2019-07-10 8:27 UTC (permalink / raw)
To: Jason Wang, Dr. David Alan Gilbert
Cc: Gonglei (Arei), qemu-devel, v.maffione, quintela
在 2019/7/10 11:57, Jason Wang 写道:
>
> On 2019/7/10 上午11:36, Longpeng (Mike) wrote:
>> 在 2019/7/10 11:25, Jason Wang 写道:
>>> On 2019/7/8 下午5:47, Dr. David Alan Gilbert wrote:
>>>> * longpeng (longpeng2@huawei.com) wrote:
>>>>> Hi guys,
>>>>>
>>>>> We found a qemu core in our testing environment, the assertion
>>>>> 'assert(bus->irq_count[i] == 0)' in pcibus_reset() was triggered and
>>>>> the bus->irq_count[i] is '-1'.
>>>>>
>>>>> Through analysis, it was happened after VM migration and we think
>>>>> it was caused by the following sequence:
>>>>>
>>>>> *Migration Source*
>>>>> 1. save bus pci.0 state, including irq_count[x] ( =0 , old )
>>>>> 2. save E1000:
>>>>> e1000_pre_save
>>>>> e1000_mit_timer
>>>>> set_interrupt_cause
>>>>> pci_set_irq --> update pci_dev->irq_state to 1 and
>>>>> update bus->irq_count[x] to 1 ( new )
>>>>> the irq_state sent to dest.
>>>>>
>>>>> *Migration Dest*
>>>>> 1. Receive the irq_count[x] of pci.0 is 0 , but the irq_state of e1000 is 1.
>>>>> 2. If the e1000 need change irqline , it would call to pci_irq_handler(),
>>>>> the irq_state maybe change to 0 and bus->irq_count[x] will become
>>>>> -1 in this situation.
>>>>> 3. do VM reboot then the assertion will be triggered.
>>>>>
>>>>> We also found some guys faced the similar problem:
>>>>> [1] https://lists.gnu.org/archive/html/qemu-devel/2016-11/msg02525.html
>>>>> [2] https://bugs.launchpad.net/qemu/+bug/1702621
>>>>>
>>>>> Is there some patches to fix this problem ?
>>>> I don't remember any.
>>>>
>>>>> Can we save pcibus state after all the pci devs are saved ?
>>>> Does this problem only happen with e1000? I think so.
>>>> If it's only e1000 I think we should fix it - I think once the VM is
>>>> stopped for doing the device migration it shouldn't be raising
>>>> interrupts.
>>>
>>> I wonder maybe we can simply fix this by no setting ICS on pre_save() but
>>> scheduling mit timer unconditionally in post_load().
>>>
>> I also think this is a bug of e1000 because we find more cores with the same
>> frame thease days.
>>
>> I'm not familiar with e1000 so hope someone could fix it, thanks. :)
>>
>
> Draft a path in attachment, please test.
>
Thanks. We'll test it for a few weeks and then give you feedback. :)
> Thanks
>
>
>>> Thanks
>>>
>>>
>>>> Dave
>>>>
>>>>> Thanks,
>>>>> Longpeng(Mike)
>>>> --
>>>> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>>> .
>>>
--
Regards,
Longpeng(Mike)
* Re: [Qemu-devel] [BUG] VM abort after migration
2019-07-10 3:57 ` Jason Wang
2019-07-10 8:27 ` Longpeng (Mike)
@ 2019-07-27 6:10 ` Longpeng (Mike)
2019-07-29 8:16 ` Jason Wang
1 sibling, 1 reply; 8+ messages in thread
From: Longpeng (Mike) @ 2019-07-27 6:10 UTC (permalink / raw)
To: Jason Wang
Cc: quintela, Gonglei (Arei), Dr. David Alan Gilbert, v.maffione, qemu-devel
在 2019/7/10 11:57, Jason Wang 写道:
>
> On 2019/7/10 上午11:36, Longpeng (Mike) wrote:
>> 在 2019/7/10 11:25, Jason Wang 写道:
>>> On 2019/7/8 下午5:47, Dr. David Alan Gilbert wrote:
>>>> * longpeng (longpeng2@huawei.com) wrote:
>>>>> Hi guys,
>>>>>
>>>>> We found a qemu core in our testing environment, the assertion
>>>>> 'assert(bus->irq_count[i] == 0)' in pcibus_reset() was triggered and
>>>>> the bus->irq_count[i] is '-1'.
>>>>>
>>>>> Through analysis, it was happened after VM migration and we think
>>>>> it was caused by the following sequence:
>>>>>
>>>>> *Migration Source*
>>>>> 1. save bus pci.0 state, including irq_count[x] ( =0 , old )
>>>>> 2. save E1000:
>>>>> e1000_pre_save
>>>>> e1000_mit_timer
>>>>> set_interrupt_cause
>>>>> pci_set_irq --> update pci_dev->irq_state to 1 and
>>>>> update bus->irq_count[x] to 1 ( new )
>>>>> the irq_state sent to dest.
>>>>>
>>>>> *Migration Dest*
>>>>> 1. Receive the irq_count[x] of pci.0 is 0 , but the irq_state of e1000 is 1.
>>>>> 2. If the e1000 need change irqline , it would call to pci_irq_handler(),
>>>>> the irq_state maybe change to 0 and bus->irq_count[x] will become
>>>>> -1 in this situation.
>>>>> 3. do VM reboot then the assertion will be triggered.
>>>>>
>>>>> We also found some guys faced the similar problem:
>>>>> [1] https://lists.gnu.org/archive/html/qemu-devel/2016-11/msg02525.html
>>>>> [2] https://bugs.launchpad.net/qemu/+bug/1702621
>>>>>
>>>>> Is there some patches to fix this problem ?
>>>> I don't remember any.
>>>>
>>>>> Can we save pcibus state after all the pci devs are saved ?
>>>> Does this problem only happen with e1000? I think so.
>>>> If it's only e1000 I think we should fix it - I think once the VM is
>>>> stopped for doing the device migration it shouldn't be raising
>>>> interrupts.
>>>
>>> I wonder maybe we can simply fix this by no setting ICS on pre_save() but
>>> scheduling mit timer unconditionally in post_load().
>>>
>> I also think this is a bug of e1000 because we find more cores with the same
>> frame thease days.
>>
>> I'm not familiar with e1000 so hope someone could fix it, thanks. :)
>>
>
> Draft a path in attachment, please test.
>
Hi Jason,
We've tested the patch for about two weeks and everything went well, thanks!
Feel free to add my:
Reported-and-tested-by: Longpeng <longpeng2@huawei.com>
> Thanks
>
>
>>> Thanks
>>>
>>>
>>>> Dave
>>>>
>>>>> Thanks,
>>>>> Longpeng(Mike)
>>>> --
>>>> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>>> .
>>>
--
Regards,
Longpeng(Mike)
* Re: [Qemu-devel] [BUG] VM abort after migration
2019-07-27 6:10 ` Longpeng (Mike)
@ 2019-07-29 8:16 ` Jason Wang
0 siblings, 0 replies; 8+ messages in thread
From: Jason Wang @ 2019-07-29 8:16 UTC (permalink / raw)
To: Longpeng (Mike)
Cc: qemu-devel, Gonglei (Arei), Dr. David Alan Gilbert, v.maffione, quintela
On 2019/7/27 下午2:10, Longpeng (Mike) wrote:
> 在 2019/7/10 11:57, Jason Wang 写道:
>> On 2019/7/10 上午11:36, Longpeng (Mike) wrote:
>>> 在 2019/7/10 11:25, Jason Wang 写道:
>>>> On 2019/7/8 下午5:47, Dr. David Alan Gilbert wrote:
>>>>> * longpeng (longpeng2@huawei.com) wrote:
>>>>>> Hi guys,
>>>>>>
>>>>>> We found a qemu core in our testing environment, the assertion
>>>>>> 'assert(bus->irq_count[i] == 0)' in pcibus_reset() was triggered and
>>>>>> the bus->irq_count[i] is '-1'.
>>>>>>
>>>>>> Through analysis, it was happened after VM migration and we think
>>>>>> it was caused by the following sequence:
>>>>>>
>>>>>> *Migration Source*
>>>>>> 1. save bus pci.0 state, including irq_count[x] ( =0 , old )
>>>>>> 2. save E1000:
>>>>>> e1000_pre_save
>>>>>> e1000_mit_timer
>>>>>> set_interrupt_cause
>>>>>> pci_set_irq --> update pci_dev->irq_state to 1 and
>>>>>> update bus->irq_count[x] to 1 ( new )
>>>>>> the irq_state sent to dest.
>>>>>>
>>>>>> *Migration Dest*
>>>>>> 1. Receive the irq_count[x] of pci.0 is 0 , but the irq_state of e1000 is 1.
>>>>>> 2. If the e1000 need change irqline , it would call to pci_irq_handler(),
>>>>>> the irq_state maybe change to 0 and bus->irq_count[x] will become
>>>>>> -1 in this situation.
>>>>>> 3. do VM reboot then the assertion will be triggered.
>>>>>>
>>>>>> We also found some guys faced the similar problem:
>>>>>> [1] https://lists.gnu.org/archive/html/qemu-devel/2016-11/msg02525.html
>>>>>> [2] https://bugs.launchpad.net/qemu/+bug/1702621
>>>>>>
>>>>>> Is there some patches to fix this problem ?
>>>>> I don't remember any.
>>>>>
>>>>>> Can we save pcibus state after all the pci devs are saved ?
>>>>> Does this problem only happen with e1000? I think so.
>>>>> If it's only e1000 I think we should fix it - I think once the VM is
>>>>> stopped for doing the device migration it shouldn't be raising
>>>>> interrupts.
>>>> I wonder maybe we can simply fix this by no setting ICS on pre_save() but
>>>> scheduling mit timer unconditionally in post_load().
>>>>
>>> I also think this is a bug of e1000 because we find more cores with the same
>>> frame thease days.
>>>
>>> I'm not familiar with e1000 so hope someone could fix it, thanks. :)
>>>
>> Draft a path in attachment, please test.
>>
> Hi Jason,
>
> We've tested the patch for about two weeks, everything went well, thanks!
>
> Feel free to add my:
> Reported-and-tested-by: Longpeng <longpeng2@huawei.com>
Applied.
Thanks
>> Thanks
>>
>>
>>>> Thanks
>>>>
>>>>
>>>>> Dave
>>>>>
>>>>>> Thanks,
>>>>>> Longpeng(Mike)
>>>>> --
>>>>> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>>>> .
>>>>
end of thread, other threads:[~2019-07-29 8:17 UTC | newest]
Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-07-03 8:34 [Qemu-devel] [BUG] VM abort after migration longpeng
2019-07-08 9:47 ` Dr. David Alan Gilbert
2019-07-10 3:25 ` Jason Wang
2019-07-10 3:36 ` Longpeng (Mike)
2019-07-10 3:57 ` Jason Wang
2019-07-10 8:27 ` Longpeng (Mike)
2019-07-27 6:10 ` Longpeng (Mike)
2019-07-29 8:16 ` Jason Wang