xen-devel.lists.xenproject.org archive mirror
* Re: [PATCH] OvmfPkg: End timer interrupt later to avoid stack overflow under load
       [not found] ` <ee7d61de-ed38-acc4-1666-cd886d76cc14@redhat.com>
@ 2020-06-17  3:16   ` Igor Druzhinin
  2020-06-17 12:44     ` Laszlo Ersek
  0 siblings, 1 reply; 2+ messages in thread
From: Igor Druzhinin @ 2020-06-17  3:16 UTC (permalink / raw)
  To: Laszlo Ersek, devel, xen-devel
  Cc: julien, jordan.l.justen, Ray Ni, ard.biesheuvel, anthony.perard,
	Paolo Bonzini

On 16/06/2020 19:42, Laszlo Ersek wrote:
> If I understand correctly, TimerInterruptHandler()
> [OvmfPkg/8254TimerDxe/Timer.c] currently does the following:
> 
> - RaiseTPL (TPL_HIGH_LEVEL) --> mask interrupts from being delivered
> 
> - mLegacy8259->EndOfInterrupt() --> permit the PIC to generate further
> interrupts (= make them pending)
> 
> - RestoreTPL() --> unmask interrupts (allow delivery)
> 
> RestoreTPL() is always expected to invoke handlers (on its own stack)
> that have just been unmasked, so that behavior is not unexpected, in my
> opinion.

Yes, this is where I'd like to have confirmation - opening a window
for an uncontrollable number of nested interrupts with a small stack
looks dangerous.
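
For reference, a trimmed-down sketch of the current (pre-patch) flow in
OvmfPkg/8254TimerDxe/Timer.c (details omitted):

  VOID
  EFIAPI
  TimerInterruptHandler (
    IN EFI_EXCEPTION_TYPE  InterruptType,
    IN EFI_SYSTEM_CONTEXT  SystemContext
    )
  {
    EFI_TPL  OriginalTPL;

    OriginalTPL = gBS->RaiseTPL (TPL_HIGH_LEVEL); // mask interrupt delivery

    // let the PIC raise the next IRQ0 (it becomes pending while masked)
    mLegacy8259->EndOfInterrupt (mLegacy8259, Efi8259Irq0);

    if (mTimerNotifyFunction != NULL) {
      mTimerNotifyFunction (mTimerPeriod);        // DXE core tick callback
    }

    gBS->RestoreTPL (OriginalTPL); // unmask; nested handlers may run here
  }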

> What seems unexpected is the queueing of a huge number of timer
> interrupts. I would think a timer interrupt is either pending or not
> pending (i.e. if it's already pending, then the next generated interrupt
> is coalesced, not queued). While there would still be a window between
> the EOI and the unmasking, I don't think it would normally allow for a
> *huge* number of queued interrupts (and consequently a stack overflow).

It's not the window between the EOI and the unmasking, but the very fact
that the vCPU is descheduled for a considerable amount of time, that
causes the backlog of timer interrupts to build up. This is Xen's default
behavior and it is configurable (there are several timer modes, including
the coalescing you mention). That is done for compatibility with some
guests that base their time accounting on the number of periodic
interrupts they receive.
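
For completeness, the mode is selected per guest in the xl domain
configuration via the timer_mode option; a rough example (the value is
one of the names documented in xl.cfg(5)):

  # HVM timer mode that does not replay a backlog of missed ticks
  timer_mode = "no_missed_ticks_pending"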

> So I basically see the root of the problem in the interrupts being
> queued rather than coalesced. I'm pretty unfamiliar with this x86 area
> (= the 8259 PIC in general), but the following wiki article seems to
> agree with my suspicion:
> 
> https://wiki.osdev.org/8259_PIC#How_does_the_8259_PIC_chip_work.3F
> 
>     [...] and whether there's an interrupt already pending. If the
>     channel is unmasked and there's no interrupt pending, the PIC will
>     raise the interrupt line [...]
> 
> Can we say that the interrupt queueing (as opposed to coalescing) is a
> Xen issue?

I can admit that the whole issue might be Xen-specific if that form
of timer mode is not used with QEMU-KVM. What mode is typical there,
then? We might consider switching Xen to a different mode if so, as I
believe those guests have been out of support for many years.

> (Hmmm... maybe the hypervisor *has* to queue the timer interrupts,
> otherwise some of them would simply be lost, and the guest would lose
> track of time.)
> 
> Either way, I'm not sure what the best approach is. This driver was
> moved under OvmfPkg from PcAtChipsetPkg in commit 1a3ffdff82e6
> ("OvmfPkg: Copy 8254TimerDxe driver from PcAtChipsetPkg", 2019-04-11).
> HpetTimerDxe also lives under PcAtChipsetPkg.
> 
> So I think I'll have to rely on the expertise of Ray here (CC'd).

Also note that, since the issue might be Xen-specific, we might want to
try to fix it in XenTimer only - I modified 8254Timer due to the fact
that Xen is still present in the general config (but that should soon
go away).

> Also, I recall a recent-ish QEMU commit that seems vaguely related
> (i.e., to timer interrupt coalescing -- see 7a3e29b12f5a, "mc146818rtc:
> fix timer interrupt reinjection again", 2019-11-19), so I'm CC'ing Paolo
> too.

Hmm, that looks more like an RTC-implementation-specific issue.

> Some more comments / questions below:
> 
>>
>> diff --git a/OvmfPkg/8254TimerDxe/Timer.c b/OvmfPkg/8254TimerDxe/Timer.c
>> index 67e22f5..fd1691b 100644
>> --- a/OvmfPkg/8254TimerDxe/Timer.c
>> +++ b/OvmfPkg/8254TimerDxe/Timer.c
>> @@ -79,8 +79,6 @@ TimerInterruptHandler (
>>  
>>    OriginalTPL = gBS->RaiseTPL (TPL_HIGH_LEVEL);
>>  
>> -  mLegacy8259->EndOfInterrupt (mLegacy8259, Efi8259Irq0);
>> -
>>    if (mTimerNotifyFunction != NULL) {
>>      //
>>      // @bug : This does not handle missed timer interrupts
>> @@ -89,6 +87,9 @@ TimerInterruptHandler (
>>    }
>>  
>>    gBS->RestoreTPL (OriginalTPL);
>> +
>> +  DisableInterrupts ();
>> +  mLegacy8259->EndOfInterrupt (mLegacy8259, Efi8259Irq0);
>>  }
> 
> So this briefly (temporarily) unmasks interrupt delivery (between
> RestoreTPL() and DisableInterrupts()) while the PIC is still blocked
> from generating more, and then unblocks the PIC.
> 
> It looks plausible for preventing the unbounded recursion per se, but
> why is it safe to leave the function with interrupts disabled? Before
> the patch, that didn't use to be the case.

Quickly looking through the code, it appears to me that the first thing
the caller does after the interrupt handler returns is clear the
interrupt flag, to make sure interrupts are disabled. So I don't see any
assumption that interrupts should be enabled on exit. But I might not
know about all of the possible combinations here.

Igor



* Re: [PATCH] OvmfPkg: End timer interrupt later to avoid stack overflow under load
  2020-06-17  3:16   ` [PATCH] OvmfPkg: End timer interrupt later to avoid stack overflow under load Igor Druzhinin
@ 2020-06-17 12:44     ` Laszlo Ersek
  0 siblings, 0 replies; 2+ messages in thread
From: Laszlo Ersek @ 2020-06-17 12:44 UTC (permalink / raw)
  To: Igor Druzhinin, devel, xen-devel
  Cc: julien, jordan.l.justen, Ray Ni, ard.biesheuvel, anthony.perard,
	Paolo Bonzini

On 06/17/20 05:16, Igor Druzhinin wrote:
> On 16/06/2020 19:42, Laszlo Ersek wrote:
>> If I understand correctly, TimerInterruptHandler()
>> [OvmfPkg/8254TimerDxe/Timer.c] currently does the following:
>>
>> - RaiseTPL (TPL_HIGH_LEVEL) --> mask interrupts from being delivered
>>
>> - mLegacy8259->EndOfInterrupt() --> permit the PIC to generate further
>> interrupts (= make them pending)
>>
>> - RestoreTPL() --> unmask interrupts (allow delivery)
>>
>> RestoreTPL() is always expected to invoke handlers (on its own stack)
>> that have just been unmasked, so that behavior is not unexpected, in my
>> opinion.
> 
> Yes, this is where I'd like to have confirmation - opening a window
> for an uncontrollable number of nested interrupts with a small stack
> looks dangerous.

Sorry, I meant the above more generally. The sentence

  RestoreTPL() is always expected to invoke handlers (on its own stack)
  that have just been unmasked

doesn't only refer to actual timer hardware interrupts (in connection
with TPL_HIGH_LEVEL), but also to the invocation of event notification
functions that have been queued while running at the raised TPL.

Quoting "EFI_BOOT_SERVICES.CreateEvent()" from the spec:

    Events exist in one of two states, “waiting” or “signaled.” When an
    event is created, firmware puts it in the “waiting” state. When the
    event is signaled, firmware changes its state to “signaled” and, if
    EVT_NOTIFY_SIGNAL is specified, places a call to its notification
    function in a FIFO queue. There is a queue for each of the “basic”
    task priority levels defined in Section 7.1 (TPL_CALLBACK, and
    TPL_NOTIFY). The functions in these queues are invoked in FIFO
    order, starting with the highest priority level queue and proceeding
    to the lowest priority queue that is unmasked by the current TPL. If
    the current TPL is equal to or greater than the queued notification,
    it will wait until the TPL is lowered via
    EFI_BOOT_SERVICES.RestoreTPL().

In practice, when the event is signaled and the current TPL is not
masking the TPL of the associated notify function, the notify function
is called internally to the signaling of the event. Otherwise, when the
unmasking occurs via RestoreTPL(), the queued notification functions are
invoked on the stack of RestoreTPL() -- in other words, internally to
the RestoreTPL() function call itself.

So all I meant was that notification functions running internally to
RestoreTPL() is by design.
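
A minimal illustration of that pattern (generic Boot Services usage,
not code from this thread):

  STATIC
  VOID
  EFIAPI
  ExampleNotify (
    IN EFI_EVENT  Event,
    IN VOID       *Context
    )
  {
    //
    // Runs on the stack of whatever call lowers the TPL below
    // TPL_CALLBACK -- typically gBS->RestoreTPL().
    //
  }

  // ... in some function running at TPL_APPLICATION ...
  EFI_EVENT  Event;
  EFI_TPL    OldTpl;

  gBS->CreateEvent (EVT_NOTIFY_SIGNAL, TPL_CALLBACK, ExampleNotify, NULL,
         &Event);

  OldTpl = gBS->RaiseTPL (TPL_NOTIFY); // TPL_CALLBACK notifications masked
  gBS->SignalEvent (Event);            // ExampleNotify() only queued here
  gBS->RestoreTPL (OldTpl);            // ExampleNotify() runs inside this call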

What's unexpected is the "uncontrollable number" of nested interrupts.

> 
>> What seems unexpected is the queueing of a huge number of timer
>> interrupts. I would think a timer interrupt is either pending or not
>> pending (i.e. if it's already pending, then the next generated interrupt
>> is coalesced, not queued). While there would still be a window between
>> the EOI and the unmasking, I don't think it would normally allow for a
>> *huge* number of queued interrupts (and consequently a stack overflow).
> 
> It's not the window between the EOI and the unmasking, but the very fact
> that the vCPU is descheduled for a considerable amount of time, that
> causes the backlog of timer interrupts to build up. This is Xen's default
> behavior and it is configurable (there are several timer modes, including
> the coalescing you mention). That is done for compatibility with some
> guests that base their time accounting on the number of periodic
> interrupts they receive.

OK, thanks for explaining.

> 
>> So I basically see the root of the problem in the interrupts being
>> queued rather than coalesced. I'm pretty unfamiliar with this x86 area
>> (= the 8259 PIC in general), but the following wiki article seems to
>> agree with my suspicion:
>>
>> https://wiki.osdev.org/8259_PIC#How_does_the_8259_PIC_chip_work.3F
>>
>>     [...] and whether there's an interrupt already pending. If the
>>     channel is unmasked and there's no interrupt pending, the PIC will
>>     raise the interrupt line [...]
>>
>> Can we say that the interrupt queueing (as opposed to coalescing) is a
>> Xen issue?
> 
> I can admit that the whole issue might be Xen-specific if that form
> of timer mode is not used with QEMU-KVM. What mode is typical there,
> then?

That question is too difficult for me to answer :(

> We might consider switching Xen to a different mode if so, as I believe
> those guests have been out of support for many years.

Can you perhaps test this hypothesis? If you select the coalescing timer
mode for the Xen guest in question, does the symptom go away?

> 
>> (Hmmm... maybe the hypervisor *has* to queue the timer interrupts,
>> otherwise some of them would simply be lost, and the guest would lose
>> track of time.)
>>
>> Either way, I'm not sure what the best approach is. This driver was
>> moved under OvmfPkg from PcAtChipsetPkg in commit 1a3ffdff82e6
>> ("OvmfPkg: Copy 8254TimerDxe driver from PcAtChipsetPkg", 2019-04-11).
>> HpetTimerDxe also lives under PcAtChipsetPkg.
>>
>> So I think I'll have to rely on the expertise of Ray here (CC'd).
> 
> Also note that, since the issue might be Xen-specific, we might want to
> try to fix it in XenTimer only - I modified 8254Timer due to the fact
> that Xen is still present in the general config (but that should soon
> go away).

We could also modify 8254TimerDxe like this:

- provide the new variant of the TimerInterruptHandler() function for
Xen only, without touching the existing one -- simply introduce it as a
new function,

- in TimerDriverInitialize(), first call XenDetected() from
XenPlatformLib, then choose the argument for the
mCpu->RegisterInterruptHandler() call accordingly.

This wouldn't be difficult to locate and revert when
<https://bugzilla.tianocore.org/show_bug.cgi?id=2122> is addressed. (It
would be easy to find by grepping for XenDetected().)
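
Roughly (just a sketch -- the Xen-only handler name is made up here, and
the RegisterInterruptHandler() arguments should stay exactly as
TimerDriverInitialize() already passes them):

  EFI_CPU_INTERRUPT_HANDLER  Handler;

  //
  // Keep the current handler for QEMU/KVM; only a Xen guest gets the
  // variant with the reordered EndOfInterrupt().
  //
  Handler = XenDetected () ? TimerInterruptHandlerXen : TimerInterruptHandler;
  Status  = mCpu->RegisterInterruptHandler (mCpu, TimerVector, Handler);
  ASSERT_EFI_ERROR (Status);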

[...]

Thanks!
Laszlo




Thread overview: 2+ messages
     [not found] <1592275782-9369-1-git-send-email-igor.druzhinin@citrix.com>
     [not found] ` <ee7d61de-ed38-acc4-1666-cd886d76cc14@redhat.com>
2020-06-17  3:16   ` [PATCH] OvmfPkg: End timer interrupt later to avoid stack overflow under load Igor Druzhinin
2020-06-17 12:44     ` Laszlo Ersek
