* [Xen-devel] [PATCH v3] x86: irq: Do not BUG_ON multiple unbind calls for shared pirqs
@ 2020-01-29  9:28 Varad Gautam
  2020-01-29 10:08 ` Julien Grall
  2020-01-29 10:30 ` Roger Pau Monné
  0 siblings, 2 replies; 7+ messages in thread
From: Varad Gautam @ 2020-01-29  9:28 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Varad Gautam, Julien Grall, Roger Pau Monné

XEN_DOMCTL_destroydomain creates a continuation if domain_kill() returns
-ERESTART. In that scenario, it is possible to receive multiple
__pirq_guest_unbind calls for the same pirq from domain_kill, if the pirq
has not yet been removed from the domain's pirq_tree, via:
  domain_kill()
    -> domain_relinquish_resources()
      -> pci_release_devices()
        -> pci_clean_dpci_irq()
          -> pirq_guest_unbind()
            -> __pirq_guest_unbind()

For a shared pirq (nr_guests > 1), the first call removes the current
domain from the pirq's guests[] list, but the action handler is never freed
as there are other guests still using this pirq. As a result, on the second
call, __pirq_guest_unbind searches for the current domain, which has already
been removed from the guests[] list, and hits a BUG_ON.

Make __pirq_guest_unbind safe to be called multiple times by letting Xen
continue if a shared pirq has already been unbound from this guest. The
pirq will be removed from the domain's pirq_tree during destruction in
complete_domain_destroy anyway.

Signed-off-by: Varad Gautam <vrd@amazon.de>
Reviewed-by: Paul Durrant <paul@xen.org>
CC: Jan Beulich <jbeulich@suse.com>
CC: Julien Grall <julien@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>

v2: Split the check on action->nr_guests > 0 and make it an ASSERT.
v3: Style fixups.
---
 xen/arch/x86/irq.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 310ac00..4b172eb 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1681,7 +1681,20 @@ static irq_guest_action_t *__pirq_guest_unbind(
 
     for ( i = 0; (i < action->nr_guests) && (action->guest[i] != d); i++ )
         continue;
-    BUG_ON(i == action->nr_guests);
+    if ( i == action->nr_guests )
+    {
+        ASSERT(action->nr_guests > 0);
+        /*
+         * In case the pirq was shared, unbound for this domain in an earlier
+         * call, but still existed on the domain's pirq_tree, we still reach
+         * here if there are any later unbind calls on the same pirq. Return
+         * if such an unbind happens.
+         */
+        if ( action->shareable )
+            return NULL;
+        BUG();
+    }
+
     memmove(&action->guest[i], &action->guest[i+1],
             (action->nr_guests-i-1) * sizeof(action->guest[0]));
     action->nr_guests--;
-- 
2.7.4







* Re: [Xen-devel] [PATCH v3] x86: irq: Do not BUG_ON multiple unbind calls for shared pirqs
  2020-01-29  9:28 [Xen-devel] [PATCH v3] x86: irq: Do not BUG_ON multiple unbind calls for shared pirqs Varad Gautam
@ 2020-01-29 10:08 ` Julien Grall
  2020-01-29 10:30 ` Roger Pau Monné
  1 sibling, 0 replies; 7+ messages in thread
From: Julien Grall @ 2020-01-29 10:08 UTC (permalink / raw)
  To: Varad Gautam, xen-devel; +Cc: Andrew Cooper, Roger Pau Monné

Hi,

On 29/01/2020 09:28, Varad Gautam wrote:
> XEN_DOMCTL_destroydomain creates a continuation if domain_kill -ERESTARTS.
> In that scenario, it is possible to receive multiple _pirq_guest_unbind
> calls for the same pirq from domain_kill, if the pirq has not yet been
> removed from the domain's pirq_tree, as:
>    domain_kill()
>      -> domain_relinquish_resources()
>        -> pci_release_devices()
>          -> pci_clean_dpci_irq()
>            -> pirq_guest_unbind()
>              -> __pirq_guest_unbind()
> 
> For a shared pirq (nr_guests > 1), the first call would zap the current
> domain from the pirq's guests[] list, but the action handler is never freed
> as there are other guests using this pirq. As a result, on the second call,
> __pirq_guest_unbind searches for the current domain which has been removed
> from the guests[] list, and hits a BUG_ON.
> 
> Make __pirq_guest_unbind safe to be called multiple times by letting xen
> continue if a shared pirq has already been unbound from this guest. The
> PIRQ will be cleaned up from the domain's pirq_tree during the destruction
> in complete_domain_destroy anyways.
> 
> Signed-off-by: Varad Gautam <vrd@amazon.de>
> Reviewed-by: Paul Durrant <paul@xen.org>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Julien Grall <julien@xen.org>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> v2: Split the check on action->nr_guests > 0 and make it an ASSERT.
> v3: Style fixups.
> ---
>   xen/arch/x86/irq.c | 15 ++++++++++++++-
>   1 file changed, 14 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
> index 310ac00..4b172eb 100644
> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -1681,7 +1681,20 @@ static irq_guest_action_t *__pirq_guest_unbind(
>   
>       for ( i = 0; (i < action->nr_guests) && (action->guest[i] != d); i++ )
>           continue;
> -    BUG_ON(i == action->nr_guests);
> +    if ( i == action->nr_guests )
> +    {
> +        ASSERT(action->nr_guests > 0);
> +        /*
> +         * In case the pirq was shared, unbound for this domain in an earlier
> +         * call, but still existed on the domain's pirq_tree, we still reach
> +         * here if there are any later unbind calls on the same pirq. Return
> +         * if such an unbind happens.
> +         */
> +        if ( action->shareable )
> +            return NULL;
> +        BUG();

Please see my comment on v2 about the BUG(). I am still unconvinced this 
is the right error handling here.

Cheers,

-- 
Julien Grall


* Re: [Xen-devel] [PATCH v3] x86: irq: Do not BUG_ON multiple unbind calls for shared pirqs
  2020-01-29  9:28 [Xen-devel] [PATCH v3] x86: irq: Do not BUG_ON multiple unbind calls for shared pirqs Varad Gautam
  2020-01-29 10:08 ` Julien Grall
@ 2020-01-29 10:30 ` Roger Pau Monné
  2020-01-29 11:47   ` Jan Beulich
  1 sibling, 1 reply; 7+ messages in thread
From: Roger Pau Monné @ 2020-01-29 10:30 UTC (permalink / raw)
  To: Varad Gautam; +Cc: xen-devel, Julien Grall, Andrew Cooper

Hello,

Thanks for the patch! Next time could you please try to reply to the
previous questions before sending a new version:

https://lists.xenproject.org/archives/html/xen-devel/2019-07/msg00257.html

On Wed, Jan 29, 2020 at 10:28:07AM +0100, Varad Gautam wrote:
> XEN_DOMCTL_destroydomain creates a continuation if domain_kill -ERESTARTS.
> In that scenario, it is possible to receive multiple _pirq_guest_unbind
> calls for the same pirq from domain_kill, if the pirq has not yet been
> removed from the domain's pirq_tree, as:
>   domain_kill()
>     -> domain_relinquish_resources()
>       -> pci_release_devices()
>         -> pci_clean_dpci_irq()
>           -> pirq_guest_unbind()
>             -> __pirq_guest_unbind()
> 
> For a shared pirq (nr_guests > 1), the first call would zap the current
> domain from the pirq's guests[] list, but the action handler is never freed
> as there are other guests using this pirq. As a result, on the second call,
> __pirq_guest_unbind searches for the current domain which has been removed
> from the guests[] list, and hits a BUG_ON.
> 
> Make __pirq_guest_unbind safe to be called multiple times by letting xen
> continue if a shared pirq has already been unbound from this guest. The
> PIRQ will be cleaned up from the domain's pirq_tree during the destruction
> in complete_domain_destroy anyways.

So AFAICT this is because pt_pirq_softirq_active() returns true in
pci_clean_dpci_irq() and hence the iteration is stopped and
hvm_domain_irq(d)->dpci is not set to NULL.

Would it be possible to remove the already-processed IRQs from the
domain's pirq_tree?

pci_clean_dpci_irq() already seems to free part of this structure, and
it would be nicer IMO if we didn't leave cleaned-up stuff behind on
-ERESTART.

Thanks, Roger.


* Re: [Xen-devel] [PATCH v3] x86: irq: Do not BUG_ON multiple unbind calls for shared pirqs
  2020-01-29 10:30 ` Roger Pau Monné
@ 2020-01-29 11:47   ` Jan Beulich
  2020-03-05  9:36     ` Jan Beulich
  0 siblings, 1 reply; 7+ messages in thread
From: Jan Beulich @ 2020-01-29 11:47 UTC (permalink / raw)
  To: Roger Pau Monné, Varad Gautam; +Cc: xen-devel, Julien Grall, Andrew Cooper

On 29.01.2020 11:30, Roger Pau Monné wrote:
> Hello,
> 
> Thanks for the patch! Next time could you please try to reply to the
> previous questions before sending a new version:
> 
> https://lists.xenproject.org/archives/html/xen-devel/2019-07/msg00257.html
> 
> On Wed, Jan 29, 2020 at 10:28:07AM +0100, Varad Gautam wrote:
>> XEN_DOMCTL_destroydomain creates a continuation if domain_kill -ERESTARTS.
>> In that scenario, it is possible to receive multiple _pirq_guest_unbind
>> calls for the same pirq from domain_kill, if the pirq has not yet been
>> removed from the domain's pirq_tree, as:
>>   domain_kill()
>>     -> domain_relinquish_resources()
>>       -> pci_release_devices()
>>         -> pci_clean_dpci_irq()
>>           -> pirq_guest_unbind()
>>             -> __pirq_guest_unbind()
>>
>> For a shared pirq (nr_guests > 1), the first call would zap the current
>> domain from the pirq's guests[] list, but the action handler is never freed
>> as there are other guests using this pirq. As a result, on the second call,
>> __pirq_guest_unbind searches for the current domain which has been removed
>> from the guests[] list, and hits a BUG_ON.
>>
>> Make __pirq_guest_unbind safe to be called multiple times by letting xen
>> continue if a shared pirq has already been unbound from this guest. The
>> PIRQ will be cleaned up from the domain's pirq_tree during the destruction
>> in complete_domain_destroy anyways.
> 
> So AFAICT this is because pt_pirq_softirq_active() returns true in
> pci_clean_dpci_irq() and hence the iteration is stopped and
> hvm_domain_irq(d)->dpci is not set to NULL.
> 
> Would it be possible to clean the already processed IRQs from the
> domain pirq_tree?

This might work, perhaps by way of invoking unmap_domain_pirq()
right after pirq_guest_unbind(), as long as hvm_dirq_assist() (as
called from dpci_softirq()) can be made to skip all actual work it
means to do in such a case. Unfortunately the two ->masked fields
acted upon are different between __pirq_guest_unbind() and
hvm_dirq_assist().
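
Roughly (written from memory and untested, locking and error handling
ignored, and the predicate used on the softirq side is only a
placeholder):

    /* In pci_clean_dpci_irq(), as per the above: */
    struct pirq *pirq = dpci_pirq(pirq_dpci);

    pirq_guest_unbind(d, pirq);
    unmap_domain_pirq(d, pirq->pirq);       /* needs d->event_lock held, iirc */

    /* And in hvm_dirq_assist() / dpci_softirq(), something like: */
    if ( pirq_was_unmapped(pirq_dpci) )     /* placeholder predicate */
        return;                             /* skip all actual work */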

Jan


* Re: [Xen-devel] [PATCH v3] x86: irq: Do not BUG_ON multiple unbind calls for shared pirqs
  2020-01-29 11:47   ` Jan Beulich
@ 2020-03-05  9:36     ` Jan Beulich
  2020-03-05 17:37       ` Durrant, Paul
  0 siblings, 1 reply; 7+ messages in thread
From: Jan Beulich @ 2020-03-05  9:36 UTC (permalink / raw)
  To: Varad Gautam; +Cc: xen-devel, Andrew Cooper, Julien Grall, Roger Pau Monné

On 29.01.2020 12:47, Jan Beulich wrote:
> On 29.01.2020 11:30, Roger Pau Monné wrote:
>> Hello,
>>
>> Thanks for the patch! Next time could you please try to reply to the
>> previous questions before sending a new version:
>>
>> https://lists.xenproject.org/archives/html/xen-devel/2019-07/msg00257.html
>>
>> On Wed, Jan 29, 2020 at 10:28:07AM +0100, Varad Gautam wrote:
>>> XEN_DOMCTL_destroydomain creates a continuation if domain_kill -ERESTARTS.
>>> In that scenario, it is possible to receive multiple _pirq_guest_unbind
>>> calls for the same pirq from domain_kill, if the pirq has not yet been
>>> removed from the domain's pirq_tree, as:
>>>   domain_kill()
>>>     -> domain_relinquish_resources()
>>>       -> pci_release_devices()
>>>         -> pci_clean_dpci_irq()
>>>           -> pirq_guest_unbind()
>>>             -> __pirq_guest_unbind()
>>>
>>> For a shared pirq (nr_guests > 1), the first call would zap the current
>>> domain from the pirq's guests[] list, but the action handler is never freed
>>> as there are other guests using this pirq. As a result, on the second call,
>>> __pirq_guest_unbind searches for the current domain which has been removed
>>> from the guests[] list, and hits a BUG_ON.
>>>
>>> Make __pirq_guest_unbind safe to be called multiple times by letting xen
>>> continue if a shared pirq has already been unbound from this guest. The
>>> PIRQ will be cleaned up from the domain's pirq_tree during the destruction
>>> in complete_domain_destroy anyways.
>>
>> So AFAICT this is because pt_pirq_softirq_active() returns true in
>> pci_clean_dpci_irq() and hence the iteration is stopped and
>> hvm_domain_irq(d)->dpci is not set to NULL.
>>
>> Would it be possible to clean the already processed IRQs from the
>> domain pirq_tree?
> 
> This might work, perhaps by way of invoking unmap_domain_pirq()
> right after pirq_guest_unbind(), as long as hvm_dirq_assist() (as
> called from dpci_softirq()) can be made skip all actual work it
> means to do in such a case. Unfortunately the two ->masked fields
> acted upon are different between __pirq_guest_unbind() and
> hvm_dirq_assist().

Ping? Unless I hear back soon, I'm afraid I'm going to drop this
patch from my "pending" mail folder, as it has not been agreed whether
to stick to the current version or to go this alternative route. A more
"natural" approach to fixing the issue would be quite nice, after all.

Jan


* Re: [Xen-devel] [PATCH v3] x86: irq: Do not BUG_ON multiple unbind calls for shared pirqs
  2020-03-05  9:36     ` Jan Beulich
@ 2020-03-05 17:37       ` Durrant, Paul
  2020-03-06 15:07         ` Durrant, Paul
  0 siblings, 1 reply; 7+ messages in thread
From: Durrant, Paul @ 2020-03-05 17:37 UTC (permalink / raw)
  To: Jan Beulich, Gautam, Varad
  Cc: xen-devel, Roger Pau Monné, Julien Grall, Andrew Cooper

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Jan Beulich
> Sent: 05 March 2020 09:37
> To: Gautam, Varad <vrd@amazon.de>
> Cc: xen-devel@lists.xenproject.org; Andrew Cooper <andrew.cooper3@citrix.com>; Julien Grall
> <julien@xen.org>; Roger Pau Monné <roger.pau@citrix.com>
> Subject: Re: [Xen-devel] [PATCH v3] x86: irq: Do not BUG_ON multiple unbind calls for shared pirqs
> 
> On 29.01.2020 12:47, Jan Beulich wrote:
> > On 29.01.2020 11:30, Roger Pau Monné wrote:
> >> Hello,
> >>
> >> Thanks for the patch! Next time could you please try to reply to the
> >> previous questions before sending a new version:
> >>
> >> https://lists.xenproject.org/archives/html/xen-devel/2019-07/msg00257.html
> >>
> >> On Wed, Jan 29, 2020 at 10:28:07AM +0100, Varad Gautam wrote:
> >>> XEN_DOMCTL_destroydomain creates a continuation if domain_kill -ERESTARTS.
> >>> In that scenario, it is possible to receive multiple _pirq_guest_unbind
> >>> calls for the same pirq from domain_kill, if the pirq has not yet been
> >>> removed from the domain's pirq_tree, as:
> >>>   domain_kill()
> >>>     -> domain_relinquish_resources()
> >>>       -> pci_release_devices()
> >>>         -> pci_clean_dpci_irq()
> >>>           -> pirq_guest_unbind()
> >>>             -> __pirq_guest_unbind()
> >>>
> >>> For a shared pirq (nr_guests > 1), the first call would zap the current
> >>> domain from the pirq's guests[] list, but the action handler is never freed
> >>> as there are other guests using this pirq. As a result, on the second call,
> >>> __pirq_guest_unbind searches for the current domain which has been removed
> >>> from the guests[] list, and hits a BUG_ON.
> >>>
> >>> Make __pirq_guest_unbind safe to be called multiple times by letting xen
> >>> continue if a shared pirq has already been unbound from this guest. The
> >>> PIRQ will be cleaned up from the domain's pirq_tree during the destruction
> >>> in complete_domain_destroy anyways.
> >>
> >> So AFAICT this is because pt_pirq_softirq_active() returns true in
> >> pci_clean_dpci_irq() and hence the iteration is stopped and
> >> hvm_domain_irq(d)->dpci is not set to NULL.
> >>
> >> Would it be possible to clean the already processed IRQs from the
> >> domain pirq_tree?
> >
> > This might work, perhaps by way of invoking unmap_domain_pirq()
> > right after pirq_guest_unbind(), as long as hvm_dirq_assist() (as
> > called from dpci_softirq()) can be made skip all actual work it
> > means to do in such a case. Unfortunately the two ->masked fields
> > acted upon are different between __pirq_guest_unbind() and
> > hvm_dirq_assist().
> 
> Ping? Unless I hear back soon, I'm afraid I'm going to drop this
> patch from my "pending" mail folder, as not being agreed whether
> to stick to the current version or whether to go this alternative
> route. A more "natural" approach to fixing the issue would be
> quite nice, after all.

I'll try to pick this up tomorrow as I helped diagnose the problem being fixed.

  Paul

> 
> Jan
> 

* Re: [Xen-devel] [PATCH v3] x86: irq: Do not BUG_ON multiple unbind calls for shared pirqs
  2020-03-05 17:37       ` Durrant, Paul
@ 2020-03-06 15:07         ` Durrant, Paul
  0 siblings, 0 replies; 7+ messages in thread
From: Durrant, Paul @ 2020-03-06 15:07 UTC (permalink / raw)
  To: Durrant, Paul, Jan Beulich, Gautam, Varad
  Cc: xen-devel, Andrew Cooper, Julien Grall, Roger Pau Monné

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Durrant, Paul
> Sent: 05 March 2020 17:37
> To: Jan Beulich <jbeulich@suse.com>; Gautam, Varad <vrd@amazon.de>
> Cc: xen-devel@lists.xenproject.org; Roger Pau Monné <roger.pau@citrix.com>; Julien Grall
> <julien@xen.org>; Andrew Cooper <andrew.cooper3@citrix.com>
> Subject: Re: [Xen-devel] [PATCH v3] x86: irq: Do not BUG_ON multiple unbind calls for shared pirqs
> 
> > -----Original Message-----
> > From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Jan Beulich
> > Sent: 05 March 2020 09:37
> > To: Gautam, Varad <vrd@amazon.de>
> > Cc: xen-devel@lists.xenproject.org; Andrew Cooper <andrew.cooper3@citrix.com>; Julien Grall
> > <julien@xen.org>; Roger Pau Monné <roger.pau@citrix.com>
> > Subject: Re: [Xen-devel] [PATCH v3] x86: irq: Do not BUG_ON multiple unbind calls for shared pirqs
> >
> > On 29.01.2020 12:47, Jan Beulich wrote:
> > > On 29.01.2020 11:30, Roger Pau Monné wrote:
> > >> Hello,
> > >>
> > >> Thanks for the patch! Next time could you please try to reply to the
> > >> previous questions before sending a new version:
> > >>
> > >> https://lists.xenproject.org/archives/html/xen-devel/2019-07/msg00257.html
> > >>
> > >> On Wed, Jan 29, 2020 at 10:28:07AM +0100, Varad Gautam wrote:
> > >>> XEN_DOMCTL_destroydomain creates a continuation if domain_kill -ERESTARTS.
> > >>> In that scenario, it is possible to receive multiple _pirq_guest_unbind
> > >>> calls for the same pirq from domain_kill, if the pirq has not yet been
> > >>> removed from the domain's pirq_tree, as:
> > >>>   domain_kill()
> > >>>     -> domain_relinquish_resources()
> > >>>       -> pci_release_devices()
> > >>>         -> pci_clean_dpci_irq()
> > >>>           -> pirq_guest_unbind()
> > >>>             -> __pirq_guest_unbind()
> > >>>
> > >>> For a shared pirq (nr_guests > 1), the first call would zap the current
> > >>> domain from the pirq's guests[] list, but the action handler is never freed
> > >>> as there are other guests using this pirq. As a result, on the second call,
> > >>> __pirq_guest_unbind searches for the current domain which has been removed
> > >>> from the guests[] list, and hits a BUG_ON.
> > >>>
> > >>> Make __pirq_guest_unbind safe to be called multiple times by letting xen
> > >>> continue if a shared pirq has already been unbound from this guest. The
> > >>> PIRQ will be cleaned up from the domain's pirq_tree during the destruction
> > >>> in complete_domain_destroy anyways.
> > >>
> > >> So AFAICT this is because pt_pirq_softirq_active() returns true in
> > >> pci_clean_dpci_irq() and hence the iteration is stopped and
> > >> hvm_domain_irq(d)->dpci is not set to NULL.
> > >>
> > >> Would it be possible to clean the already processed IRQs from the
> > >> domain pirq_tree?
> > >
> > > This might work, perhaps by way of invoking unmap_domain_pirq()
> > > right after pirq_guest_unbind(), as long as hvm_dirq_assist() (as
> > > called from dpci_softirq()) can be made skip all actual work it
> > > means to do in such a case. Unfortunately the two ->masked fields
> > > acted upon are different between __pirq_guest_unbind() and
> > > hvm_dirq_assist().
> >
> > Ping? Unless I hear back soon, I'm afraid I'm going to drop this
> > patch from my "pending" mail folder, as not being agreed whether
> > to stick to the current version or whether to go this alternative
> > route. A more "natural" approach to fixing the issue would be
> > quite nice, after all.
> 
> I'll try to pick this up tomorrow as I helped diagnose the problem being fixed.
> 

I'm looking at this now and I am finding the code very confusing, but I
think we could achieve the cleanup by passing back the irq index out of
__pirq_guest_unbind() such that pirq_guest_unbind() can call
clean_domain_irq_pirq(). I still haven't got much of a clue as to how
all the data structures hang together though.
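
Very roughly what I have in mind (hand-waving, not even compile tested;
the out-parameter is just one way the index could be passed back, and
the variables are those already in pirq_guest_unbind()):

    /* Illustrative only: */
    oldaction = __pirq_guest_unbind(d, pirq, &irq);  /* new out-parameter */
    if ( irq > 0 )
        clean_domain_irq_pirq(d, irq, pirq);  /* entry leaves d->pirq_tree */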

  Paul


