From: Andre Przywara <andre.przywara@arm.com>
To: Jan Glauber <jan.glauber@caviumnetworks.com>,
	Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>,
	kvmarm@lists.cs.columbia.edu,
	linux-arm-kernel@lists.infradead.org
Subject: Re: Potential deadlock in vgic
Date: Fri, 4 May 2018 15:51:20 +0100	[thread overview]
Message-ID: <1f343d54-03a2-10b4-1413-c0fa4a84bb7e@arm.com> (raw)
In-Reply-To: <20180504130854.GA14663@hc>

Hi,

On 04/05/18 14:08, Jan Glauber wrote:
> On Fri, May 04, 2018 at 02:47:42PM +0200, Christoffer Dall wrote:
>> Hi Jan,
>>
>> On Fri, May 04, 2018 at 01:03:44PM +0200, Jan Glauber wrote:
>>> Hi all,
>>>
>>> With lockdep enabled, I see the following reported in the host when I start a KVM guest:
>>>
>>> [12399.954245]        CPU0                    CPU1
>>> [12399.958762]        ----                    ----
>>> [12399.963279]   lock(&(&dist->lpi_list_lock)->rlock);
>>> [12399.968146]                                local_irq_disable();
>>> [12399.974052]                                lock(&(&vgic_cpu->ap_list_lock)->rlock);
>>> [12399.981696]                                lock(&(&dist->lpi_list_lock)->rlock);
>>> [12399.989081]   <Interrupt>
>>> [12399.991688]     lock(&(&vgic_cpu->ap_list_lock)->rlock);
>>> [12399.996989]
>>>                 *** DEADLOCK ***
>>>
>>> [12400.002897] 2 locks held by qemu-system-aar/5597:
>>> [12400.007587]  #0: 0000000042beb9dc (&vcpu->mutex){+.+.}, at: kvm_vcpu_ioctl+0x7c/0xa68
>>> [12400.015411]  #1: 00000000c45d644a (&(&vgic_cpu->ap_list_lock)->rlock){-.-.}, at: kvm_vgic_sync_hwstate+0x8c/0x328
>>>
>>>
>>> There is nothing unusual in my config or qemu parameters, I can upload these
>>> if needed. I see this on ThunderX and ThunderX2 and also with older kernels
>>> (4.13+ distribution kernel).
>>>
>>> I tried making the lpi_list_lock irq safe but that just leads to different
>>> warnings. The locking here seems to be quite sophisticated and I'm not familiar
>>> with it.
>>
>> That's unfortunate.  The problem here is that we end up violating our
>> locking order, which stipulates that ap_list_lock must be taken before
>> the lpi_list_lock.
>>
>> Given that we can take the ap_list_lock from interrupt context (timers
>> firing), the only solution I can easily think of is to change
>> lpi_list_lock takers to disable interrupts as well.
>>
>> Which warnings did you encounter with that approach?
> 
> Hi Christoffer,
> 
> Making lpi_list_lock irq safe, I get:
> 
> [  394.239174] ========================================================
> [  394.245515] WARNING: possible irq lock inversion dependency detected
> [  394.251857] 4.17.0-rc3-jang+ #72 Not tainted
> [  394.256114] --------------------------------------------------------
> [  394.262454] qemu-system-aar/5596 just changed the state of lock:
> [  394.268448] 00000000da3f09ef (&(&irq->irq_lock)->rlock#3){+...}, at: update_affinity+0x3c/0xa8
> [  394.277066] but this lock was taken by another, HARDIRQ-safe lock in the past:
> [  394.284274]  (&(&vgic_cpu->ap_list_lock)->rlock){-.-.}
> [  394.284278] 
>                
>                and interrupts could create inverse lock ordering between them.
> 
> [  394.300777] 
>                other info that might help us debug this:
> [  394.307292]  Possible interrupt unsafe locking scenario:
> 
> [  394.314066]        CPU0                    CPU1
> [  394.318584]        ----                    ----
> [  394.323101]   lock(&(&irq->irq_lock)->rlock#3);
> [  394.327622]                                local_irq_disable();
> [  394.333528]                                lock(&(&vgic_cpu->ap_list_lock)->rlock);
> [  394.341172]                                lock(&(&irq->irq_lock)->rlock#3);
> [  394.348210]   <Interrupt>
> [  394.350817]     lock(&(&vgic_cpu->ap_list_lock)->rlock);
> [  394.356118] 

That's weird, as that shouldn't happen anymore. IIRC we switched *both*
ap_list_lock and irq_lock over to be IRQ safe, so the first lock taken on
CPU0 would disable IRQs, making the interrupt afterwards impossible.
Did we perhaps forget to convert some irq_lock takers, and lockdep is
picking those up?

If that is the case, we should be fine making the missing ones irqsave
as well and then adding _irqsave to the lpi_list_lock, which has only a
few users and guards short critical sections contained within a single
function.
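
As an illustration only (this is not an actual patch, and the surrounding
code is modelled loosely on the vgic LPI lookup path), the kind of
conversion meant here would look roughly like this for a hypothetical
lpi_list_lock taker:

```c
/* Sketch only -- names are modelled on the vgic code, not taken from a patch.
 *
 * Before: the lock is taken with interrupts enabled, so a hardirq path
 * that takes ap_list_lock -> lpi_list_lock can deadlock against us:
 *
 *	spin_lock(&dist->lpi_list_lock);
 *	...walk dist->lpi_list_head...
 *	spin_unlock(&dist->lpi_list_lock);
 *
 * After: disable interrupts around the short critical section and
 * restore the previous interrupt state on unlock.
 */
unsigned long flags;
struct vgic_irq *irq;

spin_lock_irqsave(&dist->lpi_list_lock, flags);
list_for_each_entry(irq, &dist->lpi_list_head, lpi_list) {
	if (irq->intid == intid) {
		vgic_get_irq_kref(irq);	/* take a reference while still holding the lock */
		break;
	}
}
spin_unlock_irqrestore(&dist->lpi_list_lock, flags);
```

Since the critical sections are short and self-contained, the extra
interrupt latency from the _irqsave variants should be negligible.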

Cheers,
Andre.

>                 *** DEADLOCK ***
> 
> [  394.362025] 4 locks held by qemu-system-aar/5596:
> [  394.366716]  #0: 00000000719c7423 (&vcpu->mutex){+.+.}, at: kvm_vcpu_ioctl+0x7c/0xa68
> [  394.374545]  #1: 0000000060090841 (&kvm->srcu){....}, at: kvm_handle_guest_abort+0x11c/0xb70
> [  394.382984]  #2: 0000000064647766 (&its->cmd_lock){+.+.}, at: vgic_mmio_write_its_cwriter+0x44/0xa8
> [  394.392022]  #3: 0000000075f90a8a (&its->its_lock){+.+.}, at: vgic_its_process_commands.part.11+0xac/0x780
> _______________________________________________
> kvmarm mailing list
> kvmarm@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
> 

Thread overview: 22+ messages
2018-05-04 11:03 Potential deadlock in vgic Jan Glauber
2018-05-04 12:47 ` Christoffer Dall
2018-05-04 13:08   ` Jan Glauber
2018-05-04 13:41     ` Marc Zyngier
2018-05-04 14:51     ` Andre Przywara [this message]
2018-05-04 15:17     ` Andre Przywara
2018-05-04 16:26       ` Jan Glauber
2018-05-04 16:29         ` Andre Przywara
2018-05-04 16:31       ` Jan Glauber
2018-05-11 14:29         ` Andre Przywara
2018-05-15 11:54           ` Jan Glauber
