* [PATCH 0/2] KVM: arm64: vgic_irq: Fix memory leaks
@ 2020-04-14  3:03 ` Zenghui Yu
  0 siblings, 0 replies; 27+ messages in thread
From: Zenghui Yu @ 2020-04-14  3:03 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, james.morse, julien.thierry.kdev, suzuki.poulose,
	wanghaibin.wang, yezengruan, linux-arm-kernel, linux-kernel, Zenghui Yu

A memory leak of the vgic_irq structure was recently reported by kmemleak
on guest destroy (or shutdown). It turned out that there are still pending
interrupts (LPIs) sitting in the vcpu's ap_list during destroy, so KVM
can't free the vgic_irq structures due to an extra refcount. Patch #1 is
intended to fix this issue.

Patch #2 is a memory leak fix on the error path, noticed while debugging.

Zenghui Yu (2):
  KVM: arm64: vgic-v3: Retire all pending LPIs on vcpu destroy
  KVM: arm64: vgic-its: Fix memory leak on the error path of
    vgic_add_lpi()

 virt/kvm/arm/vgic/vgic-init.c | 6 ++++++
 virt/kvm/arm/vgic/vgic-its.c  | 8 ++++++--
 2 files changed, 12 insertions(+), 2 deletions(-)

-- 
2.19.1

^ permalink raw reply	[flat|nested] 27+ messages in thread
* [PATCH 1/2] KVM: arm64: vgic-v3: Retire all pending LPIs on vcpu destroy
  2020-04-14  3:03 ` Zenghui Yu
@ 2020-04-14  3:03 ` Zenghui Yu
  -1 siblings, 0 replies; 27+ messages in thread
From: Zenghui Yu @ 2020-04-14  3:03 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, james.morse, julien.thierry.kdev, suzuki.poulose,
	wanghaibin.wang, yezengruan, linux-arm-kernel, linux-kernel, Zenghui Yu

It's likely that the vcpu fails to handle all virtual interrupts if
userspace decides to destroy it, leaving the pending ones in the ap_list.
If an unhandled one is an LPI, its vgic_irq structure will eventually be
leaked because of an extra refcount increment in vgic_queue_irq_unlock().

This was detected by kmemleak on almost every guest destroy; the
backtrace is as follows:

unreferenced object 0xffff80725aed5500 (size 128):
  comm "CPU 5/KVM", pid 40711, jiffies 4298024754 (age 166366.512s)
  hex dump (first 32 bytes):
    00 00 00 00 00 00 00 00 08 01 a9 73 6d 80 ff ff  ...........sm...
    c8 61 ee a9 00 20 ff ff 28 1e 55 81 6c 80 ff ff  .a... ..(.U.l...
  backtrace:
    [<000000004bcaa122>] kmem_cache_alloc_trace+0x2dc/0x418
    [<0000000069c7dabb>] vgic_add_lpi+0x88/0x418
    [<00000000bfefd5c5>] vgic_its_cmd_handle_mapi+0x4dc/0x588
    [<00000000cf993975>] vgic_its_process_commands.part.5+0x484/0x1198
    [<000000004bd3f8e3>] vgic_its_process_commands+0x50/0x80
    [<00000000b9a65b2b>] vgic_mmio_write_its_cwriter+0xac/0x108
    [<0000000009641ebb>] dispatch_mmio_write+0xd0/0x188
    [<000000008f79d288>] __kvm_io_bus_write+0x134/0x240
    [<00000000882f39ac>] kvm_io_bus_write+0xe0/0x150
    [<0000000078197602>] io_mem_abort+0x484/0x7b8
    [<0000000060954e3c>] kvm_handle_guest_abort+0x4cc/0xa58
    [<00000000e0d0cd65>] handle_exit+0x24c/0x770
    [<00000000b44a7fad>] kvm_arch_vcpu_ioctl_run+0x460/0x1988
    [<0000000025fb897c>] kvm_vcpu_ioctl+0x4f8/0xee0
    [<000000003271e317>] do_vfs_ioctl+0x160/0xcd8
    [<00000000e7f39607>] ksys_ioctl+0x98/0xd8

Fix it by retiring all pending LPIs in the ap_list on the destroy path.

p.s. I can also reproduce it on a normal guest shutdown. This is because
userspace still sends LPIs to the vcpu (through the KVM_SIGNAL_MSI ioctl)
while the guest is shutting down and unable to handle them. A little
strange, though, and I haven't dug further...

Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
---
 virt/kvm/arm/vgic/vgic-init.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/virt/kvm/arm/vgic/vgic-init.c b/virt/kvm/arm/vgic/vgic-init.c
index a963b9d766b7..53ec9b9d9bc4 100644
--- a/virt/kvm/arm/vgic/vgic-init.c
+++ b/virt/kvm/arm/vgic/vgic-init.c
@@ -348,6 +348,12 @@ void kvm_vgic_vcpu_destroy(struct kvm_vcpu *vcpu)
 {
 	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
 
+	/*
+	 * Retire all pending LPIs on this vcpu anyway as we're
+	 * going to destroy it.
+	 */
+	vgic_flush_pending_lpis(vcpu);
+
 	INIT_LIST_HEAD(&vgic_cpu->ap_list_head);
 }
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 27+ messages in thread
* Re: [PATCH 1/2] KVM: arm64: vgic-v3: Retire all pending LPIs on vcpu destroy
  2020-04-14  3:03 ` Zenghui Yu
@ 2020-04-14 10:54 ` Marc Zyngier
  -1 siblings, 0 replies; 27+ messages in thread
From: Marc Zyngier @ 2020-04-14 10:54 UTC (permalink / raw)
  To: Zenghui Yu
  Cc: kvmarm, james.morse, julien.thierry.kdev, suzuki.poulose,
	wanghaibin.wang, yezengruan, linux-arm-kernel, linux-kernel

On Tue, 14 Apr 2020 11:03:47 +0800
Zenghui Yu <yuzenghui@huawei.com> wrote:

Hi Zenghui,

> It's likely that the vcpu fails to handle all virtual interrupts if
> userspace decides to destroy it, leaving the pending ones in the
> ap_list. If an unhandled one is an LPI, its vgic_irq structure will
> eventually be leaked because of an extra refcount increment in
> vgic_queue_irq_unlock().
>
> This was detected by kmemleak on almost every guest destroy; the
> backtrace is as follows:
>
> unreferenced object 0xffff80725aed5500 (size 128):
>   comm "CPU 5/KVM", pid 40711, jiffies 4298024754 (age 166366.512s)
>   hex dump (first 32 bytes):
>     00 00 00 00 00 00 00 00 08 01 a9 73 6d 80 ff ff  ...........sm...
>     c8 61 ee a9 00 20 ff ff 28 1e 55 81 6c 80 ff ff  .a... ..(.U.l...
>   backtrace:
>     [<000000004bcaa122>] kmem_cache_alloc_trace+0x2dc/0x418
>     [<0000000069c7dabb>] vgic_add_lpi+0x88/0x418
>     [<00000000bfefd5c5>] vgic_its_cmd_handle_mapi+0x4dc/0x588
>     [<00000000cf993975>] vgic_its_process_commands.part.5+0x484/0x1198
>     [<000000004bd3f8e3>] vgic_its_process_commands+0x50/0x80
>     [<00000000b9a65b2b>] vgic_mmio_write_its_cwriter+0xac/0x108
>     [<0000000009641ebb>] dispatch_mmio_write+0xd0/0x188
>     [<000000008f79d288>] __kvm_io_bus_write+0x134/0x240
>     [<00000000882f39ac>] kvm_io_bus_write+0xe0/0x150
>     [<0000000078197602>] io_mem_abort+0x484/0x7b8
>     [<0000000060954e3c>] kvm_handle_guest_abort+0x4cc/0xa58
>     [<00000000e0d0cd65>] handle_exit+0x24c/0x770
>     [<00000000b44a7fad>] kvm_arch_vcpu_ioctl_run+0x460/0x1988
>     [<0000000025fb897c>] kvm_vcpu_ioctl+0x4f8/0xee0
>     [<000000003271e317>] do_vfs_ioctl+0x160/0xcd8
>     [<00000000e7f39607>] ksys_ioctl+0x98/0xd8
>
> Fix it by retiring all pending LPIs in the ap_list on the destroy path.
>
> p.s. I can also reproduce it on a normal guest shutdown. This is because
> userspace still sends LPIs to the vcpu (through the KVM_SIGNAL_MSI ioctl)
> while the guest is shutting down and unable to handle them. A little
> strange, though, and I haven't dug further...

What userspace are you using? You'd hope that the VMM would stop
processing I/Os when destroying the guest. But we still need to handle
it anyway, and I think this fix makes sense.

>
> Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
> ---
>  virt/kvm/arm/vgic/vgic-init.c | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/virt/kvm/arm/vgic/vgic-init.c b/virt/kvm/arm/vgic/vgic-init.c
> index a963b9d766b7..53ec9b9d9bc4 100644
> --- a/virt/kvm/arm/vgic/vgic-init.c
> +++ b/virt/kvm/arm/vgic/vgic-init.c
> @@ -348,6 +348,12 @@ void kvm_vgic_vcpu_destroy(struct kvm_vcpu *vcpu)
>  {
>  	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
>  
> +	/*
> +	 * Retire all pending LPIs on this vcpu anyway as we're
> +	 * going to destroy it.
> +	 */
> +	vgic_flush_pending_lpis(vcpu);
> +
>  	INIT_LIST_HEAD(&vgic_cpu->ap_list_head);
>  }
>

I guess that at this stage, the INIT_LIST_HEAD() is superfluous, right?

Otherwise, looks good. If you agree with the above, I can fix that
locally, no need to resend this patch.

Thanks,

M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 27+ messages in thread
* Re: [PATCH 1/2] KVM: arm64: vgic-v3: Retire all pending LPIs on vcpu destroy
  2020-04-14 10:54 ` Marc Zyngier
@ 2020-04-14 11:17 ` Zenghui Yu
  -1 siblings, 0 replies; 27+ messages in thread
From: Zenghui Yu @ 2020-04-14 11:17 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, james.morse, julien.thierry.kdev, suzuki.poulose,
	wanghaibin.wang, yezengruan, linux-arm-kernel, linux-kernel

Hi Marc,

On 2020/4/14 18:54, Marc Zyngier wrote:
> On Tue, 14 Apr 2020 11:03:47 +0800
> Zenghui Yu <yuzenghui@huawei.com> wrote:
>
> Hi Zenghui,
>
>> It's likely that the vcpu fails to handle all virtual interrupts if
>> userspace decides to destroy it, leaving the pending ones in the
>> ap_list. If an unhandled one is an LPI, its vgic_irq structure will
>> eventually be leaked because of an extra refcount increment in
>> vgic_queue_irq_unlock().
>>
>> This was detected by kmemleak on almost every guest destroy; the
>> backtrace is as follows:
>>
>> unreferenced object 0xffff80725aed5500 (size 128):
>>   comm "CPU 5/KVM", pid 40711, jiffies 4298024754 (age 166366.512s)
>>   hex dump (first 32 bytes):
>>     00 00 00 00 00 00 00 00 08 01 a9 73 6d 80 ff ff  ...........sm...
>>     c8 61 ee a9 00 20 ff ff 28 1e 55 81 6c 80 ff ff  .a... ..(.U.l...
>>   backtrace:
>>     [<000000004bcaa122>] kmem_cache_alloc_trace+0x2dc/0x418
>>     [<0000000069c7dabb>] vgic_add_lpi+0x88/0x418
>>     [<00000000bfefd5c5>] vgic_its_cmd_handle_mapi+0x4dc/0x588
>>     [<00000000cf993975>] vgic_its_process_commands.part.5+0x484/0x1198
>>     [<000000004bd3f8e3>] vgic_its_process_commands+0x50/0x80
>>     [<00000000b9a65b2b>] vgic_mmio_write_its_cwriter+0xac/0x108
>>     [<0000000009641ebb>] dispatch_mmio_write+0xd0/0x188
>>     [<000000008f79d288>] __kvm_io_bus_write+0x134/0x240
>>     [<00000000882f39ac>] kvm_io_bus_write+0xe0/0x150
>>     [<0000000078197602>] io_mem_abort+0x484/0x7b8
>>     [<0000000060954e3c>] kvm_handle_guest_abort+0x4cc/0xa58
>>     [<00000000e0d0cd65>] handle_exit+0x24c/0x770
>>     [<00000000b44a7fad>] kvm_arch_vcpu_ioctl_run+0x460/0x1988
>>     [<0000000025fb897c>] kvm_vcpu_ioctl+0x4f8/0xee0
>>     [<000000003271e317>] do_vfs_ioctl+0x160/0xcd8
>>     [<00000000e7f39607>] ksys_ioctl+0x98/0xd8
>>
>> Fix it by retiring all pending LPIs in the ap_list on the destroy path.
>>
>> p.s. I can also reproduce it on a normal guest shutdown. This is because
>> userspace still sends LPIs to the vcpu (through the KVM_SIGNAL_MSI ioctl)
>> while the guest is shutting down and unable to handle them. A little
>> strange, though, and I haven't dug further...
>
> What userspace are you using? You'd hope that the VMM would stop
> processing I/Os when destroying the guest. But we still need to handle
> it anyway, and I think this fix makes sense.

I'm using Qemu (master) for debugging. Looks like an interrupt
corresponding to a virtio device configuration change, triggered after
all other devices had freed their irqs. Not sure if it's expected.

>>
>> Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
>> ---
>>  virt/kvm/arm/vgic/vgic-init.c | 6 ++++++
>>  1 file changed, 6 insertions(+)
>>
>> diff --git a/virt/kvm/arm/vgic/vgic-init.c b/virt/kvm/arm/vgic/vgic-init.c
>> index a963b9d766b7..53ec9b9d9bc4 100644
>> --- a/virt/kvm/arm/vgic/vgic-init.c
>> +++ b/virt/kvm/arm/vgic/vgic-init.c
>> @@ -348,6 +348,12 @@ void kvm_vgic_vcpu_destroy(struct kvm_vcpu *vcpu)
>> {
>> 	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
>> 
>> +	/*
>> +	 * Retire all pending LPIs on this vcpu anyway as we're
>> +	 * going to destroy it.
>> +	 */
>> +	vgic_flush_pending_lpis(vcpu);
>> +
>> 	INIT_LIST_HEAD(&vgic_cpu->ap_list_head);
>> }
>>
>
> I guess that at this stage, the INIT_LIST_HEAD() is superfluous, right?

I was just thinking that the ap_list_head may not be empty (besides
LPIs, there may be other active or pending interrupts), so I left it
unchanged.

> Otherwise, looks good. If you agree with the above, I can fix that
> locally, no need to resend this patch.

Thanks,
Zenghui

^ permalink raw reply	[flat|nested] 27+ messages in thread
* Re: [PATCH 1/2] KVM: arm64: vgic-v3: Retire all pending LPIs on vcpu destroy 2020-04-14 11:17 ` Zenghui Yu (?) @ 2020-04-14 13:15 ` Marc Zyngier -1 siblings, 0 replies; 27+ messages in thread From: Marc Zyngier @ 2020-04-14 13:15 UTC (permalink / raw) To: Zenghui Yu Cc: kvmarm, james.morse, julien.thierry.kdev, suzuki.poulose, wanghaibin.wang, yezengruan, linux-arm-kernel, linux-kernel On Tue, 14 Apr 2020 19:17:49 +0800 Zenghui Yu <yuzenghui@huawei.com> wrote: > Hi Marc, > > On 2020/4/14 18:54, Marc Zyngier wrote: > > On Tue, 14 Apr 2020 11:03:47 +0800 > > Zenghui Yu <yuzenghui@huawei.com> wrote: > > > > Hi Zenghui, > > > >> It's likely that the vcpu fails to handle all virtual interrupts if > >> userspace decides to destroy it, leaving the pending ones stay in the > >> ap_list. If the un-handled one is a LPI, its vgic_irq structure will > >> be eventually leaked because of an extra refcount increment in > >> vgic_queue_irq_unlock(). > >> > >> This was detected by kmemleak on almost every guest destroy, the > >> backtrace is as follows: > >> > >> unreferenced object 0xffff80725aed5500 (size 128): > >> comm "CPU 5/KVM", pid 40711, jiffies 4298024754 (age 166366.512s) > >> hex dump (first 32 bytes): > >> 00 00 00 00 00 00 00 00 08 01 a9 73 6d 80 ff ff ...........sm... > >> c8 61 ee a9 00 20 ff ff 28 1e 55 81 6c 80 ff ff .a... ..(.U.l... 
> >> backtrace: > >> [<000000004bcaa122>] kmem_cache_alloc_trace+0x2dc/0x418 > >> [<0000000069c7dabb>] vgic_add_lpi+0x88/0x418 > >> [<00000000bfefd5c5>] vgic_its_cmd_handle_mapi+0x4dc/0x588 > >> [<00000000cf993975>] vgic_its_process_commands.part.5+0x484/0x1198 > >> [<000000004bd3f8e3>] vgic_its_process_commands+0x50/0x80 > >> [<00000000b9a65b2b>] vgic_mmio_write_its_cwriter+0xac/0x108 > >> [<0000000009641ebb>] dispatch_mmio_write+0xd0/0x188 > >> [<000000008f79d288>] __kvm_io_bus_write+0x134/0x240 > >> [<00000000882f39ac>] kvm_io_bus_write+0xe0/0x150 > >> [<0000000078197602>] io_mem_abort+0x484/0x7b8 > >> [<0000000060954e3c>] kvm_handle_guest_abort+0x4cc/0xa58 > >> [<00000000e0d0cd65>] handle_exit+0x24c/0x770 > >> [<00000000b44a7fad>] kvm_arch_vcpu_ioctl_run+0x460/0x1988 > >> [<0000000025fb897c>] kvm_vcpu_ioctl+0x4f8/0xee0 > >> [<000000003271e317>] do_vfs_ioctl+0x160/0xcd8 > >> [<00000000e7f39607>] ksys_ioctl+0x98/0xd8 > >> > >> Fix it by retiring all pending LPIs in the ap_list on the destroy path. > >> > >> p.s. I can also reproduce it on a normal guest shutdown. It is because > >> userspace still send LPIs to vcpu (through KVM_SIGNAL_MSI ioctl) while > >> the guest is being shutdown and unable to handle it. A little strange > >> though and haven't dig further... > > > > What userspace are you using? You'd hope that the VMM would stop > > processing I/Os when destroying the guest. But we still need to handle > > it anyway, and I thing this fix makes sense. > > I'm using Qemu (master) for debugging. Looks like an interrupt > corresponding to a virtio device configuration change, triggered after > all other devices had freed their irqs. Not sure if it's expected. 
> > >> > >> Signed-off-by: Zenghui Yu <yuzenghui@huawei.com> > >> --- > >> virt/kvm/arm/vgic/vgic-init.c | 6 ++++++ > >> 1 file changed, 6 insertions(+) > >> > >> diff --git a/virt/kvm/arm/vgic/vgic-init.c b/virt/kvm/arm/vgic/vgic-init.c > >> index a963b9d766b7..53ec9b9d9bc4 100644 > >> --- a/virt/kvm/arm/vgic/vgic-init.c > >> +++ b/virt/kvm/arm/vgic/vgic-init.c > >> @@ -348,6 +348,12 @@ void kvm_vgic_vcpu_destroy(struct kvm_vcpu *vcpu) > >> { > >> struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu; > >> >> + /* > >> + * Retire all pending LPIs on this vcpu anyway as we're > >> + * going to destroy it. > >> + */ > >> + vgic_flush_pending_lpis(vcpu); > >> + > >> INIT_LIST_HEAD(&vgic_cpu->ap_list_head); > >> } > >> > > I guess that at this stage, the INIT_LIST_HEAD() is superfluous, right? > > I was just thinking that the ap_list_head may not be empty (besides LPI, > with other active or pending interrupts), so leave it unchanged. It isn't clear what purpose this serves (the vcpus are about to be freed, and so are the ap_lists), but I guess it doesn't hurt either. I'll queue both patches. Thanks, M. -- Jazz is not dead. It just smells funny... ^ permalink raw reply [flat|nested] 27+ messages in thread
* [PATCH 2/2] KVM: arm64: vgic-its: Fix memory leak on the error path of vgic_add_lpi() 2020-04-14 3:03 ` Zenghui Yu (?) @ 2020-04-14 3:03 ` Zenghui Yu -1 siblings, 0 replies; 27+ messages in thread From: Zenghui Yu @ 2020-04-14 3:03 UTC (permalink / raw) To: kvmarm Cc: maz, james.morse, julien.thierry.kdev, suzuki.poulose, wanghaibin.wang, yezengruan, linux-arm-kernel, linux-kernel, Zenghui Yu If we're going to fail out of vgic_add_lpi(), let's make sure the allocated vgic_irq memory is also freed. Though it seems that both cases are unlikely to fail. Cc: Zengruan Ye <yezengruan@huawei.com> Signed-off-by: Zenghui Yu <yuzenghui@huawei.com> --- virt/kvm/arm/vgic/vgic-its.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/virt/kvm/arm/vgic/vgic-its.c b/virt/kvm/arm/vgic/vgic-its.c index d53d34a33e35..3c3b6a0f2dce 100644 --- a/virt/kvm/arm/vgic/vgic-its.c +++ b/virt/kvm/arm/vgic/vgic-its.c @@ -98,12 +98,16 @@ static struct vgic_irq *vgic_add_lpi(struct kvm *kvm, u32 intid, * the respective config data from memory here upon mapping the LPI. */ ret = update_lpi_config(kvm, irq, NULL, false); - if (ret) + if (ret) { + kfree(irq); return ERR_PTR(ret); + } ret = vgic_v3_lpi_sync_pending_status(kvm, irq); - if (ret) + if (ret) { + kfree(irq); return ERR_PTR(ret); + } return irq; } -- 2.19.1 ^ permalink raw reply related [flat|nested] 27+ messages in thread
* Re: [PATCH 2/2] KVM: arm64: vgic-its: Fix memory leak on the error path of vgic_add_lpi() 2020-04-14 3:03 ` Zenghui Yu (?) @ 2020-04-16 1:17 ` Zenghui Yu -1 siblings, 0 replies; 27+ messages in thread From: Zenghui Yu @ 2020-04-16 1:17 UTC (permalink / raw) To: kvmarm Cc: maz, james.morse, julien.thierry.kdev, suzuki.poulose, wanghaibin.wang, yezengruan, linux-arm-kernel, linux-kernel On 2020/4/14 11:03, Zenghui Yu wrote: > If we're going to fail out the vgic_add_lpi(), let's make sure the > allocated vgic_irq memory is also freed. Though it seems that both > cases are unlikely to fail. > > Cc: Zengruan Ye <yezengruan@huawei.com> > Signed-off-by: Zenghui Yu <yuzenghui@huawei.com> > --- > virt/kvm/arm/vgic/vgic-its.c | 8 ++++++-- > 1 file changed, 6 insertions(+), 2 deletions(-) > > diff --git a/virt/kvm/arm/vgic/vgic-its.c b/virt/kvm/arm/vgic/vgic-its.c > index d53d34a33e35..3c3b6a0f2dce 100644 > --- a/virt/kvm/arm/vgic/vgic-its.c > +++ b/virt/kvm/arm/vgic/vgic-its.c > @@ -98,12 +98,16 @@ static struct vgic_irq *vgic_add_lpi(struct kvm *kvm, u32 intid, > * the respective config data from memory here upon mapping the LPI. > */ > ret = update_lpi_config(kvm, irq, NULL, false); > - if (ret) > + if (ret) { > + kfree(irq); > return ERR_PTR(ret); > + } > > ret = vgic_v3_lpi_sync_pending_status(kvm, irq); > - if (ret) > + if (ret) { > + kfree(irq); > return ERR_PTR(ret); > + } Looking at it again, I realized that this error handling is still not complete. Maybe we should use a vgic_put_irq() instead so that we can also properly delete the vgic_irq from lpi_list. Marc, what do you think? Could you please help to fix it, or I can resend it. Thanks, Zenghui ^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 2/2] KVM: arm64: vgic-its: Fix memory leak on the error path of vgic_add_lpi() 2020-04-16 1:17 ` Zenghui Yu (?) @ 2020-04-16 17:23 ` Marc Zyngier -1 siblings, 0 replies; 27+ messages in thread From: Marc Zyngier @ 2020-04-16 17:23 UTC (permalink / raw) To: Zenghui Yu Cc: kvmarm, james.morse, julien.thierry.kdev, suzuki.poulose, wanghaibin.wang, yezengruan, linux-arm-kernel, linux-kernel On 2020-04-16 02:17, Zenghui Yu wrote: > On 2020/4/14 11:03, Zenghui Yu wrote: >> If we're going to fail out the vgic_add_lpi(), let's make sure the >> allocated vgic_irq memory is also freed. Though it seems that both >> cases are unlikely to fail. >> >> Cc: Zengruan Ye <yezengruan@huawei.com> >> Signed-off-by: Zenghui Yu <yuzenghui@huawei.com> >> --- >> virt/kvm/arm/vgic/vgic-its.c | 8 ++++++-- >> 1 file changed, 6 insertions(+), 2 deletions(-) >> >> diff --git a/virt/kvm/arm/vgic/vgic-its.c >> b/virt/kvm/arm/vgic/vgic-its.c >> index d53d34a33e35..3c3b6a0f2dce 100644 >> --- a/virt/kvm/arm/vgic/vgic-its.c >> +++ b/virt/kvm/arm/vgic/vgic-its.c >> @@ -98,12 +98,16 @@ static struct vgic_irq *vgic_add_lpi(struct kvm >> *kvm, u32 intid, >> * the respective config data from memory here upon mapping the >> LPI. >> */ >> ret = update_lpi_config(kvm, irq, NULL, false); >> - if (ret) >> + if (ret) { >> + kfree(irq); >> return ERR_PTR(ret); >> + } >> ret = vgic_v3_lpi_sync_pending_status(kvm, irq); >> - if (ret) >> + if (ret) { >> + kfree(irq); >> return ERR_PTR(ret); >> + } > > Looking at it again, I realized that this error handling is still not > complete. Maybe we should use a vgic_put_irq() instead so that we can > also properly delete the vgic_irq from lpi_list. Yes, this is a more correct fix indeed. There is still a bit of a bizarre behaviour if you have two vgic_add_lpi() racing to create the same interrupt, which is pretty dodgy anyway (it means we have two MAPI at the same time...). You end up re-reading the state from memory... Oh well. > Marc, what do you think?
> Could you please help to fix it, or I can > resend it. I've fixed it as such (with a comment for good measure): diff --git a/virt/kvm/arm/vgic/vgic-its.c b/virt/kvm/arm/vgic/vgic-its.c index 3c3b6a0f2dce..c012a52b19f5 100644 --- a/virt/kvm/arm/vgic/vgic-its.c +++ b/virt/kvm/arm/vgic/vgic-its.c @@ -96,16 +96,19 @@ static struct vgic_irq *vgic_add_lpi(struct kvm *kvm, u32 intid, * We "cache" the configuration table entries in our struct vgic_irq's. * However we only have those structs for mapped IRQs, so we read in * the respective config data from memory here upon mapping the LPI. + * + * Should any of these fail, behave as if we couldn't create the LPI + * by dropping the refcount and returning the error. */ ret = update_lpi_config(kvm, irq, NULL, false); if (ret) { - kfree(irq); + vgic_put_irq(kvm, irq); return ERR_PTR(ret); } ret = vgic_v3_lpi_sync_pending_status(kvm, irq); if (ret) { - kfree(irq); + vgic_put_irq(kvm, irq); return ERR_PTR(ret); } Let me know if you agree with that. Thanks, M. -- Jazz is not dead. It just smells funny... ^ permalink raw reply related [flat|nested] 27+ messages in thread
* Re: [PATCH 2/2] KVM: arm64: vgic-its: Fix memory leak on the error path of vgic_add_lpi() @ 2020-04-16 17:23 ` Marc Zyngier 0 siblings, 0 replies; 27+ messages in thread From: Marc Zyngier @ 2020-04-16 17:23 UTC (permalink / raw) To: Zenghui Yu Cc: suzuki.poulose, linux-kernel, yezengruan, james.morse, linux-arm-kernel, wanghaibin.wang, kvmarm, julien.thierry.kdev On 2020-04-16 02:17, Zenghui Yu wrote: > On 2020/4/14 11:03, Zenghui Yu wrote: >> If we're going to fail out the vgic_add_lpi(), let's make sure the >> allocated vgic_irq memory is also freed. Though it seems that both >> cases are unlikely to fail. >> >> Cc: Zengruan Ye <yezengruan@huawei.com> >> Signed-off-by: Zenghui Yu <yuzenghui@huawei.com> >> --- >> virt/kvm/arm/vgic/vgic-its.c | 8 ++++++-- >> 1 file changed, 6 insertions(+), 2 deletions(-) >> >> diff --git a/virt/kvm/arm/vgic/vgic-its.c >> b/virt/kvm/arm/vgic/vgic-its.c >> index d53d34a33e35..3c3b6a0f2dce 100644 >> --- a/virt/kvm/arm/vgic/vgic-its.c >> +++ b/virt/kvm/arm/vgic/vgic-its.c >> @@ -98,12 +98,16 @@ static struct vgic_irq *vgic_add_lpi(struct kvm >> *kvm, u32 intid, >> * the respective config data from memory here upon mapping the >> LPI. >> */ >> ret = update_lpi_config(kvm, irq, NULL, false); >> - if (ret) >> + if (ret) { >> + kfree(irq); >> return ERR_PTR(ret); >> + } >> ret = vgic_v3_lpi_sync_pending_status(kvm, irq); >> - if (ret) >> + if (ret) { >> + kfree(irq); >> return ERR_PTR(ret); >> + } > > Looking at it again, I realized that this error handling is still not > complete. Maybe we should use a vgic_put_irq() instead so that we can > also properly delete the vgic_irq from lpi_list. Yes, this is a more correct fix indeed. There is still a bit of a bizarre behaviour if you have two vgic_add_lpi() racing to create the same interrupt, which is pretty dodgy anyway (it means we have two MAPI at the same time...). You end-up with re-reading the state from memory... Oh well. > Marc, what do you think? 
Could you please help to fix it, or I can > resend it. I've fixed it as such (with a comment for a good measure): diff --git a/virt/kvm/arm/vgic/vgic-its.c b/virt/kvm/arm/vgic/vgic-its.c index 3c3b6a0f2dce..c012a52b19f5 100644 --- a/virt/kvm/arm/vgic/vgic-its.c +++ b/virt/kvm/arm/vgic/vgic-its.c @@ -96,16 +96,19 @@ static struct vgic_irq *vgic_add_lpi(struct kvm *kvm, u32 intid, * We "cache" the configuration table entries in our struct vgic_irq's. * However we only have those structs for mapped IRQs, so we read in * the respective config data from memory here upon mapping the LPI. + * + * Should any of these fail, behave as if we couldn't create the LPI + * by dropping the refcount and returning the error. */ ret = update_lpi_config(kvm, irq, NULL, false); if (ret) { - kfree(irq); + vgic_put_irq(kvm, irq); return ERR_PTR(ret); } ret = vgic_v3_lpi_sync_pending_status(kvm, irq); if (ret) { - kfree(irq); + vgic_put_irq(kvm, irq); return ERR_PTR(ret); } Let me know if you agree with that. Thanks, M. -- Jazz is not dead. It just smells funny... _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel ^ permalink raw reply related [flat|nested] 27+ messages in thread
* Re: [PATCH 2/2] KVM: arm64: vgic-its: Fix memory leak on the error path of vgic_add_lpi()
@ 2020-04-16 17:23   ` Marc Zyngier
  0 siblings, 0 replies; 27+ messages in thread
From: Marc Zyngier @ 2020-04-16 17:23 UTC (permalink / raw)
  To: Zenghui Yu; +Cc: linux-kernel, linux-arm-kernel, kvmarm

On 2020-04-16 02:17, Zenghui Yu wrote:
> On 2020/4/14 11:03, Zenghui Yu wrote:
>> If we're going to fail out the vgic_add_lpi(), let's make sure the
>> allocated vgic_irq memory is also freed. Though it seems that both
>> cases are unlikely to fail.
>>
>> Cc: Zengruan Ye <yezengruan@huawei.com>
>> Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
>> ---
>>  virt/kvm/arm/vgic/vgic-its.c | 8 ++++++--
>>  1 file changed, 6 insertions(+), 2 deletions(-)
>>
>> diff --git a/virt/kvm/arm/vgic/vgic-its.c b/virt/kvm/arm/vgic/vgic-its.c
>> index d53d34a33e35..3c3b6a0f2dce 100644
>> --- a/virt/kvm/arm/vgic/vgic-its.c
>> +++ b/virt/kvm/arm/vgic/vgic-its.c
>> @@ -98,12 +98,16 @@ static struct vgic_irq *vgic_add_lpi(struct kvm *kvm, u32 intid,
>>  	 * the respective config data from memory here upon mapping the LPI.
>>  	 */
>>  	ret = update_lpi_config(kvm, irq, NULL, false);
>> -	if (ret)
>> +	if (ret) {
>> +		kfree(irq);
>>  		return ERR_PTR(ret);
>> +	}
>>
>>  	ret = vgic_v3_lpi_sync_pending_status(kvm, irq);
>> -	if (ret)
>> +	if (ret) {
>> +		kfree(irq);
>>  		return ERR_PTR(ret);
>> +	}
>
> Looking at it again, I realized that this error handling is still not
> complete. Maybe we should use a vgic_put_irq() instead so that we can
> also properly delete the vgic_irq from lpi_list.

Yes, this is a more correct fix indeed. There is still a bit of a bizarre
behaviour if you have two vgic_add_lpi() racing to create the same
interrupt, which is pretty dodgy anyway (it means we have two MAPI at the
same time...). You end-up with re-reading the state from memory... Oh well.

> Marc, what do you think? Could you please help to fix it, or I can
> resend it.

I've fixed it as such (with a comment for a good measure):

diff --git a/virt/kvm/arm/vgic/vgic-its.c b/virt/kvm/arm/vgic/vgic-its.c
index 3c3b6a0f2dce..c012a52b19f5 100644
--- a/virt/kvm/arm/vgic/vgic-its.c
+++ b/virt/kvm/arm/vgic/vgic-its.c
@@ -96,16 +96,19 @@ static struct vgic_irq *vgic_add_lpi(struct kvm *kvm, u32 intid,
 	 * We "cache" the configuration table entries in our struct vgic_irq's.
 	 * However we only have those structs for mapped IRQs, so we read in
 	 * the respective config data from memory here upon mapping the LPI.
+	 *
+	 * Should any of these fail, behave as if we couldn't create the LPI
+	 * by dropping the refcount and returning the error.
 	 */
 	ret = update_lpi_config(kvm, irq, NULL, false);
 	if (ret) {
-		kfree(irq);
+		vgic_put_irq(kvm, irq);
 		return ERR_PTR(ret);
 	}

 	ret = vgic_v3_lpi_sync_pending_status(kvm, irq);
 	if (ret) {
-		kfree(irq);
+		vgic_put_irq(kvm, irq);
 		return ERR_PTR(ret);
 	}

Let me know if you agree with that.

Thanks,

        M.
-- 
Jazz is not dead. It just smells funny...
* Re: [PATCH 2/2] KVM: arm64: vgic-its: Fix memory leak on the error path of vgic_add_lpi()
  2020-04-16 17:23 ` Marc Zyngier
@ 2020-04-17  6:40   ` Zenghui Yu
  -1 siblings, 0 replies; 27+ messages in thread
From: Zenghui Yu @ 2020-04-17 6:40 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, james.morse, julien.thierry.kdev, suzuki.poulose,
	wanghaibin.wang, yezengruan, linux-arm-kernel, linux-kernel

On 2020/4/17 1:23, Marc Zyngier wrote:
> On 2020-04-16 02:17, Zenghui Yu wrote:
>> On 2020/4/14 11:03, Zenghui Yu wrote:
>>> If we're going to fail out the vgic_add_lpi(), let's make sure the
>>> allocated vgic_irq memory is also freed. Though it seems that both
>>> cases are unlikely to fail.
>>>
>>> Cc: Zengruan Ye <yezengruan@huawei.com>
>>> Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
>>> ---
>>>  virt/kvm/arm/vgic/vgic-its.c | 8 ++++++--
>>>  1 file changed, 6 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/virt/kvm/arm/vgic/vgic-its.c b/virt/kvm/arm/vgic/vgic-its.c
>>> index d53d34a33e35..3c3b6a0f2dce 100644
>>> --- a/virt/kvm/arm/vgic/vgic-its.c
>>> +++ b/virt/kvm/arm/vgic/vgic-its.c
>>> @@ -98,12 +98,16 @@ static struct vgic_irq *vgic_add_lpi(struct kvm *kvm, u32 intid,
>>>  	 * the respective config data from memory here upon mapping the LPI.
>>>  	 */
>>>  	ret = update_lpi_config(kvm, irq, NULL, false);
>>> -	if (ret)
>>> +	if (ret) {
>>> +		kfree(irq);
>>>  		return ERR_PTR(ret);
>>> +	}
>>>
>>>  	ret = vgic_v3_lpi_sync_pending_status(kvm, irq);
>>> -	if (ret)
>>> +	if (ret) {
>>> +		kfree(irq);
>>>  		return ERR_PTR(ret);
>>> +	}
>>
>> Looking at it again, I realized that this error handling is still not
>> complete. Maybe we should use a vgic_put_irq() instead so that we can
>> also properly delete the vgic_irq from lpi_list.
>
> Yes, this is a more correct fix indeed. There is still a bit of a bizarre
> behaviour if you have two vgic_add_lpi() racing to create the same
> interrupt, which is pretty dodgy anyway (it means we have two MAPI at the
> same time...). You end-up with re-reading the state from memory... Oh well.
>
>> Marc, what do you think? Could you please help to fix it, or I can
>> resend it.
>
> I've fixed it as such (with a comment for a good measure):
>
> diff --git a/virt/kvm/arm/vgic/vgic-its.c b/virt/kvm/arm/vgic/vgic-its.c
> index 3c3b6a0f2dce..c012a52b19f5 100644
> --- a/virt/kvm/arm/vgic/vgic-its.c
> +++ b/virt/kvm/arm/vgic/vgic-its.c
> @@ -96,16 +96,19 @@ static struct vgic_irq *vgic_add_lpi(struct kvm *kvm, u32 intid,
>  	 * We "cache" the configuration table entries in our struct vgic_irq's.
>  	 * However we only have those structs for mapped IRQs, so we read in
>  	 * the respective config data from memory here upon mapping the LPI.
> +	 *
> +	 * Should any of these fail, behave as if we couldn't create the LPI
> +	 * by dropping the refcount and returning the error.
>  	 */
>  	ret = update_lpi_config(kvm, irq, NULL, false);
>  	if (ret) {
> -		kfree(irq);
> +		vgic_put_irq(kvm, irq);
>  		return ERR_PTR(ret);
>  	}
>
>  	ret = vgic_v3_lpi_sync_pending_status(kvm, irq);
>  	if (ret) {
> -		kfree(irq);
> +		vgic_put_irq(kvm, irq);
>  		return ERR_PTR(ret);
>  	}
>
> Let me know if you agree with that.

Agreed. Thanks for the fix!

Zenghui
end of thread, other threads: [~2020-04-17 6:41 UTC | newest]

Thread overview: 27+ messages -- links below jump to the message on this page --
2020-04-14  3:03 [PATCH 0/2] KVM: arm64: vgic_irq: Fix memory leaks Zenghui Yu
2020-04-14  3:03 ` [PATCH 1/2] KVM: arm64: vgic-v3: Retire all pending LPIs on vcpu destroy Zenghui Yu
2020-04-14 10:54   ` Marc Zyngier
2020-04-14 11:17     ` Zenghui Yu
2020-04-14 13:15       ` Marc Zyngier
2020-04-14  3:03 ` [PATCH 2/2] KVM: arm64: vgic-its: Fix memory leak on the error path of vgic_add_lpi() Zenghui Yu
2020-04-16  1:17   ` Zenghui Yu
2020-04-16 17:23     ` Marc Zyngier
2020-04-17  6:40       ` Zenghui Yu