linuxppc-dev.lists.ozlabs.org archive mirror
* (no subject)
@ 2022-11-21 11:54 Pingfan Liu
  2022-11-21 11:57 ` // a kdump hang caused by PPC pci patch series Pingfan Liu
  0 siblings, 1 reply; 6+ messages in thread
From: Pingfan Liu @ 2022-11-21 11:54 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Coiby Xu, Cédric Le Goater

[-- Attachment #1: Type: text/plain, Size: 1322 bytes --]

Hello Powerpc folks,

I encountered a kdump bug, which I bisected down to commit 174db9e7f775
("powerpc/pseries/pci: Add support of MSI domains to PHB hotplug").

In that setup, with Fedora 36 as the host, the mentioned commit as the
guest kernel, and a virtio-block disk, the kdump kernel hangs:

[    0.000000] Kernel command line: elfcorehdr=0x22c00000
no_timer_check net.ifnames=0 console=tty0 console=hvc0,115200n8
irqpoll maxcpus=1 noirqdistrib reset_devices cgroup_disable=memory
     numa=off udev.children-max=2 ehea.use_mcs=0 panic=10
kvm_cma_resv_ratio=0 transparent_hugepage=never novmcoredd
hugetlb_cma=0
    ...
    [    7.763260] virtio_blk virtio2: 32/0/0 default/read/poll queues
    [    7.771391] virtio_blk virtio2: [vda] 20971520 512-byte logical
blocks (10.7 GB/10.0 GiB)
    [   68.398234] systemd-udevd[187]: virtio2: Worker [190]
processing SEQNUM=1193 is taking a long time
    [  188.398258] systemd-udevd[187]: virtio2: Worker [190]
processing SEQNUM=1193 killed


During my tests, I found that in very rare cases kdump can succeed
(I guess it may depend on the cpu id).  With either maxcpus=2 or a
scsi-disk, kdump also succeeds, and before the mentioned commit kdump
succeeds as well.

The attachment contains the libvirt xml to reproduce the bug.
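
For completeness, a minimal reproduction sketch (assuming kexec-tools
and the kdump service are already set up in the guest, and that the
attachment is saved as virtblk-hang.xml):

    virsh define virtblk-hang.xml
    virsh start rhel9
    virsh console rhel9
    # inside the guest, once kdump is active, trigger a panic so the
    # crash kernel boots:
    echo c > /proc/sysrq-trigger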

Do you have any ideas?

Thanks

[-- Attachment #2: virtblk-hang.xml --]
[-- Type: text/xml, Size: 3275 bytes --]

<domain type="kvm">
  <name>rhel9</name>
  <uuid>6266c1c1-1e74-4046-b959-33d94877b387</uuid>
  <metadata>
    <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
      <libosinfo:os id="http://redhat.com/rhel/8-unknown"/>
    </libosinfo:libosinfo>
  </metadata>
  <memory unit="KiB">16777216</memory>
  <currentMemory unit="KiB">16777216</currentMemory>
  <vcpu placement="static">16</vcpu>
  <os>
    <type arch="ppc64le" machine="pseries-rhel8.6.0">hvm</type>
    <boot dev="hd"/>
  </os>
  <cpu mode="custom" match="exact" check="none">
    <model fallback="forbid">POWER9</model>
  </cpu>
  <clock offset="utc"/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type="file" device="disk">
      <driver name="qemu" type="qcow2"/>
      <source file="/var/lib/libvirt/images/rhel-guest-image-9.1-20220701.0.ppc64le.qcow2"/>
      <target dev="vda" bus="virtio"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0"/>
    </disk>
    <controller type="usb" index="0" model="qemu-xhci" ports="15">
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0"/>
    </controller>
    <controller type="pci" index="0" model="pci-root">
      <model name="spapr-pci-host-bridge"/>
      <target index="0"/>
    </controller>
    <controller type="virtio-serial" index="0">
      <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0"/>
    </controller>
    <interface type="network">
      <mac address="52:54:00:74:c9:50"/>
      <source network="default"/>
      <model type="virtio"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
    </interface>
    <serial type="pty">
      <target type="spapr-vio-serial" port="0">
        <model name="spapr-vty"/>
      </target>
      <address type="spapr-vio" reg="0x30000000"/>
    </serial>
    <console type="pty">
      <target type="serial" port="0"/>
      <address type="spapr-vio" reg="0x30000000"/>
    </console>
    <channel type="unix">
      <target type="virtio" name="org.qemu.guest_agent.0"/>
      <address type="virtio-serial" controller="0" bus="0" port="1"/>
    </channel>
    <input type="tablet" bus="usb">
      <address type="usb" bus="0" port="1"/>
    </input>
    <input type="keyboard" bus="usb">
      <address type="usb" bus="0" port="2"/>
    </input>
    <tpm model="tpm-spapr">
      <backend type="emulator" version="2.0"/>
      <address type="spapr-vio" reg="0x00004000"/>
    </tpm>
    <graphics type="vnc" port="-1" autoport="yes">
      <listen type="address"/>
    </graphics>
    <audio id="1" type="none"/>
    <video>
      <model type="vga" vram="16384" heads="1" primary="yes"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
    </video>
    <memballoon model="virtio">
      <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
    </memballoon>
    <rng model="virtio">
      <backend model="random">/dev/urandom</backend>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x06" function="0x0"/>
    </rng>
    <panic model="pseries"/>
  </devices>
</domain>


* Re: // a kdump hang caused by PPC pci patch series
  2022-11-21 11:54 Pingfan Liu
@ 2022-11-21 11:57 ` Pingfan Liu
  2022-11-21 12:57   ` Cédric Le Goater
  0 siblings, 1 reply; 6+ messages in thread
From: Pingfan Liu @ 2022-11-21 11:57 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Coiby Xu, Cédric Le Goater

Sorry, I forgot a subject.

On Mon, Nov 21, 2022 at 7:54 PM Pingfan Liu <kernelfans@gmail.com> wrote:
>
> Hello Powerpc folks,
>
> I encountered a kdump bug, which I bisected down to commit 174db9e7f775
> ("powerpc/pseries/pci: Add support of MSI domains to PHB hotplug").
>
> In that setup, with Fedora 36 as the host, the mentioned commit as the
> guest kernel, and a virtio-block disk, the kdump kernel hangs:
>
> [    0.000000] Kernel command line: elfcorehdr=0x22c00000
> no_timer_check net.ifnames=0 console=tty0 console=hvc0,115200n8
> irqpoll maxcpus=1 noirqdistrib reset_devices cgroup_disable=memory
>      numa=off udev.children-max=2 ehea.use_mcs=0 panic=10
> kvm_cma_resv_ratio=0 transparent_hugepage=never novmcoredd
> hugetlb_cma=0
>     ...
>     [    7.763260] virtio_blk virtio2: 32/0/0 default/read/poll queues
>     [    7.771391] virtio_blk virtio2: [vda] 20971520 512-byte logical
> blocks (10.7 GB/10.0 GiB)
>     [   68.398234] systemd-udevd[187]: virtio2: Worker [190]
> processing SEQNUM=1193 is taking a long time
>     [  188.398258] systemd-udevd[187]: virtio2: Worker [190]
> processing SEQNUM=1193 killed
>
>
> During my tests, I found that in very rare cases kdump can succeed
> (I guess it may depend on the cpu id).  With either maxcpus=2 or a
> scsi-disk, kdump also succeeds, and before the mentioned commit kdump
> succeeds as well.
>
> The attachment contains the libvirt xml to reproduce the bug.
>
> Do you have any ideas?
>
> Thanks


* Re: // a kdump hang caused by PPC pci patch series
  2022-11-21 11:57 ` // a kdump hang caused by PPC pci patch series Pingfan Liu
@ 2022-11-21 12:57   ` Cédric Le Goater
  2022-11-22  3:29     ` Pingfan Liu
  2022-11-24  8:31     ` Pingfan Liu
  0 siblings, 2 replies; 6+ messages in thread
From: Cédric Le Goater @ 2022-11-21 12:57 UTC (permalink / raw)
  To: Pingfan Liu, linuxppc-dev; +Cc: Coiby Xu

On 11/21/22 12:57, Pingfan Liu wrote:
> Sorry, I forgot a subject.
> 
> On Mon, Nov 21, 2022 at 7:54 PM Pingfan Liu <kernelfans@gmail.com> wrote:
>>
>> Hello Powerpc folks,
>>
>> I encountered a kdump bug, which I bisected down to commit 174db9e7f775
>> ("powerpc/pseries/pci: Add support of MSI domains to PHB hotplug").
>> In that setup, with Fedora 36 as the host, the mentioned commit as the
>> guest kernel, and a virtio-block disk, the kdump kernel hangs:

The host kernel should be using the PowerNV platform and not pseries
or are you running a nested L2 guest on KVM/pseries L1 ?
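
For reference, a quick way to check on the host (a sketch, assuming the
usual powerpc /proc layout):

   grep platform /proc/cpuinfo   # "PowerNV" on bare metal, "pSeries" in a guest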

And as far as I remember, the patch above only impacts the IBM PowerVM
hypervisor, not KVM, and only PHB hotplug, unless kdump induces some
hot-plugging I am not aware of.

Also, if indeed, this is a L2 guest, the XIVE interrupt controller is
emulated in QEMU, "info pic" should return:

   ...
   irqchip: emulated

>>
>> [    0.000000] Kernel command line: elfcorehdr=0x22c00000
>> no_timer_check net.ifnames=0 console=tty0 console=hvc0,115200n8
>> irqpoll maxcpus=1 noirqdistrib reset_devices cgroup_disable=memory
>>       numa=off udev.children-max=2 ehea.use_mcs=0 panic=10
>> kvm_cma_resv_ratio=0 transparent_hugepage=never novmcoredd
>> hugetlb_cma=0
>>      ...
>>      [    7.763260] virtio_blk virtio2: 32/0/0 default/read/poll queues
>>      [    7.771391] virtio_blk virtio2: [vda] 20971520 512-byte logical
>> blocks (10.7 GB/10.0 GiB)
>>      [   68.398234] systemd-udevd[187]: virtio2: Worker [190]
>> processing SEQNUM=1193 is taking a long time
>>      [  188.398258] systemd-udevd[187]: virtio2: Worker [190]
>> processing SEQNUM=1193 killed
>>
>>
>> During my tests, I found that in very rare cases kdump can succeed
>> (I guess it may depend on the cpu id).  With either maxcpus=2 or a
>> scsi-disk, kdump also succeeds, and before the mentioned commit kdump
>> succeeds as well.
>>
>> The attachment contains the libvirt xml to reproduce the bug.
>>
>> Do you have any ideas?

Most certainly an interrupt not being delivered. You can check the status
on the host with:

   virsh qemu-monitor-command --hmp <domain>  "info pic"
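
Since the dump is long, it can also help to capture the state before and
after triggering the crash kernel and diff the two (a sketch; <domain>
is the libvirt domain name as above):

   virsh qemu-monitor-command --hmp <domain> "info pic" > pic-before.txt
   # ... trigger the dump in the guest ...
   virsh qemu-monitor-command --hmp <domain> "info pic" > pic-after.txt
   diff -u pic-before.txt pic-after.txt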



Thanks,

C.


* Re: // a kdump hang caused by PPC pci patch series
  2022-11-21 12:57   ` Cédric Le Goater
@ 2022-11-22  3:29     ` Pingfan Liu
  2022-11-24  8:31     ` Pingfan Liu
  1 sibling, 0 replies; 6+ messages in thread
From: Pingfan Liu @ 2022-11-22  3:29 UTC (permalink / raw)
  To: Cédric Le Goater; +Cc: Coiby Xu, linuxppc-dev

Hi Cédric,

I appreciate your insight. Please see my comments inline below.

On Mon, Nov 21, 2022 at 8:57 PM Cédric Le Goater <clg@kaod.org> wrote:
>
> On 11/21/22 12:57, Pingfan Liu wrote:
> > Sorry, I forgot a subject.
> >
> > On Mon, Nov 21, 2022 at 7:54 PM Pingfan Liu <kernelfans@gmail.com> wrote:
> >>
> >> Hello Powerpc folks,
> >>
> >> I encountered a kdump bug, which I bisected down to commit 174db9e7f775
> >> ("powerpc/pseries/pci: Add support of MSI domains to PHB hotplug").
> >> In that setup, with Fedora 36 as the host, the mentioned commit as the
> >> guest kernel, and a virtio-block disk, the kdump kernel hangs:
>
> The host kernel should be using the PowerNV platform and not pseries
> or are you running a nested L2 guest on KVM/pseries L1 ?
>

The host kernel ran on P9 bare metal, and PowerKVM is used here.

> And as far as I remember, the patch above only impacts the IBM PowerVM
> hypervisor, not KVM, and only PHB hotplug, unless kdump induces some
> hot-plugging I am not aware of.
>

Sorry, my information was not clear.
The suspect series is "[PATCH 00/31] powerpc: Modernize the PCI/MSI
support", which in mainline begins at commit 786e5b102a00
("powerpc/pseries/pci: Introduce __find_pe_total_msi()").

I tried to bisect, but commit a5f3d2c17b07 ("powerpc/pseries/pci:
Add MSI domains") even hangs the first kernel. So I went ahead and
looked for the next functional change on pseries, which is commit
174db9e7f775 ("powerpc/pseries/pci: Add support of MSI domains to PHB
hotplug").


> Also, if indeed, this is a L2 guest, the XIVE interrupt controller is
> emulated in QEMU, "info pic" should return:
>
>    ...
>    irqchip: emulated
>
> >>
> >> [    0.000000] Kernel command line: elfcorehdr=0x22c00000
> >> no_timer_check net.ifnames=0 console=tty0 console=hvc0,115200n8
> >> irqpoll maxcpus=1 noirqdistrib reset_devices cgroup_disable=memory
> >>       numa=off udev.children-max=2 ehea.use_mcs=0 panic=10
> >> kvm_cma_resv_ratio=0 transparent_hugepage=never novmcoredd
> >> hugetlb_cma=0
> >>      ...
> >>      [    7.763260] virtio_blk virtio2: 32/0/0 default/read/poll queues
> >>      [    7.771391] virtio_blk virtio2: [vda] 20971520 512-byte logical
> >> blocks (10.7 GB/10.0 GiB)
> >>      [   68.398234] systemd-udevd[187]: virtio2: Worker [190]
> >> processing SEQNUM=1193 is taking a long time
> >>      [  188.398258] systemd-udevd[187]: virtio2: Worker [190]
> >> processing SEQNUM=1193 killed
> >>
> >>
> >> During my tests, I found that in very rare cases kdump can succeed
> >> (I guess it may depend on the cpu id).  With either maxcpus=2 or a
> >> scsi-disk, kdump also succeeds, and before the mentioned commit kdump
> >> succeeds as well.
> >>
> >> The attachment contains the libvirt xml to reproduce the bug.
> >>
> >> Do you have any ideas?
>
> Most certainly an interrupt not being delivered. You can check the status
> on the host with :
>
>    virsh qemu-monitor-command --hmp <domain>  "info pic"
>

OK, I will try to get hold of a P9 machine and have a shot. I will
update with the info later.


Thanks,

Pingfan
>
>
> Thanks,
>
> C.


* Re: // a kdump hang caused by PPC pci patch series
  2022-11-21 12:57   ` Cédric Le Goater
  2022-11-22  3:29     ` Pingfan Liu
@ 2022-11-24  8:31     ` Pingfan Liu
  2022-11-24  8:44       ` Cédric Le Goater
  1 sibling, 1 reply; 6+ messages in thread
From: Pingfan Liu @ 2022-11-24  8:31 UTC (permalink / raw)
  To: Cédric Le Goater; +Cc: Coiby Xu, linuxppc-dev

[-- Attachment #1: Type: text/plain, Size: 2449 bytes --]

On Mon, Nov 21, 2022 at 8:57 PM Cédric Le Goater <clg@kaod.org> wrote:
>
> On 11/21/22 12:57, Pingfan Liu wrote:
> > Sorry, I forgot a subject.
> >
> > On Mon, Nov 21, 2022 at 7:54 PM Pingfan Liu <kernelfans@gmail.com> wrote:
> >>
> >> Hello Powerpc folks,
> >>
> >> I encountered a kdump bug, which I bisected down to commit 174db9e7f775
> >> ("powerpc/pseries/pci: Add support of MSI domains to PHB hotplug").
> >> In that setup, with Fedora 36 as the host, the mentioned commit as the
> >> guest kernel, and a virtio-block disk, the kdump kernel hangs:
>
> The host kernel should be using the PowerNV platform and not pseries
> or are you running a nested L2 guest on KVM/pseries L1 ?
>
> And as far as I remember, the patch above only impacts the IBM PowerVM
> hypervisor, not KVM, and only PHB hotplug, unless kdump induces some
> hot-plugging I am not aware of.
>
> Also, if indeed, this is a L2 guest, the XIVE interrupt controller is
> emulated in QEMU, "info pic" should return:
>
>    ...
>    irqchip: emulated
>
> >>
> >> [    0.000000] Kernel command line: elfcorehdr=0x22c00000
> >> no_timer_check net.ifnames=0 console=tty0 console=hvc0,115200n8
> >> irqpoll maxcpus=1 noirqdistrib reset_devices cgroup_disable=memory
> >>       numa=off udev.children-max=2 ehea.use_mcs=0 panic=10
> >> kvm_cma_resv_ratio=0 transparent_hugepage=never novmcoredd
> >> hugetlb_cma=0
> >>      ...
> >>      [    7.763260] virtio_blk virtio2: 32/0/0 default/read/poll queues
> >>      [    7.771391] virtio_blk virtio2: [vda] 20971520 512-byte logical
> >> blocks (10.7 GB/10.0 GiB)
> >>      [   68.398234] systemd-udevd[187]: virtio2: Worker [190]
> >> processing SEQNUM=1193 is taking a long time
> >>      [  188.398258] systemd-udevd[187]: virtio2: Worker [190]
> >> processing SEQNUM=1193 killed
> >>
> >>
> >> During my tests, I found that in very rare cases kdump can succeed
> >> (I guess it may depend on the cpu id).  With either maxcpus=2 or a
> >> scsi-disk, kdump also succeeds, and before the mentioned commit kdump
> >> succeeds as well.
> >>
> >> The attachment contains the libvirt xml to reproduce the bug.
> >>
> >> Do you have any ideas?
>
> Most certainly an interrupt not being delivered. You can check the status
> on the host with :
>
>    virsh qemu-monitor-command --hmp <domain>  "info pic"
>

Please find it in the attachment.

Thanks,

    Pingfan

[-- Attachment #2: pseries_msi_lost.txt --]
[-- Type: text/plain, Size: 21210 bytes --]

Script started on 2022-11-24 03:22:55-05:00 [TERM="xterm-256color" TTY="/dev/pts/0" COLUMNS="172" LINES="41"]
[root@ibm-p9wr-02 ~]# virsh qemu-monitor-command --hmp rhel9 "info pic"
CPU[0000]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[0000]: USER    00   00  00    00   00  00  00   00  00000000
CPU[0000]:   OS    00   ff  00    00   ff  00  ff   ff  80000400
CPU[0000]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[0000]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[0001]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[0001]: USER    00   00  00    00   00  00  00   00  00000000
CPU[0001]:   OS    00   ff  00    00   ff  00  ff   ff  80000401
CPU[0001]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[0001]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[0002]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[0002]: USER    00   00  00    00   00  00  00   00  00000000
CPU[0002]:   OS    00   ff  00    00   ff  00  ff   ff  80000402
CPU[0002]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[0002]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[0003]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[0003]: USER    00   00  00    00   00  00  00   00  00000000
CPU[0003]:   OS    00   ff  00    00   ff  00  ff   ff  80000403
CPU[0003]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[0003]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[0004]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[0004]: USER    00   00  00    00   00  00  00   00  00000000
CPU[0004]:   OS    00   ff  00    00   ff  00  ff   ff  80000404
CPU[0004]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[0004]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[0005]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[0005]: USER    00   00  00    00   00  00  00   00  00000000
CPU[0005]:   OS    00   ff  00    00   ff  00  ff   ff  80000405
CPU[0005]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[0005]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[0006]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[0006]: USER    00   00  00    00   00  00  00   00  00000000
CPU[0006]:   OS    00   ff  00    00   ff  00  ff   ff  80000406
CPU[0006]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[0006]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[0007]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[0007]: USER    00   00  00    00   00  00  00   00  00000000
CPU[0007]:   OS    00   ff  00    00   ff  00  ff   ff  80000407
CPU[0007]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[0007]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[0008]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[0008]: USER    00   00  00    00   00  00  00   00  00000000
CPU[0008]:   OS    00   ff  00    00   ff  00  ff   ff  80000408
CPU[0008]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[0008]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[0009]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[0009]: USER    00   00  00    00   00  00  00   00  00000000
CPU[0009]:   OS    00   ff  00    00   ff  00  ff   ff  80000409
CPU[0009]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[0009]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[000a]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[000a]: USER    00   00  00    00   00  00  00   00  00000000
CPU[000a]:   OS    00   ff  00    00   ff  00  ff   ff  8000040a
CPU[000a]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[000a]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[000b]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[000b]: USER    00   00  00    00   00  00  00   00  00000000
CPU[000b]:   OS    00   ff  00    00   ff  00  ff   ff  8000040b
CPU[000b]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[000b]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[000c]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[000c]: USER    00   00  00    00   00  00  00   00  00000000
CPU[000c]:   OS    00   ff  00    00   ff  00  ff   ff  8000040c
CPU[000c]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[000c]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[000d]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[000d]: USER    00   00  00    00   00  00  00   00  00000000
CPU[000d]:   OS    00   ff  00    00   ff  00  ff   ff  8000040d
CPU[000d]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[000d]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[000e]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[000e]: USER    00   00  00    00   00  00  00   00  00000000
CPU[000e]:   OS    00   ff  00    00   ff  00  ff   ff  8000040e
CPU[000e]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[000e]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[000f]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[000f]: USER    00   00  00    00   00  00  00   00  00000000
CPU[000f]:   OS    00   ff  00    00   ff  00  ff   ff  8000040f
CPU[000f]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[000f]: PHYS    00   00  00    00   00  00  00   ff  00000000
  LISN         PQ    EISN     CPU/PRIO EQ
  00000000 MSI --    00000010   0/6   1161/16384 @2c4e0000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  00000001 MSI --    00000010   1/6    626/16384 @2c8b0000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  00000002 MSI --    00000010   2/6    946/16384 @2c980000 ^1 [ 80000010 80000030 80000030 80000030 80000030 ^00000000 ]
  00000003 MSI --    00000010   3/6    751/16384 @2ca50000 ^1 [ 80000033 80000033 80000033 80000033 80000033 ^00000000 ]
  00000004 MSI --    00000010   4/6   1513/16384 @2cba0000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  00000005 MSI --    00000010   5/6   1226/16384 @2cc70000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  00000006 MSI --    00000010   6/6   1118/16384 @2cd60000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  00000007 MSI --    00000010   7/6   1263/16384 @2cf30000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  00000008 MSI --    00000010   8/6    999/16384 @2e000000 ^1 [ 8000003a 8000003a 8000003a 8000003a 80000010 ^00000000 ]
  00000009 MSI --    00000010   9/6    797/16384 @2e0d0000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  0000000a MSI --    00000010  10/6   1068/16384 @2e1a0000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  0000000b MSI --    00000010  11/6   1244/16384 @2e270000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  0000000c MSI --    00000010  12/6   1373/16384 @2e360000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  0000000d MSI --    00000010  13/6    761/16384 @2e430000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  0000000e MSI --    00000010  14/6   1287/16384 @2e580000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  0000000f MSI --    00000010  15/6   1258/16384 @2e650000 ^1 [ 80000010 80000010 80000010 80000041 80000010 ^00000000 ]
  00001000 MSI --    00000017   0/6   1161/16384 @2c4e0000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  00001001 MSI --    0000002b   3/6    751/16384 @2ca50000 ^1 [ 80000033 80000033 80000033 80000033 80000033 ^00000000 ]
  00001100 MSI -Q  M 00000000 
  00001104 MSI --    00000018   2/6    946/16384 @2c980000 ^1 [ 80000010 80000030 80000030 80000030 80000030 ^00000000 ]
  000011f0 MSI --    00000019   9/6    797/16384 @2e0d0000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  00001200 LSI -Q  M 00000000 
  00001201 LSI --    00000015  14/6   1287/16384 @2e580000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  00001202 LSI --    00000016   1/6    626/16384 @2c8b0000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  00001203 LSI -Q  M 00000000 
  00001300 MSI --    0000001b   3/6    751/16384 @2ca50000 ^1 [ 80000033 80000033 80000033 80000033 80000033 ^00000000 ]
  00001301 MSI --    0000001c   4/6   1513/16384 @2cba0000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  00001302 MSI --    0000001d   5/6   1226/16384 @2cc70000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  00001303 MSI --    0000001e   6/6   1118/16384 @2cd60000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  00001304 MSI --    0000001f   7/6   1263/16384 @2cf30000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  00001305 MSI --    00000020   8/6    999/16384 @2e000000 ^1 [ 8000003a 8000003a 8000003a 8000003a 80000010 ^00000000 ]
  00001306 MSI --    00000021   9/6    797/16384 @2e0d0000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  00001307 MSI --    00000022  10/6   1068/16384 @2e1a0000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  00001308 MSI --    00000023  11/6   1244/16384 @2e270000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  00001309 MSI --    00000024  12/6   1373/16384 @2e360000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  0000130a MSI --    00000025  13/6    761/16384 @2e430000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  0000130b MSI --    00000026  14/6   1287/16384 @2e580000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  0000130c MSI --    00000027  15/6   1258/16384 @2e650000 ^1 [ 80000010 80000010 80000010 80000041 80000010 ^00000000 ]
  0000130d MSI --    00000028   0/6   1161/16384 @2c4e0000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  0000130e MSI --    00000029   1/6    626/16384 @2c8b0000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  0000130f MSI --    0000002a   2/6    946/16384 @2c980000 ^1 [ 80000010 80000030 80000030 80000030 80000030 ^00000000 ]
  00001310 MSI --    0000002c  15/6   1258/16384 @2e650000 ^1 [ 80000010 80000010 80000010 80000041 80000010 ^00000000 ]
  00001311 MSI --    0000002d   0/6   1161/16384 @2c4e0000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  00001312 MSI --    0000002e   4/6   1513/16384 @2cba0000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  00001313 MSI --    00000031   0/6   1161/16384 @2c4e0000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  00001314 MSI --    00000032   1/6    626/16384 @2c8b0000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  00001315 MSI --    00000034   2/6    946/16384 @2c980000 ^1 [ 80000010 80000030 80000030 80000030 80000030 ^00000000 ]
  00001316 MSI --    00000035   3/6    751/16384 @2ca50000 ^1 [ 80000033 80000033 80000033 80000033 80000033 ^00000000 ]
  00001317 MSI --    00000036   4/6   1513/16384 @2cba0000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  00001318 MSI --    00000037   5/6   1226/16384 @2cc70000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  00001319 MSI --    00000038   6/6   1118/16384 @2cd60000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  0000131a MSI --    00000039   7/6   1263/16384 @2cf30000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  0000131b MSI --    0000003a   8/6    999/16384 @2e000000 ^1 [ 8000003a 8000003a 8000003a 8000003a 80000010 ^00000000 ]
  0000131c MSI --    0000003b   9/6    797/16384 @2e0d0000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  0000131d MSI --    0000003c  10/6   1068/16384 @2e1a0000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  0000131e MSI --    0000003d  11/6   1244/16384 @2e270000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  0000131f MSI --    0000003e  12/6   1373/16384 @2e360000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  00001320 MSI --    0000003f  13/6    761/16384 @2e430000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  00001321 MSI --    00000040  14/6   1287/16384 @2e580000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  00001322 MSI --    00000041  15/6   1258/16384 @2e650000 ^1 [ 80000010 80000010 80000010 80000041 80000010 ^00000000 ]
  00001324 MSI --    0000002f   1/6    626/16384 @2c8b0000 ^1 [ 80000010 80000010 80000010 80000010 80000010 ^00000000 ]
  00001325 MSI --    00000030   2/6    946/16384 @2c980000 ^1 [ 80000010 80000030 80000030 80000030 80000030 ^00000000 ]
  00001326 MSI --    00000033   3/6    751/16384 @2ca50000 ^1 [ 80000033 80000033 80000033 80000033 80000033 ^00000000 ]
  00001327 MSI -Q  M 00000000 
irqchip: in-kernel


[root@ibm-p9wr-02 ~]# virsh qemu-monitor-command --hmp rhel9 "info pic"
CPU[0000]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[0000]: USER    00   00  00    00   00  00  00   00  00000000
CPU[0000]:   OS    00   00  00    00   ff  00  ff   00  80000400
CPU[0000]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[0000]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[0001]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[0001]: USER    00   00  00    00   00  00  00   00  00000000
CPU[0001]:   OS    00   ff  00    00   ff  00  ff   ff  80000401
CPU[0001]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[0001]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[0002]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[0002]: USER    00   00  00    00   00  00  00   00  00000000
CPU[0002]:   OS    00   00  00    00   ff  00  ff   00  80000402
CPU[0002]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[0002]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[0003]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[0003]: USER    00   00  00    00   00  00  00   00  00000000
CPU[0003]:   OS    00   00  00    00   ff  00  ff   00  80000403
CPU[0003]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[0003]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[0004]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[0004]: USER    00   00  00    00   00  00  00   00  00000000
CPU[0004]:   OS    00   00  00    00   ff  00  ff   00  80000404
CPU[0004]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[0004]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[0005]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[0005]: USER    00   00  00    00   00  00  00   00  00000000
CPU[0005]:   OS    00   00  00    00   ff  00  ff   00  80000405
CPU[0005]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[0005]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[0006]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[0006]: USER    00   00  00    00   00  00  00   00  00000000
CPU[0006]:   OS    00   00  00    00   ff  00  ff   00  80000406
CPU[0006]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[0006]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[0007]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[0007]: USER    00   00  00    00   00  00  00   00  00000000
CPU[0007]:   OS    00   00  00    00   ff  00  ff   00  80000407
CPU[0007]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[0007]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[0008]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[0008]: USER    00   00  00    00   00  00  00   00  00000000
CPU[0008]:   OS    00   00  00    00   ff  00  ff   00  80000408
CPU[0008]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[0008]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[0009]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[0009]: USER    00   00  00    00   00  00  00   00  00000000
CPU[0009]:   OS    00   00  00    00   ff  00  ff   00  80000409
CPU[0009]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[0009]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[000a]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[000a]: USER    00   00  00    00   00  00  00   00  00000000
CPU[000a]:   OS    00   00  00    00   ff  00  ff   00  8000040a
CPU[000a]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[000a]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[000b]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[000b]: USER    00   00  00    00   00  00  00   00  00000000
CPU[000b]:   OS    00   00  00    00   ff  00  ff   00  8000040b
CPU[000b]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[000b]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[000c]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[000c]: USER    00   00  00    00   00  00  00   00  00000000
CPU[000c]:   OS    00   00  00    00   ff  00  ff   00  8000040c
CPU[000c]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[000c]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[000d]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[000d]: USER    00   00  00    00   00  00  00   00  00000000
CPU[000d]:   OS    00   00  00    00   ff  00  ff   00  8000040d
CPU[000d]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[000d]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[000e]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[000e]: USER    00   00  00    00   00  00  00   00  00000000
CPU[000e]:   OS    00   00  00    00   ff  00  ff   00  8000040e
CPU[000e]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[000e]: PHYS    00   00  00    00   00  00  00   ff  00000000
CPU[000f]:   QW   NSR CPPR IPB LSMFB ACK# INC AGE PIPR  W2
CPU[000f]: USER    00   00  00    00   00  00  00   00  00000000
CPU[000f]:   OS    00   00  00    00   ff  00  ff   00  8000040f
CPU[000f]: POOL    00   00  00    00   00  00  00   00  00000000
CPU[000f]: PHYS    00   00  00    00   00  00  00   ff  00000000
  LISN         PQ    EISN     CPU/PRIO EQ
  00000000 MSI --    00000010   1/6     65/16384 @124e0000 ^1 [ 8000001b 8000001b 8000001b 8000001b 8000001b ^00000000 ]
  00000001 MSI -Q  M 00000000 
  00000002 MSI -Q  M 00000000 
  00000003 MSI -Q  M 00000000 
  00000004 MSI -Q  M 00000000 
  00000005 MSI -Q  M 00000000 
  00000006 MSI -Q  M 00000000 
  00000007 MSI -Q  M 00000000 
  00000008 MSI -Q  M 00000000 
  00000009 MSI -Q  M 00000000 
  0000000a MSI -Q  M 00000000 
  0000000b MSI -Q  M 00000000 
  0000000c MSI -Q  M 00000000 
  0000000d MSI -Q  M 00000000 
  0000000e MSI -Q  M 00000000 
  0000000f MSI -Q  M 00000000 
  00001000 MSI --    00000017   1/6     65/16384 @124e0000 ^1 [ 8000001b 8000001b 8000001b 8000001b 8000001b ^00000000 ]
  00001001 MSI --    0000001d   1/6     65/16384 @124e0000 ^1 [ 8000001b 8000001b 8000001b 8000001b 8000001b ^00000000 ]
  00001100 MSI -Q  M 00000000 
  00001104 MSI -Q    00000018 
  000011f0 MSI -Q  M 00000000 
  00001200 LSI -Q  M 00000000 
  00001201 LSI -Q    00000015 
  00001202 LSI --    00000016   1/6     65/16384 @124e0000 ^1 [ 8000001b 8000001b 8000001b 8000001b 8000001b ^00000000 ]
  00001203 LSI -Q  M 00000000 
  00001300 MSI --    0000001b   1/6     65/16384 @124e0000 ^1 [ 8000001b 8000001b 8000001b 8000001b 8000001b ^00000000 ]
  00001301 MSI --    0000001c   1/6     65/16384 @124e0000 ^1 [ 8000001b 8000001b 8000001b 8000001b 8000001b ^00000000 ]
  00001302 MSI --    0000001e   1/6     65/16384 @124e0000 ^1 [ 8000001b 8000001b 8000001b 8000001b 8000001b ^00000000 ]
  00001303 MSI -Q  M 0000001e 
  00001304 MSI -Q  M 0000001f 
  00001305 MSI -Q  M 00000020 
  00001306 MSI -Q  M 00000021 
  00001307 MSI -Q  M 00000022 
  00001308 MSI -Q  M 00000023 
  00001309 MSI -Q  M 00000024 
  0000130a MSI -Q  M 00000025 
  0000130b MSI -Q  M 00000026 
  0000130c MSI -Q  M 00000027 
  0000130d MSI -Q  M 00000028 
  0000130e MSI -Q  M 00000029 
  0000130f MSI -Q  M 0000002a 
  00001310 MSI -Q  M 0000002c 
  00001311 MSI --    0000002d   1/6     65/16384 @124e0000 ^1 [ 8000001b 8000001b 8000001b 8000001b 8000001b ^00000000 ]
  00001312 MSI -Q  M 0000002e 
irqchip: in-kernel


[root@ibm-p9wr-02 ~]# exit

Script done on 2022-11-24 03:28:16-05:00 [COMMAND_EXIT_CODE="0"]


* Re: // a kdump hang caused by PPC pci patch series
  2022-11-24  8:31     ` Pingfan Liu
@ 2022-11-24  8:44       ` Cédric Le Goater
  0 siblings, 0 replies; 6+ messages in thread
From: Cédric Le Goater @ 2022-11-24  8:44 UTC (permalink / raw)
  To: Pingfan Liu; +Cc: Coiby Xu, linuxppc-dev

On 11/24/22 09:31, Pingfan Liu wrote:
> On Mon, Nov 21, 2022 at 8:57 PM Cédric Le Goater <clg@kaod.org> wrote:
>>
>> On 11/21/22 12:57, Pingfan Liu wrote:
>>> Sorry, I forgot a subject.
>>>
>>> On Mon, Nov 21, 2022 at 7:54 PM Pingfan Liu <kernelfans@gmail.com> wrote:
>>>>
>>>> Hello Powerpc folks,
>>>>
>>>> I encountered a kdump bug, which I bisected down to commit 174db9e7f775
>>>> ("powerpc/pseries/pci: Add support of MSI domains to PHB hotplug").
>>>> In that setup, with Fedora 36 as the host, the mentioned commit as the
>>>> guest kernel, and a virtio-block disk, the kdump kernel hangs:
>>
>> The host kernel should be using the PowerNV platform and not pseries
>> or are you running a nested L2 guest on KVM/pseries L1 ?
>>
>> And as far as I remember, the patch above only impacts the IBM PowerVM
>> hypervisor, not KVM, and only PHB hotplug, unless kdump induces some
>> hot-plugging I am not aware of.
>>
>> Also, if indeed, this is a L2 guest, the XIVE interrupt controller is
>> emulated in QEMU, "info pic" should return:
>>
>>     ...
>>     irqchip: emulated
>>
>>>>
>>>> [    0.000000] Kernel command line: elfcorehdr=0x22c00000
>>>> no_timer_check net.ifnames=0 console=tty0 console=hvc0,115200n8
>>>> irqpoll maxcpus=1 noirqdistrib reset_devices cgroup_disable=memory
>>>>        numa=off udev.children-max=2 ehea.use_mcs=0 panic=10
>>>> kvm_cma_resv_ratio=0 transparent_hugepage=never novmcoredd
>>>> hugetlb_cma=0
>>>>       ...
>>>>       [    7.763260] virtio_blk virtio2: 32/0/0 default/read/poll queues
>>>>       [    7.771391] virtio_blk virtio2: [vda] 20971520 512-byte logical
>>>> blocks (10.7 GB/10.0 GiB)
>>>>       [   68.398234] systemd-udevd[187]: virtio2: Worker [190]
>>>> processing SEQNUM=1193 is taking a long time
>>>>       [  188.398258] systemd-udevd[187]: virtio2: Worker [190]
>>>> processing SEQNUM=1193 killed
>>>>
>>>>
>>>> During my tests, I found that in very rare cases kdump can succeed
>>>> (I guess it may depend on the cpu id).  With either maxcpus=2 or a
>>>> scsi-disk, kdump also succeeds, and before the mentioned commit kdump
>>>> succeeds as well.
>>>>
>>>> The attachment contains the libvirt xml to reproduce the bug.
>>>>
>>>> Do you have any ideas?
>>
>> Most certainly an interrupt not being delivered. You can check the status
>> on the host with :
>>
>>     virsh qemu-monitor-command --hmp <domain>  "info pic"
>>
> 
> Please pick it up from the attachment.

Nothing looks wrong on the guest side: no pending interrupts, neither
before nor after kdump. The next step is to look at KVM. I suggest you
file a bug.
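
If it helps, the host-side XIVE state can be dumped as well (a sketch;
on XIVE-native PowerNV hosts there is a debugfs entry for this, though
its exact path and layout vary by kernel version):

   # on the host, as root, with debugfs mounted:
   cat /sys/kernel/debug/powerpc/xive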

Thanks,

C.



Thread overview: 6+ messages
2022-11-21 11:54 Pingfan Liu
2022-11-21 11:57 ` // a kdump hang caused by PPC pci patch series Pingfan Liu
2022-11-21 12:57   ` Cédric Le Goater
2022-11-22  3:29     ` Pingfan Liu
2022-11-24  8:31     ` Pingfan Liu
2022-11-24  8:44       ` Cédric Le Goater
