From: Shahar Salzman
Subject: Re: [SPDK] Strange CI failure
Date: Thu, 31 Jan 2019 08:47:10 +0000
To: spdk@lists.01.org

I rebased, CI now passes.
Thanks!

________________________________
From: SPDK on behalf of Howell, Seth
Sent: Wednesday, January 30, 2019 4:28 PM
To: Storage Performance Development Kit
Subject: Re: [SPDK] Strange CI failure

Hi Shahar,

I apologize for the inconvenience. There was a change to the nvme-cli repo that, when applied to the Chandler test pool, caused consistent failures. A fix has since been merged to the SPDK repo. Please rebase your changes on master to prevent this failure on future versions of your patch.

Again, I'm sorry for any inconvenience this has caused.

Thank you,

Seth Howell

-----Original Message-----
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Shahar Salzman
Sent: Wednesday, January 30, 2019 5:18 AM
To: Harris, James R; Storage Performance Development Kit
Subject: Re: [SPDK] Strange CI failure

It looks like there are now consistent failures in the iscsi and spdk-nvme-cli tests. I tried to retrigger and the failures happened again:

spdk nvme cli:
https://ci.spdk.io/spdk/builds/review/253dd179d38ac2b608f5adf1edad56e1ec6eb519.1548848568/fedora-03/build.log

iscsi:
https://ci.spdk.io/spdk/builds/review/253dd179d38ac2b608f5adf1edad56e1ec6eb519.1548848568/fedora-09/build.log

________________________________
From: Harris, James R
Sent: Tuesday, January 29, 2019 6:09 PM
To: Storage Performance Development Kit; Shahar Salzman
Subject: Re: [SPDK] Strange CI failure

Thanks Shahar. For now, you can reply to your own patch on GerritHub with just the word "retrigger" - it will re-run your patch through the test pool. That will get your patch unblocked while Paul looks at the intermittent test failure.

-Jim

On 1/29/19, 8:48 AM, "SPDK on behalf of Luse, Paul E" wrote:

Thanks! I've got a few hours of meetings coming up, but here's what I see. If you can repro, that'd be great; we can get a GitHub issue up and going. If not, I can look deeper into this later if someone else doesn't jump in by then with an "aha" moment :)

Starting SPDK v19.01-pre / DPDK 18.11.0 initialization...
[ DPDK EAL parameters: identify -c 0x1 -n 1 -m 0 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ]
EAL: Detected 16 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Auto-detected process type: SECONDARY
EAL: Multi-process socket /var/run/dpdk/spdk0/mp_socket_835807_c029d817e596b
EAL: Probing VFIO support...
EAL: VFIO support initialized
test/nvme/nvme.sh: line 108: 835807 Segmentation fault      (core dumped) $rootdir/examples/nvme/identify/identify -i 0
08:50:18 # trap - ERR
08:50:18 # print_backtrace
08:50:18 # [[ ehxBE =~ e ]]
08:50:18 # local shell_options=ehxBE
08:50:18 # set +x
========== Backtrace start: ==========

From: Shahar Salzman [mailto:shahar.salzman(a)kaminario.com]
Sent: Tuesday, January 29, 2019 8:35 AM
To: Luse, Paul E; Storage Performance Development Kit
Subject: Re: Strange CI failure

https://ci.spdk.io/spdk-jenkins/results/autotest-per-patch/builds/21382/archive/nvme_phy_autotest/build.log

I can copy-paste it if you cannot reach the link.

________________________________
From: SPDK on behalf of Luse, Paul E
Sent: Tuesday, January 29, 2019 5:22 PM
To: Storage Performance Development Kit
Subject: Re: [SPDK] Strange CI failure

Can you send a link to the full log?

-----Original Message-----
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Shahar Salzman
Sent: Tuesday, January 29, 2019 8:21 AM
To: Storage Performance Development Kit
Subject: [SPDK] Strange CI failure

Hi,

I have encountered a CI failure that has nothing to do with my code. The reason I know it has nothing to do with my code is that the change is a gdb macro. Do we know whether this test machine is unstable?

Here is the backtrace:

========== Backtrace start: ==========

in test/nvme/nvme.sh:108 -> main()
...
   103  report_test_completion "nightly_nvme_reset"
   104  timing_exit reset
   105  fi
   106
   107  timing_enter identify
=> 108  $rootdir/examples/nvme/identify/identify -i 0
   109  for bdf in $(iter_pci_class_code 01 08 02); do
   110          $rootdir/examples/nvme/identify/identify -r "trtype:PCIe traddr:${bdf}" -i 0
   111  done
   112  timing_exit identify
   113  ...

Shahar
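
For anyone picking up Paul's request to reproduce ("If you can repro, that'd be great"), below is a minimal sketch of how the failing step might be re-run by hand outside the test harness. It assumes a built SPDK checkout with an NVMe device present and hugepages/device binding handled by scripts/setup.sh; running the identify example under gdb to capture the segfault backtrace is an illustration, not something described in this thread.

    # Sketch: manually re-run the command from test/nvme/nvme.sh line 108
    # (assumes a built SPDK tree; the paths follow the snippet above).
    cd spdk
    sudo ./scripts/setup.sh                               # reserve hugepages and bind NVMe devices
    sudo gdb --args ./examples/nvme/identify/identify -i 0
    # (gdb) run                                           # re-runs the crashing command
    # (gdb) bt                                            # backtrace to attach to a GitHub issue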