From: Yongseok Koh
Subject: Re: Mellanox ConnectX-5 crashes and mbuf leak
Date: Fri, 6 Oct 2017 22:31:00 +0000
To: Martin Weiser
Cc: Adrien Mazarguil, Nélio Laranjeiro, dev@dpdk.org, Ferruh Yigit

Hi, Martin

Even though I had run fairly serious tests before sending out the patch, I
found that a deadlock can still happen if the Rx queue size is smaller: it is
128 by default in testpmd, while I usually use 256.

I've fixed the bug and submitted a new patch [1], which actually reverts the
previous patch. So you can apply the attached patch and disregard the old one.
I have also tested this new patch extensively, but please let me know your
test results.

[1] "net/mlx5: fix deadlock due to buffered slots in Rx SW ring" at
    http://dpdk.org/dev/patchwork/patch/29847

Thanks,
Yongseok

diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_sse.c b/drivers/net/mlx5/mlx5_rxtx_vec_sse.c
index aff3359..9d37954 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_sse.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_sse.c
@@ -549,7 +549,7 @@ rxq_replenish_bulk_mbuf(struct rxq *rxq, uint16_t n)
 {
 	const uint16_t q_n = 1 << rxq->elts_n;
 	const uint16_t q_mask = q_n - 1;
-	const uint16_t elts_idx = rxq->rq_ci & q_mask;
+	uint16_t elts_idx = rxq->rq_ci & q_mask;
 	struct rte_mbuf **elts = &(*rxq->elts)[elts_idx];
 	volatile struct mlx5_wqe_data_seg *wq = &(*rxq->wqes)[elts_idx];
 	unsigned int i;
@@ -567,6 +567,11 @@ rxq_replenish_bulk_mbuf(struct rxq *rxq, uint16_t n)
 		wq[i].addr = rte_cpu_to_be_64((uintptr_t)elts[i]->buf_addr +
 					      RTE_PKTMBUF_HEADROOM);
 	rxq->rq_ci += n;
+	/* Prevent overflowing into consumed mbufs. */
+	elts_idx = rxq->rq_ci & q_mask;
+	for (i = 0; i < MLX5_VPMD_DESCS_PER_LOOP; i += 2)
+		_mm_storeu_si128((__m128i *)&(*rxq->elts)[elts_idx + i],
+				 _mm_set1_epi64x((uintptr_t)&rxq->fake_mbuf));
 	rte_wmb();
 	*rxq->rq_db = rte_cpu_to_be_32(rxq->rq_ci);
 }

> On Oct 6, 2017, at 7:10 AM, Martin Weiser wrote:
> 
> Hi Yongseok,
> 
> unfortunately, in a quick test using testpmd and ~20Gb/s of traffic with
> your patch, traffic forwarding always stops completely after a few seconds.
> 
> I wanted to test this with the current master of dpdk-next-net, but after
> "net/mlx5: support upstream rdma-core" it will not compile against
> MLNX_OFED_LINUX-4.1-1.0.2.0. So I used the last commit before that
> (v17.08-306-gf214841) and applied your patch, leading to the result
> described above. Apart from your patch no other modifications were made,
> and without the patch testpmd forwards the traffic without a problem (in
> this configuration mbufs should never run out, so this test was never
> affected by the original issue).
> 
> For this test I simply used testpmd with the following command line:
> "testpmd -c 0xfe -- -i" and issued the "start" command. As traffic
> generator I used t-rex with the sfr traffic profile.
> 
> Best regards,
> Martin
> 
> 
> On 05.10.17 23:46, Yongseok Koh wrote:
>> Hi, Martin
>> 
>> Thanks for your thorough and valuable report. We could reproduce it. I
>> found a bug and fixed it. Please refer to the patch [1] I sent to the
>> mailing list. This might not be automatically applicable to v17.08 as I
>> rebased it on top of Nelio's flow cleanup patch. But as it is a simple
>> patch, you can easily apply it manually.
>> 
>> Thanks,
>> Yongseok
>> 
>> [1] http://dpdk.org/dev/patchwork/patch/29781
>> 
>>> On Sep 26, 2017, at 2:23 AM, Martin Weiser wrote:
>>> 
>>> Hi,
>>> 
>>> we are currently testing the Mellanox ConnectX-5 100G NIC with DPDK
>>> 17.08 as well as dpdk-next-net and are experiencing mbuf leaks as well
>>> as crashes (and in some instances even kernel panics in a mlx5 module)
>>> under certain load conditions.
>>> 
>>> We initially saw these issues only in our own DPDK-based application
>>> and it took some effort to reproduce them in one of the DPDK example
>>> applications. However, with the attached patch to the load-balancer
>>> example we can reproduce the issues reliably.
>>> 
>>> The patch may look weird at first, so let me explain why I made these
>>> changes:
>>> 
>>> * The sleep introduced in the worker threads simulates heavy
>>> processing, which causes the software rx rings to fill up under load.
>>> If the rings are large enough (I increased the ring size with the
>>> load-balancer command line option, as you can see in the example call
>>> further down), the mbuf pool may run empty, and I believe this leads
>>> to a malfunction in the mlx5 driver. As soon as this happens the NIC
>>> stops forwarding traffic, probably because the driver cannot allocate
>>> mbufs for the packets received by the NIC. Unfortunately, when this
>>> happens most of the mbufs never return to the mbuf pool, so even when
>>> the traffic stops the pool remains almost empty and the application
>>> will not forward traffic even at a very low rate.
>>> 
>>> * The use of the reference count in the mbuf, in addition to the
>>> situation described above, is what makes the mlx5 DPDK driver crash
>>> almost immediately under load. In our application we rely on this
>>> feature to be able to forward the packet quickly and still send the
>>> packet to a worker thread for analysis, finally freeing the packet
>>> when analysis is done. Here I simulated this by increasing the mbuf
>>> reference count immediately after receiving the mbuf from the driver
>>> and then calling rte_pktmbuf_free in the worker thread, which should
>>> only decrement the reference count again and not actually free the
>>> mbuf.
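
For reference, the rx-refcount / worker-free pattern described above boils
down to something like the following untested sketch. It uses only the
standard rte_mbuf/rte_ring/rte_ethdev APIs; the function names, ring, and
burst size are illustrative and are not the actual load-balancer code:

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_ring.h>

#define BURST 32

/* I/O thread: take one extra reference per packet before handing it to a
 * worker, then keep forwarding on the fast path. */
static void
io_rx_once(uint8_t port, uint16_t queue, struct rte_ring *to_worker)
{
    struct rte_mbuf *pkts[BURST];
    uint16_t n = rte_eth_rx_burst(port, queue, pkts, BURST);
    unsigned int q;
    uint16_t i;

    for (i = 0; i < n; i++)
        rte_mbuf_refcnt_update(pkts[i], 1);
    q = rte_ring_enqueue_burst(to_worker, (void **)pkts, n, NULL);
    /* Drop the extra reference for packets the ring could not take. */
    for (i = q; i < n; i++)
        rte_mbuf_refcnt_update(pkts[i], -1);
    /* ... forward pkts on the fast path here; freeing them there only
     * drops one reference ... */
}

/* Worker thread: analyse the packet, then release the extra reference.
 * rte_pktmbuf_free() returns the mbuf to the pool only when the
 * reference count reaches zero. */
static void
worker_once(struct rte_ring *from_io)
{
    struct rte_mbuf *m;

    while (rte_ring_dequeue(from_io, (void **)&m) == 0) {
        /* heavy per-packet processing would go here */
        rte_pktmbuf_free(m);
    }
}

With this pattern the fast-path transmit and the worker's rte_pktmbuf_free()
each drop one reference, so the mbuf returns to the pool only after both
have run.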
>>> 
>>> We executed the patched load-balancer application with the following
>>> command line:
>>> 
>>> ./build/load_balancer -l 3-7 -n 4 -- --rx "(0,0,3),(1,0,3)" --tx
>>> "(0,3),(1,3)" --w "4" --lpm "16.0.0.0/8=>0; 48.0.0.0/8=>1;" --pos-lb 29
>>> --rsz "1024, 32768, 1024, 1024"
>>> 
>>> Then we generated traffic using the t-rex traffic generator and the sfr
>>> test case. On our machine the issues start to happen when the traffic
>>> exceeds ~6 Gbps, but this may vary depending on how powerful the test
>>> machine is (by the way, we were able to reproduce this on different
>>> types of hardware).
>>> 
>>> A typical stack trace looks like this:
>>> 
>>> Thread 1 "load_balancer" received signal SIGSEGV, Segmentation fault.
>>> 0x0000000000614475 in _mm_storeu_si128 (__B=..., __P=<optimized out>)
>>>     at /usr/lib/gcc/x86_64-linux-gnu/5/include/emmintrin.h:716
>>> 716       __builtin_ia32_storedqu ((char *)__P, (__v16qi)__B);
>>> (gdb) bt
>>> #0  0x0000000000614475 in _mm_storeu_si128 (__B=..., __P=<optimized out>)
>>>     at /usr/lib/gcc/x86_64-linux-gnu/5/include/emmintrin.h:716
>>> #1  rxq_cq_decompress_v (elts=0x7fff3732bef0, cq=0x7ffff7f99380,
>>>     rxq=0x7fff3732a980) at
>>>     /root/dpdk-next-net/drivers/net/mlx5/mlx5_rxtx_vec_sse.c:679
>>> #2  rxq_burst_v (pkts_n=<optimized out>, pkts=0xa7c7b0,
>>>     rxq=0x7fff3732a980) at
>>>     /root/dpdk-next-net/drivers/net/mlx5/mlx5_rxtx_vec_sse.c:1242
>>> #3  mlx5_rx_burst_vec (dpdk_rxq=0x7fff3732a980, pkts=<optimized out>,
>>>     pkts_n=<optimized out>) at
>>>     /root/dpdk-next-net/drivers/net/mlx5/mlx5_rxtx_vec_sse.c:1277
>>> #4  0x000000000043c11d in rte_eth_rx_burst (nb_pkts=3599,
>>>     rx_pkts=0xa7c7b0, queue_id=0, port_id=0 '\000') at
>>>     /root/dpdk-next-net//x86_64-native-linuxapp-gcc/include/rte_ethdev.h:2781
>>> #5  app_lcore_io_rx (lp=lp@entry=0xa7c700, n_workers=n_workers@entry=1,
>>>     bsz_rd=bsz_rd@entry=144, bsz_wr=bsz_wr@entry=144,
>>>     pos_lb=pos_lb@entry=29 '\035')
>>>     at /root/dpdk-next-net/examples/load_balancer/runtime.c:198
>>> #6  0x0000000000447dc0 in app_lcore_main_loop_io () at
>>>     /root/dpdk-next-net/examples/load_balancer/runtime.c:485
>>> #7  app_lcore_main_loop (arg=<optimized out>) at
>>>     /root/dpdk-next-net/examples/load_balancer/runtime.c:669
>>> #8  0x0000000000495e8b in rte_eal_mp_remote_launch ()
>>> #9  0x0000000000441e0d in main (argc=<optimized out>,
>>>     argv=<optimized out>) at
>>>     /root/dpdk-next-net/examples/load_balancer/main.c:99
>>> 
>>> The crash does not always happen at the exact same spot, but in our
>>> tests always in the same function. In a few instances, instead of an
>>> application crash the system froze completely with what appeared to be
>>> a kernel panic. The last output looked like a crash in the interrupt
>>> handler of a mlx5 module, but unfortunately I cannot provide the exact
>>> output right now.
>>> 
>>> All tests were performed under Ubuntu 16.04 server running a
>>> 4.4.0-96-generic kernel, and the latest Mellanox OFED
>>> MLNX_OFED_LINUX-4.1-1.0.2.0-ubuntu16.04-x86_64 was used.
>>> 
>>> Any help with this issue is greatly appreciated.
>>> 
>>> Best regards,
>>> Martin
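
One more note on reproducing this: whether the mbufs actually come back after
traffic stops can be checked directly from the application by polling the
mempool counters. A minimal sketch, assuming only the standard rte_mempool
API; the function name is illustrative and where/how often to call it is up
to the application:

#include <stdio.h>
#include <rte_mempool.h>

/* Print mbuf pool occupancy. If "in use" stays close to the pool size long
 * after traffic has stopped, the mbufs are leaking instead of being
 * recycled through the driver. */
static void
dump_pool_usage(struct rte_mempool *mp)
{
    printf("%s: %u available, %u in use, size %u\n",
           mp->name, rte_mempool_avail_count(mp),
           rte_mempool_in_use_count(mp), mp->size);
}

Calling this periodically from a control lcore (for example once per second)
makes the leaking run easy to distinguish from a healthy one.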