From: Martin Weiser
Subject: Re: Mellanox ConnectX-5 crashes and mbuf leak
Date: Fri, 6 Oct 2017 16:10:12 +0200
To: Yongseok Koh
Cc: Adrien Mazarguil, Nélio Laranjeiro, "dev@dpdk.org"
References: <5d1f07c4-5933-806d-4d11-8fdfabc701d7@allegro-packets.com>

Hi Yongseok,

unfortunately, in a quick test using testpmd and ~20 Gb/s of traffic,
traffic forwarding always stops completely after a few seconds with your
patch applied.

I wanted to test this with the current master of dpdk-next-net, but since
"net/mlx5: support upstream rdma-core" it no longer compiles against
MLNX_OFED_LINUX-4.1-1.0.2.0. So I used the last commit before that
(v17.08-306-gf214841) and applied your patch, which led to the result
described above. Apart from your patch no other modifications were made,
and without the patch testpmd forwards the traffic without a problem (in
this configuration the mbufs should never run out, so this test was never
affected by the original issue).

For this test I simply used testpmd with the following command line
"testpmd -c 0xfe -- -i" and issued the "start" command. As traffic
generator I used t-rex with the sfr traffic profile.

Best regards,
Martin

On 05.10.17 23:46, Yongseok Koh wrote:
> Hi, Martin
>
> Thanks for your thorough and valuable reporting. We could reproduce it.
> I found a bug and fixed it. Please refer to the patch [1] I sent to the
> mailing list. This might not be automatically applicable to v17.08 as I
> rebased it on top of Nelio's flow cleanup patch. But as this is a simple
> patch, you can easily apply it manually.
>
> Thanks,
> Yongseok
>
> [1] http://dpdk.org/dev/patchwork/patch/29781
>
>> On Sep 26, 2017, at 2:23 AM, Martin Weiser wrote:
>>
>> Hi,
>>
>> we are currently testing the Mellanox ConnectX-5 100G NIC with DPDK
>> 17.08 as well as dpdk-next-net and are experiencing mbuf leaks as well
>> as crashes (and in some instances even kernel panics in a mlx5 module)
>> under certain load conditions.
>>
>> We initially saw these issues only in our own DPDK-based application,
>> and it took some effort to reproduce them in one of the DPDK example
>> applications. However, with the attached patch to the load-balancer
>> example we can reproduce the issues reliably.
>>
>> The patch may look weird at first, but I will explain why I made these
>> changes:
>>
>> * the sleep introduced in the worker threads simulates heavy
>>   processing, which causes the software rx rings to fill up under load.
>>   If the rings are large enough (I increased the ring size with the
>>   load-balancer command line option, as you can see in the example call
>>   further down), the mbuf pool may run empty, and I believe this leads
>>   to a malfunction in the mlx5 driver. As soon as this happens the NIC
>>   will stop forwarding traffic, probably because the driver cannot
>>   allocate mbufs for the packets received by the NIC. Unfortunately,
>>   when this happens most of the mbufs never return to the mbuf pool, so
>>   even when the traffic stops the pool remains almost empty and the
>>   application will not forward traffic even at a very low rate.
>>
>> * the use of the reference count in the mbuf, in addition to the
>>   situation described above, is what makes the mlx5 DPDK driver crash
>>   almost immediately under load. In our application we rely on this
>>   feature to be able to forward the packet quickly and still send it to
>>   a worker thread for analysis, finally freeing the packet when the
>>   analysis is done. Here I simulated this by increasing the mbuf
>>   reference count immediately after receiving the mbuf from the driver
>>   and then calling rte_pktmbuf_free in the worker thread, which should
>>   only decrement the reference count again and not actually free the
>>   mbuf (see the sketch below).
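>>
>> For illustration only: this is not the attached patch itself, just a
>> minimal sketch of the two changes described above. The helper names and
>> the 50 us delay are made up for this example.
>>
>>     #include <rte_ethdev.h>
>>     #include <rte_mbuf.h>
>>     #include <rte_cycles.h>
>>
>>     /* I/O rx side: right after the burst, take one extra reference on
>>      * every received mbuf so that the later rte_pktmbuf_free() in the
>>      * worker only drops the refcount instead of returning the mbuf to
>>      * the pool. */
>>     static inline uint16_t
>>     rx_burst_with_extra_ref(uint8_t port, uint16_t queue,
>>                             struct rte_mbuf **mbufs, uint16_t burst)
>>     {
>>         uint16_t i, n = rte_eth_rx_burst(port, queue, mbufs, burst);
>>
>>         for (i = 0; i < n; i++)
>>             rte_mbuf_refcnt_update(mbufs[i], 1);
>>         return n;
>>     }
>>
>>     /* Worker side: simulate heavy per-packet processing, then release
>>      * the extra reference; the mbuf stays owned by the forwarding path
>>      * and is only freed for real after it has been transmitted. */
>>     static inline void
>>     worker_process(struct rte_mbuf *pkt)
>>     {
>>         rte_delay_us(50);       /* simulated heavy processing */
>>         rte_pktmbuf_free(pkt);  /* refcnt 2 -> 1, mbuf not freed */
>>     }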
>>
>> We executed the patched load-balancer application with the following
>> command line:
>>
>> ./build/load_balancer -l 3-7 -n 4 -- --rx "(0,0,3),(1,0,3)" --tx
>> "(0,3),(1,3)" --w "4" --lpm "16.0.0.0/8=>0; 48.0.0.0/8=>1;" --pos-lb 29
>> --rsz "1024, 32768, 1024, 1024"
>>
>> Then we generated traffic using the t-rex traffic generator and the sfr
>> test case. On our machine the issues start to happen when the traffic
>> exceeds ~6 Gbps, but this may vary depending on how powerful the test
>> machine is (by the way, we were able to reproduce this on different
>> types of hardware).
>>
>> A typical stack trace looks like this:
>>
>>     Thread 1 "load_balancer" received signal SIGSEGV, Segmentation fault.
>>     0x0000000000614475 in _mm_storeu_si128 (__B=..., __P=<optimized out>)
>>         at /usr/lib/gcc/x86_64-linux-gnu/5/include/emmintrin.h:716
>>     716         __builtin_ia32_storedqu ((char *)__P, (__v16qi)__B);
>>     (gdb) bt
>>     #0  0x0000000000614475 in _mm_storeu_si128 (__B=..., __P=<optimized out>)
>>         at /usr/lib/gcc/x86_64-linux-gnu/5/include/emmintrin.h:716
>>     #1  rxq_cq_decompress_v (elts=0x7fff3732bef0, cq=0x7ffff7f99380,
>>         rxq=0x7fff3732a980)
>>         at /root/dpdk-next-net/drivers/net/mlx5/mlx5_rxtx_vec_sse.c:679
>>     #2  rxq_burst_v (pkts_n=<optimized out>, pkts=0xa7c7b0, rxq=0x7fff3732a980)
>>         at /root/dpdk-next-net/drivers/net/mlx5/mlx5_rxtx_vec_sse.c:1242
>>     #3  mlx5_rx_burst_vec (dpdk_rxq=0x7fff3732a980, pkts=<optimized out>,
>>         pkts_n=<optimized out>)
>>         at /root/dpdk-next-net/drivers/net/mlx5/mlx5_rxtx_vec_sse.c:1277
>>     #4  0x000000000043c11d in rte_eth_rx_burst (nb_pkts=3599, rx_pkts=0xa7c7b0,
>>         queue_id=0, port_id=0 '\000')
>>         at /root/dpdk-next-net//x86_64-native-linuxapp-gcc/include/rte_ethdev.h:2781
>>     #5  app_lcore_io_rx (lp=lp@entry=0xa7c700, n_workers=n_workers@entry=1,
>>         bsz_rd=bsz_rd@entry=144, bsz_wr=bsz_wr@entry=144,
>>         pos_lb=pos_lb@entry=29 '\035')
>>         at /root/dpdk-next-net/examples/load_balancer/runtime.c:198
>>     #6  0x0000000000447dc0 in app_lcore_main_loop_io ()
>>         at /root/dpdk-next-net/examples/load_balancer/runtime.c:485
>>     #7  app_lcore_main_loop (arg=<optimized out>)
>>         at /root/dpdk-next-net/examples/load_balancer/runtime.c:669
>>     #8  0x0000000000495e8b in rte_eal_mp_remote_launch ()
>>     #9  0x0000000000441e0d in main (argc=<optimized out>, argv=<optimized out>)
>>         at /root/dpdk-next-net/examples/load_balancer/main.c:99
>>
>> The crash does not always happen at the exact same spot, but in our
>> tests always in the same function. In a few instances, instead of an
>> application crash, the system froze completely with what appeared to be
>> a kernel panic.
>> The last output looked like a crash in the interrupt handler of a mlx5
>> module, but unfortunately I cannot provide the exact output right now.
>>
>> All tests were performed under Ubuntu 16.04 server running a
>> 4.4.0-96-generic kernel, and the latest Mellanox OFED
>> MLNX_OFED_LINUX-4.1-1.0.2.0-ubuntu16.04-x86_64 was used.
>>
>> Any help with this issue is greatly appreciated.
>>
>> Best regards,
>> Martin