From: Andreas Hartmann
Subject: Re: Linux 4.14 - regression: broken tun/tap / bridge network with virtio - bisected
Date: Fri, 8 Dec 2017 08:21:16 +0100
Message-ID: <11c25b88-af9b-a1f7-b5f5-0420c75916d7@01019freenet.de>
To: Jason Wang, David Miller
Cc: netdev@vger.kernel.org, Michal Kubecek
References: <9615150a-eb78-2f9d-798f-6aa460932aec@01019freenet.de>
 <2e2392b7-84c5-be89-b0e5-5bae3b2fdaed@01019freenet.de>
 <4efbaf24-f419-2c8e-c705-59a5242b0575@01019freenet.de>
 <881560f8-54ec-e946-50cb-b2e80ddb5f97@01019freenet.de>
 <73b7a7b0-4264-2bd0-9e65-69841377f09f@redhat.com>
 <401a0715-fd28-63a3-8dfd-e89835d70db0@01019freenet.de>

On 12/06/2017 at 04:08 AM Jason Wang wrote:
>
> On 12/06/2017 at 00:23, Andreas Hartmann wrote:
>> On 12/05/2017 at 04:50 AM Jason Wang wrote:
>>>
>>> On 12/05/2017 at 00:28, Andreas Hartmann wrote:
>>>> On 12/03/2017 at 12:35 PM Andreas Hartmann wrote:
>>>>> On 12/01/2017 at 11:11 AM Andreas Hartmann wrote:
>>>>>> Hello!
>>>>>>
>>>>>> I hopefully could get rid of both of my problems (hanging network w/
>>>>>> virtio and endlessly hanging qemu process on VM shutdown) by upgrading
>>>>>> qemu from 2.6.2 to 2.10.1. I hope it will persist.
>>>>>
>>>>> It didn't persist. 10h later, the same problems happened again. They
>>>>> are just much harder to trigger now.
>>>>>
>>>>> I'm now trying it with
>>>>>
>>>>> CONFIG_RCU_NOCB_CPU=y and
>>>>> rcu_nocbs=0-15
>>>>>
>>>>> Since then, I haven't seen any problems any more. But that doesn't
>>>>> mean anything yet ... .
>>>>
>>>> Didn't work either. Disabling vhost_net's zcopy had no effect, either.
>>>>
>>>> => It's just finally broken since
>>>>
>>>> 2ddf71e23cc246e95af72a6deed67b4a50a7b81c
>>>> net: add notifier hooks for devmap bpf map
>>>
>>> Hi:
>>>
>>> Did you use an XDP devmap on the host? If not, please double check that
>>> this really is the first bad commit, since the patch should only matter
>>> when XDP/devmap is used on the host.
>>
>> How do I know if XDP/devmap is enabled / used? Could you please give
>> some hint?
>>
>> Thanks,
>> Andreas
>
> Something like:
>
> ./ip link | grep xdp
> 10: tap0: mtu 1500 xdp qdisc mq master kvmbr0 state UNKNOWN mode DEFAULT
>    group default qlen 1000
>     prog/xdp id 4 tag 0381911915bc8d7f
>
> But you need a fairly recent version of ip.

Thanks for this hint - I'm not using XDP. Therefore I rechecked my bisect
and found a mistake. The rebisect now leads to

[v2,RFC,11/13] net: Remove all references to SKB_GSO_UDP. [1]

For the repeated bisect I switched back to the original qemu 2.6.2
(instead of 2.10.1), because the problems can be triggered reliably with
2.6.2. All my VMs are using virtio_net. BTW: I couldn't see the problems
(sometimes the VM couldn't be stopped at all) if all my VMs use e1000 as
their interface instead.

This finding also matches the UDP packet which caused the stall quite
well - I already mentioned it here [2].
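Side note, in case somebody wants to verify this on their own machine:
whether a kernel still exposes UFO on an interface can be checked with
ethtool. "tap0" below is just the example name taken from Jason's output -
substitute the tap device of your VM:

ethtool -k tap0 | grep udp-fragmentation-offload
# before the UFO removal this printed e.g.
#   udp-fragmentation-offload: on
# on my 4.14 kernels the line doesn't show up at all anymore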
To prove it, I reverted the following patches from the series
"[PATCH v2 RFC 0/13] Remove UDP Fragmentation Offload support" [3]:

11/13 [v2,RFC,11/13] net: Remove all references to SKB_GSO_UDP. [4]
12/13 [v2,RFC,12/13] inet: Remove software UFO fragmenting code. [5]
13/13 [v2,RFC,13/13] net: Kill NETIF_F_UFO and SKB_GSO_UDP. [6]

and applied the reverts to Linux 4.14.4. It compiled fine and is running
fine. The vnet doesn't die anymore. Yet, I can't say whether the qemu
shutdown hangs are gone, too.

Obviously, there is something broken with the new UDP handling. Could you
please analyze this problem? I could test some more patches ... .

Thanks,
kind regards,
Andreas

[1] http://patchwork.ozlabs.org/patch/785411/
[2] https://www.mail-archive.com/netdev@vger.kernel.org/msg201635.html
[3] http://lists.openwall.net/netdev/2017/07/07/26
[4] http://patchwork.ozlabs.org/patch/785411/
[5] https://patchwork.ozlabs.org/patch/785413/
[6] https://patchwork.ozlabs.org/patch/785412/
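P.S.: In case somebody wants to repeat the test: the revert was roughly
the following. The three commit IDs are placeholders - look up the
mainline commits belonging to patches 11-13 above and revert them newest
first, so they apply cleanly:

git checkout -b ufo-revert v4.14.4     # checkout of the stable tree
git revert <commit-of-13/13> <commit-of-12/13> <commit-of-11/13>
make olddefconfig                      # reuses the existing .config
make -j"$(nproc)"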