Date: Wed, 18 Dec 2019 13:40:01 +0100
From: Jesper Dangaard Brouer
To: Björn Töpel
Cc: Netdev, Alexei Starovoitov, Daniel Borkmann, bpf, David Miller,
    Jakub Kicinski, Jesper Dangaard Brouer, John Fastabend,
    "Karlsson, Magnus", Jonathan Lemon, Maciej Fijalkowski,
    brouer@redhat.com
Subject: Re: [PATCH bpf-next 0/8] Simplify xdp_do_redirect_map()/xdp_do_flush_map() and XDP maps
Message-ID: <20191218134001.319349bc@carbon>
References: <20191218105400.2895-1-bjorn.topel@gmail.com>
 <20191218121132.4023f4f1@carbon>
 <20191218130346.1a346606@carbon>

On Wed, 18 Dec 2019 13:18:10 +0100
Björn Töpel wrote:

> On Wed, 18 Dec 2019 at 13:04, Jesper Dangaard Brouer wrote:
> >
> > On Wed, 18 Dec 2019 12:39:53 +0100
> > Björn Töpel wrote:
> >
> > > On Wed, 18 Dec 2019 at 12:11, Jesper Dangaard Brouer wrote:
> > > >
> > > > On Wed, 18 Dec 2019 11:53:52 +0100
> > > > Björn Töpel wrote:
> > > >
> > > > > $ sudo ./xdp_redirect_cpu --dev enp134s0f0 --cpu 22 xdp_cpu_map0
> > > > >
> > > > > Running XDP/eBPF prog_name:xdp_cpu_map5_lb_hash_ip_pairs
> > > > > XDP-cpumap      CPU:to  pps      drop-pps  extra-info
> > > > > XDP-RX          20      7723038  0         0
> > > > > XDP-RX          total   7723038  0
> > > > > cpumap_kthread  total   0        0         0
> > > > > redirect_err    total   0        0
> > > > > xdp_exception   total   0        0
> > > >
> > > > Hmm... I'm missing some counters on the kthread side.
> > > >
> > >
> > > Oh? Any ideas why? I just ran the upstream sample straight off.
> >
> > Looks like it happened in commit bbaf6029c49c ("samples/bpf: Convert
> > XDP samples to libbpf usage") (Cc Maciej).
> >
> > The old bpf_load.c will auto-attach the tracepoints... for libbpf
> > you have to be explicit about it.
> >
> > Can I ask you to also run a test with --stress-mode for
> > ./xdp_redirect_cpu, to flush out any potential RCU race-conditions
> > (don't provide output, this is just a robustness test).
> >
>
> Sure! Other than that, does the command line above make sense? I'm
> blasting UDP packets to core 20, and the idea was to re-route them to
> 22.

Yes, and I love that you are using CPUMAP xdp_redirect_cpu as a test.

Explaining what is going on (so you can say whether this is what you
wanted to test):

The "XDP-RX" number is the raw XDP redirect number, but the remote
CPU, where the network-stack processing starts, cannot operate at
7.7 Mpps, which the missing tracepoint numbers should have shown. You
can still observe the results via nstat, e.g.:

 # nstat -n && sleep 1 && nstat
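Since the libbpf-converted sample no longer attaches those
tracepoints, a quick way to confirm that the kernel-side cpumap
tracepoints still fire is to count them with perf for a second
(untested sketch; it assumes perf is installed and the xdp:*
tracepoints exist on your kernel):

 # perf stat -a -e xdp:xdp_cpumap_enqueue,xdp:xdp_cpumap_kthread \
             -e xdp:xdp_redirect_err,xdp:xdp_exception -- sleep 1

Keep in mind that perf counts tracepoint invocations, which for cpumap
are per bulk rather than per packet, so the numbers will not match the
pps the sample would have reported; non-zero counts just confirm that
the kthread side is active.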
On the remote CPU 22, the SKB will be constructed, and likely dropped
due to overloading the network stack and not having a UDP listener on
that port. I sometimes use:

 # iptables -t raw -I PREROUTING -p udp --dport 9 -j DROP

to drop the UDP packets at an earlier and consistent stage.

The CPUMAP has been carefully designed so that a "producer" cannot be
slowed down by memory operations done by the "consumer"; this is
mostly achieved via ptr_ring and careful bulking (cache-lines). As
your driver i40e doesn't have 'page_pool', you are not affected by
the return channel.

Funny test/detail: i40e uses a refcnt recycle scheme, based on the
size of the RX-ring, thus it is affected by a longer outstanding
queue. The CPUMAP has an intermediate queue, which will be full in
this overload setting. Try to increase or decrease the parameter
--qsize (remember to place it as the first argument), and see if this
was the limiting factor for your XDP-RX number.

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer