From: Alexei Starovoitov
Date: Wed, 23 Oct 2019 10:42:00 -0700
Subject: Re: [Intel-wired-lan] FW: [PATCH bpf-next 2/4] xsk: allow AF_XDP sockets to receive packets directly from a queue
To: "Samudrala, Sridhar"
Cc: Björn Töpel, Toke Høiland-Jørgensen, Jakub Kicinski, Netdev, intel-wired-lan, "Herbert, Tom", "Fijalkowski, Maciej", bpf@vger.kernel.org, "Karlsson, Magnus"
List-ID: X-Mailing-List: bpf@vger.kernel.org

On Tue, Oct 22, 2019 at 12:06 PM Samudrala, Sridhar wrote:
>
> OK. Here is another data point that shows the perf report with the same test but
> CPU mitigations turned OFF. Here bpf_prog overhead goes down from almost
> (10.18 + 4.51)% to (3.23 + 1.44)%.
>
>  21.40%  ksoftirqd/28  [i40e]                     [k] i40e_clean_rx_irq_zc
>  14.13%  xdpsock       [i40e]                     [k] i40e_clean_rx_irq_zc
>   8.33%  ksoftirqd/28  [kernel.vmlinux]           [k] xsk_rcv
>   6.09%  ksoftirqd/28  [kernel.vmlinux]           [k] xdp_do_redirect
>   5.19%  xdpsock       xdpsock                    [.] main
>   3.48%  ksoftirqd/28  [kernel.vmlinux]           [k] bpf_xdp_redirect_map
>   3.23%  ksoftirqd/28  bpf_prog_3c8251c7e0fef8db  [k] bpf_prog_3c8251c7e0fef8db
>
> So a major component of the bpf_prog overhead seems to be due to the CPU
> vulnerability mitigations.

I think that's an incorrect conclusion, because the JIT is not emitting any
retpolines (there are no indirect calls in bpf), so there should be no
difference in bpf_prog runtime with or without mitigations. Also you're
running as root, so no spectre mitigations apply either.

This 3% seems like a lot for a function that does a few loads that should hit
d-cache and one direct call. Please investigate why you're seeing this 10% cpu
cost when mitigations are on; perf report/annotate is the best tool for that.
Also please double-check that you're using the latest perf, since bpf
performance analysis was greatly improved a few versions ago. I don't think
old perf would show bogus numbers, but it's better to run the latest.

> The other component is the bpf_xdp_redirect_map() codepath.
>
> Let me know if it helps to collect any other data that should further help
> with the perf analysis.
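[As a rough sketch of the perf workflow suggested above: the CPU number (28)
and the bpf_prog_3c8251c7e0fef8db symbol name are taken from the report in
this thread, and the record duration is arbitrary. Symbolizing and annotating
JIT-compiled bpf programs requires a reasonably recent perf and kernel.]

```shell
# Record call-graph samples on the CPU servicing the rx queue
# while the xdpsock benchmark is running.
perf record -C 28 -g -- sleep 10

# Per-symbol overhead; JITed bpf programs appear as bpf_prog_<tag> entries.
perf report --no-children

# Per-instruction breakdown of the JITed program, to see where the
# cycles attributed to it actually go.
perf annotate bpf_prog_3c8251c7e0fef8db
```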