From: Cong Wang
Date: Thu, 13 May 2021 19:53:27 -0700
Subject: Re: [RFC Patch bpf-next] bpf: introduce bpf timer
To: Jamal Hadi Salim
Cc: Joe Stringer, Alexei Starovoitov, Linux Kernel Network Developers, bpf, Xiongchun Duan, Dongdong Wang, Muchun Song, Cong Wang, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song, Pedro Tammela
X-Mailing-List: bpf@vger.kernel.org
On Thu, May 13, 2021 at 11:46 AM Jamal Hadi Salim wrote:
>
> On 2021-05-12 6:43 p.m., Jamal Hadi Salim wrote:
> >
> > Will run some tests tomorrow to see the effect of batching vs no
> > batching, and capture the cost of syscalls and CPU.
>
> So here are some numbers:
> Processor: Intel(R) Xeon(R) Gold 6230R CPU @ 2.10GHz
> This machine is very similar to where a real deployment
> would happen.
>
> Hyperthreading was turned off so we could dedicate the core to the
> dumping process, and performance mode was on, so no frequency-scaling
> meddling.
> Tests were run about 3 times each. Results were eyeballed to make
> sure the deviation was reasonable.
> 100% of the one core was used just for dumping during each run.

I checked with Cilium users here at ByteDance; they actually observed
100% CPU usage too.

> bpftool does linear retrieval whereas our tool does batch dumping.
> bpftool prints the dumped results; for our tool we just count the
> number of entries retrieved (the cost would have been higher if we
> actually printed). In any case, in the real setup there is a
> processing cost which is much higher.
>
> Summary: the dumping is problematic cost-wise as the number of
> entries increases. While batching does improve things, it doesn't
> solve our problem (like I said, we have up to 16M entries and most
> of the time we are dumping useless things).

Thank you for sharing these numbers! Hopefully they can convince
people here to accept the bpf timer. I will include your use case and
performance numbers in my next update.
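
For anyone who wants to reproduce the comparison, below is a minimal
sketch of the batched side of the dump loop, using libbpf's
bpf_map_lookup_batch() (BPF_MAP_LOOKUP_BATCH under the hood). The
key/value sizes, the 4096-entry batch size, and the count-only loop
are illustrative assumptions modeled on the description above, not
Jamal's actual tool:

#include <errno.h>
#include <stdio.h>
#include <linux/types.h>
#include <bpf/bpf.h>

#define BATCH_SZ 4096

/* Count all entries in a map via batched lookups. Assumes 4-byte
 * keys and 8-byte values; adjust for the real map layout. */
static unsigned long dump_map_batched(int map_fd)
{
	__u32 in_batch, out_batch;   /* opaque resume cursors */
	static __u32 keys[BATCH_SZ];
	static __u64 vals[BATCH_SZ];
	void *in = NULL;             /* NULL means: start from the top */
	unsigned long total = 0;
	int err;

	do {
		__u32 count = BATCH_SZ;

		/* One syscall fetches up to BATCH_SZ entries; ENOENT
		 * (with a possibly partial batch) signals the end. */
		err = bpf_map_lookup_batch(map_fd, in, &out_batch,
					   keys, vals, &count, NULL);
		if (err && errno != ENOENT) {
			fprintf(stderr, "lookup_batch failed: %d\n", -errno);
			break;
		}
		total += count;      /* count only, as in the test above */

		in_batch = out_batch;
		in = &in_batch;
	} while (!err);

	return total;
}

Even with the syscall count cut by roughly the batch size, the loop
still has to walk every entry to find the few that matter, which is
exactly why an in-kernel expiry mechanism like the proposed bpf timer
scales better for 16M-entry maps.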