From: "Hao Xiang ."
Date: Mon, 28 Nov 2022 15:03:05 -0800
Subject: Re: [External] Re: [PATCH bpf-next v1 0/4] Add BPF htab map's used size for monitoring
To: Alexei Starovoitov
Cc: "Ho-Ren (Jack) Chuang", Alexei Starovoitov, Hao Luo, Jiri Olsa, Andrii Nakryiko, Daniel Borkmann, John Fastabend, Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Quentin Monnet, Mykola Lysenko, Shuah Khan, Nathan Chancellor, Nick Desaulniers, Tom Rix, Joanne Koong, Kui-Feng Lee, Lorenzo Bianconi, Maxim Mikityanskiy, Punit Agrawal, Yifei Ma, Xiaoning Ding, bpf, Ho-Ren Chuang, LKML, "open list:KERNEL SELFTEST FRAMEWORK", clang-built-linux
List-ID: linux-kselftest@vger.kernel.org

Hi Alexei,

We can use the existing bpf_stats_enabled switch to gate the added
overhead. The switch is off by default, so I believe there will be no
extra overhead once we do that. Could you please give this a second
thought?

On Mon, Nov 7, 2022 at 4:30 PM Hao Xiang . wrote:
>
> Hi Alexei,
>
> We understand the concern about the added performance overhead. We had
> some discussion about this while working on the patch and decided to
> give it a try (my bad).
>
> Adding some more context. We are leveraging the BPF_OBJ_GET_INFO_BY_FD
> syscall to trace CPU usage per prog and memory usage per map. We would
> like to use this patch to add an interface for map types to report
> their internal "count". For instance, we are thinking of having the
> map types below report a "count", which would add no overhead to the
> hot path:
> 1. ringbuf: return its "count" by calculating the distance between
>    producer_pos and consumer_pos
> 2. queue and stack: return their "count" from the head's position
> 3. dev map hash: return its "count" from items
>
> Other map types, like the hashtab pre-allocation case, would
> introduce overhead on the hot path in order to count the stats. We
> think we can find alternative solutions for those (e.g., iterate the
> map and count, or count only when the bpf_stats_enabled switch is
> on). There are cases where this can't be done at the application
> level, because applications don't see the internal stats needed to do
> the right counting.
>
> We can remove the counting for the pre-allocated case in this patch.
> Please let us know what you think.
>
> Thanks, Hao
>
> On Sat, Nov 5, 2022 at 9:20 AM Alexei Starovoitov wrote:
> >
> > On Fri, Nov 4, 2022 at 7:52 PM Ho-Ren (Jack) Chuang wrote:
> > >
> > > Hello everyone,
> > >
> > > We have prepared patches to address an issue from a previous
> > > discussion. The previous discussion email thread is here:
> > > https://lore.kernel.org/all/CAADnVQLBt0snxv4bKwg1WKQ9wDFbaDCtZ03v1-LjOTYtsKPckQ@mail.gmail.com/
> >
> > Rephrasing what was said earlier.
> > We're not keeping the count of elements in a preallocated hash map
> > and we are not going to add one.
> > The bpf prog needs to do the accounting on its own if it needs
> > this kind of statistics.
> > Keeping the count for non-prealloc is already significant
> > performance overhead. We don't trade performance for stats.