Subject: Re: [PATCH bpf-next v5 4/8] bpf: Introduce cgroup iter
From: Hao Luo
Date: Wed, 3 Aug 2022 17:29:53 -0700
To: Andrii Nakryiko
Cc: Yosry Ahmed, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
 Martin KaFai Lau, Song Liu, Yonghong Song, Tejun Heo, Zefan Li,
 Johannes Weiner, Shuah Khan, Michal Hocko, KP Singh,
 Benjamin Tissoires, John Fastabend, Michal Koutný, Roman Gushchin,
 David Rientjes, Stanislav Fomichev, Greg Thelen, Shakeel Butt,
 linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
 bpf@vger.kernel.org, cgroups@vger.kernel.org, Kui-Feng Lee
References: <20220722174829.3422466-1-yosryahmed@google.com>
 <20220722174829.3422466-5-yosryahmed@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Aug 3, 2022 at 1:40 PM Andrii Nakryiko wrote:
>
> On Wed, Aug 3, 2022 at 1:30 PM Hao Luo wrote:
> >
> > On Tue, Aug 2, 2022 at 3:50 PM Andrii Nakryiko wrote:
> > >
> > > On Tue, Aug 2, 2022 at 3:27 PM Hao Luo wrote:
> > > >
> > > > On Mon, Aug 1, 2022 at 8:43 PM Andrii Nakryiko wrote:
> > > > >
> > > > > On Fri, Jul 22, 2022 at 10:48 AM Yosry Ahmed wrote:

[...]
> > > > > > +};
> > > > > > +
> > > > > >  union bpf_iter_link_info {
> > > > > >         struct {
> > > > > >                 __u32   map_fd;
> > > > > >         } map;
> > > > > > +
> > > > > > +       /* cgroup_iter walks either the live descendants of a cgroup subtree, or the
> > > > > > +        * ancestors of a given cgroup.
> > > > > > +        */
> > > > > > +       struct {
> > > > > > +               /* Cgroup file descriptor. This is root of the subtree if walking
> > > > > > +                * descendants; it's the starting cgroup if walking the ancestors.
> > > > > > +                * If it is left 0, the traversal starts from the default cgroup v2
> > > > > > +                * root. For walking v1 hierarchy, one should always explicitly
> > > > > > +                * specify the cgroup_fd.
> > > > > > +                */
> > > > > > +               __u32   cgroup_fd;
> > > > >
> > > > > Now, similar to what I argued in regard of pidfd vs pid, I think the
> > > > > same applies to cgroup_fd vs cgroup_id. Why can't we support both?
> > > > > cgroup_fd has some benefits, but cgroup_id is nice due to simplicity
> > > > > and not having to open/close/keep extra FDs (which can add up if we
> > > > > want to periodically query something about a large set of cgroups).
> > > > > Please see my arguments from [0] above.
> > > > >
> > > > > Thoughts?
> > > > >
> > > >
> > > > We can support both, it's a good idea IMO. But what exactly is the
> > > > interface going to look like? Can you be more specific about that?
> > > > Below is something I tried based on your description.
> > > >
> > > > @@ -91,6 +91,18 @@ union bpf_iter_link_info {
> > > >         struct {
> > > >                 __u32 map_fd;
> > > >         } map;
> > > > +       struct {
> > > > +               /* PRE/POST/UP/SELF */
> > > > +               __u32 order;
> > > > +               struct {
> > > > +                       __u32 cgroup_fd;
> > > > +                       __u64 cgroup_id;
> > > > +               } cgroup;
> > > > +               struct {
> > > > +                       __u32 pid_fd;
> > > > +                       __u64 pid;
> > > > +               } task;
> > > > +       };
> > > > };
> > > >
> > >
> > > So I wouldn't combine task and cgroup definition together, let's keep
> > > them independent.
> > >
> > > then for cgroup we can do something like:
> > >
> > > struct {
> > >         __u32 order;
> > >         __u32 cgroup_fd; /* cgroup_fd ^ cgroup_id, exactly one can be non-zero */
> > >         __u32 cgroup_id;
> > > } cgroup
> > >
> > > Similar idea with task, but it's a bit more complicated because there
> > > we have a target that can be pid, pidfd, or cgroup (cgroup_fd and
> > > cgroup_id). I haven't put much thought into the best representation,
> > > though.
> > >
> >
> > The cgroup part sounds good to me. For the full picture, how about
> > this? I'm just trying a prototype, hoping that it can help people
> > get a clear picture.
> >
> > union bpf_iter_link_info {
> >         struct {
> >                 __u32 map_fd;
> >         } map;
> >         struct {
> >                 __u32 order;     /* PRE/POST/UP/SELF */
> >                 __u32 cgroup_fd;
> >                 __u64 cgroup_id;
> >         } cgroup;
>
> lgtm
>
> >         struct {
> >                 __u32 pid;
> >                 __u32 pid_fd;
> >                 __u64 cgroup_id;
> >                 __u32 cgroup_fd;
> >                 __u32 mode;      /* SELF or others */
>
> I'd move mode to be first. I'm undecided on using 4 separate fields
> for pid/pid_fd/cgroup_{id,fd} vs a single union (or just a generic
> "u64 target", where mode defines how we should treat target --
> whether it's a pid, pid_fd, cgroup ID or FD). I'm fine either way, I
> think. But for the cgroup case, not having to duplicate PRE/POST/UP/SELF
> for cgroup id and then for cgroup fd seems like a win. So separate
> fields might be better. It's also pretty extensible. And I'm
> personally not worried about using a few more bytes in bpf_attr for
> disjoint fields like this.
>

Sounds good. Thanks for the clarification. Using separate fields looks
good to me. Since we settled on the cgroup part, I will apply the update
in cgroup_iter v7.

> >         } task;
> > };
> >
> > > > > > +       __u32 traversal_order;
> > > > > > +       } cgroup;
> > > > > >  };
> > > > > >
> > > > > >  /* BPF syscall commands, see bpf(2) man-page for more details. */
> > > > >
> > > > > [...]