From: Namhyung Kim
Date: Sun, 9 May 2021 00:13:48 -0700
Subject: Re: [PATCH v3 1/2] perf/core: Share an event with multiple cgroups
To: Peter Zijlstra
Cc: Ingo Molnar, Arnaldo Carvalho de Melo, Jiri Olsa, Mark Rutland,
	Alexander Shishkin, LKML, Stephane Eranian, Andi Kleen, Ian Rogers,
	Song Liu, Tejun Heo, kernel test robot, Thomas Gleixner
References: <20210413155337.644993-1-namhyung@kernel.org>
	<20210413155337.644993-2-namhyung@kernel.org>

Hi Peter,

Thinking about the interface a bit more...

On Fri, Apr 16, 2021 at 4:59 AM Peter Zijlstra wrote:
>
> On Fri, Apr 16, 2021 at 08:22:38PM +0900, Namhyung Kim wrote:
> > On Fri, Apr 16, 2021 at 7:28 PM Peter Zijlstra wrote:
> > >
> > > On Fri, Apr 16, 2021 at 11:29:30AM +0200, Peter Zijlstra wrote:
> > >
> > > > > So I think we've had proposals for being able to close fds in the past;
> > > > > while preserving groups etc. We've always pushed back on that because of
> > > > > the resource limit issue. By having each counter be a filedesc we get a
> > > > > natural limit on the amount of resources you can consume. And in that
> > > > > respect, having to use 400k fds is things working as designed.
> > > > >
> > > > > Anyway, there might be a way around this..
> > >
> > > So how about we flip the whole thing sideways, instead of doing one
> > > event for multiple cgroups, do an event for multiple-cpus.
> > >
> > > Basically, allow:
> > >
> > >   perf_event_open(.pid=fd, cpu=-1, .flag=PID_CGROUP);
> > >
> > > Which would have the kernel create nr_cpus events [the corollary is that
> > > we'd probably also allow: (.pid=-1, cpu=-1) ].
> >
> > Do you mean it'd have separate perf_events per cpu internally?
> > From a cpu's perspective, there's nothing changed, right?
> > Then it will have the same performance problem as of now.
>
> Yes, but we'll not end up in ioctl() hell. The interface is sooo much
> better. The performance thing just means we need to think harder.

So I'd like to have vector support for cgroups, but it could be
extended later.  So open with a flag saying it'd accept a vector:

  fd = perf_event_open(.pid=-1, .cpu=N, .flag=VECTOR);

Then it'd still need an additional interface (probably an ioctl) to
set (or append) the vector:

  ioctl(fd, ADD_VECTOR, { .type = VEC_CGROUP, .nr = N, ... });

Maybe we also need to add FORMAT_VECTOR and use read(v) or friends
to read the contents for each entry.  It'd be nice if it could have
vector-specific info, like the cgroup id in this case.

What do you think?

Thanks,
Namhyung
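
A minimal userspace sketch of the call sequence proposed above, for
illustration only.  PERF_FLAG_VECTOR, PERF_EVENT_IOC_ADD_VECTOR,
struct perf_event_vector and PERF_VEC_CGROUP are hypothetical
placeholders for the proposed interface, not existing perf UAPI, and
the cgroup fds are dummy values:

  #include <stdint.h>
  #include <stdio.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <sys/syscall.h>
  #include <linux/perf_event.h>

  /* Hypothetical UAPI additions for the proposal (placeholders only). */
  #define PERF_FLAG_VECTOR          (1UL << 4)
  #define PERF_EVENT_IOC_ADD_VECTOR _IOW('$', 12, struct perf_event_vector *)
  #define PERF_VEC_CGROUP           1

  struct perf_event_vector {
          uint32_t type;          /* e.g. PERF_VEC_CGROUP */
          uint32_t nr;            /* number of entries */
          uint64_t entries[8];    /* cgroup fds; fixed size for the sketch */
  };

  static int sys_perf_event_open(struct perf_event_attr *attr, pid_t pid,
                                 int cpu, int group_fd, unsigned long flags)
  {
          return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
  }

  int main(void)
  {
          struct perf_event_attr attr = {
                  .type   = PERF_TYPE_HARDWARE,
                  .size   = sizeof(attr),
                  .config = PERF_COUNT_HW_CPU_CYCLES,
          };

          /* One event on CPU 0, opened with the proposed VECTOR flag. */
          int fd = sys_perf_event_open(&attr, -1, 0, -1, PERF_FLAG_VECTOR);
          if (fd < 0) {
                  perror("perf_event_open");  /* expected on current kernels */
                  return 1;
          }

          /* Attach a vector of two cgroups; real fds would come from
           * open("/sys/fs/cgroup/...", O_RDONLY). */
          struct perf_event_vector vec = {
                  .type    = PERF_VEC_CGROUP,
                  .nr      = 2,
                  .entries = { 10, 11 },      /* dummy cgroup fds */
          };
          if (ioctl(fd, PERF_EVENT_IOC_ADD_VECTOR, &vec) < 0)
                  perror("ioctl(ADD_VECTOR)");

          /* With a FORMAT_VECTOR-style read format, a single read()
           * would then return one count (plus e.g. a cgroup id) per
           * vector entry instead of one count per fd. */
          close(fd);
          return 0;
  }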