From: Alex Deucher
Date: Fri, 7 May 2021 12:19:13 -0400
Subject: Re: [RFC] Add BPF_PROG_TYPE_CGROUP_IOCTL
To: Daniel Vetter
Cc: Kenny Ho, Song Liu, Andrii Nakryiko, DRI Development, Daniel Borkmann, "open list:CONTROL GROUP (CGROUP)", Brian Welty, John Fastabend, Alexei Starovoitov, amd-gfx list, Martin KaFai Lau, Linux-Fsdevel, Alexander Viro, Network Development, KP Singh, Yonghong Song, bpf, Dave Airlie, Alex Deucher
References: <20201103232805.6uq4zg3gdvw2iiki@ast-mbp.dhcp.thefacebook.com>
List-ID: linux-fsdevel@vger.kernel.org

On Fri, May 7, 2021 at 12:13 PM Daniel Vetter wrote:
>
> On Fri, May 07, 2021 at 11:33:46AM -0400, Kenny Ho wrote:
> > On Fri, May 7, 2021 at 4:59 AM Daniel Vetter wrote:
> > >
> > > Hm I missed that. I feel like time-sliced-of-a-whole gpu is the
> > > easier gpu cgroups controller to get started with, since it's much
> > > closer to other cgroups that control bandwidth of some kind. Whether
> > > it's i/o bandwidth or compute bandwidth is kind of a wash.
> >
> > sriov/time-sliced-of-a-whole gpu does not really need a cgroup
> > interface since each slice appears as a standalone device. This is
> > already in production (not using cgroup) with users. The cgroup
> > proposal has always been parallel to that in many senses: 1) spatial
> > partitioning as an independent but equally valid use case as time
> > sharing, 2) sub-device resource control as opposed to full device
> > control, motivated by the workload characterization paper. It was
> > never about time vs space in terms of use cases, but about having a
> > new API for users to do spatial subdevice partitioning.
> >
> > > CU mask feels a lot more like an isolation/guaranteed forward
> > > progress kind of thing, and I suspect that's always going to be a
> > > lot more gpu hw specific than anything we can reasonably put into a
> > > general cgroups controller.
> >
> > The first half is correct but I disagree with the conclusion. The
> > analogy I would use is a multi-core CPU. The capability of individual
> > CPU cores, core count and core arrangement may be hw specific, but
> > there are general interfaces to support selection of these cores. A
> > CU mask may be hw specific, but spatial partitioning as an idea is
> > not. Most gpu vendors have the concept of sub-device compute units
> > (EU, SE, etc.); OpenCL has the concept of subdevice in the language.
> > I don't see any obstacle to vendors implementing spatial
> > partitioning, just as many CPU vendors support the idea of
> > multi-core.
> >
> > > Also for the time slice cgroups thing, can you pls give me pointers
> > > to these old patches that had it, and how it's done? I very
> > > obviously missed that part.
> >
> > I think you misunderstood what I wrote earlier. The original proposal
> > was about spatial partitioning of subdevice resources, not time
> > sharing via cgroup (since time sharing is already supported
> > elsewhere).
>
> Well, SR-IOV time-sharing is for virtualization. cgroups is for
> containerization, which is just virtualization but with less overhead
> and more security bugs.
>
> More or less.
>
> So either I still have things wrong, or we'll get time-sharing for
> virtualization and partitioning of CUs for containerization. That
> doesn't make that much sense to me.

You could still potentially do SR-IOV for containerization. You'd
just pass one of the PCI VFs (virtual functions) to the container and
you'd automatically get the time slice. I don't see why cgroups would
be a factor there.

Alex

> Since time-sharing is the first thing that's done for virtualization, I
> think it's probably also the most reasonable thing to start with for
> containers.
> -Daniel
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
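
---

The "subdevice in the language" concept Kenny points to maps to OpenCL
1.2's device fission API, clCreateSubDevices, which splits one device
into smaller subdevices along compute-unit boundaries. A minimal sketch
follows; the choice of the first GPU and a partition size of four
compute units are illustrative assumptions, not anything prescribed in
the thread.

    #define CL_TARGET_OPENCL_VERSION 120
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platform;
        cl_device_id device;

        /* Grab the first GPU on the first platform. */
        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

        /* Partition the device into subdevices of 4 compute units each
         * (4 is an arbitrary illustrative size). */
        const cl_device_partition_property props[] = {
            CL_DEVICE_PARTITION_EQUALLY, 4, 0
        };
        cl_device_id subdevs[16];
        cl_uint n = 0;
        cl_int err = clCreateSubDevices(device, props, 16, subdevs, &n);
        if (err == CL_SUCCESS)
            printf("created %u subdevices of 4 CUs each\n", n);
        else
            fprintf(stderr, "partitioning not supported: %d\n", err);
        return 0;
    }

Each returned subdevice can then back its own context and queues, which
is the same spatial-partitioning idea, expressed at the language level
rather than through a kernel interface.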
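The VF-passthrough scenario Alex describes starts from the standard PCI
sysfs knob for creating virtual functions. A minimal sketch in C,
assuming root privileges and a hypothetical SR-IOV-capable device at
0000:03:00.0 (a placeholder address):

    #include <stdio.h>

    int main(void)
    {
        /* 0000:03:00.0 is a placeholder; the device and its driver
         * must support SR-IOV for this write to succeed. */
        const char *path =
            "/sys/bus/pci/devices/0000:03:00.0/sriov_numvfs";
        FILE *f = fopen(path, "w");
        if (!f) {
            perror(path);
            return 1;
        }
        fprintf(f, "4\n");   /* ask the driver to create four VFs */
        fclose(f);
        /* Each VF now enumerates as its own PCI device and can be
         * bound to vfio-pci for a VM, or handed to a container. */
        return 0;
    }

Once the VFs exist, the hardware's own time-slicing applies to each VF,
which is why no cgroup involvement is needed in that model.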