Date: Wed, 3 Feb 2021 12:09:32 +0100
From: Daniel Vetter
To: Kenny Ho
Cc: Daniel Vetter, Alexei Starovoitov, Dave Airlie, Kenny Ho, Alexander Viro,
 Alexei Starovoitov, Daniel Borkmann, Martin KaFai Lau, Song Liu,
 Yonghong Song, Andrii Nakryiko, John Fastabend, KP Singh, bpf,
 Network Development, Linux-Fsdevel, "open list:CONTROL GROUP (CGROUP)",
 Alex Deucher, amd-gfx list, DRI Development, Brian Welty
Subject: Re: [RFC] Add BPF_PROG_TYPE_CGROUP_IOCTL
References: <20201103053244.khibmr66p7lhv7ge@ast-mbp.dhcp.thefacebook.com>
 <20201103210418.q7hddyl7rvdplike@ast-mbp.dhcp.thefacebook.com>
 <20201103232805.6uq4zg3gdvw2iiki@ast-mbp.dhcp.thefacebook.com>
X-Mailing-List: bpf@vger.kernel.org

On Mon, Feb 01, 2021 at 11:51:07AM -0500, Kenny Ho wrote:
> [Resent in plain text.]
>
> On Mon, Feb 1, 2021 at 9:49 AM Daniel Vetter wrote:
> > - there's been a pile of cgroups proposals to manage gpus at the drm
> >   subsystem level, some by Kenny, and frankly this at least looks a bit
> >   like a quick hack to sidestep the consensus process for that.
> No Daniel, this is a quick *draft* to get a conversation going. Bpf was
> actually a path suggested by Tejun back in 2018, so I think you are
> mischaracterizing this quite a bit.
>
> "2018-11-20 Kenny Ho:
> To put the questions in more concrete terms, let's say a user wants to
> expose a certain part of a gpu to a particular cgroup, similar to the
> way selective cpu cores are exposed to a cgroup via cpuset. How should
> we go about enabling such functionality?
>
> 2018-11-20 Tejun Heo:
> Do what the intel driver or bpf is doing? It's not difficult to hook
> into cgroup for identification purposes."

Yeah, but if you go fully amd-specific for this, you might as well have a
specific BPF hook which is called in amdgpu/kfd and returns the CU mask
for a given cgroup (and figures that out however it pleases), not a
generic framework which lets you build pretty much any possible cgroups
controller for anything else using BPF.

Trying to filter anything at the generic ioctl layer just doesn't feel
like a great idea that's long-term maintainable. E.g. what happens if
there's new uapi for command submission/context creation and now your bpf
filter isn't catching all access anymore? If it's an explicit hook that
explicitly computes the CU mask, then we can add more checks as needed.
With generic ioctl filtering that's impossible.

Plus I'm still not sure whether that's really a good idea, since if cloud
companies have to build their own bespoke container stuff for every gpu
vendor, that's quite a bad platform we're building. And "I'd like to make
sure my gpu is used fairly among multiple tenants" really isn't a
use-case that's specific to amd.

If this were something very hw-specific like cache assignment and quality
of service stuff or things like that, then vendor-specific imo makes
sense. But for CU masks we're essentially cutting up the compute
resources in some way, and I kinda expect everyone with a gpu who cares
about isolating workloads with cgroups wants to do that.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
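
[For illustration only: a minimal sketch of the kind of explicit,
driver-level hook discussed above, where the driver asks "which compute
units may this cgroup use?" at a well-defined point such as compute queue
creation, instead of filtering every ioctl. All structure, function and
constant names below are hypothetical and do not correspond to any actual
amdgpu/kfd, cgroup or BPF API.]

/*
 * Hypothetical sketch only.  The driver calls this at queue/context
 * creation; a BPF program, a cgroup controller or a fixed policy could
 * sit behind it.  Because the mask is applied where the hardware is
 * actually programmed, new submission uAPI cannot silently bypass it.
 */

#include <linux/cgroup.h>
#include <linux/bitmap.h>

#define KFD_MAX_CU 64	/* assumption: upper bound on CUs per GPU */

struct kfd_cu_mask_request {
	struct cgroup *cgrp;	/* cgroup of the submitting process */
	u32 gpu_id;		/* GPU the queue is being created on */
	DECLARE_BITMAP(cu_mask, KFD_MAX_CU);	/* filled in by the hook */
};

int kfd_cgroup_get_cu_mask(struct kfd_cu_mask_request *req)
{
	/* Default policy: no restriction, all CUs allowed. */
	bitmap_fill(req->cu_mask, KFD_MAX_CU);

	/* A per-cgroup policy would narrow the mask here. */
	return 0;
}

[The point of this shape, as argued in the mail above, is that additional
checks can be added at the hook over time, which a generic ioctl filter
cannot guarantee.]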