From: Christian König
Date: Fri, 14 May 2021 10:04:49 +0200
Subject: Re: [PATCH 0/7] Per client engine busyness
To: "Nieto, David M", Alex Deucher, Tvrtko Ursulin
Cc: Intel Graphics Development, Mailing list - DRI developers
Message-ID: <39ccc2ef-05d1-d9f0-0639-ea86bef58b80@amd.com>
References: <20210513110002.3641705-1-tvrtko.ursulin@linux.intel.com>

Well, in my opinion exposing it through fdinfo turned out to be a really
clean approach.

It describes exactly the per-file-descriptor information we need.

Making that device-driver-independent is potentially useful as well.
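To make that concrete: the kernel already routes the ->show_fdinfo hook
of struct file_operations into /proc/<pid>/fdinfo/<fd>, so a driver only
has to print its per-client counters from there. A minimal sketch (the
client bookkeeping, struct example_client and busy_ns[], and the key
format are assumptions for illustration, not the merged amdgpu code):

/* Hedged sketch: ->show_fdinfo and seq_printf() are the stock kernel
 * mechanism; everything named "example" is made up for illustration. */
#include <linux/fs.h>
#include <linux/module.h>
#include <linux/seq_file.h>
#include <drm/drm_file.h>

#define EXAMPLE_NUM_ENGINE_CLASSES 4

struct example_client {
        u64 busy_ns[EXAMPLE_NUM_ENGINE_CLASSES];
};

static void example_show_fdinfo(struct seq_file *m, struct file *f)
{
        struct drm_file *file_priv = f->private_data;
        struct example_client *client = file_priv->driver_priv;
        unsigned int i;

        /* One line per engine class: accumulated busy time in ns. */
        for (i = 0; i < EXAMPLE_NUM_ENGINE_CLASSES; i++)
                seq_printf(m, "engine-%u busy:\t%llu ns\n", i,
                           (unsigned long long)client->busy_ns[i]);
}

static const struct file_operations example_drm_fops = {
        .owner        = THIS_MODULE,
        /* .open, .release, .unlocked_ioctl, .mmap, ... as usual */
        .show_fdinfo  = example_show_fdinfo,
};

The nice property is that no new ABI directory appears anywhere; any
tool that already walks /proc picks the counters up per open DRM file.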

Regards,
Christian.

Am 14.05.21 um 09:22 schrieb Nieto, David M:

[AMD Official Use Only - Internal Distribution Only]


We had entertained the idea of exposing the processes as sysfs nodes as you proposed, but we had concerns about exposing process info in there, especially since /proc already exists for that purpose.

I think if you were to follow that approach, we could have tools like top that expose GPU engine usage.
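For a top(1)-style tool, the /proc route means client discovery is just
walking open fds; a rough sketch (only the procfs paths themselves are
real, the GPU-specific filtering is assumed and left as a comment):

/* Hedged sketch: walk one process's open fds and dump the matching
 * fdinfo files; a real tool would filter for its GPU driver's keys. */
#include <stdio.h>
#include <dirent.h>

static void dump_fdinfo(const char *pid)
{
        char path[128], line[256];
        struct dirent *de;

        snprintf(path, sizeof(path), "/proc/%s/fd", pid);
        DIR *d = opendir(path);
        if (!d)
                return;

        while ((de = readdir(d)) != NULL) {
                if (de->d_name[0] == '.')
                        continue;
                snprintf(path, sizeof(path), "/proc/%s/fdinfo/%s",
                         pid, de->d_name);
                FILE *f = fopen(path, "r");
                if (!f)
                        continue;
                while (fgets(line, sizeof(line), f))
                        fputs(line, stdout);  /* filter GPU keys here */
                fclose(f);
        }
        closedir(d);
}

int main(int argc, char **argv)
{
        if (argc == 2)
                dump_fdinfo(argv[1]);
        return 0;
}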

From: Alex Deucher <alexdeucher@gmail.com>
Sent: Thursday, May 13, 2021 10:58 PM
To: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>; Nieto, David M <David.Nieto@amd.com>; Koenig, Christian <Christian.Koenig@amd.com>
Cc: Intel Graphics Development <Intel-gfx@lists.freedesktop.org>; Mailing list - DRI developers <dri-devel@lists.freedesktop.org>; Daniel Vetter <daniel@ffwll.ch>
Subject: Re: [PATCH 0/7] Per client engine busyness
 
+ David, Christian

On Thu, May 13, 2021 at 12:41 PM Tvrtko Ursulin
<tvrtko.ursulin@linux.intel.com> wrote:
>
>
> Hi,
>
> On 13/05/2021 16:48, Alex Deucher wrote:
> > On Thu, May 13, 2021 at 7:00 AM Tvrtko Ursulin
> > <tvrtko.ursulin@linux.intel.com> wrote:
> >>
> >> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> >>
> >> Resurrection of the previously merged per-client engine busyness patches. In a
> >> nutshell, it enables intel_gpu_top to be more top(1)-like: to show not
> >> only physical GPU engine usage but a per-process view as well.
> >>
> >> Example screen capture:
> >> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> >> intel-gpu-top -  906/ 955 MHz;    0% RC6;  5.30 Watts;      933 irqs/s
> >>
> >>        IMC reads:     4414 MiB/s
> >>       IMC writes:     3805 MiB/s
> >>
> >>            ENGINE      BUSY                                      MI_SEMA MI_WAIT
> >>       Render/3D/0   93.46% |████████████████████████████████▋  |      0%      0%
> >>         Blitter/0    0.00% |                                   |      0%      0%
> >>           Video/0    0.00% |                                   |      0%      0%
> >>    VideoEnhance/0    0.00% |                                   |      0%      0%
> >>
> >>    PID            NAME  Render/3D      Blitter        Video      VideoEnhance
> >>   2733       neverball |██████▌     ||            ||            ||            |
> >>   2047            Xorg |███▊        ||            ||            ||            |
> >>   2737        glxgears |█▍          ||            ||            ||            |
> >>   2128           xfwm4 |            ||            ||            ||            |
> >>   2047            Xorg |            ||            ||            ||            |
> >> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> >>
> >> Internally we track time spent on engines for each struct intel_context, both
> >> for current and past contexts belonging to each open DRM file.
> >>
> >> This can serve as a building block for several features from the wanted list:
> >> smarter scheduler decisions, getrusage(2)-like per-GEM-context functionality
> >> wanted by some customers, setrlimit(2)-like controls, a cgroups controller,
> >> dynamic SSEU tuning, ...
> >>
> >> To enable userspace access to the tracked data, we expose time spent on GPU per
> >> client and per engine class in sysfs, with a hierarchy like the one below:
> >>
> >>          # cd /sys/class/drm/card0/clients/
> >>          # tree
> >>          .
> >>          ├── 7
> >>          │   ├── busy
> >>          │   │   ├── 0
> >>          │   │   ├── 1
> >>          │   │   ├── 2
> >>          │   │   └── 3
> >>          │   ├── name
> >>          │   └── pid
> >>          ├── 8
> >>          │   ├── busy
> >>          │   │   ├── 0
> >>          │   │   ├── 1
> >>          │   │   ├── 2
> >>          │   │   └── 3
> >>          │   ├── name
> >>          │   └── pid
> >>          └── 9
> >>              ├── busy
> >>              │   ├── 0
> >>              │   ├── 1
> >>              │   ├── 2
> >>              │   └── 3
> >>              ├── name
> >>              └── pid
> >>
> >> Files in 'busy' directories are numbered using the engine class ABI values and
> >> they contain accumulated nanoseconds each client spent on engines of a
> >> respective class.
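As a sketch of how a top(1)-style consumer would use this layout (paths
taken from the tree above; the two-sample calculation is illustrative,
not intel_gpu_top source): sample the accumulated nanoseconds twice and
divide the delta by the wall-clock window.

/* Hedged sketch against the proposed sysfs layout above. */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <unistd.h>

static uint64_t read_ns(const char *path)
{
        uint64_t v = 0;
        FILE *f = fopen(path, "r");

        if (f) {
                if (fscanf(f, "%" SCNu64, &v) != 1)
                        v = 0;
                fclose(f);
        }
        return v;
}

int main(void)
{
        /* Client 7, engine class 0 (Render/3D), as in the tree above. */
        const char *busy = "/sys/class/drm/card0/clients/7/busy/0";
        uint64_t t0 = read_ns(busy);

        sleep(1);  /* sample window */

        uint64_t t1 = read_ns(busy);
        /* ns spent busy during a 1 s window, as a share of one engine */
        printf("Render/3D: %.2f%%\n", (double)(t1 - t0) / 1e9 * 100.0);
        return 0;
}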
> >
> > We did something similar in amdgpu using the gpu scheduler.  We then
> > expose the data via fdinfo.  See
> > https://cgit.freedesktop.org/drm/drm-misc/commit/?id=1774baa64f9395fa884ea9ed494bcb043f3b83f5
> > https://cgit.freedesktop.org/drm/drm-misc/commit/?id=874442541133f78c78b6880b8cc495bab5c61704
>
> Interesting!
>
> Is yours wall time or actual GPU time taking preemption and such into
> account? Do you have some userspace tools parsing this data, and how do
> you do client discovery? Presumably there has to be a better way than
> going through all open file descriptors?

Wall time.  It uses the fences in the scheduler to calculate engine
time.  We have some python scripts to make it look pretty, but mainly
just reading the files directly.  If you know the process, you can
look it up in procfs.
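The mechanism, as a hedged sketch (the bookkeeping struct and the call
site are assumptions, not the amdgpu code): every drm scheduler job
carries a scheduled and a finished fence, and a dma_fence records a
timestamp when it signals, so the difference between the two is the
wall time the job occupied the ring.

#include <linux/ktime.h>
#include <drm/gpu_scheduler.h>

struct example_client {
        u64 busy_ns[4];         /* one counter per engine class */
};

static void example_job_accumulate(struct example_client *client,
                                   unsigned int engine_class,
                                   struct drm_sched_job *job)
{
        struct drm_sched_fence *sf = job->s_fence;

        /* finished - scheduled: how long the job held the ring. This
         * is wall time; time lost to preemption in the middle of the
         * job is not subtracted out, matching the answer above. */
        ktime_t delta = ktime_sub(sf->finished.timestamp,
                                  sf->scheduled.timestamp);

        client->busy_ns[engine_class] += ktime_to_ns(delta);
}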

>
> Our implementation was merged in January but Daniel took it out recently
> because he wanted to have a discussion about a common vendor framework
> for this whole story on dri-devel, I think. +Daniel to comment.
>
> I couldn't find the patch you pasted on the mailing list to see if there
> was any such discussion around your version.

It was on the amd-gfx mailing list.

Alex

>
> Regards,
>
> Tvrtko
>
> >
> > Alex
> >
> >
> >>
> >> Tvrtko Ursulin (7):
> >>    drm/i915: Expose list of clients in sysfs
> >>    drm/i915: Update client name on context create
> >>    drm/i915: Make GEM contexts track DRM clients
> >>    drm/i915: Track runtime spent in closed and unreachable GEM contexts
> >>    drm/i915: Track all user contexts per client
> >>    drm/i915: Track context current active time
> >>    drm/i915: Expose per-engine client busyness
> >>
> >>   drivers/gpu/drm/i915/Makefile                 |   5 +-
> >>   drivers/gpu/drm/i915/gem/i915_gem_context.c   |  61 ++-
> >>   .../gpu/drm/i915/gem/i915_gem_context_types.h |  16 +-
> >>   drivers/gpu/drm/i915/gt/intel_context.c       |  27 +-
> >>   drivers/gpu/drm/i915/gt/intel_context.h       |  15 +-
> >>   drivers/gpu/drm/i915/gt/intel_context_types.h |  24 +-
> >>   .../drm/i915/gt/intel_execlists_submission.c  |  23 +-
> >>   .../gpu/drm/i915/gt/intel_gt_clock_utils.c    |   4 +
> >>   drivers/gpu/drm/i915/gt/intel_lrc.c           |  27 +-
> >>   drivers/gpu/drm/i915/gt/intel_lrc.h           |  24 ++
> >>   drivers/gpu/drm/i915/gt/selftest_lrc.c        |  10 +-
> >>   drivers/gpu/drm/i915/i915_drm_client.c        | 365 ++++++++++++++++++
> >>   drivers/gpu/drm/i915/i915_drm_client.h        | 123 ++++++
> >>   drivers/gpu/drm/i915/i915_drv.c               |   6 +
> >>   drivers/gpu/drm/i915/i915_drv.h               |   5 +
> >>   drivers/gpu/drm/i915/i915_gem.c               |  21 +-
> >>   drivers/gpu/drm/i915/i915_gpu_error.c         |  31 +-
> >>   drivers/gpu/drm/i915/i915_gpu_error.h         |   2 +-
> >>   drivers/gpu/drm/i915/i915_sysfs.c             |   8 +
> >>   19 files changed, 716 insertions(+), 81 deletions(-)
> >>   create mode 100644 drivers/gpu/drm/i915/i915_drm_client.c
> >>   create mode 100644 drivers/gpu/drm/i915/i915_drm_client.h
> >>
> >> --
> >> 2.30.2
> >>
