From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Grodzovsky, Andrey"
To: Alex Deucher <alexdeucher@gmail.com>, Christian König <ckoenig.leichtzumerken@gmail.com>
Cc: "Liu, Monk" <Monk.Liu@amd.com>, amd-gfx list <amd-gfx@lists.freedesktop.org>, Mailing list - DRI developers <dri-devel@lists.freedesktop.org>
Subject: Re: [PATCH 1/2] drm/sched: fix the bug of time out calculation(v4)
Date: Tue, 14 Sep 2021 22:59:38 +0000
References: <1630457207-13107-1-git-send-email-Monk.Liu@amd.com> <28709f7f-8a48-40ad-87bb-c2f0dd89da38@gmail.com>
List-Id: Direct Rendering Infrastructure - Development

AFAIK this one is independent.

Christian, can you confirm?

Andrey
________________________________
From: amd-gfx <amd-gfx-bounces@lists.freedesktop.org> on behalf of Alex Deucher <alexdeucher@gmail.com>
Sent: 14 September 2021 15:33
To: Christian König <ckoenig.leichtzumerken@gmail.com>
Cc: Liu, Monk <Monk.Liu@amd.com>; amd-gfx list <amd-gfx@lists.freedesktop.org>; Mailing list - DRI developers <dri-devel@lists.freedesktop.org>
Subject: Re: [PATCH 1/2] drm/sched: fix the bug of time out calculation(v4)

Was this fix independent of the other discussions?  Should this be
applied to drm-misc?

Alex

On Wed, Sep 1, 2021 at 4:42 PM Alex Deucher <alexdeucher@gmail.com> wrote:
>
> On Wed, Sep 1, 2021 at 2:50 AM Christian König
> <ckoenig.leichtzumerken@gmail.com> wrote:
> >
> > Am 01.09.21 um 02:46 schrieb Monk Liu:
> > > issue:
> > > in cleanup_job, cancel_delayed_work will cancel a TO (timeout) timer
> > > even though its corresponding job is still running.
> > >
> > > fix:
> > > do not cancel the timer in cleanup_job; instead, do the cancelling
> > > only when the heading job is signaled, and if there is a "next" job,
> > > we start_timeout again.
> > >
> > > v2:
> > > further clean up the logic, and do the TDR timer cancelling if the signaled job
> > > is the last one in its scheduler.
> > >
> > > v3:
> > > change the issue description
> > > remove the cancel_delayed_work at the beginning of cleanup_job
> > > restore the implementation of drm_sched_job_begin.
> > >
> > > v4:
> > > remove the kthread_should_park() check in the cleanup_job routine;
> > > we should clean up the signaled job asap
> > >
> > > TODO:
> > > 1) introduce pause/resume scheduler in job_timeout to serialize the handling
> > > of scheduler and job_timeout.
> > > 2) drop the bad job's del and insert in scheduler due to the above serialization
> > > (no race issue anymore with the serialization)
> > >
> > > Tested-by: jingwen <jingwen.chen@amd.com>
> > > Signed-off-by: Monk Liu <Monk.Liu@amd.com>
> >
> > Reviewed-by: Christian König <christian.koenig@amd.com>
>
> Are you planning to push this to drm-misc?
>
> Alex
>
>
> > > ---
> > >   drivers/gpu/drm/scheduler/sched_main.c | 26 +++++++++-----------------
> > >   1 file changed, 9 insertions(+), 17 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> > > index a2a9536..3e0bbc7 100644
> > > --- a/drivers/gpu/drm/scheduler/sched_main.c
> > > +++ b/drivers/gpu/drm/scheduler/sched_main.c
> > > @@ -676,15 +676,6 @@ drm_sched_get_cleanup_job(struct drm_gpu_scheduler *sched)
> > >  {
> > >          struct drm_sched_job *job, *next;
> > >
> > > -        /*
> > > -         * Don't destroy jobs while the timeout worker is running OR thread
> > > -         * is being parked and hence assumed to not touch pending_list
> > > -         */
> > > -        if ((sched->timeout != MAX_SCHEDULE_TIMEOUT &&
> > > -            !cancel_delayed_work(&sched->work_tdr)) ||
> > > -            kthread_should_park())
> > > -                return NULL;
> > > -
> > >          spin_lock(&sched->job_list_lock);
> > >
> > >          job = list_first_entry_or_null(&sched->pending_list,
> > > @@ -693,17 +684,21 @@ drm_sched_get_cleanup_job(struct drm_gpu_scheduler *sched)
> > >          if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
> > >                  /* remove job from pending_list */
> > >                  list_del_init(&job->list);
> > > +
> > > +                /* cancel this job's TO timer */
> > > +                cancel_delayed_work(&sched->work_tdr);
> > >                  /* make the scheduled timestamp more accurate */
> > >                  next = list_first_entry_or_null(&sched->pending_list,
> > >                                                  typeof(*next), list);
> > > -                if (next)
> > > +
> > > +                if (next) {
> > >                          next->s_fence->scheduled.timestamp =
> > >                                  job->s_fence->finished.timestamp;
> > > -
> > > +                        /* start TO timer for next job */
> > > +                        drm_sched_start_timeout(sched);
> > > +                }
> > >          } else {
> > >                  job = NULL;
> > > -                /* queue timeout for next job */
> > > -                drm_sched_start_timeout(sched);
> > >          }
> > >
> > >          spin_unlock(&sched->job_list_lock);
> > > @@ -791,11 +786,8 @@ static int drm_sched_main(void *param)
> > >                                          (entity = drm_sched_select_entity(sched))) ||
> > >                                  kthread_should_stop());
> > >
> > > -        if (cleanup_job) {
> > > +        if (cleanup_job)
> > >                  sched->ops->free_job(cleanup_job);
> > > -                /* queue timeout for next job */
> > > -                drm_sched_start_timeout(sched);
> > > -        }
> > >
> > >          if (!entity)
> > >                  continue;
> >