From: "Christian König" <christian.koenig@amd.com>
To: "Daniel Vetter" <daniel.vetter@ffwll.ch>, "Christian König" <deathsimple@vodafone.de>
Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>, Thomas Hellstrom <thellstrom@vmware.com>, nouveau <nouveau@lists.freedesktop.org>, LKML <linux-kernel@vger.kernel.org>, dri-devel <dri-devel@lists.freedesktop.org>, Ben Skeggs <bskeggs@redhat.com>, "Deucher, Alexander" <alexander.deucher@amd.com>
Subject: Re: [Nouveau] [PATCH 09/17] drm/radeon: use common fence implementation for fences
Date: Wed, 23 Jul 2014 11:27:44 +0200
Message-ID: <53CF8010.9060809@amd.com>
In-Reply-To: <CAKMK7uE1G-EjbJ2RAXC3ZfkaCo=Gber5PJuRG623Yoz9D=FmGw@mail.gmail.com>

On 23.07.2014 10:54, Daniel Vetter wrote:
> On Wed, Jul 23, 2014 at 10:46 AM, Christian König
> <deathsimple@vodafone.de> wrote:
>> On 23.07.2014 10:42, Daniel Vetter wrote:
>>
>>> On Wed, Jul 23, 2014 at 10:25 AM, Maarten Lankhorst
>>> <maarten.lankhorst@canonical.com> wrote:
>>>> In this case if the sync was to i915 the i915 lockup procedure would take
>>>> care of itself. It wouldn't fix radeon, but it would at least unblock your
>>>> intel card again. I haven't specifically added a special case to attempt to
>>>> unblock external fences, but I've considered it. :-)
>>> Actually the i915 reset stuff relies crucially on being able to kick
>>> all waiters holding driver locks. Since the current fence code only
>>> exposes an opaque wait function without exposing the underlying wait
>>> queue, we won't be able to sleep on both the fence queue and the reset
>>> queue. So it would pose a problem if we add fence_wait calls to our
>>> driver.
>>
>> And apart from that, I really think that I misunderstood Maarten. But his
>> explanation sounds like i915 would do a reset because radeon is locked up,
>> right?
>>
>> Well, if that's really the case then I would question the interface even
>> more, because that is really nonsense.
> I disagree - the entire point of fences is that we can do multi-gpu
> work asynchronously. So by the time we'll notice that radeon's dead we
> have accepted the batch from userspace already. The only way to get
> rid of it again is through our reset machinery, which also tells
> userspace that we couldn't execute the batch. Whether we actually need
> to do a hw reset depends upon whether we've committed the batch to the
> hw already. Atm that's always the case, but the scheduler will change
> that. So I have no issues with intel doing a reset when other drivers
> don't signal fences.

You submit a job to the hardware and then block the job to wait for
radeon to be finished? Well then this would indeed require a hardware
reset, but wouldn't that make the whole problem even worse?

I mean, currently we block one userspace process to wait for other
hardware to be finished with a buffer, but what you are describing here
blocks the whole hardware to wait for other hardware, which in the end
blocks all userspace processes accessing the hardware.

Talking about alternative approaches, wouldn't it be simpler to just
offload the waiting to a different kernel or userspace thread?

Christian.

>
> Also this isn't a problem with the interface really, but with the
> current implementation for radeon. And getting cross-driver reset
> notifications right will require more work either way.
> -Daniel