All of lore.kernel.org
From: Daniel Vetter <daniel@ffwll.ch>
To: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Cc: "David Airlie" <airlied@linux.ie>,
	dri-devel@lists.freedesktop.org,
	"Gurchetan Singh" <gurchetansingh@chromium.org>,
	"Gerd Hoffmann" <kraxel@redhat.com>,
	"Dmitry Osipenko" <digetx@gmail.com>,
	"Steven Price" <steven.price@arm.com>,
	"Gustavo Padovan" <gustavo.padovan@collabora.com>,
	"Alyssa Rosenzweig" <alyssa.rosenzweig@collabora.com>,
	"Thomas Zimmermann" <tzimmermann@suse.de>,
	"Christian König" <ckoenig.leichtzumerken@gmail.com>,
	virtualization@lists.linux-foundation.org,
	"Daniel Almeida" <daniel.almeida@collabora.com>,
	"Tomeu Vizoso" <tomeu.vizoso@collabora.com>,
	"Gert Wollny" <gert.wollny@collabora.com>,
	"Emil Velikov" <emil.l.velikov@gmail.com>,
	linux-kernel@vger.kernel.org,
	"Robin Murphy" <robin.murphy@arm.com>
Subject: Re: [PATCH v4 10/15] drm/shmem-helper: Take reservation lock instead of drm_gem_shmem locks
Date: Wed, 11 May 2022 17:29:21 +0200	[thread overview]
Message-ID: <YnvWUbh5QDDs6u2B@phenom.ffwll.local> (raw)
In-Reply-To: <3a362c32-870c-1d73-bba6-bbdcd62dc326@collabora.com>

On Wed, May 11, 2022 at 06:14:00PM +0300, Dmitry Osipenko wrote:
> On 5/11/22 17:24, Christian König wrote:
> > Am 11.05.22 um 15:00 schrieb Daniel Vetter:
> >> On Tue, May 10, 2022 at 04:39:53PM +0300, Dmitry Osipenko wrote:
> >>> [SNIP]
> >>> Since vmapping implies implicit pinning, we can't use a separate lock in
> >>> drm_gem_shmem_vmap() because we need to protect
> >>> drm_gem_shmem_get_pages(), which is invoked by drm_gem_shmem_vmap() to
> >>> pin the pages and requires the dma_resv_lock to be held.
> >>>
> >>> Hence the problem is:
> >>>
> >>> 1. If dma-buf importer holds the dma_resv_lock and invokes
> >>> dma_buf_vmap() -> drm_gem_shmem_vmap(), then drm_gem_shmem_vmap() shall
> >>> not take the dma_resv_lock.
> >>>
> >>> 2. Since the dma-buf locking convention isn't specified, we can't assume
> >>> that a dma-buf importer holds the dma_resv_lock around dma_buf_vmap().
> >>>
> >>> The possible solutions are:
> >>>
> >>> 1. Specify the dma_resv_lock convention for dma-bufs and make all
> >>> drivers follow it.
> >>>
> >>> 2. Make only DRM drivers hold the dma_resv_lock around dma_buf_vmap().
> >>> Non-DRM drivers will then get a lockdep warning.
> >>>
> >>> 3. Make drm_gem_shmem_vmap() take the dma_resv_lock, and deadlock if
> >>> the dma-buf importer holds the lock.
> >>>
> >>> ...
> >> Yeah this is all very annoying.
> > 
> > Ah, yes that topic again :)
> > 
> > I think we could relatively easily fix that by just defining and
> > enforcing that the dma_resv_lock must be taken by the caller when
> > dma_buf_vmap() is called.
> > 
> > A two step approach should work:
> > 1. Move the call to dma_resv_lock() into the dma_buf_vmap() function and
> > remove all lock taking from the vmap callback implementations.
> > 2. Move the call to dma_resv_lock() into the callers of dma_buf_vmap()
> > and enforce that the function is called with the lock held.
> 
> I have doubts about the need to move the dma_resv_lock() out into the
> callers of dma_buf_vmap().
> 
> I looked through all the dma_buf_vmap() users and none of them
> interacts with dma_resv_lock() at all, i.e. nobody takes the lock
> inside or outside of dma_buf_vmap(). Hence it's easier and more practical
> to make dma_buf_mmap/vmap() take the dma_resv_lock themselves.

i915_gem_dmabuf_vmap -> i915_gem_object_pin_map_unlocked ->
  i915_gem_object_lock -> dma_resv_lock

And all the ttm drivers should work similarly. So there's definitely
drivers which grab dma_resv_lock from their vmap callback.

> It's unclear to me which driver would ever want to do the mapping under
> the dma_resv_lock. But if we ever have a driver that needs to map an
> imported buffer under the dma_resv_lock, then we could always add a
> dma_buf_vmap_locked() variant of the function. In that case the locking
> rule would be:
> 
> "All dma-buf importers are responsible for holding the dma-reservation
> lock around the dmabuf->ops->mmap/vmap() calls."
> 
> > It shouldn't be that hard to clean up. The last time I looked into it,
> > my main problem was that we didn't have any easy unit test for it.
> 
> Do we have any tests for dma-bufs at all? It's unclear to me what you
> are going to test with regard to the reservation locks; could you
> please clarify?

Unfortunately not really :-/ The only way really is to grab a driver that
needs vmap (those are mostly display drivers) on an imported buffer, and
see what happens.

Second best is liberally sprinkling lockdep annotations all over the
place and throwing it at Intel CI (not sure AMD CI is accessible to the
public), then hoping that's good enough. Stuff like might_lock() and
dma_resv_assert_held().
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


