On 16.02.21 14:27, Thomas Zimmermann wrote:
> Hi
>
> this is a shadow-buffered plane. Did you consider using the new helpers
> for shadow-buffered planes? They will map the user BO for you and
> provide the mapping in the plane state.
>
> From there, you should implement your own plane state on top of struct
> drm_shadow_plane_state, and also move all the other allocations and
> vmaps into prepare_fb and cleanup_fb. Most of this is not actually
> allowed in commit tails. All we'd have to do is to export the reset,
> duplicate and destroy code; similar to what
> __drm_atomic_helper_plane_reset() does.

AFAICT the cursor_bo is used to implement double buffering for the
cursor image. Ideally, you can do what ast does: pre-allocate and vmap
two BOs at the end of VRAM, then pageflip between them in
atomic_update(). That resolves all the allocation and mapping
headaches. Rough sketches of both ideas follow at the end of this mail.

Best regards
Thomas

>
> Best regards
> Thomas
>
>
> On 16.02.21 12:37, Gerd Hoffmann wrote:
>> We don't have to map in atomic_update callback then,
>> making locking a bit less complicated.
>>
>> Signed-off-by: Gerd Hoffmann
>> ---
>>   drivers/gpu/drm/qxl/qxl_display.c | 14 +++++---------
>>   1 file changed, 5 insertions(+), 9 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
>> index 7500560db8e4..39b8c5116d34 100644
>> --- a/drivers/gpu/drm/qxl/qxl_display.c
>> +++ b/drivers/gpu/drm/qxl/qxl_display.c
>> @@ -584,7 +584,6 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
>>       struct drm_gem_object *obj;
>>       struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL;
>>       int ret;
>> -    struct dma_buf_map user_map;
>>       struct dma_buf_map cursor_map;
>>       void *user_ptr;
>>       int size = 64*64*4;
>> @@ -599,11 +598,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
>>           obj = fb->obj[0];
>>           user_bo = gem_to_qxl_bo(obj);
>> -        /* pinning is done in the prepare/cleanup framevbuffer */
>> -        ret = qxl_bo_kmap_locked(user_bo, &user_map);
>> -        if (ret)
>> -            goto out_free_release;
>> -        user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction properly */
>> +        /* mapping is done in the prepare/cleanup framevbuffer */
>> +        user_ptr = user_bo->map.vaddr; /* TODO: Use mapping abstraction properly */
>>           ret = qxl_alloc_bo_reserved(qdev, release,
>>                           sizeof(struct qxl_cursor) + size,
>> @@ -639,7 +635,6 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
>>           cursor->chunk.data_size = size;
>>           memcpy(cursor->chunk.data, user_ptr, size);
>>           qxl_bo_kunmap_locked(cursor_bo);
>> -        qxl_bo_kunmap_locked(user_bo);
>>           cmd = (struct qxl_cursor_cmd *) qxl_release_map(qdev, release);
>>           cmd->u.set.visible = 1;
>> @@ -778,6 +773,7 @@ static int qxl_plane_prepare_fb(struct drm_plane *plane,
>>       struct drm_gem_object *obj;
>>       struct qxl_bo *user_bo;
>>       struct qxl_surface surf;
>> +    struct dma_buf_map unused;
>>       if (!new_state->fb)
>>           return 0;
>> @@ -815,7 +811,7 @@ static int qxl_plane_prepare_fb(struct drm_plane *plane,
>>           }
>>       }
>> -    return qxl_bo_pin(user_bo);
>> +    return qxl_bo_kmap(user_bo, &unused);
>>   }
>>   static void qxl_plane_cleanup_fb(struct drm_plane *plane,
>> @@ -834,7 +830,7 @@ static void qxl_plane_cleanup_fb(struct drm_plane *plane,
>>       obj = old_state->fb->obj[0];
>>       user_bo = gem_to_qxl_bo(obj);
>> -    qxl_bo_unpin(user_bo);
>> +    qxl_bo_kunmap(user_bo);
>>       if (old_state->fb != plane->state->fb && user_bo->shadow) {
>>           qxl_bo_unpin(user_bo->shadow);
>>
>
>
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>

--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
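
A rough sketch of the plane-state idea from the quoted review above. It is
only an illustration, not tested code: it assumes the shadow-plane
reset/duplicate/destroy internals of the GEM atomic helpers
(__drm_gem_reset_shadow_plane() and friends) get exported as suggested, and
the qxl_plane_state name and its cursor_bo member are made up for the
example.

#include <drm/drm_gem_atomic_helper.h>
#include <linux/slab.h>

#include "qxl_drv.h"    /* struct qxl_bo */

/* qxl-specific plane state on top of the shadow-plane state. */
struct qxl_plane_state {
        struct drm_shadow_plane_state shadow;   /* maps the user BO for us */
        struct qxl_bo *cursor_bo;               /* driver-specific per-state data */
};

static struct qxl_plane_state *
to_qxl_plane_state(struct drm_plane_state *state)
{
        return container_of(to_drm_shadow_plane_state(state),
                            struct qxl_plane_state, shadow);
}

static void qxl_plane_reset(struct drm_plane *plane)
{
        struct qxl_plane_state *qstate;

        if (plane->state)
                plane->funcs->atomic_destroy_state(plane, plane->state);

        qstate = kzalloc(sizeof(*qstate), GFP_KERNEL);
        if (!qstate)
                return;
        /* assumes the helper gets exported; sets plane->state to &qstate->shadow.base */
        __drm_gem_reset_shadow_plane(plane, &qstate->shadow);
}

static struct drm_plane_state *
qxl_plane_duplicate_state(struct drm_plane *plane)
{
        struct qxl_plane_state *qstate;

        if (WARN_ON(!plane->state))
                return NULL;

        qstate = kzalloc(sizeof(*qstate), GFP_KERNEL);
        if (!qstate)
                return NULL;
        __drm_gem_duplicate_shadow_plane_state(plane, &qstate->shadow);
        /* copy or reset qxl-specific members (cursor_bo) here */
        return &qstate->shadow.base;
}

static void qxl_plane_destroy_state(struct drm_plane *plane,
                                    struct drm_plane_state *state)
{
        struct qxl_plane_state *qstate = to_qxl_plane_state(state);

        /* release qxl-specific members (cursor_bo) here */
        __drm_gem_destroy_shadow_plane_state(&qstate->shadow);
        kfree(qstate);
}

With such a state, prepare_fb/cleanup_fb could then be the generic
drm_gem_prepare_shadow_fb()/drm_gem_cleanup_shadow_fb(), and atomic_update()
reads the vmapping of the user BO from
to_drm_shadow_plane_state(plane->state)->map[0] instead of calling
qxl_bo_kmap() itself.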
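
And a rough sketch of the ast-style double-buffered cursor, just to
illustrate the pattern: both cursor BOs are created, pinned and vmapped
once, so the cursor atomic_update() only copies the new image and flips.
qxl_cursor_buffers and qxl_create_pinned_bo() are made-up names; the real
thing lives in the ast driver's cursor code.

#include <linux/dma-buf-map.h>
#include <linux/string.h>

#include "qxl_drv.h"     /* struct qxl_device, struct qxl_bo */
#include "qxl_object.h"  /* qxl_bo_kmap() */

#define QXL_CURSOR_BUF_SIZE (64 * 64 * 4)       /* same size as in the patch */

/* Two cursor BOs, created and vmapped once at init time. */
struct qxl_cursor_buffers {
        struct qxl_bo *bo[2];
        struct dma_buf_map map[2];
        unsigned int next;      /* buffer to draw the next cursor image into */
};

/* One-time setup, e.g. from the plane/CRTC init path (error unwinding omitted). */
static int qxl_cursor_buffers_init(struct qxl_device *qdev,
                                   struct qxl_cursor_buffers *cb)
{
        unsigned int i;
        int ret;

        for (i = 0; i < 2; i++) {
                /* qxl_create_pinned_bo() is a made-up helper: create and pin
                 * a BO of QXL_CURSOR_BUF_SIZE bytes in VRAM. */
                ret = qxl_create_pinned_bo(qdev, QXL_CURSOR_BUF_SIZE, &cb->bo[i]);
                if (ret)
                        return ret;
                ret = qxl_bo_kmap(cb->bo[i], &cb->map[i]);
                if (ret)
                        return ret;
        }
        cb->next = 0;
        return 0;
}

/* Called from cursor atomic_update(): no allocation, no mapping, just copy and flip. */
static struct qxl_bo *qxl_cursor_buffers_flip(struct qxl_cursor_buffers *cb,
                                              const void *image)
{
        unsigned int i = cb->next;

        /* TODO: use the mapping abstraction properly, as in the patch */
        memcpy(cb->map[i].vaddr, image, QXL_CURSOR_BUF_SIZE);
        cb->next = !i;
        return cb->bo[i];       /* point the hardware cursor at this BO */
}

With something like this, qxl_cursor_atomic_update() could avoid the
per-update qxl_alloc_bo_reserved()/qxl_bo_kmap_locked() dance for cursor_bo;
it would just fill the inactive buffer and point the cursor command at it.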