linux-kernel.vger.kernel.org archive mirror
From: Chia-I Wu <olvaffe@gmail.com>
To: Gerd Hoffmann <kraxel@redhat.com>
Cc: ML dri-devel <dri-devel@lists.freedesktop.org>,
	Gurchetan Singh <gurchetansingh@chromium.org>,
	David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>,
	"open list:VIRTIO GPU DRIVER" 
	<virtualization@lists.linux-foundation.org>,
	open list <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v6 06/18] drm/virtio: remove ttm calls from in virtio_gpu_object_{reserve,unreserve}
Date: Sat, 6 Jul 2019 22:30:25 -0700	[thread overview]
Message-ID: <CAPaKu7Q1_2-_HNr8Fkh7K61UGUfuAdPHWckH5g4fWt9s2YWiRA@mail.gmail.com> (raw)
In-Reply-To: <20190705085325.am2reucblvxc6qpg@sirius.home.kraxel.org>


On Fri, Jul 5, 2019 at 1:53 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
> On Thu, Jul 04, 2019 at 12:17:48PM -0700, Chia-I Wu wrote:
> > On Thu, Jul 4, 2019 at 4:10 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> > >
> > >   Hi,
> > >
> > > > > -       r = ttm_bo_reserve(&bo->tbo, true, false, NULL);
> > > > > +       r = reservation_object_lock_interruptible(bo->gem_base.resv, NULL);
> > > > Can you elaborate a bit about how TTM keeps the BOs alive in, for
> > > > example, virtio_gpu_transfer_from_host_ioctl?  In that function, only
> > > > three TTM functions are called: ttm_bo_reserve, ttm_bo_validate, and
> > > > ttm_bo_unreserve.  I am curious how they keep the BO alive.
> > >
> > > It can't go away between reserve and unreserve, and I think it also
> > > can't be evicted then.  Haven't checked how ttm implements that.
> > Hm, but the vbuf using the BO outlives the reserve/unreserve section.
> > The NO_EVICT flag applies only when the BO is still alive.  Someone
> > needs to hold a reference to the BO to keep it alive, otherwise the BO
> > can go away before the vbuf is retired.
>
> Note that patches 14+15 rework virtio_gpu_transfer_*_ioctl to keep
> gem reference until the command is finished and patch 17 drops
> virtio_gpu_object_{reserve,unreserve} altogether.
>
> Maybe I should try to reorder the series, then squash 6+17 to reduce
> confusion.  I suspect that'll cause quite a few conflicts though ...
This may be well-known and is what you meant by "the fence keeps the
bo alive", but I finally realize that ttm_bo_put delays the deletion
of a BO when it is busy.

In the current design, vbuf does not hold references to its BOs.  Nor
do fences.  It is possible for a BO to lose all its references and
get virtio_gpu_gem_free_object()ed while it is still busy.  The key
is ttm_bo_put.

ttm_bo_put calls ttm_bo_cleanup_refs_or_queue to decide whether to
delete the BO immediately (when the BO is already idle) or to queue
the BO to a delayed delete list (when the BO is still busy).  If a BO
is queued to the delayed delete list, ttm_bo_delayed_delete runs
every 10ms (HZ/100 to be exact) to scan through the list and delete
BOs that have become idle.

I wrote a simple test (attached) and added a bunch of printk's to confirm this.

Anyway, I believe the culprit is patch 11, where we switch from
ttm_bo_put to drm_gem_shmem_free_object to free a BO whose last
reference is gone.  Deletion becomes immediate after the switch.
We need to fix vbuf to refcount its BOs before we can make the switch.


>
> cheers,
>   Gerd
>

[-- Attachment #2: virtio-gpu-bo.c --]

/* gcc -std=c11 -D_GNU_SOURCE -o virtio-gpu-bo virtio-gpu-bo.c */

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#include <fcntl.h>
#include <libdrm/drm.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

#define PIPE_BUFFER 0
#define VIRGL_FORMAT_R8_UNORM 64
#define VIRGL_BIND_CONSTANT_BUFFER (1 << 6)
#define DRM_VIRTGPU_RESOURCE_CREATE 0x04
#define DRM_IOCTL_VIRTGPU_RESOURCE_CREATE \
    DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_RESOURCE_CREATE, \
            struct drm_virtgpu_resource_create)
struct drm_virtgpu_resource_create {
    uint32_t target;
    uint32_t format;
    uint32_t bind;
    uint32_t width;
    uint32_t height;
    uint32_t depth;
    uint32_t array_size;
    uint32_t last_level;
    uint32_t nr_samples;
    uint32_t flags;
    uint32_t bo_handle;
    uint32_t res_handle;
    uint32_t size;
    uint32_t stride;
};

struct drm_virtgpu_3d_box {
    uint32_t x, y, z;
    uint32_t w, h, d;
};

#define DRM_VIRTGPU_TRANSFER_TO_HOST 0x07
#define DRM_IOCTL_VIRTGPU_TRANSFER_TO_HOST \
    DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_TRANSFER_TO_HOST, \
            struct drm_virtgpu_3d_transfer_to_host)
struct drm_virtgpu_3d_transfer_to_host {
    uint32_t bo_handle;
    struct drm_virtgpu_3d_box box;
    uint32_t level;
    uint32_t offset;
};

static uint32_t buffer_create(int fd, uint32_t size)
{
    struct drm_virtgpu_resource_create args = {
        .target = PIPE_BUFFER,
        .format = VIRGL_FORMAT_R8_UNORM,
        .bind = VIRGL_BIND_CONSTANT_BUFFER,
        .width = size,
        .height = 1,
        .depth = 1,
        .array_size = 1,
        .nr_samples = 1,
    };
    int ret = ioctl(fd, DRM_IOCTL_VIRTGPU_RESOURCE_CREATE, &args);
    assert(!ret);
    return args.bo_handle;
}

static void buffer_close(int fd, uint32_t bo)
{
    struct drm_gem_close args = {
        .handle = bo,
    };
    int ret = ioctl(fd, DRM_IOCTL_GEM_CLOSE, &args);
    assert(!ret);
}

static void transfer_to_host(int fd, uint32_t bo, uint32_t size)
{
    struct drm_virtgpu_3d_transfer_to_host args = {
        .bo_handle = bo,
        .box.w = size,
        .box.h = 1,
        .box.d = 1,
    };
    int ret = ioctl(fd, DRM_IOCTL_VIRTGPU_TRANSFER_TO_HOST, &args);
    assert(!ret);
}

int main(void)
{
    const uint32_t size = 1 * 1024 * 1024;

    int fd = open("/dev/dri/renderD128", O_RDWR);
    assert(fd >= 0);

    uint32_t bo = buffer_create(fd, size);
    printf("transfer and close the BO immediately...\n");
    transfer_to_host(fd, bo, size);
    buffer_close(fd, bo);

    printf("wait for 1 second...\n");
    usleep(1000 * 1000);

    close(fd);

    return 0;
}

Thread overview: 43+ messages
     [not found] <20190702141903.1131-1-kraxel@redhat.com>
2019-07-02 14:18 ` [PATCH v6 01/18] drm/virtio: pass gem reservation object to ttm init Gerd Hoffmann
2019-07-02 14:18 ` [PATCH v6 02/18] drm/virtio: switch virtio_gpu_wait_ioctl() to gem helper Gerd Hoffmann
2019-07-02 14:18 ` [PATCH v6 03/18] drm/virtio: simplify cursor updates Gerd Hoffmann
2019-07-02 14:18 ` [PATCH v6 04/18] drm/virtio: remove virtio_gpu_object_wait Gerd Hoffmann
2019-07-02 14:18 ` [PATCH v6 05/18] drm/virtio: drop no_wait argument from virtio_gpu_object_reserve Gerd Hoffmann
2019-07-02 14:18 ` [PATCH v6 06/18] drm/virtio: remove ttm calls from in virtio_gpu_object_{reserve,unreserve} Gerd Hoffmann
2019-07-03 18:02   ` Chia-I Wu
2019-07-04 11:10     ` Gerd Hoffmann
2019-07-04 19:17       ` Chia-I Wu
2019-07-05  8:53         ` Gerd Hoffmann
2019-07-07  5:30           ` Chia-I Wu [this message]
2019-07-02 14:18 ` [PATCH v6 07/18] drm/virtio: add virtio_gpu_object_array & helpers Gerd Hoffmann
2019-07-03 18:31   ` Chia-I Wu
2019-07-03 19:52     ` Chia-I Wu
2019-07-04 11:19       ` Gerd Hoffmann
2019-07-04 11:11     ` Gerd Hoffmann
2019-07-02 14:18 ` [PATCH v6 08/18] drm/virtio: rework virtio_gpu_execbuffer_ioctl fencing Gerd Hoffmann
2019-07-03 18:49   ` Chia-I Wu
2019-07-04 11:25     ` Gerd Hoffmann
2019-07-04 18:46       ` Chia-I Wu
2019-07-11  2:35         ` Chia-I Wu
2019-07-02 14:18 ` [PATCH v6 09/18] drm/virtio: rework virtio_gpu_object_create fencing Gerd Hoffmann
2019-07-02 14:18 ` [PATCH v6 10/18] drm/virtio: drop virtio_gpu_object_list_validate/virtio_gpu_unref_list Gerd Hoffmann
2019-07-02 14:18 ` [PATCH v6 11/18] drm/virtio: switch from ttm to gem shmem helpers Gerd Hoffmann
2019-07-04 13:33   ` Emil Velikov
2019-07-17  6:04   ` Chia-I Wu
2019-07-02 14:18 ` [PATCH v6 12/18] drm/virtio: remove virtio_gpu_alloc_object Gerd Hoffmann
2019-07-02 14:18 ` [PATCH v6 13/18] drm/virtio: drop virtio_gpu_object_{ref,unref} Gerd Hoffmann
2019-07-02 14:18 ` [PATCH v6 14/18] drm/virtio: rework virtio_gpu_transfer_from_host_ioctl fencing Gerd Hoffmann
2019-07-03 20:05   ` Chia-I Wu
2019-07-04 11:47     ` Gerd Hoffmann
2019-07-04 18:55       ` Chia-I Wu
2019-07-05  9:01         ` Gerd Hoffmann
2019-07-02 14:19 ` [PATCH v6 15/18] drm/virtio: rework virtio_gpu_transfer_to_host_ioctl fencing Gerd Hoffmann
2019-07-03 19:55   ` Chia-I Wu
2019-07-04 11:51     ` Gerd Hoffmann
2019-07-04 19:08       ` Chia-I Wu
2019-07-05  9:05         ` Gerd Hoffmann
2019-07-05 14:07           ` Gerd Hoffmann
2019-07-02 14:19 ` [PATCH v6 16/18] drm/virtio: rework virtio_gpu_cmd_context_{attach,detach}_resource Gerd Hoffmann
     [not found]   ` <CAAfnVBmKotCfkrM4hph4++FDrVUYR8WKpomP7Y0-aergqHTSyA@mail.gmail.com>
2019-07-04 12:00     ` Gerd Hoffmann
2019-07-02 14:19 ` [PATCH v6 17/18] drm/virtio: drop virtio_gpu_object_{reserve,unreserve} Gerd Hoffmann
2019-07-02 14:19 ` [PATCH v6 18/18] drm/virtio: add fence sanity check Gerd Hoffmann
