Date: Fri, 16 Mar 2018 09:23:30 +0100 From: Daniel Vetter To: Oleksandr Andrushchenko Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, airlied@linux.ie, daniel.vetter@intel.com, seanpaul@chromium.org, gustavo@padovan.org, jgross@suse.com, boris.ostrovsky@oracle.com, konrad.wilk@oracle.com, Oleksandr Andrushchenko Subject: Re: [PATCH RESEND v2 1/2] drm/xen-front: Add support for Xen PV display frontend Message-ID: <20180316082330.GF25297@phenom.ffwll.local> References: <1520958066-22875-1-git-send-email-andr2000@gmail.com> <1520958066-22875-2-git-send-email-andr2000@gmail.com> In-Reply-To: <1520958066-22875-2-git-send-email-andr2000@gmail.com> On Tue, Mar 13, 2018 at 06:21:05PM +0200, Oleksandr Andrushchenko wrote: > From: Oleksandr Andrushchenko > > Add support for Xen para-virtualized frontend display driver. 
> Accompanying backend [1] is implemented as a user-space application > and its helper library [2], capable of running as a Weston client > or DRM master. > Configuration of both backend and frontend is done via > Xen guest domain configuration options [3]. > > Driver limitations: > 1. Only primary plane without additional properties is supported. > 2. Only one video mode is supported; its resolution is configured via XenStore. > 3. All CRTCs operate at a fixed frequency of 60Hz. > > 1. Implement Xen bus state machine for the frontend driver according to > the state diagram and recovery flow from display para-virtualized > protocol: xen/interface/io/displif.h. > > 2. Read configuration values from Xen store according > to xen/interface/io/displif.h protocol: > - read connector(s) configuration > - read buffer allocation mode (backend/frontend) > > 3. Handle Xen event channels: > - create for all configured connectors and publish > corresponding ring references and event channels in Xen store, > so backend can connect > - implement event channels interrupt handlers > - create and destroy event channels with respect to Xen bus state > > 4. Implement shared buffer handling according to the > para-virtualized display device protocol at xen/interface/io/displif.h: > - handle page directories according to displif protocol: > - allocate and share page directories > - grant references to the required set of pages for the > page directory > - allocate xen ballooned pages via Xen balloon driver > with alloc_xenballooned_pages/free_xenballooned_pages > - grant references to the required set of pages for the > shared buffer itself > - implement pages map/unmap for the buffers allocated by the > backend (gnttab_map_refs/gnttab_unmap_refs) > > 5. 
Implement kernel modesetting/connector handling using > DRM simple KMS helper pipeline: > > - implement KMS part of the driver with the help of DRM > simple pipeline helper which is possible due to the fact > that the para-virtualized driver only supports a single > (primary) plane: > - initialize connectors according to XenStore configuration > - handle frame done events from the backend > - create and destroy frame buffers and propagate those > to the backend > - propagate set/reset mode configuration to the backend on display > enable/disable callbacks > - send page flip request to the backend and implement logic for > reporting backend IO errors on prepare fb callback > > - implement virtual connector handling: > - support only pixel formats suitable for single plane modes > - make sure the connector is always connected > - support a single video mode as per para-virtualized driver > configuration > > 6. Implement GEM handling depending on driver mode of operation: > depending on the requirements for the para-virtualized environment, namely > requirements dictated by the accompanying DRM/(v)GPU drivers running in both > host and guest environments, a number of operating modes of the para-virtualized > display driver are supported: > - display buffers can be allocated by either frontend driver or backend > - display buffers can be allocated to be contiguous in memory or not > > Note! Frontend driver itself has no dependency on contiguous memory for > its operation. > > 6.1. Buffers allocated by the frontend driver. > > The below modes of operation are configured at compile-time via > frontend driver's kernel configuration. > > 6.1.1. Front driver configured to use GEM CMA helpers > This use-case is useful when used with accompanying DRM/vGPU driver in > guest domain which was designed to only work with contiguous buffers, > e.g. 
DRM driver based on GEM CMA helpers: such drivers can only import > contiguous PRIME buffers, thus requiring frontend driver to provide > such. In order to implement this mode of operation para-virtualized > frontend driver can be configured to use GEM CMA helpers. > > 6.1.2. Front driver doesn't use GEM CMA > If accompanying drivers can cope with non-contiguous memory then, to > lower pressure on CMA subsystem of the kernel, driver can allocate > buffers from system memory. > > Note! If used with accompanying DRM/(v)GPU drivers this mode of operation > may require IOMMU support on the platform, so accompanying DRM/vGPU > hardware can still reach display buffer memory while importing PRIME > buffers from the frontend driver. > > 6.2. Buffers allocated by the backend > > This mode of operation is run-time configured via guest domain configuration > through XenStore entries. > > For systems which do not provide IOMMU support, but having specific > requirements for display buffers it is possible to allocate such buffers > at backend side and share those with the frontend. > For example, if host domain is 1:1 mapped and has DRM/GPU hardware expecting > physically contiguous memory, this allows implementing zero-copying > use-cases. > > Note, while using this scenario the following should be considered: > a) If guest domain dies then pages/grants received from the backend > cannot be claimed back > b) Misbehaving guest may send too many requests to the > backend exhausting its grant references and memory > (consider this from security POV). > > Note! Configuration options 1.1 (contiguous display buffers) and 2 (backend > allocated buffers) are not supported at the same time. > > 7. 
Handle communication with the backend: > - send requests and wait for the responses according > to the displif protocol > - serialize access to the communication channel > - time-out used for backend communication is set to 3000 ms > - manage display buffers shared with the backend > > [1] https://github.com/xen-troops/displ_be > [2] https://github.com/xen-troops/libxenbe > [3] https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=docs/man/xl.cfg.pod.5.in;h=a699367779e2ae1212ff8f638eff0206ec1a1cc9;hb=refs/heads/master#l1257 S-o-b line went missing here :-) I've read through it, 2 actual review comments (around hot-unplug and around the error recovery for failed flips), a few bikesheds, but looks all reasonable to me. And much easier to read as one big patch (it's just 3k). One more thing I'd do as a follow-up (don't rewrite everything, this is close to merge, better to get it in first): You have a lot of indirections and function calls across sources files. That's kinda ok if you have a huge driver with 100+k lines of code where you have to split things up. But for a small driver like yours here it's a bit overkill. Personally I'd merge at least the xen backend stuff into the corresponding kms code, but that's up to you. And as mentioned, if you decide to do that, a follow-up patch (once this has merged) is perfectly fine. 
-Daniel > --- > drivers/gpu/drm/Kconfig | 2 + > drivers/gpu/drm/Makefile | 1 + > drivers/gpu/drm/xen/Kconfig | 30 ++ > drivers/gpu/drm/xen/Makefile | 17 + > drivers/gpu/drm/xen/xen_drm_front.c | 690 ++++++++++++++++++++++++++++ > drivers/gpu/drm/xen/xen_drm_front.h | 77 ++++ > drivers/gpu/drm/xen/xen_drm_front_cfg.c | 77 ++++ > drivers/gpu/drm/xen/xen_drm_front_cfg.h | 37 ++ > drivers/gpu/drm/xen/xen_drm_front_conn.c | 146 ++++++ > drivers/gpu/drm/xen/xen_drm_front_conn.h | 27 ++ > drivers/gpu/drm/xen/xen_drm_front_drv.c | 239 ++++++++++ > drivers/gpu/drm/xen/xen_drm_front_drv.h | 78 ++++ > drivers/gpu/drm/xen/xen_drm_front_evtchnl.c | 383 +++++++++++++++ > drivers/gpu/drm/xen/xen_drm_front_evtchnl.h | 79 ++++ > drivers/gpu/drm/xen/xen_drm_front_gem.c | 335 ++++++++++++++ > drivers/gpu/drm/xen/xen_drm_front_gem.h | 41 ++ > drivers/gpu/drm/xen/xen_drm_front_gem_cma.c | 74 +++ > drivers/gpu/drm/xen/xen_drm_front_kms.c | 324 +++++++++++++ > drivers/gpu/drm/xen/xen_drm_front_kms.h | 25 + > drivers/gpu/drm/xen/xen_drm_front_shbuf.c | 432 +++++++++++++++++ > drivers/gpu/drm/xen/xen_drm_front_shbuf.h | 72 +++ > 21 files changed, 3186 insertions(+) > create mode 100644 drivers/gpu/drm/xen/Kconfig > create mode 100644 drivers/gpu/drm/xen/Makefile > create mode 100644 drivers/gpu/drm/xen/xen_drm_front.c > create mode 100644 drivers/gpu/drm/xen/xen_drm_front.h > create mode 100644 drivers/gpu/drm/xen/xen_drm_front_cfg.c > create mode 100644 drivers/gpu/drm/xen/xen_drm_front_cfg.h > create mode 100644 drivers/gpu/drm/xen/xen_drm_front_conn.c > create mode 100644 drivers/gpu/drm/xen/xen_drm_front_conn.h > create mode 100644 drivers/gpu/drm/xen/xen_drm_front_drv.c > create mode 100644 drivers/gpu/drm/xen/xen_drm_front_drv.h > create mode 100644 drivers/gpu/drm/xen/xen_drm_front_evtchnl.c > create mode 100644 drivers/gpu/drm/xen/xen_drm_front_evtchnl.h > create mode 100644 drivers/gpu/drm/xen/xen_drm_front_gem.c > create mode 100644 drivers/gpu/drm/xen/xen_drm_front_gem.h > 
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_gem_cma.c > create mode 100644 drivers/gpu/drm/xen/xen_drm_front_kms.c > create mode 100644 drivers/gpu/drm/xen/xen_drm_front_kms.h > create mode 100644 drivers/gpu/drm/xen/xen_drm_front_shbuf.c > create mode 100644 drivers/gpu/drm/xen/xen_drm_front_shbuf.h > > diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig > index deeefa7a1773..757825ac60df 100644 > --- a/drivers/gpu/drm/Kconfig > +++ b/drivers/gpu/drm/Kconfig > @@ -289,6 +289,8 @@ source "drivers/gpu/drm/pl111/Kconfig" > > source "drivers/gpu/drm/tve200/Kconfig" > > +source "drivers/gpu/drm/xen/Kconfig" > + > # Keep legacy drivers last > > menuconfig DRM_LEGACY > diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile > index 50093ff4479b..9d66657ea117 100644 > --- a/drivers/gpu/drm/Makefile > +++ b/drivers/gpu/drm/Makefile > @@ -103,3 +103,4 @@ obj-$(CONFIG_DRM_MXSFB) += mxsfb/ > obj-$(CONFIG_DRM_TINYDRM) += tinydrm/ > obj-$(CONFIG_DRM_PL111) += pl111/ > obj-$(CONFIG_DRM_TVE200) += tve200/ > +obj-$(CONFIG_DRM_XEN) += xen/ > diff --git a/drivers/gpu/drm/xen/Kconfig b/drivers/gpu/drm/xen/Kconfig > new file mode 100644 > index 000000000000..4f4abc91f3b6 > --- /dev/null > +++ b/drivers/gpu/drm/xen/Kconfig > @@ -0,0 +1,30 @@ > +config DRM_XEN > + bool "DRM Support for Xen guest OS" > + depends on XEN > + help > + Choose this option if you want to enable DRM support > + for Xen. > + > +config DRM_XEN_FRONTEND > + tristate "Para-virtualized frontend driver for Xen guest OS" > + depends on DRM_XEN > + depends on DRM > + select DRM_KMS_HELPER > + select VIDEOMODE_HELPERS > + select XEN_XENBUS_FRONTEND > + help > + Choose this option if you want to enable a para-virtualized > + frontend DRM/KMS driver for Xen guest OSes. 
> + > +config DRM_XEN_FRONTEND_CMA > + bool "Use DRM CMA to allocate dumb buffers" > + depends on DRM_XEN_FRONTEND > + select DRM_KMS_CMA_HELPER > + select DRM_GEM_CMA_HELPER > + help > + Use DRM CMA helpers to allocate display buffers. > + This is useful for the use-cases when guest driver needs to > + share or export buffers to other drivers which only expect > + contiguous buffers. > + Note: in this mode driver cannot use buffers allocated > + by the backend. > diff --git a/drivers/gpu/drm/xen/Makefile b/drivers/gpu/drm/xen/Makefile > new file mode 100644 > index 000000000000..ac1b82f2a860 > --- /dev/null > +++ b/drivers/gpu/drm/xen/Makefile > @@ -0,0 +1,17 @@ > +# SPDX-License-Identifier: GPL-2.0 OR MIT > + > +drm_xen_front-objs := xen_drm_front.o \ > + xen_drm_front_drv.o \ > + xen_drm_front_kms.o \ > + xen_drm_front_conn.o \ > + xen_drm_front_evtchnl.o \ > + xen_drm_front_shbuf.o \ > + xen_drm_front_cfg.o > + > +ifeq ($(CONFIG_DRM_XEN_FRONTEND_CMA),y) > + drm_xen_front-objs += xen_drm_front_gem_cma.o > +else > + drm_xen_front-objs += xen_drm_front_gem.o > +endif > + > +obj-$(CONFIG_DRM_XEN_FRONTEND) += drm_xen_front.o > diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c > new file mode 100644 > index 000000000000..dbabdf98f896 > --- /dev/null > +++ b/drivers/gpu/drm/xen/xen_drm_front.c > @@ -0,0 +1,690 @@ > +// SPDX-License-Identifier: GPL-2.0 OR MIT > + > +/* > + * Xen para-virtual DRM device > + * > + * Copyright (C) 2016-2018 EPAM Systems Inc. 
> + * > + * Author: Oleksandr Andrushchenko > + */ > + > +#include > + > +#include > + > +#include > +#include > +#include > + > +#include > + > +#include "xen_drm_front.h" > +#include "xen_drm_front_drv.h" > +#include "xen_drm_front_evtchnl.h" > +#include "xen_drm_front_shbuf.h" > + > +struct xen_drm_front_dbuf { > + struct list_head list; > + uint64_t dbuf_cookie; > + uint64_t fb_cookie; > + struct xen_drm_front_shbuf *shbuf; > +}; > + > +static int dbuf_add_to_list(struct xen_drm_front_info *front_info, > + struct xen_drm_front_shbuf *shbuf, uint64_t dbuf_cookie) > +{ > + struct xen_drm_front_dbuf *dbuf; > + > + dbuf = kzalloc(sizeof(*dbuf), GFP_KERNEL); > + if (!dbuf) > + return -ENOMEM; > + > + dbuf->dbuf_cookie = dbuf_cookie; > + dbuf->shbuf = shbuf; > + list_add(&dbuf->list, &front_info->dbuf_list); > + return 0; > +} > + > +static struct xen_drm_front_dbuf *dbuf_get(struct list_head *dbuf_list, > + uint64_t dbuf_cookie) > +{ > + struct xen_drm_front_dbuf *buf, *q; > + > + list_for_each_entry_safe(buf, q, dbuf_list, list) > + if (buf->dbuf_cookie == dbuf_cookie) > + return buf; > + > + return NULL; > +} > + > +static void dbuf_flush_fb(struct list_head *dbuf_list, uint64_t fb_cookie) > +{ > + struct xen_drm_front_dbuf *buf, *q; > + > + list_for_each_entry_safe(buf, q, dbuf_list, list) > + if (buf->fb_cookie == fb_cookie) > + xen_drm_front_shbuf_flush(buf->shbuf); > +} > + > +static void dbuf_free(struct list_head *dbuf_list, uint64_t dbuf_cookie) > +{ > + struct xen_drm_front_dbuf *buf, *q; > + > + list_for_each_entry_safe(buf, q, dbuf_list, list) > + if (buf->dbuf_cookie == dbuf_cookie) { > + list_del(&buf->list); > + xen_drm_front_shbuf_unmap(buf->shbuf); > + xen_drm_front_shbuf_free(buf->shbuf); > + kfree(buf); > + break; > + } > +} > + > +static void dbuf_free_all(struct list_head *dbuf_list) > +{ > + struct xen_drm_front_dbuf *buf, *q; > + > + list_for_each_entry_safe(buf, q, dbuf_list, list) { > + list_del(&buf->list); > + 
xen_drm_front_shbuf_unmap(buf->shbuf); > + xen_drm_front_shbuf_free(buf->shbuf); > + kfree(buf); > + } > +} > + > +static struct xendispl_req *be_prepare_req( > + struct xen_drm_front_evtchnl *evtchnl, uint8_t operation) > +{ > + struct xendispl_req *req; > + > + req = RING_GET_REQUEST(&evtchnl->u.req.ring, > + evtchnl->u.req.ring.req_prod_pvt); > + req->operation = operation; > + req->id = evtchnl->evt_next_id++; > + evtchnl->evt_id = req->id; > + return req; > +} > + > +static int be_stream_do_io(struct xen_drm_front_evtchnl *evtchnl, > + struct xendispl_req *req) > +{ > + reinit_completion(&evtchnl->u.req.completion); > + if (unlikely(evtchnl->state != EVTCHNL_STATE_CONNECTED)) > + return -EIO; > + > + xen_drm_front_evtchnl_flush(evtchnl); > + return 0; > +} > + > +static int be_stream_wait_io(struct xen_drm_front_evtchnl *evtchnl) > +{ > + if (wait_for_completion_timeout(&evtchnl->u.req.completion, > + msecs_to_jiffies(XEN_DRM_FRONT_WAIT_BACK_MS)) <= 0) > + return -ETIMEDOUT; > + > + return evtchnl->u.req.resp_status; > +} > + > +int xen_drm_front_mode_set(struct xen_drm_front_drm_pipeline *pipeline, > + uint32_t x, uint32_t y, uint32_t width, uint32_t height, > + uint32_t bpp, uint64_t fb_cookie) > +{ > + struct xen_drm_front_evtchnl *evtchnl; > + struct xen_drm_front_info *front_info; > + struct xendispl_req *req; > + unsigned long flags; > + int ret; > + > + front_info = pipeline->drm_info->front_info; > + evtchnl = &front_info->evt_pairs[pipeline->index].req; > + if (unlikely(!evtchnl)) > + return -EIO; > + > + mutex_lock(&front_info->req_io_lock); > + > + spin_lock_irqsave(&front_info->io_lock, flags); > + req = be_prepare_req(evtchnl, XENDISPL_OP_SET_CONFIG); > + req->op.set_config.x = x; > + req->op.set_config.y = y; > + req->op.set_config.width = width; > + req->op.set_config.height = height; > + req->op.set_config.bpp = bpp; > + req->op.set_config.fb_cookie = fb_cookie; > + > + ret = be_stream_do_io(evtchnl, req); > + 
spin_unlock_irqrestore(&front_info->io_lock, flags); > + > + if (ret == 0) > + ret = be_stream_wait_io(evtchnl); > + > + mutex_unlock(&front_info->req_io_lock); > + return ret; > +} > + > +static int be_dbuf_create_int(struct xen_drm_front_info *front_info, > + uint64_t dbuf_cookie, uint32_t width, uint32_t height, > + uint32_t bpp, uint64_t size, struct page **pages, > + struct sg_table *sgt) > +{ > + struct xen_drm_front_evtchnl *evtchnl; > + struct xen_drm_front_shbuf *shbuf; > + struct xendispl_req *req; > + struct xen_drm_front_shbuf_cfg buf_cfg; > + unsigned long flags; > + int ret; > + > + evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req; > + if (unlikely(!evtchnl)) > + return -EIO; > + > + memset(&buf_cfg, 0, sizeof(buf_cfg)); > + buf_cfg.xb_dev = front_info->xb_dev; > + buf_cfg.pages = pages; > + buf_cfg.size = size; > + buf_cfg.sgt = sgt; > + buf_cfg.be_alloc = front_info->cfg.be_alloc; > + > + shbuf = xen_drm_front_shbuf_alloc(&buf_cfg); > + if (!shbuf) > + return -ENOMEM; > + > + ret = dbuf_add_to_list(front_info, shbuf, dbuf_cookie); > + if (ret < 0) { > + xen_drm_front_shbuf_free(shbuf); > + return ret; > + } > + > + mutex_lock(&front_info->req_io_lock); > + > + spin_lock_irqsave(&front_info->io_lock, flags); > + req = be_prepare_req(evtchnl, XENDISPL_OP_DBUF_CREATE); > + req->op.dbuf_create.gref_directory = > + xen_drm_front_shbuf_get_dir_start(shbuf); > + req->op.dbuf_create.buffer_sz = size; > + req->op.dbuf_create.dbuf_cookie = dbuf_cookie; > + req->op.dbuf_create.width = width; > + req->op.dbuf_create.height = height; > + req->op.dbuf_create.bpp = bpp; > + if (buf_cfg.be_alloc) > + req->op.dbuf_create.flags |= XENDISPL_DBUF_FLG_REQ_ALLOC; > + > + ret = be_stream_do_io(evtchnl, req); > + spin_unlock_irqrestore(&front_info->io_lock, flags); > + > + if (ret < 0) > + goto fail; > + > + ret = be_stream_wait_io(evtchnl); > + if (ret < 0) > + goto fail; > + > + ret = xen_drm_front_shbuf_map(shbuf); > + if (ret < 0) > + goto fail; > + > + 
mutex_unlock(&front_info->req_io_lock); > + return 0; > + > +fail: > + mutex_unlock(&front_info->req_io_lock); > + dbuf_free(&front_info->dbuf_list, dbuf_cookie); > + return ret; > +} > + > +int xen_drm_front_dbuf_create_from_sgt(struct xen_drm_front_info *front_info, > + uint64_t dbuf_cookie, uint32_t width, uint32_t height, > + uint32_t bpp, uint64_t size, struct sg_table *sgt) > +{ > + return be_dbuf_create_int(front_info, dbuf_cookie, width, height, > + bpp, size, NULL, sgt); > +} > + > +int xen_drm_front_dbuf_create_from_pages(struct xen_drm_front_info *front_info, > + uint64_t dbuf_cookie, uint32_t width, uint32_t height, > + uint32_t bpp, uint64_t size, struct page **pages) > +{ > + return be_dbuf_create_int(front_info, dbuf_cookie, width, height, > + bpp, size, pages, NULL); > +} The above two wrappers seem a bit much, just to set sgt = NULL or pages = NULL in one of them. I'd drop them, but that's a bikeshed so feel free to ignore. > + > +int xen_drm_front_dbuf_destroy(struct xen_drm_front_info *front_info, > + uint64_t dbuf_cookie) > +{ > + struct xen_drm_front_evtchnl *evtchnl; > + struct xendispl_req *req; > + unsigned long flags; > + bool be_alloc; > + int ret; > + > + evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req; > + if (unlikely(!evtchnl)) > + return -EIO; > + > + be_alloc = front_info->cfg.be_alloc; > + > + /* > + * for the backend allocated buffer release references now, so backend > + * can free the buffer > + */ > + if (be_alloc) > + dbuf_free(&front_info->dbuf_list, dbuf_cookie); > + > + mutex_lock(&front_info->req_io_lock); > + > + spin_lock_irqsave(&front_info->io_lock, flags); > + req = be_prepare_req(evtchnl, XENDISPL_OP_DBUF_DESTROY); > + req->op.dbuf_destroy.dbuf_cookie = dbuf_cookie; > + > + ret = be_stream_do_io(evtchnl, req); > + spin_unlock_irqrestore(&front_info->io_lock, flags); > + > + if (ret == 0) > + ret = be_stream_wait_io(evtchnl); > + > + /* > + * do this regardless of communication status with the backend: > + * 
if we cannot remove remote resources remove what we can locally > + */ > + if (!be_alloc) > + dbuf_free(&front_info->dbuf_list, dbuf_cookie); > + > + mutex_unlock(&front_info->req_io_lock); > + return ret; > +} > + > +int xen_drm_front_fb_attach(struct xen_drm_front_info *front_info, > + uint64_t dbuf_cookie, uint64_t fb_cookie, uint32_t width, > + uint32_t height, uint32_t pixel_format) > +{ > + struct xen_drm_front_evtchnl *evtchnl; > + struct xen_drm_front_dbuf *buf; > + struct xendispl_req *req; > + unsigned long flags; > + int ret; > + > + evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req; > + if (unlikely(!evtchnl)) > + return -EIO; > + > + buf = dbuf_get(&front_info->dbuf_list, dbuf_cookie); > + if (!buf) > + return -EINVAL; > + > + buf->fb_cookie = fb_cookie; > + > + mutex_lock(&front_info->req_io_lock); > + > + spin_lock_irqsave(&front_info->io_lock, flags); > + req = be_prepare_req(evtchnl, XENDISPL_OP_FB_ATTACH); > + req->op.fb_attach.dbuf_cookie = dbuf_cookie; > + req->op.fb_attach.fb_cookie = fb_cookie; > + req->op.fb_attach.width = width; > + req->op.fb_attach.height = height; > + req->op.fb_attach.pixel_format = pixel_format; > + > + ret = be_stream_do_io(evtchnl, req); > + spin_unlock_irqrestore(&front_info->io_lock, flags); > + > + if (ret == 0) > + ret = be_stream_wait_io(evtchnl); > + > + mutex_unlock(&front_info->req_io_lock); > + return ret; > +} > + > +int xen_drm_front_fb_detach(struct xen_drm_front_info *front_info, > + uint64_t fb_cookie) > +{ > + struct xen_drm_front_evtchnl *evtchnl; > + struct xendispl_req *req; > + unsigned long flags; > + int ret; > + > + evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req; > + if (unlikely(!evtchnl)) > + return -EIO; > + > + mutex_lock(&front_info->req_io_lock); > + > + spin_lock_irqsave(&front_info->io_lock, flags); > + req = be_prepare_req(evtchnl, XENDISPL_OP_FB_DETACH); > + req->op.fb_detach.fb_cookie = fb_cookie; > + > + ret = be_stream_do_io(evtchnl, req); > + 
spin_unlock_irqrestore(&front_info->io_lock, flags); > + > + if (ret == 0) > + ret = be_stream_wait_io(evtchnl); > + > + mutex_unlock(&front_info->req_io_lock); > + return ret; > +} > + > +int xen_drm_front_page_flip(struct xen_drm_front_info *front_info, > + int conn_idx, uint64_t fb_cookie) > +{ > + struct xen_drm_front_evtchnl *evtchnl; > + struct xendispl_req *req; > + unsigned long flags; > + int ret; > + > + if (unlikely(conn_idx >= front_info->num_evt_pairs)) > + return -EINVAL; > + > + dbuf_flush_fb(&front_info->dbuf_list, fb_cookie); > + evtchnl = &front_info->evt_pairs[conn_idx].req; > + > + mutex_lock(&front_info->req_io_lock); > + > + spin_lock_irqsave(&front_info->io_lock, flags); > + req = be_prepare_req(evtchnl, XENDISPL_OP_PG_FLIP); > + req->op.pg_flip.fb_cookie = fb_cookie; > + > + ret = be_stream_do_io(evtchnl, req); > + spin_unlock_irqrestore(&front_info->io_lock, flags); > + > + if (ret == 0) > + ret = be_stream_wait_io(evtchnl); > + > + mutex_unlock(&front_info->req_io_lock); > + return ret; > +} > + > +void xen_drm_front_unload(struct xen_drm_front_info *front_info) > +{ > + if (front_info->xb_dev->state != XenbusStateReconfiguring) > + return; > + > + DRM_DEBUG("Can try removing driver now\n"); > + xenbus_switch_state(front_info->xb_dev, XenbusStateInitialising); > +} > + > +static int xen_drm_drv_probe(struct platform_device *pdev) > +{ > + /* > + * The device is not spawn from a device tree, so arch_setup_dma_ops > + * is not called, thus leaving the device with dummy DMA ops. > + * This makes the device return error on PRIME buffer import, which > + * is not correct: to fix this call of_dma_configure() with a NULL > + * node to set default DMA ops. 
> + */ > + of_dma_configure(&pdev->dev, NULL); > + return xen_drm_front_drv_probe(pdev); > +} > + > +static int xen_drm_drv_remove(struct platform_device *pdev) > +{ > + return xen_drm_front_drv_remove(pdev); > +} > + > +struct platform_device_info xen_drm_front_platform_info = { > + .name = XENDISPL_DRIVER_NAME, > + .id = 0, > + .num_res = 0, > + .dma_mask = DMA_BIT_MASK(32), > +}; > + > +static struct platform_driver xen_drm_front_front_info = { > + .probe = xen_drm_drv_probe, > + .remove = xen_drm_drv_remove, > + .driver = { > + .name = XENDISPL_DRIVER_NAME, > + }, > +}; > + > +static void xen_drm_drv_deinit(struct xen_drm_front_info *front_info) > +{ > + if (!front_info->drm_pdrv_registered) > + return; > + > + if (front_info->drm_pdev) > + platform_device_unregister(front_info->drm_pdev); > + > + platform_driver_unregister(&xen_drm_front_front_info); > + front_info->drm_pdrv_registered = false; > + front_info->drm_pdev = NULL; > +} > + > +static int xen_drm_drv_init(struct xen_drm_front_info *front_info) > +{ > + int ret; > + > + ret = platform_driver_register(&xen_drm_front_front_info); > + if (ret < 0) > + return ret; > + > + front_info->drm_pdrv_registered = true; > + /* pass card configuration via platform data */ > + xen_drm_front_platform_info.data = &front_info->cfg; > + xen_drm_front_platform_info.size_data = sizeof(front_info->cfg); > + > + front_info->drm_pdev = platform_device_register_full( > + &xen_drm_front_platform_info); > + if (IS_ERR_OR_NULL(front_info->drm_pdev)) { > + DRM_ERROR("Failed to register " XENDISPL_DRIVER_NAME " PV DRM driver\n"); > + front_info->drm_pdev = NULL; > + xen_drm_drv_deinit(front_info); > + return -ENODEV; > + } > + > + return 0; > +} > + > +static void xen_drv_remove_internal(struct xen_drm_front_info *front_info) > +{ > + xen_drm_drv_deinit(front_info); > + xen_drm_front_evtchnl_free_all(front_info); > + dbuf_free_all(&front_info->dbuf_list); > +} > + > +static int displback_initwait(struct xen_drm_front_info 
*front_info) > +{ > + struct xen_drm_front_cfg *cfg = &front_info->cfg; > + int ret; > + > + cfg->front_info = front_info; > + ret = xen_drm_front_cfg_card(front_info, cfg); > + if (ret < 0) > + return ret; > + > + DRM_INFO("Have %d connector(s)\n", cfg->num_connectors); > + /* Create event channels for all connectors and publish */ > + ret = xen_drm_front_evtchnl_create_all(front_info); > + if (ret < 0) > + return ret; > + > + return xen_drm_front_evtchnl_publish_all(front_info); > +} > + > +static int displback_connect(struct xen_drm_front_info *front_info) > +{ > + xen_drm_front_evtchnl_set_state(front_info, EVTCHNL_STATE_CONNECTED); > + return xen_drm_drv_init(front_info); > +} > + > +static void displback_disconnect(struct xen_drm_front_info *front_info) > +{ > + bool removed = true; > + > + if (front_info->drm_pdev) { > + if (xen_drm_front_drv_is_used(front_info->drm_pdev)) { > + DRM_WARN("DRM driver still in use, deferring removal\n"); > + removed = false; > + } else > + xen_drv_remove_internal(front_info); Ok this logic here is fishy, since you're open-coding the drm unplug infrastructure, but slightly differently and slightly racy. If you have a driver where your underlying "hw" (well it's virtual here, but same idea) can disappear any time while userspace is still using the drm driver, you need to use the drm_dev_unplug() function and related code. drm_dev_unplug() works like drm_dev_unregister, except for the hotplug case. Then you also have to guard all the driver entry points where you do access the backchannel using drm_dev_is_unplugged() (I've seen a few of those already). Then you can rip out all the logic here and the xen_drm_front_drv_is_used() helper. 
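Roughly like this — completely untested sketch, just to show the shape; I'm assuming front_info keeps a pointer to the drm_device somewhere (made-up field name below):

```c
/*
 * Sketch only, not the actual driver code: on backend disconnect,
 * drm_dev_unplug() marks the device gone and drops the registration;
 * open userspace handles keep the drm_device alive until last close.
 */
static void displback_disconnect(struct xen_drm_front_info *front_info)
{
	drm_dev_unplug(front_info->drm_dev); /* hypothetical field */

	xen_drm_front_evtchnl_set_state(front_info,
					EVTCHNL_STATE_DISCONNECTED);
	xenbus_switch_state(front_info->xb_dev, XenbusStateInitialising);
}

/* ...and every entry point that talks to the backend bails out early: */
int xen_drm_front_page_flip(struct xen_drm_front_info *front_info,
			    int conn_idx, uint64_t fb_cookie)
{
	if (drm_dev_is_unplugged(front_info->drm_dev))
		return -ENODEV;

	/* ...normal request path... */
	return 0;
}
```

With that the deferred-removal bookkeeping above becomes unnecessary.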
I thought there are some patches from Noralf in-flight that improved the docs on this, I need to check.

> +	}
> +
> +	xen_drm_front_evtchnl_set_state(front_info, EVTCHNL_STATE_DISCONNECTED);
> +
> +	if (removed)
> +		xenbus_switch_state(front_info->xb_dev,
> +				XenbusStateInitialising);
> +	else
> +		xenbus_switch_state(front_info->xb_dev,
> +				XenbusStateReconfiguring);
> +}
> +
> +static void displback_changed(struct xenbus_device *xb_dev,
> +		enum xenbus_state backend_state)
> +{
> +	struct xen_drm_front_info *front_info = dev_get_drvdata(&xb_dev->dev);
> +	int ret;
> +
> +	DRM_DEBUG("Backend state is %s, front is %s\n",
> +			xenbus_strstate(backend_state),
> +			xenbus_strstate(xb_dev->state));
> +
> +	switch (backend_state) {
> +	case XenbusStateReconfiguring:
> +		/* fall through */
> +	case XenbusStateReconfigured:
> +		/* fall through */
> +	case XenbusStateInitialised:
> +		break;
> +
> +	case XenbusStateInitialising:
> +		/* recovering after backend unexpected closure */
> +		displback_disconnect(front_info);
> +		break;
> +
> +	case XenbusStateInitWait:
> +		/* recovering after backend unexpected closure */
> +		displback_disconnect(front_info);
> +		if (xb_dev->state != XenbusStateInitialising)
> +			break;
> +
> +		ret = displback_initwait(front_info);
> +		if (ret < 0)
> +			xenbus_dev_fatal(xb_dev, ret,
> +					"initializing frontend");
> +		else
> +			xenbus_switch_state(xb_dev, XenbusStateInitialised);
> +		break;
> +
> +	case XenbusStateConnected:
> +		if (xb_dev->state != XenbusStateInitialised)
> +			break;
> +
> +		ret = displback_connect(front_info);
> +		if (ret < 0)
> +			xenbus_dev_fatal(xb_dev, ret,
> +					"initializing DRM driver");
> +		else
> +			xenbus_switch_state(xb_dev, XenbusStateConnected);
> +		break;
> +
> +	case XenbusStateClosing:
> +		/*
> +		 * in this state backend starts freeing resources,
> +		 * so let it go into closed state, so we can also
> +		 * remove ours
> +		 */
> +		break;
> +
> +	case XenbusStateUnknown:
> +		/* fall through */
> +	case XenbusStateClosed:
> +		if (xb_dev->state == XenbusStateClosed)
> +			break;
> +
> +		displback_disconnect(front_info);
> +		break;
> +	}
> +}
> +
> +static int xen_drv_probe(struct xenbus_device *xb_dev,
> +		const struct xenbus_device_id *id)
> +{
> +	struct xen_drm_front_info *front_info;
> +
> +	front_info = devm_kzalloc(&xb_dev->dev,
> +			sizeof(*front_info), GFP_KERNEL);
> +	if (!front_info)
> +		return -ENOMEM;
> +
> +	front_info->xb_dev = xb_dev;
> +	spin_lock_init(&front_info->io_lock);
> +	mutex_init(&front_info->req_io_lock);
> +	INIT_LIST_HEAD(&front_info->dbuf_list);
> +	front_info->drm_pdrv_registered = false;
> +	dev_set_drvdata(&xb_dev->dev, front_info);
> +	return xenbus_switch_state(xb_dev, XenbusStateInitialising);
> +}
> +
> +static int xen_drv_remove(struct xenbus_device *dev)
> +{
> +	struct xen_drm_front_info *front_info = dev_get_drvdata(&dev->dev);
> +	int to = 100;
> +
> +	xenbus_switch_state(dev, XenbusStateClosing);
> +
> +	/*
> +	 * On driver removal it is disconnected from XenBus,
> +	 * so no backend state change events come via .otherend_changed
> +	 * callback. This prevents us from exiting gracefully, e.g.
> +	 * signaling the backend to free event channels, waiting for its
> +	 * state to change to XenbusStateClosed and cleaning at our end.
> +	 * Normally when front driver removed backend will finally go into
> +	 * XenbusStateInitWait state.
> +	 *
> +	 * Workaround: read backend's state manually and wait with time-out.
> +	 */
> +	while ((xenbus_read_unsigned(front_info->xb_dev->otherend,
> +			"state", XenbusStateUnknown) != XenbusStateInitWait) &&
> +			to--)
> +		msleep(10);
> +
> +	if (!to)
> +		DRM_ERROR("Backend state is %s while removing driver\n",
> +				xenbus_strstate(xenbus_read_unsigned(
> +						front_info->xb_dev->otherend,
> +						"state", XenbusStateUnknown)));
> +
> +	xen_drv_remove_internal(front_info);
> +	xenbus_frontend_closed(dev);
> +	return 0;
> +}
> +
> +static const struct xenbus_device_id xen_drv_ids[] = {
> +	{ XENDISPL_DRIVER_NAME },
> +	{ "" }
> +};
> +
> +static struct xenbus_driver xen_driver = {
> +	.ids = xen_drv_ids,
> +	.probe = xen_drv_probe,
> +	.remove = xen_drv_remove,
> +	.otherend_changed = displback_changed,
> +};
> +
> +static int __init xen_drv_init(void)
> +{
> +	/* At the moment we only support case with XEN_PAGE_SIZE == PAGE_SIZE */
> +	if (XEN_PAGE_SIZE != PAGE_SIZE) {
> +		DRM_ERROR(XENDISPL_DRIVER_NAME ": different kernel and Xen page sizes are not supported: XEN_PAGE_SIZE (%lu) != PAGE_SIZE (%lu)\n",
> +				XEN_PAGE_SIZE, PAGE_SIZE);
> +		return -ENODEV;
> +	}
> +
> +	if (!xen_domain())
> +		return -ENODEV;
> +
> +	if (!xen_has_pv_devices())
> +		return -ENODEV;
> +
> +	DRM_INFO("Registering XEN PV " XENDISPL_DRIVER_NAME "\n");
> +	return xenbus_register_frontend(&xen_driver);
> +}
> +
> +static void __exit xen_drv_cleanup(void)
> +{
> +	DRM_INFO("Unregistering XEN PV " XENDISPL_DRIVER_NAME "\n");
> +	xenbus_unregister_driver(&xen_driver);
> +}
> +
> +module_init(xen_drv_init);
> +module_exit(xen_drv_cleanup);
> +
> +MODULE_DESCRIPTION("Xen para-virtualized display device frontend");
> +MODULE_LICENSE("GPL");
> +MODULE_ALIAS("xen:"XENDISPL_DRIVER_NAME);
> diff --git a/drivers/gpu/drm/xen/xen_drm_front.h b/drivers/gpu/drm/xen/xen_drm_front.h
> new file mode 100644
> index 000000000000..93c58c4e87d2
> --- /dev/null
> +++ b/drivers/gpu/drm/xen/xen_drm_front.h
> @@ -0,0 +1,77 @@
> +/* SPDX-License-Identifier: GPL-2.0 OR MIT */
> +
> +/*
> + * Xen para-virtual DRM device
> + *
> + * Copyright (C) 2016-2018 EPAM Systems Inc.
> + *
> + * Author: Oleksandr Andrushchenko
> + */
> +
> +#ifndef __XEN_DRM_FRONT_H_
> +#define __XEN_DRM_FRONT_H_
> +
> +#include
> +
> +#include "xen_drm_front_cfg.h"
> +
> +/* timeout in ms to wait for backend to respond */
> +#define XEN_DRM_FRONT_WAIT_BACK_MS	3000
> +
> +#ifndef GRANT_INVALID_REF
> +/*
> + * Note on usage of grant reference 0 as invalid grant reference:
> + * grant reference 0 is valid, but never exposed to a PV driver,
> + * because of the fact it is already in use/reserved by the PV console.
> + */
> +#define GRANT_INVALID_REF	0
> +#endif
> +
> +struct xen_drm_front_drm_pipeline;
> +
> +struct xen_drm_front_info {
> +	struct xenbus_device *xb_dev;
> +	/* to protect data between backend IO code and interrupt handler */
> +	spinlock_t io_lock;
> +	/* serializer for backend IO: request/response */
> +	struct mutex req_io_lock;
> +	bool drm_pdrv_registered;
> +	/* virtual DRM platform device */
> +	struct platform_device *drm_pdev;
> +
> +	int num_evt_pairs;
> +	struct xen_drm_front_evtchnl_pair *evt_pairs;
> +	struct xen_drm_front_cfg cfg;
> +
> +	/* display buffers */
> +	struct list_head dbuf_list;
> +};
> +
> +int xen_drm_front_mode_set(struct xen_drm_front_drm_pipeline *pipeline,
> +		uint32_t x, uint32_t y, uint32_t width, uint32_t height,
> +		uint32_t bpp, uint64_t fb_cookie);
> +
> +int xen_drm_front_dbuf_create_from_sgt(struct xen_drm_front_info *front_info,
> +		uint64_t dbuf_cookie, uint32_t width, uint32_t height,
> +		uint32_t bpp, uint64_t size, struct sg_table *sgt);
> +
> +int xen_drm_front_dbuf_create_from_pages(struct xen_drm_front_info *front_info,
> +		uint64_t dbuf_cookie, uint32_t width, uint32_t height,
> +		uint32_t bpp, uint64_t size, struct page **pages);
> +
> +int xen_drm_front_dbuf_destroy(struct xen_drm_front_info *front_info,
> +		uint64_t dbuf_cookie);
> +
> +int xen_drm_front_fb_attach(struct xen_drm_front_info *front_info,
> +		uint64_t dbuf_cookie, uint64_t fb_cookie, uint32_t width,
> +		uint32_t height, uint32_t pixel_format);
> +
> +int xen_drm_front_fb_detach(struct xen_drm_front_info *front_info,
> +		uint64_t fb_cookie);
> +
> +int xen_drm_front_page_flip(struct xen_drm_front_info *front_info,
> +		int conn_idx, uint64_t fb_cookie);
> +
> +void xen_drm_front_unload(struct xen_drm_front_info *front_info);
> +
> +#endif /* __XEN_DRM_FRONT_H_ */
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_cfg.c b/drivers/gpu/drm/xen/xen_drm_front_cfg.c
> new file mode 100644
> index 000000000000..9a0b2b8e6169
> --- /dev/null
> +++ b/drivers/gpu/drm/xen/xen_drm_front_cfg.c
> @@ -0,0 +1,77 @@
> +// SPDX-License-Identifier: GPL-2.0 OR MIT
> +
> +/*
> + * Xen para-virtual DRM device
> + *
> + * Copyright (C) 2016-2018 EPAM Systems Inc.
> + *
> + * Author: Oleksandr Andrushchenko
> + */
> +
> +#include
> +
> +#include
> +
> +#include
> +#include
> +
> +#include "xen_drm_front.h"
> +#include "xen_drm_front_cfg.h"
> +
> +static int cfg_connector(struct xen_drm_front_info *front_info,
> +		struct xen_drm_front_cfg_connector *connector,
> +		const char *path, int index)
> +{
> +	char *connector_path;
> +
> +	connector_path = devm_kasprintf(&front_info->xb_dev->dev,
> +			GFP_KERNEL, "%s/%d", path, index);
> +	if (!connector_path)
> +		return -ENOMEM;
> +
> +	if (xenbus_scanf(XBT_NIL, connector_path, XENDISPL_FIELD_RESOLUTION,
> +			"%d" XENDISPL_RESOLUTION_SEPARATOR "%d",
> +			&connector->width, &connector->height) < 0) {
> +		/* either no entry configured or wrong resolution set */
> +		connector->width = 0;
> +		connector->height = 0;
> +		return -EINVAL;
> +	}
> +
> +	connector->xenstore_path = connector_path;
> +
> +	DRM_INFO("Connector %s: resolution %dx%d\n",
> +			connector_path, connector->width, connector->height);
> +	return 0;
> +}
> +
> +int xen_drm_front_cfg_card(struct xen_drm_front_info *front_info,
> +		struct xen_drm_front_cfg *cfg)
> +{
> +	struct xenbus_device *xb_dev = front_info->xb_dev;
> +	int ret, i;
> +
> +	if (xenbus_read_unsigned(front_info->xb_dev->nodename,
> +			XENDISPL_FIELD_BE_ALLOC, 0)) {
> +		DRM_INFO("Backend can provide display buffers\n");
> +		cfg->be_alloc = true;
> +	}
> +
> +	cfg->num_connectors = 0;
> +	for (i = 0; i < ARRAY_SIZE(cfg->connectors); i++) {
> +		ret = cfg_connector(front_info,
> +				&cfg->connectors[i], xb_dev->nodename, i);
> +		if (ret < 0)
> +			break;
> +		cfg->num_connectors++;
> +	}
> +
> +	if (!cfg->num_connectors) {
> +		DRM_ERROR("No connector(s) configured at %s\n",
> +				xb_dev->nodename);
> +		return -ENODEV;
> +	}
> +
> +	return 0;
> +}
> +
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_cfg.h b/drivers/gpu/drm/xen/xen_drm_front_cfg.h
> new file mode 100644
> index 000000000000..6e7af670f8cd
> --- /dev/null
> +++ b/drivers/gpu/drm/xen/xen_drm_front_cfg.h
> @@ -0,0 +1,37 @@
> +/* SPDX-License-Identifier: GPL-2.0 OR MIT */
> +
> +/*
> + * Xen para-virtual DRM device
> + *
> + * Copyright (C) 2016-2018 EPAM Systems Inc.
> + *
> + * Author: Oleksandr Andrushchenko
> + */
> +
> +#ifndef __XEN_DRM_FRONT_CFG_H_
> +#define __XEN_DRM_FRONT_CFG_H_
> +
> +#include
> +
> +#define XEN_DRM_FRONT_MAX_CRTCS	4
> +
> +struct xen_drm_front_cfg_connector {
> +	int width;
> +	int height;
> +	char *xenstore_path;
> +};
> +
> +struct xen_drm_front_cfg {
> +	struct xen_drm_front_info *front_info;
> +	/* number of connectors in this configuration */
> +	int num_connectors;
> +	/* connector configurations */
> +	struct xen_drm_front_cfg_connector connectors[XEN_DRM_FRONT_MAX_CRTCS];
> +	/* set if dumb buffers are allocated externally on backend side */
> +	bool be_alloc;
> +};
> +
> +int xen_drm_front_cfg_card(struct xen_drm_front_info *front_info,
> +		struct xen_drm_front_cfg *cfg);
> +
> +#endif /* __XEN_DRM_FRONT_CFG_H_ */
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_conn.c b/drivers/gpu/drm/xen/xen_drm_front_conn.c
> new file mode 100644
> index 000000000000..aaa1cfff4797
> --- /dev/null
> +++ b/drivers/gpu/drm/xen/xen_drm_front_conn.c
> @@ -0,0 +1,146 @@
> +// SPDX-License-Identifier: GPL-2.0 OR MIT
> +
> +/*
> + * Xen para-virtual DRM device
> + *
> + * Copyright (C) 2016-2018 EPAM Systems Inc.
> + *
> + * Author: Oleksandr Andrushchenko
> + */
> +
> +#include
> +#include
> +
> +#include