Date: Fri, 31 Jul 2020 10:56:08 +0200
From: Guido Günther
To: Laurentiu Palcu
Cc: Philipp Zabel, David Airlie, Daniel Vetter, Shawn Guo, Sascha Hauer,
 Pengutronix Kernel Team, Fabio Estevam, NXP Linux Team, Lucas Stach,
 lukas@mntmn.com, dri-devel@lists.freedesktop.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v9 2/5] drm/imx: Add initial support for DCSS on iMX8MQ
Message-ID: <20200731085608.GA13342@bogon.m.sigxcpu.org>
References: <20200731081836.3048-1-laurentiu.palcu@oss.nxp.com>
 <20200731081836.3048-3-laurentiu.palcu@oss.nxp.com>
In-Reply-To: <20200731081836.3048-3-laurentiu.palcu@oss.nxp.com>

Hi,
On Fri, Jul 31, 2020 at 11:18:30AM +0300, Laurentiu Palcu wrote:
> From: Laurentiu Palcu
> 
> This adds initial support for iMX8MQ's Display Controller Subsystem (DCSS).
> Some of its capabilities include:
> * 4K@60fps;
> * HDR10;
> * one graphics and 2 video pipelines;
> * on-the-fly decompression of compressed video and graphics;
> 
> The reference manual can be found here:
> https://www.nxp.com/webapp/Download?colCode=IMX8MDQLQRM
> 
> The current patch adds only basic functionality: one primary plane for
> graphics, linear, tiled and super-tiled buffers support (no graphics
> decompression yet), no HDR10 and no video planes.
> 
> Video planes support and HDR10 will be added in subsequent patches once
> per-plane de-gamma/CSC/gamma support is in.
> 
> Signed-off-by: Laurentiu Palcu
> Reviewed-by: Lucas Stach

I'm putting an
Acked-by: Guido Günther
since it fixes the build for me and it looks good to me from what I know
about DCSS.
Cheers,
 -- Guido

> ---
>  drivers/gpu/drm/imx/Kconfig            |   2 +
>  drivers/gpu/drm/imx/Makefile           |   1 +
>  drivers/gpu/drm/imx/dcss/Kconfig       |   9 +
>  drivers/gpu/drm/imx/dcss/Makefile      |   6 +
>  drivers/gpu/drm/imx/dcss/dcss-blkctl.c |  70 +++
>  drivers/gpu/drm/imx/dcss/dcss-crtc.c   | 219 +++++++
>  drivers/gpu/drm/imx/dcss/dcss-ctxld.c  | 424 +++++++++++++
>  drivers/gpu/drm/imx/dcss/dcss-dev.c    | 314 ++++++++++
>  drivers/gpu/drm/imx/dcss/dcss-dev.h    | 177 ++++++
>  drivers/gpu/drm/imx/dcss/dcss-dpr.c    | 562 +++++++++++++++++
>  drivers/gpu/drm/imx/dcss/dcss-drv.c    | 138 +++++
>  drivers/gpu/drm/imx/dcss/dcss-dtg.c    | 409 ++++++++++++
>  drivers/gpu/drm/imx/dcss/dcss-kms.c    | 177 ++++++
>  drivers/gpu/drm/imx/dcss/dcss-kms.h    |  43 ++
>  drivers/gpu/drm/imx/dcss/dcss-plane.c  | 405 ++++++++++++
>  drivers/gpu/drm/imx/dcss/dcss-scaler.c | 826 +++++++++++++++++++++++++
>  drivers/gpu/drm/imx/dcss/dcss-ss.c     | 180 ++++++
>  17 files changed, 3962 insertions(+)
>  create mode 100644 drivers/gpu/drm/imx/dcss/Kconfig
>  create mode 100644 drivers/gpu/drm/imx/dcss/Makefile
>  create mode 100644 drivers/gpu/drm/imx/dcss/dcss-blkctl.c
>  create mode 100644 drivers/gpu/drm/imx/dcss/dcss-crtc.c
>  create mode 100644 drivers/gpu/drm/imx/dcss/dcss-ctxld.c
>  create mode 100644 drivers/gpu/drm/imx/dcss/dcss-dev.c
>  create mode 100644 drivers/gpu/drm/imx/dcss/dcss-dev.h
>  create mode 100644 drivers/gpu/drm/imx/dcss/dcss-dpr.c
>  create mode 100644 drivers/gpu/drm/imx/dcss/dcss-drv.c
>  create mode 100644 drivers/gpu/drm/imx/dcss/dcss-dtg.c
>  create mode 100644 drivers/gpu/drm/imx/dcss/dcss-kms.c
>  create mode 100644 drivers/gpu/drm/imx/dcss/dcss-kms.h
>  create mode 100644 drivers/gpu/drm/imx/dcss/dcss-plane.c
>  create mode 100644 drivers/gpu/drm/imx/dcss/dcss-scaler.c
>  create mode 100644 drivers/gpu/drm/imx/dcss/dcss-ss.c
> 
> diff --git a/drivers/gpu/drm/imx/Kconfig b/drivers/gpu/drm/imx/Kconfig
> index 207bf7409dfb..6231048aa5aa 100644
> --- a/drivers/gpu/drm/imx/Kconfig
> +++ b/drivers/gpu/drm/imx/Kconfig
> @@ -39,3 +39,5 @@ config DRM_IMX_HDMI
>  	depends on DRM_IMX
>  	help
>  	  Choose this if you want to use HDMI on i.MX6.
> +
> +source "drivers/gpu/drm/imx/dcss/Kconfig"
> diff --git a/drivers/gpu/drm/imx/Makefile b/drivers/gpu/drm/imx/Makefile
> index 21cdcc2faabc..b644deffe948 100644
> --- a/drivers/gpu/drm/imx/Makefile
> +++ b/drivers/gpu/drm/imx/Makefile
> @@ -9,3 +9,4 @@ obj-$(CONFIG_DRM_IMX_TVE) += imx-tve.o
>  obj-$(CONFIG_DRM_IMX_LDB) += imx-ldb.o
>  
>  obj-$(CONFIG_DRM_IMX_HDMI) += dw_hdmi-imx.o
> +obj-$(CONFIG_DRM_IMX_DCSS) += dcss/
> diff --git a/drivers/gpu/drm/imx/dcss/Kconfig b/drivers/gpu/drm/imx/dcss/Kconfig
> new file mode 100644
> index 000000000000..69860de8861f
> --- /dev/null
> +++ b/drivers/gpu/drm/imx/dcss/Kconfig
> @@ -0,0 +1,9 @@
> +config DRM_IMX_DCSS
> +	tristate "i.MX8MQ DCSS"
> +	select IMX_IRQSTEER
> +	select DRM_KMS_CMA_HELPER
> +	select VIDEOMODE_HELPERS
> +	depends on DRM && ARCH_MXC
> +	help
> +	  Choose this if you have a NXP i.MX8MQ based system and want to use the
> +	  Display Controller Subsystem. This option enables DCSS support.
> diff --git a/drivers/gpu/drm/imx/dcss/Makefile b/drivers/gpu/drm/imx/dcss/Makefile > new file mode 100644 > index 000000000000..8c7c8da42792 > --- /dev/null > +++ b/drivers/gpu/drm/imx/dcss/Makefile > @@ -0,0 +1,6 @@ > +imx-dcss-objs := dcss-drv.o dcss-dev.o dcss-blkctl.o dcss-ctxld.o dcss-dtg.o \ > + dcss-ss.o dcss-dpr.o dcss-scaler.o dcss-kms.o dcss-crtc.o \ > + dcss-plane.o > + > +obj-$(CONFIG_DRM_IMX_DCSS) += imx-dcss.o > + > diff --git a/drivers/gpu/drm/imx/dcss/dcss-blkctl.c b/drivers/gpu/drm/imx/dcss/dcss-blkctl.c > new file mode 100644 > index 000000000000..c9b54bb2692d > --- /dev/null > +++ b/drivers/gpu/drm/imx/dcss/dcss-blkctl.c > @@ -0,0 +1,70 @@ > +// SPDX-License-Identifier: GPL-2.0 > +/* > + * Copyright 2019 NXP. > + */ > + > +#include > +#include > +#include > + > +#include "dcss-dev.h" > + > +#define DCSS_BLKCTL_RESET_CTRL 0x00 > +#define B_CLK_RESETN BIT(0) > +#define APB_CLK_RESETN BIT(1) > +#define P_CLK_RESETN BIT(2) > +#define RTR_CLK_RESETN BIT(4) > +#define DCSS_BLKCTL_CONTROL0 0x10 > +#define HDMI_MIPI_CLK_SEL BIT(0) > +#define DISPMIX_REFCLK_SEL_POS 4 > +#define DISPMIX_REFCLK_SEL_MASK GENMASK(5, 4) > +#define DISPMIX_PIXCLK_SEL BIT(8) > +#define HDMI_SRC_SECURE_EN BIT(16) > + > +struct dcss_blkctl { > + struct dcss_dev *dcss; > + void __iomem *base_reg; > +}; > + > +void dcss_blkctl_cfg(struct dcss_blkctl *blkctl) > +{ > + if (blkctl->dcss->hdmi_output) > + dcss_writel(0, blkctl->base_reg + DCSS_BLKCTL_CONTROL0); > + else > + dcss_writel(DISPMIX_PIXCLK_SEL, > + blkctl->base_reg + DCSS_BLKCTL_CONTROL0); > + > + dcss_set(B_CLK_RESETN | APB_CLK_RESETN | P_CLK_RESETN | RTR_CLK_RESETN, > + blkctl->base_reg + DCSS_BLKCTL_RESET_CTRL); > +} > + > +int dcss_blkctl_init(struct dcss_dev *dcss, unsigned long blkctl_base) > +{ > + struct dcss_blkctl *blkctl; > + > + blkctl = kzalloc(sizeof(*blkctl), GFP_KERNEL); > + if (!blkctl) > + return -ENOMEM; > + > + blkctl->base_reg = ioremap(blkctl_base, SZ_4K); > + if (!blkctl->base_reg) { > + dev_err(dcss->dev, "unable to remap BLK CTRL base\n"); > + kfree(blkctl); > + return -ENOMEM; > + } > + > + dcss->blkctl = blkctl; > + blkctl->dcss = dcss; > + > + dcss_blkctl_cfg(blkctl); > + > + return 0; > +} > + > +void dcss_blkctl_exit(struct dcss_blkctl *blkctl) > +{ > + if (blkctl->base_reg) > + iounmap(blkctl->base_reg); > + > + kfree(blkctl); > +} > diff --git a/drivers/gpu/drm/imx/dcss/dcss-crtc.c b/drivers/gpu/drm/imx/dcss/dcss-crtc.c > new file mode 100644 > index 000000000000..36abff0890b2 > --- /dev/null > +++ b/drivers/gpu/drm/imx/dcss/dcss-crtc.c > @@ -0,0 +1,219 @@ > +// SPDX-License-Identifier: GPL-2.0 > +/* > + * Copyright 2019 NXP. 
> + */ > + > +#include > +#include > +#include > +#include > + > +#include "dcss-dev.h" > +#include "dcss-kms.h" > + > +static int dcss_enable_vblank(struct drm_crtc *crtc) > +{ > + struct dcss_crtc *dcss_crtc = container_of(crtc, struct dcss_crtc, > + base); > + struct dcss_dev *dcss = crtc->dev->dev_private; > + > + dcss_dtg_vblank_irq_enable(dcss->dtg, true); > + > + dcss_dtg_ctxld_kick_irq_enable(dcss->dtg, true); > + > + enable_irq(dcss_crtc->irq); > + > + return 0; > +} > + > +static void dcss_disable_vblank(struct drm_crtc *crtc) > +{ > + struct dcss_crtc *dcss_crtc = container_of(crtc, struct dcss_crtc, > + base); > + struct dcss_dev *dcss = dcss_crtc->base.dev->dev_private; > + > + disable_irq_nosync(dcss_crtc->irq); > + > + dcss_dtg_vblank_irq_enable(dcss->dtg, false); > + > + if (dcss_crtc->disable_ctxld_kick_irq) > + dcss_dtg_ctxld_kick_irq_enable(dcss->dtg, false); > +} > + > +static const struct drm_crtc_funcs dcss_crtc_funcs = { > + .set_config = drm_atomic_helper_set_config, > + .destroy = drm_crtc_cleanup, > + .page_flip = drm_atomic_helper_page_flip, > + .reset = drm_atomic_helper_crtc_reset, > + .atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state, > + .atomic_destroy_state = drm_atomic_helper_crtc_destroy_state, > + .enable_vblank = dcss_enable_vblank, > + .disable_vblank = dcss_disable_vblank, > +}; > + > +static void dcss_crtc_atomic_begin(struct drm_crtc *crtc, > + struct drm_crtc_state *old_crtc_state) > +{ > + drm_crtc_vblank_on(crtc); > +} > + > +static void dcss_crtc_atomic_flush(struct drm_crtc *crtc, > + struct drm_crtc_state *old_crtc_state) > +{ > + struct dcss_crtc *dcss_crtc = container_of(crtc, struct dcss_crtc, > + base); > + struct dcss_dev *dcss = dcss_crtc->base.dev->dev_private; > + > + spin_lock_irq(&crtc->dev->event_lock); > + if (crtc->state->event) { > + WARN_ON(drm_crtc_vblank_get(crtc)); > + drm_crtc_arm_vblank_event(crtc, crtc->state->event); > + crtc->state->event = NULL; > + } > + spin_unlock_irq(&crtc->dev->event_lock); > + > + if (dcss_dtg_is_enabled(dcss->dtg)) > + dcss_ctxld_enable(dcss->ctxld); > +} > + > +static void dcss_crtc_atomic_enable(struct drm_crtc *crtc, > + struct drm_crtc_state *old_crtc_state) > +{ > + struct dcss_crtc *dcss_crtc = container_of(crtc, struct dcss_crtc, > + base); > + struct dcss_dev *dcss = dcss_crtc->base.dev->dev_private; > + struct drm_display_mode *mode = &crtc->state->adjusted_mode; > + struct drm_display_mode *old_mode = &old_crtc_state->adjusted_mode; > + struct videomode vm; > + > + drm_display_mode_to_videomode(mode, &vm); > + > + pm_runtime_get_sync(dcss->dev); > + > + vm.pixelclock = mode->crtc_clock * 1000; > + > + dcss_ss_subsam_set(dcss->ss); > + dcss_dtg_css_set(dcss->dtg); > + > + if (!drm_mode_equal(mode, old_mode) || !old_crtc_state->active) { > + dcss_dtg_sync_set(dcss->dtg, &vm); > + dcss_ss_sync_set(dcss->ss, &vm, > + mode->flags & DRM_MODE_FLAG_PHSYNC, > + mode->flags & DRM_MODE_FLAG_PVSYNC); > + } > + > + dcss_enable_dtg_and_ss(dcss); > + > + dcss_ctxld_enable(dcss->ctxld); > + > + /* Allow CTXLD kick interrupt to be disabled when VBLANK is disabled. 
*/ > + dcss_crtc->disable_ctxld_kick_irq = true; > +} > + > +static void dcss_crtc_atomic_disable(struct drm_crtc *crtc, > + struct drm_crtc_state *old_crtc_state) > +{ > + struct dcss_crtc *dcss_crtc = container_of(crtc, struct dcss_crtc, > + base); > + struct dcss_dev *dcss = dcss_crtc->base.dev->dev_private; > + struct drm_display_mode *mode = &crtc->state->adjusted_mode; > + struct drm_display_mode *old_mode = &old_crtc_state->adjusted_mode; > + > + drm_atomic_helper_disable_planes_on_crtc(old_crtc_state, false); > + > + spin_lock_irq(&crtc->dev->event_lock); > + if (crtc->state->event) { > + drm_crtc_send_vblank_event(crtc, crtc->state->event); > + crtc->state->event = NULL; > + } > + spin_unlock_irq(&crtc->dev->event_lock); > + > + dcss_dtg_ctxld_kick_irq_enable(dcss->dtg, true); > + > + reinit_completion(&dcss->disable_completion); > + > + dcss_disable_dtg_and_ss(dcss); > + > + dcss_ctxld_enable(dcss->ctxld); > + > + if (!drm_mode_equal(mode, old_mode) || !crtc->state->active) > + if (!wait_for_completion_timeout(&dcss->disable_completion, > + msecs_to_jiffies(100))) > + dev_err(dcss->dev, "Shutting off DTG timed out.\n"); > + > + /* > + * Do not shut off CTXLD kick interrupt when shutting VBLANK off. It > + * will be needed to commit the last changes, before going to suspend. > + */ > + dcss_crtc->disable_ctxld_kick_irq = false; > + > + drm_crtc_vblank_off(crtc); > + > + pm_runtime_mark_last_busy(dcss->dev); > + pm_runtime_put_autosuspend(dcss->dev); > +} > + > +static const struct drm_crtc_helper_funcs dcss_helper_funcs = { > + .atomic_begin = dcss_crtc_atomic_begin, > + .atomic_flush = dcss_crtc_atomic_flush, > + .atomic_enable = dcss_crtc_atomic_enable, > + .atomic_disable = dcss_crtc_atomic_disable, > +}; > + > +static irqreturn_t dcss_crtc_irq_handler(int irq, void *dev_id) > +{ > + struct dcss_crtc *dcss_crtc = dev_id; > + struct dcss_dev *dcss = dcss_crtc->base.dev->dev_private; > + > + if (!dcss_dtg_vblank_irq_valid(dcss->dtg)) > + return IRQ_NONE; > + > + if (dcss_ctxld_is_flushed(dcss->ctxld)) > + drm_crtc_handle_vblank(&dcss_crtc->base); > + > + dcss_dtg_vblank_irq_clear(dcss->dtg); > + > + return IRQ_HANDLED; > +} > + > +int dcss_crtc_init(struct dcss_crtc *crtc, struct drm_device *drm) > +{ > + struct dcss_dev *dcss = drm->dev_private; > + struct platform_device *pdev = to_platform_device(dcss->dev); > + int ret; > + > + crtc->plane[0] = dcss_plane_init(drm, drm_crtc_mask(&crtc->base), > + DRM_PLANE_TYPE_PRIMARY, 0); > + if (IS_ERR(crtc->plane[0])) > + return PTR_ERR(crtc->plane[0]); > + > + crtc->base.port = dcss->of_port; > + > + drm_crtc_helper_add(&crtc->base, &dcss_helper_funcs); > + ret = drm_crtc_init_with_planes(drm, &crtc->base, &crtc->plane[0]->base, > + NULL, &dcss_crtc_funcs, NULL); > + if (ret) { > + dev_err(dcss->dev, "failed to init crtc\n"); > + return ret; > + } > + > + crtc->irq = platform_get_irq_byname(pdev, "vblank"); > + if (crtc->irq < 0) > + return crtc->irq; > + > + ret = request_irq(crtc->irq, dcss_crtc_irq_handler, > + 0, "dcss_drm", crtc); > + if (ret) { > + dev_err(dcss->dev, "irq request failed with %d.\n", ret); > + return ret; > + } > + > + disable_irq(crtc->irq); > + > + return 0; > +} > + > +void dcss_crtc_deinit(struct dcss_crtc *crtc, struct drm_device *drm) > +{ > + free_irq(crtc->irq, crtc); > +} > diff --git a/drivers/gpu/drm/imx/dcss/dcss-ctxld.c b/drivers/gpu/drm/imx/dcss/dcss-ctxld.c > new file mode 100644 > index 000000000000..3a84cb3209c4 > --- /dev/null > +++ b/drivers/gpu/drm/imx/dcss/dcss-ctxld.c > @@ -0,0 +1,424 @@ > +// 
SPDX-License-Identifier: GPL-2.0 > +/* > + * Copyright 2019 NXP. > + */ > + > +#include > +#include > +#include > +#include > +#include > + > +#include "dcss-dev.h" > + > +#define DCSS_CTXLD_CONTROL_STATUS 0x0 > +#define CTXLD_ENABLE BIT(0) > +#define ARB_SEL BIT(1) > +#define RD_ERR_EN BIT(2) > +#define DB_COMP_EN BIT(3) > +#define SB_HP_COMP_EN BIT(4) > +#define SB_LP_COMP_EN BIT(5) > +#define DB_PEND_SB_REC_EN BIT(6) > +#define SB_PEND_DISP_ACTIVE_EN BIT(7) > +#define AHB_ERR_EN BIT(8) > +#define RD_ERR BIT(16) > +#define DB_COMP BIT(17) > +#define SB_HP_COMP BIT(18) > +#define SB_LP_COMP BIT(19) > +#define DB_PEND_SB_REC BIT(20) > +#define SB_PEND_DISP_ACTIVE BIT(21) > +#define AHB_ERR BIT(22) > +#define DCSS_CTXLD_DB_BASE_ADDR 0x10 > +#define DCSS_CTXLD_DB_COUNT 0x14 > +#define DCSS_CTXLD_SB_BASE_ADDR 0x18 > +#define DCSS_CTXLD_SB_COUNT 0x1C > +#define SB_HP_COUNT_POS 0 > +#define SB_HP_COUNT_MASK 0xffff > +#define SB_LP_COUNT_POS 16 > +#define SB_LP_COUNT_MASK 0xffff0000 > +#define DCSS_AHB_ERR_ADDR 0x20 > + > +#define CTXLD_IRQ_COMPLETION (DB_COMP | SB_HP_COMP | SB_LP_COMP) > +#define CTXLD_IRQ_ERROR (RD_ERR | DB_PEND_SB_REC | AHB_ERR) > + > +/* The following sizes are in context loader entries, 8 bytes each. */ > +#define CTXLD_DB_CTX_ENTRIES 1024 /* max 65536 */ > +#define CTXLD_SB_LP_CTX_ENTRIES 10240 /* max 65536 */ > +#define CTXLD_SB_HP_CTX_ENTRIES 20000 /* max 65536 */ > +#define CTXLD_SB_CTX_ENTRIES (CTXLD_SB_LP_CTX_ENTRIES + \ > + CTXLD_SB_HP_CTX_ENTRIES) > + > +/* Sizes, in entries, of the DB, SB_HP and SB_LP context regions. */ > +static u16 dcss_ctxld_ctx_size[3] = { > + CTXLD_DB_CTX_ENTRIES, > + CTXLD_SB_HP_CTX_ENTRIES, > + CTXLD_SB_LP_CTX_ENTRIES > +}; > + > +/* this represents an entry in the context loader map */ > +struct dcss_ctxld_item { > + u32 val; > + u32 ofs; > +}; > + > +#define CTX_ITEM_SIZE sizeof(struct dcss_ctxld_item) > + > +struct dcss_ctxld { > + struct device *dev; > + void __iomem *ctxld_reg; > + int irq; > + bool irq_en; > + > + struct dcss_ctxld_item *db[2]; > + struct dcss_ctxld_item *sb_hp[2]; > + struct dcss_ctxld_item *sb_lp[2]; > + > + dma_addr_t db_paddr[2]; > + dma_addr_t sb_paddr[2]; > + > + u16 ctx_size[2][3]; /* holds the sizes of DB, SB_HP and SB_LP ctx */ > + u8 current_ctx; > + > + bool in_use; > + bool armed; > + > + spinlock_t lock; /* protects concurent access to private data */ > +}; > + > +static irqreturn_t dcss_ctxld_irq_handler(int irq, void *data) > +{ > + struct dcss_ctxld *ctxld = data; > + struct dcss_dev *dcss = dcss_drv_dev_to_dcss(ctxld->dev); > + u32 irq_status; > + > + irq_status = dcss_readl(ctxld->ctxld_reg + DCSS_CTXLD_CONTROL_STATUS); > + > + if (irq_status & CTXLD_IRQ_COMPLETION && > + !(irq_status & CTXLD_ENABLE) && ctxld->in_use) { > + ctxld->in_use = false; > + > + if (dcss && dcss->disable_callback) > + dcss->disable_callback(dcss); > + } else if (irq_status & CTXLD_IRQ_ERROR) { > + /* > + * Except for throwing an error message and clearing the status > + * register, there's not much we can do here. 
> + */ > + dev_err(ctxld->dev, "ctxld: error encountered: %08x\n", > + irq_status); > + dev_err(ctxld->dev, "ctxld: db=%d, sb_hp=%d, sb_lp=%d\n", > + ctxld->ctx_size[ctxld->current_ctx ^ 1][CTX_DB], > + ctxld->ctx_size[ctxld->current_ctx ^ 1][CTX_SB_HP], > + ctxld->ctx_size[ctxld->current_ctx ^ 1][CTX_SB_LP]); > + } > + > + dcss_clr(irq_status & (CTXLD_IRQ_ERROR | CTXLD_IRQ_COMPLETION), > + ctxld->ctxld_reg + DCSS_CTXLD_CONTROL_STATUS); > + > + return IRQ_HANDLED; > +} > + > +static int dcss_ctxld_irq_config(struct dcss_ctxld *ctxld, > + struct platform_device *pdev) > +{ > + int ret; > + > + ctxld->irq = platform_get_irq_byname(pdev, "ctxld"); > + if (ctxld->irq < 0) > + return ctxld->irq; > + > + ret = request_irq(ctxld->irq, dcss_ctxld_irq_handler, > + 0, "dcss_ctxld", ctxld); > + if (ret) { > + dev_err(ctxld->dev, "ctxld: irq request failed.\n"); > + return ret; > + } > + > + ctxld->irq_en = true; > + > + return 0; > +} > + > +static void dcss_ctxld_hw_cfg(struct dcss_ctxld *ctxld) > +{ > + dcss_writel(RD_ERR_EN | SB_HP_COMP_EN | > + DB_PEND_SB_REC_EN | AHB_ERR_EN | RD_ERR | AHB_ERR, > + ctxld->ctxld_reg + DCSS_CTXLD_CONTROL_STATUS); > +} > + > +static void dcss_ctxld_free_ctx(struct dcss_ctxld *ctxld) > +{ > + struct dcss_ctxld_item *ctx; > + int i; > + > + for (i = 0; i < 2; i++) { > + if (ctxld->db[i]) { > + dma_free_coherent(ctxld->dev, > + CTXLD_DB_CTX_ENTRIES * sizeof(*ctx), > + ctxld->db[i], ctxld->db_paddr[i]); > + ctxld->db[i] = NULL; > + ctxld->db_paddr[i] = 0; > + } > + > + if (ctxld->sb_hp[i]) { > + dma_free_coherent(ctxld->dev, > + CTXLD_SB_CTX_ENTRIES * sizeof(*ctx), > + ctxld->sb_hp[i], ctxld->sb_paddr[i]); > + ctxld->sb_hp[i] = NULL; > + ctxld->sb_paddr[i] = 0; > + } > + } > +} > + > +static int dcss_ctxld_alloc_ctx(struct dcss_ctxld *ctxld) > +{ > + struct dcss_ctxld_item *ctx; > + int i; > + > + for (i = 0; i < 2; i++) { > + ctx = dma_alloc_coherent(ctxld->dev, > + CTXLD_DB_CTX_ENTRIES * sizeof(*ctx), > + &ctxld->db_paddr[i], GFP_KERNEL); > + if (!ctx) > + return -ENOMEM; > + > + ctxld->db[i] = ctx; > + > + ctx = dma_alloc_coherent(ctxld->dev, > + CTXLD_SB_CTX_ENTRIES * sizeof(*ctx), > + &ctxld->sb_paddr[i], GFP_KERNEL); > + if (!ctx) > + return -ENOMEM; > + > + ctxld->sb_hp[i] = ctx; > + ctxld->sb_lp[i] = ctx + CTXLD_SB_HP_CTX_ENTRIES; > + } > + > + return 0; > +} > + > +int dcss_ctxld_init(struct dcss_dev *dcss, unsigned long ctxld_base) > +{ > + struct dcss_ctxld *ctxld; > + int ret; > + > + ctxld = kzalloc(sizeof(*ctxld), GFP_KERNEL); > + if (!ctxld) > + return -ENOMEM; > + > + dcss->ctxld = ctxld; > + ctxld->dev = dcss->dev; > + > + spin_lock_init(&ctxld->lock); > + > + ret = dcss_ctxld_alloc_ctx(ctxld); > + if (ret) { > + dev_err(dcss->dev, "ctxld: cannot allocate context memory.\n"); > + goto err; > + } > + > + ctxld->ctxld_reg = ioremap(ctxld_base, SZ_4K); > + if (!ctxld->ctxld_reg) { > + dev_err(dcss->dev, "ctxld: unable to remap ctxld base\n"); > + ret = -ENOMEM; > + goto err; > + } > + > + ret = dcss_ctxld_irq_config(ctxld, to_platform_device(dcss->dev)); > + if (ret) > + goto err_irq; > + > + dcss_ctxld_hw_cfg(ctxld); > + > + return 0; > + > +err_irq: > + iounmap(ctxld->ctxld_reg); > + > +err: > + dcss_ctxld_free_ctx(ctxld); > + kfree(ctxld); > + > + return ret; > +} > + > +void dcss_ctxld_exit(struct dcss_ctxld *ctxld) > +{ > + free_irq(ctxld->irq, ctxld); > + > + if (ctxld->ctxld_reg) > + iounmap(ctxld->ctxld_reg); > + > + dcss_ctxld_free_ctx(ctxld); > + kfree(ctxld); > +} > + > +static int dcss_ctxld_enable_locked(struct dcss_ctxld *ctxld) > +{ > + int 
curr_ctx = ctxld->current_ctx; > + u32 db_base, sb_base, sb_count; > + u32 sb_hp_cnt, sb_lp_cnt, db_cnt; > + struct dcss_dev *dcss = dcss_drv_dev_to_dcss(ctxld->dev); > + > + if (!dcss) > + return 0; > + > + dcss_dpr_write_sysctrl(dcss->dpr); > + > + dcss_scaler_write_sclctrl(dcss->scaler); > + > + sb_hp_cnt = ctxld->ctx_size[curr_ctx][CTX_SB_HP]; > + sb_lp_cnt = ctxld->ctx_size[curr_ctx][CTX_SB_LP]; > + db_cnt = ctxld->ctx_size[curr_ctx][CTX_DB]; > + > + /* make sure SB_LP context area comes after SB_HP */ > + if (sb_lp_cnt && > + ctxld->sb_lp[curr_ctx] != ctxld->sb_hp[curr_ctx] + sb_hp_cnt) { > + struct dcss_ctxld_item *sb_lp_adjusted; > + > + sb_lp_adjusted = ctxld->sb_hp[curr_ctx] + sb_hp_cnt; > + > + memcpy(sb_lp_adjusted, ctxld->sb_lp[curr_ctx], > + sb_lp_cnt * CTX_ITEM_SIZE); > + } > + > + db_base = db_cnt ? ctxld->db_paddr[curr_ctx] : 0; > + > + dcss_writel(db_base, ctxld->ctxld_reg + DCSS_CTXLD_DB_BASE_ADDR); > + dcss_writel(db_cnt, ctxld->ctxld_reg + DCSS_CTXLD_DB_COUNT); > + > + if (sb_hp_cnt) > + sb_count = ((sb_hp_cnt << SB_HP_COUNT_POS) & SB_HP_COUNT_MASK) | > + ((sb_lp_cnt << SB_LP_COUNT_POS) & SB_LP_COUNT_MASK); > + else > + sb_count = (sb_lp_cnt << SB_HP_COUNT_POS) & SB_HP_COUNT_MASK; > + > + sb_base = sb_count ? ctxld->sb_paddr[curr_ctx] : 0; > + > + dcss_writel(sb_base, ctxld->ctxld_reg + DCSS_CTXLD_SB_BASE_ADDR); > + dcss_writel(sb_count, ctxld->ctxld_reg + DCSS_CTXLD_SB_COUNT); > + > + /* enable the context loader */ > + dcss_set(CTXLD_ENABLE, ctxld->ctxld_reg + DCSS_CTXLD_CONTROL_STATUS); > + > + ctxld->in_use = true; > + > + /* > + * Toggle the current context to the alternate one so that any updates > + * in the modules' settings take place there. > + */ > + ctxld->current_ctx ^= 1; > + > + ctxld->ctx_size[ctxld->current_ctx][CTX_DB] = 0; > + ctxld->ctx_size[ctxld->current_ctx][CTX_SB_HP] = 0; > + ctxld->ctx_size[ctxld->current_ctx][CTX_SB_LP] = 0; > + > + return 0; > +} > + > +int dcss_ctxld_enable(struct dcss_ctxld *ctxld) > +{ > + spin_lock_irq(&ctxld->lock); > + ctxld->armed = true; > + spin_unlock_irq(&ctxld->lock); > + > + return 0; > +} > + > +void dcss_ctxld_kick(struct dcss_ctxld *ctxld) > +{ > + unsigned long flags; > + > + spin_lock_irqsave(&ctxld->lock, flags); > + if (ctxld->armed && !ctxld->in_use) { > + ctxld->armed = false; > + dcss_ctxld_enable_locked(ctxld); > + } > + spin_unlock_irqrestore(&ctxld->lock, flags); > +} > + > +void dcss_ctxld_write_irqsafe(struct dcss_ctxld *ctxld, u32 ctx_id, u32 val, > + u32 reg_ofs) > +{ > + int curr_ctx = ctxld->current_ctx; > + struct dcss_ctxld_item *ctx[] = { > + [CTX_DB] = ctxld->db[curr_ctx], > + [CTX_SB_HP] = ctxld->sb_hp[curr_ctx], > + [CTX_SB_LP] = ctxld->sb_lp[curr_ctx] > + }; > + int item_idx = ctxld->ctx_size[curr_ctx][ctx_id]; > + > + if (item_idx + 1 > dcss_ctxld_ctx_size[ctx_id]) { > + WARN_ON(1); > + return; > + } > + > + ctx[ctx_id][item_idx].val = val; > + ctx[ctx_id][item_idx].ofs = reg_ofs; > + ctxld->ctx_size[curr_ctx][ctx_id] += 1; > +} > + > +void dcss_ctxld_write(struct dcss_ctxld *ctxld, u32 ctx_id, > + u32 val, u32 reg_ofs) > +{ > + spin_lock_irq(&ctxld->lock); > + dcss_ctxld_write_irqsafe(ctxld, ctx_id, val, reg_ofs); > + spin_unlock_irq(&ctxld->lock); > +} > + > +bool dcss_ctxld_is_flushed(struct dcss_ctxld *ctxld) > +{ > + return ctxld->ctx_size[ctxld->current_ctx][CTX_DB] == 0 && > + ctxld->ctx_size[ctxld->current_ctx][CTX_SB_HP] == 0 && > + ctxld->ctx_size[ctxld->current_ctx][CTX_SB_LP] == 0; > +} > + > +int dcss_ctxld_resume(struct dcss_ctxld *ctxld) > +{ > + 
dcss_ctxld_hw_cfg(ctxld); > + > + if (!ctxld->irq_en) { > + enable_irq(ctxld->irq); > + ctxld->irq_en = true; > + } > + > + return 0; > +} > + > +int dcss_ctxld_suspend(struct dcss_ctxld *ctxld) > +{ > + int ret = 0; > + unsigned long timeout = jiffies + msecs_to_jiffies(500); > + > + if (!dcss_ctxld_is_flushed(ctxld)) { > + dcss_ctxld_kick(ctxld); > + > + while (!time_after(jiffies, timeout) && ctxld->in_use) > + msleep(20); > + > + if (time_after(jiffies, timeout)) > + return -ETIMEDOUT; > + } > + > + spin_lock_irq(&ctxld->lock); > + > + if (ctxld->irq_en) { > + disable_irq_nosync(ctxld->irq); > + ctxld->irq_en = false; > + } > + > + /* reset context region and sizes */ > + ctxld->current_ctx = 0; > + ctxld->ctx_size[0][CTX_DB] = 0; > + ctxld->ctx_size[0][CTX_SB_HP] = 0; > + ctxld->ctx_size[0][CTX_SB_LP] = 0; > + > + spin_unlock_irq(&ctxld->lock); > + > + return ret; > +} > + > +void dcss_ctxld_assert_locked(struct dcss_ctxld *ctxld) > +{ > + lockdep_assert_held(&ctxld->lock); > +} > diff --git a/drivers/gpu/drm/imx/dcss/dcss-dev.c b/drivers/gpu/drm/imx/dcss/dcss-dev.c > new file mode 100644 > index 000000000000..83a4840435cf > --- /dev/null > +++ b/drivers/gpu/drm/imx/dcss/dcss-dev.c > @@ -0,0 +1,314 @@ > +// SPDX-License-Identifier: GPL-2.0 > +/* > + * Copyright 2019 NXP. > + */ > + > +#include > +#include > +#include > +#include > +#include > +#include > + > +#include "dcss-dev.h" > + > +static void dcss_clocks_enable(struct dcss_dev *dcss) > +{ > + clk_prepare_enable(dcss->axi_clk); > + clk_prepare_enable(dcss->apb_clk); > + clk_prepare_enable(dcss->rtrm_clk); > + clk_prepare_enable(dcss->dtrc_clk); > + clk_prepare_enable(dcss->pix_clk); > +} > + > +static void dcss_clocks_disable(struct dcss_dev *dcss) > +{ > + clk_disable_unprepare(dcss->pix_clk); > + clk_disable_unprepare(dcss->dtrc_clk); > + clk_disable_unprepare(dcss->rtrm_clk); > + clk_disable_unprepare(dcss->apb_clk); > + clk_disable_unprepare(dcss->axi_clk); > +} > + > +static void dcss_disable_dtg_and_ss_cb(void *data) > +{ > + struct dcss_dev *dcss = data; > + > + dcss->disable_callback = NULL; > + > + dcss_ss_shutoff(dcss->ss); > + dcss_dtg_shutoff(dcss->dtg); > + > + complete(&dcss->disable_completion); > +} > + > +void dcss_disable_dtg_and_ss(struct dcss_dev *dcss) > +{ > + dcss->disable_callback = dcss_disable_dtg_and_ss_cb; > +} > + > +void dcss_enable_dtg_and_ss(struct dcss_dev *dcss) > +{ > + if (dcss->disable_callback) > + dcss->disable_callback = NULL; > + > + dcss_dtg_enable(dcss->dtg); > + dcss_ss_enable(dcss->ss); > +} > + > +static int dcss_submodules_init(struct dcss_dev *dcss) > +{ > + int ret = 0; > + u32 base_addr = dcss->start_addr; > + const struct dcss_type_data *devtype = dcss->devtype; > + > + dcss_clocks_enable(dcss); > + > + ret = dcss_blkctl_init(dcss, base_addr + devtype->blkctl_ofs); > + if (ret) > + return ret; > + > + ret = dcss_ctxld_init(dcss, base_addr + devtype->ctxld_ofs); > + if (ret) > + goto ctxld_err; > + > + ret = dcss_dtg_init(dcss, base_addr + devtype->dtg_ofs); > + if (ret) > + goto dtg_err; > + > + ret = dcss_ss_init(dcss, base_addr + devtype->ss_ofs); > + if (ret) > + goto ss_err; > + > + ret = dcss_dpr_init(dcss, base_addr + devtype->dpr_ofs); > + if (ret) > + goto dpr_err; > + > + ret = dcss_scaler_init(dcss, base_addr + devtype->scaler_ofs); > + if (ret) > + goto scaler_err; > + > + dcss_clocks_disable(dcss); > + > + return 0; > + > +scaler_err: > + dcss_dpr_exit(dcss->dpr); > + > +dpr_err: > + dcss_ss_exit(dcss->ss); > + > +ss_err: > + dcss_dtg_exit(dcss->dtg); > + > +dtg_err: 
> + dcss_ctxld_exit(dcss->ctxld); > + > +ctxld_err: > + dcss_blkctl_exit(dcss->blkctl); > + > + dcss_clocks_disable(dcss); > + > + return ret; > +} > + > +static void dcss_submodules_stop(struct dcss_dev *dcss) > +{ > + dcss_clocks_enable(dcss); > + dcss_scaler_exit(dcss->scaler); > + dcss_dpr_exit(dcss->dpr); > + dcss_ss_exit(dcss->ss); > + dcss_dtg_exit(dcss->dtg); > + dcss_ctxld_exit(dcss->ctxld); > + dcss_blkctl_exit(dcss->blkctl); > + dcss_clocks_disable(dcss); > +} > + > +static int dcss_clks_init(struct dcss_dev *dcss) > +{ > + int i; > + struct { > + const char *id; > + struct clk **clk; > + } clks[] = { > + {"apb", &dcss->apb_clk}, > + {"axi", &dcss->axi_clk}, > + {"pix", &dcss->pix_clk}, > + {"rtrm", &dcss->rtrm_clk}, > + {"dtrc", &dcss->dtrc_clk}, > + }; > + > + for (i = 0; i < ARRAY_SIZE(clks); i++) { > + *clks[i].clk = devm_clk_get(dcss->dev, clks[i].id); > + if (IS_ERR(*clks[i].clk)) { > + dev_err(dcss->dev, "failed to get %s clock\n", > + clks[i].id); > + return PTR_ERR(*clks[i].clk); > + } > + } > + > + return 0; > +} > + > +static void dcss_clks_release(struct dcss_dev *dcss) > +{ > + devm_clk_put(dcss->dev, dcss->dtrc_clk); > + devm_clk_put(dcss->dev, dcss->rtrm_clk); > + devm_clk_put(dcss->dev, dcss->pix_clk); > + devm_clk_put(dcss->dev, dcss->axi_clk); > + devm_clk_put(dcss->dev, dcss->apb_clk); > +} > + > +struct dcss_dev *dcss_dev_create(struct device *dev, bool hdmi_output) > +{ > + struct platform_device *pdev = to_platform_device(dev); > + int ret; > + struct resource *res; > + struct dcss_dev *dcss; > + const struct dcss_type_data *devtype; > + > + devtype = of_device_get_match_data(dev); > + if (!devtype) { > + dev_err(dev, "no device match found\n"); > + return ERR_PTR(-ENODEV); > + } > + > + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); > + if (!res) { > + dev_err(dev, "cannot get memory resource\n"); > + return ERR_PTR(-EINVAL); > + } > + > + dcss = kzalloc(sizeof(*dcss), GFP_KERNEL); > + if (!dcss) > + return ERR_PTR(-ENOMEM); > + > + dcss->dev = dev; > + dcss->devtype = devtype; > + dcss->hdmi_output = hdmi_output; > + > + ret = dcss_clks_init(dcss); > + if (ret) { > + dev_err(dev, "clocks initialization failed\n"); > + goto err; > + } > + > + dcss->of_port = of_graph_get_port_by_id(dev->of_node, 0); > + if (!dcss->of_port) { > + dev_err(dev, "no port@0 node in %s\n", dev->of_node->full_name); > + ret = -ENODEV; > + goto clks_err; > + } > + > + dcss->start_addr = res->start; > + > + ret = dcss_submodules_init(dcss); > + if (ret) { > + dev_err(dev, "submodules initialization failed\n"); > + goto clks_err; > + } > + > + init_completion(&dcss->disable_completion); > + > + pm_runtime_set_autosuspend_delay(dev, 100); > + pm_runtime_use_autosuspend(dev); > + pm_runtime_set_suspended(dev); > + pm_runtime_allow(dev); > + pm_runtime_enable(dev); > + > + return dcss; > + > +clks_err: > + dcss_clks_release(dcss); > + > +err: > + kfree(dcss); > + > + return ERR_PTR(ret); > +} > + > +void dcss_dev_destroy(struct dcss_dev *dcss) > +{ > + if (!pm_runtime_suspended(dcss->dev)) { > + dcss_ctxld_suspend(dcss->ctxld); > + dcss_clocks_disable(dcss); > + } > + > + pm_runtime_disable(dcss->dev); > + > + dcss_submodules_stop(dcss); > + > + dcss_clks_release(dcss); > + > + kfree(dcss); > +} > + > +#ifdef CONFIG_PM_SLEEP > +int dcss_dev_suspend(struct device *dev) > +{ > + struct dcss_dev *dcss = dcss_drv_dev_to_dcss(dev); > + int ret; > + > + drm_mode_config_helper_suspend(dcss_drv_dev_to_drm(dev)); > + > + if (pm_runtime_suspended(dev)) > + return 0; > + > + ret = 
dcss_ctxld_suspend(dcss->ctxld); > + if (ret) > + return ret; > + > + dcss_clocks_disable(dcss); > + > + return 0; > +} > + > +int dcss_dev_resume(struct device *dev) > +{ > + struct dcss_dev *dcss = dcss_drv_dev_to_dcss(dev); > + > + if (pm_runtime_suspended(dev)) { > + drm_mode_config_helper_resume(dcss_drv_dev_to_drm(dev)); > + return 0; > + } > + > + dcss_clocks_enable(dcss); > + > + dcss_blkctl_cfg(dcss->blkctl); > + > + dcss_ctxld_resume(dcss->ctxld); > + > + drm_mode_config_helper_resume(dcss_drv_dev_to_drm(dev)); > + > + return 0; > +} > +#endif /* CONFIG_PM_SLEEP */ > + > +#ifdef CONFIG_PM > +int dcss_dev_runtime_suspend(struct device *dev) > +{ > + struct dcss_dev *dcss = dcss_drv_dev_to_dcss(dev); > + int ret; > + > + ret = dcss_ctxld_suspend(dcss->ctxld); > + if (ret) > + return ret; > + > + dcss_clocks_disable(dcss); > + > + return 0; > +} > + > +int dcss_dev_runtime_resume(struct device *dev) > +{ > + struct dcss_dev *dcss = dcss_drv_dev_to_dcss(dev); > + > + dcss_clocks_enable(dcss); > + > + dcss_blkctl_cfg(dcss->blkctl); > + > + dcss_ctxld_resume(dcss->ctxld); > + > + return 0; > +} > +#endif /* CONFIG_PM */ > diff --git a/drivers/gpu/drm/imx/dcss/dcss-dev.h b/drivers/gpu/drm/imx/dcss/dcss-dev.h > new file mode 100644 > index 000000000000..c642ae17837f > --- /dev/null > +++ b/drivers/gpu/drm/imx/dcss/dcss-dev.h > @@ -0,0 +1,177 @@ > +/* SPDX-License-Identifier: GPL-2.0 */ > +/* > + * Copyright 2019 NXP. > + */ > + > +#ifndef __DCSS_PRV_H__ > +#define __DCSS_PRV_H__ > + > +#include > +#include > +#include