From: Joonas Lahtinen
To: "Huang, Sean Z", Intel-gfx@lists.freedesktop.org
Date: Thu, 10 Dec 2020 12:47:10 +0200
Message-ID: <160759722951.5062.15504912937063428732@jlahtine-mobl.ger.corp.intel.com>
In-Reply-To: <20201209070307.2304-6-sean.z.huang@intel.com>
References: <20201209070307.2304-1-sean.z.huang@intel.com> <20201209070307.2304-6-sean.z.huang@intel.com>
Subject: Re: [Intel-gfx] [RFC-v3 05/13] drm/i915/pxp: Func to send hardware session termination

Quoting Huang, Sean Z (2020-12-09 09:02:59)
> Implement the functions to allow PXP to send a GPU command in order
> to terminate the hardware session, so that the hardware can recycle
> the session slot for the next use.

Just as the ARB session creation must happen during i915 initialization
to guarantee it is available for userspace, i915 must detect the
corruption and must be able to recreate the session.

We already initiate the TEE commands to create the session, so I assume
we have to repeat those to recreate it? Shouldn't we invalidate the
previous session as part of that flow?

Any command sent to a command streamer that i915 also submits to should
definitely happen under strict i915 control. So we should include the
session termination command here, inside the i915 source.
Regards, Joonas

> Signed-off-by: Huang, Sean Z
> ---
>  drivers/gpu/drm/i915/Makefile            |   1 +
>  drivers/gpu/drm/i915/pxp/intel_pxp_cmd.c | 156 +++++++++++++++++++++++
>  drivers/gpu/drm/i915/pxp/intel_pxp_cmd.h |  18 +++
>  3 files changed, 175 insertions(+)
>  create mode 100644 drivers/gpu/drm/i915/pxp/intel_pxp_cmd.c
>  create mode 100644 drivers/gpu/drm/i915/pxp/intel_pxp_cmd.h
> 
> diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
> index 0710cc522f38..2da904cda49f 100644
> --- a/drivers/gpu/drm/i915/Makefile
> +++ b/drivers/gpu/drm/i915/Makefile
> @@ -258,6 +258,7 @@ i915-y += i915_perf.o
>  i915-$(CONFIG_DRM_I915_PXP) += \
>  	pxp/intel_pxp.o \
>  	pxp/intel_pxp_arb.o \
> +	pxp/intel_pxp_cmd.o \
>  	pxp/intel_pxp_context.o \
>  	pxp/intel_pxp_tee.o
> 
> diff --git a/drivers/gpu/drm/i915/pxp/intel_pxp_cmd.c b/drivers/gpu/drm/i915/pxp/intel_pxp_cmd.c
> new file mode 100644
> index 000000000000..e531ea9f3cdc
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/pxp/intel_pxp_cmd.c
> @@ -0,0 +1,156 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright(c) 2020, Intel Corporation. All rights reserved.
> + */
> +
> +#include "intel_pxp_cmd.h"
> +#include "i915_drv.h"
> +#include "gt/intel_context.h"
> +#include "gt/intel_engine_pm.h"
> +
> +struct i915_vma *intel_pxp_cmd_get_batch(struct intel_pxp *pxp,
> +					 struct intel_context *ce,
> +					 struct intel_gt_buffer_pool_node *pool,
> +					 u32 *cmd_buf, int cmd_size_in_dw)
> +{
> +	struct i915_vma *batch = ERR_PTR(-EINVAL);
> +	struct intel_gt *gt = container_of(pxp, struct intel_gt, pxp);
> +	u32 *cmd;
> +
> +	if (!ce || !ce->engine || !cmd_buf)
> +		return ERR_PTR(-EINVAL);
> +
> +	if (cmd_size_in_dw * 4 > PAGE_SIZE) {
> +		drm_err(&gt->i915->drm, "Failed to %s, invalid cmd_size_in_dw=[%d]\n",
> +			__func__, cmd_size_in_dw);
> +		return ERR_PTR(-EINVAL);
> +	}
> +
> +	cmd = i915_gem_object_pin_map(pool->obj, I915_MAP_FORCE_WC);
> +	if (IS_ERR(cmd)) {
> +		drm_err(&gt->i915->drm, "Failed to i915_gem_object_pin_map()\n");
> +		return ERR_PTR(-EINVAL);
> +	}
> +
> +	memcpy(cmd, cmd_buf, cmd_size_in_dw * 4);
> +
> +	if (drm_debug_enabled(DRM_UT_DRIVER)) {
> +		print_hex_dump(KERN_DEBUG, "cmd binaries:",
> +			       DUMP_PREFIX_OFFSET, 4, 4, cmd,
> +			       cmd_size_in_dw * 4, true);
> +	}
> +
> +	i915_gem_object_unpin_map(pool->obj);
> +
> +	batch = i915_vma_instance(pool->obj, ce->vm, NULL);
> +	if (IS_ERR(batch)) {
> +		drm_err(&gt->i915->drm, "Failed to i915_vma_instance()\n");
> +		return batch;
> +	}
> +
> +	return batch;
> +}
> +
> +int intel_pxp_cmd_submit(struct intel_pxp *pxp, u32 *cmd, int cmd_size_in_dw)
> +{
> +	int err = -EINVAL;
> +	struct i915_vma *batch;
> +	struct i915_request *rq;
> +	struct intel_context *ce = NULL;
> +	bool is_engine_pm_get = false;
> +	bool is_batch_vma_pin = false;
> +	bool is_skip_req_on_err = false;
> +	bool is_engine_get_pool = false;
> +	struct intel_gt_buffer_pool_node *pool = NULL;
> +	struct intel_gt *gt = container_of(pxp, struct intel_gt, pxp);
> +
> +	if (!HAS_ENGINE(gt, VCS0) ||
> +	    !gt->engine[VCS0]->kernel_context) {
> +		err = -EINVAL;
> +		goto end;
> +	}
> +
> +	if (!cmd || (cmd_size_in_dw * 4) >
> +	    PAGE_SIZE) {
> +		drm_err(&gt->i915->drm, "Failed to %s, bad params\n", __func__);
> +		return -EINVAL;
> +	}
> +
> +	ce = gt->engine[VCS0]->kernel_context;
> +
> +	intel_engine_pm_get(ce->engine);
> +	is_engine_pm_get = true;
> +
> +	pool = intel_gt_get_buffer_pool(gt, PAGE_SIZE);
> +	if (IS_ERR(pool)) {
> +		drm_err(&gt->i915->drm, "Failed to intel_gt_get_buffer_pool()\n");
> +		goto end;
> +	}
> +	is_engine_get_pool = true;
> +
> +	batch = intel_pxp_cmd_get_batch(pxp, ce, pool, cmd, cmd_size_in_dw);
> +	if (IS_ERR(batch)) {
> +		drm_err(&gt->i915->drm, "Failed to intel_pxp_cmd_get_batch()\n");
> +		goto end;
> +	}
> +
> +	err = i915_vma_pin(batch, 0, 0, PIN_USER);
> +	if (err) {
> +		drm_err(&gt->i915->drm, "Failed to i915_vma_pin()\n");
> +		goto end;
> +	}
> +	is_batch_vma_pin = true;
> +
> +	rq = intel_context_create_request(ce);
> +	if (IS_ERR(rq)) {
> +		drm_err(&gt->i915->drm, "Failed to intel_context_create_request()\n");
> +		goto end;
> +	}
> +	is_skip_req_on_err = true;
> +
> +	err = intel_gt_buffer_pool_mark_active(pool, rq);
> +	if (err) {
> +		drm_err(&gt->i915->drm, "Failed to intel_gt_buffer_pool_mark_active()\n");
> +		goto end;
> +	}
> +
> +	i915_vma_lock(batch);
> +	err = i915_request_await_object(rq, batch->obj, false);
> +	if (!err)
> +		err = i915_vma_move_to_active(batch, rq, 0);
> +	i915_vma_unlock(batch);
> +	if (err) {
> +		drm_err(&gt->i915->drm, "Failed to i915_request_await_object()\n");
> +		goto end;
> +	}
> +
> +	if (ce->engine->emit_init_breadcrumb) {
> +		err = ce->engine->emit_init_breadcrumb(rq);
> +		if (err) {
> +			drm_err(&gt->i915->drm, "Failed to emit_init_breadcrumb()\n");
> +			goto end;
> +		}
> +	}
> +
> +	err = ce->engine->emit_bb_start(rq, batch->node.start,
> +					batch->node.size, 0);
> +	if (err) {
> +		drm_err(&gt->i915->drm, "Failed to emit_bb_start()\n");
> +		goto end;
> +	}
> +
> +	i915_request_add(rq);
> +
> +end:
> +	if (unlikely(err) && is_skip_req_on_err)
> +		i915_request_set_error_once(rq, err);
> +
> +	if (is_batch_vma_pin)
> +		i915_vma_unpin(batch);
+ > + if (is_engine_get_pool) > + intel_gt_buffer_pool_put(pool); > + > + if (is_engine_pm_get) > + intel_engine_pm_put(ce->engine); > + > + return err; > +} > diff --git a/drivers/gpu/drm/i915/pxp/intel_pxp_cmd.h b/drivers/gpu/drm/i915/pxp/intel_pxp_cmd.h > new file mode 100644 > index 000000000000..d04463962421 > --- /dev/null > +++ b/drivers/gpu/drm/i915/pxp/intel_pxp_cmd.h > @@ -0,0 +1,18 @@ > +/* SPDX-License-Identifier: MIT */ > +/* > + * Copyright(c) 2020, Intel Corporation. All rights reserved. > + */ > + > +#ifndef __INTEL_PXP_CMD_H__ > +#define __INTEL_PXP_CMD_H__ > + > +#include "gt/intel_gt_buffer_pool.h" > +#include "intel_pxp.h" > + > +struct i915_vma *intel_pxp_cmd_get_batch(struct intel_pxp *pxp, > + struct intel_context *ce, > + struct intel_gt_buffer_pool_node *pool, > + u32 *cmd_buf, int cmd_size_in_dw); > + > +int intel_pxp_cmd_submit(struct intel_pxp *pxp, u32 *cmd, int cmd_size_in_dw); > +#endif /* __INTEL_PXP_SM_H__ */ > -- > 2.17.1 > > _______________________________________________ > Intel-gfx mailing list > Intel-gfx@lists.freedesktop.org > https://lists.freedesktop.org/mailman/listinfo/intel-gfx _______________________________________________ Intel-gfx mailing list Intel-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/intel-gfx