From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org,
	linux-mm@kvack.org
Cc: joro@8bytes.org, robh+dt@kernel.org, mark.rutland@arm.com,
	catalin.marinas@arm.com, will@kernel.org, robin.murphy@arm.com,
	kevin.tian@intel.com, baolu.lu@linux.intel.com,
	Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com,
	christian.koenig@amd.com, yi.l.liu@intel.com, zhangfei.gao@linaro.org,
	Jean-Philippe Brucker <jean-philippe@linaro.org>
Subject: [PATCH v4 26/26] iommu/arm-smmu-v3: Add support for PRI
Date: Mon, 24 Feb 2020 19:24:01 +0100
Message-Id: <20200224182401.353359-27-jean-philippe@linaro.org>
X-Mailer: git-send-email 2.25.0
In-Reply-To: <20200224182401.353359-1-jean-philippe@linaro.org>
References: <20200224182401.353359-1-jean-philippe@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Jean-Philippe Brucker <jean-philippe@linaro.org>

For PCI devices that support it, enable the PRI capability and handle
PRI Page Requests with the generic fault handler. It is enabled on
demand by iommu_sva_device_init().
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
 drivers/iommu/arm-smmu-v3.c | 278 +++++++++++++++++++++++++++++-------
 1 file changed, 228 insertions(+), 50 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index da5dda5ba26a..f9732e397b2d 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -248,6 +248,7 @@
 #define STRTAB_STE_1_S1COR		GENMASK_ULL(5, 4)
 #define STRTAB_STE_1_S1CSH		GENMASK_ULL(7, 6)
 
+#define STRTAB_STE_1_PPAR		(1UL << 18)
 #define STRTAB_STE_1_S1STALLD		(1UL << 27)
 
 #define STRTAB_STE_1_EATS		GENMASK_ULL(29, 28)
@@ -373,6 +374,9 @@
 #define CMDQ_PRI_0_SID			GENMASK_ULL(63, 32)
 #define CMDQ_PRI_1_GRPID		GENMASK_ULL(8, 0)
 #define CMDQ_PRI_1_RESP			GENMASK_ULL(13, 12)
+#define CMDQ_PRI_1_RESP_FAILURE		0UL
+#define CMDQ_PRI_1_RESP_INVALID		1UL
+#define CMDQ_PRI_1_RESP_SUCCESS		2UL
 
 #define CMDQ_RESUME_0_SID		GENMASK_ULL(63, 32)
 #define CMDQ_RESUME_0_RESP_TERM		0UL
@@ -445,12 +449,6 @@ module_param_named(disable_bypass, disable_bypass, bool, S_IRUGO);
 MODULE_PARM_DESC(disable_bypass,
 	"Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU.");
 
-enum pri_resp {
-	PRI_RESP_DENY = 0,
-	PRI_RESP_FAIL = 1,
-	PRI_RESP_SUCC = 2,
-};
-
 enum arm_smmu_msi_index {
 	EVTQ_MSI_INDEX,
 	GERROR_MSI_INDEX,
@@ -533,7 +531,7 @@ struct arm_smmu_cmdq_ent {
 			u32			sid;
 			u32			ssid;
 			u16			grpid;
-			enum pri_resp		resp;
+			u8			resp;
 		} pri;
 
 		#define CMDQ_OP_RESUME		0x44
@@ -611,6 +609,7 @@ struct arm_smmu_evtq {
 
 struct arm_smmu_priq {
 	struct arm_smmu_queue		q;
+	struct iopf_queue		*iopf;
 };
 
 /* High-level stream table and context descriptor structures */
@@ -743,6 +742,8 @@ struct arm_smmu_master {
 	unsigned int			num_streams;
 	bool				ats_enabled;
 	bool				stall_enabled;
+	bool				pri_supported;
+	bool				prg_resp_needs_ssid;
 	unsigned int			ssid_bits;
 };
 
@@ -1015,14 +1016,6 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
 		cmd[0] |= FIELD_PREP(CMDQ_PRI_0_SSID, ent->pri.ssid);
 		cmd[0] |= FIELD_PREP(CMDQ_PRI_0_SID, ent->pri.sid);
 		cmd[1] |= FIELD_PREP(CMDQ_PRI_1_GRPID, ent->pri.grpid);
-		switch (ent->pri.resp) {
-		case PRI_RESP_DENY:
-		case PRI_RESP_FAIL:
-		case PRI_RESP_SUCC:
-			break;
-		default:
-			return -EINVAL;
-		}
 		cmd[1] |= FIELD_PREP(CMDQ_PRI_1_RESP, ent->pri.resp);
 		break;
 	case CMDQ_OP_RESUME:
@@ -1602,6 +1595,7 @@ static int arm_smmu_page_response(struct device *dev,
 {
 	struct arm_smmu_cmdq_ent cmd = {0};
 	struct arm_smmu_master *master = dev_iommu_fwspec_get(dev)->iommu_priv;
+	bool pasid_valid = resp->flags & IOMMU_PAGE_RESP_PASID_VALID;
 	int sid = master->streams[0].id;
 
 	if (master->stall_enabled) {
@@ -1619,8 +1613,27 @@ static int arm_smmu_page_response(struct device *dev,
 		default:
 			return -EINVAL;
 		}
+	} else if (master->pri_supported) {
+		cmd.opcode		= CMDQ_OP_PRI_RESP;
+		cmd.substream_valid	= pasid_valid &&
+					  master->prg_resp_needs_ssid;
+		cmd.pri.sid		= sid;
+		cmd.pri.ssid		= resp->pasid;
+		cmd.pri.grpid		= resp->grpid;
+		switch (resp->code) {
+		case IOMMU_PAGE_RESP_FAILURE:
+			cmd.pri.resp = CMDQ_PRI_1_RESP_FAILURE;
+			break;
+		case IOMMU_PAGE_RESP_INVALID:
+			cmd.pri.resp = CMDQ_PRI_1_RESP_INVALID;
+			break;
+		case IOMMU_PAGE_RESP_SUCCESS:
+			cmd.pri.resp = CMDQ_PRI_1_RESP_SUCCESS;
+			break;
+		default:
+			return -EINVAL;
+		}
 	} else {
-		/* TODO: insert PRI response here */
 		return -ENODEV;
 	}
 
@@ -2215,6 +2228,9 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
 			 FIELD_PREP(STRTAB_STE_1_S1CSH, ARM_SMMU_SH_ISH) |
 			 FIELD_PREP(STRTAB_STE_1_STRW, strw));
 
+		if (master->prg_resp_needs_ssid)
+			dst[1] |= STRTAB_STE_1_PPAR;
+
 		if (smmu->features & ARM_SMMU_FEAT_STALLS &&
 		    !master->stall_enabled)
 			dst[1] |= cpu_to_le64(STRTAB_STE_1_S1STALLD);
@@ -2460,61 +2476,110 @@ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
 
 static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
 {
-	u32 sid, ssid;
-	u16 grpid;
-	bool ssv, last;
-
-	sid = FIELD_GET(PRIQ_0_SID, evt[0]);
-	ssv = FIELD_GET(PRIQ_0_SSID_V, evt[0]);
-	ssid = ssv ? FIELD_GET(PRIQ_0_SSID, evt[0]) : 0;
-	last = FIELD_GET(PRIQ_0_PRG_LAST, evt[0]);
-	grpid = FIELD_GET(PRIQ_1_PRG_IDX, evt[1]);
-
-	dev_info(smmu->dev, "unexpected PRI request received:\n");
-	dev_info(smmu->dev,
-		 "\tsid 0x%08x.0x%05x: [%u%s] %sprivileged %s%s%s access at iova 0x%016llx\n",
-		 sid, ssid, grpid, last ? "L" : "",
-		 evt[0] & PRIQ_0_PERM_PRIV ? "" : "un",
-		 evt[0] & PRIQ_0_PERM_READ ? "R" : "",
-		 evt[0] & PRIQ_0_PERM_WRITE ? "W" : "",
-		 evt[0] & PRIQ_0_PERM_EXEC ? "X" : "",
-		 evt[1] & PRIQ_1_ADDR_MASK);
-
-	if (last) {
-		struct arm_smmu_cmdq_ent cmd = {
-			.opcode			= CMDQ_OP_PRI_RESP,
-			.substream_valid	= ssv,
-			.pri			= {
-				.sid	= sid,
-				.ssid	= ssid,
-				.grpid	= grpid,
-				.resp	= PRI_RESP_DENY,
-			},
+	u32 sid = FIELD_PREP(PRIQ_0_SID, evt[0]);
+
+	bool pasid_valid, last;
+	struct arm_smmu_master *master;
+	struct iommu_fault_event fault_evt = {
+		.fault.type = IOMMU_FAULT_PAGE_REQ,
+		.fault.prm = {
+			.pasid = FIELD_GET(PRIQ_0_SSID, evt[0]),
+			.grpid = FIELD_GET(PRIQ_1_PRG_IDX, evt[1]),
+			.addr = evt[1] & PRIQ_1_ADDR_MASK,
+		},
+	};
+	struct iommu_fault_page_request *pr = &fault_evt.fault.prm;
+
+	pasid_valid = evt[0] & PRIQ_0_SSID_V;
+	last = evt[0] & PRIQ_0_PRG_LAST;
+
+	/* Discard Stop PASID marker, it isn't used */
+	if (!(evt[0] & (PRIQ_0_PERM_READ | PRIQ_0_PERM_WRITE)) && last)
+		return;
+
+	if (last)
+		pr->flags |= IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE;
+	if (pasid_valid)
+		pr->flags |= IOMMU_FAULT_PAGE_REQUEST_PASID_VALID;
+	if (evt[0] & PRIQ_0_PERM_READ)
+		pr->perm |= IOMMU_FAULT_PERM_READ;
+	if (evt[0] & PRIQ_0_PERM_WRITE)
+		pr->perm |= IOMMU_FAULT_PERM_WRITE;
+	if (evt[0] & PRIQ_0_PERM_EXEC)
+		pr->perm |= IOMMU_FAULT_PERM_EXEC;
+	if (evt[0] & PRIQ_0_PERM_PRIV)
+		pr->perm |= IOMMU_FAULT_PERM_PRIV;
+
+	master = arm_smmu_find_master(smmu, sid);
+	if (WARN_ON(!master))
+		return;
+
+	if (iommu_report_device_fault(master->dev, &fault_evt)) {
+		/*
+		 * No handler registered, so subsequent faults won't produce
+		 * better results. Try to disable PRI.
+		 */
+		struct iommu_page_response resp = {
+			.flags = pasid_valid ?
+				 IOMMU_PAGE_RESP_PASID_VALID : 0,
+			.pasid = pr->pasid,
+			.grpid = pr->grpid,
+			.code = IOMMU_PAGE_RESP_FAILURE,
 		};
 
-		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+		dev_warn(master->dev,
+			 "PPR 0x%x:0x%llx 0x%x: nobody cared, disabling PRI\n",
+			 pasid_valid ? pr->pasid : 0, pr->addr, pr->perm);
+		if (last)
+			arm_smmu_page_response(master->dev, NULL, &resp);
 	}
 }
 
 static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
 {
+	int num_handled = 0;
+	bool overflow = false;
 	struct arm_smmu_device *smmu = dev;
 	struct arm_smmu_queue *q = &smmu->priq.q;
 	struct arm_smmu_ll_queue *llq = &q->llq;
+	size_t queue_size = 1 << llq->max_n_shift;
 	u64 evt[PRIQ_ENT_DWORDS];
 
+	spin_lock(&q->wq.lock);
 	do {
-		while (!queue_remove_raw(q, evt))
+		while (!queue_remove_raw(q, evt)) {
+			spin_unlock(&q->wq.lock);
 			arm_smmu_handle_ppr(smmu, evt);
+			spin_lock(&q->wq.lock);
+			if (++num_handled == queue_size) {
+				q->batch++;
+				wake_up_all_locked(&q->wq);
+				num_handled = 0;
+			}
+		}
 
-		if (queue_sync_prod_in(q) == -EOVERFLOW)
+		if (queue_sync_prod_in(q) == -EOVERFLOW) {
 			dev_err(smmu->dev, "PRIQ overflow detected -- requests lost\n");
+			overflow = true;
+		}
 	} while (!queue_empty(llq));
 
 	/* Sync our overflow flag, as we believe we're up to speed */
 	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
		      Q_IDX(llq, llq->cons);
 	queue_sync_cons_out(q);
+
+	wake_up_all_locked(&q->wq);
+	spin_unlock(&q->wq.lock);
+
+	/*
+	 * On overflow, the SMMU might have discarded the last PPR in a group.
+	 * There is no way to know more about it, so we have to discard all
+	 * partial faults already queued.
+	 */
+	if (overflow)
+		iopf_queue_discard_partial(smmu->priq.iopf);
+
 	return IRQ_HANDLED;
 }
 
@@ -2545,6 +2610,30 @@ static int arm_smmu_flush_evtq(void *cookie, struct device *dev, int pasid)
 	return ret;
 }
 
+static int arm_smmu_flush_priq(void *cookie, struct device *dev, int pasid)
+{
+	int ret;
+	u64 batch;
+	bool overflow = false;
+	struct arm_smmu_device *smmu = cookie;
+	struct arm_smmu_queue *q = &smmu->priq.q;
+
+	spin_lock(&q->wq.lock);
+	if (queue_sync_prod_in(q) == -EOVERFLOW) {
+		dev_err(smmu->dev, "priq overflow detected -- requests lost\n");
+		overflow = true;
+	}
+
+	batch = q->batch;
+	ret = wait_event_interruptible_locked(q->wq, queue_empty(&q->llq) ||
+					      q->batch >= batch + 2);
+	spin_unlock(&q->wq.lock);
+
+	if (overflow)
+		iopf_queue_discard_partial(smmu->priq.iopf);
+	return ret;
+}
+
 static int arm_smmu_device_disable(struct arm_smmu_device *smmu);
 
 static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
@@ -3208,6 +3297,75 @@ static void arm_smmu_disable_pasid(struct arm_smmu_master *master)
 	pci_disable_pasid(pdev);
 }
 
+static int arm_smmu_init_pri(struct arm_smmu_master *master)
+{
+	int pos;
+	struct pci_dev *pdev;
+
+	if (!dev_is_pci(master->dev))
+		return -EINVAL;
+
+	if (!(master->smmu->features & ARM_SMMU_FEAT_PRI))
+		return 0;
+
+	pdev = to_pci_dev(master->dev);
+	pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI);
+	if (!pos)
+		return 0;
+
+	/* If the device supports PASID and PRI, set STE.PPAR */
+	if (master->ssid_bits)
+		master->prg_resp_needs_ssid = pci_prg_resp_pasid_required(pdev);
+
+	master->pri_supported = true;
+	return 0;
+}
+
+static int arm_smmu_enable_pri(struct arm_smmu_master *master)
+{
+	int ret;
+	struct pci_dev *pdev;
+	/*
+	 * TODO: find a good inflight PPR number. We should divide the PRI queue
+	 * by the number of PRI-capable devices, but it's impossible to know
+	 * about future (probed late or hotplugged) devices. So we're at risk of
+	 * dropping PPRs (and leaking pending requests in the FQ).
+	 */
+	size_t max_inflight_pprs = 16;
+
+	if (!master->pri_supported || !master->ats_enabled)
+		return -ENOSYS;
+
+	pdev = to_pci_dev(master->dev);
+
+	ret = pci_reset_pri(pdev);
+	if (ret)
+		return ret;
+
+	ret = pci_enable_pri(pdev, max_inflight_pprs);
+	if (ret) {
+		dev_err(master->dev, "cannot enable PRI: %d\n", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void arm_smmu_disable_pri(struct arm_smmu_master *master)
+{
+	struct pci_dev *pdev;
+
+	if (!dev_is_pci(master->dev))
+		return;
+
+	pdev = to_pci_dev(master->dev);
+
+	if (!pdev->pri_enabled)
+		return;
+
+	pci_disable_pri(pdev);
+}
+
 static void arm_smmu_detach_dev(struct arm_smmu_master *master)
 {
 	unsigned long flags;
@@ -3603,6 +3761,8 @@ static int arm_smmu_add_device(struct device *dev)
 	    smmu->features & ARM_SMMU_FEAT_STALL_FORCE)
 		master->stall_enabled = true;
 
+	arm_smmu_init_pri(master);
+
 	ret = iommu_device_link(&smmu->iommu, dev);
 	if (ret)
 		goto err_disable_pasid;
@@ -3639,6 +3799,7 @@ static void arm_smmu_remove_device(struct device *dev)
 	master = fwspec->iommu_priv;
 	smmu = master->smmu;
 	iopf_queue_remove_device(smmu->evtq.iopf, dev);
+	iopf_queue_remove_device(smmu->priq.iopf, dev);
 	iommu_sva_disable(dev);
 	arm_smmu_detach_dev(master);
 	iommu_group_remove_device(dev);
@@ -3762,7 +3923,7 @@ static void arm_smmu_get_resv_regions(struct device *dev,
 
 static bool arm_smmu_iopf_supported(struct arm_smmu_master *master)
 {
-	return master->stall_enabled;
+	return master->stall_enabled || master->pri_supported;
 }
 
 static bool arm_smmu_dev_has_feature(struct device *dev,
@@ -3820,6 +3981,15 @@ static int arm_smmu_dev_enable_sva(struct device *dev)
 		ret = iopf_queue_add_device(master->smmu->evtq.iopf, dev);
 		if (ret)
 			goto err_disable_sva;
+	} else if (master->pri_supported) {
+		ret = iopf_queue_add_device(master->smmu->priq.iopf, dev);
+		if (ret)
+			goto err_disable_sva;
+
+		if (arm_smmu_enable_pri(master)) {
+			iopf_queue_remove_device(master->smmu->priq.iopf, dev);
+			goto err_disable_sva;
+		}
 	}
 	return 0;
 
@@ -3855,7 +4025,9 @@ static int arm_smmu_dev_disable_feature(struct device *dev,
 
 	switch (feat) {
 	case IOMMU_DEV_FEAT_SVA:
+		arm_smmu_disable_pri(master);
 		iopf_queue_remove_device(master->smmu->evtq.iopf, dev);
+		iopf_queue_remove_device(master->smmu->priq.iopf, dev);
 		return iommu_sva_disable(dev);
 	default:
 		return -EINVAL;
 	}
@@ -3999,6 +4171,11 @@ static int arm_smmu_init_queues(struct arm_smmu_device *smmu)
 	if (!(smmu->features & ARM_SMMU_FEAT_PRI))
 		return 0;
 
+	smmu->priq.iopf = iopf_queue_alloc(dev_name(smmu->dev),
+					   arm_smmu_flush_priq, smmu);
+	if (!smmu->priq.iopf)
+		return -ENOMEM;
+
 	return arm_smmu_init_one_queue(smmu, &smmu->priq.q, ARM_SMMU_PRIQ_PROD,
 				       ARM_SMMU_PRIQ_CONS, PRIQ_ENT_DWORDS,
 				       "priq");
@@ -4971,6 +5148,7 @@ static int arm_smmu_device_remove(struct platform_device *pdev)
 	iommu_device_sysfs_remove(&smmu->iommu);
 	arm_smmu_device_disable(smmu);
 	iopf_queue_free(smmu->evtq.iopf);
+	iopf_queue_free(smmu->priq.iopf);
 
 	return 0;
 }
-- 
2.25.0
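For reference, the opt-in sequence an endpoint driver could use once this
support is in place might look like the sketch below. It is only an
illustration, assuming the generic SVA API used elsewhere in this series
(iommu_dev_enable_feature(), iommu_sva_bind_device());
example_bind_current_mm() is a made-up helper name, not part of the patch.

/*
 * Illustrative sketch, not part of the patch: a PRI-capable PCI device
 * driver opting into SVA so that the SMMUv3 code above enables PRI and
 * routes Page Requests to the generic fault handler.
 */
#include <linux/err.h>
#include <linux/iommu.h>
#include <linux/sched.h>

static struct iommu_sva *example_bind_current_mm(struct device *dev)
{
	struct iommu_sva *handle;
	int ret;

	/*
	 * On this driver, ends up in arm_smmu_dev_enable_sva(): the device
	 * is added to the PRI queue's IOPF queue and arm_smmu_enable_pri()
	 * resets and enables the PRI capability.
	 */
	ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA);
	if (ret)
		return ERR_PTR(ret);

	/* Share the current process address space with the device */
	handle = iommu_sva_bind_device(dev, current->mm, NULL);
	if (IS_ERR(handle))
		iommu_dev_disable_feature(dev, IOMMU_DEV_FEAT_SVA);

	return handle;
}

With that in place, recoverable PRI Page Requests received on the PRIQ are
reported through iommu_report_device_fault(), handled by the IOPF queue, and
answered with a PRI response built in arm_smmu_page_response().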