From mboxrd@z Thu Jan 1 00:00:00 1970
From: Eric Auger <eric.auger@redhat.com>
Subject: [PATCH v7 16/23] iommu/smmuv3: Nested mode single MSI doorbell per domain enforcement
Date: Mon, 8 Apr 2019 14:19:04 +0200
Message-ID: <20190408121911.24103-17-eric.auger@redhat.com>
In-Reply-To: <20190408121911.24103-1-eric.auger@redhat.com>
References: <20190408121911.24103-1-eric.auger@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
To: eric.auger.pro@gmail.com, eric.auger@redhat.com, iommu@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu,
	joro@8bytes.org, alex.williamson@redhat.com, jacob.jun.pan@linux.intel.com,
	yi.l.liu@intel.com, jean-philippe.brucker@arm.com, will.deacon@arm.com,
	robin.murphy@arm.com
Cc: peter.maydell@linaro.org, kevin.tian@intel.com, vincent.stehle@arm.com,
	ashok.raj@intel.com, marc.zyngier@arm.com, christoffer.dall@arm.com
List-Id: kvmarm@lists.cs.columbia.edu

In nested mode we enforce the rule that all devices belonging to the
same iommu_domain share the same msi_domain. Indeed, if several
physical MSI doorbells were in use within a single iommu_domain, it
would become very difficult to resolve which physical doorbell a
nested stage mapping should translate to. So let's forbid this
situation.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
---
 drivers/iommu/arm-smmu-v3.c | 41 +++++++++++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 4366921d8318..2b2d90d736d6 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -1781,6 +1781,37 @@ static void arm_smmu_detach_dev(struct device *dev)
 	arm_smmu_install_ste_for_dev(fwspec);
 }
 
+static bool arm_smmu_share_msi_domain(struct iommu_domain *domain,
+				      struct device *dev)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct irq_domain *irqd = dev_get_msi_domain(dev);
+	struct arm_smmu_master_data *entry;
+	unsigned long flags;
+	bool share = false;
+
+	if (!irqd)
+		return true;
+
+	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+	list_for_each_entry(entry, &smmu_domain->devices, list) {
+		struct irq_domain *d = dev_get_msi_domain(entry->dev);
+
+		if (!d)
+			continue;
+		if (irqd != d) {
+			dev_info(dev, "Nested mode forbids to attach devices "
+				 "using different physical MSI doorbells "
+				 "to the same iommu_domain");
+			goto unlock;
+		}
+	}
+	share = true;
+unlock:
+	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+	return share;
+}
+
 static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 {
 	int ret = 0;
@@ -1819,6 +1850,16 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 		ret = -ENXIO;
 		goto out_unlock;
 	}
+	/*
+	 * In nested mode we must check all devices belonging to the
+	 * domain share the same physical MSI doorbell. Otherwise nested
+	 * stage MSI binding is not supported.
+	 */
+	if (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED &&
+	    !arm_smmu_share_msi_domain(domain, dev)) {
+		ret = -EINVAL;
+		goto out_unlock;
+	}
 
 	ste->assigned = true;
 	master->domain = smmu_domain;
-- 
2.20.1
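
To illustrate the enforced rule from a caller's point of view, here is a
minimal, hypothetical sketch (not part of the patch): attach_two_devices(),
devA and devB are made-up names, and the two devices are assumed to have
their MSIs routed through different physical doorbells, so that
dev_get_msi_domain() returns a different irq_domain for each of them.

/* Hypothetical illustration only; devA/devB are assumed to use
 * different physical MSI doorbells (different MSI irq_domains).
 */
#include <linux/iommu.h>
#include <linux/device.h>

static int attach_two_devices(struct iommu_domain *nested_dom,
			      struct device *devA, struct device *devB)
{
	int ret;

	ret = iommu_attach_device(nested_dom, devA);
	if (ret)
		return ret;

	/*
	 * With this patch, the second attach is rejected with -EINVAL
	 * when nested_dom is in nested mode and devB's MSI irq_domain
	 * differs from devA's: a single physical MSI doorbell per
	 * iommu_domain is enforced.
	 */
	ret = iommu_attach_device(nested_dom, devB);
	if (ret)
		iommu_detach_device(nested_dom, devA);

	return ret;
}

If both devices share the same MSI irq_domain (or have none), the second
attach proceeds as before.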