From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Liu, Yi L" <yi.l.liu@intel.com>
To: alex.williamson@redhat.com, eric.auger@redhat.com
Cc: kevin.tian@intel.com, jacob.jun.pan@linux.intel.com, joro@8bytes.org,
	ashok.raj@intel.com, yi.l.liu@intel.com, jun.j.tian@intel.com,
	yi.y.sun@intel.com, jean-philippe.brucker@arm.com, peterx@redhat.com,
	iommu@lists.linux-foundation.org, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: [RFC v3 8/8] vfio/type1: Add vSVA support for IOMMU-backed mdevs
Date: Wed, 29 Jan 2020 04:11:52 -0800
Message-Id: <1580299912-86084-9-git-send-email-yi.l.liu@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1580299912-86084-1-git-send-email-yi.l.liu@intel.com>
References: <1580299912-86084-1-git-send-email-yi.l.liu@intel.com>

From: Liu Yi L <yi.l.liu@intel.com>

In recent years, the mediated device pass-through framework (e.g.
vfio-mdev) has been used to achieve flexible device sharing across
domains (e.g. VMs). There are also hardware-assisted mediated
pass-through solutions from platform vendors, e.g. Intel VT-d scalable
mode, which supports Intel Scalable I/O Virtualization. Such mdevs are
called IOMMU-backed mdevs, since DMA isolation for them is enforced by
the IOMMU. In the kernel, IOMMU-backed mdevs are exposed to the IOMMU
layer through the aux-domain concept: each mdev is protected by an
iommu domain which is an aux-domain of its physical device. In this
context, IOMMU-backed is equivalent to IOMMU-capable. Details can be
found in the KVM presentation from Kevin Tian:

https://events19.linuxfoundation.org/wp-content/uploads/2017/12/\
Hardware-Assisted-Mediated-Pass-Through-with-VFIO-Kevin-Tian-Intel.pdf

This patch adds NESTING IOMMU support for IOMMU-backed mdevs by
figuring out the physical device behind an IOMMU-backed mdev and then
issuing the IOMMU requests to the IOMMU layer with that physical device
and the mdev's aux-domain info. With this patch, vSVA (Virtual Shared
Virtual Addressing) can be used on IOMMU-backed mdevs.

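In short, each of the three affected callbacks follows the same
pattern: pick the device the IOMMU API should operate on, then forward
the request. The snippet below is an illustrative sketch only, not the
final code (that is in the diff further down); it assumes the
domain_capsule layout added by this patch and the
vfio_mdev_get_iommu_device() helper introduced earlier in this series:

/*
 * Sketch of the per-device callback pattern. For an IOMMU-backed mdev
 * group, IOMMU API calls must target the parent (physical) device
 * returned by vfio_mdev_get_iommu_device(); for ordinary groups the
 * device itself is used.
 */
static int vfio_bind_gpasid_fn(struct device *dev, void *data)
{
	struct domain_capsule *dc = (struct domain_capsule *)data;
	struct iommu_gpasid_bind_data *gbind_data =
		(struct iommu_gpasid_bind_data *)dc->data;
	struct device *iommu_device = dev;

	if (dc->group->mdev_group)
		iommu_device = vfio_mdev_get_iommu_device(dev);

	return iommu_sva_bind_gpasid(dc->domain, iommu_device, gbind_data);
}

The unbind and cache-invalidation callbacks apply the same device
selection; only the final iommu_* call differs.
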
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Jun Tian <jun.j.tian@intel.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Eric Auger <eric.auger@redhat.com>
Cc: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
---
 drivers/vfio/vfio_iommu_type1.c | 23 ++++++++++++++++++++---
 1 file changed, 20 insertions(+), 3 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 2168318..5aea355 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -127,6 +127,7 @@ struct vfio_regions {
 
 struct domain_capsule {
 	struct iommu_domain *domain;
+	struct vfio_group *group;
 	void *data;
 };
 
@@ -143,6 +144,7 @@ static int vfio_iommu_for_each_dev(struct vfio_iommu *iommu,
 	list_for_each_entry(d, &iommu->domain_list, next) {
 		dc.domain = d->domain;
 		list_for_each_entry(g, &d->group_list, next) {
+			dc.group = g;
 			ret = iommu_group_for_each_dev(g->iommu_group,
 						       &dc, fn);
 			if (ret)
@@ -2372,7 +2374,12 @@ static int vfio_bind_gpasid_fn(struct device *dev, void *data)
 	struct iommu_gpasid_bind_data *gbind_data =
 		(struct iommu_gpasid_bind_data *) dc->data;
 
-	return iommu_sva_bind_gpasid(dc->domain, dev, gbind_data);
+	if (dc->group->mdev_group)
+		return iommu_sva_bind_gpasid(dc->domain,
+			vfio_mdev_get_iommu_device(dev), gbind_data);
+	else
+		return iommu_sva_bind_gpasid(dc->domain,
+			dev, gbind_data);
 }
 
 static int vfio_unbind_gpasid_fn(struct device *dev, void *data)
@@ -2381,7 +2388,12 @@ static int vfio_unbind_gpasid_fn(struct device *dev, void *data)
 	struct iommu_gpasid_bind_data *gbind_data =
 		(struct iommu_gpasid_bind_data *) dc->data;
 
-	return iommu_sva_unbind_gpasid(dc->domain, dev,
+	if (dc->group->mdev_group)
+		return iommu_sva_unbind_gpasid(dc->domain,
+			vfio_mdev_get_iommu_device(dev),
+			gbind_data->hpasid);
+	else
+		return iommu_sva_unbind_gpasid(dc->domain, dev,
 				gbind_data->hpasid);
 }
 
@@ -2454,7 +2466,12 @@ static int vfio_cache_inv_fn(struct device *dev, void *data)
 	struct iommu_cache_invalidate_info *cache_inv_info =
 		(struct iommu_cache_invalidate_info *) dc->data;
 
-	return iommu_cache_invalidate(dc->domain, dev, cache_inv_info);
+	if (dc->group->mdev_group)
+		return iommu_cache_invalidate(dc->domain,
+			vfio_mdev_get_iommu_device(dev), cache_inv_info);
+	else
+		return iommu_cache_invalidate(dc->domain,
+			dev, cache_inv_info);
 }
 
 static long vfio_iommu_type1_ioctl(void *iommu_data,
-- 
2.7.4