From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH v8 01/18] iommu/dma: Allow MSI-only cookies
From: Tomasz Nowicki
To: Eric Auger, eric.auger.pro@gmail.com, christoffer.dall@linaro.org,
 marc.zyngier@arm.com, robin.murphy@arm.com, alex.williamson@redhat.com,
 will.deacon@arm.com, joro@8bytes.org, tglx@linutronix.de,
 jason@lakedaemon.net, linux-arm-kernel@lists.infradead.org
Cc: drjones@redhat.com, kvm@vger.kernel.org, punit.agrawal@arm.com,
 linux-kernel@vger.kernel.org, geethasowjanya.akula@gmail.com,
 diana.craciun@nxp.com, iommu@lists.linux-foundation.org,
 pranav.sawargaonkar@gmail.com, bharat.bhushan@nxp.com,
 shankerd@codeaurora.org, gpkulkarni@gmail.com
Date: Tue, 17 Jan 2017 12:59:02 +0100
Message-ID: <63888cd2-dfbd-ecad-abe2-9426fa61e72c@caviumnetworks.com>
In-Reply-To: <1484127714-3263-2-git-send-email-eric.auger@redhat.com>
References: <1484127714-3263-1-git-send-email-eric.auger@redhat.com>
 <1484127714-3263-2-git-send-email-eric.auger@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Firefox/45.0 Thunderbird/45.5.1
MIME-Version: 1.0
Content-Type: text/plain; charset="windows-1252"; format=flowed
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

On 11.01.2017 10:41, Eric Auger wrote:
> From: Robin Murphy
>
> IOMMU domain users such as VFIO face a similar problem to DMA API ops
> with regard to mapping MSI messages in systems where the MSI write is
> subject to IOMMU translation. With the relevant infrastructure now in
> place for managed DMA domains, it's actually really simple for other
> users to piggyback off that and reap the benefits without giving up
> their own IOVA management, and without having to reinvent their own
> wheel in the MSI layer.
>
> Allow such users to opt into automatic MSI remapping by dedicating a
> region of their IOVA space to a managed cookie, and extend the mapping
> routine to implement a trivial linear allocator in such cases, to avoid
> the needless overhead of a full-blown IOVA domain.
>
> Signed-off-by: Robin Murphy

Reviewed-by: Tomasz Nowicki

Thanks,
Tomasz

> ---
>  drivers/iommu/dma-iommu.c | 119 +++++++++++++++++++++++++++++++++++++---------
>  include/linux/dma-iommu.h |   6 +++
>  2 files changed, 102 insertions(+), 23 deletions(-)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 2db0d64..de41ead 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -37,15 +37,50 @@ struct iommu_dma_msi_page {
>  	phys_addr_t		phys;
>  };
>
> +enum iommu_dma_cookie_type {
> +	IOMMU_DMA_IOVA_COOKIE,
> +	IOMMU_DMA_MSI_COOKIE,
> +};
> +
>  struct iommu_dma_cookie {
> -	struct iova_domain	iovad;
> -	struct list_head	msi_page_list;
> -	spinlock_t		msi_lock;
> +	enum iommu_dma_cookie_type	type;
> +	union {
> +		/* Full allocator for IOMMU_DMA_IOVA_COOKIE */
> +		struct iova_domain	iovad;
> +		/* Trivial linear page allocator for IOMMU_DMA_MSI_COOKIE */
> +		dma_addr_t		msi_iova;
> +	};
> +	struct list_head	msi_page_list;
> +	spinlock_t		msi_lock;
>  };
>
> +static inline size_t cookie_msi_granule(struct iommu_dma_cookie *cookie)
> +{
> +	if (cookie->type == IOMMU_DMA_IOVA_COOKIE)
> +		return cookie->iovad.granule;
> +	return PAGE_SIZE;
> +}
> +
>  static inline struct iova_domain *cookie_iovad(struct iommu_domain *domain)
>  {
> -	return &((struct iommu_dma_cookie *)domain->iova_cookie)->iovad;
> +	struct iommu_dma_cookie *cookie = domain->iova_cookie;
> +
> +	if (cookie->type == IOMMU_DMA_IOVA_COOKIE)
> +		return &cookie->iovad;
> +	return NULL;
> +}
> +
> +static struct iommu_dma_cookie *cookie_alloc(enum iommu_dma_cookie_type type)
> +{
> +	struct iommu_dma_cookie *cookie;
> +
> +	cookie = kzalloc(sizeof(*cookie), GFP_KERNEL);
> +	if (cookie) {
> +		spin_lock_init(&cookie->msi_lock);
> +		INIT_LIST_HEAD(&cookie->msi_page_list);
> +		cookie->type = type;
> +	}
> +	return cookie;
>  }
>
>  int iommu_dma_init(void)
> @@ -62,25 +97,53 @@ int iommu_dma_init(void)
>   */
>  int iommu_get_dma_cookie(struct iommu_domain *domain)
>  {
> +	if (domain->iova_cookie)
> +		return -EEXIST;
> +
> +	domain->iova_cookie = cookie_alloc(IOMMU_DMA_IOVA_COOKIE);
> +	if (!domain->iova_cookie)
> +		return -ENOMEM;
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(iommu_get_dma_cookie);
> +
> +/**
> + * iommu_get_msi_cookie - Acquire just MSI remapping resources
> + * @domain: IOMMU domain to prepare
> + * @base: Start address of IOVA region for MSI mappings
> + *
> + * Users who manage their own IOVA allocation and do not want DMA API support,
> + * but would still like to take advantage of automatic MSI remapping, can use
> + * this to initialise their own domain appropriately. Users should reserve a
> + * contiguous IOVA region, starting at @base, large enough to accommodate the
> + * number of PAGE_SIZE mappings necessary to cover every MSI doorbell address
> + * used by the devices attached to @domain.
> + */
> +int iommu_get_msi_cookie(struct iommu_domain *domain, dma_addr_t base)
> +{
>  	struct iommu_dma_cookie *cookie;
>
> +	if (domain->type != IOMMU_DOMAIN_UNMANAGED)
> +		return -EINVAL;
> +
>  	if (domain->iova_cookie)
>  		return -EEXIST;
>
> -	cookie = kzalloc(sizeof(*cookie), GFP_KERNEL);
> +	cookie = cookie_alloc(IOMMU_DMA_MSI_COOKIE);
>  	if (!cookie)
>  		return -ENOMEM;
>
> -	spin_lock_init(&cookie->msi_lock);
> -	INIT_LIST_HEAD(&cookie->msi_page_list);
> +	cookie->msi_iova = base;
>  	domain->iova_cookie = cookie;
>  	return 0;
>  }
> -EXPORT_SYMBOL(iommu_get_dma_cookie);
> +EXPORT_SYMBOL(iommu_get_msi_cookie);
>
>  /**
>   * iommu_put_dma_cookie - Release a domain's DMA mapping resources
> - * @domain: IOMMU domain previously prepared by iommu_get_dma_cookie()
> + * @domain: IOMMU domain previously prepared by iommu_get_dma_cookie() or
> + *	    iommu_get_msi_cookie()
>   *
>   * IOMMU drivers should normally call this from their domain_free callback.
>   */
> @@ -92,7 +155,7 @@ void iommu_put_dma_cookie(struct iommu_domain *domain)
>  	if (!cookie)
>  		return;
>
> -	if (cookie->iovad.granule)
> +	if (cookie->type == IOMMU_DMA_IOVA_COOKIE && cookie->iovad.granule)
>  		put_iova_domain(&cookie->iovad);
>
>  	list_for_each_entry_safe(msi, tmp, &cookie->msi_page_list, list) {
> @@ -137,11 +200,12 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
>  int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
>  		u64 size, struct device *dev)
>  {
> -	struct iova_domain *iovad = cookie_iovad(domain);
> +	struct iommu_dma_cookie *cookie = domain->iova_cookie;
> +	struct iova_domain *iovad = &cookie->iovad;
>  	unsigned long order, base_pfn, end_pfn;
>
> -	if (!iovad)
> -		return -ENODEV;
> +	if (!cookie || cookie->type != IOMMU_DMA_IOVA_COOKIE)
> +		return -EINVAL;
>
>  	/* Use the smallest supported page size for IOVA granularity */
>  	order = __ffs(domain->pgsize_bitmap);
> @@ -662,11 +726,12 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
>  {
>  	struct iommu_dma_cookie *cookie = domain->iova_cookie;
>  	struct iommu_dma_msi_page *msi_page;
> -	struct iova_domain *iovad = &cookie->iovad;
> +	struct iova_domain *iovad = cookie_iovad(domain);
>  	struct iova *iova;
>  	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> +	size_t size = cookie_msi_granule(cookie);
>
> -	msi_addr &= ~(phys_addr_t)iova_mask(iovad);
> +	msi_addr &= ~(phys_addr_t)(size - 1);
>  	list_for_each_entry(msi_page, &cookie->msi_page_list, list)
>  		if (msi_page->phys == msi_addr)
>  			return msi_page;
> @@ -675,13 +740,18 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
>  	if (!msi_page)
>  		return NULL;
>
> -	iova = __alloc_iova(domain, iovad->granule, dma_get_mask(dev));
> -	if (!iova)
> -		goto out_free_page;
> -
>  	msi_page->phys = msi_addr;
> -	msi_page->iova = iova_dma_addr(iovad, iova);
> -	if (iommu_map(domain, msi_page->iova, msi_addr, iovad->granule, prot))
> +	if (iovad) {
> +		iova = __alloc_iova(domain, size, dma_get_mask(dev));
> +		if (!iova)
> +			goto out_free_page;
> +		msi_page->iova = iova_dma_addr(iovad, iova);
> +	} else {
> +		msi_page->iova = cookie->msi_iova;
> +		cookie->msi_iova += size;
> +	}
> +
> +	if (iommu_map(domain, msi_page->iova, msi_addr, size, prot))
>  		goto out_free_iova;
>
>  	INIT_LIST_HEAD(&msi_page->list);
> @@ -689,7 +759,10 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
>  	return msi_page;
>
>  out_free_iova:
> -	__free_iova(iovad, iova);
> +	if (iovad)
> +		__free_iova(iovad, iova);
> +	else
> +		cookie->msi_iova -= size;
>
>  out_free_page:
>  	kfree(msi_page);
>  	return NULL;
> @@ -730,7 +803,7 @@ void iommu_dma_map_msi_msg(int irq, struct msi_msg *msg)
>  		msg->data = ~0U;
>  	} else {
>  		msg->address_hi = upper_32_bits(msi_page->iova);
> -		msg->address_lo &= iova_mask(&cookie->iovad);
> +		msg->address_lo &= cookie_msi_granule(cookie) - 1;
>  		msg->address_lo += lower_32_bits(msi_page->iova);
>  	}
>  }
> diff --git a/include/linux/dma-iommu.h b/include/linux/dma-iommu.h
> index 7f7e9a7..28df844 100644
> --- a/include/linux/dma-iommu.h
> +++ b/include/linux/dma-iommu.h
> @@ -27,6 +27,7 @@
>
>  /* Domain management interface for IOMMU drivers */
>  int iommu_get_dma_cookie(struct iommu_domain *domain);
> +int iommu_get_msi_cookie(struct iommu_domain *domain, dma_addr_t base);
>  void iommu_put_dma_cookie(struct iommu_domain *domain);
>
>  /* Setup call for arch DMA mapping code */
> @@ -86,6 +87,11 @@ static inline int iommu_get_dma_cookie(struct iommu_domain *domain)
>  	return -ENODEV;
>  }
>
> +static inline int iommu_get_msi_cookie(struct iommu_domain *domain, dma_addr_t base)
> +{
> +	return -ENODEV;
> +}
> +
>  static inline void iommu_put_dma_cookie(struct iommu_domain *domain)
>  {
>  }
>
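
For context, a minimal sketch of how an IOMMU domain user such as VFIO might consume the cookie API added above. This is not part of the patch; example_msi_cookie_init(), MSI_IOVA_BASE and MSI_IOVA_LENGTH are illustrative names only.

#include <linux/dma-iommu.h>
#include <linux/iommu.h>

/* Hypothetical window the caller sets aside for MSI doorbell mappings */
#define MSI_IOVA_BASE	0x8000000UL
#define MSI_IOVA_LENGTH	0x100000UL

static int example_msi_cookie_init(struct iommu_domain *domain)
{
	/*
	 * The caller keeps [MSI_IOVA_BASE, MSI_IOVA_BASE + MSI_IOVA_LENGTH)
	 * out of its own IOVA allocator; the cookie's trivial linear
	 * allocator then hands out PAGE_SIZE-sized slots from this window
	 * whenever iommu_dma_map_msi_msg() needs to remap an MSI doorbell.
	 * Fails with -EINVAL unless the domain is IOMMU_DOMAIN_UNMANAGED.
	 */
	return iommu_get_msi_cookie(domain, MSI_IOVA_BASE);
}

The cookie is released later through iommu_put_dma_cookie(), which the IOMMU driver normally calls from its domain_free callback, as the kdoc above notes.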