Date: Mon, 15 Nov 2021 05:27:15 -0800
From: Christoph Hellwig
To: Lu Baolu
Cc: Greg Kroah-Hartman, Joerg Roedel, Alex Williamson, Bjorn Helgaas,
	Jason Gunthorpe, Kevin Tian, Ashok Raj, Chaitanya Kulkarni,
	kvm@vger.kernel.org, rafael@kernel.org, linux-pci@vger.kernel.org,
	Cornelia Huck, linux-kernel@vger.kernel.org,
	iommu@lists.linux-foundation.org, Jacob jun Pan, Diana Craciun,
	Will Deacon
Subject: Re: [PATCH 06/11] iommu: Expose group variants of dma ownership interfaces
Message-ID:
References: <20211115020552.2378167-1-baolu.lu@linux.intel.com>
	<20211115020552.2378167-7-baolu.lu@linux.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20211115020552.2378167-7-baolu.lu@linux.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Nov 15, 2021 at 10:05:47AM +0800, Lu Baolu wrote:
> The vfio needs to set DMA_OWNER_USER for the entire group when attaching

The vfio subsystem?  driver?

> it to a vfio container. So expose group variants of setting/releasing dma
> ownership for this purpose.
>
> This also exposes the helper iommu_group_dma_owner_unclaimed() for vfio
> report to userspace if the group is viable to user assignment, for

.. for vfio to report .. ?
> void iommu_device_release_dma_owner(struct device *dev, enum iommu_dma_owner owner);
> +int iommu_group_set_dma_owner(struct iommu_group *group, enum iommu_dma_owner owner,
> +			      struct file *user_file);
> +void iommu_group_release_dma_owner(struct iommu_group *group, enum iommu_dma_owner owner);

Please avoid all these overly long lines.

> +static inline int iommu_group_set_dma_owner(struct iommu_group *group,
> +					    enum iommu_dma_owner owner,
> +					    struct file *user_file)
> +{
> +	return -EINVAL;
> +}
> +
> +static inline void iommu_group_release_dma_owner(struct iommu_group *group,
> +						 enum iommu_dma_owner owner)
> +{
> +}
> +
> +static inline bool iommu_group_dma_owner_unclaimed(struct iommu_group *group)
> +{
> +	return false;
> +}

Why do we need these stubs?  All potential callers should already
require CONFIG_IOMMU_API?  Same for the helpers added in patch 1, btw.

> +	mutex_lock(&group->mutex);
> +	ret = __iommu_group_set_dma_owner(group, owner, user_file);
> +	mutex_unlock(&group->mutex);
> +	mutex_lock(&group->mutex);
> +	__iommu_group_release_dma_owner(group, owner);
> +	mutex_unlock(&group->mutex);

Unless I'm missing something (just skipping over the patches), the
existing callers also take the lock just around these calls, so we
don't really need the __-prefixed lowlevel helpers.

> +	mutex_lock(&group->mutex);
> +	owner = group->dma_owner;
> +	mutex_unlock(&group->mutex);

No need for a lock to read a single scalar.

> +
> +	return owner == DMA_OWNER_NONE;
> +}
> +EXPORT_SYMBOL_GPL(iommu_group_dma_owner_unclaimed);
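
FWIW, for that last one a minimal sketch of what I'd expect, assuming
dma_owner stays a single enum that is only ever written under
group->mutex (untested, and the READ_ONCE() is just my own defensive
habit, not something the patch itself uses):

bool iommu_group_dma_owner_unclaimed(struct iommu_group *group)
{
	/* single enum read, no need to serialize against writers */
	return READ_ONCE(group->dma_owner) == DMA_OWNER_NONE;
}
EXPORT_SYMBOL_GPL(iommu_group_dma_owner_unclaimed);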