IOMMU Archive on lore.kernel.org
From: Andy Shevchenko <andy.shevchenko@gmail.com>
To: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: "Tian, Kevin" <kevin.tian@intel.com>,
	Raj Ashok <ashok.raj@intel.com>,
	Jean-Philippe Brucker <jean-philippe.brucker@arm.com>,
	iommu@lists.linux-foundation.org,
	LKML <linux-kernel@vger.kernel.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Jonathan Cameron <jic23@kernel.org>
Subject: Re: [PATCH v5 16/19] iommu/vt-d: Misc macro clean up for SVM
Date: Fri, 16 Aug 2019 00:17:44 +0300
Message-ID: <CAHp75VdD4SpA3oR8KBr6WihLkBgaaUn6f7tFdO7H-2n-hbun_Q@mail.gmail.com> (raw)
In-Reply-To: <1565900005-62508-17-git-send-email-jacob.jun.pan@linux.intel.com>

On Thu, Aug 15, 2019 at 11:52 PM Jacob Pan
<jacob.jun.pan@linux.intel.com> wrote:
>
> Use combined macros for_each_svm_dev() to simplify SVM device iteration
> and error checking.
>
> Suggested-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
> Reviewed-by: Eric Auger <eric.auger@redhat.com>
> ---
>  drivers/iommu/intel-svm.c | 85 +++++++++++++++++++++++------------------------
>  1 file changed, 41 insertions(+), 44 deletions(-)
>
> diff --git a/drivers/iommu/intel-svm.c b/drivers/iommu/intel-svm.c
> index 5a688a5..ea6f2e2 100644
> --- a/drivers/iommu/intel-svm.c
> +++ b/drivers/iommu/intel-svm.c
> @@ -218,6 +218,10 @@ static const struct mmu_notifier_ops intel_mmuops = {
>  static DEFINE_MUTEX(pasid_mutex);
>  static LIST_HEAD(global_svm_list);
>
> +#define for_each_svm_dev(svm, dev)                     \
> +       list_for_each_entry(sdev, &svm->devs, list)     \
> +       if (dev == sdev->dev)                           \
This should be
  if (dev != sdev->dev) {} else
and no trailing \ is needed.

The rationale for the above form is to avoid the dangling-else surprise, where
user code such as

for_each_foo() {
} else {
  ...WTF?!..
}

would otherwise compile, with the else silently binding to the hidden if
inside the macro. With the "{} else" form the macro's if already has its else
branch, so a stray else after the loop body fails to build instead.

> +
>  int intel_svm_bind_mm(struct device *dev, int *pasid, int flags, struct svm_dev_ops *ops)
>  {
>         struct intel_iommu *iommu = intel_svm_device_to_iommu(dev);
> @@ -263,15 +267,13 @@ int intel_svm_bind_mm(struct device *dev, int *pasid, int flags, struct svm_dev_
>                                 goto out;
>                         }
>
> -                       list_for_each_entry(sdev, &svm->devs, list) {
> -                               if (dev == sdev->dev) {
> -                                       if (sdev->ops != ops) {
> -                                               ret = -EBUSY;
> -                                               goto out;
> -                                       }
> -                                       sdev->users++;
> -                                       goto success;
> +                       for_each_svm_dev(svm, dev) {
> +                               if (sdev->ops != ops) {
> +                                       ret = -EBUSY;
> +                                       goto out;
>                                 }
> +                               sdev->users++;
> +                               goto success;
>                         }
>
>                         break;
> @@ -408,48 +410,43 @@ int intel_svm_unbind_mm(struct device *dev, int pasid)
>                 goto out;
>
>         svm = ioasid_find(NULL, pasid, NULL);
> -       if (IS_ERR(svm)) {
> +       if (IS_ERR_OR_NULL(svm)) {
>                 ret = PTR_ERR(svm);
>                 goto out;
>         }
>
> -       if (!svm)
> -               goto out;
> +       for_each_svm_dev(svm, dev) {
> +               ret = 0;
> +               sdev->users--;
> +               if (!sdev->users) {
> +                       list_del_rcu(&sdev->list);
> +                       /* Flush the PASID cache and IOTLB for this device.
> +                        * Note that we do depend on the hardware *not* using
> +                        * the PASID any more. Just as we depend on other
> +                        * devices never using PASIDs that they have no right
> +                        * to use. We have a *shared* PASID table, because it's
> +                        * large and has to be physically contiguous. So it's
> +                        * hard to be as defensive as we might like. */
> +                       intel_pasid_tear_down_entry(iommu, dev, svm->pasid);
> +                       intel_flush_svm_range_dev(svm, sdev, 0, -1, 0, !svm->mm);
> +                       kfree_rcu(sdev, rcu);
> +
> +                       if (list_empty(&svm->devs)) {
> +                               ioasid_free(svm->pasid);
> +                               if (svm->mm)
> +                                       mmu_notifier_unregister(&svm->notifier, svm->mm);
>
> -       list_for_each_entry(sdev, &svm->devs, list) {
> -               if (dev == sdev->dev) {
> -                       ret = 0;
> -                       sdev->users--;
> -                       if (!sdev->users) {
> -                               list_del_rcu(&sdev->list);
> -                               /* Flush the PASID cache and IOTLB for this device.
> -                                * Note that we do depend on the hardware *not* using
> -                                * the PASID any more. Just as we depend on other
> -                                * devices never using PASIDs that they have no right
> -                                * to use. We have a *shared* PASID table, because it's
> -                                * large and has to be physically contiguous. So it's
> -                                * hard to be as defensive as we might like. */
> -                               intel_pasid_tear_down_entry(iommu, dev, svm->pasid);
> -                               intel_flush_svm_range_dev(svm, sdev, 0, -1, 0, !svm->mm);
> -                               kfree_rcu(sdev, rcu);
> -
> -                               if (list_empty(&svm->devs)) {
> -                                       ioasid_free(svm->pasid);
> -                                       if (svm->mm)
> -                                               mmu_notifier_unregister(&svm->notifier, svm->mm);
> -
> -                                       list_del(&svm->list);
> -
> -                                       /* We mandate that no page faults may be outstanding
> -                                        * for the PASID when intel_svm_unbind_mm() is called.
> -                                        * If that is not obeyed, subtle errors will happen.
> -                                        * Let's make them less subtle... */
> -                                       memset(svm, 0x6b, sizeof(*svm));
> -                                       kfree(svm);
> -                               }
> +                               list_del(&svm->list);
> +
> +                               /* We mandate that no page faults may be outstanding
> +                                * for the PASID when intel_svm_unbind_mm() is called.
> +                                * If that is not obeyed, subtle errors will happen.
> +                                * Let's make them less subtle... */
> +                               memset(svm, 0x6b, sizeof(*svm));
> +                               kfree(svm);
>                         }
> -                       break;
>                 }
> +               break;
>         }
>   out:
>         mutex_unlock(&pasid_mutex);
> @@ -585,7 +582,7 @@ static irqreturn_t prq_event_thread(int irq, void *d)
>                          * to unbind the mm while any page faults are outstanding.
>                          * So we only need RCU to protect the internal idr code. */
>                         rcu_read_unlock();
> -                       if (IS_ERR(svm) || !svm) {
> +                       if (IS_ERR_OR_NULL(svm)) {
>                                 pr_err("%s: Page request for invalid PASID %d: %08llx %08llx\n",
>                                        iommu->name, req->pasid, ((unsigned long long *)req)[0],
>                                        ((unsigned long long *)req)[1]);
> --
> 2.7.4
>


-- 
With Best Regards,
Andy Shevchenko

Thread overview: 22+ messages
2019-08-15 20:13 [PATCH v5 00/19] Shared virtual address IOMMU and VT-d support Jacob Pan
2019-08-15 20:13 ` [PATCH v5 01/19] iommu: Add a timeout parameter for PRQ response Jacob Pan
2019-08-15 20:13 ` [PATCH v5 02/19] iommu: handle page response timeout Jacob Pan
2019-08-15 20:13 ` [PATCH v5 03/19] trace/iommu: Add sva trace events Jacob Pan
2019-08-15 20:13 ` [PATCH v5 04/19] iommu: Use device fault trace event Jacob Pan
2019-08-15 20:13 ` [PATCH v5 05/19] iommu: Introduce attach/detach_pasid_table API Jacob Pan
2019-08-15 20:13 ` [PATCH v5 06/19] iommu: Introduce cache_invalidate API Jacob Pan
2019-08-15 20:13 ` [PATCH v5 07/19] iommu: Add I/O ASID allocator Jacob Pan
2019-08-15 20:13 ` [PATCH v5 08/19] iommu/ioasid: Add custom allocators Jacob Pan
2019-08-15 20:13 ` [PATCH v5 09/19] iommu: Introduce guest PASID bind function Jacob Pan
2019-08-15 20:13 ` [PATCH v5 10/19] iommu/vt-d: Enlightened PASID allocation Jacob Pan
2019-08-15 20:13 ` [PATCH v5 11/19] iommu/vt-d: Add custom allocator for IOASID Jacob Pan
2019-08-15 20:13 ` [PATCH v5 12/19] iommu/vt-d: Replace Intel specific PASID allocator with IOASID Jacob Pan
2019-08-15 20:13 ` [PATCH v5 13/19] iommu/vt-d: Move domain helper to header Jacob Pan
2019-08-15 20:13 ` [PATCH v5 14/19] iommu/vt-d: Avoid duplicated code for PASID setup Jacob Pan
2019-08-15 20:13 ` [PATCH v5 15/19] iommu/vt-d: Add nested translation helper function Jacob Pan
2019-08-15 20:13 ` [PATCH v5 16/19] iommu/vt-d: Misc macro clean up for SVM Jacob Pan
2019-08-15 21:17   ` Andy Shevchenko [this message]
2019-08-15 22:50     ` Jacob Pan
2019-08-15 20:13 ` [PATCH v5 17/19] iommu/vt-d: Add bind guest PASID support Jacob Pan
2019-08-15 20:13 ` [PATCH v5 18/19] iommu/vt-d: Support flushing more translation cache types Jacob Pan
2019-08-15 20:13 ` [PATCH v5 19/19] iommu/vt-d: Add svm/sva invalidate function Jacob Pan
