From: Jason Gunthorpe <jgg@nvidia.com>
To: Alex Williamson <alex.williamson@redhat.com>
Cc: "Tian, Kevin" <kevin.tian@intel.com>,
	Joerg Roedel <joro@8bytes.org>,
	Jean-Philippe Brucker <jean-philippe@linaro.org>,
	David Gibson <david@gibson.dropbear.id.au>,
	Jason Wang <jasowang@redhat.com>,
	"parav@mellanox.com" <parav@mellanox.com>,
	"Enrico Weigelt, metux IT consult" <lkml@metux.net>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Shenming Lu <lushenming@huawei.com>,
	Eric Auger <eric.auger@redhat.com>,
	Jonathan Corbet <corbet@lwn.net>,
	"Raj, Ashok" <ashok.raj@intel.com>,
	"Liu, Yi L" <yi.l.liu@intel.com>, "Wu, Hao" <hao.wu@intel.com>,
	"Jiang, Dave" <dave.jiang@intel.com>,
	Jacob Pan <jacob.jun.pan@linux.intel.com>,
	Kirti Wankhede <kwankhede@nvidia.com>,
	Robin Murphy <robin.murphy@arm.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"iommu@lists.linux-foundation.org"
	<iommu@lists.linux-foundation.org>,
	David Woodhouse <dwmw2@infradead.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Lu Baolu <baolu.lu@linux.intel.com>
Subject: Re: Plan for /dev/ioasid RFC v2
Date: Thu, 17 Jun 2021 21:10:56 -0300
Message-ID: <20210618001056.GB1002214@nvidia.com>
In-Reply-To: <20210615101215.4ba67c86.alex.williamson@redhat.com>

On Tue, Jun 15, 2021 at 10:12:15AM -0600, Alex Williamson wrote:
> 
> 1) A dual-function PCIe e1000e NIC where the functions are grouped
>    together due to ACS isolation issues.
> 
>    a) Initial state: functions 0 & 1 are both bound to e1000e driver.
> 
>    b) Admin uses driverctl to bind function 1 to vfio-pci, creating
>       vfio device file, which is chmod'd to grant to a user.
> 
>    c) User opens vfio function 1 device file and an iommu_fd, binds
>    device_fd to iommu_fd.
> 
>    Does this succeed?
>      - if no, specifically where does it fail?

No, the e1000e driver is still bound to function 0, the other device
in the group.

It fails during the VFIO_BIND_IOASID_FD call because the iommu common
code checks the group membership for consistency.

We detect it basically the same way things work today, just moved to
the iommu code.
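
To make that concrete, a rough sketch of the check (nothing below
exists today; iommu_fd_driver_opted_in() is a made-up stand-in for
whatever records the opt-in discussed further down):

/* Sketch only - the iommu_fd core walks the group at
 * VFIO_BIND_IOASID_FD time and refuses the bind if any member is
 * still claimed by an unaware driver (e1000e in the example above).
 */
static int iommu_fd_check_one(struct device *dev, void *unused)
{
	/* Driverless devices are fine, as are drivers that opted in. */
	if (!dev->driver || iommu_fd_driver_opted_in(dev))
		return 0;
	return -EBUSY;
}

static int iommu_fd_check_group(struct iommu_group *group)
{
	return iommu_group_for_each_dev(group, NULL, iommu_fd_check_one);
}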

>    d) Repeat b) for function 0.
>    e) Repeat c), still using function 1, is it different?  Where?  Why?

Succeeds, because all of the group's member devices are now bound to
vfio.

It is hard to predict the nicest way to do all of this, but I would
start by imagining that iommu_fd-using drivers (like vfio) will call
some kind of iommu_fd_allow_dma_blocking() during their probe(), which
organizes the machinery to drive this.
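
Very roughly, and assuming a call with that name actually
materializes, the driver side would amount to:

/* Hypothetical - iommu_fd_allow_dma_blocking() is the call imagined
 * above, not an existing symbol.  An iommu_fd-aware driver declares
 * during probe() that it is fine with the iommu core parking its
 * device (and therefore its group) in the blocked-DMA state.
 */
static int vfio_pci_like_probe(struct pci_dev *pdev,
			       const struct pci_device_id *id)
{
	int ret;

	ret = iommu_fd_allow_dma_blocking(&pdev->dev);
	if (ret)
		return ret;

	/* ... the rest of the normal vfio-pci style probe ... */
	return 0;
}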

> 2) The same NIC as 1)
> 
>    a) Initial state: functions 0 & 1 bound to vfio-pci, vfio device
>       files granted to user, user has bound both device_fds to the same
>       iommu_fd.
> 
>    AIUI, even though not bound to an IOASID, vfio can now enable access
>    through the device_fds, right?

Yes

>    What specific entity has placed these
>    devices into a block DMA state, when, and how?

To keep all the semantics the same it must be done as part of
VFIO_BIND_IOASID_FD.

This will have to go over every device in the group and put it in the
DMA-blocked state. Riffing on the above, this is possible if there is
no attached device driver, or if the attached device driver has called
iommu_fd_allow_dma_blocking() during its probe().

I haven't gone through all of Kevin's notes about how this could be
sorted out directly in the iommu code, though.
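
The bind-time walk could look something like this sketch; every
iommu_fd_* helper in it is hypothetical:

/* Hypothetical.  VFIO_BIND_IOASID_FD resolves the device's group,
 * verifies every member is driverless or opted-in (as sketched
 * earlier), and only then moves the whole group to the blocked-DMA
 * state.
 */
static int iommu_fd_bind_device(struct iommu_fd *ifd, struct device *dev)
{
	struct iommu_group *group = iommu_group_get(dev);
	int ret;

	if (!group)
		return -ENODEV;

	ret = iommu_fd_check_group(group);
	if (!ret)
		ret = iommu_fd_block_group_dma(ifd, group);

	iommu_group_put(group);
	return ret;
}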

>    b) Both devices are attached to the same IOASID.
>
>    Are we assuming that each device was atomically moved to the new
>    IOMMU context by the IOASID code?  What if the IOMMU cannot change
>    the domain atomically?

What does "atomically" mean here? I assume all IOMMU HW can change
IOASIDs without accidentally leaking traffic through.

Otherwise that is a major design restriction.

> c) The device_fd for function 1 is detached from the IOASID.
> 
>    Are we assuming the reverse of b) performed by the IOASID code?

Yes, the IOMMU will change from the active IOASID to the "block DMA"
ioasid in a way that is secure.
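
A sketch of that detach path, with iommu_fd_set_group_domain()
standing in for however the iommu core ends up switching a group
between translations without an unprotected window:

/* Hypothetical detach path: the group goes straight from the user's
 * IOASID to the blocking domain, never through an untranslated state.
 */
static int ioasid_detach_device(struct iommu_fd *ifd, struct device *dev)
{
	struct iommu_group *group = iommu_group_get(dev);
	int ret;

	if (!group)
		return -ENODEV;

	ret = iommu_fd_set_group_domain(ifd, group, ifd->block_dma_domain);
	iommu_group_put(group);
	return ret;
}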

>    d) The device_fd for function 1 is unbound from the iommu_fd.
> 
>    Does this succeed?

Yes

>      - if yes, what is the resulting IOMMU context of the device and
>        who owns it?

The device_fd for function 1 remains set to the "block DMA" ioasid.

Attempting to attach a kernel driver triggers a BUG_ON(), as today.

Attempting to open it again and use it with a different iommu_fd fails.

>    e) Function 1 is unbound from vfio-pci.
> 
>    Does this work or is it blocked?  If blocked, by what entity
>    specifically?

As today, it is allowed. The device's IOASID would have to remain
"block all DMA" until the implicit connection to the group in the
iommu_fd is released.

>    f) Function 1 is bound to e1000e driver.

As today, a BUG_ON() is triggered via the same maze of notifiers
(gross, but where we are for now). The notifiers would be done by the
iommu_fd instead of vfio.
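
Sketching that move, the iommu_fd would register roughly the notifier
vfio has today; the IOMMU_GROUP_NOTIFY_BOUND_DRIVER action is the
existing one, everything else here is hypothetical:

/* Hypothetical iommu_fd version of vfio's existing group notifier. */
static int iommu_fd_group_notifier(struct notifier_block *nb,
				   unsigned long action, void *data)
{
	struct device *dev = data;

	/* An unaware driver grabbing a device in a user-owned group is
	 * fatal - the same ugly BUG_ON() outcome we have today in vfio.
	 */
	if (action == IOMMU_GROUP_NOTIFY_BOUND_DRIVER &&
	    !iommu_fd_driver_opted_in(dev))
		BUG();

	return NOTIFY_OK;
}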

> 3) A dual-function conventional PCI e1000 NIC where the functions are
>    grouped together due to shared RID.

This operates effectively the same as today: manipulating a device
implicitly manipulates the group. Instead of going through the
DMA-block step, the devices track the IOASID the group is using.

We model it by demanding that all devices attach to the same IOASID,
and instead of doing the DMA-block step the device remains attached to
the group's IOASID. Today this is such an uncommon configuration (a
PCI bridge!) that we shouldn't design the entire API around it.
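
For the shared-RID case the attach rule could be as simple as the
sketch below; the group_attached_ioasid() bookkeeping and the ioasid
structures are invented purely for illustration:

/* Hypothetical attach check for shared-RID (conventional PCI) groups:
 * every device in the group must use the one IOASID the group is
 * already committed to.
 */
static int ioasid_attach_device(struct ioasid_ctx *ioasid, struct device *dev)
{
	struct iommu_group *group = iommu_group_get(dev);
	int ret;

	if (!group)
		return -ENODEV;

	if (group_attached_ioasid(group) &&
	    group_attached_ioasid(group) != ioasid)
		ret = -EBUSY;	/* group already committed elsewhere */
	else
		ret = ioasid_do_attach(ioasid, group);

	iommu_group_put(group);
	return ret;
}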

> If vfio gets to offload all of its group management to IOASID code,
> that's great, but I'm afraid that IOASID is so focused on a
> device-level API that we're instead just ignoring the group dynamics
> and vfio will be forced to provide oversight to maintain secure
> userspace access.

I think it would be a major design failure if VFIO is required to
provide additional security on top of the iommu code. This is
basically the refactoring exercise - to move the VFIO code that is
only about iommu concerns into the iommu layer so that VFIO becomes
thinner.

Otherwise we still can't properly share this code - why should VDPA
and VFIO have different isolation models? Is it just because we expect
that everything except VFIO has 1:1 groups or no group at all? Feels
wonky.

Jason
