From: "Tian, Kevin" <kevin.tian@intel.com>
To: Jason Gunthorpe <jgg@nvidia.com>,
	"Alex Williamson (alex.williamson@redhat.com)" 
	<alex.williamson@redhat.com>, Joerg Roedel <joro@8bytes.org>
Cc: Alex Williamson <alex.williamson@redhat.com>,
	Joerg Roedel <joro@8bytes.org>,
	Jean-Philippe Brucker <jean-philippe@linaro.org>,
	"David Gibson" <david@gibson.dropbear.id.au>,
	Jason Wang <jasowang@redhat.com>,
	"parav@mellanox.com" <parav@mellanox.com>,
	"Enrico Weigelt, metux IT consult" <lkml@metux.net>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Shenming Lu <lushenming@huawei.com>,
	Eric Auger <eric.auger@redhat.com>,
	Jonathan Corbet <corbet@lwn.net>,
	"Raj, Ashok" <ashok.raj@intel.com>,
	"Liu, Yi L" <yi.l.liu@intel.com>, "Wu, Hao" <hao.wu@intel.com>,
	"Jiang, Dave" <dave.jiang@intel.com>,
	Jacob Pan <jacob.jun.pan@linux.intel.com>,
	"Kirti Wankhede" <kwankhede@nvidia.com>,
	Robin Murphy <robin.murphy@arm.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"iommu@lists.linux-foundation.org"
	<iommu@lists.linux-foundation.org>,
	"David Woodhouse" <dwmw2@infradead.org>,
	LKML <linux-kernel@vger.kernel.org>,
	"Lu Baolu" <baolu.lu@linux.intel.com>
Subject: RE: Plan for /dev/ioasid RFC v2
Date: Fri, 25 Jun 2021 10:27:18 +0000	[thread overview]
Message-ID: <BN9PR11MB5433B9C0577CF0BD8EFCC9BC8C069@BN9PR11MB5433.namprd11.prod.outlook.com> (raw)
In-Reply-To: <20210618182306.GI1002214@nvidia.com>

Hi, Alex/Joerg/Jason,

I'd like to draw your attention to an updated proposal below. Let's see
whether there is a converging direction to move forward. 😊

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Saturday, June 19, 2021 2:23 AM
> 
> On Fri, Jun 18, 2021 at 04:57:40PM +0000, Tian, Kevin wrote:
> > > From: Jason Gunthorpe <jgg@nvidia.com>
> > > Sent: Friday, June 18, 2021 8:20 AM
> > >
> > > On Thu, Jun 17, 2021 at 03:14:52PM -0600, Alex Williamson wrote:
> > >
> > > > I've referred to this as a limitation of type1, that we can't put
> > > > devices within the same group into different address spaces, such as
> > > > behind separate vRoot-Ports in a vIOMMU config, but really, who cares?
> > > > As isolation support improves we see fewer multi-device groups, this
> > > > scenario becomes the exception.  Buy better hardware to use the
> devices
> > > > independently.
> > >
> > > This is basically my thinking too, but my conclusion is that we should
> > > not continue to make groups central to the API.
> > >
> > > As I've explained to David this is actually causing functional
> > > problems and mess - and I don't see a clean way to keep groups central
> > > but still have the device in control of what is happening. We need
> > > this device <-> iommu connection to be direct to robustly model all
> > > the things that are in the RFC.
> > >
> > > To keep groups central someone needs to sketch out how to solve
> > > today's mdev SW page table and mdev PASID issues in a clean
> > > way. Device centric is my suggestion on how to make it clean, but I
> > > haven't heard an alternative??
> > >
> > > So, I view the purpose of this discussion to scope out what a
> > > device-centric world looks like and then if we can securely fit in the
> > > legacy non-isolated world on top of that clean future oriented
> > > API. Then decide if it is work worth doing or not.
> > >
> > > To my mind it looks like it is not so bad, granted not every detail is
> > > clear, and no code has be sketched, but I don't see a big scary
> > > blocker emerging. An extra ioctl or two, some special logic that
> > > activates for >1 device groups that looks a lot like VFIO's current
> > > logic..
> > >
> > > At some level I would be perfectly fine if we made the group FD part
> > > of the API for >1 device groups - except that complexifies every user
> > > space implementation to deal with that. It doesn't feel like a good
> > > trade off.
> > >
> >
> > Would it be an acceptable tradeoff by leaving >1 device groups
> > supported only via legacy VFIO (which is anyway kept for backward
> > compatibility), if we think such scenario is being deprecated over
> > time (thus little value to add new features on it)? Then all new
> > sub-systems including vdpa and new vfio only support singleton
> > device group via /dev/iommu...
> 
> That might just be a great idea - userspace has to support those APIs
> anyhow, if it can be made trivially obvious to use this fallback even
> though /dev/iommu is available it is a great place to start. It also
> means PASID/etc are naturally blocked off.
> 
> Maybe years down the road we will want to harmonize them, so I would
> still sketch it out enough to be confident it could be implemented..
> 

First let's align on the high-level goal of supporting multi-device groups
via the IOMMU fd. Based on previous discussions I feel it's fair to say that
we will not provide new features beyond what the vfio group delivers today,
which implies:

1) All devices within the group must share the same address space.

        Though it's possible to support multiple address spaces in some cases
        (e.g. when the grouping is caused only by lack of ACS), other scenarios
        (DMA aliasing, RID sharing, etc.) make a single address space mandatory.
        Given that isolation support keeps improving over time, the effort to
        support multiple spaces is not worthwhile.

2) It's not necessary to bind all devices within the group to the IOMMU fd.

        The other devices could be left unused, or bound to a known driver which
        doesn't do DMA. This implies a group viability mechanism must be in
        place which can identify when the group is viable for operation and
        BUG_ON() when that viability is broken by user action.

3) The user must be denied access to a device before its group is attached
     to a known security context.

If the above goals are agreed, below is the updated proposal for supporting
multi-device groups via a device-centric API. Most ideas come from Jason;
here I try to expand and compose them into a full picture.

In general:

-   vfio keeps existing uAPI sequence, with slightly different semantics:

        a) VFIO_GROUP_SET_CONTAINER, as today

        b) VFIO_SET_IOMMU with a new iommu type (VFIO_EXTERNAL_IOMMU)
             which, once set, tells VFIO not to establish its own security
             context.

        c)  VFIO_GROUP_GET_DEVICE_FD_NEW, carrying additional info
             about the external iommu driver (iommu_fd, device_cookie). This
             call automatically binds the device to iommu_fd. The device fd is
             returned to the user only after successful binding, which implies
             a security context (BLOCK_DMA) has been established for the
             entire group. Since the security context is managed by iommu_fd,
             the group viability check should be done in the iommu layer, making
             the vfio_group_viable() mechanism redundant in this case. A rough
             userspace sketch of this sequence follows below.
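
To make the intended flow concrete, below is a minimal userspace sketch
of the above sequence. Everything in it is illustrative only: the
VFIO_EXTERNAL_IOMMU type, the VFIO_GROUP_GET_DEVICE_FD_NEW ioctl and the
struct layout are names proposed in this mail, not existing uAPI.

	/* illustrative sketch, not existing uAPI */
	struct vfio_device_bind_iommu_fd {	/* hypothetical layout */
		__u32	argsz;
		__u32	flags;
		__s32	iommu_fd;	/* fd of /dev/iommu */
		__u64	device_cookie;	/* user cookie for faults/errors */
		char	name[64];	/* e.g. "0000:06:0d.0" */
	};

	int container = open("/dev/vfio/vfio", O_RDWR);
	int group = open("/dev/vfio/26", O_RDWR);
	int iommu_fd = open("/dev/iommu", O_RDWR);
	struct vfio_device_bind_iommu_fd bind = {
		.argsz = sizeof(bind),
		.iommu_fd = iommu_fd,
		.device_cookie = 0x1,
		.name = "0000:06:0d.0",
	};

	/* a) same as today */
	ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);

	/* b) VFIO skips creating its own security context */
	ioctl(container, VFIO_SET_IOMMU, VFIO_EXTERNAL_IOMMU);

	/* c) bind to iommu_fd; the device fd comes back only after the
	 * whole group has been put into BLOCK_DMA
	 */
	int device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD_NEW, &bind);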

-   When receiving the binding call for the 1st device in a group, iommu_fd
    calls iommu_group_set_block_dma(group, dev->driver), which does
    several things (a kernel-side sketch follows the sub-items below):

        a) Check group viability. A group is viable only when all devices in
            the group are in one of the below states:

                * driver-less
                * bound to the same driver as dev->driver (vfio in this case)
                * bound to an otherwise allowed driver (same list as in vfio)

        b) Set block_dma flag for the group and configure the IOMMU to block
            DMA for all devices in this group. This could be done by attaching to
            a dedicated iommu domain (IOMMU_DOMAIN_BLOCKED) which has
            an empty page table.

        c) The iommu layer also verifies group viability on the
            BUS_NOTIFY_BOUND_DRIVER event. BUG_ON() if viability is broken
            while block_dma is set.
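
A rough kernel-side sketch of what this helper might look like. The helper,
the block_dma flag, IOMMU_DOMAIN_BLOCKED and the internal names used below
are all new/hypothetical and shown only to illustrate the flow:

	int iommu_group_set_block_dma(struct iommu_group *group,
				      struct device_driver *drv)
	{
		int ret;

		mutex_lock(&group->mutex);

		/* a) viable == every device is driver-less, bound to drv
		 * (vfio), or bound to an otherwise allowed driver
		 */
		ret = __iommu_group_viable(group, drv);
		if (ret)
			goto out;

		/* b) attach the whole group to an empty-page-table domain */
		group->blocked_domain = __iommu_domain_alloc(group->bus,
							IOMMU_DOMAIN_BLOCKED);
		if (!group->blocked_domain) {
			ret = -ENOMEM;
			goto out;
		}
		ret = __iommu_attach_group(group->blocked_domain, group);
		if (!ret)
			group->block_dma = true;

		/* c) a BUS_NOTIFY_BOUND_DRIVER notifier then re-checks
		 * viability and BUG_ON()s if it breaks while block_dma is set
		 */
	out:
		mutex_unlock(&group->mutex);
		return ret;
	}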

-   Binding other devices in the group to iommu_fd just succeeds since 
    the group is already in block_dma.

-   When a group is in the block_dma state, all devices in the group (even
    those not bound to iommu_fd) switch together between the blocked domain
    and the IOASID domain, initiated by attaching to or detaching from an
    IOASID (see the sketch after these sub-items):

        a) iommu_fd verifies that all bound devices in the same group are
            attached to a single IOASID.

        b) the 1st device attach in the group calls the iommu API to move the
             entire group to the new IOASID domain.

        c) the last device detach calls the iommu API to move the entire group
            back to the blocked domain.
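
As the sketch referenced above, this is roughly how iommu_fd could track
the whole-group switch. The iommufd_* structures and counters are made up
for illustration; iommu_attach_group()/iommu_detach_group() are the
existing iommu APIs:

	int iommufd_device_attach_ioasid(struct iommufd_device *idev,
					 struct iommufd_ioasid *ioasid)
	{
		struct iommufd_group *igroup = idev->igroup;
		int ret;

		/* a) all bound devices of a group must use one IOASID */
		if (igroup->ioasid && igroup->ioasid != ioasid)
			return -EBUSY;

		/* b) the 1st attach moves the whole group from the blocked
		 * domain to the IOASID domain
		 */
		if (!igroup->attach_cnt) {
			iommu_detach_group(igroup->blocked_domain,
					   igroup->group);
			ret = iommu_attach_group(ioasid->domain,
						 igroup->group);
			if (ret) {
				/* fall back to the blocked domain */
				iommu_attach_group(igroup->blocked_domain,
						   igroup->group);
				return ret;
			}
			igroup->ioasid = ioasid;
		}
		igroup->attach_cnt++;
		return 0;
	}

	void iommufd_device_detach_ioasid(struct iommufd_device *idev)
	{
		struct iommufd_group *igroup = idev->igroup;

		/* c) the last detach moves the group back to blocked */
		if (!--igroup->attach_cnt) {
			iommu_detach_group(igroup->ioasid->domain,
					   igroup->group);
			iommu_attach_group(igroup->blocked_domain,
					   igroup->group);
			igroup->ioasid = NULL;
		}
	}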

-   A device is allowed to be unbound from iommu_fd while other devices
    in the group are still bound. In this case the group remains in the
    block_dma state, thus the unbound device must not be bound to another
    driver which could break the group viability.

         a) for vfio this unbinding is done automatically when the device fd
             is closed.

-   When vfio requests to unbind the last device in the group, iommu_fd
    calls iommu_group_unset_block_dma(group) to move the group out
    of the block_dma state. Devices in the group are re-attached to the
    default domain from then on.
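
A matching sketch of the unbind side, with the same caveat that all names
are illustrative:

	void iommufd_device_unbind(struct iommufd_device *idev)
	{
		struct iommufd_group *igroup = idev->igroup;

		/* must already be detached from its IOASID; for vfio this
		 * runs automatically when the device fd is closed
		 */
		WARN_ON(idev->attached);

		if (!--igroup->bound_cnt) {
			/* the last bound device is gone: leave block_dma and
			 * re-attach the group to the default domain
			 */
			iommu_group_unset_block_dma(igroup->group);
		}
	}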

With this design all the helper functions and uAPI are kept device-centric
in iommu_fd. It maintains minimal group knowledge internally by tracking
device binding/attaching status within each group and calling the proper
iommu API when the group status changes.

VFIO still keeps its container/group/device semantics for backward
compatibility.

A new subsystem can completely eliminate group semantics as long as
it can find a way to finish device binding before granting the user access
to the device.

Thanks
Kevin

