From: Jerome Glisse <jglisse@redhat.com>
To: "Tian, Kevin" <kevin.tian@intel.com>
Cc: Kenneth Lee <liguozhu@hisilicon.com>,
	Kenneth Lee <nek.in.cn@gmail.com>,
	Herbert Xu <herbert@gondor.apana.org.au>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Jonathan Corbet <corbet@lwn.net>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Zaibo Xu <xuzaibo@huawei.com>,
	"linux-doc@vger.kernel.org" <linux-doc@vger.kernel.org>,
	"Kumar, Sanjay K" <sanjay.k.kumar@intel.com>,
	Hao Fang <fanghao11@huawei.com>,
	"iommu@lists.linux-foundation.org"
	<iommu@lists.linux-foundation.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linuxarm@huawei.com" <linuxarm@huawei.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	"linux-crypto@vger.kernel.org" <linux-crypto@vger.kernel.org>,
	Zhou Wang <wangzhou1@hisilicon.com>,
	Philippe Ombredanne <pombredanne@nexb.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Lu Baolu <baolu.lu@linux.intel.com>,
	"David S . Miller" <davem@davemloft.net>,
	"linux-accelerators@lists.ozlabs.org"
	<linux-accelerators@lists.ozlabs.org>,
	Joerg Roedel <joro@8bytes.org>
Subject: Re: [RFCv2 PATCH 0/7] A General Accelerator Framework, WarpDrive
Date: Fri, 14 Sep 2018 10:13:43 -0400	[thread overview]
Message-ID: <20180914141342.GB3826@redhat.com> (raw)
In-Reply-To: <AADFC41AFE54684AB9EE6CBC0274A5D191303A7F@SHSMSX101.ccr.corp.intel.com>

On Fri, Sep 14, 2018 at 06:50:55AM +0000, Tian, Kevin wrote:
> > From: Jerome Glisse
> > Sent: Thursday, September 13, 2018 10:52 PM
> >
> [...]
> > AFAIK, on x86 and PPC at least, all PCIe devices are in the same group
> > by default at boot or at least all devices behind the same bridge.
> 
> the group layout reflects a physical hierarchy limitation; it does
> not change across boots. Please note an iommu group defines the
> minimal isolation boundary - all devices within the same group must
> be attached to the same iommu domain or address space, because the
> IOMMU physically cannot differentiate DMAs coming out of those
> devices. Devices behind a legacy PCI-X bridge are one example. Other
> examples include devices behind a PCIe switch port which doesn't
> support ACS and thus cannot route p2p transactions to the IOMMU. If
> talking about typical PCIe endpoints (with all upstream ports
> supporting ACS), you'll get one device per group.
> 
> One iommu group today is attached to only one iommu domain.
> In the future one group may attach to multiple domains, per the
> aux domain concept being discussed in another thread.

Thanks for the info.
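
(For anyone following along, the grouping is visible from userspace
through sysfs. Below is a minimal C sketch, assuming a purely
illustrative PCI address of 0000:01:00.0; it just resolves the
device's iommu_group symlink to find the group number.)

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/*
 * Print the IOMMU group a PCI device belongs to by resolving the
 * /sys/bus/pci/devices/<bdf>/iommu_group symlink. The device address
 * below is only an example; substitute your own.
 */
int main(void)
{
	const char *path = "/sys/bus/pci/devices/0000:01:00.0/iommu_group";
	char target[256];
	ssize_t len = readlink(path, target, sizeof(target) - 1);

	if (len < 0) {
		perror("readlink");
		return 1;
	}
	target[len] = '\0';
	/* The link ends in the group number, e.g. .../iommu_groups/7 */
	printf("iommu group: %s\n", strrchr(target, '/') + 1);
	return 0;
}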

> 
> > 
> > Maybe there are kernel options to avoid that, and the userspace init
> > program can definitely re-arrange that based on sysadmin policy.
> 
> I don't think there is such an option, as it may break the isolation
> model enabled by the IOMMU.
> 
> [...]
> > > > That is why I am being pedantic :) about making sure there are good
> > > > reasons to do what you do inside VFIO. I do believe that we want a
> > > > common framework like the one you are proposing, but I do not believe
> > > > it should be part of VFIO given the baggage it comes with that is not
> > > > relevant to the use cases for this kind of device.
> > >
> 
> The purpose of VFIO is clear - it is the kernel portal for granting
> generic device resources (mmio, irq, etc.) to user space. VFIO doesn't
> care what exactly a resource is used for (queue, cmd reg, etc.). If
> pursuing the VFIO path is really necessary, maybe such a common
> framework should live in user space, where it gets all granted
> resources from the kernel driver through VFIO and then provides
> accelerator services to other processes?

Except that many existing device drivers fall under that description
(i.e. exposing mmio, command queues, ...) and are not under VFIO.
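
(As a reference point, the canonical VFIO type1 userspace flow for
getting at those granted resources looks roughly like the sketch
below. The group number and device address are invented placeholders
and error handling is omitted; this is a sketch, not working code for
any particular device.)

#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

/*
 * Rough sketch of the classic VFIO flow: container + group + device
 * fd, then mmap a BAR. "/dev/vfio/7" and "0000:01:00.0" are
 * placeholders; real code must check every return value.
 */
int main(void)
{
	int container = open("/dev/vfio/vfio", O_RDWR);
	int group = open("/dev/vfio/7", O_RDWR);
	int device;
	struct vfio_region_info reg = { .argsz = sizeof(reg), .index = 0 };
	void *bar0;

	ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
	ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);
	device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:01:00.0");
	ioctl(device, VFIO_DEVICE_GET_REGION_INFO, &reg);
	/* Map BAR0 into this process; the kernel only granted the resource. */
	bar0 = mmap(NULL, reg.size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    device, reg.offset);
	(void)bar0;
	return 0;
}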

Up to mdev, VFIO was all about handing a full device to userspace AFAIK.
With the introduction of mdev a host kernel driver can "slice" its
device and share it through VFIO to userspace. Note that in that case
it might never hand over any mmio, irq, ...; the host driver might just
be handing over memory and would poll it to schedule work on the real
hardware.
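
(For completeness, such a "slice" is typically instantiated by writing
a UUID into the chosen mdev type's create node in sysfs; the resulting
mdev then shows up with its own iommu group and is opened through VFIO
like any other device. A minimal sketch, with the parent device and
type name invented purely for illustration:)

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/*
 * Create an mdev instance by writing a UUID into a type's "create"
 * attribute. Parent device (0000:01:00.0) and type name (vendor-type-1)
 * are illustrative placeholders; the real types are listed under the
 * parent device's mdev_supported_types/ directory.
 */
int main(void)
{
	const char *path = "/sys/bus/pci/devices/0000:01:00.0/"
			   "mdev_supported_types/vendor-type-1/create";
	const char *uuid = "83b8f4f2-509f-382f-3c1e-e6bfe0fa1001";
	int fd = open(path, O_WRONLY);

	if (fd < 0)
		return 1;
	if (write(fd, uuid, strlen(uuid)) < 0)
		return 1;
	close(fd);
	/* The new device appears at /sys/bus/mdev/devices/<uuid> and can
	 * be opened through VFIO via its own iommu group. */
	return 0;
}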


The question I am asking about warpdrive is whether being in VFIO is
necessary, as I do not see the requirement myself.

Cheers,
Jérôme
