From: Jerome Glisse <jglisse@redhat.com>
To: Kenneth Lee <liguozhu@hisilicon.com>
Cc: "Tian, Kevin" <kevin.tian@intel.com>,
	Herbert Xu <herbert@gondor.apana.org.au>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Jonathan Corbet <corbet@lwn.net>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Zaibo Xu <xuzaibo@huawei.com>,
	"linux-doc@vger.kernel.org" <linux-doc@vger.kernel.org>,
	"Kumar, Sanjay K" <sanjay.k.kumar@intel.com>,
	Hao Fang <fanghao11@huawei.com>,
	"iommu@lists.linux-foundation.org"
	<iommu@lists.linux-foundation.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linuxarm@huawei.com" <linuxarm@huawei.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	"linux-crypto@vger.kernel.org" <linux-crypto@vger.kernel.org>,
	Philippe Ombredanne <pombredanne@nexb.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Kenneth Lee <nek.in.cn@gmail.com>,
	"David S . Miller" <davem@davemloft.net>,
	"linux-accelerators@lists.ozlabs.org"
	<linux-accelerators@lists.ozlabs.org>
Subject: Re: [RFC PATCH 0/7] A General Accelerator Framework, WarpDrive
Date: Fri, 3 Aug 2018 10:39:44 -0400	[thread overview]
Message-ID: <20180803143944.GA4079@redhat.com> (raw)
In-Reply-To: <20180803034721.GC91035@Turing-Arch-b>

On Fri, Aug 03, 2018 at 11:47:21AM +0800, Kenneth Lee wrote:
> On Thu, Aug 02, 2018 at 10:22:43AM -0400, Jerome Glisse wrote:
> > Date: Thu, 2 Aug 2018 10:22:43 -0400
> > From: Jerome Glisse <jglisse@redhat.com>
> > To: Kenneth Lee <liguozhu@hisilicon.com>
> > CC: "Tian, Kevin" <kevin.tian@intel.com>, Hao Fang <fanghao11@huawei.com>,
> >  Alex Williamson <alex.williamson@redhat.com>, Herbert Xu
> >  <herbert@gondor.apana.org.au>, "kvm@vger.kernel.org"
> >  <kvm@vger.kernel.org>, Jonathan Corbet <corbet@lwn.net>, Greg
> >  Kroah-Hartman <gregkh@linuxfoundation.org>, Zaibo Xu <xuzaibo@huawei.com>,
> >  "linux-doc@vger.kernel.org" <linux-doc@vger.kernel.org>, "Kumar, Sanjay K"
> >  <sanjay.k.kumar@intel.com>, Kenneth Lee <nek.in.cn@gmail.com>,
> >  "iommu@lists.linux-foundation.org" <iommu@lists.linux-foundation.org>,
> >  "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
> >  "linuxarm@huawei.com" <linuxarm@huawei.com>,
> >  "linux-crypto@vger.kernel.org" <linux-crypto@vger.kernel.org>, Philippe
> >  Ombredanne <pombredanne@nexb.com>, Thomas Gleixner <tglx@linutronix.de>,
> >  "David S . Miller" <davem@davemloft.net>,
> >  "linux-accelerators@lists.ozlabs.org"
> >  <linux-accelerators@lists.ozlabs.org>
> > Subject: Re: [RFC PATCH 0/7] A General Accelerator Framework, WarpDrive
> > User-Agent: Mutt/1.10.0 (2018-05-17)
> > Message-ID: <20180802142243.GA3481@redhat.com>
> > 
> > On Thu, Aug 02, 2018 at 12:05:57PM +0800, Kenneth Lee wrote:
> > > On Thu, Aug 02, 2018 at 02:33:12AM +0000, Tian, Kevin wrote:
> > > > Date: Thu, 2 Aug 2018 02:33:12 +0000
> > > > > From: Jerome Glisse
> > > > > On Wed, Aug 01, 2018 at 06:22:14PM +0800, Kenneth Lee wrote:
> > > > > > From: Kenneth Lee <liguozhu@hisilicon.com>
> > > > > >
> > > > > > WarpDrive is an accelerator framework that exposes hardware
> > > > > > capabilities directly to user space. It makes use of the existing
> > > > > > vfio and vfio-mdev facilities, so a user application can send
> > > > > > requests and DMA to the hardware without interacting with the
> > > > > > kernel. This removes the latency of syscalls and context switches.
> > > > > >
> > > > > > The patchset contains documentation with the details; please refer
> > > > > > to it for more information.
> > > > > >
> > > > > > This patchset is intended to be used with Jean-Philippe Brucker's
> > > > > > SVA patches [1] (which are also at the RFC stage), but they are
> > > > > > not mandatory. This patchset has been tested on the latest
> > > > > > mainline kernel without the SVA patches, in which case it supports
> > > > > > only one process per accelerator.
> > > > > >
> > > > > > With SVA support, WarpDrive can support multiple processes on the
> > > > > > same accelerator device. We tested it on our SoC-integrated
> > > > > > accelerator (board ID: D06, chip ID: HIP08). A reference work tree
> > > > > > can be found here: [2].
> > > > > 
> > > > > I have not fully inspected things, nor do I know enough about
> > > > > this HiSilicon ZIP accelerator to be sure, but from glancing at
> > > > > the code it seems unsafe to use even with SVA because of the
> > > > > doorbell. There is a comment about safety in patch 7.
> > > > > 
> > > > > Exposing things to userspace is always enticing, but if it is a
> > > > > security risk then it should clearly say so, and maybe a kernel
> > > > > boot flag should be required to allow such a device to be used.
> > > > > 
> > > 
> > > But the doorbell is just a notification. Except for DoS (keeping the
> > > hardware busy), it cannot actually take or change anything in kernel
> > > space. And the DoS problem can always be seen as the general problem
> > > of a group of processes sharing the same kernel entity.
> > > 
> > > In the coming HIP09 hardware, the doorbell will carry a random number,
> > > so only the process that allocated the queue can ring it correctly.
> > 
> > When the doorbell is rung, does the hardware start fetching commands
> > from the queue and executing them? If so, then a rogue process B might
> > ring the doorbell of process A, which would start execution of random
> > commands (i.e. whatever values happen to be left in the command buffer
> > memory; old commands, I would guess).
> > 
> > If that is not how this doorbell works, then yes, it can only cause a
> > denial of service, I guess. The issue I have with doorbells is that I
> > have seen ten different implementations in ten different pieces of
> > hardware, and each differs in what ringing the doorbell, or the value
> > written to it, actually does. It is painful to track what is what for
> > each device.
> > 
> 
> In our implementation, the doorbell is simply a notification, like an
> interrupt to the accelerator. The commands are entirely defined by what
> is in the queue.
> 
> I agree that there is no simple, standard way to track the shared I/O
> space. But I think we have to trust the driver to some extent: if the
> driver is malicious, even a simple ioctl can become an attack.

Trusting a kernel-space driver is fine; trusting a user-space driver is
not, in my view. AFAICT every driver developer so far has made sure
that no one could abuse their device to do harmful things to another
process.
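
To make the model in this exchange concrete, here is a minimal
user-space sketch of a queue-plus-doorbell submission path. The queue
layout, the 64-entry ring, and the doorbell semantics are all
hypothetical (not the HiSilicon ones); the sketch only illustrates the
point that the doorbell itself carries no command data:

#include <stdint.h>

/* Hypothetical layout: each process mmap()s its own command ring and
 * a doorbell register from its queue fd. */
struct cmd {
    uint32_t opcode;
    uint64_t src, dst, len;
};

struct queue {
    struct cmd ring[64];
    uint32_t tail;              /* producer index, owned by userspace */
};

static void submit(struct queue *q, volatile uint32_t *doorbell,
                   const struct cmd *c)
{
    q->ring[q->tail % 64] = *c; /* 1. the command lives in the queue   */
    __sync_synchronize();       /* 2. order the store before the ring  */
    q->tail++;
    *doorbell = q->tail;        /* 3. the doorbell only says "go look" */
}

In this model a rogue write to the doorbell can at worst make the
device re-read a queue, which matches the denial-of-service reading
above; a per-queue random token, as described for HIP09, would
additionally let the hardware ignore rings that do not present the
right value.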


> > > > > My more general question is: do we want to grow VFIO into a
> > > > > more generic device driver API? This patchset adds a command
> > > > > queue concept to it (I don't think that exists today, but I
> > > > > have not followed VFIO closely).
> > > > > 
> > > 
> > > The thing is, VFIO is the only place that supports DMA from user land.
> > > If we don't put this there, we have to create another, similar facility
> > > to support the same thing.
> > 
> > No it is not: network devices, GPUs, block devices ... they all
> > support DMA. The point I am trying to make here is that even in
> 
> Sorry, wait a minute, are we talking about the same thing? I meant "DMA
> from user land", not "DMA from the kernel driver". To do that we have to
> manipulate the IOMMU (unit). I think it can only be done through the
> default_domain or a vfio domain; otherwise, user space would have to
> access the IOMMU directly.
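
For reference, the vfio path mentioned here maps user memory into a
device's IOMMU domain through the standard type1 uAPI; a minimal
sketch (container and group setup omitted):

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Pin a user buffer and map it at IOVA 0 for device DMA, via an
 * already configured VFIO container fd. */
static int map_for_dma(int container, void *buf, uint64_t size)
{
    struct vfio_iommu_type1_dma_map map;

    memset(&map, 0, sizeof(map));
    map.argsz = sizeof(map);
    map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
    map.vaddr = (uintptr_t)buf;
    map.iova  = 0;
    map.size  = size;
    return ioctl(container, VFIO_IOMMU_MAP_DMA, &map);
}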

GPUs do DMA in the sense that you pass the kernel a valid virtual
address (the kernel driver does all the proper checks) and then you
can use the GPU to copy from or to that range of virtual addresses,
which is exactly how you want to use this compression engine. It does
not rely on SVM, but SVM going forward would still be the preferred
option.
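
A rough sketch of the kernel-side check being described; the helper
function is invented, but the pinning pattern (get_user_pages_fast()
before programming the engine) is the standard one:

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/slab.h>

/* Hypothetical helper: validate and pin a user virtual range before
 * handing it to the copy/compression engine, so userspace never
 * touches the IOMMU itself.  Returns the page count or -errno. */
static long dev_pin_user_range(unsigned long uaddr, unsigned long len,
                               struct page ***pagesp)
{
    int npages = DIV_ROUND_UP(len, PAGE_SIZE);
    struct page **pages;
    long n;

    pages = kvmalloc_array(npages, sizeof(*pages), GFP_KERNEL);
    if (!pages)
        return -ENOMEM;
    n = get_user_pages_fast(uaddr, npages, 1, pages); /* pin + check */
    if (n < npages) {                 /* the range was not all valid */
        while (n > 0)
            put_page(pages[--n]);
        kvfree(pages);
        return -EFAULT;
    }
    *pagesp = pages;
    return npages;
}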


> > your mechanism the userspace side must have a specific userspace
> > driver for each piece of hardware, and thus there is virtually no
> > difference between having this userspace driver open a device file
> > in vfio or somewhere else in the device filesystem. It is just a
> > different path.
> > 
> 
> The basic problem WarpDrive wants to solve is avoiding syscalls, which
> is important for accelerators. We have some data here:
> https://www.slideshare.net/linaroorg/progress-and-demonstration-of-wrapdrive-a-accelerator-framework-sfo17317
> 
> (see page 3)
> 
> The performance differs between the kernel and user drivers.

Yes, and the example I point to is exactly that. You have a one-time
setup cost (creating the command buffer, binding the PASID to the
command buffer, and a couple of other setup steps). After that,
userspace no longer has to do any ioctl to schedule work on the GPU.
It is all done from userspace, using a doorbell to notify the hardware
when it should go look at the command buffer for new things to execute.

My point stands: you have existing drivers already doing this with no
new framework, and in your scheme you still need a userspace driver.
So I do not see the added value; using one path or the other in the
userspace driver is literally a one-line change.
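
The pattern described here, sketched with the struct cmd, struct queue,
and submit() from the earlier example; the device node, ioctl number,
and mmap offsets below are invented:

#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

struct qinfo { uint64_t queue_off, db_off; };  /* hypothetical ABI */
#define ACCEL_CREATE_QUEUE 0xc0104100          /* made-up ioctl nr */

int main(void)
{
    struct qinfo qi;
    int fd = open("/dev/accel0", O_RDWR);      /* hypothetical node */

    /* One-time setup cost: create the queue, bind the PASID, map
     * the ring and the doorbell page (error handling omitted). */
    ioctl(fd, ACCEL_CREATE_QUEUE, &qi);
    struct queue *q = mmap(NULL, sizeof(*q), PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, qi.queue_off);
    volatile uint32_t *db = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                 MAP_SHARED, fd, qi.db_off);

    /* Hot path: from here on, scheduling work needs no syscall. */
    struct cmd c = { .opcode = 1, .len = 4096 };
    submit(q, db, &c);
    return 0;
}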


> And we also believe the hardware interface can become standard after
> some time. Some companies have started to work toward this (such as
> ARM's Revere). But before that, we should have a software channel for it.

I hope it does, but right now every single piece of hardware needs a
specific driver (I am ignoring backward-compatible hardware evolution,
which does exist).

Even if down the road you can use the same driver for every class of
hardware, I am not sure what the added value is of doing it inside VFIO
versus as a class of device driver (like USB, PCIe, or DRM aka GPU),
i.e. you would have a compression class (/dev/compress/*), an
encryption one, and so on.
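
What the class-driver alternative might look like at its simplest;
everything here (the names, the bare fops) is a hypothetical skeleton,
with DRM as the obvious precedent for the shared uAPI contract:

#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <linux/module.h>

/* Hypothetical skeleton of a "compression class" character device.
 * A real class would define one ioctl/mmap contract shared by all
 * compression hardware, the way DRM does for GPUs. */
static int compress_open(struct inode *inode, struct file *file)
{
    return 0;            /* allocate a per-process queue here */
}

static const struct file_operations compress_fops = {
    .owner = THIS_MODULE,
    .open  = compress_open,
};

static struct miscdevice compress_misc = {
    .minor = MISC_DYNAMIC_MINOR,
    .name  = "compress0",        /* shows up as /dev/compress0 */
    .fops  = &compress_fops,
};
module_misc_device(compress_misc);

MODULE_LICENSE("GPL");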


> > So this is why I do not see any benefit to having all drivers with
> > SVM (and can we please use SVM and not SVA, as SVM is what has been
> > used in more places so far).
> > 
> 
> Personally, we don't care which name is used. I used SVM when I started
> this work, and then Jean said SVM had already been used by AMD for
> Secure Virtual Machine, so he called it SVA. And now... who should I
> follow? :)

I think Intel calls it SVM too; I do not have any strong preference
besides having only one name to remember :)

Cheers,
Jérôme
