* [LSF/MM ATTEND] HMM (heterogeneous memory manager) and GPU
@ 2016-01-28 17:55 Jerome Glisse
  2016-01-29  9:50 ` Kirill A. Shutemov
                   ` (2 more replies)
  0 siblings, 3 replies; 17+ messages in thread
From: Jerome Glisse @ 2016-01-28 17:55 UTC (permalink / raw)
  To: lsf-pc, linux-mm

Hi,

I would like to attend LSF/MM this year to discuss HMM
(Heterogeneous Memory Manager) and, more generally, all topics
related to GPUs and heterogeneous memory architectures (including
persistent memory).

I want to discuss how to move forward with merging HMM, and I
hope that by MM summit time I will be able to share more
information publicly about the devices which rely on HMM.

Jerome Glisse


* Re: [LSF/MM ATTEND] HMM (heterogeneous memory manager) and GPU
  2016-01-28 17:55 [LSF/MM ATTEND] HMM (heterogeneous memory manager) and GPU Jerome Glisse
@ 2016-01-29  9:50 ` Kirill A. Shutemov
  2016-01-29 13:35   ` Jerome Glisse
  2016-02-01 15:46 ` Aneesh Kumar K.V
  2016-02-03  0:40 ` David Woodhouse
  2 siblings, 1 reply; 17+ messages in thread
From: Kirill A. Shutemov @ 2016-01-29  9:50 UTC (permalink / raw)
  To: Jerome Glisse; +Cc: lsf-pc, linux-mm

On Thu, Jan 28, 2016 at 06:55:37PM +0100, Jerome Glisse wrote:
> Hi,
> 
> I would like to attend LSF/MM this year to discuss HMM
> (Heterogeneous Memory Manager) and, more generally, all topics
> related to GPUs and heterogeneous memory architectures (including
> persistent memory).

How is persistent memory heterogeneous?

I thought it is either in the same cache coherency domain (the DAX case)
or not memory as far as the kernel is concerned -- hidden behind the
block layer. Do we have yet another option?

-- 
 Kirill A. Shutemov


* Re: [LSF/MM ATTEND] HMM (heterogeneous memory manager) and GPU
  2016-01-29  9:50 ` Kirill A. Shutemov
@ 2016-01-29 13:35   ` Jerome Glisse
  0 siblings, 0 replies; 17+ messages in thread
From: Jerome Glisse @ 2016-01-29 13:35 UTC (permalink / raw)
  To: Kirill A. Shutemov; +Cc: lsf-pc, linux-mm

On Fri, Jan 29, 2016 at 11:50:28AM +0200, Kirill A. Shutemov wrote:
> On Thu, Jan 28, 2016 at 06:55:37PM +0100, Jerome Glisse wrote:
> > Hi,
> > 
> > I would like to attend LSF/MM this year to discuss HMM
> > (Heterogeneous Memory Manager) and, more generally, all topics
> > related to GPUs and heterogeneous memory architectures (including
> > persistent memory).
> 
> How is persistent memory heterogeneous?
> 
> I thought it is either in the same cache coherency domain (the DAX case)
> or not memory as far as the kernel is concerned -- hidden behind the
> block layer. Do we have yet another option?


Right now it is not, but I am interested in the DMA mapping issue. From
what I have seen on roadmaps, we are going toward a world with a deeper
memory hierarchy: very fast cache-like memory near the CPU in the GB
range, regular memory like DDR, and slower persistent memory (or similar)
with enormous capacity. On top of this you have things like GPU memory
(which is my main topic of interest) and similar things like FPGAs. GPUs
are not going away: GPU bandwidth is in the TB/s range, and on GPU
roadmaps the gap with CPU memory bandwidth keeps getting bigger.

So I believe this memory hierarchy adds a layer of complexity on top of
NUMA. The technology is not ready yet, but it might be worth discussing,
to see whether there is anything to do on top of NUMA.

Also note that something like GPU memory can be either visible or
invisible from the CPU's point of view; moreover, it can be cache
coherent or not. The latter is only enabled through specific APIs where
the application is aware that it loses cache coherency with the CPU.

Cheers,
Jerome


* Re: [LSF/MM ATTEND] HMM (heterogeneous memory manager) and GPU
  2016-01-28 17:55 [LSF/MM ATTEND] HMM (heterogeneous memory manager) and GPU Jerome Glisse
  2016-01-29  9:50 ` Kirill A. Shutemov
@ 2016-02-01 15:46 ` Aneesh Kumar K.V
  2016-02-02 23:03   ` Jerome Glisse
  2016-02-03  0:40 ` David Woodhouse
  2 siblings, 1 reply; 17+ messages in thread
From: Aneesh Kumar K.V @ 2016-02-01 15:46 UTC (permalink / raw)
  To: Jerome Glisse, lsf-pc, linux-mm

Jerome Glisse <j.glisse@gmail.com> writes:

> Hi,
>
> I would like to attend LSF/MM this year to discuss HMM
> (Heterogeneous Memory Manager) and, more generally, all topics
> related to GPUs and heterogeneous memory architectures (including
> persistent memory).
>
> I want to discuss how to move forward with merging HMM, and I
> hope that by MM summit time I will be able to share more
> information publicly about the devices which rely on HMM.
>

As I mentioned in my request-to-attend mail, I would like to attend this
discussion. I am wondering whether we can split the series further into
the mmu_notifier bits and then the page table mirroring bits. Can the
mmu_notifier changes go in early so that we can merge the page table
mirroring later?

Can the page table mirroring bits be built as a kernel module?

-aneesh


* Re: [LSF/MM ATTEND] HMM (heterogeneous memory manager) and GPU
  2016-02-01 15:46 ` Aneesh Kumar K.V
@ 2016-02-02 23:03   ` Jerome Glisse
  0 siblings, 0 replies; 17+ messages in thread
From: Jerome Glisse @ 2016-02-02 23:03 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: lsf-pc, linux-mm

On Mon, Feb 01, 2016 at 09:16:02PM +0530, Aneesh Kumar K.V wrote:
> Jerome Glisse <j.glisse@gmail.com> writes:
> 
> > Hi,
> >
> > I would like to attend LSF/MM this year to discuss HMM
> > (Heterogeneous Memory Manager) and, more generally, all topics
> > related to GPUs and heterogeneous memory architectures (including
> > persistent memory).
> >
> > I want to discuss how to move forward with merging HMM, and I
> > hope that by MM summit time I will be able to share more
> > information publicly about the devices which rely on HMM.
> >
> 
> As I mentioned in my request-to-attend mail, I would like to attend this
> discussion. I am wondering whether we can split the series further into
> the mmu_notifier bits and then the page table mirroring bits. Can the
> mmu_notifier changes go in early so that we can merge the page table
> mirroring later?

Well, the mmu_notifier bits could go upstream on their own, but they
would not be useful by themselves. Maybe on the KVM side; I need to
investigate.


> Can the page table mirroring bits be built as a kernel module?

Well, I am not sure this is a good idea. Memory migration requires
hooking into the page fault code path, and it relies on the mirrored
page table to service faults on memory that has been migrated.
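
To make that concrete: the mirroring side listens for CPU page table
changes through an mmu_notifier. A minimal sketch of the hook-up
(illustrative only, not the actual HMM patches; "my_mirror" and its
invalidate helper are made-up names):

--------------
#include <linux/mmu_notifier.h>

struct my_mirror {			/* made-up driver-side state */
	struct mmu_notifier	mn;
	/* ... device page table state ... */
};

/* Whatever range the CPU page table invalidates must be shot down
 * in the device's mirrored page table as well. */
static void my_mirror_invalidate_range_start(struct mmu_notifier *mn,
					     struct mm_struct *mm,
					     unsigned long start,
					     unsigned long end)
{
	struct my_mirror *mirror = container_of(mn, struct my_mirror, mn);

	/* Hypothetical helper: clear device PTEs for [start, end). */
	my_mirror_invalidate(mirror, start, end);
}

static const struct mmu_notifier_ops my_mirror_mmu_notifier_ops = {
	.invalidate_range_start = my_mirror_invalidate_range_start,
};

/* At bind time: */
mirror->mn.ops = &my_mirror_mmu_notifier_ops;
mmu_notifier_register(&mirror->mn, current->mm);
-----------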

Jerome


* Re: [LSF/MM ATTEND] HMM (heterogeneous memory manager) and GPU
  2016-01-28 17:55 [LSF/MM ATTEND] HMM (heterogeneous memory manager) and GPU Jerome Glisse
  2016-01-29  9:50 ` Kirill A. Shutemov
  2016-02-01 15:46 ` Aneesh Kumar K.V
@ 2016-02-03  0:40 ` David Woodhouse
  2016-02-03  8:13   ` Oded Gabbay
  2016-02-25 13:49   ` Joerg Roedel
  2 siblings, 2 replies; 17+ messages in thread
From: David Woodhouse @ 2016-02-03  0:40 UTC (permalink / raw)
  To: Jerome Glisse, lsf-pc, linux-mm; +Cc: joro

On Thu, 2016-01-28 at 18:55 +0100, Jerome Glisse wrote:
> 
> I would like to attend LSF/MM this year to discuss HMM
> (Heterogeneous Memory Manager) and, more generally, all topics
> related to GPUs and heterogeneous memory architectures (including
> persistent memory).
> 
> I want to discuss how to move forward with merging HMM, and I
> hope that by MM summit time I will be able to share more
> information publicly about the devices which rely on HMM.

There are a few related issues here around Shared Virtual Memory, and
lifetime management of the associated MM, and the proposal discussed at
the Kernel Summit for "off-CPU tasks".

I've hit a situation with the Intel SVM code in 4.4 where the device
driver binds a PASID, and also has mmap() functionality on the same
file descriptor that the PASID is associated with.

So on process exit, the MM doesn't die because the PASID binding still
exists. The VMA of the mmap doesn't die because the MM still exists. So
the underlying file remains open because the VMA still exists. And the
PASID binding thus doesn't die because the file is still open.

I've posted a patch¹ which moves us closer to the amd_iommu_v2 model,
although I'm still *strongly* resisting the temptation to call out into
device driver code from the mmu_notifier's release callback.

I would like to attend LSF/MM this year so we can continue to work on
those issues — now that we actually have some hardware in the field and
a better idea of how we can build a unified access model for SVM across
the different IOMMU types.

-- 
David Woodhouse                            Open Source Technology Centre
David.Woodhouse@intel.com                              Intel Corporation


¹ http://www.spinics.net/lists/linux-mm/msg100230.html


* Re: [LSF/MM ATTEND] HMM (heterogeneous memory manager) and GPU
  2016-02-03  0:40 ` David Woodhouse
@ 2016-02-03  8:13   ` Oded Gabbay
  2016-02-03  8:40     ` David Woodhouse
  2016-02-25 13:49   ` Joerg Roedel
  1 sibling, 1 reply; 17+ messages in thread
From: Oded Gabbay @ 2016-02-03  8:13 UTC (permalink / raw)
  To: David Woodhouse; +Cc: Jerome Glisse, lsf-pc, linux-mm, Joerg Roedel

On Wed, Feb 3, 2016 at 2:40 AM, David Woodhouse <dwmw2@infradead.org> wrote:
> On Thu, 2016-01-28 at 18:55 +0100, Jerome Glisse wrote:
>>
>> I would like to attend LSF/MM this year to discuss HMM
>> (Heterogeneous Memory Manager) and, more generally, all topics
>> related to GPUs and heterogeneous memory architectures (including
>> persistent memory).
>>
>> I want to discuss how to move forward with merging HMM, and I
>> hope that by MM summit time I will be able to share more
>> information publicly about the devices which rely on HMM.
>
> There are a few related issues here around Shared Virtual Memory, and
> lifetime management of the associated MM, and the proposal discussed at
> the Kernel Summit for "off-CPU tasks".
>
> I've hit a situation with the Intel SVM code in 4.4 where the device
> driver binds a PASID, and also has mmap() functionality on the same
> file descriptor that the PASID is associated with.
>
> So on process exit, the MM doesn't die because the PASID binding still
> exists. The VMA of the mmap doesn't die because the MM still exists. So
> the underlying file remains open because the VMA still exists. And the
> PASID binding thus doesn't die because the file is still open.
>
Why connect the PASID to the FD in the first place?
Why not tie everything to the MM?

> I've posted a patch¹ which moves us closer to the amd_iommu_v2 model,
> although I'm still *strongly* resisting the temptation to call out into
> device driver code from the mmu_notifier's release callback.

You mean you are resisting doing this (taken from amdkfd):

--------------
static const struct mmu_notifier_ops kfd_process_mmu_notifier_ops = {
	.release = kfd_process_notifier_release,
};

process->mmu_notifier.ops = &kfd_process_mmu_notifier_ops;
-----------

Why, if I may ask?

Oded
>
> I would like to attend LSF/MM this year so we can continue to work on
> those issues — now that we actually have some hardware in the field and
> a better idea of how we can build a unified access model for SVM across
> the different IOMMU types.
>
> --
> David Woodhouse                            Open Source Technology Centre
> David.Woodhouse@intel.com                              Intel Corporation
>
>
> ¹ http://www.spinics.net/lists/linux-mm/msg100230.html


* Re: [LSF/MM ATTEND] HMM (heterogeneous memory manager) and GPU
  2016-02-03  8:13   ` Oded Gabbay
@ 2016-02-03  8:40     ` David Woodhouse
  2016-02-03  9:21       ` Oded Gabbay
  0 siblings, 1 reply; 17+ messages in thread
From: David Woodhouse @ 2016-02-03  8:40 UTC (permalink / raw)
  To: Oded Gabbay; +Cc: Jerome Glisse, lsf-pc, linux-mm, Joerg Roedel

On Wed, 2016-02-03 at 10:13 +0200, Oded Gabbay wrote:
> 
> > So on process exit, the MM doesn't die because the PASID binding still
> > exists. The VMA of the mmap doesn't die because the MM still exists. So
> > the underlying file remains open because the VMA still exists. And the
> > PASID binding thus doesn't die because the file is still open.
> >
> Why connect the PASID to the FD in the first place?
> Why not tie everything to the MM?

That's actually a question for the device driver in question, of
course; it's not the generic SVM support code which chooses *when* to
bind/unbind PASIDs. We just provide those functions for the driver to
call.

But the answer is that that's the normal resource tracking model.
Resources hang off the file and are cleared up when the file is closed.

(And exit_files() is called later than exit_mm()).
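
To illustrate that model with a sketch (the context struct and its
release handler are made-up names; intel_svm_unbind_mm() is the 4.4
Intel SVM API):

--------------
/* Everything allocated for this fd, including the PASID binding,
 * is torn down when the last reference to the file goes away. */
static int my_svm_release(struct inode *inode, struct file *file)
{
	struct my_svm_ctx *ctx = file->private_data;

	intel_svm_unbind_mm(ctx->dev, ctx->pasid);
	kfree(ctx);
	return 0;
}
-----------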

> > I've posted a patch¹ which moves us closer to the amd_iommu_v2 model,
> > although I'm still *strongly* resisting the temptation to call out into
> > device driver code from the mmu_notifier's release callback.
> 
> You mean you are resisting doing this (taken from amdkfd):
> 
> --------------
> static const struct mmu_notifier_ops kfd_process_mmu_notifier_ops = {
> 	.release = kfd_process_notifier_release,
> };
> 
> process->mmu_notifier.ops = &kfd_process_mmu_notifier_ops;
> -----------
> 
> Why, if I may ask?

The KISS principle, especially as it relates to device drivers.
We just Do Not Want random device drivers being called in that context.

It's OK for amdkfd where you have sufficient clue to deal with it —
it's more than "just a device driver".

But when we get discrete devices with PASID support (and the required
TLP prefix support in our root ports at last!) we're going to see SVM
supported in many more device drivers, and we should make it simple.

Having the mmu_notifier release callback exposed to drivers is going to
strongly encourage them to do the WRONG thing, because they need to
interact with their hardware and *wait* for the PASID to be entirely
retired through the pipeline before they tell the IOMMU to flush it.

The patch at http://www.spinics.net/lists/linux-mm/msg100230.html
addresses this by clearing the PASID from the PASID table (in core
IOMMU code) when the process exits so that all subsequent accesses to
that PASID then take faults. The device driver can then clean up its
binding for that PASID in its own time.

It is a fairly fundamental rule that faulting access to *one* PASID
should not adversely affect behaviour for *other* PASIDs, of course.

-- 
David Woodhouse                            Open Source Technology Centre
David.Woodhouse@intel.com                              Intel Corporation



* Re: [LSF/MM ATTEND] HMM (heterogeneous memory manager) and GPU
  2016-02-03  8:40     ` David Woodhouse
@ 2016-02-03  9:21       ` Oded Gabbay
  2016-02-03 10:15         ` David Woodhouse
  0 siblings, 1 reply; 17+ messages in thread
From: Oded Gabbay @ 2016-02-03  9:21 UTC (permalink / raw)
  To: David Woodhouse; +Cc: Jerome Glisse, lsf-pc, linux-mm, Joerg Roedel

On Wed, Feb 3, 2016 at 10:40 AM, David Woodhouse <dwmw2@infradead.org> wrote:
> On Wed, 2016-02-03 at 10:13 +0200, Oded Gabbay wrote:
>>
>> > So on process exit, the MM doesn't die because the PASID binding still
>> > exists. The VMA of the mmap doesn't die because the MM still exists. So
>> > the underlying file remains open because the VMA still exists. And the
>> > PASID binding thus doesn't die because the file is still open.
>> >
>> Why connect the PASID to the FD in the first place?
>> Why not tie everything to the MM?
>
> That's actually a question for the device driver in question, of
> course; it's not the generic SVM support code which chooses *when* to
> bind/unbind PASIDs. We just provide those functions for the driver to
> call.
>
> But the answer is that that's the normal resource tracking model.
> Resources hang off the file and are cleared up when the file is closed.
>
> (And exit_files() is called later than exit_mm()).
>
>> > I've posted a patch¹ which moves us closer to the amd_iommu_v2 model,
>> > although I'm still *strongly* resisting the temptation to call out into
>> > device driver code from the mmu_notifier's release callback.
>>
>> You mean you are resisting doing this (taken from amdkfd):
>>
>> --------------
>> static const struct mmu_notifier_ops kfd_process_mmu_notifier_ops = {
>> 	.release = kfd_process_notifier_release,
>> };
>>
>> process->mmu_notifier.ops = &kfd_process_mmu_notifier_ops;
>> -----------
>>
>> Why, if I may ask?
>
> The KISS principle, especially as it relates to device drivers.
> We just Do Not Want random device drivers being called in that context.
>
> It's OK for amdkfd where you have sufficient clue to deal with it —
> it's more than "just a device driver".
>
> But when we get discrete devices with PASID support (and the required
> TLP prefix support in our root ports at last!) we're going to see SVM
> supported in many more device drivers, and we should make it simple.
>
> Having the mmu_notifier release callback exposed to drivers is going to
> strongly encourage them to do the WRONG thing, because they need to
> interact with their hardware and *wait* for the PASID to be entirely
> retired through the pipeline before they tell the IOMMU to flush it.
>
> The patch at http://www.spinics.net/lists/linux-mm/msg100230.html
> addresses this by clearing the PASID from the PASID table (in core
> IOMMU code) when the process exits so that all subsequent accesses to
> that PASID then take faults. The device driver can then clean up its
> binding for that PASID in its own time.

OK, so I think I got a little confused, but looking at your code I
see that you register SVM with the mm notifier (intel_mm_release).
Therefore, I guess what you meant to say is that you don't want to call
a device driver callback from your mm notifier callback, correct? (Like
amd_iommu_v2 does when it calls dev_state->inv_ctx_cb inside its
mn_release.)

Because you can't really control what the device driver will do, i.e.
whether it decides to register itself with the mm notifier in its own
code.

And because you don't call the device driver, the driver can/will get
errors for using this PASID (since you unbound it), and the device
driver is supposed to handle them. Did I understand that correctly?

If I understood correctly, doesn't this conflate an error/fault with a
normal unbinding? Wouldn't it be better to actively notify the drivers
and indeed *wait* until the device driver has cleared its H/W pipeline
before pulling the carpet out from under their feet?

In our case (AMD GPUs), such an error could get the GPU stuck. That's
why we even reset the wavefronts inside the GPU if we can't gracefully
remove the work from the GPU (see kfd_unbind_process_from_device).

In the patch's comment you wrote:
"Hardware designers have confirmed that the resulting 'PASID not present'
faults should be handled just as gracefully as 'page not present' faults"

Unless *all* the H/W that is going to use SVM is designed by the same
company, I don't think we can say such a thing. And even then, from my
experience, H/W designers can be "creative" sometimes.

Just my 2 cents.

    Oded

>
> It is a fairly fundamental rule that faulting access to *one* PASID
> should not adversely affect behaviour for *other* PASIDs, of course.
>
> --
> David Woodhouse                            Open Source Technology Centre
> David.Woodhouse@intel.com                              Intel Corporation
>


* Re: [LSF/MM ATTEND] HMM (heterogeneous memory manager) and GPU
  2016-02-03  9:21       ` Oded Gabbay
@ 2016-02-03 10:15         ` David Woodhouse
  2016-02-03 11:01           ` Oded Gabbay
  0 siblings, 1 reply; 17+ messages in thread
From: David Woodhouse @ 2016-02-03 10:15 UTC (permalink / raw)
  To: Oded Gabbay; +Cc: Jerome Glisse, lsf-pc, linux-mm, Joerg Roedel

On Wed, 2016-02-03 at 11:21 +0200, Oded Gabbay wrote:

> OK, so I think I got a little confused, but looking at your code I
> see that you register SVM with the mm notifier (intel_mm_release).
> Therefore, I guess what you meant to say is that you don't want to call
> a device driver callback from your mm notifier callback, correct? (Like
> amd_iommu_v2 does when it calls dev_state->inv_ctx_cb inside its
> mn_release.)

Right.

> Because you can't really control what the device driver will do, i.e.
> whether it decides to register itself with the mm notifier in its own
> code.

Right. I can't *prevent* them from doing it. But I don't need to
encourage or facilitate it :)

> And because you don't call the device driver, the driver can/will get
> errors for using this PASID (since you unbound it), and the device
> driver is supposed to handle them. Did I understand that correctly?

In the case of an unclean exit, yes. In an orderly shutdown of the
process, one would hope that the device context is relinquished cleanly
rather than the process simply exiting.

And yes, the device and its driver are expected to handle faults. If
they don't do that, they are broken :)

> If I understood correctly, doesn't this conflate an error/fault with a
> normal unbinding? Wouldn't it be better to actively notify the drivers
> and indeed *wait* until the device driver has cleared its H/W pipeline
> before pulling the carpet out from under their feet?
> 
> In our case (AMD GPUs), such an error could get the GPU stuck. That's
> why we even reset the wavefronts inside the GPU if we can't gracefully
> remove the work from the GPU (see kfd_unbind_process_from_device).

But a rogue process can easily trigger faults — just request access to
an address that doesn't exist. My conversation with the hardware
designers was not about the peculiarities of any specific
implementation, but just getting them to confirm my assertion that if a
device *doesn't* cleanly handle faults on *one* PASID without screwing
over all the *other* PASIDs, then it is utterly broken by design and
should never get to production.

I *do* anticipate broken hardware which will crap itself completely
when it takes a fault, and have implemented a callback from the fault
handler so that the driver gets notified when a fault *happens* (even
on a PASID which is still alive), and can prod the broken hardware if
it needs to.

But I wasn't expecting it to be the norm.
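
For reference, the callback I mentioned looks roughly like this in the
4.4 code (quoted from memory -- check include/linux/intel-svm.h for the
exact signature):

--------------
struct svm_dev_ops {
	void (*fault_cb)(struct device *dev, int pasid, u64 address,
			 u32 private, int rwxp, int response);
};
-----------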

> In the patch's comment you wrote:
> "Hardware designers have confirmed that the resulting 'PASID not present'
> faults should be handled just as gracefully as 'page not present' faults"
> 
> Unless *all* the H/W that is going to use SVM is designed by the same
> company, I don't think we can say such a thing. And even then, from my
> experience, H/W designers can be "creative" sometimes.

If we have to turn it into a 'page not present' fault instead of a
'PASID not present' fault, that's easy enough to do by pointing it at a
dummy PML4 (the zero page will do).

But I stand by my assertion that any hardware which doesn't handle at
least a 'page not present' fault in a given PASID without screwing over
all the other users of the hardware is BROKEN.

We could *almost* forgive hardware for stalling when it sees a 'PASID
not present' fault, since that *does* require OS participation.

-- 
David Woodhouse                            Open Source Technology Centre
David.Woodhouse@intel.com                              Intel Corporation



* Re: [LSF/MM ATTEND] HMM (heterogeneous memory manager) and GPU
  2016-02-03 10:15         ` David Woodhouse
@ 2016-02-03 11:01           ` Oded Gabbay
  2016-02-03 11:07             ` Oded Gabbay
  0 siblings, 1 reply; 17+ messages in thread
From: Oded Gabbay @ 2016-02-03 11:01 UTC (permalink / raw)
  To: David Woodhouse; +Cc: Jerome Glisse, lsf-pc, linux-mm, Joerg Roedel

On Wed, Feb 3, 2016 at 12:15 PM, David Woodhouse <dwmw2@infradead.org> wrote:
> On Wed, 2016-02-03 at 11:21 +0200, Oded Gabbay wrote:
>
>> OK, so I think I got a little confused, but looking at your code I
>> see that you register SVM with the mm notifier (intel_mm_release).
>> Therefore, I guess what you meant to say is that you don't want to call
>> a device driver callback from your mm notifier callback, correct? (Like
>> amd_iommu_v2 does when it calls dev_state->inv_ctx_cb inside its
>> mn_release.)
>
> Right.
>
>> Because you can't really control what the device driver will do, i.e.
>> whether it decides to register itself with the mm notifier in its own
>> code.
>
> Right. I can't *prevent* them from doing it. But I don't need to
> encourage or facilitate it :)
>
>> And because you don't call the device driver, the driver can/will get
>> errors for using this PASID (since you unbound it), and the device
>> driver is supposed to handle them. Did I understand that correctly?
>
> In the case of an unclean exit, yes. In an orderly shutdown of the
> process, one would hope that the device context is relinquished cleanly
> rather than the process simply exiting.
>
> And yes, the device and its driver are expected to handle faults. If
> they don't do that, they are broken :)
>
>> If I understood correctly, doesn't this conflate an error/fault with a
>> normal unbinding? Wouldn't it be better to actively notify the drivers
>> and indeed *wait* until the device driver has cleared its H/W pipeline
>> before pulling the carpet out from under their feet?
>>
>> In our case (AMD GPUs), such an error could get the GPU stuck. That's
>> why we even reset the wavefronts inside the GPU if we can't gracefully
>> remove the work from the GPU (see kfd_unbind_process_from_device).
>
> But a rogue process can easily trigger faults — just request access to
> an address that doesn't exist. My conversation with the hardware
> designers was not about the peculiarities of any specific
> implementation, but just getting them to confirm my assertion that if a
> device *doesn't* cleanly handle faults on *one* PASID without screwing
> over all the *other* PASIDs, then it is utterly broken by design and
> should never get to production.

Yes, agreed: address errors should affect neither the H/W itself nor
other processes.

>
> I *do* anticipate broken hardware which will crap itself completely
> when it takes a fault, and have implemented a callback from the fault
> handler so that the driver gets notified when a fault *happens* (even
> on a PASID which is still alive), and can prod the broken hardware if
> it needs to.
>
> But I wasn't expecting it to be the norm.
>
Yeah, I guess that after a few H/W iterations the "correct"
implementation will be the norm.

>> In the patch's comment you wrote:
>> "Hardware designers have confirmed that the resulting 'PASID not present'
>> faults should be handled just as gracefully as 'page not present' faults"
>>
>> Unless *all* the H/W that is going to use SVM is designed by the same
>> company, I don't think we can say such a thing. And even then, from my
>> experience, H/W designers can be "creative" sometimes.
>
> If we have to turn it into a 'page not present' fault instead of a
> 'PASID not present' fault, that's easy enough to do by pointing it at a
> dummy PML4 (the zero page will do).
>
> But I stand by my assertion that any hardware which doesn't handle at
> least a 'page not present' fault in a given PASID without screwing over
> all the other users of the hardware is BROKEN.

Totally agreed!

>
> We could *almost* forgive hardware for stalling when it sees a 'PASID
> not present' fault. Since that *does* require OS participation.
>
> --
> David Woodhouse                            Open Source Technology Centre
> David.Woodhouse@intel.com                              Intel Corporation
>

Another, perhaps trivial, question: when there is an address fault, who
handles it? The SVM driver, or each device driver?

In other words, is the model the same as the (AMD) IOMMU one, where the
amd_iommu driver is bound to the IOMMU H/W and that driver (amd_iommu_v2)
is the only one which handles the PPR events?

If that is the case, then with SVM, how will the device driver be made
aware of faults, if the SVM driver won't notify it about them because it
has already severed the connection between the PASID and the process?

If the model is that each device driver gets a direct fault notification
(via interrupt or some other way), then that is a different story.

Oded


* Re: [LSF/MM ATTEND] HMM (heterogeneous memory manager) and GPU
  2016-02-03 11:01           ` Oded Gabbay
@ 2016-02-03 11:07             ` Oded Gabbay
  2016-02-03 11:35               ` David Woodhouse
  0 siblings, 1 reply; 17+ messages in thread
From: Oded Gabbay @ 2016-02-03 11:07 UTC (permalink / raw)
  To: David Woodhouse; +Cc: Jerome Glisse, lsf-pc, linux-mm, Joerg Roedel

On Wed, Feb 3, 2016 at 1:01 PM, Oded Gabbay <oded.gabbay@gmail.com> wrote:
> On Wed, Feb 3, 2016 at 12:15 PM, David Woodhouse <dwmw2@infradead.org> wrote:
>> On Wed, 2016-02-03 at 11:21 +0200, Oded Gabbay wrote:
>>
>>> OK, so I think I got a little confused, but looking at your code I
>>> see that you register SVM with the mm notifier (intel_mm_release).
>>> Therefore, I guess what you meant to say is that you don't want to call
>>> a device driver callback from your mm notifier callback, correct? (Like
>>> amd_iommu_v2 does when it calls dev_state->inv_ctx_cb inside its
>>> mn_release.)
>>
>> Right.
>>
>>> Because you can't really control what the device driver will do, i.e.
>>> whether it decides to register itself with the mm notifier in its own
>>> code.
>>
>> Right. I can't *prevent* them from doing it. But I don't need to
>> encourage or facilitate it :)
>>
>>> And because you don't call the device driver, the driver can/will get
>>> errors for using this PASID (since you unbound it), and the device
>>> driver is supposed to handle them. Did I understand that correctly?
>>
>> In the case of an unclean exit, yes. In an orderly shutdown of the
>> process, one would hope that the device context is relinquished cleanly
>> rather than the process simply exiting.
>>
>> And yes, the device and its driver are expected to handle faults. If
>> they don't do that, they are broken :)
>>
>>> If I understood correctly, doesn't this conflate an error/fault with a
>>> normal unbinding? Wouldn't it be better to actively notify the drivers
>>> and indeed *wait* until the device driver has cleared its H/W pipeline
>>> before pulling the carpet out from under their feet?
>>>
>>> In our case (AMD GPUs), such an error could get the GPU stuck. That's
>>> why we even reset the wavefronts inside the GPU if we can't gracefully
>>> remove the work from the GPU (see kfd_unbind_process_from_device).
>>
>> But a rogue process can easily trigger faults — just request access to
>> an address that doesn't exist. My conversation with the hardware
>> designers was not about the peculiarities of any specific
>> implementation, but just getting them to confirm my assertion that if a
>> device *doesn't* cleanly handle faults on *one* PASID without screwing
>> over all the *other* PASIDs, then it is utterly broken by design and
>> should never get to production.
>
> Yes, agreed: address errors should affect neither the H/W itself nor
> other processes.
>
>>
>> I *do* anticipate broken hardware which will crap itself completely
>> when it takes a fault, and have implemented a callback from the fault
>> handler so that the driver gets notified when a fault *happens* (even
>> on a PASID which is still alive), and can prod the broken hardware if
>> it needs to.
>>
>> But I wasn't expecting it to be the norm.
>>
> Yeah, I guess that after a few H/W iterations the "correct"
> implementation will be the norm.
>
>>> In the patch's comment you wrote:
>>> "Hardware designers have confirmed that the resulting 'PASID not present'
>>> faults should be handled just as gracefully as 'page not present' faults"
>>>
>>> Unless *all* the H/W that is going to use SVM is designed by the same
>>> company, I don't think we can say such a thing. And even then, from my
>>> experience, H/W designers can be "creative" sometimes.
>>
>> If we have to turn it into a 'page not present' fault instead of a
>> 'PASID not present' fault, that's easy enough to do by pointing it at a
>> dummy PML4 (the zero page will do).
>>
>> But I stand by my assertion that any hardware which doesn't handle at
>> least a 'page not present' fault in a given PASID without screwing over
>> all the other users of the hardware is BROKEN.
>
> Totally agreed!
>
>>
>> We could *almost* forgive hardware for stalling when it sees a 'PASID
>> not present' fault. Since that *does* require OS participation.
>>
>> --
>> David Woodhouse                            Open Source Technology Centre
>> David.Woodhouse@intel.com                              Intel Corporation
>>
>
> Another, perhaps trivial, question: when there is an address fault, who
> handles it? The SVM driver, or each device driver?
>
> In other words, is the model the same as the (AMD) IOMMU one, where the
> amd_iommu driver is bound to the IOMMU H/W and that driver (amd_iommu_v2)
> is the only one which handles the PPR events?
>
> If that is the case, then with SVM, how will the device driver be made
> aware of faults, if the SVM driver won't notify it about them because it
> has already severed the connection between the PASID and the process?
>
> If the model is that each device driver gets a direct fault notification
> (via interrupt or some other way), then that is a different story.
>
> Oded

And another question, if I may: aren't you afraid of "false positive"
prints to dmesg? I mean, I'm pretty sure page fault / PASID fault
errors will be logged somewhere, probably in dmesg. Aren't you
concerned about users seeing those errors and thinking they may have
a bug, while actually the errors were only caused by process
termination?

Or, in that case, do you say that the application is broken, because if
it still had something running in the H/W, it should not have closed
itself?

I can accept that, I just want to know what our answer is when people
start to complain :)

Thanks,

     Oded


* Re: [LSF/MM ATTEND] HMM (heterogeneous memory manager) and GPU
  2016-02-03 11:07             ` Oded Gabbay
@ 2016-02-03 11:35               ` David Woodhouse
  2016-02-03 11:41                 ` David Woodhouse
  2016-02-03 11:41                 ` Oded Gabbay
  0 siblings, 2 replies; 17+ messages in thread
From: David Woodhouse @ 2016-02-03 11:35 UTC (permalink / raw)
  To: Oded Gabbay; +Cc: Jerome Glisse, lsf-pc, linux-mm, Joerg Roedel

On Wed, 2016-02-03 at 13:07 +0200, Oded Gabbay wrote:
> > Another, perhaps trivial, question: when there is an address fault, who
> > handles it? The SVM driver, or each device driver?
> >
> > In other words, is the model the same as the (AMD) IOMMU one, where the
> > amd_iommu driver is bound to the IOMMU H/W and that driver (amd_iommu_v2)
> > is the only one which handles the PPR events?
> >
> > If that is the case, then with SVM, how will the device driver be made
> > aware of faults, if the SVM driver won't notify it about them because it
> > has already severed the connection between the PASID and the process?

In the ideal case, there's no need for the device driver to get
involved at all. When a page isn't found in the page tables, the IOMMU
code calls handle_mm_fault() and either populates the page and sends a
'success' response, or sends an 'invalid fault' response back.
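
In code, that ideal path is roughly the following (a condensed sketch of
the idea, not the exact intel-svm code; QI_RESP_* are the Intel
page-request response codes):

--------------
down_read(&mm->mmap_sem);
vma = find_vma(mm, address);
if (!vma || address < vma->vm_start ||
    (handle_mm_fault(mm, vma, address,
		     write ? FAULT_FLAG_WRITE : 0) & VM_FAULT_ERROR))
	result = QI_RESP_INVALID;	/* 'invalid fault' response */
else
	result = QI_RESP_SUCCESS;	/* page populated */
up_read(&mm->mmap_sem);
-----------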

To account for broken hardware, we *have* added a callback into the
device driver when these faults happen. Ideally it should never be
used, of course.

In the case where the process has gone away, the PASID is still
assigned and we still hold mm_count on the MM, just not mm_users. This
callback into the device driver still occurs if a fault happens during
process exit between the exit_mm() and exit_files() stage.
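
A sketch of the mm_count vs. mm_users distinction used here, with the
4.4-era calls (mm_count pins the mm_struct itself; mm_users pins the
page tables):

--------------
atomic_inc(&mm->mm_count);	/* bind: keep the mm_struct alive */
/* ... process may exit; page tables go away with mm_users ... */
mmdrop(mm);			/* unbind: drop that reference */
-----------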

> And another question, if I may: aren't you afraid of "false positive"
> prints to dmesg? I mean, I'm pretty sure page fault / PASID fault
> errors will be logged somewhere, probably in dmesg. Aren't you
> concerned about users seeing those errors and thinking they may have
> a bug, while actually the errors were only caused by process
> termination?

If that's the case, it's easy enough to silence them. We are already
explicitly testing for the 'defunct mm' case in our fault handler, to
prevent us from faulting more pages into an obsolescent MM after its
mm_users reaches zero and its page tables are supposed to have been
torn down. That's the 'if (!atomic_inc_not_zero(&svm->mm->mm_users))
goto bad_req;' part.

> Or, in that case, do you say that the application is broken, because if
> it still had something running in the H/W, it should not have closed
> itself?

That's also true but it's still nice to avoid confusion. Even if only
to disambiguate cause and effect — we don't want people to see PASID
faults which were caused by the process crashing, and to think that
they might be involved in *causing* that process to crash...

-- 
David Woodhouse                            Open Source Technology Centre
David.Woodhouse@intel.com                              Intel Corporation



* Re: [LSF/MM ATTEND] HMM (heterogeneous memory manager) and GPU
  2016-02-03 11:35               ` David Woodhouse
@ 2016-02-03 11:41                 ` David Woodhouse
  2016-02-03 11:41                 ` Oded Gabbay
  1 sibling, 0 replies; 17+ messages in thread
From: David Woodhouse @ 2016-02-03 11:41 UTC (permalink / raw)
  To: Oded Gabbay; +Cc: Jerome Glisse, lsf-pc, linux-mm, Joerg Roedel

On Wed, 2016-02-03 at 11:35 +0000, David Woodhouse wrote:
> 
> In the ideal case, there's no need for the device driver to get
> involved at all. When a page isn't found in the page tables, the IOMMU
> code calls handle_mm_fault() and either populates the page and sends a
> 'success' response, or sends an 'invalid fault' response back.

I missed a bit here; I should have made it explicit:

The device hardware receives that page-request response, successful or
otherwise, and is supposed to act on it accordingly. The device's own
request then fails, and it should have some coherent way of reporting
that to the device driver.

The point is that there should be no need to 'short-circuit' that and
pass notification directly from the IOMMU code to the device driver
that "there was a fault on PASID x". That direct notification hack
doesn't even *tell* us which device-side context was affected, if
there's more than one context accessing a given PASID.

(Actually, in the Intel case for integrated devices, there *are* some
opaque¹ bits in the page-request which do include that information. But
that's horrid, and not a solution for the general case.)

-- 
David Woodhouse                            Open Source Technology Centre
David.Woodhouse@intel.com                              Intel Corporation


¹ to the IOMMU code.


* Re: [LSF/MM ATTEND] HMM (heterogeneous memory manager) and GPU
  2016-02-03 11:35               ` David Woodhouse
  2016-02-03 11:41                 ` David Woodhouse
@ 2016-02-03 11:41                 ` Oded Gabbay
  2016-02-03 12:22                   ` David Woodhouse
  1 sibling, 1 reply; 17+ messages in thread
From: Oded Gabbay @ 2016-02-03 11:41 UTC (permalink / raw)
  To: David Woodhouse; +Cc: Jerome Glisse, lsf-pc, linux-mm, Joerg Roedel

On Wed, Feb 3, 2016 at 1:35 PM, David Woodhouse <dwmw2@infradead.org> wrote:
> On Wed, 2016-02-03 at 13:07 +0200, Oded Gabbay wrote:
>> > Another, perhaps trivial, question: when there is an address fault, who
>> > handles it? The SVM driver, or each device driver?
>> >
>> > In other words, is the model the same as the (AMD) IOMMU one, where the
>> > amd_iommu driver is bound to the IOMMU H/W and that driver (amd_iommu_v2)
>> > is the only one which handles the PPR events?
>> >
>> > If that is the case, then with SVM, how will the device driver be made
>> > aware of faults, if the SVM driver won't notify it about them because it
>> > has already severed the connection between the PASID and the process?
>
> In the ideal case, there's no need for the device driver to get
> involved at all. When a page isn't found in the page tables, the IOMMU
> code calls handle_mm_fault() and either populates the page and sends a
> 'success' response, or sends an 'invalid fault' response back.
>
> To account for broken hardware, we *have* added a callback into the
> device driver when these faults happen. Ideally it should never be
> used, of course.
>
> In the case where the process has gone away, the PASID is still
> assigned and we still hold mm_count on the MM, just not mm_users. This
> callback into the device driver still occurs if a fault happens during
> process exit between the exit_mm() and exit_files() stage.
>
>> And another question, if I may: aren't you afraid of "false positive"
>> prints to dmesg? I mean, I'm pretty sure page fault / PASID fault
>> errors will be logged somewhere, probably in dmesg. Aren't you
>> concerned about users seeing those errors and thinking they may have
>> a bug, while actually the errors were only caused by process
>> termination?
>
> If that's the case, it's easy enough to silence them. We are already
> explicitly testing for the 'defunct mm' case in our fault handler, to
> prevent us from faulting more pages into an obsolescent MM after its
> mm_users reaches zero and its page tables are supposed to have been
> torn down. That's the 'if (!atomic_inc_not_zero(&svm->mm->mm_users))
> goto bad_req;' part.
>
>> Or, in that case, do you say that the application is broken, because if
>> it still had something running in the H/W, it should not have closed
>> itself?
>
> That's also true but it's still nice to avoid confusion. Even if only
> to disambiguate cause and effect — we don't want people to see PASID
> faults which were caused by the process crashing, and to think that
> they might be involved in *causing* that process to crash...

Yes, that's why in our model, we aim to kill all running waves
*before* the amd_iommu_v2 driver unbinds the PASID.

>
> --
> David Woodhouse                            Open Source Technology Centre
> David.Woodhouse@intel.com                              Intel Corporation
>


It seems you have most of your bases covered. I'll stop harassing you now :)
In all seriousness, it's interesting to see the different approaches
taken to handling pretty much the same type of H/W (IOMMU).

Thanks for your patience in answering my questions.

Oded


* Re: [LSF/MM ATTEND] HMM (heterogeneous memory manager) and GPU
  2016-02-03 11:41                 ` Oded Gabbay
@ 2016-02-03 12:22                   ` David Woodhouse
  0 siblings, 0 replies; 17+ messages in thread
From: David Woodhouse @ 2016-02-03 12:22 UTC (permalink / raw)
  To: Oded Gabbay; +Cc: Jerome Glisse, lsf-pc, linux-mm, Joerg Roedel

On Wed, 2016-02-03 at 13:41 +0200, Oded Gabbay wrote:
> 
> It seems you have most of your bases covered. I'll stop harassing you now :)
> In all seriousness, it's interesting to see the different approaches
> taken to handling pretty much the same type of H/W (IOMMU).

Well, the point is that we need to settle on a model we can *all* use.

It's all very well having vendor-specific intel_svm_bind_mm() and
amd_iommu_bind_pasid() functions with subtly different semantics, while
the only devices we support for Intel are integrated graphics and our
PCIe root ports don't even support discrete devices with PASID
capabilities — and while the only device using the AMD version is the
AMD GPU.
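
For reference, the two entry points in question (signatures as of
4.4/4.5, quoted from memory -- check the respective headers):

--------------
int intel_svm_bind_mm(struct device *dev, int *pasid, int flags,
		      struct svm_dev_ops *ops);
int amd_iommu_bind_pasid(struct pci_dev *pdev, int pasid,
			 struct task_struct *task);
-----------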

But we *are* starting to see additional devices with PASID
capabilities, and it won't be long before we really do have to support
third-party discrete devices.

So we do need a coherent API for SVM, as an extension of the DMA API.
And that means we have to settle on the semantics we want for it :)

With the commit I showed earlier, I've moved the Intel model somewhat
closer to the AMD model — no longer holding mm_users on the MM in
question. I think we can come up with something acceptable. 

There are Power and ARM incarnations of SVM also in the works, I
believe.

-- 
David Woodhouse                            Open Source Technology Centre
David.Woodhouse@intel.com                              Intel Corporation



* Re: [LSF/MM ATTEND] HMM (heterogeneous memory manager) and GPU
  2016-02-03  0:40 ` David Woodhouse
  2016-02-03  8:13   ` Oded Gabbay
@ 2016-02-25 13:49   ` Joerg Roedel
  1 sibling, 0 replies; 17+ messages in thread
From: Joerg Roedel @ 2016-02-25 13:49 UTC (permalink / raw)
  To: David Woodhouse; +Cc: Jerome Glisse, lsf-pc, linux-mm

Hey,

On Wed, Feb 03, 2016 at 12:40:57AM +0000, David Woodhouse wrote:
> There are a few related issues here around Shared Virtual Memory, and
> lifetime management of the associated MM, and the proposal discussed at
> the Kernel Summit for "off-CPU tasks".
> 
> I've hit a situation with the Intel SVM code in 4.4 where the device
> driver binds a PASID, and also has mmap() functionality on the same
> file descriptor that the PASID is associated with.
> 
> So on process exit, the MM doesn't die because the PASID binding still
> exists. The VMA of the mmap doesn't die because the MM still exists. So
> the underlying file remains open because the VMA still exists. And the
> PASID binding thus doesn't die because the file is still open.
> 
> I've posted a patch¹ which moves us closer to the amd_iommu_v2 model,
> although I'm still *strongly* resisting the temptation to call out into
> device driver code from the mmu_notifier's release callback.
> 
> I would like to attend LSF/MM this year so we can continue to work on
> those issues — now that we actually have some hardware in the field and
> a better idea of how we can build a unified access model for SVM across
> the different IOMMU types.

That sounds very interesting, and I'd like to participate in this
discussion. Unfortunately I can't make it to the MM summit this year, so
I didn't even apply for an invitation.

But if this gets discussed there, I am interested in the outcome. I
still have a prototype for the off-CPU task concept on my list of things
to implement. The problem is that I can't really test any changes I
make, because I don't have SVM hardware, and on the AMD side the
user-space part needed for testing only runs on Ubuntu with an
AMD-provided kernel :(


	Joerg

