* Poll CQ syncing problem
       [not found] ` <b4355d22-fc79-c860-de8a-5a4d468c884d-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
@ 2017-03-01 14:30   ` Noa Osherovich
       [not found]     ` <3ba1baab-e2ac-358d-3b3b-ff4a27405c93-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
  0 siblings, 1 reply; 11+ messages in thread
From: Noa Osherovich @ 2017-03-01 14:30 UTC (permalink / raw)
  To: hch-jcswGhMUV9g, sagi-NQWnxTmZq1alnMjI0IkVqw
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Majd Dibbiny

Hi Christoph, Sagi

I've been debugging an issue here and it seems it was exposed by
the work you did in the following commit:
14d3a3b2498ed ('IB: add a proper completion queue abstraction').

The scenario we run is randomizing pkeys for an IPoIB interface and then
running traffic on all of them.

We get the following panic trace (this one is PPC):

Unable to handle kernel paging request for data at address 0x00200200
Faulting instruction address: 0xc000000000325620
Oops: Kernel access of bad area, sig: 11 [#1]
SMP NR_CPUS=1024 NUMA pSeries
Modules linked in: rdma_ucm(U) ib_ucm(U) rdma_cm(U) iw_cm(U) ib_ipoib(U)
ib_cm(U) ib_uverbs(U) ib_umad(U) mlx5_ib(U) mlx5_core(U) mlx4_en(U) mlx4_ib(U)
ib_core(U) mlx4_core(U) mlx_compat(U) memtrack(U) mst_pciconf(U) netconsole 
nfs fscache nfsd lockd exportfs auth_rpcgss nfs_acl sunrpc autofs4 configfs
ses enclosure sg ipv6 tg3 e1000e ptp pps_core shpchp ext4 jbd2 mbcache sd_mod
crc_t10dif sr_mod cdrom ipr dm_mirror dm_region_hash dm_log dm_mod
[last unloaded: memtrack]
NIP: c000000000325620 LR: d000000003d46840 CTR: c000000000325600
REGS: c0000001ce7077e0 TRAP: 0300   Not tainted  (2.6.32-642.el6.ppc64)
MSR: 8000000000009032 <EE,ME,IR,DR>  CR: 24004082  XER: 00000000
DAR: 0000000000200200, DSISR: 0000000040000000
TASK = c0000001cca8e5c0[10314] 'ib-comp-wq/8' THREAD: c0000001ce704000 CPU: 8
GPR00: d000000003d46840 c0000001ce707a60 c000000000f9f3b0 c0000001b7989780 
GPR04: c0000001b706e200 c0000001d40b0b40 00000001001900b2 0000000000000000 
GPR08: d00007ffffe10401 0000000000200200 c000000001082500 c000000000325600 
GPR12: d000000003d4eba8 c000000001083900 00000000019ffa50 0000000000223718 
GPR16: 00000000002237c0 00000000002237b4 c0000001cca8e5c0 c0000001b6d626c0 
GPR20: c0000001b7989780 c000000000ee0380 d00007fffff0fb98 c0000001ce707e20 
GPR24: 0000000000000003 c0000001b0033408 c0000001b0032b00 0000000000000001 
GPR28: c0000001b706e200 c0000001b0033440 c000000000f39c38 c0000001b7989780 
NIP [c000000000325620] .list_del+0x20/0xb0
LR [d000000003d46840] .ib_mad_recv_done+0xc0/0x10e0 [ib_core]
Call Trace:
[c0000001ce707a60] [c0000001ce707b30] 0xc0000001ce707b30 (unreliable)
[c0000001ce707ae0] [d000000003d46840] .ib_mad_recv_done+0xc0/0x10e0 [ib_core]
[c0000001ce707c70] [d000000003d244bc] .__ib_process_cq+0xbc/0x190 [ib_core]
[c0000001ce707d20] [d000000003d24b70] .ib_cq_poll_work+0x30/0xb0 [ib_core]
[c0000001ce707db0] [c0000000000ba74c] .worker_thread+0x1dc/0x3d0
[c0000001ce707ed0] [c0000000000c1c6c] .kthread+0xdc/0x110
[c0000001ce707f90] [c000000000033c34] .kernel_thread+0x54/0x70

Analysis:
Since ib_comp_wq isn't single threaded, two works can run in parallel for the same CQ,
executing __ib_process_cq.
Since this function isn't thread safe and the wc array is shared, it causes a data corruption
which eventually crashes in the MAD layer due to a double list_del of the same element.
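
For reference, the polling loop introduced by that commit looks roughly like
this (a paraphrase of __ib_process_cq() in drivers/infiniband/core/cq.c, not
the exact source). The point relevant here is that cq->wc is a single array
allocated when the CQ is created, so two concurrent runs of this function on
the same CQ would poll completions into the same buffer:

static int __ib_process_cq(struct ib_cq *cq, int budget)
{
        int i, n, completed = 0;

        while ((n = ib_poll_cq(cq, IB_POLL_BATCH, cq->wc)) > 0) {
                for (i = 0; i < n; i++) {
                        struct ib_wc *wc = &cq->wc[i];

                        /* dispatch to the per-WR completion callback */
                        if (wc->wr_cqe)
                                wc->wr_cqe->done(cq, wc);
                        else
                                WARN_ON_ONCE(wc->status == IB_WC_SUCCESS);
                }

                completed += n;
                if (n != IB_POLL_BATCH ||
                    (budget != -1 && completed >= budget))
                        break;
        }

        return completed;
}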

We have the following options to solve this:
1. Instead of cq->wc, allocate an ib_wc array in __ib_process_cq per each call.
2. Make ib_comp_wq a single thread workqueue.
3. Change the locking scheme during poll: Currently only the device's poll_cq implementation
   is done under lock. Change it to also contain the callbacks.

I'd appreciate your insight.

Thanks,
Noa

* Re: Poll CQ syncing problem
  2017-03-01 14:30   ` Poll CQ syncing problem Noa Osherovich
@ 2017-03-01 14:51         ` Christoph Hellwig
  0 siblings, 0 replies; 11+ messages in thread
From: Christoph Hellwig @ 2017-03-01 14:51 UTC (permalink / raw)
  To: Noa Osherovich
  Cc: hch-jcswGhMUV9g, sagi-NQWnxTmZq1alnMjI0IkVqw,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA, Majd Dibbiny,
	tj-DgEjT+Ai2ygdnm+yROfE0A, linux-kernel-u79uwXL29TY76Z2rM5mHXA

On Wed, Mar 01, 2017 at 04:30:26PM +0200, Noa Osherovich wrote:
> Analysis:
> Since ib_comp_wq isn't single threaded, two works can run in parallel for the same CQ,
> executing __ib_process_cq.

They shouldn't.  Each CQ has a single work_struct, and any given work_struct
should only be executing once at any given time:

"Note that the flag ``WQ_NON_REENTRANT`` no longer exists as all
workqueues are now non-reentrant - any work item is guaranteed to be
executed by at most one worker system-wide at any given time."

> Since this function isn't thread safe and the wc array is shared, it causes a data corruption
> which eventually crashes in the MAD layer due to a double list_del of the same element.

This should not be the case.  What kernel version are you testing and does
it contain any patches touching core kernel code?

* Re: Poll CQ syncing problem
  2017-03-01 14:51         ` Christoph Hellwig
@ 2017-03-01 15:28           ` Noa Osherovich
  -1 siblings, 0 replies; 11+ messages in thread
From: Noa Osherovich @ 2017-03-01 15:28 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: sagi, linux-rdma, Majd Dibbiny, tj, linux-kernel

On 3/1/2017 4:51 PM, Christoph Hellwig wrote:

> On Wed, Mar 01, 2017 at 04:30:26PM +0200, Noa Osherovich wrote:
>> Analysis:
>> Since ib_comp_wq isn't single threaded, two works can run in parallel for the same CQ,
>> executing __ib_process_cq.
> They shouldn't.  Each CQ has a single work_struct, and any given work_struct
> should only be executing once at any given time:
>
> "Note that the flag ``WQ_NON_REENTRANT`` no longer exists as all
> workqueues are now non-reentrant - any work item is guaranteed to be
> executed by at most one worker system-wide at any given time."
>
>> Since this function isn't thread safe and the wc array is shared, it causes a data corruption
>> which eventually crashes in the MAD layer due to a double list_del of the same element.
> This should not be the case.  What kernel version are you testing and does
> it contain any patches touching core kernel code?

Thanks Christoph for the quick response.

Currently we see this only in old kernels. I'll investigate this more and update.

* Re: Poll CQ syncing problem
       [not found]     ` <3ba1baab-e2ac-358d-3b3b-ff4a27405c93-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
  2017-03-01 14:51         ` Christoph Hellwig
@ 2017-03-01 16:44       ` Sagi Grimberg
       [not found]         ` <0786659a-da12-e8f7-329e-3caa8cc8791f-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
  2017-03-01 16:52       ` Bart Van Assche
  2 siblings, 1 reply; 11+ messages in thread
From: Sagi Grimberg @ 2017-03-01 16:44 UTC (permalink / raw)
  To: Noa Osherovich, hch-jcswGhMUV9g
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Majd Dibbiny

> Hi Christoph, Sagi

Hi Noa,

> Analysis:
> Since ib_comp_wq isn't single threaded, two works can run in parallel for the same CQ,
> executing __ib_process_cq.

How is that even possible? AFAICT a given CQ cannot run more than a
single work item at a time because we simply queue a work when we get
a completion event and rearm it only when we fully drain it. We requeue
if we exhausted our budget, but still I don't see a mutual exclusion
violation...

Am I missing anything?
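
For reference, the queue/rearm path in the upstream code looks roughly like
this (paraphrased from drivers/infiniband/core/cq.c, details trimmed):

/* completion event handler for IB_POLL_WORKQUEUE CQs: queue the CQ's
 * single work item */
static void ib_cq_completion_workqueue(struct ib_cq *cq, void *private)
{
        queue_work(ib_comp_wq, &cq->work);
}

static void ib_cq_poll_work(struct work_struct *work)
{
        struct ib_cq *cq = container_of(work, struct ib_cq, work);
        int completed;

        completed = __ib_process_cq(cq, IB_POLL_BUDGET_WORKQUEUE);
        if (completed >= IB_POLL_BUDGET_WORKQUEUE ||
            ib_req_notify_cq(cq, IB_POLL_FLAGS) > 0)
                /* budget exhausted or missed completions: requeue ourselves */
                queue_work(ib_comp_wq, &cq->work);
        /* otherwise the CQ is rearmed and the next completion event will
         * queue the work again */
}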

> Since this function isn't thread safe and the wc array is shared, it causes a data corruption
> which eventually crashes in the MAD layer due to a double list_del of the same element.

Hmm. I'm wondering if this is really the root cause... Can it be the
fact that ib_comp_wq is unbound causing the worker to migrate cpu cores
in its lifetime?

I wanted to change that a while ago and sent a patch for it [1].

> We have the following options to solve this:
> 1. Instead of cq->wc, allocate an ib_wc array in __ib_process_cq per each call.

That is bad practice.

> 2. Make ib_comp_wq a single thread workqueue.

Not going to happen, it'll kill performance.

> 3. Change the locking scheme during poll: Currently only the device's poll_cq implementation
>    is done under lock. Change it to also contain the callbacks.

I don't see a need for this at all.

* Re: Poll CQ syncing problem
       [not found]         ` <0786659a-da12-e8f7-329e-3caa8cc8791f-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
@ 2017-03-01 16:46           ` Sagi Grimberg
  2017-03-02  6:04           ` Noa Osherovich
  1 sibling, 0 replies; 11+ messages in thread
From: Sagi Grimberg @ 2017-03-01 16:46 UTC (permalink / raw)
  To: Noa Osherovich, hch-jcswGhMUV9g
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Majd Dibbiny


> Hmm. I'm wondering if this is really the root cause... Can it be the
> fact that ib_comp_wq is unbound causing the worker to migrate cpu cores
> in its lifetime?
>
> I wanted to change that a while ago and sent a patch for it [1].

[1]:
--
IB/device: Convert ib-comp-wq to be CPU-bound

This workqueue is used by our storage target mode ULPs
via the new CQ API. Recent observations when working
with very high-end flash storage devices reveal that
UNBOUND workqueue threads can migrate between cpu cores
and even numa nodes (although some numa locality is accounted
for).

While this attribute can be useful in some workloads,
it does not fit in very nicely with the normal
run-to-completion model we usually use in our target-mode
ULPs and the block-mq irq<->cpu affinity facilities.

The whole block-mq concept is that the completion will
land on the same cpu where the submission was performed.
The fact that our submitter thread is migrating cpus
can break this locality.

We assume that as a target mode ULP, we will serve multiple
initiators/clients and we can spread the load enough without
having to use unbound kworkers.

Also, while we're at it, expose this workqueue via sysfs which
is harmless and can be useful for debug.

Signed-off-by: Sagi Grimberg <sagi-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
---
  drivers/infiniband/core/device.c | 3 +--
  1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
index 760ef603a468..15f4bdf89fe1 100644
--- a/drivers/infiniband/core/device.c
+++ b/drivers/infiniband/core/device.c
@@ -999,8 +999,7 @@ static int __init ib_core_init(void)
                 return -ENOMEM;

         ib_comp_wq = alloc_workqueue("ib-comp-wq",
-                       WQ_UNBOUND | WQ_HIGHPRI | WQ_MEM_RECLAIM,
-                       WQ_UNBOUND_MAX_ACTIVE);
+                       WQ_HIGHPRI | WQ_MEM_RECLAIM | WQ_SYSFS, 0);
         if (!ib_comp_wq) {
                 ret = -ENOMEM;
                 goto err;
--

* Re: Poll CQ syncing problem
       [not found]     ` <3ba1baab-e2ac-358d-3b3b-ff4a27405c93-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
  2017-03-01 14:51         ` Christoph Hellwig
  2017-03-01 16:44       ` Sagi Grimberg
@ 2017-03-01 16:52       ` Bart Van Assche
       [not found]         ` <1488387143.2699.6.camel-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
  2 siblings, 1 reply; 11+ messages in thread
From: Bart Van Assche @ 2017-03-01 16:52 UTC (permalink / raw)
  To: noaos-VPRAkNaXOzVWk0Htik3J/w, hch-jcswGhMUV9g,
	sagi-NQWnxTmZq1alnMjI0IkVqw
  Cc: majd-VPRAkNaXOzVWk0Htik3J/w, linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Wed, 2017-03-01 at 16:30 +0200, Noa Osherovich wrote:
> REGS: c0000001ce7077e0 TRAP: 0300   Not tainted  (2.6.32-642.el6.ppc64)

Hello Noa,

I agree with Christoph and Sagi that your analysis doesn't match the upstream
code. I think the above information means that you are using RHEL 6.8?

Bart.

* Re: Poll CQ syncing problem
       [not found]         ` <1488387143.2699.6.camel-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
@ 2017-03-02  5:59           ` Noa Osherovich
       [not found]             ` <4d8ac8fd-8ef1-6f6b-177c-2c3ab131f99c-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
  0 siblings, 1 reply; 11+ messages in thread
From: Noa Osherovich @ 2017-03-02  5:59 UTC (permalink / raw)
  To: Bart Van Assche, hch-jcswGhMUV9g, sagi-NQWnxTmZq1alnMjI0IkVqw
  Cc: majd-VPRAkNaXOzVWk0Htik3J/w, linux-rdma-u79uwXL29TY76Z2rM5mHXA

On 3/1/2017 6:52 PM, Bart Van Assche wrote:

> On Wed, 2017-03-01 at 16:30 +0200, Noa Osherovich wrote:
>> REGS: c0000001ce7077e0 TRAP: 0300   Not tainted  (2.6.32-642.el6.ppc64)
> Hello Noa,
>
> I agree with Christoph and Sagi that your analysis doesn't match the upstream
> code. I think the above information means that you are using RHEL 6.8?

Yes, as well as other older kernels, as I wrote to Christoph. We'll adapt the code.

Thanks.

> Bart.
>

* Re: Poll CQ syncing problem
       [not found]         ` <0786659a-da12-e8f7-329e-3caa8cc8791f-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
  2017-03-01 16:46           ` Sagi Grimberg
@ 2017-03-02  6:04           ` Noa Osherovich
  1 sibling, 0 replies; 11+ messages in thread
From: Noa Osherovich @ 2017-03-02  6:04 UTC (permalink / raw)
  To: Sagi Grimberg, hch-jcswGhMUV9g
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Majd Dibbiny

Hi Sagi,

On 3/1/2017 6:44 PM, Sagi Grimberg wrote:

>> Hi Christoph, Sagi
> Hi Noa,
>> Analysis:
>> Since ib_comp_wq isn't single threaded, two works can run in parallel for the same CQ,
>> executing __ib_process_cq.
>>
> How is that even possible? AFAICT a given CQ cannot run more than a
> single work item at a time because we simply queue a work when we get
> a completion event and rearm it only when we fully drain it. we requeue
> if we exhausted our budget but still I don't see a mutual exclusion
> violation...
> Am I missing anything?

As Christoph and Bart pointed out, we use older kernel versions.

>> Since this function isn't thread safe and the wc array is shared, it causes a data corruption
>> which eventually crashes in the MAD layer due to a double list_del of the same element.
> Hmm. I'm wondering if this is really the root cause... Can it be the
> fact that ib_comp_wq is unbound causing the worker to migrate cpu cores
> in its lifetime?
>
> I wanted to change that a while ago and sent a patch for it [1].
>
>> We have the following options to solve this:
>> 1. Instead of cq->wc, allocate an ib_wc array in __ib_process_cq per each call.
> That is bad practice.
>
>> 2. Make ib_comp_wq a single thread workqueue.
> Not going to happen, it'll kill performance.
>> 3. Change the locking scheme during poll: Currently only the device's poll_cq implementation
>>    is done under lock. Change it to also contain the callbacks.
> I don't see a need for this at all.

Thanks for your input.


* Re: Poll CQ syncing problem
       [not found]             ` <4d8ac8fd-8ef1-6f6b-177c-2c3ab131f99c-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
@ 2017-03-02  6:06               ` Bart Van Assche
  0 siblings, 0 replies; 11+ messages in thread
From: Bart Van Assche @ 2017-03-02  6:06 UTC (permalink / raw)
  To: noaos-VPRAkNaXOzVWk0Htik3J/w, hch-jcswGhMUV9g,
	sagi-NQWnxTmZq1alnMjI0IkVqw
  Cc: majd-VPRAkNaXOzVWk0Htik3J/w, linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Thu, 2017-03-02 at 07:59 +0200, Noa Osherovich wrote:
> On 3/1/2017 6:52 PM, Bart Van Assche wrote:
> 
> > On Wed, 2017-03-01 at 16:30 +0200, Noa Osherovich wrote:
> > > REGS: c0000001ce7077e0 TRAP: 0300   Not tainted  (2.6.32-642.el6.ppc64)
> > 
> > Hello Noa,
> > 
> > I agree with Christoph and Sagi that your analysis doesn't match the upstream
> > code. I think the above information means that you are using RHEL 6.8?
> 
> Yes, as well as other older kernel, as I wrote to Christoph. We'll adapt the code.

It might be good to know that I ran into a similar issue earlier today with kernel
v4.10 by using the IB/CM and by destroying the QP before the CM ID. In other words,
the behavior you observed is not necessarily a bug in the polling code but may also
be a use-after-free of the memory that was allocated for the QP WR and/or WC buffers.
With the new polling API such errors are much more serious than before because
it jumps to an address retrieved from a WC entry.
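
To illustrate the failure mode (a sketch with a hypothetical ULP structure,
not code from any particular driver): a consumer embeds a struct ib_cqe in
its per-request context and the CQ code dispatches through the pointer stored
in the WC, so a stale entry becomes an indirect call through freed memory:

/* hypothetical ULP request context embedding an ib_cqe */
struct my_request {
        struct ib_cqe   cqe;    /* cqe.done = my_request_done */
        /* ... request state ... */
};

static void my_request_done(struct ib_cq *cq, struct ib_wc *wc)
{
        struct my_request *req =
                container_of(wc->wr_cqe, struct my_request, cqe);
        /* ... complete the request ... */
}

/* __ib_process_cq() dispatches each polled completion as:
 *
 *      if (wc->wr_cqe)
 *              wc->wr_cqe->done(cq, wc);
 *
 * so if the QP is destroyed and my_request freed while completions for it
 * are still sitting in the CQ, wr_cqe points into freed memory and the
 * ->done() call jumps through a stale function pointer.
 */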

Bart.


Thread overview: 11+ messages
     [not found] <b4355d22-fc79-c860-de8a-5a4d468c884d@mellanox.com>
     [not found] ` <b4355d22-fc79-c860-de8a-5a4d468c884d-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
2017-03-01 14:30   ` Poll CQ syncing problem Noa Osherovich
     [not found]     ` <3ba1baab-e2ac-358d-3b3b-ff4a27405c93-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
2017-03-01 14:51       ` Christoph Hellwig
2017-03-01 14:51         ` Christoph Hellwig
2017-03-01 15:28         ` Noa Osherovich
2017-03-01 15:28           ` Noa Osherovich
2017-03-01 16:44       ` Sagi Grimberg
     [not found]         ` <0786659a-da12-e8f7-329e-3caa8cc8791f-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
2017-03-01 16:46           ` Sagi Grimberg
2017-03-02  6:04           ` Noa Osherovich
2017-03-01 16:52       ` Bart Van Assche
     [not found]         ` <1488387143.2699.6.camel-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
2017-03-02  5:59           ` Noa Osherovich
     [not found]             ` <4d8ac8fd-8ef1-6f6b-177c-2c3ab131f99c-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
2017-03-02  6:06               ` Bart Van Assche
