From: Sagi Grimberg <sagi@grimberg.me>
To: Logan Gunthorpe <logang@deltatee.com>,
	Christoph Hellwig <hch@lst.de>,
	"James E.J. Bottomley" <jejb@linux.vnet.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	Jens Axboe <axboe@kernel.dk>,
	Steve Wise <swise@opengridcomputing.com>,
	Stephen Bates <sbates@raithlin.com>,
	Max Gurtovoy <maxg@mellanox.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Keith Busch <keith.busch@intel.com>,
	Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Cc: linux-scsi@vger.kernel.org, linux-nvdimm@lists.01.org,
	linux-rdma@vger.kernel.org, linux-pci@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org,
	Sinan Kaya <okaya@codeaurora.org>
Subject: Re: [RFC 3/8] nvmet: Use p2pmem in nvme target
Date: Thu, 6 Apr 2017 08:47:22 +0300	[thread overview]
Message-ID: <0689e764-bf04-6da2-3b7d-2cbf0b6b94a0@grimberg.me> (raw)
In-Reply-To: <ec05b7d9-8dfd-8227-84d2-7d391df32219@deltatee.com>


> I hadn't done this yet but I think a simple closest device in the tree
> would solve the issue sufficiently. However, I originally had it so the
> user has to pick the device and I prefer that approach. But if the user
> picks the device, then why bother restricting what he picks?

Because the user can get it wrong, and it's our job to do what we can to
prevent the user from screwing themselves.

> Per the
> thread with Sinan, I'd prefer to use what the user picks. You were one
> of the biggest opponents to that so I'd like to hear your opinion on
> removing the restrictions.

I wasn't against it that much. I'm all for making things "just work"
with minimal configuration steps, but I'm not sure we can get it
right without it.

>>> Ideally, we'd want to use an NVMe CMB buffer as p2p memory. This would
>>> save an extra PCI transfer as the NVMe card could just take the data
>>> out of its own memory. However, at this time, cards with CMB buffers
>>> don't seem to be available.
>>
>> Even if it were available, it would be hard to make real use of this,
>> given that we wouldn't know how to pre-post recv buffers (for in-capsule
>> data). But let's leave this out of scope entirely...
>
> I don't understand what you're referring to. We'd simply use the CMB
> buffer as a p2pmem device - why does that change anything?

I'm referring to the in-capsule data buffer pre-posts that we do.
Because we prepare the buffers that will hold in-capsule data before any
command arrives, we have no knowledge of which device the incoming I/O
is directed to. That means we can (and will) have I/O where the data
lies in the CMB of device A but is really targeted at device B - which
sort of defeats the purpose of what we're trying to optimize here...
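
To make the problem concrete, here is a minimal sketch of why the
pre-posted buffers can't be pinned to one device's CMB (the names and
signatures below are illustrative stand-ins, not the exact nvmet-rdma
symbols):

    /*
     * Every recv buffer that may carry in-capsule data is allocated
     * and posted per-queue, before any command arrives.  At that point
     * we have no idea which namespace/device the future I/O will
     * target, so placing these buffers in one device's CMB only helps
     * I/O that happens to land on that same device.
     */
    static int queue_post_recv_buffers(struct nvmet_rdma_queue *queue)
    {
            int i, ret;

            for (i = 0; i < queue->recv_queue_size; i++) {
                    /* buffer placement is decided here, long before
                     * the command (and its target device) is known */
                    ret = post_recv(queue->dev, &queue->cmds[i]);
                    if (ret)
                            return ret;
            }
            return 0;
    }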

>> Why do you need this? You have a reference to the
>> queue itself.
>
> This keeps track of whether the response was actually allocated with
> p2pmem or not. It's needed when we free the SGL: the queue may have a
> p2pmem device assigned to it, but if the allocation failed and it fell
> back on system memory, then we need to know how to free it. I'm
> currently looking at having SGLs carry an iomem flag, in which case
> this would no longer be needed as the flag in the SGL could be used.

That would be better, maybe...
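
For what it's worth, a per-SGL iomem flag would let the free path look
something like this - a sketch only, where sg_is_iomem() and all the
free helpers are hypothetical stand-ins, not existing APIs:

    static void nvmet_rdma_free_data_sgl(struct nvmet_rdma_queue *queue,
                                         struct scatterlist *sgl)
    {
            if (sg_is_iomem(sgl))
                    /* allocation succeeded from the queue's p2pmem */
                    p2pmem_free_sgl(queue->p2pmem, sgl);
            else
                    /* allocation fell back to system memory */
                    sgl_free(sgl);
    }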

[...]

>> This is a problem. Namespaces can be added at any point in time. No one
>> guarantees that dma_devs covers all the namespaces we'll ever see.
>
> Yeah, well restricting p2pmem based on all the devices in use is hard.
> We'd need a call into the transport every time an ns is added, and
> we'd have to drop the p2pmem if one is added that isn't supported. This
> complexity is just one of the reasons I prefer letting the user choose.

Still, the user can get it wrong. I'm not sure we can get away without
keeping track of this as new devices join the subsystem.
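
A rough sketch of the bookkeeping I have in mind - the ns_added hook
and p2pmem_compatible() are hypothetical; the point is only that every
namespace addition has to revalidate the p2pmem choice:

    static void nvmet_rdma_ns_added(struct nvmet_ctrl *ctrl,
                                    struct nvmet_ns *ns)
    {
            if (ctrl->p2pmem &&
                !p2pmem_compatible(ctrl->p2pmem, ns->bdev)) {
                    /* the new namespace can't reach the chosen memory,
                     * so drop it and fall back to system memory */
                    p2pmem_put(ctrl->p2pmem);
                    ctrl->p2pmem = NULL;
            }
    }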

>>> +
>>> +    if (queue->p2pmem)
>>> +        pr_debug("using %s for rdma nvme target queue\n",
>>> +             dev_name(&queue->p2pmem->dev));
>>> +
>>> +    kfree(dma_devs);
>>> +}
>>> +
>>>  static int nvmet_rdma_queue_connect(struct rdma_cm_id *cm_id,
>>>          struct rdma_cm_event *event)
>>>  {
>>> @@ -1199,6 +1271,8 @@ static int nvmet_rdma_queue_connect(struct
>>> rdma_cm_id *cm_id,
>>>      }
>>>      queue->port = cm_id->context;
>>>
>>> +    nvmet_rdma_queue_setup_p2pmem(queue);
>>> +
>>
>> Why is all this done for each queue? It looks completely redundant to me.
>
> A little bit. Where would you put it?

I think we'll need a representation of a controller in nvmet-rdma for
that. We've sort of gotten away without one so far, but I don't think we
can anymore with this.
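
Roughly, with such a representation the selection would run once per
controller and the queues would just inherit it (struct nvmet_rdma_ctrl
doesn't exist today - this only shows the shape I'm suggesting):

    struct nvmet_rdma_ctrl {
            struct p2pmem_dev *p2pmem;      /* chosen once per controller */
    };

    static void nvmet_rdma_queue_inherit_p2pmem(struct nvmet_rdma_queue *queue,
                                                struct nvmet_rdma_ctrl *ctrl)
    {
            /* instead of re-running the selection on every connect */
            queue->p2pmem = ctrl->p2pmem;
    }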

>>>      ret = nvmet_rdma_cm_accept(cm_id, queue, &event->param.conn);
>>>      if (ret)
>>>          goto release_queue;
>>
>> You seemed to skip the in-capsule buffers for p2pmem (inline_page), I'm
>> curious why?
>
> Yes, the thinking was that these transfers were small anyway, so there
> would not be a significant benefit to pushing them through p2pmem. There's
> really no reason why we couldn't do that if it made sense to, though.

I don't see an urgent reason for it either. I was just curious...