From: "Freyensee, James P" <james.p.freyensee@intel.com>
To: "hch@lst.de" <hch@lst.de>, "Busch, Keith" <keith.busch@intel.com>
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	"ming.l@ssi.samsung.com" <ming.l@ssi.samsung.com>,
	"axboe@kernel.dk" <axboe@kernel.dk>,
	"sagi@grimberg.me" <sagi@grimberg.me>
Subject: Re: [PATCH 5/5] nvme-rdma: add a NVMe over Fabrics RDMA host driver
Date: Tue, 7 Jun 2016 15:15:31 +0000	[thread overview]
Message-ID: <1465312530.3505.2.camel@intel.com> (raw)
In-Reply-To: <20160607144753.GA28414@localhost.localdomain>

On Tue, 2016-06-07 at 10:47 -0400, Keith Busch wrote:
> On Mon, Jun 06, 2016 at 11:23:35PM +0200, Christoph Hellwig wrote:
> > To connect to all NVMe over Fabrics controllers reachable on a given
> > target port using RDMA/CM, use the following command:
> > 
> > 	nvme connect-all -t rdma -a $IPADDR
> > 
> > This requires the latest version of nvme-cli with Fabrics support.
> 
> Is there a public fork or patch set available for the user tools? I'd
> be happy to merge that in.
> 
> Overall, this whole series is looking really good. 

Special thanks to Sagi and Christoph for organizing all the patches of
this code project into a dynamite submission series for these mailing
lists.

> I'll try this out on
> some machines today.
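
For anyone trying the series as discussed above, the host-side flow behind
the quoted connect-all command looks roughly like this. This is a minimal
sketch, not taken from the thread: the address and NQN below are
placeholders, and the commands assume a Fabrics-enabled build of nvme-cli
as noted in the patch description.

	# Placeholder target address; substitute your RDMA-capable target's IP.
	# connect-all discovers the controllers the target exposes and
	# connects to each of them in one step.
	nvme connect-all -t rdma -a 192.168.1.100

	# Alternatively, connect to a single subsystem by name (placeholder NQN).
	nvme connect -t rdma -a 192.168.1.100 -n nqn.2016-06.io.example:testsubsys

	# The newly attached controllers should then show up as /dev/nvme* devices.
	nvme list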

Thread overview: 76+ messages
2016-06-06 21:23 NVMe over Fabrics RDMA transport drivers Christoph Hellwig
2016-06-06 21:23 ` [PATCH 1/5] blk-mq: Introduce blk_mq_reinit_tagset Christoph Hellwig
2016-06-06 21:23 ` [PATCH 2/5] nvme: add new reconnecting controller state Christoph Hellwig
2016-06-06 21:23 ` [PATCH 3/5] nvme-rdma.h: Add includes for nvme rdma_cm negotiation Christoph Hellwig
2016-06-07 11:59   ` Sagi Grimberg
2016-06-06 21:23 ` [PATCH 4/5] nvmet-rdma: add a NVMe over Fabrics RDMA target driver Christoph Hellwig
2016-06-07 12:00   ` Sagi Grimberg
2016-06-09 21:42     ` Steve Wise
2016-06-09 21:54       ` Ming Lin
2016-06-14 14:32       ` Christoph Hellwig
2016-06-09 23:03     ` Steve Wise
2016-06-14 14:31       ` Christoph Hellwig
2016-06-14 15:14         ` Steve Wise
     [not found]         ` <00ea01d1c64f$64db8880$2e929980$@opengridcomputing.com>
2016-06-14 15:23           ` Steve Wise
2016-06-14 16:10       ` Steve Wise
2016-06-14 16:22         ` Steve Wise
2016-06-15 18:32           ` Sagi Grimberg
2016-06-14 16:47         ` Hefty, Sean
2016-06-06 21:23 ` [PATCH 5/5] nvme-rdma: add a NVMe over Fabrics RDMA host driver Christoph Hellwig
2016-06-07 12:00   ` Sagi Grimberg
2016-06-07 14:47   ` Keith Busch
2016-06-07 15:15     ` Freyensee, James P [this message]
2016-06-07 11:57 ` NVMe over Fabrics RDMA transport drivers Sagi Grimberg
2016-06-07 12:01   ` Christoph Hellwig
2016-06-07 14:55   ` Woodruff, Robert J
2016-06-07 20:14     ` Steve Wise
2016-06-07 20:27       ` Christoph Hellwig
2016-07-06 12:55 NVMe over Fabrics RDMA transport drivers V2 Christoph Hellwig
2016-07-06 12:55 ` [PATCH 5/5] nvme-rdma: add a NVMe over Fabrics RDMA host driver Christoph Hellwig
2016-07-08 13:53   ` Steve Wise
