From: Christoph Hellwig <hch@lst.de>
To: axboe@kernel.dk, keith.busch@intel.com
Cc: linux-nvme@lists.infradead.org, linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: NVMe over Fabrics RDMA transport drivers
Date: Mon, 6 Jun 2016 23:23:30 +0200
Message-Id: <1465248215-18186-1-git-send-email-hch@lst.de>
X-Mailer: git-send-email 2.1.4

This patch set implements the NVMe over Fabrics RDMA host and target
drivers.

The host driver is tied into the NVMe host stack and implements the RDMA
transport under the NVMe core and Fabrics modules.  The NVMe over Fabrics
RDMA host module is responsible for establishing a connection against a
given target/controller, RDMA event handling and data-plane command
processing.

The target driver hooks into the NVMe target core stack and implements
the RDMA transport.  The module is responsible for RDMA connection
establishment, RDMA event handling and data-plane RDMA command
processing.  RDMA connection establishment is done using RDMA/CM and IP
resolution.  The data-plane command sequence follows the classic storage
model where the target pushes or pulls the data.
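
For readers not familiar with the RDMA/CM flow mentioned above, host-side
connection establishment roughly follows the standard rdma_cm sequence
sketched below.  This is an illustrative sketch only, not code from the
patches; struct example_queue, example_cm_handler() and example_connect()
are placeholder names.  Address and route resolution are asynchronous,
which is why the sequence is driven from the CM event handler:

/* Illustrative sketch only -- not the code from this patch set. */
#include <linux/completion.h>
#include <linux/err.h>
#include <linux/errno.h>
#include <net/net_namespace.h>
#include <rdma/rdma_cm.h>

struct example_queue {			/* placeholder per-queue state */
	struct rdma_cm_id	*cm_id;
	struct completion	cm_done;
	int			cm_error;
};

static int example_cm_handler(struct rdma_cm_id *cm_id,
		struct rdma_cm_event *ev)
{
	struct example_queue *q = cm_id->context;

	switch (ev->event) {
	case RDMA_CM_EVENT_ADDR_RESOLVED:
		/* IP resolution done; continue with rdma_resolve_route() */
		break;
	case RDMA_CM_EVENT_ROUTE_RESOLVED:
		/* create CQs and the RC queue pair, then rdma_connect() */
		break;
	case RDMA_CM_EVENT_ESTABLISHED:
		q->cm_error = 0;
		complete(&q->cm_done);
		break;
	default:
		q->cm_error = -ECONNRESET;
		complete(&q->cm_done);
		break;
	}
	return 0;
}

static int example_connect(struct example_queue *q,
		struct sockaddr *src, struct sockaddr *dst)
{
	int ret;

	init_completion(&q->cm_done);

	/* CM id bound to our event handler, reliable-connected QP type */
	q->cm_id = rdma_create_id(&init_net, example_cm_handler, q,
			RDMA_PS_TCP, IB_QPT_RC);
	if (IS_ERR(q->cm_id))
		return PTR_ERR(q->cm_id);

	/* kick off IP -> RDMA device resolution; completion is reported
	 * through the CM events handled above
	 */
	ret = rdma_resolve_addr(q->cm_id, src, dst, 1000 /* ms */);
	if (ret) {
		rdma_destroy_id(q->cm_id);
		return ret;
	}

	wait_for_completion(&q->cm_done);
	return q->cm_error;
}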
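
Similarly, "the target pushes or pulls the data" means that for write
commands the target fetches the payload from host memory with an RDMA
READ against the remote address/rkey carried in the command, and for read
commands it pushes the payload with an RDMA WRITE.  The sketch below
shows the pull direction using the plain verbs work-request interface; it
is a hypothetical example, not the patch code, and example_pull_data()
and its parameters are placeholders:

/* Illustrative sketch only -- not the code from this patch set. */
#include <rdma/ib_verbs.h>

static int example_pull_data(struct ib_qp *qp, struct ib_cqe *cqe,
		struct ib_sge *sgl, int num_sge,
		u64 remote_addr, u32 rkey)
{
	struct ib_rdma_wr rdma_wr = {
		.wr = {
			.wr_cqe		= cqe,		/* completion handler */
			.sg_list	= sgl,		/* local data buffers */
			.num_sge	= num_sge,
			.opcode		= IB_WR_RDMA_READ,
			.send_flags	= IB_SEND_SIGNALED,
		},
		.remote_addr	= remote_addr,		/* host-side address */
		.rkey		= rkey,			/* host-side rkey */
	};
	struct ib_send_wr *bad_wr;

	/* the push direction is identical except that the opcode is
	 * IB_WR_RDMA_WRITE
	 */
	return ib_post_send(qp, &rdma_wr.wr, &bad_wr);
}

Once the data transfer completes, the target finishes the exchange by
sending the NVMe completion back to the host over the send queue.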