From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S932079AbdDDPrJ (ORCPT );
	Tue, 4 Apr 2017 11:47:09 -0400
Received: from quartz.orcorp.ca ([184.70.90.242]:33987 "EHLO quartz.orcorp.ca"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1754793AbdDDPrF (ORCPT );
	Tue, 4 Apr 2017 11:47:05 -0400
Date: Tue, 4 Apr 2017 09:46:29 -0600
From: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
To: Sagi Grimberg
Cc: Logan Gunthorpe, Christoph Hellwig, "James E.J. Bottomley",
	"Martin K. Petersen", Jens Axboe, Steve Wise, Stephen Bates,
	Max Gurtovoy, Dan Williams, Keith Busch,
	linux-pci@vger.kernel.org, linux-scsi@vger.kernel.org,
	linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org,
	linux-nvdimm@ml01.01.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC 6/8] nvmet: Be careful about using iomem accesses when
	dealing with p2pmem
Message-ID: <20170404154629.GA13552@obsidianresearch.com>
References: <1490911959-5146-1-git-send-email-logang@deltatee.com>
	<1490911959-5146-7-git-send-email-logang@deltatee.com>
	<080b68b4-eba3-861c-4f29-5d829425b5e7@grimberg.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <080b68b4-eba3-861c-4f29-5d829425b5e7@grimberg.me>
User-Agent: Mutt/1.5.24 (2015-08-30)
X-Broken-Reverse-DNS: no host name found for IP address 10.0.0.156
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Apr 04, 2017 at 01:59:26PM +0300, Sagi Grimberg wrote:

> Note that the nvme completion queues are still on the host memory, so
> this means we have lost the ordering between data and completions as
> they go to different pcie targets.

Hmm, in this simple up/down case with a switch, I think it might
actually be OK.

Transactions might not complete at the NVMe device before the CPU
processes the RDMA completion; however, due to the PCI-E ordering
rules, new TLPs directed to the NVMe device will complete after the
RDMA TLPs and thus observe the new data (i.e. the fabric is order
preserving).

It would be very hard to use P2P if fabric ordering is not preserved.

Jason