From mboxrd@z Thu Jan  1 00:00:00 1970
From: Logan Gunthorpe
Subject: Re: [RFC 6/8] nvmet: Be careful about using iomem accesses when dealing with p2pmem
Date: Mon, 10 Apr 2017 10:03:45 -0600
In-Reply-To: <7fcc3ac8-8b96-90f5-3942-87f999c7499d@grimberg.me>
References: <1490911959-5146-1-git-send-email-logang@deltatee.com>
 <1490911959-5146-7-git-send-email-logang@deltatee.com>
 <080b68b4-eba3-861c-4f29-5d829425b5e7@grimberg.me>
 <20170404154629.GA13552@obsidianresearch.com>
 <4df229d8-8124-664a-9bc4-6401bc034be1@grimberg.me>
 <3E85B4D4-9EBC-4299-8209-2D8740947764@raithlin.com>
 <7fcc3ac8-8b96-90f5-3942-87f999c7499d@grimberg.me>
To: Sagi Grimberg, Stephen Bates, Jason Gunthorpe
Cc: Jens Axboe, "James E.J. Bottomley", "Martin K. Petersen", Steve Wise,
 Keith Busch, Max Gurtovoy, Christoph Hellwig, linux-scsi, linux-rdma,
 linux-pci, linux-kernel, linux-nvme, linux-nvdimm
List-Id: linux-nvdimm@lists.01.org

On 10/04/17 02:29 AM, Sagi Grimberg wrote:
> What you are saying is surprising to me. The switch needs to preserve
> ordering across different switch ports ??
> You are suggesting that there is a *switch-wide* state that tracks
> MemRds never pass MemWrs across all the switch ports? That is a very
> non-trivial statement...

Yes, the PCIe spec requires transactions to be strongly ordered throughout the fabric, so switches must maintain ordering state across all of their ports.
Without that, it would be impossible for PCI cards to work together, even when they use system memory to do so. I also believe it was done this way to maintain maximum compatibility with the legacy PCI bus.

There is also a Relaxed Ordering bit that allows specific transactions to ignore the ordering rules, which can help performance.

Obviously, maintaining strong ordering like this becomes impossible if you have some kind of complex multi-path fabric.

Logan