From mboxrd@z Thu Jan  1 00:00:00 1970
From: Sagi Grimberg <sagi@grimberg.me>
To: Stephen Bates, Jason Gunthorpe
Date: Mon, 10 Apr 2017 11:29:57 +0300
Subject: Re: [RFC 6/8] nvmet: Be careful about using iomem accesses when dealing with p2pmem
Message-ID: <7fcc3ac8-8b96-90f5-3942-87f999c7499d@grimberg.me>
In-Reply-To: <3E85B4D4-9EBC-4299-8209-2D8740947764@raithlin.com>
References: <1490911959-5146-1-git-send-email-logang@deltatee.com>
 <1490911959-5146-7-git-send-email-logang@deltatee.com>
 <080b68b4-eba3-861c-4f29-5d829425b5e7@grimberg.me>
 <20170404154629.GA13552@obsidianresearch.com>
 <4df229d8-8124-664a-9bc4-6401bc034be1@grimberg.me>
 <3E85B4D4-9EBC-4299-8209-2D8740947764@raithlin.com>

> Sagi
>
> As long as legA, legB and the RC are all connected to the same switch,
> then ordering will be preserved (I think many other topologies also
> work). Here is how it would work for the problem case you are
> concerned about (which is a read from the NVMe drive):
>
> 1. The disk device DMAs out the data to the p2pmem device via a string
>    of PCIe MemWr TLPs.
> 2. The disk device writes to the completion queue (in system memory)
>    via a MemWr TLP.
> 3. The last of the MemWrs from step 1 might have got stalled in the
>    PCIe switch due to congestion, but if so they are stalled in the
>    egress path of the switch for the p2pmem port.
> 4. The RC determines the IO is complete when the TLP associated with
>    step 2 updates the memory associated with the CQ. It issues some
>    operation to read the p2pmem.
> 5. Regardless of whether the MemRd TLP comes from the RC or another
>    device connected to the switch, it is queued in the egress queue
>    for the p2pmem FIO behind the last DMA TLP (from step 1).
>
> PCIe ordering ensures that this MemRd cannot overtake the MemWr (reads
> can never pass writes). Therefore the MemRd can never get to the
> p2pmem device until after the last DMA MemWr has.

What you are saying is surprising to me. The switch needs to preserve
ordering across different switch ports?

You are suggesting that there is *switch-wide* state which tracks that
MemRds never pass MemWrs across all of the switch ports? That is a very
non-trivial statement...
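
[Editor's note: to make the ordering rule the quoted walk-through leans
on concrete, here is a minimal C sketch of a single egress queue on the
switch port facing the p2pmem device. It is a toy model under the
assumption of strictly FIFO service per egress queue; it is not a
description of any real switch or of kernel code, and the tlp struct,
the queue, and the tags are invented for illustration.]

    /*
     * Toy model of one PCIe switch egress queue, illustrating the rule
     * from the quoted mail: a non-posted MemRd enqueued behind posted
     * MemWrs is never serviced ahead of them ("reads never pass
     * writes"). All names here are invented for the example.
     */
    #include <stdio.h>

    enum tlp_type { MEM_WR, MEM_RD };

    struct tlp {
        enum tlp_type type;
        int tag;                /* identifies the transaction */
    };

    #define QUEUE_DEPTH 8

    /* Egress queue of the switch port that faces the p2pmem device. */
    static struct tlp queue[QUEUE_DEPTH];
    static int head, tail;

    static void enqueue(enum tlp_type type, int tag)
    {
        queue[tail++ % QUEUE_DEPTH] = (struct tlp){ type, tag };
    }

    /*
     * Strictly FIFO service: a MemRd that arrives behind stalled
     * MemWrs cannot be serviced ahead of them, so it reaches the
     * p2pmem device only after the last DMA write has (step 5 above).
     */
    static void drain(void)
    {
        while (head < tail) {
            struct tlp *t = &queue[head++ % QUEUE_DEPTH];

            printf("egress -> %s tag=%d\n",
                   t->type == MEM_WR ? "MemWr" : "MemRd", t->tag);
        }
    }

    int main(void)
    {
        /* Step 1: DMA writes from the disk; the last one is stalled. */
        enqueue(MEM_WR, 1);
        enqueue(MEM_WR, 2);
        /* Steps 4-5: the read of p2pmem queues up behind them. */
        enqueue(MEM_RD, 3);

        drain();  /* prints MemWr 1, MemWr 2, then MemRd 3, in order */
        return 0;
    }

Note what the model deliberately does not show: it covers only the one
egress queue, which is precisely why the reply above asks whether such
ordering can really be guaranteed across different switch ports.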