Date: Tue, 13 Mar 2018 18:08:50 -0500
From: Bjorn Helgaas
To: Stephen Bates
Cc: Logan Gunthorpe, Sinan Kaya, linux-kernel@vger.kernel.org,
	linux-pci@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-rdma@vger.kernel.org, linux-nvdimm@lists.01.org,
	linux-block@vger.kernel.org, Christoph Hellwig, Jens Axboe,
	Keith Busch, Sagi Grimberg, Bjorn Helgaas, Jason Gunthorpe,
	Max Gurtovoy, Dan Williams, Jérôme Glisse,
	Benjamin Herrenschmidt, Alex Williamson
Subject: Re: [PATCH v3 01/11] PCI/P2PDMA: Support peer-to-peer memory

On Tue, Mar 13, 2018 at 10:31:55PM +0000, Stephen Bates wrote:
> >> It sounds like you have very tight hardware expectations for this to work
> >> at this moment. You also don't want to generalize this code for others and
> >> address the shortcomings.
> > No, that's the way the community has pushed this work
>
> Hi Sinan
>
> Thanks for all the input. As Logan has pointed out the switch
> requirement is something that has evolved over time based on input
> from the community. You are more than welcome to have an opinion on
> this (and you have made that opinion clear ;-)). Over time the
> patchset may evolve from its current requirements but right now we
> are aligned with the feedback from the community.

This part of the community hasn't been convinced of the need to have two
bridges, e.g., both an Upstream Port and a Downstream Port, or two
conventional PCI bridges, above the peers.

Every PCI-to-PCI bridge is required to support routing transactions
between devices on its secondary side.  Therefore, I think it is
sufficient to verify that the potential peers share a single common
upstream bridge.  This could be a conventional PCI bridge, a Switch
Downstream Port, or a Root Port.

I've seen the response that peers directly below a Root Port could not
DMA to each other through the Root Port because of the "route to self"
issue, and I'm not disputing that.

But enforcing a requirement for two upstream bridges introduces a weird
restriction on conventional PCI topologies, makes the code hard to read,
and I don't think it's necessary.
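Roughly what I have in mind is something like the following (an untested
sketch; the function name is made up, but pci_upstream_bridge() already
exists):

#include <linux/pci.h>

/*
 * Untested sketch: treat peer-to-peer DMA as possible when both devices
 * sit directly below the same bridge, whatever kind of bridge that is
 * (conventional PCI bridge, Switch Downstream Port, or Root Port).
 */
static bool pci_p2pdma_peers_share_bridge(struct pci_dev *a,
					  struct pci_dev *b)
{
	struct pci_dev *up_a = pci_upstream_bridge(a);
	struct pci_dev *up_b = pci_upstream_bridge(b);

	return up_a && up_a == up_b;
}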
If it *is* necessary because Root Ports and devices below them behave
differently than they do in conventional PCI, I think you should include
a reference to the relevant section of the spec and check directly for a
Root Port.  I would prefer that over trying to exclude Root Ports by
looking for two upstream bridges.
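For concreteness, the kind of direct check I mean would look something
like this (again untested, and the helper is hypothetical):

/*
 * Untested sketch: if Root Ports really are special here, test for them
 * explicitly (with a citation of the relevant spec section) instead of
 * requiring two levels of bridges above the peers.
 */
static bool pci_p2pdma_bridge_allows_p2p(struct pci_dev *bridge)
{
	if (pci_is_pcie(bridge) &&
	    pci_pcie_type(bridge) == PCI_EXP_TYPE_ROOT_PORT)
		return false;	/* cite the relevant spec section here */

	return true;
}

Bjorn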