From: "Stephen Bates" <sbates@raithlin.com>
To: Bjorn Helgaas <helgaas@kernel.org>,
Logan Gunthorpe <logang@deltatee.com>
Cc: "Sinan Kaya" <okaya@codeaurora.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"linux-pci@vger.kernel.org" <linux-pci@vger.kernel.org>,
"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>,
"linux-nvdimm@lists.01.org" <linux-nvdimm@lists.01.org>,
"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
"Christoph Hellwig" <hch@lst.de>, "Jens Axboe" <axboe@kernel.dk>,
"Keith Busch" <keith.busch@intel.com>,
"Sagi Grimberg" <sagi@grimberg.me>,
"Bjorn Helgaas" <bhelgaas@google.com>,
"Jason Gunthorpe" <jgg@mellanox.com>,
"Max Gurtovoy" <maxg@mellanox.com>,
"Dan Williams" <dan.j.williams@intel.com>,
"Jérôme Glisse" <jglisse@redhat.com>,
"Benjamin Herrenschmidt" <benh@kernel.crashing.org>,
"Alex Williamson" <alex.williamson@redhat.com>
Subject: Re: [PATCH v3 01/11] PCI/P2PDMA: Support peer-to-peer memory
Date: Sat, 24 Mar 2018 15:28:43 +0000 [thread overview]
Message-ID: <121026DC-40C7-4F4E-BE27-BDA652BDEB6A@raithlin.com> (raw)
In-Reply-To: <20180324034947.GE210003@bhelgaas-glaptop.roam.corp.google.com>
> That would be very nice but many devices do not support the internal
> route.
But Logan, in the NVMe case we are discussing movement within a single function (i.e., from an NVMe namespace to an NVMe CMB on the same function). Bjorn is discussing movement between two functions (PFs or VFs) in the same PCIe EP. In the case of multi-function endpoints I think the standard requires those devices to support internal DMAs for transfers between those functions (but does not require it within a function).
So I think the summary is:
1. There is no requirement for a single function to support internal DMAs, but in the case of NVMe we do have a protocol-specific way for an NVMe function to indicate that it supports internal DMA, via the CMB BAR. Other protocols may also have such methods, but I am not aware of any at this time.
2. For multi-function endpoints I think it is a requirement that DMAs *between* functions are supported via an internal path, but this can be overridden by ACS when supported in the EP.
3. For multi-function endpoints there is no requirement to support internal DMA within each individual function (i.e., point 1 extended to each function in a MF device).
Based on my review of the specification, I concur with Bjorn that p2pdma between functions in a MF endpoint is guaranteed by the standard. However, if the p2pdma involves only a single function in a MF device, then for now we can only support NVMe CMBs. Let's review the options for supporting this in the next respin.
Stephen