Date: Thu, 10 May 2018 13:10:15 -0600
From: Alex Williamson
To: Stephen Bates
Cc: Jens Axboe, Keith Busch, linux-nvdimm@lists.01.org, linux-rdma@vger.kernel.org, linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, Christoph Hellwig, linux-block@vger.kernel.org, Jerome Glisse, Jason Gunthorpe, Bjorn Helgaas, Benjamin Herrenschmidt, Max Gurtovoy, Christian König
Subject: Re: [PATCH v4 04/14] PCI/P2PDMA: Clear ACS P2P flags for all devices behind switches
Message-ID: <20180510131015.4ad59477@w520.home>

On Thu, 10 May 2018 18:41:09 +0000
"Stephen Bates" wrote:

> > The reason is that GPUs are giving up on PCIe (see all the
> > specialized links like NVLink that are popping up in the GPU
> > space). So for fast GPU interconnect we have these new links.
>
> I look forward to Nvidia open-licensing NVLink to anyone who wants
> to use it ;-).

No doubt. The marketing for NVLink is quick to point out its mesh
topology, but I haven't seen any technical documents that describe
its isolation capabilities or IOMMU interaction. Whether those were
designed in or are an afterthought, I have no idea.

> > Also, IOMMU isolation does matter a lot to us. Think of someone
> > using this peer-to-peer capability to gain control of a server in
> > the cloud.

From that perspective, do we have any idea what NVLink means for
topology and for IOMMU-provided isolation and translation? I've seen
a device-assignment user report that seems to suggest it might
pretend to be PCIe-compatible, but the assigned GPU ultimately
doesn't work correctly in a VM, so perhaps the software compatibility
is only so deep. Thanks,

Alex
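
For readers following along: the "ACS P2P flags" in the patch title are
the redirect bits in the PCIe Access Control Services capability of
switch ports. A minimal, hypothetical sketch of clearing them follows;
the register and bit names come from <uapi/linux/pci_regs.h>, but the
function name and the exact set of bits cleared are illustrative, not
the actual patch code from this thread.

#include <linux/errno.h>
#include <linux/pci.h>

/*
 * Illustrative only: clear the ACS peer-to-peer redirect bits on a
 * switch port so that P2P TLPs between devices below it are routed
 * directly, rather than redirected upstream through the root complex
 * (and the IOMMU). This is exactly the isolation trade-off debated in
 * the thread: direct P2P traffic bypasses IOMMU translation.
 */
static int demo_clear_acs_p2p_flags(struct pci_dev *pdev)
{
	int pos;
	u16 ctrl;

	/* Locate the ACS extended capability, if this port has one. */
	pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ACS);
	if (!pos)
		return -ENODEV;

	pci_read_config_word(pdev, pos + PCI_ACS_CTRL, &ctrl);

	/*
	 * Clear P2P Request Redirect, P2P Completion Redirect and
	 * P2P Egress Control.
	 */
	ctrl &= ~(PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_EC);

	pci_write_config_word(pdev, pos + PCI_ACS_CTRL, ctrl);
	return 0;
}

Whether NVLink exposes anything equivalent to these per-port controls
is precisely the open question Alex raises above.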