From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from CAN01-QB1-obe.outbound.protection.outlook.com (mail-eopbgr660104.outbound.protection.outlook.com [40.107.66.104]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-SHA384 (256/256 bits)) (No client certificate requested) by ml01.01.org (Postfix) with ESMTPS id 891F720348606 for ; Thu, 10 May 2018 11:41:11 -0700 (PDT)
From: "Stephen Bates" 
Subject: Re: [PATCH v4 04/14] PCI/P2PDMA: Clear ACS P2P flags for all devices behind switches
Date: Thu, 10 May 2018 18:41:09 +0000
Message-ID: 
References: <20180508133407.57a46902@w520.home> <5fc9b1c1-9208-06cc-0ec5-1f54c2520494@deltatee.com> <20180508141331.7cd737cb@w520.home> <20180508205005.GC15608@redhat.com> <7FFB9603-DF9F-4441-82E9-46037CB6C0DE@raithlin.com> <4e0d0b96-ab02-2662-adf3-fa956efd294c@deltatee.com> <2fc61d29-9eb4-d168-a3e5-955c36e5d821@amd.com> <94C8FE12-7FC3-48BD-9DCA-E6A427E71810@raithlin.com> <20180510144137.GA3652@redhat.com>
In-Reply-To: <20180510144137.GA3652@redhat.com>
Content-Language: en-US
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Errors-To: linux-nvdimm-bounces@lists.01.org
Sender: "Linux-nvdimm" 
To: Jerome Glisse
Cc: Jens Axboe , Keith Busch , "linux-nvdimm@lists.01.org" , "linux-rdma@vger.kernel.org" , "linux-pci@vger.kernel.org" , "linux-kernel@vger.kernel.org" , "linux-nvme@lists.infradead.org" , Christoph Hellwig , "linux-block@vger.kernel.org" , Alex Williamson , Jason Gunthorpe , Bjorn Helgaas , Benjamin Herrenschmidt , Bjorn Helgaas , Max Gurtovoy , Christian König

Hi Jerome

> Note on GPU we do would not rely on ATS for peer to peer. Some part
> of the GPU (DMA engines) do not necessarily support ATS. Yet those
> are the part likely to be use in peer to peer.

OK this is good to know.
I agree the DMA engine is probably one of the GPU components most applicable to p2pdma.

> We (ake GPU people aka the good guys ;)) do no want to do peer to peer
> for performance reasons ie we do not care having our transaction going
> to the root complex and back down the destination. At least in use case
> i am working on this is fine.

If the GPU people are the good guys does that make the NVMe people the bad guys ;-). If so, what are the RDMA people??? Again good to know.

> Reasons is that GPU are giving up on PCIe (see all specialize link like
> NVlink that are popping up in GPU space). So for fast GPU inter-connect
> we have this new links.

I look forward to Nvidia open-licensing NVLink to anyone who wants to use it ;-). Or maybe we'll all just switch to OpenGenCCIX when the time comes.

> Also the IOMMU isolation do matter a lot to us. Think someone using this
> peer to peer to gain control of a server in the cloud.

I agree that IOMMU isolation is very desirable. Hence the desire to ensure we can keep the IOMMU on while doing p2pdma if at all possible whilst still delivering the desired performance to the user.
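(As an aside, for anyone following along: whether a given switch port currently redirects P2P TLPs upstream through the root complex/IOMMU shows up in the ACS control bits that lspci decodes. A rough sketch below; the device address and the sample output are hypothetical, captured here as a string so the filter can be shown standalone.)

```shell
# On real hardware this would be something like:
#   sudo lspci -s 0000:02:00.0 -vvv | grep -E 'ACS(Cap|Ctl)'
# Sample decoded output for a (hypothetical) switch downstream port:
sample='Capabilities: [148 v1] Access Control Services
ACSCap: SrcValid+ TransBlk+ ReqRedir+ CmplRedir+ UpstreamFwd+ EgressCtrl- DirectTrans-
ACSCtl: SrcValid+ TransBlk- ReqRedir+ CmplRedir+ UpstreamFwd+ EgressCtrl- DirectTrans-'

# ReqRedir+/CmplRedir+ in ACSCtl mean P2P requests/completions are being
# redirected upstream (through the root complex, hence the IOMMU) rather
# than routed peer-to-peer inside the switch.
printf '%s\n' "$sample" | grep -E 'ACS(Cap|Ctl)'
```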
Stephen

_______________________________________________
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm