From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Stephen Bates" <sbates@raithlin.com>
Subject: Re: [RFC 0/8] Copy Offload with Peer-to-Peer PCI Memory
Date: Tue, 25 Apr 2017 21:23:31 +0000
Message-ID: <705B114E-A084-4C0C-84BD-47E752FEE198@raithlin.com>
In-Reply-To: <1493019397.3171.118.camel@oracle.com>
References: <1490911959-5146-1-git-send-email-logang@deltatee.com> <1491974532.7236.43.camel@kernel.crashing.org> <5ac22496-56ec-025d-f153-140001d2a7f9@deltatee.com> <1492034124.7236.77.camel@kernel.crashing.org> <81888a1e-eb0d-cbbc-dc66-0a09c32e4ea2@deltatee.com> <20170413232631.GB24910@bhelgaas-glaptop.roam.corp.google.com> <20170414041656.GA30694@obsidianresearch.com> <1492169849.25766.3.camel@kernel.crashing.org> <630c1c63-ff17-1116-e069-2b8f93e50fa2@deltatee.com> <20170414190452.GA15679@bhelgaas-glaptop.roam.corp.google.com> <1492207643.25766.18.camel@kernel.crashing.org> <1492311719.25766.37.camel@kernel.crashing.org> <5e43818e-8c6b-8be8-23ff-b798633d2a73@deltatee.com> <1492381907.25766.49.camel@kernel.crashing.org> <1493019397.3171.118.camel@oracle.com>
To: Knut Omang, Benjamin Herrenschmidt, Logan Gunthorpe, Dan Williams
Cc: Jens Axboe, Jason Gunthorpe, "James E.J. Bottomley", "Martin K. Petersen", linux-rdma@vger.kernel.org, linux-pci@vger.kernel.org, Steve Wise, linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, Keith Busch, Jerome Glisse, Bjorn Helgaas, linux-scsi, linux-nvdimm, Max Gurtovoy, Christoph Hellwig
List-Id: linux-nvdimm@lists.01.org
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Language: en-US

> My first reflex when reading this thread was to think that this whole domain
> lends itself excellently to testing via Qemu. Could it be that doing this in
> the opposite direction might be a safer approach in the long run even though
> (significantly) more work up-front?

While the idea of using QEMU for this work is attractive, it will be a long time before QEMU is in a position to support this development.

Another approach is to propose a common development platform for p2pmem work using a platform we know is going to work. This is an extreme version of the whitelisting approach that was discussed on this thread. We can list a very specific set of hardware (motherboard, PCIe endpoints and (possibly) a PCIe switch enclosure) that has been shown to work and that others can copy for their development purposes.

p2pmem.io perhaps ;-)?

Stephen
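[Editorial note: the "extreme whitelisting" idea above — approving only specific, known-good hardware combinations for p2pmem development — can be sketched as a simple lookup. This is purely an illustrative sketch; the component categories, device IDs, and function names below are invented for the example and do not come from the thread.]

```python
# Sketch of an "extreme whitelist": a platform is approved for p2pmem
# development only if every one of its components appears on a known-good
# list. All IDs below are illustrative placeholders, not a real list.

KNOWN_GOOD = {
    "motherboard": {"vendorX:boardA"},
    "pcie_endpoint": {"8086:0953", "1b96:2241"},  # hypothetical NVMe devices
    "pcie_switch": {"10b5:8724"},                 # hypothetical PCIe switch
}

def platform_supported(components):
    """Return True only if every (kind, id) pair is on the whitelist."""
    return all(dev_id in KNOWN_GOOD.get(kind, set())
               for kind, dev_id in components)

# A platform built entirely from listed parts is accepted...
good = [("motherboard", "vendorX:boardA"),
        ("pcie_endpoint", "8086:0953"),
        ("pcie_switch", "10b5:8724")]

# ...while a single unlisted component rejects the whole platform.
bad = good + [("pcie_endpoint", "ffff:ffff")]
```

The deny-by-default shape is the point: unlike a per-device quirk list, nothing is assumed to support PCIe peer-to-peer transactions unless the exact combination has been demonstrated to work.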