From: Stephen Bates <sbates@raithlin.com>
Subject: Re: [PATCH v2 10/10] nvmet: Optionally use PCI P2P memory
Date: Thu, 1 Mar 2018 16:15:04 +0000
To: Sagi Grimberg, Logan Gunthorpe, linux-kernel@vger.kernel.org,
    linux-pci@vger.kernel.org, linux-nvme@lists.infradead.org,
    linux-rdma@vger.kernel.org, linux-nvdimm@lists.01.org,
    linux-block@vger.kernel.org
Cc: Jens Axboe, Benjamin Herrenschmidt, Steve Wise, Alex Williamson,
    Keith Busch, Jérôme Glisse, Jason Gunthorpe, Bjorn Helgaas,
    Max Gurtovoy, Christoph Hellwig
In-Reply-To: <749e3752-4349-0bdf-5243-3d510c2b26db@grimberg.me>
References: <20180228234006.21093-1-logang@deltatee.com>
 <20180228234006.21093-11-logang@deltatee.com>
 <749e3752-4349-0bdf-5243-3d510c2b26db@grimberg.me>

> > Ideally, we'd want to use an NVME CMB buffer as p2p memory. This would
> > save an extra PCI transfer as the NVME card could just take the data
> > out of it's own memory. However, at this time, cards with CMB buffers
> > don't seem to be available.

> Can you describe what would be the plan to have it when these devices
> do come along? I'd say that p2p_dev needs to become a nvmet_ns reference
> and not from nvmet_ctrl. Then, when cmb capable devices come along, the
> ns can prefer to use its own cmb instead of locating a p2p_dev device?

Hi Sagi

Thanks for the review! That commit message is somewhat dated, as NVMe
controllers with CMBs that support RDS and WDS are now commercially
available [1]. However, we have not yet tried any optimization around
determining which p2p_dev to use. Your suggestion above looks good and
we can look into this kind of optimization in due course.

[1] http://www.eideticom.com/uploads/images/NoLoad_Product_Spec.pdf

>> +	ctrl->p2p_dev = pci_p2pmem_find(&ctrl->p2p_clients);

> This is the first p2p_dev found right? What happens if I have more than
> a single p2p device? In theory I'd have more p2p memory I can use. Have
> you considered making pci_p2pmem_find return the least used suitable
> device?

Yes, pci_p2pmem_find will always return the first valid p2p_dev found. At
the very least we should update this to allocate over all the valid
p2p_devs. Since the load on any given p2p_dev will vary over time, I
think a random allocation across the devices makes sense (at least for
now).
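To make that concrete, here is a rough sketch of the selection logic.
This is an illustration only, not code from the series: struct
p2p_candidate, pick_random_provider() and the use of rand() are
stand-ins, and in the kernel this would walk the real p2p_clients list
and draw from something like prandom_u32_max() instead.

/*
 * Sketch only: one-pass uniform random pick over the suitable p2p
 * providers (reservoir sampling with k = 1), which is roughly what a
 * "random allocation" flavour of pci_p2pmem_find() would do.
 */
#include <stdlib.h>

struct p2p_candidate {
	void *provider;			/* stand-in for struct pci_dev * */
	struct p2p_candidate *next;
};

struct p2p_candidate *pick_random_provider(struct p2p_candidate *head)
{
	struct p2p_candidate *chosen = NULL;
	struct p2p_candidate *c;
	unsigned int seen = 0;

	for (c = head; c; c = c->next) {
		seen++;
		/* Keep this candidate with probability 1/seen, so each of
		 * the N candidates ends up chosen with probability 1/N
		 * (modulo bias ignored for brevity). */
		if ((unsigned int)rand() % seen == 0)
			chosen = c;
	}
	return chosen;
}

This needs no per-device usage tracking, which keeps it simpler than a
least-used policy; we can revisit least-used once we have data from real
workloads.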
Stephen