From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Kani, Toshimitsu" <toshi.kani@hpe.com>
To: snitzer@redhat.com
CC: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org,
 dan.j.williams@intel.com, dm-devel@redhat.com,
 ross.zwisler@linux.intel.com, linux-nvdimm@ml01.01.org, agk@redhat.com
Subject: Re: dm stripe: add DAX support
Date: Tue, 12 Jul 2016 22:22:05 +0000
Message-ID: <1468362104.8908.43.camel@hpe.com>
References: <1466792610-30369-1-git-send-email-toshi.kani@hpe.com>
 <20160624182859.GD13898@redhat.com>
In-Reply-To: <20160624182859.GD13898@redhat.com>
Content-Type: text/plain;
 charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

On Fri, 2016-06-24 at 14:29 -0400, Mike Snitzer wrote:
> 
> BTW, if in your testing you could evaluate/quantify any extra overhead
> from DM that'd be useful to share.  It could be there are bottlenecks
> that need to be fixed, etc.

Here are some results from the fio benchmark.  The test is
single-threaded and is bound to one CPU.

 DAX  LVM   IOPS   NOTE
 ---------------------------------------
  Y    N    790K
  Y    Y    754K   5% overhead with LVM
  N    N    567K
  N    Y    457K   20% overhead with LVM

 DAX: Y: mount -o dax,noatime, N: mount -o noatime
 LVM: Y: dm-linear on pmem0 device, N: pmem0 device
 fio: bs=4k, size=2G, direct=1, rw=randread, numjobs=1

Of the 5% overhead with DAX/LVM, the new DM direct_access interfaces
account for less than 0.5%.

 dm_blk_direct_access 0.28%
 linear_direct_access 0.17%

The average latency increases slightly, from 0.93us to 0.95us.  I think
most of the overhead comes from the submit_bio() path, which is used
only for accessing metadata with DAX.  I believe this is due to cloning
the bio for each request in DM.  There are 12% more L2 misses in total.

Without DAX, 20% overhead is observed with LVM.  Average latency
increases from 1.39us to 1.82us.  Without DAX, the bio is cloned for
both data and metadata.

Thanks,
-Toshi
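For reference, the fio parameters quoted above map onto a job file along
these lines.  This is a sketch only: the test file path and the choice of
CPU 0 for pinning are illustrative assumptions, since the exact setup is
not shown in the message.

```ini
; Single-threaded 4KiB random-read job matching the quoted parameters:
; bs=4k, size=2G, direct=1, rw=randread, numjobs=1, bound to one CPU.
[global]
bs=4k
size=2G
; O_DIRECT I/O (on a non-DAX mount this bypasses the page cache)
direct=1
rw=randread
numjobs=1
; illustrative: pin the job to CPU 0 ("bound to one CPU")
cpus_allowed=0

[randread-pmem]
; illustrative path: a file on the filesystem mounted from /dev/pmem0
; (directly, or via a dm-linear device for the LVM=Y cases), with or
; without -o dax depending on the row of the table above
filename=/mnt/pmem/testfile
```

Run as `fio randread.fio`; the DAX=Y/N and LVM=Y/N cases differ only in
how the filesystem under the test file is mounted, not in the job file.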