* [PATCH V4 00/15] lightnvm: pblk: implement 2.0 support @ 2018-02-28 15:49 ` Javier González 0 siblings, 0 replies; 71+ messages in thread
From: Javier González @ 2018-02-28 15:49 UTC (permalink / raw)
To: mb; +Cc: linux-block, Javier González, linux-kernel, linux-nvme
# Changes since V4
 - Rebase on top of Matias' for-4.17/core
 - Fix pblk's write buffer size when using mw_cunits
 - Remove chunk information from pblk's sysfs. We intend to clean up
   sysfs, as it is messy as it is now, and use trace points instead. So,
   avoid an extra refactoring in the near future.

From Matias:
 - Squash geometry patches
 - Squash chunk support in lightnvm core
 - Use core structure for chunk metadata
 - Remove intermediate values for csecs and sos before disk
   revalidation.
 - Various renamings

# Changes since V3
From Matias:
 - Remove nvm_common_geo
 - Do appropriate renames when having a single geometry for device and
   targets

# Changes since V2:

Apply Matias' feedback:
 - Remove generic nvm_id identify structure.
 - Do not remap capabilities (cap) to media and controlled capabilities
   (mccap). Instead, add a comment to prevent confusion when
   crosschecking with the 2.0 spec.
 - Change maxoc and maxocpu defaults from 1 block to the max number of
   blocks.
 - Re-implement the generic geometry to use nvm_geo on both device and
   targets. Maintain nvm_common_geo to make it easier to copy the common
   part of the geometry (without having to overwrite target-specific
   fields, which is ugly and error prone). Matias, if you still want to
   get rid of this, we can do it.
 - Re-order patches with renaming to make them more meaningful. These
   belong to the series, since the name changes are motivated by 2.0
   inclusions. The only exception would be 36d10bfd3234, but I hope it
   is OK I include it here.

Also,
 - Eliminate a dependency between luns and lines in the init/exit
   refactoring.
 - Use the global address format when possible to avoid defaulting on
   the 1.2 path. This will save headaches if the address format changes
   at some point.
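A global address format of this kind is essentially one 64-bit PPA with two overlaid views, so the fast path can pass the raw value around without reformatting between specs. A minimal userspace sketch of the idea; the field names and widths below are made up for illustration (the real layouts live in include/linux/lightnvm.h):

```c
#include <assert.h>
#include <stdint.h>

/* One 64-bit physical address, interpreted through either a 1.2-style
 * or a 2.0-style view. Writing a field through one view and moving the
 * raw .ppa value around is what keeps formatting off the fast path. */
union ppa {
	struct {	/* hypothetical 1.2-style view: ch/lun/pl/pg/blk/sec */
		uint64_t sec : 8, pl : 4, ch : 8, lun : 8, pg : 12,
			 blk : 12, rsvd : 12;
	} g;
	struct {	/* hypothetical 2.0-style view: ch/lun/chk/sec */
		uint64_t sec : 12, chk : 16, lun : 8, ch : 8, rsvd : 20;
	} m;
	uint64_t ppa;	/* raw value used on the fast path */
};
```

Both views pack into the same 8 bytes, so a target can stash the raw value in its L2P table regardless of which spec the device speaks.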
I took out the patch allowing to do bit shifts on non power-of-2 media
formats on pblk's mapping, since it requires touching many places that
are not 2.0 related. I'll submit this separately.

# Changes since V1:

Apply Matias' feedback:
 - Rebase on top of Matias' latest patches.
 - Use nvme_get_log_ext to submit report chunk and export it.
 - Re-write report chunk based on Matias' suggestions. Here, I
   maintained the lba interface, but it was necessary to redo the
   address formatting to match the chunk log page format. For pblk,
   this means a double address transformation, but it enables the
   standard path to use lbas; plus, this is not in the fast path.
 - Fold address format together with address transformations.
 - Split the generic geometry patch into different patches.
 - Remove refactoring of lightnvm's core sysfs.

Feedback not applied:
 - Not letting pblk know about 1.2 and 2.0 bad block paths.
   Since the interfaces for get/set bad block and report chunk are so
   different, moving this logic to core adds assumptions on how the
   targets would want to get the data back. A way of doing this is
   creating a logical report chunk on the 1.2 path, but this would
   mean that values like the wear-index are invalid, which requires
   target knowledge. I'm open to suggestions here.

Also:
 - Do some further renamings
 - Create a generic address format to make it explicit where we share
   1.2 and 2.0 fields to avoid address formatting in the fast path.
 - Add new fields to sysfs to complete the spec and show major/minor
   versions (version and subversion to respect the current interface).

Implement 2.0 support in pblk. This includes the address formatting and
mapping paths, as well as the sysfs entries for them.

Javier


Javier González (15):
  lightnvm: simplify geometry structure.
  lightnvm: add controller capabilities to 2.0
  lightnvm: add minor version to generic geometry
  lightnvm: add shorten OCSSD version in geo
  lightnvm: complete geo structure with maxoc*
  lightnvm: normalize geometry nomenclature
  lightnvm: add support for 2.0 address format
  lightnvm: make address conversions depend on generic device
  lightnvm: implement get log report chunk helpers
  lightnvm: pblk: check for supported version
  lightnvm: pblk: rename ppaf* to addrf*
  lightnvn: pblk: use generic address format
  lightnvm: pblk: implement get log report chunk
  lightnvm: pblk: refactor init/exit sequences
  lightnvm: pblk: implement 2.0 support

 drivers/lightnvm/core.c          | 178 ++++-----
 drivers/lightnvm/pblk-core.c     | 169 ++++++--
 drivers/lightnvm/pblk-gc.c       |   2 +-
 drivers/lightnvm/pblk-init.c     | 838 +++++++++++++++++++++++----------------
 drivers/lightnvm/pblk-map.c      |   4 +-
 drivers/lightnvm/pblk-read.c     |   2 +-
 drivers/lightnvm/pblk-recovery.c |  14 +-
 drivers/lightnvm/pblk-rl.c       |   2 +-
 drivers/lightnvm/pblk-sysfs.c    |  65 ++-
 drivers/lightnvm/pblk-write.c    |   2 +-
 drivers/lightnvm/pblk.h          | 242 +++++++----
 drivers/nvme/host/core.c         |   6 +-
 drivers/nvme/host/lightnvm.c     | 453 ++++++++++++++-------
 drivers/nvme/host/nvme.h         |   3 +
 include/linux/lightnvm.h         | 331 ++++++++++------
 15 files changed, 1479 insertions(+), 832 deletions(-)

-- 
2.7.4
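The write-buffer fix mentioned in the V4 changelog amounts to sizing the buffer from the device-reported mw_cunits instead of deriving it from 1.2 plane/page geometry. A userspace sketch of the two computations; the struct and the constants here are hypothetical stand-ins, not pblk's actual types:

```c
#include <assert.h>

/* Stripped-down stand-in for the geometry fields involved */
struct geo {
	int ws_min;	/* minimum write size (sectors) */
	int ws_opt;	/* optimal write size (sectors) */
	int mw_cunits;	/* device-reported cached units per LUN (2.0) */
	int all_luns;	/* total LUNs owned by the target */
};

/* 1.2-era sizing: pages per write * sectors per page * planes * luns */
static int pgs_in_buffer_old(const struct geo *g, int mem_page_write)
{
	int nr_planes = g->ws_opt / g->ws_min;

	return mem_page_write * g->ws_min * nr_planes * g->all_luns;
}

/* 2.0 sizing: what the device says must stay cached, per LUN */
static int pgs_in_buffer_new(const struct geo *g)
{
	return g->mw_cunits * g->all_luns;
}
```

The point of the change is that the buffer now tracks what the device actually requires to be resident before reads, rather than a plane-geometry heuristic.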
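On the dropped bit-shift patch: pblk's 1.2 mapping path still assumes power-of-two channel and LUN counts, which the series keeps checking via get_count_order() before building shift-based address conversions. A userspace analogue of that check, with hypothetical helper names:

```c
#include <assert.h>

/* ceil(log2(n)) -- userspace analogue of the kernel's get_count_order() */
static int count_order(unsigned int n)
{
	int order = 0;

	while ((1u << order) < n)
		order++;
	return order;
}

/* A shift-based mapping only works if the count is an exact power of
 * two; otherwise 1 << count_order(n) overshoots n. */
static int pow2_config_ok(unsigned int nr_luns)
{
	return (1u << count_order(nr_luns)) == nr_luns;
}
```

This is why non power-of-2 media formats need the separate (deferred) patch: for them the shift/mask scheme has to be replaced by multiply/divide arithmetic.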
* [PATCH 01/15] lightnvm: simplify geometry structure. 2018-02-28 15:49 ` Javier González (?) @ 2018-02-28 15:49 ` Javier González -1 siblings, 0 replies; 71+ messages in thread
From: Javier González @ 2018-02-28 15:49 UTC (permalink / raw)
To: mb; +Cc: linux-block, Javier González, linux-kernel, linux-nvme

Currently, the device geometry is stored redundantly in the nvm_id and
nvm_geo structures at a device level. Moreover, when instantiating
targets on a specific number of LUNs, these structures are replicated
and manually modified to fit the instance channel and LUN partitioning.

Instead, create a generic geometry around nvm_geo, which can be used by
(i) the underlying device to describe the geometry of the whole device,
and (ii) instances to describe their geometry independently.

Signed-off-by: Javier González <javier@cnexlabs.com>
---
 drivers/lightnvm/core.c          |  70 +++-----
 drivers/lightnvm/pblk-core.c     |  16 +-
 drivers/lightnvm/pblk-gc.c       |   2 +-
 drivers/lightnvm/pblk-init.c     | 119 +++++++-------
 drivers/lightnvm/pblk-read.c     |   2 +-
 drivers/lightnvm/pblk-recovery.c |  14 +-
 drivers/lightnvm/pblk-rl.c       |   2 +-
 drivers/lightnvm/pblk-sysfs.c    |  39 +++--
 drivers/lightnvm/pblk-write.c    |   2 +-
 drivers/lightnvm/pblk.h          |  87 +++++-----
 drivers/nvme/host/lightnvm.c     | 341 ++++++++++++++++++++++----------------
 include/linux/lightnvm.h         | 200 +++++++++++------------
 12 files changed, 465 insertions(+), 429 deletions(-)

diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index 19c46ebb1b91..9a417d9cdf0c 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -155,7 +155,7 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev,
 	int blun = lun_begin % dev->geo.nr_luns;
 	int lunid = 0;
 	int lun_balanced = 1;
-	int prev_nr_luns;
+	int sec_per_lun, prev_nr_luns;
 	int i, j;
 
 	nr_chnls = (nr_chnls_mod == 0) ? nr_chnls : nr_chnls + 1;
@@ -215,18 +215,23 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev,
 	if (!tgt_dev)
 		goto err_ch;
 
+	/* Inherit device geometry from parent */
 	memcpy(&tgt_dev->geo, &dev->geo, sizeof(struct nvm_geo));
+
 	/* Target device only owns a portion of the physical device */
 	tgt_dev->geo.nr_chnls = nr_chnls;
-	tgt_dev->geo.all_luns = nr_luns;
 	tgt_dev->geo.nr_luns = (lun_balanced) ? prev_nr_luns : -1;
+	tgt_dev->geo.all_luns = nr_luns;
+	tgt_dev->geo.all_chunks = nr_luns * dev->geo.nr_chks;
+
 	tgt_dev->geo.op = op;
-	tgt_dev->total_secs = nr_luns * tgt_dev->geo.sec_per_lun;
+
+	sec_per_lun = dev->geo.clba * dev->geo.nr_chks;
+	tgt_dev->geo.total_secs = nr_luns * sec_per_lun;
+
 	tgt_dev->q = dev->q;
 	tgt_dev->map = dev_map;
 	tgt_dev->luns = luns;
-	memcpy(&tgt_dev->identity, &dev->identity, sizeof(struct nvm_id));
-
 	tgt_dev->parent = dev;
 
 	return tgt_dev;
@@ -296,8 +301,6 @@ static int __nvm_config_simple(struct nvm_dev *dev,
 static int __nvm_config_extended(struct nvm_dev *dev,
 				 struct nvm_ioctl_create_extended *e)
 {
-	struct nvm_geo *geo = &dev->geo;
-
 	if (e->lun_begin == 0xFFFF && e->lun_end == 0xFFFF) {
 		e->lun_begin = 0;
 		e->lun_end = dev->geo.all_luns - 1;
@@ -311,7 +314,7 @@ static int __nvm_config_extended(struct nvm_dev *dev,
 		return -EINVAL;
 	}
 
-	return nvm_config_check_luns(geo, e->lun_begin, e->lun_end);
+	return nvm_config_check_luns(&dev->geo, e->lun_begin, e->lun_end);
 }
 
 static int nvm_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create)
@@ -406,7 +409,7 @@ static int nvm_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create)
 	tqueue->queuedata = targetdata;
 
 	blk_queue_max_hw_sectors(tqueue,
-			(dev->geo.sec_size >> 9) * NVM_MAX_VLBA);
+			(dev->geo.csecs >> 9) * NVM_MAX_VLBA);
 
 	set_capacity(tdisk, tt->capacity(targetdata));
 	add_disk(tdisk);
@@ -841,40 +844,9 @@ EXPORT_SYMBOL(nvm_get_tgt_bb_tbl);
 
 static int nvm_core_init(struct nvm_dev *dev)
 {
-	struct nvm_id *id = &dev->identity;
 	struct nvm_geo *geo = &dev->geo;
 	int ret;
 
-	memcpy(&geo->ppaf, &id->ppaf, sizeof(struct nvm_addr_format));
-
-	if (id->mtype != 0) {
-		pr_err("nvm: memory type not supported\n");
-		return -EINVAL;
-	}
-
-	/* Whole device values */
-	geo->nr_chnls = id->num_ch;
-	geo->nr_luns = id->num_lun;
-
-	/* Generic device geometry values */
-	geo->ws_min = id->ws_min;
-	geo->ws_opt = id->ws_opt;
-	geo->ws_seq = id->ws_seq;
-	geo->ws_per_chk = id->ws_per_chk;
-	geo->nr_chks = id->num_chk;
-	geo->mccap = id->mccap;
-
-	geo->sec_per_chk = id->clba;
-	geo->sec_per_lun = geo->sec_per_chk * geo->nr_chks;
-	geo->all_luns = geo->nr_luns * geo->nr_chnls;
-
-	/* 1.2 spec device geometry values */
-	geo->plane_mode = 1 << geo->ws_seq;
-	geo->nr_planes = geo->ws_opt / geo->ws_min;
-	geo->sec_per_pg = geo->ws_min;
-	geo->sec_per_pl = geo->sec_per_pg * geo->nr_planes;
-
-	dev->total_secs = geo->all_luns * geo->sec_per_lun;
 	dev->lun_map = kcalloc(BITS_TO_LONGS(geo->all_luns),
					sizeof(unsigned long), GFP_KERNEL);
 	if (!dev->lun_map)
@@ -913,16 +885,14 @@ static int nvm_init(struct nvm_dev *dev)
 	struct nvm_geo *geo = &dev->geo;
 	int ret = -EINVAL;
 
-	if (dev->ops->identity(dev, &dev->identity)) {
+	if (dev->ops->identity(dev)) {
 		pr_err("nvm: device could not be identified\n");
 		goto err;
 	}
 
-	if (dev->identity.ver_id != 1 && dev->identity.ver_id != 2) {
-		pr_err("nvm: device ver_id %d not supported by kernel.\n",
-				dev->identity.ver_id);
-		goto err;
-	}
+	pr_debug("nvm: ver:%u nvm_vendor:%x\n",
+				geo->ver_id,
+				geo->vmnt);
 
 	ret = nvm_core_init(dev);
 	if (ret) {
@@ -930,10 +900,10 @@ static int nvm_init(struct nvm_dev *dev)
 		goto err;
 	}
 
-	pr_info("nvm: registered %s [%u/%u/%u/%u/%u/%u]\n",
-			dev->name, geo->sec_per_pg, geo->nr_planes,
-			geo->ws_per_chk, geo->nr_chks,
-			geo->all_luns, geo->nr_chnls);
+	pr_info("nvm: registered %s [%u/%u/%u/%u/%u]\n",
+			dev->name, geo->ws_min, geo->ws_opt,
+			geo->nr_chks, geo->all_luns,
+			geo->nr_chnls);
 	return 0;
 err:
 	pr_err("nvm: failed to initialize nvm\n");
diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
index 8848443a0721..169589ddd457 100644
--- a/drivers/lightnvm/pblk-core.c
+++ b/drivers/lightnvm/pblk-core.c
@@ -613,7 +613,7 @@ static int pblk_line_submit_emeta_io(struct pblk *pblk, struct pblk_line *line,
 	memset(&rqd, 0, sizeof(struct nvm_rq));
 
 	rq_ppas = pblk_calc_secs(pblk, left_ppas, 0);
-	rq_len = rq_ppas * geo->sec_size;
+	rq_len = rq_ppas * geo->csecs;
 
 	bio = pblk_bio_map_addr(pblk, emeta_buf, rq_ppas, rq_len,
					l_mg->emeta_alloc_type, GFP_KERNEL);
@@ -722,7 +722,7 @@ u64 pblk_line_smeta_start(struct pblk *pblk, struct pblk_line *line)
 	if (bit >= lm->blk_per_line)
 		return -1;
 
-	return bit * geo->sec_per_pl;
+	return bit * geo->ws_opt;
 }
 
 static int pblk_line_submit_smeta_io(struct pblk *pblk, struct pblk_line *line,
@@ -1035,19 +1035,19 @@ static int pblk_line_init_bb(struct pblk *pblk, struct pblk_line *line,
 	/* Capture bad block information on line mapping bitmaps */
 	while ((bit = find_next_bit(line->blk_bitmap, lm->blk_per_line,
					bit + 1)) < lm->blk_per_line) {
-		off = bit * geo->sec_per_pl;
+		off = bit * geo->ws_opt;
 		bitmap_shift_left(l_mg->bb_aux, l_mg->bb_template, off,
							lm->sec_per_line);
 		bitmap_or(line->map_bitmap, line->map_bitmap, l_mg->bb_aux,
							lm->sec_per_line);
-		line->sec_in_line -= geo->sec_per_chk;
+		line->sec_in_line -= geo->clba;
 		if (bit >= lm->emeta_bb)
 			nr_bb++;
 	}
 
 	/* Mark smeta metadata sectors as bad sectors */
 	bit = find_first_zero_bit(line->blk_bitmap, lm->blk_per_line);
-	off = bit * geo->sec_per_pl;
+	off = bit * geo->ws_opt;
 	bitmap_set(line->map_bitmap, off, lm->smeta_sec);
 	line->sec_in_line -= lm->smeta_sec;
 	line->smeta_ssec = off;
@@ -1066,10 +1066,10 @@ static int pblk_line_init_bb(struct pblk *pblk, struct pblk_line *line,
 	emeta_secs = lm->emeta_sec[0];
 	off = lm->sec_per_line;
 	while (emeta_secs) {
-		off -= geo->sec_per_pl;
+		off -= geo->ws_opt;
 		if (!test_bit(off, line->invalid_bitmap)) {
-			bitmap_set(line->invalid_bitmap, off, geo->sec_per_pl);
-			emeta_secs -= geo->sec_per_pl;
+			bitmap_set(line->invalid_bitmap, off, geo->ws_opt);
+			emeta_secs -= geo->ws_opt;
 		}
 	}
 
diff --git a/drivers/lightnvm/pblk-gc.c b/drivers/lightnvm/pblk-gc.c
index 320f99af99e9..6851a5c67189 100644
--- a/drivers/lightnvm/pblk-gc.c
+++ b/drivers/lightnvm/pblk-gc.c
@@ -88,7 +88,7 @@ static void pblk_gc_line_ws(struct work_struct *work)
 
 	up(&gc->gc_sem);
 
-	gc_rq->data = vmalloc(gc_rq->nr_secs * geo->sec_size);
+	gc_rq->data = vmalloc(gc_rq->nr_secs * geo->csecs);
 	if (!gc_rq->data) {
 		pr_err("pblk: could not GC line:%d (%d/%d)\n",
					line->id, *line->vsc, gc_rq->nr_secs);
diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
index 25fc70ca07f7..9b5ee05c3028 100644
--- a/drivers/lightnvm/pblk-init.c
+++ b/drivers/lightnvm/pblk-init.c
@@ -146,7 +146,7 @@ static int pblk_rwb_init(struct pblk *pblk)
 		return -ENOMEM;
 
 	power_size = get_count_order(nr_entries);
-	power_seg_sz = get_count_order(geo->sec_size);
+	power_seg_sz = get_count_order(geo->csecs);
 
 	return pblk_rb_init(&pblk->rwb, entries, power_size, power_seg_sz);
 }
@@ -154,11 +154,11 @@ static int pblk_rwb_init(struct pblk *pblk)
 /* Minimum pages needed within a lun */
 #define ADDR_POOL_SIZE 64
 
-static int pblk_set_ppaf(struct pblk *pblk)
+static int pblk_set_addrf_12(struct nvm_geo *geo,
+			     struct nvm_addr_format_12 *dst)
 {
-	struct nvm_tgt_dev *dev = pblk->dev;
-	struct nvm_geo *geo = &dev->geo;
-	struct nvm_addr_format ppaf = geo->ppaf;
+	struct nvm_addr_format_12 *src =
+				(struct nvm_addr_format_12 *)&geo->addrf;
 	int power_len;
 
 	/* Re-calculate channel and lun format to adapt to configuration */
@@ -167,34 +167,50 @@ static int pblk_set_ppaf(struct pblk *pblk)
 		pr_err("pblk: supports only power-of-two channel config.\n");
 		return -EINVAL;
 	}
-	ppaf.ch_len = power_len;
+	dst->ch_len = power_len;
 
 	power_len = get_count_order(geo->nr_luns);
 	if (1 << power_len != geo->nr_luns) {
 		pr_err("pblk: supports only power-of-two LUN config.\n");
 		return -EINVAL;
 	}
-	ppaf.lun_len = power_len;
+	dst->lun_len = power_len;
 
-	pblk->ppaf.sec_offset = 0;
-	pblk->ppaf.pln_offset = ppaf.sect_len;
-	pblk->ppaf.ch_offset = pblk->ppaf.pln_offset + ppaf.pln_len;
-	pblk->ppaf.lun_offset = pblk->ppaf.ch_offset + ppaf.ch_len;
-	pblk->ppaf.pg_offset = pblk->ppaf.lun_offset + ppaf.lun_len;
-	pblk->ppaf.blk_offset = pblk->ppaf.pg_offset + ppaf.pg_len;
-	pblk->ppaf.sec_mask = (1ULL << ppaf.sect_len) - 1;
-	pblk->ppaf.pln_mask = ((1ULL << ppaf.pln_len) - 1) <<
-							pblk->ppaf.pln_offset;
-	pblk->ppaf.ch_mask = ((1ULL << ppaf.ch_len) - 1) <<
-							pblk->ppaf.ch_offset;
-	pblk->ppaf.lun_mask = ((1ULL << ppaf.lun_len) - 1) <<
-							pblk->ppaf.lun_offset;
-	pblk->ppaf.pg_mask = ((1ULL << ppaf.pg_len) - 1) <<
-							pblk->ppaf.pg_offset;
-	pblk->ppaf.blk_mask = ((1ULL << ppaf.blk_len) - 1) <<
-							pblk->ppaf.blk_offset;
+	dst->blk_len = src->blk_len;
+	dst->pg_len = src->pg_len;
+	dst->pln_len = src->pln_len;
+	dst->sect_len = src->sect_len;
 
-	pblk->ppaf_bitsize = pblk->ppaf.blk_offset + ppaf.blk_len;
+	dst->sect_offset = 0;
+	dst->pln_offset = dst->sect_len;
+	dst->ch_offset = dst->pln_offset + dst->pln_len;
+	dst->lun_offset = dst->ch_offset + dst->ch_len;
+	dst->pg_offset = dst->lun_offset + dst->lun_len;
+	dst->blk_offset = dst->pg_offset + dst->pg_len;
+
+	dst->sec_mask = ((1ULL << dst->sect_len) - 1) << dst->sect_offset;
+	dst->pln_mask = ((1ULL << dst->pln_len) - 1) << dst->pln_offset;
+	dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset;
+	dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset;
+	dst->pg_mask = ((1ULL << dst->pg_len) - 1) << dst->pg_offset;
+	dst->blk_mask = ((1ULL << dst->blk_len) - 1) << dst->blk_offset;
+
+	return dst->blk_offset + src->blk_len;
+}
+
+static int pblk_set_ppaf(struct pblk *pblk)
+{
+	struct nvm_tgt_dev *dev = pblk->dev;
+	struct nvm_geo *geo = &dev->geo;
+	int mod;
+
+	div_u64_rem(geo->clba, pblk->min_write_pgs, &mod);
+	if (mod) {
+		pr_err("pblk: bad configuration of sectors/pages\n");
+		return -EINVAL;
+	}
+
+	pblk->ppaf_bitsize = pblk_set_addrf_12(geo, (void *)&pblk->ppaf);
 
 	return 0;
 }
@@ -253,8 +269,7 @@ static int pblk_core_init(struct pblk *pblk)
 	struct nvm_tgt_dev *dev = pblk->dev;
 	struct nvm_geo *geo = &dev->geo;
 
-	pblk->pgs_in_buffer = NVM_MEM_PAGE_WRITE * geo->sec_per_pg *
-						geo->nr_planes * geo->all_luns;
+	pblk->pgs_in_buffer = geo->mw_cunits * geo->all_luns;
 
 	if (pblk_init_global_caches(pblk))
 		return -ENOMEM;
@@ -552,18 +567,18 @@ static unsigned int calc_emeta_len(struct pblk *pblk)
 	/* Round to sector size so that lba_list starts on its own sector */
 	lm->emeta_sec[1] = DIV_ROUND_UP(
			sizeof(struct line_emeta) + lm->blk_bitmap_len +
-			sizeof(struct wa_counters), geo->sec_size);
-	lm->emeta_len[1] = lm->emeta_sec[1] * geo->sec_size;
+			sizeof(struct wa_counters), geo->csecs);
+	lm->emeta_len[1] = lm->emeta_sec[1] * geo->csecs;
 
 	/* Round to sector size so that vsc_list starts on its own sector */
 	lm->dsec_per_line = lm->sec_per_line - lm->emeta_sec[0];
 	lm->emeta_sec[2] = DIV_ROUND_UP(lm->dsec_per_line * sizeof(u64),
-			geo->sec_size);
-	lm->emeta_len[2] = lm->emeta_sec[2] * geo->sec_size;
+			geo->csecs);
+	lm->emeta_len[2] = lm->emeta_sec[2] * geo->csecs;
 
 	lm->emeta_sec[3] = DIV_ROUND_UP(l_mg->nr_lines * sizeof(u32),
-			geo->sec_size);
-	lm->emeta_len[3] = lm->emeta_sec[3] * geo->sec_size;
+			geo->csecs);
+	lm->emeta_len[3] = lm->emeta_sec[3] * geo->csecs;
 
 	lm->vsc_list_len = l_mg->nr_lines * sizeof(u32);
 
@@ -594,13 +609,13 @@ static void pblk_set_provision(struct pblk *pblk, long nr_free_blks)
 	 * on user capacity consider only provisioned blocks
 	 */
 	pblk->rl.total_blocks = nr_free_blks;
-	pblk->rl.nr_secs = nr_free_blks * geo->sec_per_chk;
+	pblk->rl.nr_secs = nr_free_blks * geo->clba;
 
 	/* Consider sectors used for metadata */
 	sec_meta = (lm->smeta_sec + lm->emeta_sec[0]) * l_mg->nr_free_lines;
-	blk_meta = DIV_ROUND_UP(sec_meta, geo->sec_per_chk);
+	blk_meta = DIV_ROUND_UP(sec_meta, geo->clba);
 
-	pblk->capacity = (provisioned - blk_meta) * geo->sec_per_chk;
+	pblk->capacity = (provisioned - blk_met
dGEpICogZ2VvLT5jbGJhOwogCiAJYXRvbWljX3NldCgmcGJsay0+cmwuZnJlZV9ibG9ja3MsIG5y X2ZyZWVfYmxrcyk7CiAJYXRvbWljX3NldCgmcGJsay0+cmwuZnJlZV91c2VyX2Jsb2NrcywgbnJf ZnJlZV9ibGtzKTsKQEAgLTcxMSwxMCArNzI2LDEwIEBAIHN0YXRpYyBpbnQgcGJsa19saW5lc19p bml0KHN0cnVjdCBwYmxrICpwYmxrKQogCXZvaWQgKmNodW5rX2xvZzsKIAl1bnNpZ25lZCBpbnQg c21ldGFfbGVuLCBlbWV0YV9sZW47CiAJbG9uZyBucl9iYWRfYmxrcyA9IDAsIG5yX2ZyZWVfYmxr cyA9IDA7Ci0JaW50IGJiX2Rpc3RhbmNlLCBtYXhfd3JpdGVfcHBhcywgbW9kOworCWludCBiYl9k aXN0YW5jZSwgbWF4X3dyaXRlX3BwYXM7CiAJaW50IGksIHJldDsKIAotCXBibGstPm1pbl93cml0 ZV9wZ3MgPSBnZW8tPnNlY19wZXJfcGwgKiAoZ2VvLT5zZWNfc2l6ZSAvIFBBR0VfU0laRSk7CisJ cGJsay0+bWluX3dyaXRlX3BncyA9IGdlby0+d3Nfb3B0ICogKGdlby0+Y3NlY3MgLyBQQUdFX1NJ WkUpOwogCW1heF93cml0ZV9wcGFzID0gcGJsay0+bWluX3dyaXRlX3BncyAqIGdlby0+YWxsX2x1 bnM7CiAJcGJsay0+bWF4X3dyaXRlX3BncyA9IG1pbl90KGludCwgbWF4X3dyaXRlX3BwYXMsIE5W TV9NQVhfVkxCQSk7CiAJcGJsa19zZXRfc2VjX3Blcl93cml0ZShwYmxrLCBwYmxrLT5taW5fd3Jp dGVfcGdzKTsKQEAgLTcyNSwxOSArNzQwLDEzIEBAIHN0YXRpYyBpbnQgcGJsa19saW5lc19pbml0 KHN0cnVjdCBwYmxrICpwYmxrKQogCQlyZXR1cm4gLUVJTlZBTDsKIAl9CiAKLQlkaXZfdTY0X3Jl bShnZW8tPnNlY19wZXJfY2hrLCBwYmxrLT5taW5fd3JpdGVfcGdzLCAmbW9kKTsKLQlpZiAobW9k KSB7Ci0JCXByX2VycigicGJsazogYmFkIGNvbmZpZ3VyYXRpb24gb2Ygc2VjdG9ycy9wYWdlc1xu Iik7Ci0JCXJldHVybiAtRUlOVkFMOwotCX0KLQogCWxfbWctPm5yX2xpbmVzID0gZ2VvLT5ucl9j aGtzOwogCWxfbWctPmxvZ19saW5lID0gbF9tZy0+ZGF0YV9saW5lID0gTlVMTDsKIAlsX21nLT5s X3NlcV9uciA9IGxfbWctPmRfc2VxX25yID0gMDsKIAlsX21nLT5ucl9mcmVlX2xpbmVzID0gMDsK IAliaXRtYXBfemVybygmbF9tZy0+bWV0YV9iaXRtYXAsIFBCTEtfREFUQV9MSU5FUyk7CiAKLQls bS0+c2VjX3Blcl9saW5lID0gZ2VvLT5zZWNfcGVyX2NoayAqIGdlby0+YWxsX2x1bnM7CisJbG0t PnNlY19wZXJfbGluZSA9IGdlby0+Y2xiYSAqIGdlby0+YWxsX2x1bnM7CiAJbG0tPmJsa19wZXJf bGluZSA9IGdlby0+YWxsX2x1bnM7CiAJbG0tPmJsa19iaXRtYXBfbGVuID0gQklUU19UT19MT05H UyhnZW8tPmFsbF9sdW5zKSAqIHNpemVvZihsb25nKTsKIAlsbS0+c2VjX2JpdG1hcF9sZW4gPSBC SVRTX1RPX0xPTkdTKGxtLT5zZWNfcGVyX2xpbmUpICogc2l6ZW9mKGxvbmcpOwpAQCAtNzUxLDgg 
Kzc2MCw4IEBAIHN0YXRpYyBpbnQgcGJsa19saW5lc19pbml0KHN0cnVjdCBwYmxrICpwYmxrKQog CSAqLwogCWkgPSAxOwogYWRkX3NtZXRhX3BhZ2U6Ci0JbG0tPnNtZXRhX3NlYyA9IGkgKiBnZW8t PnNlY19wZXJfcGw7Ci0JbG0tPnNtZXRhX2xlbiA9IGxtLT5zbWV0YV9zZWMgKiBnZW8tPnNlY19z aXplOworCWxtLT5zbWV0YV9zZWMgPSBpICogZ2VvLT53c19vcHQ7CisJbG0tPnNtZXRhX2xlbiA9 IGxtLT5zbWV0YV9zZWMgKiBnZW8tPmNzZWNzOwogCiAJc21ldGFfbGVuID0gc2l6ZW9mKHN0cnVj dCBsaW5lX3NtZXRhKSArIGxtLT5sdW5fYml0bWFwX2xlbjsKIAlpZiAoc21ldGFfbGVuID4gbG0t PnNtZXRhX2xlbikgewpAQCAtNzY1LDggKzc3NCw4IEBAIHN0YXRpYyBpbnQgcGJsa19saW5lc19p bml0KHN0cnVjdCBwYmxrICpwYmxrKQogCSAqLwogCWkgPSAxOwogYWRkX2VtZXRhX3BhZ2U6Ci0J bG0tPmVtZXRhX3NlY1swXSA9IGkgKiBnZW8tPnNlY19wZXJfcGw7Ci0JbG0tPmVtZXRhX2xlblsw XSA9IGxtLT5lbWV0YV9zZWNbMF0gKiBnZW8tPnNlY19zaXplOworCWxtLT5lbWV0YV9zZWNbMF0g PSBpICogZ2VvLT53c19vcHQ7CisJbG0tPmVtZXRhX2xlblswXSA9IGxtLT5lbWV0YV9zZWNbMF0g KiBnZW8tPmNzZWNzOwogCiAJZW1ldGFfbGVuID0gY2FsY19lbWV0YV9sZW4ocGJsayk7CiAJaWYg KGVtZXRhX2xlbiA+IGxtLT5lbWV0YV9sZW5bMF0pIHsKQEAgLTc3OSw3ICs3ODgsNyBAQCBzdGF0 aWMgaW50IHBibGtfbGluZXNfaW5pdChzdHJ1Y3QgcGJsayAqcGJsaykKIAlsbS0+bWluX2Jsa19s aW5lID0gMTsKIAlpZiAoZ2VvLT5hbGxfbHVucyA+IDEpCiAJCWxtLT5taW5fYmxrX2xpbmUgKz0g RElWX1JPVU5EX1VQKGxtLT5zbWV0YV9zZWMgKwotCQkJCQlsbS0+ZW1ldGFfc2VjWzBdLCBnZW8t PnNlY19wZXJfY2hrKTsKKwkJCQkJbG0tPmVtZXRhX3NlY1swXSwgZ2VvLT5jbGJhKTsKIAogCWlm IChsbS0+bWluX2Jsa19saW5lID4gbG0tPmJsa19wZXJfbGluZSkgewogCQlwcl9lcnIoInBibGs6 IGNvbmZpZy4gbm90IHN1cHBvcnRlZC4gTWluLiBMVU4gaW4gbGluZTolZFxuIiwKQEAgLTgwMyw5 ICs4MTIsOSBAQCBzdGF0aWMgaW50IHBibGtfbGluZXNfaW5pdChzdHJ1Y3QgcGJsayAqcGJsaykK IAkJZ290byBmYWlsX2ZyZWVfYmJfdGVtcGxhdGU7CiAJfQogCi0JYmJfZGlzdGFuY2UgPSAoZ2Vv LT5hbGxfbHVucykgKiBnZW8tPnNlY19wZXJfcGw7CisJYmJfZGlzdGFuY2UgPSAoZ2VvLT5hbGxf bHVucykgKiBnZW8tPndzX29wdDsKIAlmb3IgKGkgPSAwOyBpIDwgbG0tPnNlY19wZXJfbGluZTsg aSArPSBiYl9kaXN0YW5jZSkKLQkJYml0bWFwX3NldChsX21nLT5iYl90ZW1wbGF0ZSwgaSwgZ2Vv LT5zZWNfcGVyX3BsKTsKKwkJYml0bWFwX3NldChsX21nLT5iYl90ZW1wbGF0ZSwgaSwgZ2VvLT53 
c19vcHQpOwogCiAJSU5JVF9MSVNUX0hFQUQoJmxfbWctPmZyZWVfbGlzdCk7CiAJSU5JVF9MSVNU X0hFQUQoJmxfbWctPmNvcnJ1cHRfbGlzdCk7CkBAIC05ODIsOSArOTkxLDkgQEAgc3RhdGljIHZv aWQgKnBibGtfaW5pdChzdHJ1Y3QgbnZtX3RndF9kZXYgKmRldiwgc3RydWN0IGdlbmRpc2sgKnRk aXNrLAogCXN0cnVjdCBwYmxrICpwYmxrOwogCWludCByZXQ7CiAKLQlpZiAoZGV2LT5pZGVudGl0 eS5kb20gJiBOVk1fUlNQX0wyUCkgeworCWlmIChkZXYtPmdlby5kb20gJiBOVk1fUlNQX0wyUCkg ewogCQlwcl9lcnIoInBibGs6IGhvc3Qtc2lkZSBMMlAgdGFibGUgbm90IHN1cHBvcnRlZC4gKCV4 KVxuIiwKLQkJCQkJCQlkZXYtPmlkZW50aXR5LmRvbSk7CisJCQkJCQkJZGV2LT5nZW8uZG9tKTsK IAkJcmV0dXJuIEVSUl9QVFIoLUVJTlZBTCk7CiAJfQogCkBAIC0xMDkyLDcgKzExMDEsNyBAQCBz dGF0aWMgdm9pZCAqcGJsa19pbml0KHN0cnVjdCBudm1fdGd0X2RldiAqZGV2LCBzdHJ1Y3QgZ2Vu ZGlzayAqdGRpc2ssCiAKIAlibGtfcXVldWVfd3JpdGVfY2FjaGUodHF1ZXVlLCB0cnVlLCBmYWxz ZSk7CiAKLQl0cXVldWUtPmxpbWl0cy5kaXNjYXJkX2dyYW51bGFyaXR5ID0gZ2VvLT5zZWNfcGVy X2NoayAqIGdlby0+c2VjX3NpemU7CisJdHF1ZXVlLT5saW1pdHMuZGlzY2FyZF9ncmFudWxhcml0 eSA9IGdlby0+Y2xiYSAqIGdlby0+Y3NlY3M7CiAJdHF1ZXVlLT5saW1pdHMuZGlzY2FyZF9hbGln bm1lbnQgPSAwOwogCWJsa19xdWV1ZV9tYXhfZGlzY2FyZF9zZWN0b3JzKHRxdWV1ZSwgVUlOVF9N QVggPj4gOSk7CiAJcXVldWVfZmxhZ19zZXRfdW5sb2NrZWQoUVVFVUVfRkxBR19ESVNDQVJELCB0 cXVldWUpOwpkaWZmIC0tZ2l0IGEvZHJpdmVycy9saWdodG52bS9wYmxrLXJlYWQuYyBiL2RyaXZl cnMvbGlnaHRudm0vcGJsay1yZWFkLmMKaW5kZXggMmY3NjEyODNmNDNlLi45ZWVlMTBmNjlkZjAg MTAwNjQ0Ci0tLSBhL2RyaXZlcnMvbGlnaHRudm0vcGJsay1yZWFkLmMKKysrIGIvZHJpdmVycy9s aWdodG52bS9wYmxrLXJlYWQuYwpAQCAtNTYzLDcgKzU2Myw3IEBAIGludCBwYmxrX3N1Ym1pdF9y ZWFkX2djKHN0cnVjdCBwYmxrICpwYmxrLCBzdHJ1Y3QgcGJsa19nY19ycSAqZ2NfcnEpCiAJaWYg KCEoZ2NfcnEtPnNlY3NfdG9fZ2MpKQogCQlnb3RvIG91dDsKIAotCWRhdGFfbGVuID0gKGdjX3Jx LT5zZWNzX3RvX2djKSAqIGdlby0+c2VjX3NpemU7CisJZGF0YV9sZW4gPSAoZ2NfcnEtPnNlY3Nf dG9fZ2MpICogZ2VvLT5jc2VjczsKIAliaW8gPSBwYmxrX2Jpb19tYXBfYWRkcihwYmxrLCBnY19y cS0+ZGF0YSwgZ2NfcnEtPnNlY3NfdG9fZ2MsIGRhdGFfbGVuLAogCQkJCQkJUEJMS19WTUFMTE9D X01FVEEsIEdGUF9LRVJORUwpOwogCWlmIChJU19FUlIoYmlvKSkgewpkaWZmIC0tZ2l0IGEvZHJp 
dmVycy9saWdodG52bS9wYmxrLXJlY292ZXJ5LmMgYi9kcml2ZXJzL2xpZ2h0bnZtL3BibGstcmVj b3ZlcnkuYwppbmRleCBhYWFiOWE1YzE3Y2MuLjI2MzU2NDI5ZGM3MiAxMDA2NDQKLS0tIGEvZHJp dmVycy9saWdodG52bS9wYmxrLXJlY292ZXJ5LmMKKysrIGIvZHJpdmVycy9saWdodG52bS9wYmxr LXJlY292ZXJ5LmMKQEAgLTE4NCw3ICsxODQsNyBAQCBzdGF0aWMgaW50IHBibGtfY2FsY19zZWNf aW5fbGluZShzdHJ1Y3QgcGJsayAqcGJsaywgc3RydWN0IHBibGtfbGluZSAqbGluZSkKIAlpbnQg bnJfYmIgPSBiaXRtYXBfd2VpZ2h0KGxpbmUtPmJsa19iaXRtYXAsIGxtLT5ibGtfcGVyX2xpbmUp OwogCiAJcmV0dXJuIGxtLT5zZWNfcGVyX2xpbmUgLSBsbS0+c21ldGFfc2VjIC0gbG0tPmVtZXRh X3NlY1swXSAtCi0JCQkJbnJfYmIgKiBnZW8tPnNlY19wZXJfY2hrOworCQkJCW5yX2JiICogZ2Vv LT5jbGJhOwogfQogCiBzdHJ1Y3QgcGJsa19yZWNvdl9hbGxvYyB7CkBAIC0yMzIsNyArMjMyLDcg QEAgc3RhdGljIGludCBwYmxrX3JlY292X3JlYWRfb29iKHN0cnVjdCBwYmxrICpwYmxrLCBzdHJ1 Y3QgcGJsa19saW5lICpsaW5lLAogCXJxX3BwYXMgPSBwYmxrX2NhbGNfc2VjcyhwYmxrLCBsZWZ0 X3BwYXMsIDApOwogCWlmICghcnFfcHBhcykKIAkJcnFfcHBhcyA9IHBibGstPm1pbl93cml0ZV9w Z3M7Ci0JcnFfbGVuID0gcnFfcHBhcyAqIGdlby0+c2VjX3NpemU7CisJcnFfbGVuID0gcnFfcHBh cyAqIGdlby0+Y3NlY3M7CiAKIAliaW8gPSBiaW9fbWFwX2tlcm4oZGV2LT5xLCBkYXRhLCBycV9s ZW4sIEdGUF9LRVJORUwpOwogCWlmIChJU19FUlIoYmlvKSkKQEAgLTM1MSw3ICszNTEsNyBAQCBz dGF0aWMgaW50IHBibGtfcmVjb3ZfcGFkX29vYihzdHJ1Y3QgcGJsayAqcGJsaywgc3RydWN0IHBi bGtfbGluZSAqbGluZSwKIAlpZiAoIXBhZF9ycSkKIAkJcmV0dXJuIC1FTk9NRU07CiAKLQlkYXRh ID0gdnphbGxvYyhwYmxrLT5tYXhfd3JpdGVfcGdzICogZ2VvLT5zZWNfc2l6ZSk7CisJZGF0YSA9 IHZ6YWxsb2MocGJsay0+bWF4X3dyaXRlX3BncyAqIGdlby0+Y3NlY3MpOwogCWlmICghZGF0YSkg ewogCQlyZXQgPSAtRU5PTUVNOwogCQlnb3RvIGZyZWVfcnE7CkBAIC0zNjgsNyArMzY4LDcgQEAg c3RhdGljIGludCBwYmxrX3JlY292X3BhZF9vb2Ioc3RydWN0IHBibGsgKnBibGssIHN0cnVjdCBw YmxrX2xpbmUgKmxpbmUsCiAJCWdvdG8gZmFpbF9mcmVlX3BhZDsKIAl9CiAKLQlycV9sZW4gPSBy cV9wcGFzICogZ2VvLT5zZWNfc2l6ZTsKKwlycV9sZW4gPSBycV9wcGFzICogZ2VvLT5jc2VjczsK IAogCW1ldGFfbGlzdCA9IG52bV9kZXZfZG1hX2FsbG9jKGRldi0+cGFyZW50LCBHRlBfS0VSTkVM LCAmZG1hX21ldGFfbGlzdCk7CiAJaWYgKCFtZXRhX2xpc3QpIHsKQEAgLTUwOSw3ICs1MDksNyBA 
QCBzdGF0aWMgaW50IHBibGtfcmVjb3Zfc2Nhbl9hbGxfb29iKHN0cnVjdCBwYmxrICpwYmxrLCBz dHJ1Y3QgcGJsa19saW5lICpsaW5lLAogCXJxX3BwYXMgPSBwYmxrX2NhbGNfc2VjcyhwYmxrLCBs ZWZ0X3BwYXMsIDApOwogCWlmICghcnFfcHBhcykKIAkJcnFfcHBhcyA9IHBibGstPm1pbl93cml0 ZV9wZ3M7Ci0JcnFfbGVuID0gcnFfcHBhcyAqIGdlby0+c2VjX3NpemU7CisJcnFfbGVuID0gcnFf cHBhcyAqIGdlby0+Y3NlY3M7CiAKIAliaW8gPSBiaW9fbWFwX2tlcm4oZGV2LT5xLCBkYXRhLCBy cV9sZW4sIEdGUF9LRVJORUwpOwogCWlmIChJU19FUlIoYmlvKSkKQEAgLTY0MCw3ICs2NDAsNyBA QCBzdGF0aWMgaW50IHBibGtfcmVjb3Zfc2Nhbl9vb2Ioc3RydWN0IHBibGsgKnBibGssIHN0cnVj dCBwYmxrX2xpbmUgKmxpbmUsCiAJcnFfcHBhcyA9IHBibGtfY2FsY19zZWNzKHBibGssIGxlZnRf cHBhcywgMCk7CiAJaWYgKCFycV9wcGFzKQogCQlycV9wcGFzID0gcGJsay0+bWluX3dyaXRlX3Bn czsKLQlycV9sZW4gPSBycV9wcGFzICogZ2VvLT5zZWNfc2l6ZTsKKwlycV9sZW4gPSBycV9wcGFz ICogZ2VvLT5jc2VjczsKIAogCWJpbyA9IGJpb19tYXBfa2VybihkZXYtPnEsIGRhdGEsIHJxX2xl biwgR0ZQX0tFUk5FTCk7CiAJaWYgKElTX0VSUihiaW8pKQpAQCAtNzQ1LDcgKzc0NSw3IEBAIHN0 YXRpYyBpbnQgcGJsa19yZWNvdl9sMnBfZnJvbV9vb2Ioc3RydWN0IHBibGsgKnBibGssIHN0cnVj dCBwYmxrX2xpbmUgKmxpbmUpCiAJcHBhX2xpc3QgPSAodm9pZCAqKShtZXRhX2xpc3QpICsgcGJs a19kbWFfbWV0YV9zaXplOwogCWRtYV9wcGFfbGlzdCA9IGRtYV9tZXRhX2xpc3QgKyBwYmxrX2Rt YV9tZXRhX3NpemU7CiAKLQlkYXRhID0ga2NhbGxvYyhwYmxrLT5tYXhfd3JpdGVfcGdzLCBnZW8t PnNlY19zaXplLCBHRlBfS0VSTkVMKTsKKwlkYXRhID0ga2NhbGxvYyhwYmxrLT5tYXhfd3JpdGVf cGdzLCBnZW8tPmNzZWNzLCBHRlBfS0VSTkVMKTsKIAlpZiAoIWRhdGEpIHsKIAkJcmV0ID0gLUVO T01FTTsKIAkJZ290byBmcmVlX21ldGFfbGlzdDsKZGlmZiAtLWdpdCBhL2RyaXZlcnMvbGlnaHRu dm0vcGJsay1ybC5jIGIvZHJpdmVycy9saWdodG52bS9wYmxrLXJsLmMKaW5kZXggMGQ0NTdiMTYy ZjIzLi44ODNhNzExM2IxOWQgMTAwNjQ0Ci0tLSBhL2RyaXZlcnMvbGlnaHRudm0vcGJsay1ybC5j CisrKyBiL2RyaXZlcnMvbGlnaHRudm0vcGJsay1ybC5jCkBAIC0yMDAsNyArMjAwLDcgQEAgdm9p ZCBwYmxrX3JsX2luaXQoc3RydWN0IHBibGtfcmwgKnJsLCBpbnQgYnVkZ2V0KQogCiAJLyogQ29u c2lkZXIgc2VjdG9ycyB1c2VkIGZvciBtZXRhZGF0YSAqLwogCXNlY19tZXRhID0gKGxtLT5zbWV0 YV9zZWMgKyBsbS0+ZW1ldGFfc2VjWzBdKSAqIGxfbWctPm5yX2ZyZWVfbGluZXM7Ci0JYmxrX21l 
dGEgPSBESVZfUk9VTkRfVVAoc2VjX21ldGEsIGdlby0+c2VjX3Blcl9jaGspOworCWJsa19tZXRh ID0gRElWX1JPVU5EX1VQKHNlY19tZXRhLCBnZW8tPmNsYmEpOwogCiAJcmwtPmhpZ2ggPSBwYmxr LT5vcF9ibGtzIC0gYmxrX21ldGEgLSBsbS0+YmxrX3Blcl9saW5lOwogCXJsLT5oaWdoX3B3ID0g Z2V0X2NvdW50X29yZGVyKHJsLT5oaWdoKTsKZGlmZiAtLWdpdCBhL2RyaXZlcnMvbGlnaHRudm0v cGJsay1zeXNmcy5jIGIvZHJpdmVycy9saWdodG52bS9wYmxrLXN5c2ZzLmMKaW5kZXggMTY4MGNl MGE4MjhkLi4zMzE5OWM2YWYyNjcgMTAwNjQ0Ci0tLSBhL2RyaXZlcnMvbGlnaHRudm0vcGJsay1z eXNmcy5jCisrKyBiL2RyaXZlcnMvbGlnaHRudm0vcGJsay1zeXNmcy5jCkBAIC0xMTMsMjYgKzEx MywzMSBAQCBzdGF0aWMgc3NpemVfdCBwYmxrX3N5c2ZzX3BwYWYoc3RydWN0IHBibGsgKnBibGss IGNoYXIgKnBhZ2UpCiB7CiAJc3RydWN0IG52bV90Z3RfZGV2ICpkZXYgPSBwYmxrLT5kZXY7CiAJ c3RydWN0IG52bV9nZW8gKmdlbyA9ICZkZXYtPmdlbzsKKwlzdHJ1Y3QgbnZtX2FkZHJfZm9ybWF0 XzEyICpwcGFmOworCXN0cnVjdCBudm1fYWRkcl9mb3JtYXRfMTIgKmdlb19wcGFmOwogCXNzaXpl X3Qgc3ogPSAwOwogCi0Jc3ogPSBzbnByaW50ZihwYWdlLCBQQUdFX1NJWkUgLSBzeiwKLQkJImc6 KGI6JWQpYmxrOiVkLyVkLHBnOiVkLyVkLGx1bjolZC8lZCxjaDolZC8lZCxwbDolZC8lZCxzZWM6 JWQvJWRcbiIsCi0JCXBibGstPnBwYWZfYml0c2l6ZSwKLQkJcGJsay0+cHBhZi5ibGtfb2Zmc2V0 LCBnZW8tPnBwYWYuYmxrX2xlbiwKLQkJcGJsay0+cHBhZi5wZ19vZmZzZXQsIGdlby0+cHBhZi5w Z19sZW4sCi0JCXBibGstPnBwYWYubHVuX29mZnNldCwgZ2VvLT5wcGFmLmx1bl9sZW4sCi0JCXBi bGstPnBwYWYuY2hfb2Zmc2V0LCBnZW8tPnBwYWYuY2hfbGVuLAotCQlwYmxrLT5wcGFmLnBsbl9v ZmZzZXQsIGdlby0+cHBhZi5wbG5fbGVuLAotCQlwYmxrLT5wcGFmLnNlY19vZmZzZXQsIGdlby0+ cHBhZi5zZWN0X2xlbik7CisJcHBhZiA9IChzdHJ1Y3QgbnZtX2FkZHJfZm9ybWF0XzEyICopJnBi bGstPnBwYWY7CisJZ2VvX3BwYWYgPSAoc3RydWN0IG52bV9hZGRyX2Zvcm1hdF8xMiAqKSZnZW8t PmFkZHJmOworCisJc3ogPSBzbnByaW50ZihwYWdlLCBQQUdFX1NJWkUsCisJCSJwYmxrOihzOiVk KWNoOiVkLyVkLGx1bjolZC8lZCxibGs6JWQvJWQscGc6JWQvJWQscGw6JWQvJWQsc2VjOiVkLyVk XG4iLAorCQkJcGJsay0+cHBhZl9iaXRzaXplLAorCQkJcHBhZi0+Y2hfb2Zmc2V0LCBwcGFmLT5j aF9sZW4sCisJCQlwcGFmLT5sdW5fb2Zmc2V0LCBwcGFmLT5sdW5fbGVuLAorCQkJcHBhZi0+Ymxr X29mZnNldCwgcHBhZi0+YmxrX2xlbiwKKwkJCXBwYWYtPnBnX29mZnNldCwgcHBhZi0+cGdfbGVu 
LAorCQkJcHBhZi0+cGxuX29mZnNldCwgcHBhZi0+cGxuX2xlbiwKKwkJCXBwYWYtPnNlY3Rfb2Zm c2V0LCBwcGFmLT5zZWN0X2xlbik7CiAKIAlzeiArPSBzbnByaW50ZihwYWdlICsgc3osIFBBR0Vf U0laRSAtIHN6LAotCQkiZDpibGs6JWQvJWQscGc6JWQvJWQsbHVuOiVkLyVkLGNoOiVkLyVkLHBs OiVkLyVkLHNlYzolZC8lZFxuIiwKLQkJZ2VvLT5wcGFmLmJsa19vZmZzZXQsIGdlby0+cHBhZi5i bGtfbGVuLAotCQlnZW8tPnBwYWYucGdfb2Zmc2V0LCBnZW8tPnBwYWYucGdfbGVuLAotCQlnZW8t PnBwYWYubHVuX29mZnNldCwgZ2VvLT5wcGFmLmx1bl9sZW4sCi0JCWdlby0+cHBhZi5jaF9vZmZz ZXQsIGdlby0+cHBhZi5jaF9sZW4sCi0JCWdlby0+cHBhZi5wbG5fb2Zmc2V0LCBnZW8tPnBwYWYu cGxuX2xlbiwKLQkJZ2VvLT5wcGFmLnNlY3Rfb2Zmc2V0LCBnZW8tPnBwYWYuc2VjdF9sZW4pOwor CQkiZGV2aWNlOmNoOiVkLyVkLGx1bjolZC8lZCxibGs6JWQvJWQscGc6JWQvJWQscGw6JWQvJWQs c2VjOiVkLyVkXG4iLAorCQkJZ2VvX3BwYWYtPmNoX29mZnNldCwgZ2VvX3BwYWYtPmNoX2xlbiwK KwkJCWdlb19wcGFmLT5sdW5fb2Zmc2V0LCBnZW9fcHBhZi0+bHVuX2xlbiwKKwkJCWdlb19wcGFm LT5ibGtfb2Zmc2V0LCBnZW9fcHBhZi0+YmxrX2xlbiwKKwkJCWdlb19wcGFmLT5wZ19vZmZzZXQs IGdlb19wcGFmLT5wZ19sZW4sCisJCQlnZW9fcHBhZi0+cGxuX29mZnNldCwgZ2VvX3BwYWYtPnBs bl9sZW4sCisJCQlnZW9fcHBhZi0+c2VjdF9vZmZzZXQsIGdlb19wcGFmLT5zZWN0X2xlbik7CiAK IAlyZXR1cm4gc3o7CiB9CkBAIC0yODgsNyArMjkzLDcgQEAgc3RhdGljIHNzaXplX3QgcGJsa19z eXNmc19saW5lc19pbmZvKHN0cnVjdCBwYmxrICpwYmxrLCBjaGFyICpwYWdlKQogCQkJCSJibGtf bGluZTolZCwgc2VjX2xpbmU6JWQsIHNlY19ibGs6JWRcbiIsCiAJCQkJCWxtLT5ibGtfcGVyX2xp bmUsCiAJCQkJCWxtLT5zZWNfcGVyX2xpbmUsCi0JCQkJCWdlby0+c2VjX3Blcl9jaGspOworCQkJ CQlnZW8tPmNsYmEpOwogCiAJcmV0dXJuIHN6OwogfQpkaWZmIC0tZ2l0IGEvZHJpdmVycy9saWdo dG52bS9wYmxrLXdyaXRlLmMgYi9kcml2ZXJzL2xpZ2h0bnZtL3BibGstd3JpdGUuYwppbmRleCBh YWU4NmVkNjBiOTguLjNlNmYxZWJkNzQzYSAxMDA2NDQKLS0tIGEvZHJpdmVycy9saWdodG52bS9w YmxrLXdyaXRlLmMKKysrIGIvZHJpdmVycy9saWdodG52bS9wYmxrLXdyaXRlLmMKQEAgLTMzMyw3 ICszMzMsNyBAQCBpbnQgcGJsa19zdWJtaXRfbWV0YV9pbyhzdHJ1Y3QgcGJsayAqcGJsaywgc3Ry dWN0IHBibGtfbGluZSAqbWV0YV9saW5lKQogCW1fY3R4ID0gbnZtX3JxX3RvX3BkdShycWQpOwog CW1fY3R4LT5wcml2YXRlID0gbWV0YV9saW5lOwogCi0JcnFfbGVuID0gcnFfcHBhcyAqIGdlby0+ 
c2VjX3NpemU7CisJcnFfbGVuID0gcnFfcHBhcyAqIGdlby0+Y3NlY3M7CiAJZGF0YSA9ICgodm9p ZCAqKWVtZXRhLT5idWYpICsgZW1ldGEtPm1lbTsKIAogCWJpbyA9IHBibGtfYmlvX21hcF9hZGRy KHBibGssIGRhdGEsIHJxX3BwYXMsIHJxX2xlbiwKZGlmZiAtLWdpdCBhL2RyaXZlcnMvbGlnaHRu dm0vcGJsay5oIGIvZHJpdmVycy9saWdodG52bS9wYmxrLmgKaW5kZXggZjAzMDlkODE3MmMwLi5i MjljMWU2Njk4YWEgMTAwNjQ0Ci0tLSBhL2RyaXZlcnMvbGlnaHRudm0vcGJsay5oCisrKyBiL2Ry aXZlcnMvbGlnaHRudm0vcGJsay5oCkBAIC01NTEsMjEgKzU1MSw2IEBAIHN0cnVjdCBwYmxrX2xp bmVfbWV0YSB7CiAJdW5zaWduZWQgaW50IG1ldGFfZGlzdGFuY2U7CS8qIERpc3RhbmNlIGJldHdl ZW4gZGF0YSBhbmQgbWV0YWRhdGEgKi8KIH07CiAKLXN0cnVjdCBwYmxrX2FkZHJfZm9ybWF0IHsK LQl1NjQJY2hfbWFzazsKLQl1NjQJbHVuX21hc2s7Ci0JdTY0CXBsbl9tYXNrOwotCXU2NAlibGtf bWFzazsKLQl1NjQJcGdfbWFzazsKLQl1NjQJc2VjX21hc2s7Ci0JdTgJY2hfb2Zmc2V0OwotCXU4 CWx1bl9vZmZzZXQ7Ci0JdTgJcGxuX29mZnNldDsKLQl1OAlibGtfb2Zmc2V0OwotCXU4CXBnX29m ZnNldDsKLQl1OAlzZWNfb2Zmc2V0OwotfTsKLQogZW51bSB7CiAJUEJMS19TVEFURV9SVU5OSU5H ID0gMCwKIAlQQkxLX1NUQVRFX1NUT1BQSU5HID0gMSwKQEAgLTU4NSw4ICs1NzAsOCBAQCBzdHJ1 Y3QgcGJsayB7CiAJc3RydWN0IHBibGtfbGluZV9tZ210IGxfbWc7CQkvKiBMaW5lIG1hbmFnZW1l bnQgKi8KIAlzdHJ1Y3QgcGJsa19saW5lX21ldGEgbG07CQkvKiBMaW5lIG1ldGFkYXRhICovCiAK KwlzdHJ1Y3QgbnZtX2FkZHJfZm9ybWF0IHBwYWY7CiAJaW50IHBwYWZfYml0c2l6ZTsKLQlzdHJ1 Y3QgcGJsa19hZGRyX2Zvcm1hdCBwcGFmOwogCiAJc3RydWN0IHBibGtfcmIgcndiOwogCkBAIC05 NDEsMTQgKzkyNiwxMiBAQCBzdGF0aWMgaW5saW5lIGludCBwYmxrX2xpbmVfdnNjKHN0cnVjdCBw YmxrX2xpbmUgKmxpbmUpCiAJcmV0dXJuIGxlMzJfdG9fY3B1KCpsaW5lLT52c2MpOwogfQogCi0j ZGVmaW5lIE5WTV9NRU1fUEFHRV9XUklURSAoOCkKLQogc3RhdGljIGlubGluZSBpbnQgcGJsa19w YWRfZGlzdGFuY2Uoc3RydWN0IHBibGsgKnBibGspCiB7CiAJc3RydWN0IG52bV90Z3RfZGV2ICpk ZXYgPSBwYmxrLT5kZXY7CiAJc3RydWN0IG52bV9nZW8gKmdlbyA9ICZkZXYtPmdlbzsKIAotCXJl dHVybiBOVk1fTUVNX1BBR0VfV1JJVEUgKiBnZW8tPmFsbF9sdW5zICogZ2VvLT5zZWNfcGVyX3Bs OworCXJldHVybiBnZW8tPm13X2N1bml0cyAqIGdlby0+YWxsX2x1bnMgKiBnZW8tPndzX29wdDsK IH0KIAogc3RhdGljIGlubGluZSBpbnQgcGJsa19wcGFfdG9fbGluZShzdHJ1Y3QgcHBhX2FkZHIg 
cCkKQEAgLTk2NCwxNSArOTQ3LDE3IEBAIHN0YXRpYyBpbmxpbmUgaW50IHBibGtfcHBhX3RvX3Bv cyhzdHJ1Y3QgbnZtX2dlbyAqZ2VvLCBzdHJ1Y3QgcHBhX2FkZHIgcCkKIHN0YXRpYyBpbmxpbmUg c3RydWN0IHBwYV9hZGRyIGFkZHJfdG9fZ2VuX3BwYShzdHJ1Y3QgcGJsayAqcGJsaywgdTY0IHBh ZGRyLAogCQkJCQkgICAgICB1NjQgbGluZV9pZCkKIHsKKwlzdHJ1Y3QgbnZtX2FkZHJfZm9ybWF0 XzEyICpwcGFmID0KKwkJCQkoc3RydWN0IG52bV9hZGRyX2Zvcm1hdF8xMiAqKSZwYmxrLT5wcGFm OwogCXN0cnVjdCBwcGFfYWRkciBwcGE7CiAKIAlwcGEucHBhID0gMDsKIAlwcGEuZy5ibGsgPSBs aW5lX2lkOwotCXBwYS5nLnBnID0gKHBhZGRyICYgcGJsay0+cHBhZi5wZ19tYXNrKSA+PiBwYmxr LT5wcGFmLnBnX29mZnNldDsKLQlwcGEuZy5sdW4gPSAocGFkZHIgJiBwYmxrLT5wcGFmLmx1bl9t YXNrKSA+PiBwYmxrLT5wcGFmLmx1bl9vZmZzZXQ7Ci0JcHBhLmcuY2ggPSAocGFkZHIgJiBwYmxr LT5wcGFmLmNoX21hc2spID4+IHBibGstPnBwYWYuY2hfb2Zmc2V0OwotCXBwYS5nLnBsID0gKHBh ZGRyICYgcGJsay0+cHBhZi5wbG5fbWFzaykgPj4gcGJsay0+cHBhZi5wbG5fb2Zmc2V0OwotCXBw YS5nLnNlYyA9IChwYWRkciAmIHBibGstPnBwYWYuc2VjX21hc2spID4+IHBibGstPnBwYWYuc2Vj X29mZnNldDsKKwlwcGEuZy5wZyA9IChwYWRkciAmIHBwYWYtPnBnX21hc2spID4+IHBwYWYtPnBn X29mZnNldDsKKwlwcGEuZy5sdW4gPSAocGFkZHIgJiBwcGFmLT5sdW5fbWFzaykgPj4gcHBhZi0+ bHVuX29mZnNldDsKKwlwcGEuZy5jaCA9IChwYWRkciAmIHBwYWYtPmNoX21hc2spID4+IHBwYWYt PmNoX29mZnNldDsKKwlwcGEuZy5wbCA9IChwYWRkciAmIHBwYWYtPnBsbl9tYXNrKSA+PiBwcGFm LT5wbG5fb2Zmc2V0OworCXBwYS5nLnNlYyA9IChwYWRkciAmIHBwYWYtPnNlY19tYXNrKSA+PiBw cGFmLT5zZWN0X29mZnNldDsKIAogCXJldHVybiBwcGE7CiB9CkBAIC05ODAsMTMgKzk2NSwxNSBA QCBzdGF0aWMgaW5saW5lIHN0cnVjdCBwcGFfYWRkciBhZGRyX3RvX2dlbl9wcGEoc3RydWN0IHBi bGsgKnBibGssIHU2NCBwYWRkciwKIHN0YXRpYyBpbmxpbmUgdTY0IHBibGtfZGV2X3BwYV90b19s aW5lX2FkZHIoc3RydWN0IHBibGsgKnBibGssCiAJCQkJCQkJc3RydWN0IHBwYV9hZGRyIHApCiB7 CisJc3RydWN0IG52bV9hZGRyX2Zvcm1hdF8xMiAqcHBhZiA9CisJCQkJKHN0cnVjdCBudm1fYWRk cl9mb3JtYXRfMTIgKikmcGJsay0+cHBhZjsKIAl1NjQgcGFkZHI7CiAKLQlwYWRkciA9ICh1NjQp cC5nLnBnIDw8IHBibGstPnBwYWYucGdfb2Zmc2V0OwotCXBhZGRyIHw9ICh1NjQpcC5nLmx1biA8 PCBwYmxrLT5wcGFmLmx1bl9vZmZzZXQ7Ci0JcGFkZHIgfD0gKHU2NClwLmcuY2ggPDwgcGJsay0+ 
cHBhZi5jaF9vZmZzZXQ7Ci0JcGFkZHIgfD0gKHU2NClwLmcucGwgPDwgcGJsay0+cHBhZi5wbG5f b2Zmc2V0OwotCXBhZGRyIHw9ICh1NjQpcC5nLnNlYyA8PCBwYmxrLT5wcGFmLnNlY19vZmZzZXQ7 CisJcGFkZHIgPSAodTY0KXAuZy5jaCA8PCBwcGFmLT5jaF9vZmZzZXQ7CisJcGFkZHIgfD0gKHU2 NClwLmcubHVuIDw8IHBwYWYtPmx1bl9vZmZzZXQ7CisJcGFkZHIgfD0gKHU2NClwLmcucGcgPDwg cHBhZi0+cGdfb2Zmc2V0OworCXBhZGRyIHw9ICh1NjQpcC5nLnBsIDw8IHBwYWYtPnBsbl9vZmZz ZXQ7CisJcGFkZHIgfD0gKHU2NClwLmcuc2VjIDw8IHBwYWYtPnNlY3Rfb2Zmc2V0OwogCiAJcmV0 dXJuIHBhZGRyOwogfQpAQCAtMTAwMywxOCArOTkwLDE1IEBAIHN0YXRpYyBpbmxpbmUgc3RydWN0 IHBwYV9hZGRyIHBibGtfcHBhMzJfdG9fcHBhNjQoc3RydWN0IHBibGsgKnBibGssIHUzMiBwcGEz MikKIAkJcHBhNjQuYy5saW5lID0gcHBhMzIgJiAoKH4wVSkgPj4gMSk7CiAJCXBwYTY0LmMuaXNf Y2FjaGVkID0gMTsKIAl9IGVsc2UgewotCQlwcGE2NC5nLmJsayA9IChwcGEzMiAmIHBibGstPnBw YWYuYmxrX21hc2spID4+Ci0JCQkJCQkJcGJsay0+cHBhZi5ibGtfb2Zmc2V0OwotCQlwcGE2NC5n LnBnID0gKHBwYTMyICYgcGJsay0+cHBhZi5wZ19tYXNrKSA+PgotCQkJCQkJCXBibGstPnBwYWYu cGdfb2Zmc2V0OwotCQlwcGE2NC5nLmx1biA9IChwcGEzMiAmIHBibGstPnBwYWYubHVuX21hc2sp ID4+Ci0JCQkJCQkJcGJsay0+cHBhZi5sdW5fb2Zmc2V0OwotCQlwcGE2NC5nLmNoID0gKHBwYTMy ICYgcGJsay0+cHBhZi5jaF9tYXNrKSA+PgotCQkJCQkJCXBibGstPnBwYWYuY2hfb2Zmc2V0Owot CQlwcGE2NC5nLnBsID0gKHBwYTMyICYgcGJsay0+cHBhZi5wbG5fbWFzaykgPj4KLQkJCQkJCQlw YmxrLT5wcGFmLnBsbl9vZmZzZXQ7Ci0JCXBwYTY0Lmcuc2VjID0gKHBwYTMyICYgcGJsay0+cHBh Zi5zZWNfbWFzaykgPj4KLQkJCQkJCQlwYmxrLT5wcGFmLnNlY19vZmZzZXQ7CisJCXN0cnVjdCBu dm1fYWRkcl9mb3JtYXRfMTIgKnBwYWYgPQorCQkJCShzdHJ1Y3QgbnZtX2FkZHJfZm9ybWF0XzEy ICopJnBibGstPnBwYWY7CisKKwkJcHBhNjQuZy5jaCA9IChwcGEzMiAmIHBwYWYtPmNoX21hc2sp ID4+IHBwYWYtPmNoX29mZnNldDsKKwkJcHBhNjQuZy5sdW4gPSAocHBhMzIgJiBwcGFmLT5sdW5f bWFzaykgPj4gcHBhZi0+bHVuX29mZnNldDsKKwkJcHBhNjQuZy5ibGsgPSAocHBhMzIgJiBwcGFm LT5ibGtfbWFzaykgPj4gcHBhZi0+YmxrX29mZnNldDsKKwkJcHBhNjQuZy5wZyA9IChwcGEzMiAm IHBwYWYtPnBnX21hc2spID4+IHBwYWYtPnBnX29mZnNldDsKKwkJcHBhNjQuZy5wbCA9IChwcGEz MiAmIHBwYWYtPnBsbl9tYXNrKSA+PiBwcGFmLT5wbG5fb2Zmc2V0OworCQlwcGE2NC5nLnNlYyA9 
IChwcGEzMiAmIHBwYWYtPnNlY19tYXNrKSA+PiBwcGFmLT5zZWN0X29mZnNldDsKIAl9CiAKIAly ZXR1cm4gcHBhNjQ7CkBAIC0xMDMwLDEyICsxMDE0LDE1IEBAIHN0YXRpYyBpbmxpbmUgdTMyIHBi bGtfcHBhNjRfdG9fcHBhMzIoc3RydWN0IHBibGsgKnBibGssIHN0cnVjdCBwcGFfYWRkciBwcGE2 NCkKIAkJcHBhMzIgfD0gcHBhNjQuYy5saW5lOwogCQlwcGEzMiB8PSAxVSA8PCAzMTsKIAl9IGVs c2UgewotCQlwcGEzMiB8PSBwcGE2NC5nLmJsayA8PCBwYmxrLT5wcGFmLmJsa19vZmZzZXQ7Ci0J CXBwYTMyIHw9IHBwYTY0LmcucGcgPDwgcGJsay0+cHBhZi5wZ19vZmZzZXQ7Ci0JCXBwYTMyIHw9 IHBwYTY0LmcubHVuIDw8IHBibGstPnBwYWYubHVuX29mZnNldDsKLQkJcHBhMzIgfD0gcHBhNjQu Zy5jaCA8PCBwYmxrLT5wcGFmLmNoX29mZnNldDsKLQkJcHBhMzIgfD0gcHBhNjQuZy5wbCA8PCBw YmxrLT5wcGFmLnBsbl9vZmZzZXQ7Ci0JCXBwYTMyIHw9IHBwYTY0Lmcuc2VjIDw8IHBibGstPnBw YWYuc2VjX29mZnNldDsKKwkJc3RydWN0IG52bV9hZGRyX2Zvcm1hdF8xMiAqcHBhZiA9CisJCQkJ KHN0cnVjdCBudm1fYWRkcl9mb3JtYXRfMTIgKikmcGJsay0+cHBhZjsKKworCQlwcGEzMiB8PSBw cGE2NC5nLmNoIDw8IHBwYWYtPmNoX29mZnNldDsKKwkJcHBhMzIgfD0gcHBhNjQuZy5sdW4gPDwg cHBhZi0+bHVuX29mZnNldDsKKwkJcHBhMzIgfD0gcHBhNjQuZy5ibGsgPDwgcHBhZi0+YmxrX29m ZnNldDsKKwkJcHBhMzIgfD0gcHBhNjQuZy5wZyA8PCBwcGFmLT5wZ19vZmZzZXQ7CisJCXBwYTMy IHw9IHBwYTY0LmcucGwgPDwgcHBhZi0+cGxuX29mZnNldDsKKwkJcHBhMzIgfD0gcHBhNjQuZy5z ZWMgPDwgcHBhZi0+c2VjdF9vZmZzZXQ7CiAJfQogCiAJcmV0dXJuIHBwYTMyOwpAQCAtMTIyOSwx MCArMTIxNiwxMCBAQCBzdGF0aWMgaW5saW5lIGludCBwYmxrX2JvdW5kYXJ5X3BwYV9jaGVja3Mo c3RydWN0IG52bV90Z3RfZGV2ICp0Z3RfZGV2LAogCQlpZiAoIXBwYS0+Yy5pc19jYWNoZWQgJiYK IAkJCQlwcGEtPmcuY2ggPCBnZW8tPm5yX2NobmxzICYmCiAJCQkJcHBhLT5nLmx1biA8IGdlby0+ bnJfbHVucyAmJgotCQkJCXBwYS0+Zy5wbCA8IGdlby0+bnJfcGxhbmVzICYmCisJCQkJcHBhLT5n LnBsIDwgZ2VvLT5udW1fcGxuICYmCiAJCQkJcHBhLT5nLmJsayA8IGdlby0+bnJfY2hrcyAmJgot CQkJCXBwYS0+Zy5wZyA8IGdlby0+d3NfcGVyX2NoayAmJgotCQkJCXBwYS0+Zy5zZWMgPCBnZW8t PnNlY19wZXJfcGcpCisJCQkJcHBhLT5nLnBnIDwgZ2VvLT5udW1fcGcgJiYKKwkJCQlwcGEtPmcu c2VjIDwgZ2VvLT53c19taW4pCiAJCQljb250aW51ZTsKIAogCQlwcmludF9wcGEocHBhLCAiYm91 bmRhcnkiLCBpKTsKZGlmZiAtLWdpdCBhL2RyaXZlcnMvbnZtZS9ob3N0L2xpZ2h0bnZtLmMgYi9k 
cml2ZXJzL252bWUvaG9zdC9saWdodG52bS5jCmluZGV4IDgzOWMwYjk2NDY2YS4uZTI3NmFjZTI4 YzY0IDEwMDY0NAotLS0gYS9kcml2ZXJzL252bWUvaG9zdC9saWdodG52bS5jCisrKyBiL2RyaXZl cnMvbnZtZS9ob3N0L2xpZ2h0bnZtLmMKQEAgLTE1Miw4ICsxNTIsOCBAQCBzdHJ1Y3QgbnZtZV9u dm1faWQxMl9hZGRyZiB7CiAJX191OAkJCWJsa19sZW47CiAJX191OAkJCXBnX29mZnNldDsKIAlf X3U4CQkJcGdfbGVuOwotCV9fdTgJCQlzZWN0X29mZnNldDsKLQlfX3U4CQkJc2VjdF9sZW47CisJ X191OAkJCXNlY19vZmZzZXQ7CisJX191OAkJCXNlY19sZW47CiAJX191OAkJCXJlc1s0XTsKIH0g X19wYWNrZWQ7CiAKQEAgLTI1NCwxMDYgKzI1NCwxNjEgQEAgc3RhdGljIGlubGluZSB2b2lkIF9u dm1lX252bV9jaGVja19zaXplKHZvaWQpCiAJQlVJTERfQlVHX09OKHNpemVvZihzdHJ1Y3QgbnZt ZV9udm1faWQyMCkgIT0gTlZNRV9JREVOVElGWV9EQVRBX1NJWkUpOwogfQogCi1zdGF0aWMgaW50 IGluaXRfZ3JwKHN0cnVjdCBudm1faWQgKm52bV9pZCwgc3RydWN0IG52bWVfbnZtX2lkMTIgKmlk MTIpCitzdGF0aWMgdm9pZCBudm1lX252bV9zZXRfYWRkcl8xMihzdHJ1Y3QgbnZtX2FkZHJfZm9y bWF0XzEyICpkc3QsCisJCQkJIHN0cnVjdCBudm1lX252bV9pZDEyX2FkZHJmICpzcmMpCit7CisJ ZHN0LT5jaF9sZW4gPSBzcmMtPmNoX2xlbjsKKwlkc3QtPmx1bl9sZW4gPSBzcmMtPmx1bl9sZW47 CisJZHN0LT5ibGtfbGVuID0gc3JjLT5ibGtfbGVuOworCWRzdC0+cGdfbGVuID0gc3JjLT5wZ19s ZW47CisJZHN0LT5wbG5fbGVuID0gc3JjLT5wbG5fbGVuOworCWRzdC0+c2VjdF9sZW4gPSBzcmMt PnNlY19sZW47CisKKwlkc3QtPmNoX29mZnNldCA9IHNyYy0+Y2hfb2Zmc2V0OworCWRzdC0+bHVu X29mZnNldCA9IHNyYy0+bHVuX29mZnNldDsKKwlkc3QtPmJsa19vZmZzZXQgPSBzcmMtPmJsa19v ZmZzZXQ7CisJZHN0LT5wZ19vZmZzZXQgPSBzcmMtPnBnX29mZnNldDsKKwlkc3QtPnBsbl9vZmZz ZXQgPSBzcmMtPnBsbl9vZmZzZXQ7CisJZHN0LT5zZWN0X29mZnNldCA9IHNyYy0+c2VjX29mZnNl dDsKKworCWRzdC0+Y2hfbWFzayA9ICgoMVVMTCA8PCBkc3QtPmNoX2xlbikgLSAxKSA8PCBkc3Qt PmNoX29mZnNldDsKKwlkc3QtPmx1bl9tYXNrID0gKCgxVUxMIDw8IGRzdC0+bHVuX2xlbikgLSAx KSA8PCBkc3QtPmx1bl9vZmZzZXQ7CisJZHN0LT5ibGtfbWFzayA9ICgoMVVMTCA8PCBkc3QtPmJs a19sZW4pIC0gMSkgPDwgZHN0LT5ibGtfb2Zmc2V0OworCWRzdC0+cGdfbWFzayA9ICgoMVVMTCA8 PCBkc3QtPnBnX2xlbikgLSAxKSA8PCBkc3QtPnBnX29mZnNldDsKKwlkc3QtPnBsbl9tYXNrID0g KCgxVUxMIDw8IGRzdC0+cGxuX2xlbikgLSAxKSA8PCBkc3QtPnBsbl9vZmZzZXQ7CisJZHN0LT5z 
ZWNfbWFzayA9ICgoMVVMTCA8PCBkc3QtPnNlY3RfbGVuKSAtIDEpIDw8IGRzdC0+c2VjdF9vZmZz ZXQ7Cit9CisKK3N0YXRpYyBpbnQgbnZtZV9udm1fc2V0dXBfMTIoc3RydWN0IG52bWVfbnZtX2lk MTIgKmlkLAorCQkJICAgICBzdHJ1Y3QgbnZtX2dlbyAqZ2VvKQogewogCXN0cnVjdCBudm1lX252 bV9pZDEyX2dycCAqc3JjOwogCWludCBzZWNfcGVyX3BnLCBzZWNfcGVyX3BsLCBwZ19wZXJfYmxr OwogCi0JaWYgKGlkMTItPmNncnBzICE9IDEpCisJaWYgKGlkLT5jZ3JwcyAhPSAxKQogCQlyZXR1 cm4gLUVJTlZBTDsKIAotCXNyYyA9ICZpZDEyLT5ncnA7CisJc3JjID0gJmlkLT5ncnA7CiAKLQlu dm1faWQtPm10eXBlID0gc3JjLT5tdHlwZTsKLQludm1faWQtPmZtdHlwZSA9IHNyYy0+Zm10eXBl OworCWlmIChzcmMtPm10eXBlICE9IDApIHsKKwkJcHJfZXJyKCJudm06IG1lbW9yeSB0eXBlIG5v dCBzdXBwb3J0ZWRcbiIpOworCQlyZXR1cm4gLUVJTlZBTDsKKwl9CisKKwlnZW8tPnZlcl9pZCA9 IGlkLT52ZXJfaWQ7CisKKwlnZW8tPm5yX2NobmxzID0gc3JjLT5udW1fY2g7CisJZ2VvLT5ucl9s dW5zID0gc3JjLT5udW1fbHVuOworCWdlby0+YWxsX2x1bnMgPSBnZW8tPm5yX2NobmxzICogZ2Vv LT5ucl9sdW5zOwogCi0JbnZtX2lkLT5udW1fY2ggPSBzcmMtPm51bV9jaDsKLQludm1faWQtPm51 bV9sdW4gPSBzcmMtPm51bV9sdW47CisJZ2VvLT5ucl9jaGtzID0gbGUxNl90b19jcHUoc3JjLT5u dW1fY2hrKTsKIAotCW52bV9pZC0+bnVtX2NoayA9IGxlMTZfdG9fY3B1KHNyYy0+bnVtX2Noayk7 Ci0JbnZtX2lkLT5jc2VjcyA9IGxlMTZfdG9fY3B1KHNyYy0+Y3NlY3MpOwotCW52bV9pZC0+c29z ID0gbGUxNl90b19jcHUoc3JjLT5zb3MpOworCWdlby0+Y3NlY3MgPSBsZTE2X3RvX2NwdShzcmMt PmNzZWNzKTsKKwlnZW8tPnNvcyA9IGxlMTZfdG9fY3B1KHNyYy0+c29zKTsKIAogCXBnX3Blcl9i bGsgPSBsZTE2X3RvX2NwdShzcmMtPm51bV9wZyk7Ci0Jc2VjX3Blcl9wZyA9IGxlMTZfdG9fY3B1 KHNyYy0+ZnBnX3N6KSAvIG52bV9pZC0+Y3NlY3M7CisJc2VjX3Blcl9wZyA9IGxlMTZfdG9fY3B1 KHNyYy0+ZnBnX3N6KSAvIGdlby0+Y3NlY3M7CiAJc2VjX3Blcl9wbCA9IHNlY19wZXJfcGcgKiBz cmMtPm51bV9wbG47Ci0JbnZtX2lkLT5jbGJhID0gc2VjX3Blcl9wbCAqIHBnX3Blcl9ibGs7Ci0J bnZtX2lkLT53c19wZXJfY2hrID0gcGdfcGVyX2JsazsKLQotCW52bV9pZC0+bXBvcyA9IGxlMzJf dG9fY3B1KHNyYy0+bXBvcyk7Ci0JbnZtX2lkLT5jcGFyID0gbGUxNl90b19jcHUoc3JjLT5jcGFy KTsKLQludm1faWQtPm1jY2FwID0gbGUzMl90b19jcHUoc3JjLT5tY2NhcCk7Ci0KLQludm1faWQt PndzX29wdCA9IG52bV9pZC0+d3NfbWluID0gc2VjX3Blcl9wZzsKLQludm1faWQtPndzX3NlcSA9 
* [PATCH 01/15] lightnvm: simplify geometry structure.
@ 2018-02-28 15:49 ` Javier González
  0 siblings, 0 replies; 71+ messages in thread
From: Javier González @ 2018-02-28 15:49 UTC (permalink / raw)

Currently, the device geometry is stored redundantly in the nvm_id and
nvm_geo structures at a device level. Moreover, when instantiating
targets on a specific number of LUNs, these structures are replicated
and manually modified to fit the instance channel and LUN partitioning.

Instead, create a generic geometry around nvm_geo, which can be used by
(i) the underlying device to describe the geometry of the whole device,
and (ii) instances to describe their geometry independently.

Signed-off-by: Javier González <javier at cnexlabs.com>
---
 drivers/lightnvm/core.c          |  70 +++-----
 drivers/lightnvm/pblk-core.c     |  16 +-
 drivers/lightnvm/pblk-gc.c       |   2 +-
 drivers/lightnvm/pblk-init.c     | 119 +++++++-------
 drivers/lightnvm/pblk-read.c     |   2 +-
 drivers/lightnvm/pblk-recovery.c |  14 +-
 drivers/lightnvm/pblk-rl.c       |   2 +-
 drivers/lightnvm/pblk-sysfs.c    |  39 +++--
 drivers/lightnvm/pblk-write.c    |   2 +-
 drivers/lightnvm/pblk.h          |  87 +++++-----
 drivers/nvme/host/lightnvm.c     | 341 +++++++++++++++++++++++----------------
 include/linux/lightnvm.h         | 200 +++++++++++------------
 12 files changed, 465 insertions(+), 429 deletions(-)

diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index 19c46ebb1b91..9a417d9cdf0c 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -155,7 +155,7 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev,
 	int blun = lun_begin % dev->geo.nr_luns;
 	int lunid = 0;
 	int lun_balanced = 1;
-	int prev_nr_luns;
+	int sec_per_lun, prev_nr_luns;
 	int i, j;
 
 	nr_chnls = (nr_chnls_mod == 0) ? nr_chnls : nr_chnls + 1;
@@ -215,18 +215,23 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev,
 	if (!tgt_dev)
 		goto err_ch;
 
+	/* Inherit device geometry from parent */
 	memcpy(&tgt_dev->geo, &dev->geo, sizeof(struct nvm_geo));
+
 	/* Target device only owns a portion of the physical device */
 	tgt_dev->geo.nr_chnls = nr_chnls;
-	tgt_dev->geo.all_luns = nr_luns;
 	tgt_dev->geo.nr_luns = (lun_balanced) ? prev_nr_luns : -1;
+	tgt_dev->geo.all_luns = nr_luns;
+	tgt_dev->geo.all_chunks = nr_luns * dev->geo.nr_chks;
+
 	tgt_dev->geo.op = op;
-	tgt_dev->total_secs = nr_luns * tgt_dev->geo.sec_per_lun;
+
+	sec_per_lun = dev->geo.clba * dev->geo.nr_chks;
+	tgt_dev->geo.total_secs = nr_luns * sec_per_lun;
+
 	tgt_dev->q = dev->q;
 	tgt_dev->map = dev_map;
 	tgt_dev->luns = luns;
-	memcpy(&tgt_dev->identity, &dev->identity, sizeof(struct nvm_id));
-
 	tgt_dev->parent = dev;
 
 	return tgt_dev;
@@ -296,8 +301,6 @@ static int __nvm_config_simple(struct nvm_dev *dev,
 static int __nvm_config_extended(struct nvm_dev *dev,
 				 struct nvm_ioctl_create_extended *e)
 {
-	struct nvm_geo *geo = &dev->geo;
-
 	if (e->lun_begin == 0xFFFF && e->lun_end == 0xFFFF) {
 		e->lun_begin = 0;
 		e->lun_end = dev->geo.all_luns - 1;
@@ -311,7 +314,7 @@ static int __nvm_config_extended(struct nvm_dev *dev,
 		return -EINVAL;
 	}
 
-	return nvm_config_check_luns(geo, e->lun_begin, e->lun_end);
+	return nvm_config_check_luns(&dev->geo, e->lun_begin, e->lun_end);
 }
 
 static int nvm_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create)
@@ -406,7 +409,7 @@ static int nvm_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create)
 	tqueue->queuedata = targetdata;
 
 	blk_queue_max_hw_sectors(tqueue,
-			(dev->geo.sec_size >> 9) * NVM_MAX_VLBA);
+			(dev->geo.csecs >> 9) * NVM_MAX_VLBA);
 
 	set_capacity(tdisk, tt->capacity(targetdata));
 	add_disk(tdisk);
@@ -841,40 +844,9 @@ EXPORT_SYMBOL(nvm_get_tgt_bb_tbl);
 
 static int nvm_core_init(struct nvm_dev *dev)
 {
-	struct nvm_id *id = &dev->identity;
 	struct nvm_geo *geo = &dev->geo;
 	int ret;
 
-	memcpy(&geo->ppaf, &id->ppaf, sizeof(struct nvm_addr_format));
-
-	if (id->mtype != 0) {
-		pr_err("nvm: memory type not supported\n");
-		return -EINVAL;
-	}
-
-	/* Whole device values */
-	geo->nr_chnls = id->num_ch;
-	geo->nr_luns = id->num_lun;
-
-	/* Generic device geometry values */
-	geo->ws_min = id->ws_min;
-	geo->ws_opt = id->ws_opt;
-	geo->ws_seq = id->ws_seq;
-	geo->ws_per_chk = id->ws_per_chk;
-	geo->nr_chks = id->num_chk;
-	geo->mccap = id->mccap;
-
-	geo->sec_per_chk = id->clba;
-	geo->sec_per_lun = geo->sec_per_chk * geo->nr_chks;
-	geo->all_luns = geo->nr_luns * geo->nr_chnls;
-
-	/* 1.2 spec device geometry values */
-	geo->plane_mode = 1 << geo->ws_seq;
-	geo->nr_planes = geo->ws_opt / geo->ws_min;
-	geo->sec_per_pg = geo->ws_min;
-	geo->sec_per_pl = geo->sec_per_pg * geo->nr_planes;
-
-	dev->total_secs = geo->all_luns * geo->sec_per_lun;
 	dev->lun_map = kcalloc(BITS_TO_LONGS(geo->all_luns),
 					sizeof(unsigned long), GFP_KERNEL);
 	if (!dev->lun_map)
@@ -913,16 +885,14 @@ static int nvm_init(struct nvm_dev *dev)
 	struct nvm_geo *geo = &dev->geo;
 	int ret = -EINVAL;
 
-	if (dev->ops->identity(dev, &dev->identity)) {
+	if (dev->ops->identity(dev)) {
 		pr_err("nvm: device could not be identified\n");
 		goto err;
 	}
 
-	if (dev->identity.ver_id != 1 && dev->identity.ver_id != 2) {
-		pr_err("nvm: device ver_id %d not supported by kernel.\n",
-			dev->identity.ver_id);
-		goto err;
-	}
+	pr_debug("nvm: ver:%u nvm_vendor:%x\n",
+			geo->ver_id,
+			geo->vmnt);
 
 	ret = nvm_core_init(dev);
 	if (ret) {
@@ -930,10 +900,10 @@ static int nvm_init(struct nvm_dev *dev)
 		goto err;
 	}
 
-	pr_info("nvm: registered %s [%u/%u/%u/%u/%u/%u]\n",
-			dev->name, geo->sec_per_pg, geo->nr_planes,
-			geo->ws_per_chk, geo->nr_chks,
-			geo->all_luns, geo->nr_chnls);
+	pr_info("nvm: registered %s [%u/%u/%u/%u/%u]\n",
+			dev->name, geo->ws_min, geo->ws_opt,
+			geo->nr_chks, geo->all_luns,
+			geo->nr_chnls);
 	return 0;
 err:
 	pr_err("nvm: failed to initialize nvm\n");
diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
index 8848443a0721..169589ddd457 100644
--- a/drivers/lightnvm/pblk-core.c
+++ b/drivers/lightnvm/pblk-core.c
@@ -613,7 +613,7 @@ static int pblk_line_submit_emeta_io(struct pblk *pblk, struct pblk_line *line,
 	memset(&rqd, 0, sizeof(struct nvm_rq));
 
 	rq_ppas = pblk_calc_secs(pblk, left_ppas, 0);
-	rq_len = rq_ppas * geo->sec_size;
+	rq_len = rq_ppas * geo->csecs;
 
 	bio = pblk_bio_map_addr(pblk, emeta_buf, rq_ppas, rq_len,
 					l_mg->emeta_alloc_type, GFP_KERNEL);
@@ -722,7 +722,7 @@ u64 pblk_line_smeta_start(struct pblk *pblk, struct pblk_line *line)
 	if (bit >= lm->blk_per_line)
 		return -1;
 
-	return bit * geo->sec_per_pl;
+	return bit * geo->ws_opt;
 }
 
 static int pblk_line_submit_smeta_io(struct pblk *pblk, struct pblk_line *line,
@@ -1035,19 +1035,19 @@ static int pblk_line_init_bb(struct pblk *pblk, struct pblk_line *line,
 	/* Capture bad block information on line mapping bitmaps */
 	while ((bit = find_next_bit(line->blk_bitmap, lm->blk_per_line,
 					bit + 1)) < lm->blk_per_line) {
-		off = bit * geo->sec_per_pl;
+		off = bit * geo->ws_opt;
 		bitmap_shift_left(l_mg->bb_aux, l_mg->bb_template, off,
 							lm->sec_per_line);
 		bitmap_or(line->map_bitmap, line->map_bitmap, l_mg->bb_aux,
 							lm->sec_per_line);
-		line->sec_in_line -= geo->sec_per_chk;
+		line->sec_in_line -= geo->clba;
 		if (bit >= lm->emeta_bb)
 			nr_bb++;
 	}
 
 	/* Mark smeta metadata sectors as bad sectors */
 	bit = find_first_zero_bit(line->blk_bitmap, lm->blk_per_line);
-	off = bit * geo->sec_per_pl;
+	off = bit * geo->ws_opt;
 	bitmap_set(line->map_bitmap, off, lm->smeta_sec);
 	line->sec_in_line -= lm->smeta_sec;
 	line->smeta_ssec = off;
@@ -1066,10 +1066,10 @@ static int pblk_line_init_bb(struct pblk *pblk, struct pblk_line *line,
 	emeta_secs = lm->emeta_sec[0];
 	off = lm->sec_per_line;
 	while (emeta_secs) {
-		off -= geo->sec_per_pl;
+		off -= geo->ws_opt;
 		if (!test_bit(off, line->invalid_bitmap)) {
-			bitmap_set(line->invalid_bitmap, off, geo->sec_per_pl);
-			emeta_secs -= geo->sec_per_pl;
+			bitmap_set(line->invalid_bitmap, off, geo->ws_opt);
+			emeta_secs -= geo->ws_opt;
 		}
 	}
 
diff --git a/drivers/lightnvm/pblk-gc.c b/drivers/lightnvm/pblk-gc.c
index 320f99af99e9..6851a5c67189 100644
--- a/drivers/lightnvm/pblk-gc.c
+++ b/drivers/lightnvm/pblk-gc.c
@@ -88,7 +88,7 @@ static void pblk_gc_line_ws(struct work_struct *work)
 
 	up(&gc->gc_sem);
 
-	gc_rq->data = vmalloc(gc_rq->nr_secs * geo->sec_size);
+	gc_rq->data = vmalloc(gc_rq->nr_secs * geo->csecs);
 	if (!gc_rq->data) {
 		pr_err("pblk: could not GC line:%d (%d/%d)\n",
 					line->id, *line->vsc, gc_rq->nr_secs);
diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
index 25fc70ca07f7..9b5ee05c3028 100644
--- a/drivers/lightnvm/pblk-init.c
+++ b/drivers/lightnvm/pblk-init.c
@@ -146,7 +146,7 @@ static int pblk_rwb_init(struct pblk *pblk)
 		return -ENOMEM;
 
 	power_size = get_count_order(nr_entries);
-	power_seg_sz = get_count_order(geo->sec_size);
+	power_seg_sz = get_count_order(geo->csecs);
 
 	return pblk_rb_init(&pblk->rwb, entries, power_size, power_seg_sz);
 }
@@ -154,11 +154,11 @@ static int pblk_rwb_init(struct pblk *pblk)
 /* Minimum pages needed within a lun */
 #define ADDR_POOL_SIZE 64
 
-static int pblk_set_ppaf(struct pblk *pblk)
+static int pblk_set_addrf_12(struct nvm_geo *geo,
+			     struct nvm_addr_format_12 *dst)
 {
-	struct nvm_tgt_dev *dev = pblk->dev;
-	struct nvm_geo *geo = &dev->geo;
-	struct nvm_addr_format ppaf = geo->ppaf;
+	struct nvm_addr_format_12 *src =
+			(struct nvm_addr_format_12 *)&geo->addrf;
 	int power_len;
 
 	/* Re-calculate channel and lun format to adapt to configuration */
@@ -167,34 +167,50 @@ static int pblk_set_ppaf(struct pblk *pblk)
 		pr_err("pblk: supports only power-of-two channel config.\n");
 		return -EINVAL;
 	}
-	ppaf.ch_len = power_len;
+	dst->ch_len = power_len;
 
 	power_len = get_count_order(geo->nr_luns);
 	if (1 << power_len != geo->nr_luns) {
 		pr_err("pblk: supports only power-of-two LUN config.\n");
 		return -EINVAL;
	}
-	ppaf.lun_len = power_len;
+	dst->lun_len = power_len;
 
-
pblk->ppaf.sec_offset = 0; - pblk->ppaf.pln_offset = ppaf.sect_len; - pblk->ppaf.ch_offset = pblk->ppaf.pln_offset + ppaf.pln_len; - pblk->ppaf.lun_offset = pblk->ppaf.ch_offset + ppaf.ch_len; - pblk->ppaf.pg_offset = pblk->ppaf.lun_offset + ppaf.lun_len; - pblk->ppaf.blk_offset = pblk->ppaf.pg_offset + ppaf.pg_len; - pblk->ppaf.sec_mask = (1ULL << ppaf.sect_len) - 1; - pblk->ppaf.pln_mask = ((1ULL << ppaf.pln_len) - 1) << - pblk->ppaf.pln_offset; - pblk->ppaf.ch_mask = ((1ULL << ppaf.ch_len) - 1) << - pblk->ppaf.ch_offset; - pblk->ppaf.lun_mask = ((1ULL << ppaf.lun_len) - 1) << - pblk->ppaf.lun_offset; - pblk->ppaf.pg_mask = ((1ULL << ppaf.pg_len) - 1) << - pblk->ppaf.pg_offset; - pblk->ppaf.blk_mask = ((1ULL << ppaf.blk_len) - 1) << - pblk->ppaf.blk_offset; + dst->blk_len = src->blk_len; + dst->pg_len = src->pg_len; + dst->pln_len = src->pln_len; + dst->sect_len = src->sect_len; - pblk->ppaf_bitsize = pblk->ppaf.blk_offset + ppaf.blk_len; + dst->sect_offset = 0; + dst->pln_offset = dst->sect_len; + dst->ch_offset = dst->pln_offset + dst->pln_len; + dst->lun_offset = dst->ch_offset + dst->ch_len; + dst->pg_offset = dst->lun_offset + dst->lun_len; + dst->blk_offset = dst->pg_offset + dst->pg_len; + + dst->sec_mask = ((1ULL << dst->sect_len) - 1) << dst->sect_offset; + dst->pln_mask = ((1ULL << dst->pln_len) - 1) << dst->pln_offset; + dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset; + dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset; + dst->pg_mask = ((1ULL << dst->pg_len) - 1) << dst->pg_offset; + dst->blk_mask = ((1ULL << dst->blk_len) - 1) << dst->blk_offset; + + return dst->blk_offset + src->blk_len; +} + +static int pblk_set_ppaf(struct pblk *pblk) +{ + struct nvm_tgt_dev *dev = pblk->dev; + struct nvm_geo *geo = &dev->geo; + int mod; + + div_u64_rem(geo->clba, pblk->min_write_pgs, &mod); + if (mod) { + pr_err("pblk: bad configuration of sectors/pages\n"); + return -EINVAL; + } + + pblk->ppaf_bitsize = pblk_set_addrf_12(geo, (void 
*)&pblk->ppaf); return 0; } @@ -253,8 +269,7 @@ static int pblk_core_init(struct pblk *pblk) struct nvm_tgt_dev *dev = pblk->dev; struct nvm_geo *geo = &dev->geo; - pblk->pgs_in_buffer = NVM_MEM_PAGE_WRITE * geo->sec_per_pg * - geo->nr_planes * geo->all_luns; + pblk->pgs_in_buffer = geo->mw_cunits * geo->all_luns; if (pblk_init_global_caches(pblk)) return -ENOMEM; @@ -552,18 +567,18 @@ static unsigned int calc_emeta_len(struct pblk *pblk) /* Round to sector size so that lba_list starts on its own sector */ lm->emeta_sec[1] = DIV_ROUND_UP( sizeof(struct line_emeta) + lm->blk_bitmap_len + - sizeof(struct wa_counters), geo->sec_size); - lm->emeta_len[1] = lm->emeta_sec[1] * geo->sec_size; + sizeof(struct wa_counters), geo->csecs); + lm->emeta_len[1] = lm->emeta_sec[1] * geo->csecs; /* Round to sector size so that vsc_list starts on its own sector */ lm->dsec_per_line = lm->sec_per_line - lm->emeta_sec[0]; lm->emeta_sec[2] = DIV_ROUND_UP(lm->dsec_per_line * sizeof(u64), - geo->sec_size); - lm->emeta_len[2] = lm->emeta_sec[2] * geo->sec_size; + geo->csecs); + lm->emeta_len[2] = lm->emeta_sec[2] * geo->csecs; lm->emeta_sec[3] = DIV_ROUND_UP(l_mg->nr_lines * sizeof(u32), - geo->sec_size); - lm->emeta_len[3] = lm->emeta_sec[3] * geo->sec_size; + geo->csecs); + lm->emeta_len[3] = lm->emeta_sec[3] * geo->csecs; lm->vsc_list_len = l_mg->nr_lines * sizeof(u32); @@ -594,13 +609,13 @@ static void pblk_set_provision(struct pblk *pblk, long nr_free_blks) * on user capacity consider only provisioned blocks */ pblk->rl.total_blocks = nr_free_blks; - pblk->rl.nr_secs = nr_free_blks * geo->sec_per_chk; + pblk->rl.nr_secs = nr_free_blks * geo->clba; /* Consider sectors used for metadata */ sec_meta = (lm->smeta_sec + lm->emeta_sec[0]) * l_mg->nr_free_lines; - blk_meta = DIV_ROUND_UP(sec_meta, geo->sec_per_chk); + blk_meta = DIV_ROUND_UP(sec_meta, geo->clba); - pblk->capacity = (provisioned - blk_meta) * geo->sec_per_chk; + pblk->capacity = (provisioned - blk_meta) * geo->clba; 
atomic_set(&pblk->rl.free_blocks, nr_free_blks); atomic_set(&pblk->rl.free_user_blocks, nr_free_blks); @@ -711,10 +726,10 @@ static int pblk_lines_init(struct pblk *pblk) void *chunk_log; unsigned int smeta_len, emeta_len; long nr_bad_blks = 0, nr_free_blks = 0; - int bb_distance, max_write_ppas, mod; + int bb_distance, max_write_ppas; int i, ret; - pblk->min_write_pgs = geo->sec_per_pl * (geo->sec_size / PAGE_SIZE); + pblk->min_write_pgs = geo->ws_opt * (geo->csecs / PAGE_SIZE); max_write_ppas = pblk->min_write_pgs * geo->all_luns; pblk->max_write_pgs = min_t(int, max_write_ppas, NVM_MAX_VLBA); pblk_set_sec_per_write(pblk, pblk->min_write_pgs); @@ -725,19 +740,13 @@ static int pblk_lines_init(struct pblk *pblk) return -EINVAL; } - div_u64_rem(geo->sec_per_chk, pblk->min_write_pgs, &mod); - if (mod) { - pr_err("pblk: bad configuration of sectors/pages\n"); - return -EINVAL; - } - l_mg->nr_lines = geo->nr_chks; l_mg->log_line = l_mg->data_line = NULL; l_mg->l_seq_nr = l_mg->d_seq_nr = 0; l_mg->nr_free_lines = 0; bitmap_zero(&l_mg->meta_bitmap, PBLK_DATA_LINES); - lm->sec_per_line = geo->sec_per_chk * geo->all_luns; + lm->sec_per_line = geo->clba * geo->all_luns; lm->blk_per_line = geo->all_luns; lm->blk_bitmap_len = BITS_TO_LONGS(geo->all_luns) * sizeof(long); lm->sec_bitmap_len = BITS_TO_LONGS(lm->sec_per_line) * sizeof(long); @@ -751,8 +760,8 @@ static int pblk_lines_init(struct pblk *pblk) */ i = 1; add_smeta_page: - lm->smeta_sec = i * geo->sec_per_pl; - lm->smeta_len = lm->smeta_sec * geo->sec_size; + lm->smeta_sec = i * geo->ws_opt; + lm->smeta_len = lm->smeta_sec * geo->csecs; smeta_len = sizeof(struct line_smeta) + lm->lun_bitmap_len; if (smeta_len > lm->smeta_len) { @@ -765,8 +774,8 @@ static int pblk_lines_init(struct pblk *pblk) */ i = 1; add_emeta_page: - lm->emeta_sec[0] = i * geo->sec_per_pl; - lm->emeta_len[0] = lm->emeta_sec[0] * geo->sec_size; + lm->emeta_sec[0] = i * geo->ws_opt; + lm->emeta_len[0] = lm->emeta_sec[0] * geo->csecs; emeta_len = 
calc_emeta_len(pblk); if (emeta_len > lm->emeta_len[0]) { @@ -779,7 +788,7 @@ static int pblk_lines_init(struct pblk *pblk) lm->min_blk_line = 1; if (geo->all_luns > 1) lm->min_blk_line += DIV_ROUND_UP(lm->smeta_sec + - lm->emeta_sec[0], geo->sec_per_chk); + lm->emeta_sec[0], geo->clba); if (lm->min_blk_line > lm->blk_per_line) { pr_err("pblk: config. not supported. Min. LUN in line:%d\n", @@ -803,9 +812,9 @@ static int pblk_lines_init(struct pblk *pblk) goto fail_free_bb_template; } - bb_distance = (geo->all_luns) * geo->sec_per_pl; + bb_distance = (geo->all_luns) * geo->ws_opt; for (i = 0; i < lm->sec_per_line; i += bb_distance) - bitmap_set(l_mg->bb_template, i, geo->sec_per_pl); + bitmap_set(l_mg->bb_template, i, geo->ws_opt); INIT_LIST_HEAD(&l_mg->free_list); INIT_LIST_HEAD(&l_mg->corrupt_list); @@ -982,9 +991,9 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk, struct pblk *pblk; int ret; - if (dev->identity.dom & NVM_RSP_L2P) { + if (dev->geo.dom & NVM_RSP_L2P) { pr_err("pblk: host-side L2P table not supported. 
(%x)\n", - dev->identity.dom); + dev->geo.dom); return ERR_PTR(-EINVAL); } @@ -1092,7 +1101,7 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk, blk_queue_write_cache(tqueue, true, false); - tqueue->limits.discard_granularity = geo->sec_per_chk * geo->sec_size; + tqueue->limits.discard_granularity = geo->clba * geo->csecs; tqueue->limits.discard_alignment = 0; blk_queue_max_discard_sectors(tqueue, UINT_MAX >> 9); queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, tqueue); diff --git a/drivers/lightnvm/pblk-read.c b/drivers/lightnvm/pblk-read.c index 2f761283f43e..9eee10f69df0 100644 --- a/drivers/lightnvm/pblk-read.c +++ b/drivers/lightnvm/pblk-read.c @@ -563,7 +563,7 @@ int pblk_submit_read_gc(struct pblk *pblk, struct pblk_gc_rq *gc_rq) if (!(gc_rq->secs_to_gc)) goto out; - data_len = (gc_rq->secs_to_gc) * geo->sec_size; + data_len = (gc_rq->secs_to_gc) * geo->csecs; bio = pblk_bio_map_addr(pblk, gc_rq->data, gc_rq->secs_to_gc, data_len, PBLK_VMALLOC_META, GFP_KERNEL); if (IS_ERR(bio)) { diff --git a/drivers/lightnvm/pblk-recovery.c b/drivers/lightnvm/pblk-recovery.c index aaab9a5c17cc..26356429dc72 100644 --- a/drivers/lightnvm/pblk-recovery.c +++ b/drivers/lightnvm/pblk-recovery.c @@ -184,7 +184,7 @@ static int pblk_calc_sec_in_line(struct pblk *pblk, struct pblk_line *line) int nr_bb = bitmap_weight(line->blk_bitmap, lm->blk_per_line); return lm->sec_per_line - lm->smeta_sec - lm->emeta_sec[0] - - nr_bb * geo->sec_per_chk; + nr_bb * geo->clba; } struct pblk_recov_alloc { @@ -232,7 +232,7 @@ static int pblk_recov_read_oob(struct pblk *pblk, struct pblk_line *line, rq_ppas = pblk_calc_secs(pblk, left_ppas, 0); if (!rq_ppas) rq_ppas = pblk->min_write_pgs; - rq_len = rq_ppas * geo->sec_size; + rq_len = rq_ppas * geo->csecs; bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL); if (IS_ERR(bio)) @@ -351,7 +351,7 @@ static int pblk_recov_pad_oob(struct pblk *pblk, struct pblk_line *line, if (!pad_rq) return -ENOMEM; - data = 
vzalloc(pblk->max_write_pgs * geo->sec_size); + data = vzalloc(pblk->max_write_pgs * geo->csecs); if (!data) { ret = -ENOMEM; goto free_rq; @@ -368,7 +368,7 @@ static int pblk_recov_pad_oob(struct pblk *pblk, struct pblk_line *line, goto fail_free_pad; } - rq_len = rq_ppas * geo->sec_size; + rq_len = rq_ppas * geo->csecs; meta_list = nvm_dev_dma_alloc(dev->parent, GFP_KERNEL, &dma_meta_list); if (!meta_list) { @@ -509,7 +509,7 @@ static int pblk_recov_scan_all_oob(struct pblk *pblk, struct pblk_line *line, rq_ppas = pblk_calc_secs(pblk, left_ppas, 0); if (!rq_ppas) rq_ppas = pblk->min_write_pgs; - rq_len = rq_ppas * geo->sec_size; + rq_len = rq_ppas * geo->csecs; bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL); if (IS_ERR(bio)) @@ -640,7 +640,7 @@ static int pblk_recov_scan_oob(struct pblk *pblk, struct pblk_line *line, rq_ppas = pblk_calc_secs(pblk, left_ppas, 0); if (!rq_ppas) rq_ppas = pblk->min_write_pgs; - rq_len = rq_ppas * geo->sec_size; + rq_len = rq_ppas * geo->csecs; bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL); if (IS_ERR(bio)) @@ -745,7 +745,7 @@ static int pblk_recov_l2p_from_oob(struct pblk *pblk, struct pblk_line *line) ppa_list = (void *)(meta_list) + pblk_dma_meta_size; dma_ppa_list = dma_meta_list + pblk_dma_meta_size; - data = kcalloc(pblk->max_write_pgs, geo->sec_size, GFP_KERNEL); + data = kcalloc(pblk->max_write_pgs, geo->csecs, GFP_KERNEL); if (!data) { ret = -ENOMEM; goto free_meta_list; diff --git a/drivers/lightnvm/pblk-rl.c b/drivers/lightnvm/pblk-rl.c index 0d457b162f23..883a7113b19d 100644 --- a/drivers/lightnvm/pblk-rl.c +++ b/drivers/lightnvm/pblk-rl.c @@ -200,7 +200,7 @@ void pblk_rl_init(struct pblk_rl *rl, int budget) /* Consider sectors used for metadata */ sec_meta = (lm->smeta_sec + lm->emeta_sec[0]) * l_mg->nr_free_lines; - blk_meta = DIV_ROUND_UP(sec_meta, geo->sec_per_chk); + blk_meta = DIV_ROUND_UP(sec_meta, geo->clba); rl->high = pblk->op_blks - blk_meta - lm->blk_per_line; rl->high_pw = 
get_count_order(rl->high); diff --git a/drivers/lightnvm/pblk-sysfs.c b/drivers/lightnvm/pblk-sysfs.c index 1680ce0a828d..33199c6af267 100644 --- a/drivers/lightnvm/pblk-sysfs.c +++ b/drivers/lightnvm/pblk-sysfs.c @@ -113,26 +113,31 @@ static ssize_t pblk_sysfs_ppaf(struct pblk *pblk, char *page) { struct nvm_tgt_dev *dev = pblk->dev; struct nvm_geo *geo = &dev->geo; + struct nvm_addr_format_12 *ppaf; + struct nvm_addr_format_12 *geo_ppaf; ssize_t sz = 0; - sz = snprintf(page, PAGE_SIZE - sz, - "g:(b:%d)blk:%d/%d,pg:%d/%d,lun:%d/%d,ch:%d/%d,pl:%d/%d,sec:%d/%d\n", - pblk->ppaf_bitsize, - pblk->ppaf.blk_offset, geo->ppaf.blk_len, - pblk->ppaf.pg_offset, geo->ppaf.pg_len, - pblk->ppaf.lun_offset, geo->ppaf.lun_len, - pblk->ppaf.ch_offset, geo->ppaf.ch_len, - pblk->ppaf.pln_offset, geo->ppaf.pln_len, - pblk->ppaf.sec_offset, geo->ppaf.sect_len); + ppaf = (struct nvm_addr_format_12 *)&pblk->ppaf; + geo_ppaf = (struct nvm_addr_format_12 *)&geo->addrf; + + sz = snprintf(page, PAGE_SIZE, + "pblk:(s:%d)ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n", + pblk->ppaf_bitsize, + ppaf->ch_offset, ppaf->ch_len, + ppaf->lun_offset, ppaf->lun_len, + ppaf->blk_offset, ppaf->blk_len, + ppaf->pg_offset, ppaf->pg_len, + ppaf->pln_offset, ppaf->pln_len, + ppaf->sect_offset, ppaf->sect_len); sz += snprintf(page + sz, PAGE_SIZE - sz, - "d:blk:%d/%d,pg:%d/%d,lun:%d/%d,ch:%d/%d,pl:%d/%d,sec:%d/%d\n", - geo->ppaf.blk_offset, geo->ppaf.blk_len, - geo->ppaf.pg_offset, geo->ppaf.pg_len, - geo->ppaf.lun_offset, geo->ppaf.lun_len, - geo->ppaf.ch_offset, geo->ppaf.ch_len, - geo->ppaf.pln_offset, geo->ppaf.pln_len, - geo->ppaf.sect_offset, geo->ppaf.sect_len); + "device:ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n", + geo_ppaf->ch_offset, geo_ppaf->ch_len, + geo_ppaf->lun_offset, geo_ppaf->lun_len, + geo_ppaf->blk_offset, geo_ppaf->blk_len, + geo_ppaf->pg_offset, geo_ppaf->pg_len, + geo_ppaf->pln_offset, geo_ppaf->pln_len, + geo_ppaf->sect_offset, geo_ppaf->sect_len); 
return sz; } @@ -288,7 +293,7 @@ static ssize_t pblk_sysfs_lines_info(struct pblk *pblk, char *page) "blk_line:%d, sec_line:%d, sec_blk:%d\n", lm->blk_per_line, lm->sec_per_line, - geo->sec_per_chk); + geo->clba); return sz; } diff --git a/drivers/lightnvm/pblk-write.c b/drivers/lightnvm/pblk-write.c index aae86ed60b98..3e6f1ebd743a 100644 --- a/drivers/lightnvm/pblk-write.c +++ b/drivers/lightnvm/pblk-write.c @@ -333,7 +333,7 @@ int pblk_submit_meta_io(struct pblk *pblk, struct pblk_line *meta_line) m_ctx = nvm_rq_to_pdu(rqd); m_ctx->private = meta_line; - rq_len = rq_ppas * geo->sec_size; + rq_len = rq_ppas * geo->csecs; data = ((void *)emeta->buf) + emeta->mem; bio = pblk_bio_map_addr(pblk, data, rq_ppas, rq_len, diff --git a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk.h index f0309d8172c0..b29c1e6698aa 100644 --- a/drivers/lightnvm/pblk.h +++ b/drivers/lightnvm/pblk.h @@ -551,21 +551,6 @@ struct pblk_line_meta { unsigned int meta_distance; /* Distance between data and metadata */ }; -struct pblk_addr_format { - u64 ch_mask; - u64 lun_mask; - u64 pln_mask; - u64 blk_mask; - u64 pg_mask; - u64 sec_mask; - u8 ch_offset; - u8 lun_offset; - u8 pln_offset; - u8 blk_offset; - u8 pg_offset; - u8 sec_offset; -}; - enum { PBLK_STATE_RUNNING = 0, PBLK_STATE_STOPPING = 1, @@ -585,8 +570,8 @@ struct pblk { struct pblk_line_mgmt l_mg; /* Line management */ struct pblk_line_meta lm; /* Line metadata */ + struct nvm_addr_format ppaf; int ppaf_bitsize; - struct pblk_addr_format ppaf; struct pblk_rb rwb; @@ -941,14 +926,12 @@ static inline int pblk_line_vsc(struct pblk_line *line) return le32_to_cpu(*line->vsc); } -#define NVM_MEM_PAGE_WRITE (8) - static inline int pblk_pad_distance(struct pblk *pblk) { struct nvm_tgt_dev *dev = pblk->dev; struct nvm_geo *geo = &dev->geo; - return NVM_MEM_PAGE_WRITE * geo->all_luns * geo->sec_per_pl; + return geo->mw_cunits * geo->all_luns * geo->ws_opt; } static inline int pblk_ppa_to_line(struct ppa_addr p) @@ -964,15 +947,17 @@ static 
inline int pblk_ppa_to_pos(struct nvm_geo *geo, struct ppa_addr p) static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr, u64 line_id) { + struct nvm_addr_format_12 *ppaf = + (struct nvm_addr_format_12 *)&pblk->ppaf; struct ppa_addr ppa; ppa.ppa = 0; ppa.g.blk = line_id; - ppa.g.pg = (paddr & pblk->ppaf.pg_mask) >> pblk->ppaf.pg_offset; - ppa.g.lun = (paddr & pblk->ppaf.lun_mask) >> pblk->ppaf.lun_offset; - ppa.g.ch = (paddr & pblk->ppaf.ch_mask) >> pblk->ppaf.ch_offset; - ppa.g.pl = (paddr & pblk->ppaf.pln_mask) >> pblk->ppaf.pln_offset; - ppa.g.sec = (paddr & pblk->ppaf.sec_mask) >> pblk->ppaf.sec_offset; + ppa.g.pg = (paddr & ppaf->pg_mask) >> ppaf->pg_offset; + ppa.g.lun = (paddr & ppaf->lun_mask) >> ppaf->lun_offset; + ppa.g.ch = (paddr & ppaf->ch_mask) >> ppaf->ch_offset; + ppa.g.pl = (paddr & ppaf->pln_mask) >> ppaf->pln_offset; + ppa.g.sec = (paddr & ppaf->sec_mask) >> ppaf->sect_offset; return ppa; } @@ -980,13 +965,15 @@ static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr, static inline u64 pblk_dev_ppa_to_line_addr(struct pblk *pblk, struct ppa_addr p) { + struct nvm_addr_format_12 *ppaf = + (struct nvm_addr_format_12 *)&pblk->ppaf; u64 paddr; - paddr = (u64)p.g.pg << pblk->ppaf.pg_offset; - paddr |= (u64)p.g.lun << pblk->ppaf.lun_offset; - paddr |= (u64)p.g.ch << pblk->ppaf.ch_offset; - paddr |= (u64)p.g.pl << pblk->ppaf.pln_offset; - paddr |= (u64)p.g.sec << pblk->ppaf.sec_offset; + paddr = (u64)p.g.ch << ppaf->ch_offset; + paddr |= (u64)p.g.lun << ppaf->lun_offset; + paddr |= (u64)p.g.pg << ppaf->pg_offset; + paddr |= (u64)p.g.pl << ppaf->pln_offset; + paddr |= (u64)p.g.sec << ppaf->sect_offset; return paddr; } @@ -1003,18 +990,15 @@ static inline struct ppa_addr pblk_ppa32_to_ppa64(struct pblk *pblk, u32 ppa32) ppa64.c.line = ppa32 & ((~0U) >> 1); ppa64.c.is_cached = 1; } else { - ppa64.g.blk = (ppa32 & pblk->ppaf.blk_mask) >> - pblk->ppaf.blk_offset; - ppa64.g.pg = (ppa32 & pblk->ppaf.pg_mask) >> - 
pblk->ppaf.pg_offset; - ppa64.g.lun = (ppa32 & pblk->ppaf.lun_mask) >> - pblk->ppaf.lun_offset; - ppa64.g.ch = (ppa32 & pblk->ppaf.ch_mask) >> - pblk->ppaf.ch_offset; - ppa64.g.pl = (ppa32 & pblk->ppaf.pln_mask) >> - pblk->ppaf.pln_offset; - ppa64.g.sec = (ppa32 & pblk->ppaf.sec_mask) >> - pblk->ppaf.sec_offset; + struct nvm_addr_format_12 *ppaf = + (struct nvm_addr_format_12 *)&pblk->ppaf; + + ppa64.g.ch = (ppa32 & ppaf->ch_mask) >> ppaf->ch_offset; + ppa64.g.lun = (ppa32 & ppaf->lun_mask) >> ppaf->lun_offset; + ppa64.g.blk = (ppa32 & ppaf->blk_mask) >> ppaf->blk_offset; + ppa64.g.pg = (ppa32 & ppaf->pg_mask) >> ppaf->pg_offset; + ppa64.g.pl = (ppa32 & ppaf->pln_mask) >> ppaf->pln_offset; + ppa64.g.sec = (ppa32 & ppaf->sec_mask) >> ppaf->sect_offset; } return ppa64; @@ -1030,12 +1014,15 @@ static inline u32 pblk_ppa64_to_ppa32(struct pblk *pblk, struct ppa_addr ppa64) ppa32 |= ppa64.c.line; ppa32 |= 1U << 31; } else { - ppa32 |= ppa64.g.blk << pblk->ppaf.blk_offset; - ppa32 |= ppa64.g.pg << pblk->ppaf.pg_offset; - ppa32 |= ppa64.g.lun << pblk->ppaf.lun_offset; - ppa32 |= ppa64.g.ch << pblk->ppaf.ch_offset; - ppa32 |= ppa64.g.pl << pblk->ppaf.pln_offset; - ppa32 |= ppa64.g.sec << pblk->ppaf.sec_offset; + struct nvm_addr_format_12 *ppaf = + (struct nvm_addr_format_12 *)&pblk->ppaf; + + ppa32 |= ppa64.g.ch << ppaf->ch_offset; + ppa32 |= ppa64.g.lun << ppaf->lun_offset; + ppa32 |= ppa64.g.blk << ppaf->blk_offset; + ppa32 |= ppa64.g.pg << ppaf->pg_offset; + ppa32 |= ppa64.g.pl << ppaf->pln_offset; + ppa32 |= ppa64.g.sec << ppaf->sect_offset; } return ppa32; @@ -1229,10 +1216,10 @@ static inline int pblk_boundary_ppa_checks(struct nvm_tgt_dev *tgt_dev, if (!ppa->c.is_cached && ppa->g.ch < geo->nr_chnls && ppa->g.lun < geo->nr_luns && - ppa->g.pl < geo->nr_planes && + ppa->g.pl < geo->num_pln && ppa->g.blk < geo->nr_chks && - ppa->g.pg < geo->ws_per_chk && - ppa->g.sec < geo->sec_per_pg) + ppa->g.pg < geo->num_pg && + ppa->g.sec < geo->ws_min) continue; print_ppa(ppa, 
"boundary", i); diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c index 839c0b96466a..e276ace28c64 100644 --- a/drivers/nvme/host/lightnvm.c +++ b/drivers/nvme/host/lightnvm.c @@ -152,8 +152,8 @@ struct nvme_nvm_id12_addrf { __u8 blk_len; __u8 pg_offset; __u8 pg_len; - __u8 sect_offset; - __u8 sect_len; + __u8 sec_offset; + __u8 sec_len; __u8 res[4]; } __packed; @@ -254,106 +254,161 @@ static inline void _nvme_nvm_check_size(void) BUILD_BUG_ON(sizeof(struct nvme_nvm_id20) != NVME_IDENTIFY_DATA_SIZE); } -static int init_grp(struct nvm_id *nvm_id, struct nvme_nvm_id12 *id12) +static void nvme_nvm_set_addr_12(struct nvm_addr_format_12 *dst, + struct nvme_nvm_id12_addrf *src) +{ + dst->ch_len = src->ch_len; + dst->lun_len = src->lun_len; + dst->blk_len = src->blk_len; + dst->pg_len = src->pg_len; + dst->pln_len = src->pln_len; + dst->sect_len = src->sec_len; + + dst->ch_offset = src->ch_offset; + dst->lun_offset = src->lun_offset; + dst->blk_offset = src->blk_offset; + dst->pg_offset = src->pg_offset; + dst->pln_offset = src->pln_offset; + dst->sect_offset = src->sec_offset; + + dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset; + dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset; + dst->blk_mask = ((1ULL << dst->blk_len) - 1) << dst->blk_offset; + dst->pg_mask = ((1ULL << dst->pg_len) - 1) << dst->pg_offset; + dst->pln_mask = ((1ULL << dst->pln_len) - 1) << dst->pln_offset; + dst->sec_mask = ((1ULL << dst->sect_len) - 1) << dst->sect_offset; +} + +static int nvme_nvm_setup_12(struct nvme_nvm_id12 *id, + struct nvm_geo *geo) { struct nvme_nvm_id12_grp *src; int sec_per_pg, sec_per_pl, pg_per_blk; - if (id12->cgrps != 1) + if (id->cgrps != 1) return -EINVAL; - src = &id12->grp; + src = &id->grp; - nvm_id->mtype = src->mtype; - nvm_id->fmtype = src->fmtype; + if (src->mtype != 0) { + pr_err("nvm: memory type not supported\n"); + return -EINVAL; + } + + geo->ver_id = id->ver_id; + + geo->nr_chnls = src->num_ch; + 
geo->nr_luns = src->num_lun; + geo->all_luns = geo->nr_chnls * geo->nr_luns; - nvm_id->num_ch = src->num_ch; - nvm_id->num_lun = src->num_lun; + geo->nr_chks = le16_to_cpu(src->num_chk); - nvm_id->num_chk = le16_to_cpu(src->num_chk); - nvm_id->csecs = le16_to_cpu(src->csecs); - nvm_id->sos = le16_to_cpu(src->sos); + geo->csecs = le16_to_cpu(src->csecs); + geo->sos = le16_to_cpu(src->sos); pg_per_blk = le16_to_cpu(src->num_pg); - sec_per_pg = le16_to_cpu(src->fpg_sz) / nvm_id->csecs; + sec_per_pg = le16_to_cpu(src->fpg_sz) / geo->csecs; sec_per_pl = sec_per_pg * src->num_pln; - nvm_id->clba = sec_per_pl * pg_per_blk; - nvm_id->ws_per_chk = pg_per_blk; - - nvm_id->mpos = le32_to_cpu(src->mpos); - nvm_id->cpar = le16_to_cpu(src->cpar); - nvm_id->mccap = le32_to_cpu(src->mccap); - - nvm_id->ws_opt = nvm_id->ws_min = sec_per_pg; - nvm_id->ws_seq = NVM_IO_SNGL_ACCESS; - - if (nvm_id->mpos & 0x020202) { - nvm_id->ws_seq = NVM_IO_DUAL_ACCESS; - nvm_id->ws_opt <<= 1; - } else if (nvm_id->mpos & 0x040404) { - nvm_id->ws_seq = NVM_IO_QUAD_ACCESS; - nvm_id->ws_opt <<= 2; + geo->clba = sec_per_pl * pg_per_blk; + + geo->all_chunks = geo->all_luns * geo->nr_chks; + geo->total_secs = geo->clba * geo->all_chunks; + + geo->ws_min = sec_per_pg; + geo->ws_opt = sec_per_pg; + geo->mw_cunits = geo->ws_opt << 3; /* default to MLC safe values */ + + geo->mccap = le32_to_cpu(src->mccap); + + geo->trdt = le32_to_cpu(src->trdt); + geo->trdm = le32_to_cpu(src->trdm); + geo->tprt = le32_to_cpu(src->tprt); + geo->tprm = le32_to_cpu(src->tprm); + geo->tbet = le32_to_cpu(src->tbet); + geo->tbem = le32_to_cpu(src->tbem); + + /* 1.2 compatibility */ + geo->vmnt = id->vmnt; + geo->cap = le32_to_cpu(id->cap); + geo->dom = le32_to_cpu(id->dom); + + geo->mtype = src->mtype; + geo->fmtype = src->fmtype; + + geo->cpar = le16_to_cpu(src->cpar); + geo->mpos = le32_to_cpu(src->mpos); + + geo->plane_mode = NVM_PLANE_SINGLE; + + if (geo->mpos & 0x020202) { + geo->plane_mode = NVM_PLANE_DOUBLE; + geo->ws_opt 
<<= 1; + } else if (geo->mpos & 0x040404) { + geo->plane_mode = NVM_PLANE_QUAD; + geo->ws_opt <<= 2; } - nvm_id->trdt = le32_to_cpu(src->trdt); - nvm_id->trdm = le32_to_cpu(src->trdm); - nvm_id->tprt = le32_to_cpu(src->tprt); - nvm_id->tprm = le32_to_cpu(src->tprm); - nvm_id->tbet = le32_to_cpu(src->tbet); - nvm_id->tbem = le32_to_cpu(src->tbem); - - /* 1.2 compatibility */ - nvm_id->num_pln = src->num_pln; - nvm_id->num_pg = le16_to_cpu(src->num_pg); - nvm_id->fpg_sz = le16_to_cpu(src->fpg_sz); + geo->num_pln = src->num_pln; + geo->num_pg = le16_to_cpu(src->num_pg); + geo->fpg_sz = le16_to_cpu(src->fpg_sz); + + nvme_nvm_set_addr_12((struct nvm_addr_format_12 *)&geo->addrf, + &id->ppaf); return 0; } -static int nvme_nvm_setup_12(struct nvm_dev *nvmdev, struct nvm_id *nvm_id, - struct nvme_nvm_id12 *id) +static void nvme_nvm_set_addr_20(struct nvm_addr_format *dst, + struct nvme_nvm_id20_addrf *src) { - nvm_id->ver_id = id->ver_id; - nvm_id->vmnt = id->vmnt; - nvm_id->cap = le32_to_cpu(id->cap); - nvm_id->dom = le32_to_cpu(id->dom); - memcpy(&nvm_id->ppaf, &id->ppaf, - sizeof(struct nvm_addr_format)); - - return init_grp(nvm_id, id); + dst->ch_len = src->grp_len; + dst->lun_len = src->pu_len; + dst->chk_len = src->chk_len; + dst->sec_len = src->lba_len; + + dst->sec_offset = 0; + dst->chk_offset = dst->sec_len; + dst->lun_offset = dst->chk_offset + dst->chk_len; + dst->ch_offset = dst->lun_offset + dst->lun_len; + + dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset; + dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset; + dst->chk_mask = ((1ULL << dst->chk_len) - 1) << dst->chk_offset; + dst->sec_mask = ((1ULL << dst->sec_len) - 1) << dst->sec_offset; } -static int nvme_nvm_setup_20(struct nvm_dev *nvmdev, struct nvm_id *nvm_id, - struct nvme_nvm_id20 *id) +static int nvme_nvm_setup_20(struct nvme_nvm_id20 *id, + struct nvm_geo *geo) { - nvm_id->ver_id = id->mjr; + geo->ver_id = id->mjr; + + geo->nr_chnls = le16_to_cpu(id->num_grp); + 
geo->nr_luns = le16_to_cpu(id->num_pu); + geo->all_luns = geo->nr_chnls * geo->nr_luns; - nvm_id->num_ch = le16_to_cpu(id->num_grp); - nvm_id->num_lun = le16_to_cpu(id->num_pu); - nvm_id->num_chk = le32_to_cpu(id->num_chk); - nvm_id->clba = le32_to_cpu(id->clba); + geo->nr_chks = le32_to_cpu(id->num_chk); + geo->clba = le32_to_cpu(id->clba); - nvm_id->ws_min = le32_to_cpu(id->ws_min); - nvm_id->ws_opt = le32_to_cpu(id->ws_opt); - nvm_id->mw_cunits = le32_to_cpu(id->mw_cunits); + geo->all_chunks = geo->all_luns * geo->nr_chks; + geo->total_secs = geo->clba * geo->all_chunks; - nvm_id->trdt = le32_to_cpu(id->trdt); - nvm_id->trdm = le32_to_cpu(id->trdm); - nvm_id->tprt = le32_to_cpu(id->twrt); - nvm_id->tprm = le32_to_cpu(id->twrm); - nvm_id->tbet = le32_to_cpu(id->tcrst); - nvm_id->tbem = le32_to_cpu(id->tcrsm); + geo->ws_min = le32_to_cpu(id->ws_min); + geo->ws_opt = le32_to_cpu(id->ws_opt); + geo->mw_cunits = le32_to_cpu(id->mw_cunits); - /* calculated values */ - nvm_id->ws_per_chk = nvm_id->clba / nvm_id->ws_min; + geo->trdt = le32_to_cpu(id->trdt); + geo->trdm = le32_to_cpu(id->trdm); + geo->tprt = le32_to_cpu(id->twrt); + geo->tprm = le32_to_cpu(id->twrm); + geo->tbet = le32_to_cpu(id->tcrst); + geo->tbem = le32_to_cpu(id->tcrsm); - /* 1.2 compatibility */ - nvm_id->ws_seq = NVM_IO_SNGL_ACCESS; + nvme_nvm_set_addr_20(&geo->addrf, &id->lbaf); return 0; } -static int nvme_nvm_identity(struct nvm_dev *nvmdev, struct nvm_id *nvm_id) +static int nvme_nvm_identity(struct nvm_dev *nvmdev) { struct nvme_ns *ns = nvmdev->q->queuedata; struct nvme_nvm_id12 *id; @@ -380,18 +435,18 @@ static int nvme_nvm_identity(struct nvm_dev *nvmdev, struct nvm_id *nvm_id) */ switch (id->ver_id) { case 1: - ret = nvme_nvm_setup_12(nvmdev, nvm_id, id); + ret = nvme_nvm_setup_12(id, &nvmdev->geo); break; case 2: - ret = nvme_nvm_setup_20(nvmdev, nvm_id, - (struct nvme_nvm_id20 *)id); + ret = nvme_nvm_setup_20((struct nvme_nvm_id20 *)id, + &nvmdev->geo); break; default: - 
dev_err(ns->ctrl->device, - "OCSSD revision not supported (%d)\n", - nvm_id->ver_id); + dev_err(ns->ctrl->device, "OCSSD revision not supported (%d)\n", + id->ver_id); ret = -EINVAL; } + out: kfree(id); return ret; @@ -406,7 +461,7 @@ static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa, struct nvme_ctrl *ctrl = ns->ctrl; struct nvme_nvm_command c = {}; struct nvme_nvm_bb_tbl *bb_tbl; - int nr_blks = geo->nr_chks * geo->plane_mode; + int nr_blks = geo->nr_chks * geo->num_pln; int tblsz = sizeof(struct nvme_nvm_bb_tbl) + nr_blks; int ret = 0; @@ -447,7 +502,7 @@ static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa, goto out; } - memcpy(blks, bb_tbl->blk, geo->nr_chks * geo->plane_mode); + memcpy(blks, bb_tbl->blk, geo->nr_chks * geo->num_pln); out: kfree(bb_tbl); return ret; @@ -815,9 +870,10 @@ int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg) void nvme_nvm_update_nvm_info(struct nvme_ns *ns) { struct nvm_dev *ndev = ns->ndev; + struct nvm_geo *geo = &ndev->geo; - ndev->identity.csecs = ndev->geo.sec_size = 1 << ns->lba_shift; - ndev->identity.sos = ndev->geo.oob_size = ns->ms; + geo->csecs = 1 << ns->lba_shift; + geo->sos = ns->ms; } int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node) @@ -850,23 +906,22 @@ static ssize_t nvm_dev_attr_show(struct device *dev, { struct nvme_ns *ns = nvme_get_ns_from_dev(dev); struct nvm_dev *ndev = ns->ndev; - struct nvm_id *id; + struct nvm_geo *geo = &ndev->geo; struct attribute *attr; if (!ndev) return 0; - id = &ndev->identity; attr = &dattr->attr; if (strcmp(attr->name, "version") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->ver_id); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->ver_id); } else if (strcmp(attr->name, "capabilities") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->cap); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->cap); } else if (strcmp(attr->name, "read_typ") == 0) { - return scnprintf(page, 
PAGE_SIZE, "%u\n", id->trdt); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->trdt); } else if (strcmp(attr->name, "read_max") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->trdm); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->trdm); } else { return scnprintf(page, PAGE_SIZE, @@ -875,75 +930,79 @@ static ssize_t nvm_dev_attr_show(struct device *dev, } } +static ssize_t nvm_dev_attr_show_ppaf(struct nvm_addr_format_12 *ppaf, + char *page) +{ + return scnprintf(page, PAGE_SIZE, + "0x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x\n", + ppaf->ch_offset, ppaf->ch_len, + ppaf->lun_offset, ppaf->lun_len, + ppaf->pln_offset, ppaf->pln_len, + ppaf->blk_offset, ppaf->blk_len, + ppaf->pg_offset, ppaf->pg_len, + ppaf->sect_offset, ppaf->sect_len); +} + static ssize_t nvm_dev_attr_show_12(struct device *dev, struct device_attribute *dattr, char *page) { struct nvme_ns *ns = nvme_get_ns_from_dev(dev); struct nvm_dev *ndev = ns->ndev; - struct nvm_id *id; + struct nvm_geo *geo = &ndev->geo; struct attribute *attr; if (!ndev) return 0; - id = &ndev->identity; attr = &dattr->attr; if (strcmp(attr->name, "vendor_opcode") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->vmnt); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->vmnt); } else if (strcmp(attr->name, "device_mode") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->dom); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->dom); /* kept for compatibility */ } else if (strcmp(attr->name, "media_manager") == 0) { return scnprintf(page, PAGE_SIZE, "%s\n", "gennvm"); } else if (strcmp(attr->name, "ppa_format") == 0) { - return scnprintf(page, PAGE_SIZE, - "0x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x\n", - id->ppaf.ch_offset, id->ppaf.ch_len, - id->ppaf.lun_offset, id->ppaf.lun_len, - id->ppaf.pln_offset, id->ppaf.pln_len, - id->ppaf.blk_offset, id->ppaf.blk_len, - id->ppaf.pg_offset, id->ppaf.pg_len, - id->ppaf.sect_offset, id->ppaf.sect_len); + return nvm_dev_attr_show_ppaf((void *)&geo->addrf, 
page); } else if (strcmp(attr->name, "media_type") == 0) { /* u8 */ - return scnprintf(page, PAGE_SIZE, "%u\n", id->mtype); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->mtype); } else if (strcmp(attr->name, "flash_media_type") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->fmtype); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->fmtype); } else if (strcmp(attr->name, "num_channels") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_ch); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chnls); } else if (strcmp(attr->name, "num_luns") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_lun); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_luns); } else if (strcmp(attr->name, "num_planes") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_pln); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->num_pln); } else if (strcmp(attr->name, "num_blocks") == 0) { /* u16 */ - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_chk); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chks); } else if (strcmp(attr->name, "num_pages") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_pg); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->num_pg); } else if (strcmp(attr->name, "page_size") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->fpg_sz); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->fpg_sz); } else if (strcmp(attr->name, "hw_sector_size") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->csecs); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->csecs); } else if (strcmp(attr->name, "oob_sector_size") == 0) {/* u32 */ - return scnprintf(page, PAGE_SIZE, "%u\n", id->sos); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->sos); } else if (strcmp(attr->name, "prog_typ") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprt); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tprt); } else if (strcmp(attr->name, "prog_max") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprm); + return 
scnprintf(page, PAGE_SIZE, "%u\n", geo->tprm); } else if (strcmp(attr->name, "erase_typ") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbet); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tbet); } else if (strcmp(attr->name, "erase_max") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbem); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tbem); } else if (strcmp(attr->name, "multiplane_modes") == 0) { - return scnprintf(page, PAGE_SIZE, "0x%08x\n", id->mpos); + return scnprintf(page, PAGE_SIZE, "0x%08x\n", geo->mpos); } else if (strcmp(attr->name, "media_capabilities") == 0) { - return scnprintf(page, PAGE_SIZE, "0x%08x\n", id->mccap); + return scnprintf(page, PAGE_SIZE, "0x%08x\n", geo->mccap); } else if (strcmp(attr->name, "max_phys_secs") == 0) { return scnprintf(page, PAGE_SIZE, "%u\n", NVM_MAX_VLBA); } else { - return scnprintf(page, - PAGE_SIZE, - "Unhandled attr(%s) in `nvm_dev_attr_show_12`\n", - attr->name); + return scnprintf(page, PAGE_SIZE, + "Unhandled attr(%s) in `nvm_dev_attr_show_12`\n", + attr->name); } } @@ -952,42 +1011,40 @@ static ssize_t nvm_dev_attr_show_20(struct device *dev, { struct nvme_ns *ns = nvme_get_ns_from_dev(dev); struct nvm_dev *ndev = ns->ndev; - struct nvm_id *id; + struct nvm_geo *geo = &ndev->geo; struct attribute *attr; if (!ndev) return 0; - id = &ndev->identity; attr = &dattr->attr; if (strcmp(attr->name, "groups") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_ch); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chnls); } else if (strcmp(attr->name, "punits") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_lun); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_luns); } else if (strcmp(attr->name, "chunks") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_chk); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chks); } else if (strcmp(attr->name, "clba") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->clba); + return scnprintf(page, PAGE_SIZE, 
"%u\n", geo->clba); } else if (strcmp(attr->name, "ws_min") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->ws_min); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->ws_min); } else if (strcmp(attr->name, "ws_opt") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->ws_opt); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->ws_opt); } else if (strcmp(attr->name, "mw_cunits") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->mw_cunits); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->mw_cunits); } else if (strcmp(attr->name, "write_typ") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprt); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tprt); } else if (strcmp(attr->name, "write_max") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprm); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tprm); } else if (strcmp(attr->name, "reset_typ") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbet); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tbet); } else if (strcmp(attr->name, "reset_max") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbem); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tbem); } else { - return scnprintf(page, - PAGE_SIZE, - "Unhandled attr(%s) in `nvm_dev_attr_show_20`\n", - attr->name); + return scnprintf(page, PAGE_SIZE, + "Unhandled attr(%s) in `nvm_dev_attr_show_20`\n", + attr->name); } } @@ -1106,10 +1163,13 @@ static const struct attribute_group nvm_dev_attr_group_20 = { int nvme_nvm_register_sysfs(struct nvme_ns *ns) { - if (!ns->ndev) + struct nvm_dev *ndev = ns->ndev; + struct nvm_geo *geo = &ndev->geo; + + if (!ndev) return -EINVAL; - switch (ns->ndev->identity.ver_id) { + switch (geo->ver_id) { case 1: return sysfs_create_group(&disk_to_dev(ns->disk)->kobj, &nvm_dev_attr_group_12); @@ -1123,7 +1183,10 @@ int nvme_nvm_register_sysfs(struct nvme_ns *ns) void nvme_nvm_unregister_sysfs(struct nvme_ns *ns) { - switch (ns->ndev->identity.ver_id) { + struct nvm_dev *ndev = ns->ndev; + 
struct nvm_geo *geo = &ndev->geo; + + switch (geo->ver_id) { case 1: sysfs_remove_group(&disk_to_dev(ns->disk)->kobj, &nvm_dev_attr_group_12); diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h index e55b10573c99..16255fcd5250 100644 --- a/include/linux/lightnvm.h +++ b/include/linux/lightnvm.h @@ -50,7 +50,7 @@ struct nvm_id; struct nvm_dev; struct nvm_tgt_dev; -typedef int (nvm_id_fn)(struct nvm_dev *, struct nvm_id *); +typedef int (nvm_id_fn)(struct nvm_dev *); typedef int (nvm_op_bb_tbl_fn)(struct nvm_dev *, struct ppa_addr, u8 *); typedef int (nvm_op_set_bb_fn)(struct nvm_dev *, struct ppa_addr *, int, int); typedef int (nvm_submit_io_fn)(struct nvm_dev *, struct nvm_rq *); @@ -152,62 +152,48 @@ struct nvm_id_lp_tbl { struct nvm_id_lp_mlc mlc; }; -struct nvm_addr_format { - u8 ch_offset; +struct nvm_addr_format_12 { u8 ch_len; - u8 lun_offset; u8 lun_len; - u8 pln_offset; + u8 blk_len; + u8 pg_len; u8 pln_len; + u8 sect_len; + + u8 ch_offset; + u8 lun_offset; u8 blk_offset; - u8 blk_len; u8 pg_offset; - u8 pg_len; + u8 pln_offset; u8 sect_offset; - u8 sect_len; -}; - -struct nvm_id { - u8 ver_id; - u8 vmnt; - u32 cap; - u32 dom; - - struct nvm_addr_format ppaf; - - u8 num_ch; - u8 num_lun; - u16 num_chk; - u16 clba; - u16 csecs; - u16 sos; - - u32 ws_min; - u32 ws_opt; - u32 mw_cunits; - u32 trdt; - u32 trdm; - u32 tprt; - u32 tprm; - u32 tbet; - u32 tbem; - u32 mpos; - u32 mccap; - u16 cpar; - - /* calculated values */ - u16 ws_seq; - u16 ws_per_chk; - - /* 1.2 compatibility */ - u8 mtype; - u8 fmtype; + u64 ch_mask; + u64 lun_mask; + u64 blk_mask; + u64 pg_mask; + u64 pln_mask; + u64 sec_mask; +}; - u8 num_pln; - u16 num_pg; - u16 fpg_sz; -} __packed; +struct nvm_addr_format { + u8 ch_len; + u8 lun_len; + u8 chk_len; + u8 sec_len; + u8 rsv_len[2]; + + u8 ch_offset; + u8 lun_offset; + u8 chk_offset; + u8 sec_offset; + u8 rsv_off[2]; + + u64 ch_mask; + u64 lun_mask; + u64 chk_mask; + u64 sec_mask; + u64 rsv_mask[2]; +}; struct nvm_target { 
struct list_head list; @@ -274,36 +260,63 @@ enum { NVM_BLK_ST_BAD = 0x8, /* Bad block */ }; - -/* Device generic information */ +/* Instance geometry */ struct nvm_geo { - /* generic geometry */ + /* device reported version */ + u8 ver_id; + + /* instance specific geometry */ int nr_chnls; - int all_luns; /* across channels */ - int nr_luns; /* per channel */ - int nr_chks; /* per lun */ + int nr_luns; /* per channel */ - int sec_size; - int oob_size; - int mccap; + /* calculated values */ + int all_luns; /* across channels */ + int all_chunks; /* across channels */ - int sec_per_chk; - int sec_per_lun; + int op; /* over-provision in instance */ - int ws_min; - int ws_opt; - int ws_seq; - int ws_per_chk; + sector_t total_secs; /* across channels */ - int op; + /* chunk geometry */ + u32 nr_chks; /* chunks per lun */ + u32 clba; /* sectors per chunk */ + u16 csecs; /* sector size */ + u16 sos; /* out-of-band area size */ - struct nvm_addr_format ppaf; + /* device write constraints */ + u32 ws_min; /* minimum write size */ + u32 ws_opt; /* optimal write size */ + u32 mw_cunits; /* distance required for successful read */ - /* Legacy 1.2 specific geometry */ - int plane_mode; /* drive device in single, double or quad mode */ - int nr_planes; - int sec_per_pg; /* only sectors for a single page */ - int sec_per_pl; /* all sectors across planes */ + /* device capabilities */ + u32 mccap; + + /* device timings */ + u32 trdt; /* Avg. Tread (ns) */ + u32 trdm; /* Max Tread (ns) */ + u32 tprt; /* Avg. Tprog (ns) */ + u32 tprm; /* Max Tprog (ns) */ + u32 tbet; /* Avg.
Terase (ns) */ + u32 tbem; /* Max Terase (ns) */ + + /* generic address format */ + struct nvm_addr_format addrf; + + /* 1.2 compatibility */ + u8 vmnt; + u32 cap; + u32 dom; + + u8 mtype; + u8 fmtype; + + u16 cpar; + u32 mpos; + + u8 num_pln; + u8 plane_mode; + u16 num_pg; + u16 fpg_sz; }; /* sub-device structure */ @@ -314,9 +327,6 @@ struct nvm_tgt_dev { /* Base ppas for target LUNs */ struct ppa_addr *luns; - sector_t total_secs; - - struct nvm_id identity; struct request_queue *q; struct nvm_dev *parent; @@ -331,13 +341,9 @@ struct nvm_dev { /* Device information */ struct nvm_geo geo; - unsigned long total_secs; - unsigned long *lun_map; void *dma_pool; - struct nvm_id identity; - /* Backend device */ struct request_queue *q; char name[DISK_NAME_LEN]; @@ -357,14 +363,16 @@ static inline struct ppa_addr generic_to_dev_addr(struct nvm_tgt_dev *tgt_dev, struct ppa_addr r) { struct nvm_geo *geo = &tgt_dev->geo; + struct nvm_addr_format_12 *ppaf = + (struct nvm_addr_format_12 *)&geo->addrf; struct ppa_addr l; - l.ppa = ((u64)r.g.blk) << geo->ppaf.blk_offset; - l.ppa |= ((u64)r.g.pg) << geo->ppaf.pg_offset; - l.ppa |= ((u64)r.g.sec) << geo->ppaf.sect_offset; - l.ppa |= ((u64)r.g.pl) << geo->ppaf.pln_offset; - l.ppa |= ((u64)r.g.lun) << geo->ppaf.lun_offset; - l.ppa |= ((u64)r.g.ch) << geo->ppaf.ch_offset; + l.ppa = ((u64)r.g.ch) << ppaf->ch_offset; + l.ppa |= ((u64)r.g.lun) << ppaf->lun_offset; + l.ppa |= ((u64)r.g.blk) << ppaf->blk_offset; + l.ppa |= ((u64)r.g.pg) << ppaf->pg_offset; + l.ppa |= ((u64)r.g.pl) << ppaf->pln_offset; + l.ppa |= ((u64)r.g.sec) << ppaf->sect_offset; return l; } @@ -373,24 +381,18 @@ static inline struct ppa_addr dev_to_generic_addr(struct nvm_tgt_dev *tgt_dev, struct ppa_addr r) { struct nvm_geo *geo = &tgt_dev->geo; + struct nvm_addr_format_12 *ppaf = + (struct nvm_addr_format_12 *)&geo->addrf; struct ppa_addr l; l.ppa = 0; - /* - * (r.ppa << X offset) & X len bitmask. X eq. blk, pg, etc. 
- */ - l.g.blk = (r.ppa >> geo->ppaf.blk_offset) & - (((1 << geo->ppaf.blk_len) - 1)); - l.g.pg |= (r.ppa >> geo->ppaf.pg_offset) & - (((1 << geo->ppaf.pg_len) - 1)); - l.g.sec |= (r.ppa >> geo->ppaf.sect_offset) & - (((1 << geo->ppaf.sect_len) - 1)); - l.g.pl |= (r.ppa >> geo->ppaf.pln_offset) & - (((1 << geo->ppaf.pln_len) - 1)); - l.g.lun |= (r.ppa >> geo->ppaf.lun_offset) & - (((1 << geo->ppaf.lun_len) - 1)); - l.g.ch |= (r.ppa >> geo->ppaf.ch_offset) & - (((1 << geo->ppaf.ch_len) - 1)); + + l.g.ch = (r.ppa & ppaf->ch_mask) >> ppaf->ch_offset; + l.g.lun = (r.ppa & ppaf->lun_mask) >> ppaf->lun_offset; + l.g.blk = (r.ppa & ppaf->blk_mask) >> ppaf->blk_offset; + l.g.pg = (r.ppa & ppaf->pg_mask) >> ppaf->pg_offset; + l.g.pl = (r.ppa & ppaf->pln_mask) >> ppaf->pln_offset; + l.g.sec = (r.ppa & ppaf->sec_mask) >> ppaf->sect_offset; return l; } -- 2.7.4
* [PATCH 01/15] lightnvm: simplify geometry structure. From: Javier González @ 2018-02-28 15:49 UTC To: mb; +Cc: linux-block, linux-kernel, linux-nvme, Javier González Currently, the device geometry is stored redundantly in the nvm_id and nvm_geo structures at a device level. Moreover, when instantiating targets on a specific number of LUNs, these structures are replicated and manually modified to fit the instance channel and LUN partitioning. Instead, create a generic geometry around nvm_geo, which can be used by (i) the underlying device to describe the geometry of the whole device, and (ii) instances to describe their geometry independently. Signed-off-by: Javier González <javier@cnexlabs.com> --- drivers/lightnvm/core.c | 70 +++----- drivers/lightnvm/pblk-core.c | 16 +- drivers/lightnvm/pblk-gc.c | 2 +- drivers/lightnvm/pblk-init.c | 119 +++++++------- drivers/lightnvm/pblk-read.c | 2 +- drivers/lightnvm/pblk-recovery.c | 14 +- drivers/lightnvm/pblk-rl.c | 2 +- drivers/lightnvm/pblk-sysfs.c | 39 +++-- drivers/lightnvm/pblk-write.c | 2 +- drivers/lightnvm/pblk.h | 87 +++++----- drivers/nvme/host/lightnvm.c | 341 +++++++++++++++++++++++---------------- include/linux/lightnvm.h | 200 +++++++++++------------ 12 files changed, 465 insertions(+), 429 deletions(-) diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c index 19c46ebb1b91..9a417d9cdf0c 100644 --- a/drivers/lightnvm/core.c +++ b/drivers/lightnvm/core.c @@ -155,7 +155,7 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev, int blun = lun_begin % dev->geo.nr_luns; int lunid = 0; int lun_balanced = 1; - int prev_nr_luns; + int sec_per_lun, prev_nr_luns; int i, j; nr_chnls = (nr_chnls_mod == 0) ?
nr_chnls : nr_chnls + 1; @@ -215,18 +215,23 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev, if (!tgt_dev) goto err_ch; + /* Inherit device geometry from parent */ memcpy(&tgt_dev->geo, &dev->geo, sizeof(struct nvm_geo)); + /* Target device only owns a portion of the physical device */ tgt_dev->geo.nr_chnls = nr_chnls; - tgt_dev->geo.all_luns = nr_luns; tgt_dev->geo.nr_luns = (lun_balanced) ? prev_nr_luns : -1; + tgt_dev->geo.all_luns = nr_luns; + tgt_dev->geo.all_chunks = nr_luns * dev->geo.nr_chks; + tgt_dev->geo.op = op; - tgt_dev->total_secs = nr_luns * tgt_dev->geo.sec_per_lun; + + sec_per_lun = dev->geo.clba * dev->geo.nr_chks; + tgt_dev->geo.total_secs = nr_luns * sec_per_lun; + tgt_dev->q = dev->q; tgt_dev->map = dev_map; tgt_dev->luns = luns; - memcpy(&tgt_dev->identity, &dev->identity, sizeof(struct nvm_id)); - tgt_dev->parent = dev; return tgt_dev; @@ -296,8 +301,6 @@ static int __nvm_config_simple(struct nvm_dev *dev, static int __nvm_config_extended(struct nvm_dev *dev, struct nvm_ioctl_create_extended *e) { - struct nvm_geo *geo = &dev->geo; - if (e->lun_begin == 0xFFFF && e->lun_end == 0xFFFF) { e->lun_begin = 0; e->lun_end = dev->geo.all_luns - 1; @@ -311,7 +314,7 @@ static int __nvm_config_extended(struct nvm_dev *dev, return -EINVAL; } - return nvm_config_check_luns(geo, e->lun_begin, e->lun_end); + return nvm_config_check_luns(&dev->geo, e->lun_begin, e->lun_end); } static int nvm_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create) @@ -406,7 +409,7 @@ static int nvm_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create) tqueue->queuedata = targetdata; blk_queue_max_hw_sectors(tqueue, - (dev->geo.sec_size >> 9) * NVM_MAX_VLBA); + (dev->geo.csecs >> 9) * NVM_MAX_VLBA); set_capacity(tdisk, tt->capacity(targetdata)); add_disk(tdisk); @@ -841,40 +844,9 @@ EXPORT_SYMBOL(nvm_get_tgt_bb_tbl); static int nvm_core_init(struct nvm_dev *dev) { - struct nvm_id *id = &dev->identity; struct nvm_geo *geo = &dev->geo; 
int ret; - memcpy(&geo->ppaf, &id->ppaf, sizeof(struct nvm_addr_format)); - - if (id->mtype != 0) { - pr_err("nvm: memory type not supported\n"); - return -EINVAL; - } - - /* Whole device values */ - geo->nr_chnls = id->num_ch; - geo->nr_luns = id->num_lun; - - /* Generic device geometry values */ - geo->ws_min = id->ws_min; - geo->ws_opt = id->ws_opt; - geo->ws_seq = id->ws_seq; - geo->ws_per_chk = id->ws_per_chk; - geo->nr_chks = id->num_chk; - geo->mccap = id->mccap; - - geo->sec_per_chk = id->clba; - geo->sec_per_lun = geo->sec_per_chk * geo->nr_chks; - geo->all_luns = geo->nr_luns * geo->nr_chnls; - - /* 1.2 spec device geometry values */ - geo->plane_mode = 1 << geo->ws_seq; - geo->nr_planes = geo->ws_opt / geo->ws_min; - geo->sec_per_pg = geo->ws_min; - geo->sec_per_pl = geo->sec_per_pg * geo->nr_planes; - - dev->total_secs = geo->all_luns * geo->sec_per_lun; dev->lun_map = kcalloc(BITS_TO_LONGS(geo->all_luns), sizeof(unsigned long), GFP_KERNEL); if (!dev->lun_map) @@ -913,16 +885,14 @@ static int nvm_init(struct nvm_dev *dev) struct nvm_geo *geo = &dev->geo; int ret = -EINVAL; - if (dev->ops->identity(dev, &dev->identity)) { + if (dev->ops->identity(dev)) { pr_err("nvm: device could not be identified\n"); goto err; } - if (dev->identity.ver_id != 1 && dev->identity.ver_id != 2) { - pr_err("nvm: device ver_id %d not supported by kernel.\n", - dev->identity.ver_id); - goto err; - } + pr_debug("nvm: ver:%u nvm_vendor:%x\n", + geo->ver_id, + geo->vmnt); ret = nvm_core_init(dev); if (ret) { @@ -930,10 +900,10 @@ static int nvm_init(struct nvm_dev *dev) goto err; } - pr_info("nvm: registered %s [%u/%u/%u/%u/%u/%u]\n", - dev->name, geo->sec_per_pg, geo->nr_planes, - geo->ws_per_chk, geo->nr_chks, - geo->all_luns, geo->nr_chnls); + pr_info("nvm: registered %s [%u/%u/%u/%u/%u]\n", + dev->name, geo->ws_min, geo->ws_opt, + geo->nr_chks, geo->all_luns, + geo->nr_chnls); return 0; err: pr_err("nvm: failed to initialize nvm\n"); diff --git a/drivers/lightnvm/pblk-core.c 
b/drivers/lightnvm/pblk-core.c index 8848443a0721..169589ddd457 100644 --- a/drivers/lightnvm/pblk-core.c +++ b/drivers/lightnvm/pblk-core.c @@ -613,7 +613,7 @@ static int pblk_line_submit_emeta_io(struct pblk *pblk, struct pblk_line *line, memset(&rqd, 0, sizeof(struct nvm_rq)); rq_ppas = pblk_calc_secs(pblk, left_ppas, 0); - rq_len = rq_ppas * geo->sec_size; + rq_len = rq_ppas * geo->csecs; bio = pblk_bio_map_addr(pblk, emeta_buf, rq_ppas, rq_len, l_mg->emeta_alloc_type, GFP_KERNEL); @@ -722,7 +722,7 @@ u64 pblk_line_smeta_start(struct pblk *pblk, struct pblk_line *line) if (bit >= lm->blk_per_line) return -1; - return bit * geo->sec_per_pl; + return bit * geo->ws_opt; } static int pblk_line_submit_smeta_io(struct pblk *pblk, struct pblk_line *line, @@ -1035,19 +1035,19 @@ static int pblk_line_init_bb(struct pblk *pblk, struct pblk_line *line, /* Capture bad block information on line mapping bitmaps */ while ((bit = find_next_bit(line->blk_bitmap, lm->blk_per_line, bit + 1)) < lm->blk_per_line) { - off = bit * geo->sec_per_pl; + off = bit * geo->ws_opt; bitmap_shift_left(l_mg->bb_aux, l_mg->bb_template, off, lm->sec_per_line); bitmap_or(line->map_bitmap, line->map_bitmap, l_mg->bb_aux, lm->sec_per_line); - line->sec_in_line -= geo->sec_per_chk; + line->sec_in_line -= geo->clba; if (bit >= lm->emeta_bb) nr_bb++; } /* Mark smeta metadata sectors as bad sectors */ bit = find_first_zero_bit(line->blk_bitmap, lm->blk_per_line); - off = bit * geo->sec_per_pl; + off = bit * geo->ws_opt; bitmap_set(line->map_bitmap, off, lm->smeta_sec); line->sec_in_line -= lm->smeta_sec; line->smeta_ssec = off; @@ -1066,10 +1066,10 @@ static int pblk_line_init_bb(struct pblk *pblk, struct pblk_line *line, emeta_secs = lm->emeta_sec[0]; off = lm->sec_per_line; while (emeta_secs) { - off -= geo->sec_per_pl; + off -= geo->ws_opt; if (!test_bit(off, line->invalid_bitmap)) { - bitmap_set(line->invalid_bitmap, off, geo->sec_per_pl); - emeta_secs -= geo->sec_per_pl; + 
bitmap_set(line->invalid_bitmap, off, geo->ws_opt); + emeta_secs -= geo->ws_opt; } } diff --git a/drivers/lightnvm/pblk-gc.c b/drivers/lightnvm/pblk-gc.c index 320f99af99e9..6851a5c67189 100644 --- a/drivers/lightnvm/pblk-gc.c +++ b/drivers/lightnvm/pblk-gc.c @@ -88,7 +88,7 @@ static void pblk_gc_line_ws(struct work_struct *work) up(&gc->gc_sem); - gc_rq->data = vmalloc(gc_rq->nr_secs * geo->sec_size); + gc_rq->data = vmalloc(gc_rq->nr_secs * geo->csecs); if (!gc_rq->data) { pr_err("pblk: could not GC line:%d (%d/%d)\n", line->id, *line->vsc, gc_rq->nr_secs); diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c index 25fc70ca07f7..9b5ee05c3028 100644 --- a/drivers/lightnvm/pblk-init.c +++ b/drivers/lightnvm/pblk-init.c @@ -146,7 +146,7 @@ static int pblk_rwb_init(struct pblk *pblk) return -ENOMEM; power_size = get_count_order(nr_entries); - power_seg_sz = get_count_order(geo->sec_size); + power_seg_sz = get_count_order(geo->csecs); return pblk_rb_init(&pblk->rwb, entries, power_size, power_seg_sz); } @@ -154,11 +154,11 @@ static int pblk_rwb_init(struct pblk *pblk) /* Minimum pages needed within a lun */ #define ADDR_POOL_SIZE 64 -static int pblk_set_ppaf(struct pblk *pblk) +static int pblk_set_addrf_12(struct nvm_geo *geo, + struct nvm_addr_format_12 *dst) { - struct nvm_tgt_dev *dev = pblk->dev; - struct nvm_geo *geo = &dev->geo; - struct nvm_addr_format ppaf = geo->ppaf; + struct nvm_addr_format_12 *src = + (struct nvm_addr_format_12 *)&geo->addrf; int power_len; /* Re-calculate channel and lun format to adapt to configuration */ @@ -167,34 +167,50 @@ static int pblk_set_ppaf(struct pblk *pblk) pr_err("pblk: supports only power-of-two channel config.\n"); return -EINVAL; } - ppaf.ch_len = power_len; + dst->ch_len = power_len; power_len = get_count_order(geo->nr_luns); if (1 << power_len != geo->nr_luns) { pr_err("pblk: supports only power-of-two LUN config.\n"); return -EINVAL; } - ppaf.lun_len = power_len; + dst->lun_len = power_len; - 
pblk->ppaf.sec_offset = 0; - pblk->ppaf.pln_offset = ppaf.sect_len; - pblk->ppaf.ch_offset = pblk->ppaf.pln_offset + ppaf.pln_len; - pblk->ppaf.lun_offset = pblk->ppaf.ch_offset + ppaf.ch_len; - pblk->ppaf.pg_offset = pblk->ppaf.lun_offset + ppaf.lun_len; - pblk->ppaf.blk_offset = pblk->ppaf.pg_offset + ppaf.pg_len; - pblk->ppaf.sec_mask = (1ULL << ppaf.sect_len) - 1; - pblk->ppaf.pln_mask = ((1ULL << ppaf.pln_len) - 1) << - pblk->ppaf.pln_offset; - pblk->ppaf.ch_mask = ((1ULL << ppaf.ch_len) - 1) << - pblk->ppaf.ch_offset; - pblk->ppaf.lun_mask = ((1ULL << ppaf.lun_len) - 1) << - pblk->ppaf.lun_offset; - pblk->ppaf.pg_mask = ((1ULL << ppaf.pg_len) - 1) << - pblk->ppaf.pg_offset; - pblk->ppaf.blk_mask = ((1ULL << ppaf.blk_len) - 1) << - pblk->ppaf.blk_offset; + dst->blk_len = src->blk_len; + dst->pg_len = src->pg_len; + dst->pln_len = src->pln_len; + dst->sect_len = src->sect_len; - pblk->ppaf_bitsize = pblk->ppaf.blk_offset + ppaf.blk_len; + dst->sect_offset = 0; + dst->pln_offset = dst->sect_len; + dst->ch_offset = dst->pln_offset + dst->pln_len; + dst->lun_offset = dst->ch_offset + dst->ch_len; + dst->pg_offset = dst->lun_offset + dst->lun_len; + dst->blk_offset = dst->pg_offset + dst->pg_len; + + dst->sec_mask = ((1ULL << dst->sect_len) - 1) << dst->sect_offset; + dst->pln_mask = ((1ULL << dst->pln_len) - 1) << dst->pln_offset; + dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset; + dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset; + dst->pg_mask = ((1ULL << dst->pg_len) - 1) << dst->pg_offset; + dst->blk_mask = ((1ULL << dst->blk_len) - 1) << dst->blk_offset; + + return dst->blk_offset + src->blk_len; +} + +static int pblk_set_ppaf(struct pblk *pblk) +{ + struct nvm_tgt_dev *dev = pblk->dev; + struct nvm_geo *geo = &dev->geo; + int mod; + + div_u64_rem(geo->clba, pblk->min_write_pgs, &mod); + if (mod) { + pr_err("pblk: bad configuration of sectors/pages\n"); + return -EINVAL; + } + + pblk->ppaf_bitsize = pblk_set_addrf_12(geo, (void 
*)&pblk->ppaf); return 0; } @@ -253,8 +269,7 @@ static int pblk_core_init(struct pblk *pblk) struct nvm_tgt_dev *dev = pblk->dev; struct nvm_geo *geo = &dev->geo; - pblk->pgs_in_buffer = NVM_MEM_PAGE_WRITE * geo->sec_per_pg * - geo->nr_planes * geo->all_luns; + pblk->pgs_in_buffer = geo->mw_cunits * geo->all_luns; if (pblk_init_global_caches(pblk)) return -ENOMEM; @@ -552,18 +567,18 @@ static unsigned int calc_emeta_len(struct pblk *pblk) /* Round to sector size so that lba_list starts on its own sector */ lm->emeta_sec[1] = DIV_ROUND_UP( sizeof(struct line_emeta) + lm->blk_bitmap_len + - sizeof(struct wa_counters), geo->sec_size); - lm->emeta_len[1] = lm->emeta_sec[1] * geo->sec_size; + sizeof(struct wa_counters), geo->csecs); + lm->emeta_len[1] = lm->emeta_sec[1] * geo->csecs; /* Round to sector size so that vsc_list starts on its own sector */ lm->dsec_per_line = lm->sec_per_line - lm->emeta_sec[0]; lm->emeta_sec[2] = DIV_ROUND_UP(lm->dsec_per_line * sizeof(u64), - geo->sec_size); - lm->emeta_len[2] = lm->emeta_sec[2] * geo->sec_size; + geo->csecs); + lm->emeta_len[2] = lm->emeta_sec[2] * geo->csecs; lm->emeta_sec[3] = DIV_ROUND_UP(l_mg->nr_lines * sizeof(u32), - geo->sec_size); - lm->emeta_len[3] = lm->emeta_sec[3] * geo->sec_size; + geo->csecs); + lm->emeta_len[3] = lm->emeta_sec[3] * geo->csecs; lm->vsc_list_len = l_mg->nr_lines * sizeof(u32); @@ -594,13 +609,13 @@ static void pblk_set_provision(struct pblk *pblk, long nr_free_blks) * on user capacity consider only provisioned blocks */ pblk->rl.total_blocks = nr_free_blks; - pblk->rl.nr_secs = nr_free_blks * geo->sec_per_chk; + pblk->rl.nr_secs = nr_free_blks * geo->clba; /* Consider sectors used for metadata */ sec_meta = (lm->smeta_sec + lm->emeta_sec[0]) * l_mg->nr_free_lines; - blk_meta = DIV_ROUND_UP(sec_meta, geo->sec_per_chk); + blk_meta = DIV_ROUND_UP(sec_meta, geo->clba); - pblk->capacity = (provisioned - blk_meta) * geo->sec_per_chk; + pblk->capacity = (provisioned - blk_meta) * geo->clba; 
atomic_set(&pblk->rl.free_blocks, nr_free_blks); atomic_set(&pblk->rl.free_user_blocks, nr_free_blks); @@ -711,10 +726,10 @@ static int pblk_lines_init(struct pblk *pblk) void *chunk_log; unsigned int smeta_len, emeta_len; long nr_bad_blks = 0, nr_free_blks = 0; - int bb_distance, max_write_ppas, mod; + int bb_distance, max_write_ppas; int i, ret; - pblk->min_write_pgs = geo->sec_per_pl * (geo->sec_size / PAGE_SIZE); + pblk->min_write_pgs = geo->ws_opt * (geo->csecs / PAGE_SIZE); max_write_ppas = pblk->min_write_pgs * geo->all_luns; pblk->max_write_pgs = min_t(int, max_write_ppas, NVM_MAX_VLBA); pblk_set_sec_per_write(pblk, pblk->min_write_pgs); @@ -725,19 +740,13 @@ static int pblk_lines_init(struct pblk *pblk) return -EINVAL; } - div_u64_rem(geo->sec_per_chk, pblk->min_write_pgs, &mod); - if (mod) { - pr_err("pblk: bad configuration of sectors/pages\n"); - return -EINVAL; - } - l_mg->nr_lines = geo->nr_chks; l_mg->log_line = l_mg->data_line = NULL; l_mg->l_seq_nr = l_mg->d_seq_nr = 0; l_mg->nr_free_lines = 0; bitmap_zero(&l_mg->meta_bitmap, PBLK_DATA_LINES); - lm->sec_per_line = geo->sec_per_chk * geo->all_luns; + lm->sec_per_line = geo->clba * geo->all_luns; lm->blk_per_line = geo->all_luns; lm->blk_bitmap_len = BITS_TO_LONGS(geo->all_luns) * sizeof(long); lm->sec_bitmap_len = BITS_TO_LONGS(lm->sec_per_line) * sizeof(long); @@ -751,8 +760,8 @@ static int pblk_lines_init(struct pblk *pblk) */ i = 1; add_smeta_page: - lm->smeta_sec = i * geo->sec_per_pl; - lm->smeta_len = lm->smeta_sec * geo->sec_size; + lm->smeta_sec = i * geo->ws_opt; + lm->smeta_len = lm->smeta_sec * geo->csecs; smeta_len = sizeof(struct line_smeta) + lm->lun_bitmap_len; if (smeta_len > lm->smeta_len) { @@ -765,8 +774,8 @@ static int pblk_lines_init(struct pblk *pblk) */ i = 1; add_emeta_page: - lm->emeta_sec[0] = i * geo->sec_per_pl; - lm->emeta_len[0] = lm->emeta_sec[0] * geo->sec_size; + lm->emeta_sec[0] = i * geo->ws_opt; + lm->emeta_len[0] = lm->emeta_sec[0] * geo->csecs; emeta_len = 
calc_emeta_len(pblk); if (emeta_len > lm->emeta_len[0]) { @@ -779,7 +788,7 @@ static int pblk_lines_init(struct pblk *pblk) lm->min_blk_line = 1; if (geo->all_luns > 1) lm->min_blk_line += DIV_ROUND_UP(lm->smeta_sec + - lm->emeta_sec[0], geo->sec_per_chk); + lm->emeta_sec[0], geo->clba); if (lm->min_blk_line > lm->blk_per_line) { pr_err("pblk: config. not supported. Min. LUN in line:%d\n", @@ -803,9 +812,9 @@ static int pblk_lines_init(struct pblk *pblk) goto fail_free_bb_template; } - bb_distance = (geo->all_luns) * geo->sec_per_pl; + bb_distance = (geo->all_luns) * geo->ws_opt; for (i = 0; i < lm->sec_per_line; i += bb_distance) - bitmap_set(l_mg->bb_template, i, geo->sec_per_pl); + bitmap_set(l_mg->bb_template, i, geo->ws_opt); INIT_LIST_HEAD(&l_mg->free_list); INIT_LIST_HEAD(&l_mg->corrupt_list); @@ -982,9 +991,9 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk, struct pblk *pblk; int ret; - if (dev->identity.dom & NVM_RSP_L2P) { + if (dev->geo.dom & NVM_RSP_L2P) { pr_err("pblk: host-side L2P table not supported. 
(%x)\n", - dev->identity.dom); + dev->geo.dom); return ERR_PTR(-EINVAL); } @@ -1092,7 +1101,7 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk, blk_queue_write_cache(tqueue, true, false); - tqueue->limits.discard_granularity = geo->sec_per_chk * geo->sec_size; + tqueue->limits.discard_granularity = geo->clba * geo->csecs; tqueue->limits.discard_alignment = 0; blk_queue_max_discard_sectors(tqueue, UINT_MAX >> 9); queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, tqueue); diff --git a/drivers/lightnvm/pblk-read.c b/drivers/lightnvm/pblk-read.c index 2f761283f43e..9eee10f69df0 100644 --- a/drivers/lightnvm/pblk-read.c +++ b/drivers/lightnvm/pblk-read.c @@ -563,7 +563,7 @@ int pblk_submit_read_gc(struct pblk *pblk, struct pblk_gc_rq *gc_rq) if (!(gc_rq->secs_to_gc)) goto out; - data_len = (gc_rq->secs_to_gc) * geo->sec_size; + data_len = (gc_rq->secs_to_gc) * geo->csecs; bio = pblk_bio_map_addr(pblk, gc_rq->data, gc_rq->secs_to_gc, data_len, PBLK_VMALLOC_META, GFP_KERNEL); if (IS_ERR(bio)) { diff --git a/drivers/lightnvm/pblk-recovery.c b/drivers/lightnvm/pblk-recovery.c index aaab9a5c17cc..26356429dc72 100644 --- a/drivers/lightnvm/pblk-recovery.c +++ b/drivers/lightnvm/pblk-recovery.c @@ -184,7 +184,7 @@ static int pblk_calc_sec_in_line(struct pblk *pblk, struct pblk_line *line) int nr_bb = bitmap_weight(line->blk_bitmap, lm->blk_per_line); return lm->sec_per_line - lm->smeta_sec - lm->emeta_sec[0] - - nr_bb * geo->sec_per_chk; + nr_bb * geo->clba; } struct pblk_recov_alloc { @@ -232,7 +232,7 @@ static int pblk_recov_read_oob(struct pblk *pblk, struct pblk_line *line, rq_ppas = pblk_calc_secs(pblk, left_ppas, 0); if (!rq_ppas) rq_ppas = pblk->min_write_pgs; - rq_len = rq_ppas * geo->sec_size; + rq_len = rq_ppas * geo->csecs; bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL); if (IS_ERR(bio)) @@ -351,7 +351,7 @@ static int pblk_recov_pad_oob(struct pblk *pblk, struct pblk_line *line, if (!pad_rq) return -ENOMEM; - data = 
vzalloc(pblk->max_write_pgs * geo->sec_size); + data = vzalloc(pblk->max_write_pgs * geo->csecs); if (!data) { ret = -ENOMEM; goto free_rq; @@ -368,7 +368,7 @@ static int pblk_recov_pad_oob(struct pblk *pblk, struct pblk_line *line, goto fail_free_pad; } - rq_len = rq_ppas * geo->sec_size; + rq_len = rq_ppas * geo->csecs; meta_list = nvm_dev_dma_alloc(dev->parent, GFP_KERNEL, &dma_meta_list); if (!meta_list) { @@ -509,7 +509,7 @@ static int pblk_recov_scan_all_oob(struct pblk *pblk, struct pblk_line *line, rq_ppas = pblk_calc_secs(pblk, left_ppas, 0); if (!rq_ppas) rq_ppas = pblk->min_write_pgs; - rq_len = rq_ppas * geo->sec_size; + rq_len = rq_ppas * geo->csecs; bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL); if (IS_ERR(bio)) @@ -640,7 +640,7 @@ static int pblk_recov_scan_oob(struct pblk *pblk, struct pblk_line *line, rq_ppas = pblk_calc_secs(pblk, left_ppas, 0); if (!rq_ppas) rq_ppas = pblk->min_write_pgs; - rq_len = rq_ppas * geo->sec_size; + rq_len = rq_ppas * geo->csecs; bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL); if (IS_ERR(bio)) @@ -745,7 +745,7 @@ static int pblk_recov_l2p_from_oob(struct pblk *pblk, struct pblk_line *line) ppa_list = (void *)(meta_list) + pblk_dma_meta_size; dma_ppa_list = dma_meta_list + pblk_dma_meta_size; - data = kcalloc(pblk->max_write_pgs, geo->sec_size, GFP_KERNEL); + data = kcalloc(pblk->max_write_pgs, geo->csecs, GFP_KERNEL); if (!data) { ret = -ENOMEM; goto free_meta_list; diff --git a/drivers/lightnvm/pblk-rl.c b/drivers/lightnvm/pblk-rl.c index 0d457b162f23..883a7113b19d 100644 --- a/drivers/lightnvm/pblk-rl.c +++ b/drivers/lightnvm/pblk-rl.c @@ -200,7 +200,7 @@ void pblk_rl_init(struct pblk_rl *rl, int budget) /* Consider sectors used for metadata */ sec_meta = (lm->smeta_sec + lm->emeta_sec[0]) * l_mg->nr_free_lines; - blk_meta = DIV_ROUND_UP(sec_meta, geo->sec_per_chk); + blk_meta = DIV_ROUND_UP(sec_meta, geo->clba); rl->high = pblk->op_blks - blk_meta - lm->blk_per_line; rl->high_pw = 
get_count_order(rl->high); diff --git a/drivers/lightnvm/pblk-sysfs.c b/drivers/lightnvm/pblk-sysfs.c index 1680ce0a828d..33199c6af267 100644 --- a/drivers/lightnvm/pblk-sysfs.c +++ b/drivers/lightnvm/pblk-sysfs.c @@ -113,26 +113,31 @@ static ssize_t pblk_sysfs_ppaf(struct pblk *pblk, char *page) { struct nvm_tgt_dev *dev = pblk->dev; struct nvm_geo *geo = &dev->geo; + struct nvm_addr_format_12 *ppaf; + struct nvm_addr_format_12 *geo_ppaf; ssize_t sz = 0; - sz = snprintf(page, PAGE_SIZE - sz, - "g:(b:%d)blk:%d/%d,pg:%d/%d,lun:%d/%d,ch:%d/%d,pl:%d/%d,sec:%d/%d\n", - pblk->ppaf_bitsize, - pblk->ppaf.blk_offset, geo->ppaf.blk_len, - pblk->ppaf.pg_offset, geo->ppaf.pg_len, - pblk->ppaf.lun_offset, geo->ppaf.lun_len, - pblk->ppaf.ch_offset, geo->ppaf.ch_len, - pblk->ppaf.pln_offset, geo->ppaf.pln_len, - pblk->ppaf.sec_offset, geo->ppaf.sect_len); + ppaf = (struct nvm_addr_format_12 *)&pblk->ppaf; + geo_ppaf = (struct nvm_addr_format_12 *)&geo->addrf; + + sz = snprintf(page, PAGE_SIZE, + "pblk:(s:%d)ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n", + pblk->ppaf_bitsize, + ppaf->ch_offset, ppaf->ch_len, + ppaf->lun_offset, ppaf->lun_len, + ppaf->blk_offset, ppaf->blk_len, + ppaf->pg_offset, ppaf->pg_len, + ppaf->pln_offset, ppaf->pln_len, + ppaf->sect_offset, ppaf->sect_len); sz += snprintf(page + sz, PAGE_SIZE - sz, - "d:blk:%d/%d,pg:%d/%d,lun:%d/%d,ch:%d/%d,pl:%d/%d,sec:%d/%d\n", - geo->ppaf.blk_offset, geo->ppaf.blk_len, - geo->ppaf.pg_offset, geo->ppaf.pg_len, - geo->ppaf.lun_offset, geo->ppaf.lun_len, - geo->ppaf.ch_offset, geo->ppaf.ch_len, - geo->ppaf.pln_offset, geo->ppaf.pln_len, - geo->ppaf.sect_offset, geo->ppaf.sect_len); + "device:ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n", + geo_ppaf->ch_offset, geo_ppaf->ch_len, + geo_ppaf->lun_offset, geo_ppaf->lun_len, + geo_ppaf->blk_offset, geo_ppaf->blk_len, + geo_ppaf->pg_offset, geo_ppaf->pg_len, + geo_ppaf->pln_offset, geo_ppaf->pln_len, + geo_ppaf->sect_offset, geo_ppaf->sect_len); 
return sz; } @@ -288,7 +293,7 @@ static ssize_t pblk_sysfs_lines_info(struct pblk *pblk, char *page) "blk_line:%d, sec_line:%d, sec_blk:%d\n", lm->blk_per_line, lm->sec_per_line, - geo->sec_per_chk); + geo->clba); return sz; } diff --git a/drivers/lightnvm/pblk-write.c b/drivers/lightnvm/pblk-write.c index aae86ed60b98..3e6f1ebd743a 100644 --- a/drivers/lightnvm/pblk-write.c +++ b/drivers/lightnvm/pblk-write.c @@ -333,7 +333,7 @@ int pblk_submit_meta_io(struct pblk *pblk, struct pblk_line *meta_line) m_ctx = nvm_rq_to_pdu(rqd); m_ctx->private = meta_line; - rq_len = rq_ppas * geo->sec_size; + rq_len = rq_ppas * geo->csecs; data = ((void *)emeta->buf) + emeta->mem; bio = pblk_bio_map_addr(pblk, data, rq_ppas, rq_len, diff --git a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk.h index f0309d8172c0..b29c1e6698aa 100644 --- a/drivers/lightnvm/pblk.h +++ b/drivers/lightnvm/pblk.h @@ -551,21 +551,6 @@ struct pblk_line_meta { unsigned int meta_distance; /* Distance between data and metadata */ }; -struct pblk_addr_format { - u64 ch_mask; - u64 lun_mask; - u64 pln_mask; - u64 blk_mask; - u64 pg_mask; - u64 sec_mask; - u8 ch_offset; - u8 lun_offset; - u8 pln_offset; - u8 blk_offset; - u8 pg_offset; - u8 sec_offset; -}; - enum { PBLK_STATE_RUNNING = 0, PBLK_STATE_STOPPING = 1, @@ -585,8 +570,8 @@ struct pblk { struct pblk_line_mgmt l_mg; /* Line management */ struct pblk_line_meta lm; /* Line metadata */ + struct nvm_addr_format ppaf; int ppaf_bitsize; - struct pblk_addr_format ppaf; struct pblk_rb rwb; @@ -941,14 +926,12 @@ static inline int pblk_line_vsc(struct pblk_line *line) return le32_to_cpu(*line->vsc); } -#define NVM_MEM_PAGE_WRITE (8) - static inline int pblk_pad_distance(struct pblk *pblk) { struct nvm_tgt_dev *dev = pblk->dev; struct nvm_geo *geo = &dev->geo; - return NVM_MEM_PAGE_WRITE * geo->all_luns * geo->sec_per_pl; + return geo->mw_cunits * geo->all_luns * geo->ws_opt; } static inline int pblk_ppa_to_line(struct ppa_addr p) @@ -964,15 +947,17 @@ static 
inline int pblk_ppa_to_pos(struct nvm_geo *geo, struct ppa_addr p) static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr, u64 line_id) { + struct nvm_addr_format_12 *ppaf = + (struct nvm_addr_format_12 *)&pblk->ppaf; struct ppa_addr ppa; ppa.ppa = 0; ppa.g.blk = line_id; - ppa.g.pg = (paddr & pblk->ppaf.pg_mask) >> pblk->ppaf.pg_offset; - ppa.g.lun = (paddr & pblk->ppaf.lun_mask) >> pblk->ppaf.lun_offset; - ppa.g.ch = (paddr & pblk->ppaf.ch_mask) >> pblk->ppaf.ch_offset; - ppa.g.pl = (paddr & pblk->ppaf.pln_mask) >> pblk->ppaf.pln_offset; - ppa.g.sec = (paddr & pblk->ppaf.sec_mask) >> pblk->ppaf.sec_offset; + ppa.g.pg = (paddr & ppaf->pg_mask) >> ppaf->pg_offset; + ppa.g.lun = (paddr & ppaf->lun_mask) >> ppaf->lun_offset; + ppa.g.ch = (paddr & ppaf->ch_mask) >> ppaf->ch_offset; + ppa.g.pl = (paddr & ppaf->pln_mask) >> ppaf->pln_offset; + ppa.g.sec = (paddr & ppaf->sec_mask) >> ppaf->sect_offset; return ppa; } @@ -980,13 +965,15 @@ static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr, static inline u64 pblk_dev_ppa_to_line_addr(struct pblk *pblk, struct ppa_addr p) { + struct nvm_addr_format_12 *ppaf = + (struct nvm_addr_format_12 *)&pblk->ppaf; u64 paddr; - paddr = (u64)p.g.pg << pblk->ppaf.pg_offset; - paddr |= (u64)p.g.lun << pblk->ppaf.lun_offset; - paddr |= (u64)p.g.ch << pblk->ppaf.ch_offset; - paddr |= (u64)p.g.pl << pblk->ppaf.pln_offset; - paddr |= (u64)p.g.sec << pblk->ppaf.sec_offset; + paddr = (u64)p.g.ch << ppaf->ch_offset; + paddr |= (u64)p.g.lun << ppaf->lun_offset; + paddr |= (u64)p.g.pg << ppaf->pg_offset; + paddr |= (u64)p.g.pl << ppaf->pln_offset; + paddr |= (u64)p.g.sec << ppaf->sect_offset; return paddr; } @@ -1003,18 +990,15 @@ static inline struct ppa_addr pblk_ppa32_to_ppa64(struct pblk *pblk, u32 ppa32) ppa64.c.line = ppa32 & ((~0U) >> 1); ppa64.c.is_cached = 1; } else { - ppa64.g.blk = (ppa32 & pblk->ppaf.blk_mask) >> - pblk->ppaf.blk_offset; - ppa64.g.pg = (ppa32 & pblk->ppaf.pg_mask) >> - 
pblk->ppaf.pg_offset; - ppa64.g.lun = (ppa32 & pblk->ppaf.lun_mask) >> - pblk->ppaf.lun_offset; - ppa64.g.ch = (ppa32 & pblk->ppaf.ch_mask) >> - pblk->ppaf.ch_offset; - ppa64.g.pl = (ppa32 & pblk->ppaf.pln_mask) >> - pblk->ppaf.pln_offset; - ppa64.g.sec = (ppa32 & pblk->ppaf.sec_mask) >> - pblk->ppaf.sec_offset; + struct nvm_addr_format_12 *ppaf = + (struct nvm_addr_format_12 *)&pblk->ppaf; + + ppa64.g.ch = (ppa32 & ppaf->ch_mask) >> ppaf->ch_offset; + ppa64.g.lun = (ppa32 & ppaf->lun_mask) >> ppaf->lun_offset; + ppa64.g.blk = (ppa32 & ppaf->blk_mask) >> ppaf->blk_offset; + ppa64.g.pg = (ppa32 & ppaf->pg_mask) >> ppaf->pg_offset; + ppa64.g.pl = (ppa32 & ppaf->pln_mask) >> ppaf->pln_offset; + ppa64.g.sec = (ppa32 & ppaf->sec_mask) >> ppaf->sect_offset; } return ppa64; @@ -1030,12 +1014,15 @@ static inline u32 pblk_ppa64_to_ppa32(struct pblk *pblk, struct ppa_addr ppa64) ppa32 |= ppa64.c.line; ppa32 |= 1U << 31; } else { - ppa32 |= ppa64.g.blk << pblk->ppaf.blk_offset; - ppa32 |= ppa64.g.pg << pblk->ppaf.pg_offset; - ppa32 |= ppa64.g.lun << pblk->ppaf.lun_offset; - ppa32 |= ppa64.g.ch << pblk->ppaf.ch_offset; - ppa32 |= ppa64.g.pl << pblk->ppaf.pln_offset; - ppa32 |= ppa64.g.sec << pblk->ppaf.sec_offset; + struct nvm_addr_format_12 *ppaf = + (struct nvm_addr_format_12 *)&pblk->ppaf; + + ppa32 |= ppa64.g.ch << ppaf->ch_offset; + ppa32 |= ppa64.g.lun << ppaf->lun_offset; + ppa32 |= ppa64.g.blk << ppaf->blk_offset; + ppa32 |= ppa64.g.pg << ppaf->pg_offset; + ppa32 |= ppa64.g.pl << ppaf->pln_offset; + ppa32 |= ppa64.g.sec << ppaf->sect_offset; } return ppa32; @@ -1229,10 +1216,10 @@ static inline int pblk_boundary_ppa_checks(struct nvm_tgt_dev *tgt_dev, if (!ppa->c.is_cached && ppa->g.ch < geo->nr_chnls && ppa->g.lun < geo->nr_luns && - ppa->g.pl < geo->nr_planes && + ppa->g.pl < geo->num_pln && ppa->g.blk < geo->nr_chks && - ppa->g.pg < geo->ws_per_chk && - ppa->g.sec < geo->sec_per_pg) + ppa->g.pg < geo->num_pg && + ppa->g.sec < geo->ws_min) continue; print_ppa(ppa, 
"boundary", i); diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c index 839c0b96466a..e276ace28c64 100644 --- a/drivers/nvme/host/lightnvm.c +++ b/drivers/nvme/host/lightnvm.c @@ -152,8 +152,8 @@ struct nvme_nvm_id12_addrf { __u8 blk_len; __u8 pg_offset; __u8 pg_len; - __u8 sect_offset; - __u8 sect_len; + __u8 sec_offset; + __u8 sec_len; __u8 res[4]; } __packed; @@ -254,106 +254,161 @@ static inline void _nvme_nvm_check_size(void) BUILD_BUG_ON(sizeof(struct nvme_nvm_id20) != NVME_IDENTIFY_DATA_SIZE); } -static int init_grp(struct nvm_id *nvm_id, struct nvme_nvm_id12 *id12) +static void nvme_nvm_set_addr_12(struct nvm_addr_format_12 *dst, + struct nvme_nvm_id12_addrf *src) +{ + dst->ch_len = src->ch_len; + dst->lun_len = src->lun_len; + dst->blk_len = src->blk_len; + dst->pg_len = src->pg_len; + dst->pln_len = src->pln_len; + dst->sect_len = src->sec_len; + + dst->ch_offset = src->ch_offset; + dst->lun_offset = src->lun_offset; + dst->blk_offset = src->blk_offset; + dst->pg_offset = src->pg_offset; + dst->pln_offset = src->pln_offset; + dst->sect_offset = src->sec_offset; + + dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset; + dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset; + dst->blk_mask = ((1ULL << dst->blk_len) - 1) << dst->blk_offset; + dst->pg_mask = ((1ULL << dst->pg_len) - 1) << dst->pg_offset; + dst->pln_mask = ((1ULL << dst->pln_len) - 1) << dst->pln_offset; + dst->sec_mask = ((1ULL << dst->sect_len) - 1) << dst->sect_offset; +} + +static int nvme_nvm_setup_12(struct nvme_nvm_id12 *id, + struct nvm_geo *geo) { struct nvme_nvm_id12_grp *src; int sec_per_pg, sec_per_pl, pg_per_blk; - if (id12->cgrps != 1) + if (id->cgrps != 1) return -EINVAL; - src = &id12->grp; + src = &id->grp; - nvm_id->mtype = src->mtype; - nvm_id->fmtype = src->fmtype; + if (src->mtype != 0) { + pr_err("nvm: memory type not supported\n"); + return -EINVAL; + } + + geo->ver_id = id->ver_id; + + geo->nr_chnls = src->num_ch; + 
geo->nr_luns = src->num_lun; + geo->all_luns = geo->nr_chnls * geo->nr_luns; - nvm_id->num_ch = src->num_ch; - nvm_id->num_lun = src->num_lun; + geo->nr_chks = le16_to_cpu(src->num_chk); - nvm_id->num_chk = le16_to_cpu(src->num_chk); - nvm_id->csecs = le16_to_cpu(src->csecs); - nvm_id->sos = le16_to_cpu(src->sos); + geo->csecs = le16_to_cpu(src->csecs); + geo->sos = le16_to_cpu(src->sos); pg_per_blk = le16_to_cpu(src->num_pg); - sec_per_pg = le16_to_cpu(src->fpg_sz) / nvm_id->csecs; + sec_per_pg = le16_to_cpu(src->fpg_sz) / geo->csecs; sec_per_pl = sec_per_pg * src->num_pln; - nvm_id->clba = sec_per_pl * pg_per_blk; - nvm_id->ws_per_chk = pg_per_blk; - - nvm_id->mpos = le32_to_cpu(src->mpos); - nvm_id->cpar = le16_to_cpu(src->cpar); - nvm_id->mccap = le32_to_cpu(src->mccap); - - nvm_id->ws_opt = nvm_id->ws_min = sec_per_pg; - nvm_id->ws_seq = NVM_IO_SNGL_ACCESS; - - if (nvm_id->mpos & 0x020202) { - nvm_id->ws_seq = NVM_IO_DUAL_ACCESS; - nvm_id->ws_opt <<= 1; - } else if (nvm_id->mpos & 0x040404) { - nvm_id->ws_seq = NVM_IO_QUAD_ACCESS; - nvm_id->ws_opt <<= 2; + geo->clba = sec_per_pl * pg_per_blk; + + geo->all_chunks = geo->all_luns * geo->nr_chks; + geo->total_secs = geo->clba * geo->all_chunks; + + geo->ws_min = sec_per_pg; + geo->ws_opt = sec_per_pg; + geo->mw_cunits = geo->ws_opt << 3; /* default to MLC safe values */ + + geo->mccap = le32_to_cpu(src->mccap); + + geo->trdt = le32_to_cpu(src->trdt); + geo->trdm = le32_to_cpu(src->trdm); + geo->tprt = le32_to_cpu(src->tprt); + geo->tprm = le32_to_cpu(src->tprm); + geo->tbet = le32_to_cpu(src->tbet); + geo->tbem = le32_to_cpu(src->tbem); + + /* 1.2 compatibility */ + geo->vmnt = id->vmnt; + geo->cap = le32_to_cpu(id->cap); + geo->dom = le32_to_cpu(id->dom); + + geo->mtype = src->mtype; + geo->fmtype = src->fmtype; + + geo->cpar = le16_to_cpu(src->cpar); + geo->mpos = le32_to_cpu(src->mpos); + + geo->plane_mode = NVM_PLANE_SINGLE; + + if (geo->mpos & 0x020202) { + geo->plane_mode = NVM_PLANE_DOUBLE; + geo->ws_opt 
<<= 1; + } else if (geo->mpos & 0x040404) { + geo->plane_mode = NVM_PLANE_QUAD; + geo->ws_opt <<= 2; } - nvm_id->trdt = le32_to_cpu(src->trdt); - nvm_id->trdm = le32_to_cpu(src->trdm); - nvm_id->tprt = le32_to_cpu(src->tprt); - nvm_id->tprm = le32_to_cpu(src->tprm); - nvm_id->tbet = le32_to_cpu(src->tbet); - nvm_id->tbem = le32_to_cpu(src->tbem); - - /* 1.2 compatibility */ - nvm_id->num_pln = src->num_pln; - nvm_id->num_pg = le16_to_cpu(src->num_pg); - nvm_id->fpg_sz = le16_to_cpu(src->fpg_sz); + geo->num_pln = src->num_pln; + geo->num_pg = le16_to_cpu(src->num_pg); + geo->fpg_sz = le16_to_cpu(src->fpg_sz); + + nvme_nvm_set_addr_12((struct nvm_addr_format_12 *)&geo->addrf, + &id->ppaf); return 0; } -static int nvme_nvm_setup_12(struct nvm_dev *nvmdev, struct nvm_id *nvm_id, - struct nvme_nvm_id12 *id) +static void nvme_nvm_set_addr_20(struct nvm_addr_format *dst, + struct nvme_nvm_id20_addrf *src) { - nvm_id->ver_id = id->ver_id; - nvm_id->vmnt = id->vmnt; - nvm_id->cap = le32_to_cpu(id->cap); - nvm_id->dom = le32_to_cpu(id->dom); - memcpy(&nvm_id->ppaf, &id->ppaf, - sizeof(struct nvm_addr_format)); - - return init_grp(nvm_id, id); + dst->ch_len = src->grp_len; + dst->lun_len = src->pu_len; + dst->chk_len = src->chk_len; + dst->sec_len = src->lba_len; + + dst->sec_offset = 0; + dst->chk_offset = dst->sec_len; + dst->lun_offset = dst->chk_offset + dst->chk_len; + dst->ch_offset = dst->lun_offset + dst->lun_len; + + dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset; + dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset; + dst->chk_mask = ((1ULL << dst->chk_len) - 1) << dst->chk_offset; + dst->sec_mask = ((1ULL << dst->sec_len) - 1) << dst->sec_offset; } -static int nvme_nvm_setup_20(struct nvm_dev *nvmdev, struct nvm_id *nvm_id, - struct nvme_nvm_id20 *id) +static int nvme_nvm_setup_20(struct nvme_nvm_id20 *id, + struct nvm_geo *geo) { - nvm_id->ver_id = id->mjr; + geo->ver_id = id->mjr; + + geo->nr_chnls = le16_to_cpu(id->num_grp); + 
geo->nr_luns = le16_to_cpu(id->num_pu); + geo->all_luns = geo->nr_chnls * geo->nr_luns; - nvm_id->num_ch = le16_to_cpu(id->num_grp); - nvm_id->num_lun = le16_to_cpu(id->num_pu); - nvm_id->num_chk = le32_to_cpu(id->num_chk); - nvm_id->clba = le32_to_cpu(id->clba); + geo->nr_chks = le32_to_cpu(id->num_chk); + geo->clba = le32_to_cpu(id->clba); - nvm_id->ws_min = le32_to_cpu(id->ws_min); - nvm_id->ws_opt = le32_to_cpu(id->ws_opt); - nvm_id->mw_cunits = le32_to_cpu(id->mw_cunits); + geo->all_chunks = geo->all_luns * geo->nr_chks; + geo->total_secs = geo->clba * geo->all_chunks; - nvm_id->trdt = le32_to_cpu(id->trdt); - nvm_id->trdm = le32_to_cpu(id->trdm); - nvm_id->tprt = le32_to_cpu(id->twrt); - nvm_id->tprm = le32_to_cpu(id->twrm); - nvm_id->tbet = le32_to_cpu(id->tcrst); - nvm_id->tbem = le32_to_cpu(id->tcrsm); + geo->ws_min = le32_to_cpu(id->ws_min); + geo->ws_opt = le32_to_cpu(id->ws_opt); + geo->mw_cunits = le32_to_cpu(id->mw_cunits); - /* calculated values */ - nvm_id->ws_per_chk = nvm_id->clba / nvm_id->ws_min; + geo->trdt = le32_to_cpu(id->trdt); + geo->trdm = le32_to_cpu(id->trdm); + geo->tprt = le32_to_cpu(id->twrt); + geo->tprm = le32_to_cpu(id->twrm); + geo->tbet = le32_to_cpu(id->tcrst); + geo->tbem = le32_to_cpu(id->tcrsm); - /* 1.2 compatibility */ - nvm_id->ws_seq = NVM_IO_SNGL_ACCESS; + nvme_nvm_set_addr_20(&geo->addrf, &id->lbaf); return 0; } -static int nvme_nvm_identity(struct nvm_dev *nvmdev, struct nvm_id *nvm_id) +static int nvme_nvm_identity(struct nvm_dev *nvmdev) { struct nvme_ns *ns = nvmdev->q->queuedata; struct nvme_nvm_id12 *id; @@ -380,18 +435,18 @@ static int nvme_nvm_identity(struct nvm_dev *nvmdev, struct nvm_id *nvm_id) */ switch (id->ver_id) { case 1: - ret = nvme_nvm_setup_12(nvmdev, nvm_id, id); + ret = nvme_nvm_setup_12(id, &nvmdev->geo); break; case 2: - ret = nvme_nvm_setup_20(nvmdev, nvm_id, - (struct nvme_nvm_id20 *)id); + ret = nvme_nvm_setup_20((struct nvme_nvm_id20 *)id, + &nvmdev->geo); break; default: - 
dev_err(ns->ctrl->device, - "OCSSD revision not supported (%d)\n", - nvm_id->ver_id); + dev_err(ns->ctrl->device, "OCSSD revision not supported (%d)\n", + id->ver_id); ret = -EINVAL; } + out: kfree(id); return ret; @@ -406,7 +461,7 @@ static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa, struct nvme_ctrl *ctrl = ns->ctrl; struct nvme_nvm_command c = {}; struct nvme_nvm_bb_tbl *bb_tbl; - int nr_blks = geo->nr_chks * geo->plane_mode; + int nr_blks = geo->nr_chks * geo->num_pln; int tblsz = sizeof(struct nvme_nvm_bb_tbl) + nr_blks; int ret = 0; @@ -447,7 +502,7 @@ static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa, goto out; } - memcpy(blks, bb_tbl->blk, geo->nr_chks * geo->plane_mode); + memcpy(blks, bb_tbl->blk, geo->nr_chks * geo->num_pln); out: kfree(bb_tbl); return ret; @@ -815,9 +870,10 @@ int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg) void nvme_nvm_update_nvm_info(struct nvme_ns *ns) { struct nvm_dev *ndev = ns->ndev; + struct nvm_geo *geo = &ndev->geo; - ndev->identity.csecs = ndev->geo.sec_size = 1 << ns->lba_shift; - ndev->identity.sos = ndev->geo.oob_size = ns->ms; + geo->csecs = 1 << ns->lba_shift; + geo->sos = ns->ms; } int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node) @@ -850,23 +906,22 @@ static ssize_t nvm_dev_attr_show(struct device *dev, { struct nvme_ns *ns = nvme_get_ns_from_dev(dev); struct nvm_dev *ndev = ns->ndev; - struct nvm_id *id; + struct nvm_geo *geo = &ndev->geo; struct attribute *attr; if (!ndev) return 0; - id = &ndev->identity; attr = &dattr->attr; if (strcmp(attr->name, "version") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->ver_id); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->ver_id); } else if (strcmp(attr->name, "capabilities") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->cap); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->cap); } else if (strcmp(attr->name, "read_typ") == 0) { - return scnprintf(page, 
PAGE_SIZE, "%u\n", id->trdt); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->trdt); } else if (strcmp(attr->name, "read_max") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->trdm); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->trdm); } else { return scnprintf(page, PAGE_SIZE, @@ -875,75 +930,79 @@ static ssize_t nvm_dev_attr_show(struct device *dev, } } +static ssize_t nvm_dev_attr_show_ppaf(struct nvm_addr_format_12 *ppaf, + char *page) +{ + return scnprintf(page, PAGE_SIZE, + "0x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x\n", + ppaf->ch_offset, ppaf->ch_len, + ppaf->lun_offset, ppaf->lun_len, + ppaf->pln_offset, ppaf->pln_len, + ppaf->blk_offset, ppaf->blk_len, + ppaf->pg_offset, ppaf->pg_len, + ppaf->sect_offset, ppaf->sect_len); +} + static ssize_t nvm_dev_attr_show_12(struct device *dev, struct device_attribute *dattr, char *page) { struct nvme_ns *ns = nvme_get_ns_from_dev(dev); struct nvm_dev *ndev = ns->ndev; - struct nvm_id *id; + struct nvm_geo *geo = &ndev->geo; struct attribute *attr; if (!ndev) return 0; - id = &ndev->identity; attr = &dattr->attr; if (strcmp(attr->name, "vendor_opcode") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->vmnt); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->vmnt); } else if (strcmp(attr->name, "device_mode") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->dom); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->dom); /* kept for compatibility */ } else if (strcmp(attr->name, "media_manager") == 0) { return scnprintf(page, PAGE_SIZE, "%s\n", "gennvm"); } else if (strcmp(attr->name, "ppa_format") == 0) { - return scnprintf(page, PAGE_SIZE, - "0x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x\n", - id->ppaf.ch_offset, id->ppaf.ch_len, - id->ppaf.lun_offset, id->ppaf.lun_len, - id->ppaf.pln_offset, id->ppaf.pln_len, - id->ppaf.blk_offset, id->ppaf.blk_len, - id->ppaf.pg_offset, id->ppaf.pg_len, - id->ppaf.sect_offset, id->ppaf.sect_len); + return nvm_dev_attr_show_ppaf((void *)&geo->addrf, 
page); } else if (strcmp(attr->name, "media_type") == 0) { /* u8 */ - return scnprintf(page, PAGE_SIZE, "%u\n", id->mtype); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->mtype); } else if (strcmp(attr->name, "flash_media_type") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->fmtype); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->fmtype); } else if (strcmp(attr->name, "num_channels") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_ch); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chnls); } else if (strcmp(attr->name, "num_luns") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_lun); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_luns); } else if (strcmp(attr->name, "num_planes") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_pln); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->num_pln); } else if (strcmp(attr->name, "num_blocks") == 0) { /* u16 */ - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_chk); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chks); } else if (strcmp(attr->name, "num_pages") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_pg); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->num_pg); } else if (strcmp(attr->name, "page_size") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->fpg_sz); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->fpg_sz); } else if (strcmp(attr->name, "hw_sector_size") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->csecs); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->csecs); } else if (strcmp(attr->name, "oob_sector_size") == 0) {/* u32 */ - return scnprintf(page, PAGE_SIZE, "%u\n", id->sos); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->sos); } else if (strcmp(attr->name, "prog_typ") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprt); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tprt); } else if (strcmp(attr->name, "prog_max") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprm); + return 
scnprintf(page, PAGE_SIZE, "%u\n", geo->tprm); } else if (strcmp(attr->name, "erase_typ") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbet); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tbet); } else if (strcmp(attr->name, "erase_max") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbem); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tbem); } else if (strcmp(attr->name, "multiplane_modes") == 0) { - return scnprintf(page, PAGE_SIZE, "0x%08x\n", id->mpos); + return scnprintf(page, PAGE_SIZE, "0x%08x\n", geo->mpos); } else if (strcmp(attr->name, "media_capabilities") == 0) { - return scnprintf(page, PAGE_SIZE, "0x%08x\n", id->mccap); + return scnprintf(page, PAGE_SIZE, "0x%08x\n", geo->mccap); } else if (strcmp(attr->name, "max_phys_secs") == 0) { return scnprintf(page, PAGE_SIZE, "%u\n", NVM_MAX_VLBA); } else { - return scnprintf(page, - PAGE_SIZE, - "Unhandled attr(%s) in `nvm_dev_attr_show_12`\n", - attr->name); + return scnprintf(page, PAGE_SIZE, + "Unhandled attr(%s) in `nvm_dev_attr_show_12`\n", + attr->name); } } @@ -952,42 +1011,40 @@ static ssize_t nvm_dev_attr_show_20(struct device *dev, { struct nvme_ns *ns = nvme_get_ns_from_dev(dev); struct nvm_dev *ndev = ns->ndev; - struct nvm_id *id; + struct nvm_geo *geo = &ndev->geo; struct attribute *attr; if (!ndev) return 0; - id = &ndev->identity; attr = &dattr->attr; if (strcmp(attr->name, "groups") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_ch); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chnls); } else if (strcmp(attr->name, "punits") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_lun); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_luns); } else if (strcmp(attr->name, "chunks") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_chk); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chks); } else if (strcmp(attr->name, "clba") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->clba); + return scnprintf(page, PAGE_SIZE, 
"%u\n", geo->clba); } else if (strcmp(attr->name, "ws_min") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->ws_min); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->ws_min); } else if (strcmp(attr->name, "ws_opt") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->ws_opt); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->ws_opt); } else if (strcmp(attr->name, "mw_cunits") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->mw_cunits); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->mw_cunits); } else if (strcmp(attr->name, "write_typ") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprt); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tprt); } else if (strcmp(attr->name, "write_max") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprm); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tprm); } else if (strcmp(attr->name, "reset_typ") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbet); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tbet); } else if (strcmp(attr->name, "reset_max") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbem); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tbem); } else { - return scnprintf(page, - PAGE_SIZE, - "Unhandled attr(%s) in `nvm_dev_attr_show_20`\n", - attr->name); + return scnprintf(page, PAGE_SIZE, + "Unhandled attr(%s) in `nvm_dev_attr_show_20`\n", + attr->name); } } @@ -1106,10 +1163,13 @@ static const struct attribute_group nvm_dev_attr_group_20 = { int nvme_nvm_register_sysfs(struct nvme_ns *ns) { - if (!ns->ndev) + struct nvm_dev *ndev = ns->ndev; + struct nvm_geo *geo = &ndev->geo; + + if (!ndev) return -EINVAL; - switch (ns->ndev->identity.ver_id) { + switch (geo->ver_id) { case 1: return sysfs_create_group(&disk_to_dev(ns->disk)->kobj, &nvm_dev_attr_group_12); @@ -1123,7 +1183,10 @@ int nvme_nvm_register_sysfs(struct nvme_ns *ns) void nvme_nvm_unregister_sysfs(struct nvme_ns *ns) { - switch (ns->ndev->identity.ver_id) { + struct nvm_dev *ndev = ns->ndev; + 
struct nvm_geo *geo = &ndev->geo; + + switch (geo->ver_id) { case 1: sysfs_remove_group(&disk_to_dev(ns->disk)->kobj, &nvm_dev_attr_group_12); diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h index e55b10573c99..16255fcd5250 100644 --- a/include/linux/lightnvm.h +++ b/include/linux/lightnvm.h @@ -50,7 +50,7 @@ struct nvm_id; struct nvm_dev; struct nvm_tgt_dev; -typedef int (nvm_id_fn)(struct nvm_dev *, struct nvm_id *); +typedef int (nvm_id_fn)(struct nvm_dev *); typedef int (nvm_op_bb_tbl_fn)(struct nvm_dev *, struct ppa_addr, u8 *); typedef int (nvm_op_set_bb_fn)(struct nvm_dev *, struct ppa_addr *, int, int); typedef int (nvm_submit_io_fn)(struct nvm_dev *, struct nvm_rq *); @@ -152,62 +152,48 @@ struct nvm_id_lp_tbl { struct nvm_id_lp_mlc mlc; }; -struct nvm_addr_format { - u8 ch_offset; +struct nvm_addr_format_12 { u8 ch_len; - u8 lun_offset; u8 lun_len; - u8 pln_offset; + u8 blk_len; + u8 pg_len; u8 pln_len; + u8 sect_len; + + u8 ch_offset; + u8 lun_offset; u8 blk_offset; - u8 blk_len; u8 pg_offset; - u8 pg_len; + u8 pln_offset; u8 sect_offset; - u8 sect_len; -}; - -struct nvm_id { - u8 ver_id; - u8 vmnt; - u32 cap; - u32 dom; - - struct nvm_addr_format ppaf; - - u8 num_ch; - u8 num_lun; - u16 num_chk; - u16 clba; - u16 csecs; - u16 sos; - - u32 ws_min; - u32 ws_opt; - u32 mw_cunits; - u32 trdt; - u32 trdm; - u32 tprt; - u32 tprm; - u32 tbet; - u32 tbem; - u32 mpos; - u32 mccap; - u16 cpar; - - /* calculated values */ - u16 ws_seq; - u16 ws_per_chk; - - /* 1.2 compatibility */ - u8 mtype; - u8 fmtype; + u64 ch_mask; + u64 lun_mask; + u64 blk_mask; + u64 pg_mask; + u64 pln_mask; + u64 sec_mask; +}; - u8 num_pln; - u16 num_pg; - u16 fpg_sz; -} __packed; +struct nvm_addr_format { + u8 ch_len; + u8 lun_len; + u8 chk_len; + u8 sec_len; + u8 rsv_len[2]; + + u8 ch_offset; + u8 lun_offset; + u8 chk_offset; + u8 sec_offset; + u8 rsv_off[2]; + + u64 ch_mask; + u64 lun_mask; + u64 chk_mask; + u64 sec_mask; + u64 rsv_mask[2]; +}; struct nvm_target { 
struct list_head list; @@ -274,36 +260,63 @@ enum { NVM_BLK_ST_BAD = 0x8, /* Bad block */ }; - -/* Device generic information */ +/* Instance geometry */ struct nvm_geo { - /* generic geometry */ + /* device reported version */ + u8 ver_id; + + /* instance specific geometry */ int nr_chnls; - int all_luns; /* across channels */ - int nr_luns; /* per channel */ - int nr_chks; /* per lun */ + int nr_luns; /* per channel */ - int sec_size; - int oob_size; - int mccap; + /* calculated values */ + int all_luns; /* across channels */ + int all_chunks; /* across channels */ - int sec_per_chk; - int sec_per_lun; + int op; /* over-provision in instance */ - int ws_min; - int ws_opt; - int ws_seq; - int ws_per_chk; + sector_t total_secs; /* across channels */ - int op; + /* chunk geometry */ + u32 nr_chks; /* chunks per lun */ + u32 clba; /* sectors per chunk */ + u16 csecs; /* sector size */ + u16 sos; /* out-of-band area size */ - struct nvm_addr_format ppaf; + /* device write constrains */ + u32 ws_min; /* minimum write size */ + u32 ws_opt; /* optimal write size */ + u32 mw_cunits; /* distance required for successful read */ - /* Legacy 1.2 specific geometry */ - int plane_mode; /* drive device in single, double or quad mode */ - int nr_planes; - int sec_per_pg; /* only sectors for a single page */ - int sec_per_pl; /* all sectors across planes */ + /* device capabilities */ + u32 mccap; + + /* device timings */ + u32 trdt; /* Avg. Tread (ns) */ + u32 trdm; /* Max Tread (ns) */ + u32 tprt; /* Avg. Tprog (ns) */ + u32 tprm; /* Max Tprog (ns) */ + u32 tbet; /* Avg. 
Terase (ns) */ + u32 tbem; /* Max Terase (ns) */ + + /* generic address format */ + struct nvm_addr_format addrf; + + /* 1.2 compatibility */ + u8 vmnt; + u32 cap; + u32 dom; + + u8 mtype; + u8 fmtype; + + u16 cpar; + u32 mpos; + + u8 num_pln; + u8 plane_mode; + u16 num_pg; + u16 fpg_sz; }; /* sub-device structure */ @@ -314,9 +327,6 @@ struct nvm_tgt_dev { /* Base ppas for target LUNs */ struct ppa_addr *luns; - sector_t total_secs; - - struct nvm_id identity; struct request_queue *q; struct nvm_dev *parent; @@ -331,13 +341,9 @@ struct nvm_dev { /* Device information */ struct nvm_geo geo; - unsigned long total_secs; - unsigned long *lun_map; void *dma_pool; - struct nvm_id identity; - /* Backend device */ struct request_queue *q; char name[DISK_NAME_LEN]; @@ -357,14 +363,16 @@ static inline struct ppa_addr generic_to_dev_addr(struct nvm_tgt_dev *tgt_dev, struct ppa_addr r) { struct nvm_geo *geo = &tgt_dev->geo; + struct nvm_addr_format_12 *ppaf = + (struct nvm_addr_format_12 *)&geo->addrf; struct ppa_addr l; - l.ppa = ((u64)r.g.blk) << geo->ppaf.blk_offset; - l.ppa |= ((u64)r.g.pg) << geo->ppaf.pg_offset; - l.ppa |= ((u64)r.g.sec) << geo->ppaf.sect_offset; - l.ppa |= ((u64)r.g.pl) << geo->ppaf.pln_offset; - l.ppa |= ((u64)r.g.lun) << geo->ppaf.lun_offset; - l.ppa |= ((u64)r.g.ch) << geo->ppaf.ch_offset; + l.ppa = ((u64)r.g.ch) << ppaf->ch_offset; + l.ppa |= ((u64)r.g.lun) << ppaf->lun_offset; + l.ppa |= ((u64)r.g.blk) << ppaf->blk_offset; + l.ppa |= ((u64)r.g.pg) << ppaf->pg_offset; + l.ppa |= ((u64)r.g.pl) << ppaf->pln_offset; + l.ppa |= ((u64)r.g.sec) << ppaf->sect_offset; return l; } @@ -373,24 +381,18 @@ static inline struct ppa_addr dev_to_generic_addr(struct nvm_tgt_dev *tgt_dev, struct ppa_addr r) { struct nvm_geo *geo = &tgt_dev->geo; + struct nvm_addr_format_12 *ppaf = + (struct nvm_addr_format_12 *)&geo->addrf; struct ppa_addr l; l.ppa = 0; - /* - * (r.ppa << X offset) & X len bitmask. X eq. blk, pg, etc. 
- */ - l.g.blk = (r.ppa >> geo->ppaf.blk_offset) & - (((1 << geo->ppaf.blk_len) - 1)); - l.g.pg |= (r.ppa >> geo->ppaf.pg_offset) & - (((1 << geo->ppaf.pg_len) - 1)); - l.g.sec |= (r.ppa >> geo->ppaf.sect_offset) & - (((1 << geo->ppaf.sect_len) - 1)); - l.g.pl |= (r.ppa >> geo->ppaf.pln_offset) & - (((1 << geo->ppaf.pln_len) - 1)); - l.g.lun |= (r.ppa >> geo->ppaf.lun_offset) & - (((1 << geo->ppaf.lun_len) - 1)); - l.g.ch |= (r.ppa >> geo->ppaf.ch_offset) & - (((1 << geo->ppaf.ch_len) - 1)); + + l.g.ch = (r.ppa & ppaf->ch_mask) >> ppaf->ch_offset; + l.g.lun = (r.ppa & ppaf->lun_mask) >> ppaf->lun_offset; + l.g.blk = (r.ppa & ppaf->blk_mask) >> ppaf->blk_offset; + l.g.pg = (r.ppa & ppaf->pg_mask) >> ppaf->pg_offset; + l.g.pl = (r.ppa & ppaf->pln_mask) >> ppaf->pln_offset; + l.g.sec = (r.ppa & ppaf->sec_mask) >> ppaf->sect_offset; return l; } -- 2.7.4 ^ permalink raw reply related [flat|nested] 71+ messages in thread
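[Editor's note: the `generic_to_dev_addr()`/`dev_to_generic_addr()` pair above reduces to a shift-and-mask round trip over the per-field offsets, lengths, and precomputed masks of `nvm_addr_format_12`. The stand-alone sketch below exercises that same arithmetic in user space; the field widths (blk 10 bits, lun 4 bits, ch 4 bits) are invented for illustration only and are not taken from any device or from the patch.]

```c
#include <stdint.h>

/* Illustrative 1.2-style layout (widths are made up for this sketch):
 * blk in bits [0,9], lun in bits [10,13], ch in bits [14,17]. */
enum {
	BLK_OFF = 0,  BLK_LEN = 10,
	LUN_OFF = 10, LUN_LEN = 4,
	CH_OFF  = 14, CH_LEN  = 4,
};

/* Mask precomputed from length/offset, as nvme_nvm_set_addr_12() does. */
static inline uint64_t fmask(unsigned int len, unsigned int off)
{
	return ((1ULL << len) - 1) << off;
}

/* Pack: shift each field into place, as in generic_to_dev_addr(). */
static inline uint64_t ppa_pack(uint64_t ch, uint64_t lun, uint64_t blk)
{
	return (ch << CH_OFF) | (lun << LUN_OFF) | (blk << BLK_OFF);
}

/* Unpack one field: mask first, then shift down, as in
 * dev_to_generic_addr() after this patch. */
static inline uint64_t ppa_field(uint64_t ppa, unsigned int len,
				 unsigned int off)
{
	return (ppa & fmask(len, off)) >> off;
}
```

Masking before shifting (rather than shifting first and masking with `(1 << len) - 1`, as the old code did) lets the masks be computed once at identify time, which is what the `*_mask` fields added to the address format are for.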
* Re: [PATCH 01/15] lightnvm: simplify geometry structure. 2018-02-28 15:49 ` Javier González @ 2018-03-01 10:22 ` Matias Bjørling -1 siblings, 0 replies; 71+ messages in thread From: Matias Bjørling @ 2018-03-01 10:22 UTC (permalink / raw) To: Javier González Cc: linux-block, linux-kernel, linux-nvme, Javier González On 02/28/2018 04:49 PM, Javier González wrote: > Currently, the device geometry is stored redundantly in the nvm_id and > nvm_geo structures at a device level. Moreover, when instantiating > targets on a specific number of LUNs, these structures are replicated > and manually modified to fit the instance channel and LUN partitioning. > > Instead, create a generic geometry around nvm_geo, which can be used by > (i) the underlying device to describe the geometry of the whole device, > and (ii) instances to describe their geometry independently. > > Signed-off-by: Javier González <javier@cnexlabs.com> > --- > drivers/lightnvm/core.c | 70 +++----- > drivers/lightnvm/pblk-core.c | 16 +- > drivers/lightnvm/pblk-gc.c | 2 +- > drivers/lightnvm/pblk-init.c | 119 +++++++------- > drivers/lightnvm/pblk-read.c | 2 +- > drivers/lightnvm/pblk-recovery.c | 14 +- > drivers/lightnvm/pblk-rl.c | 2 +- > drivers/lightnvm/pblk-sysfs.c | 39 +++-- > drivers/lightnvm/pblk-write.c | 2 +- > drivers/lightnvm/pblk.h | 87 +++++----- > drivers/nvme/host/lightnvm.c | 341 +++++++++++++++++++++++---------------- > include/linux/lightnvm.h | 200 +++++++++++------------ > 12 files changed, 465 insertions(+), 429 deletions(-) > > diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c > index 19c46ebb1b91..9a417d9cdf0c 100644 > --- a/drivers/lightnvm/core.c > +++ b/drivers/lightnvm/core.c > @@ -155,7 +155,7 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev, > int blun = lun_begin % dev->geo.nr_luns; > int lunid = 0; > int lun_balanced = 1; > - int prev_nr_luns; > + int sec_per_lun, prev_nr_luns; > int i, j; > > nr_chnls = (nr_chnls_mod == 0) ? 
nr_chnls : nr_chnls + 1; > @@ -215,18 +215,23 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev, > if (!tgt_dev) > goto err_ch; > > + /* Inherit device geometry from parent */ > memcpy(&tgt_dev->geo, &dev->geo, sizeof(struct nvm_geo)); > + > /* Target device only owns a portion of the physical device */ > tgt_dev->geo.nr_chnls = nr_chnls; > - tgt_dev->geo.all_luns = nr_luns; > tgt_dev->geo.nr_luns = (lun_balanced) ? prev_nr_luns : -1; > + tgt_dev->geo.all_luns = nr_luns; > + tgt_dev->geo.all_chunks = nr_luns * dev->geo.nr_chks; > + > tgt_dev->geo.op = op; > - tgt_dev->total_secs = nr_luns * tgt_dev->geo.sec_per_lun; > + > + sec_per_lun = dev->geo.clba * dev->geo.nr_chks; > + tgt_dev->geo.total_secs = nr_luns * sec_per_lun; > + > tgt_dev->q = dev->q; > tgt_dev->map = dev_map; > tgt_dev->luns = luns; > - memcpy(&tgt_dev->identity, &dev->identity, sizeof(struct nvm_id)); > - > tgt_dev->parent = dev; > > return tgt_dev; > @@ -296,8 +301,6 @@ static int __nvm_config_simple(struct nvm_dev *dev, > static int __nvm_config_extended(struct nvm_dev *dev, > struct nvm_ioctl_create_extended *e) > { > - struct nvm_geo *geo = &dev->geo; > - > if (e->lun_begin == 0xFFFF && e->lun_end == 0xFFFF) { > e->lun_begin = 0; > e->lun_end = dev->geo.all_luns - 1; > @@ -311,7 +314,7 @@ static int __nvm_config_extended(struct nvm_dev *dev, > return -EINVAL; > } > > - return nvm_config_check_luns(geo, e->lun_begin, e->lun_end); > + return nvm_config_check_luns(&dev->geo, e->lun_begin, e->lun_end); > } > > static int nvm_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create) > @@ -406,7 +409,7 @@ static int nvm_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create) > tqueue->queuedata = targetdata; > > blk_queue_max_hw_sectors(tqueue, > - (dev->geo.sec_size >> 9) * NVM_MAX_VLBA); > + (dev->geo.csecs >> 9) * NVM_MAX_VLBA); > > set_capacity(tdisk, tt->capacity(targetdata)); > add_disk(tdisk); > @@ -841,40 +844,9 @@ EXPORT_SYMBOL(nvm_get_tgt_bb_tbl); > > 
static int nvm_core_init(struct nvm_dev *dev) > { > - struct nvm_id *id = &dev->identity; > struct nvm_geo *geo = &dev->geo; > int ret; > > - memcpy(&geo->ppaf, &id->ppaf, sizeof(struct nvm_addr_format)); > - > - if (id->mtype != 0) { > - pr_err("nvm: memory type not supported\n"); > - return -EINVAL; > - } > - > - /* Whole device values */ > - geo->nr_chnls = id->num_ch; > - geo->nr_luns = id->num_lun; > - > - /* Generic device geometry values */ > - geo->ws_min = id->ws_min; > - geo->ws_opt = id->ws_opt; > - geo->ws_seq = id->ws_seq; > - geo->ws_per_chk = id->ws_per_chk; > - geo->nr_chks = id->num_chk; > - geo->mccap = id->mccap; > - > - geo->sec_per_chk = id->clba; > - geo->sec_per_lun = geo->sec_per_chk * geo->nr_chks; > - geo->all_luns = geo->nr_luns * geo->nr_chnls; > - > - /* 1.2 spec device geometry values */ > - geo->plane_mode = 1 << geo->ws_seq; > - geo->nr_planes = geo->ws_opt / geo->ws_min; > - geo->sec_per_pg = geo->ws_min; > - geo->sec_per_pl = geo->sec_per_pg * geo->nr_planes; > - > - dev->total_secs = geo->all_luns * geo->sec_per_lun; > dev->lun_map = kcalloc(BITS_TO_LONGS(geo->all_luns), > sizeof(unsigned long), GFP_KERNEL); > if (!dev->lun_map) > @@ -913,16 +885,14 @@ static int nvm_init(struct nvm_dev *dev) > struct nvm_geo *geo = &dev->geo; > int ret = -EINVAL; > > - if (dev->ops->identity(dev, &dev->identity)) { > + if (dev->ops->identity(dev)) { > pr_err("nvm: device could not be identified\n"); > goto err; > } > > - if (dev->identity.ver_id != 1 && dev->identity.ver_id != 2) { > - pr_err("nvm: device ver_id %d not supported by kernel.\n", > - dev->identity.ver_id); > - goto err; > - } > + pr_debug("nvm: ver:%u nvm_vendor:%x\n", > + geo->ver_id, > + geo->vmnt); > > ret = nvm_core_init(dev); > if (ret) { > @@ -930,10 +900,10 @@ static int nvm_init(struct nvm_dev *dev) > goto err; > } > > - pr_info("nvm: registered %s [%u/%u/%u/%u/%u/%u]\n", > - dev->name, geo->sec_per_pg, geo->nr_planes, > - geo->ws_per_chk, geo->nr_chks, > - geo->all_luns, 
geo->nr_chnls); > + pr_info("nvm: registered %s [%u/%u/%u/%u/%u]\n", > + dev->name, geo->ws_min, geo->ws_opt, > + geo->nr_chks, geo->all_luns, > + geo->nr_chnls); > return 0; > err: > pr_err("nvm: failed to initialize nvm\n"); > diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c > index 8848443a0721..169589ddd457 100644 > --- a/drivers/lightnvm/pblk-core.c > +++ b/drivers/lightnvm/pblk-core.c > @@ -613,7 +613,7 @@ static int pblk_line_submit_emeta_io(struct pblk *pblk, struct pblk_line *line, > memset(&rqd, 0, sizeof(struct nvm_rq)); > > rq_ppas = pblk_calc_secs(pblk, left_ppas, 0); > - rq_len = rq_ppas * geo->sec_size; > + rq_len = rq_ppas * geo->csecs; > > bio = pblk_bio_map_addr(pblk, emeta_buf, rq_ppas, rq_len, > l_mg->emeta_alloc_type, GFP_KERNEL); > @@ -722,7 +722,7 @@ u64 pblk_line_smeta_start(struct pblk *pblk, struct pblk_line *line) > if (bit >= lm->blk_per_line) > return -1; > > - return bit * geo->sec_per_pl; > + return bit * geo->ws_opt; > } > > static int pblk_line_submit_smeta_io(struct pblk *pblk, struct pblk_line *line, > @@ -1035,19 +1035,19 @@ static int pblk_line_init_bb(struct pblk *pblk, struct pblk_line *line, > /* Capture bad block information on line mapping bitmaps */ > while ((bit = find_next_bit(line->blk_bitmap, lm->blk_per_line, > bit + 1)) < lm->blk_per_line) { > - off = bit * geo->sec_per_pl; > + off = bit * geo->ws_opt; > bitmap_shift_left(l_mg->bb_aux, l_mg->bb_template, off, > lm->sec_per_line); > bitmap_or(line->map_bitmap, line->map_bitmap, l_mg->bb_aux, > lm->sec_per_line); > - line->sec_in_line -= geo->sec_per_chk; > + line->sec_in_line -= geo->clba; > if (bit >= lm->emeta_bb) > nr_bb++; > } > > /* Mark smeta metadata sectors as bad sectors */ > bit = find_first_zero_bit(line->blk_bitmap, lm->blk_per_line); > - off = bit * geo->sec_per_pl; > + off = bit * geo->ws_opt; > bitmap_set(line->map_bitmap, off, lm->smeta_sec); > line->sec_in_line -= lm->smeta_sec; > line->smeta_ssec = off; > @@ -1066,10 +1066,10 
@@ static int pblk_line_init_bb(struct pblk *pblk, struct pblk_line *line, > emeta_secs = lm->emeta_sec[0]; > off = lm->sec_per_line; > while (emeta_secs) { > - off -= geo->sec_per_pl; > + off -= geo->ws_opt; > if (!test_bit(off, line->invalid_bitmap)) { > - bitmap_set(line->invalid_bitmap, off, geo->sec_per_pl); > - emeta_secs -= geo->sec_per_pl; > + bitmap_set(line->invalid_bitmap, off, geo->ws_opt); > + emeta_secs -= geo->ws_opt; > } > } > > diff --git a/drivers/lightnvm/pblk-gc.c b/drivers/lightnvm/pblk-gc.c > index 320f99af99e9..6851a5c67189 100644 > --- a/drivers/lightnvm/pblk-gc.c > +++ b/drivers/lightnvm/pblk-gc.c > @@ -88,7 +88,7 @@ static void pblk_gc_line_ws(struct work_struct *work) > > up(&gc->gc_sem); > > - gc_rq->data = vmalloc(gc_rq->nr_secs * geo->sec_size); > + gc_rq->data = vmalloc(gc_rq->nr_secs * geo->csecs); > if (!gc_rq->data) { > pr_err("pblk: could not GC line:%d (%d/%d)\n", > line->id, *line->vsc, gc_rq->nr_secs); > diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c > index 25fc70ca07f7..9b5ee05c3028 100644 > --- a/drivers/lightnvm/pblk-init.c > +++ b/drivers/lightnvm/pblk-init.c > @@ -146,7 +146,7 @@ static int pblk_rwb_init(struct pblk *pblk) > return -ENOMEM; > > power_size = get_count_order(nr_entries); > - power_seg_sz = get_count_order(geo->sec_size); > + power_seg_sz = get_count_order(geo->csecs); > > return pblk_rb_init(&pblk->rwb, entries, power_size, power_seg_sz); > } > @@ -154,11 +154,11 @@ static int pblk_rwb_init(struct pblk *pblk) > /* Minimum pages needed within a lun */ > #define ADDR_POOL_SIZE 64 > > -static int pblk_set_ppaf(struct pblk *pblk) > +static int pblk_set_addrf_12(struct nvm_geo *geo, > + struct nvm_addr_format_12 *dst) > { > - struct nvm_tgt_dev *dev = pblk->dev; > - struct nvm_geo *geo = &dev->geo; > - struct nvm_addr_format ppaf = geo->ppaf; > + struct nvm_addr_format_12 *src = > + (struct nvm_addr_format_12 *)&geo->addrf; > int power_len; > > /* Re-calculate channel and lun format to 
adapt to configuration */ > @@ -167,34 +167,50 @@ static int pblk_set_ppaf(struct pblk *pblk) > pr_err("pblk: supports only power-of-two channel config.\n"); > return -EINVAL; > } > - ppaf.ch_len = power_len; > + dst->ch_len = power_len; > > power_len = get_count_order(geo->nr_luns); > if (1 << power_len != geo->nr_luns) { > pr_err("pblk: supports only power-of-two LUN config.\n"); > return -EINVAL; > } > - ppaf.lun_len = power_len; > + dst->lun_len = power_len; > > - pblk->ppaf.sec_offset = 0; > - pblk->ppaf.pln_offset = ppaf.sect_len; > - pblk->ppaf.ch_offset = pblk->ppaf.pln_offset + ppaf.pln_len; > - pblk->ppaf.lun_offset = pblk->ppaf.ch_offset + ppaf.ch_len; > - pblk->ppaf.pg_offset = pblk->ppaf.lun_offset + ppaf.lun_len; > - pblk->ppaf.blk_offset = pblk->ppaf.pg_offset + ppaf.pg_len; > - pblk->ppaf.sec_mask = (1ULL << ppaf.sect_len) - 1; > - pblk->ppaf.pln_mask = ((1ULL << ppaf.pln_len) - 1) << > - pblk->ppaf.pln_offset; > - pblk->ppaf.ch_mask = ((1ULL << ppaf.ch_len) - 1) << > - pblk->ppaf.ch_offset; > - pblk->ppaf.lun_mask = ((1ULL << ppaf.lun_len) - 1) << > - pblk->ppaf.lun_offset; > - pblk->ppaf.pg_mask = ((1ULL << ppaf.pg_len) - 1) << > - pblk->ppaf.pg_offset; > - pblk->ppaf.blk_mask = ((1ULL << ppaf.blk_len) - 1) << > - pblk->ppaf.blk_offset; > + dst->blk_len = src->blk_len; > + dst->pg_len = src->pg_len; > + dst->pln_len = src->pln_len; > + dst->sect_len = src->sect_len; > > - pblk->ppaf_bitsize = pblk->ppaf.blk_offset + ppaf.blk_len; > + dst->sect_offset = 0; > + dst->pln_offset = dst->sect_len; > + dst->ch_offset = dst->pln_offset + dst->pln_len; > + dst->lun_offset = dst->ch_offset + dst->ch_len; > + dst->pg_offset = dst->lun_offset + dst->lun_len; > + dst->blk_offset = dst->pg_offset + dst->pg_len; > + > + dst->sec_mask = ((1ULL << dst->sect_len) - 1) << dst->sect_offset; > + dst->pln_mask = ((1ULL << dst->pln_len) - 1) << dst->pln_offset; > + dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset; > + dst->lun_mask = ((1ULL << dst->lun_len) 
- 1) << dst->lun_offset; > + dst->pg_mask = ((1ULL << dst->pg_len) - 1) << dst->pg_offset; > + dst->blk_mask = ((1ULL << dst->blk_len) - 1) << dst->blk_offset; > + > + return dst->blk_offset + src->blk_len; > +} > + > +static int pblk_set_ppaf(struct pblk *pblk) > +{ > + struct nvm_tgt_dev *dev = pblk->dev; > + struct nvm_geo *geo = &dev->geo; > + int mod; > + > + div_u64_rem(geo->clba, pblk->min_write_pgs, &mod); > + if (mod) { > + pr_err("pblk: bad configuration of sectors/pages\n"); > + return -EINVAL; > + } > + > + pblk->ppaf_bitsize = pblk_set_addrf_12(geo, (void *)&pblk->ppaf); > > return 0; > } > @@ -253,8 +269,7 @@ static int pblk_core_init(struct pblk *pblk) > struct nvm_tgt_dev *dev = pblk->dev; > struct nvm_geo *geo = &dev->geo; > > - pblk->pgs_in_buffer = NVM_MEM_PAGE_WRITE * geo->sec_per_pg * > - geo->nr_planes * geo->all_luns; > + pblk->pgs_in_buffer = geo->mw_cunits * geo->all_luns; > > if (pblk_init_global_caches(pblk)) > return -ENOMEM; > @@ -552,18 +567,18 @@ static unsigned int calc_emeta_len(struct pblk *pblk) > /* Round to sector size so that lba_list starts on its own sector */ > lm->emeta_sec[1] = DIV_ROUND_UP( > sizeof(struct line_emeta) + lm->blk_bitmap_len + > - sizeof(struct wa_counters), geo->sec_size); > - lm->emeta_len[1] = lm->emeta_sec[1] * geo->sec_size; > + sizeof(struct wa_counters), geo->csecs); > + lm->emeta_len[1] = lm->emeta_sec[1] * geo->csecs; > > /* Round to sector size so that vsc_list starts on its own sector */ > lm->dsec_per_line = lm->sec_per_line - lm->emeta_sec[0]; > lm->emeta_sec[2] = DIV_ROUND_UP(lm->dsec_per_line * sizeof(u64), > - geo->sec_size); > - lm->emeta_len[2] = lm->emeta_sec[2] * geo->sec_size; > + geo->csecs); > + lm->emeta_len[2] = lm->emeta_sec[2] * geo->csecs; > > lm->emeta_sec[3] = DIV_ROUND_UP(l_mg->nr_lines * sizeof(u32), > - geo->sec_size); > - lm->emeta_len[3] = lm->emeta_sec[3] * geo->sec_size; > + geo->csecs); > + lm->emeta_len[3] = lm->emeta_sec[3] * geo->csecs; > > lm->vsc_list_len = 
l_mg->nr_lines * sizeof(u32); > > @@ -594,13 +609,13 @@ static void pblk_set_provision(struct pblk *pblk, long nr_free_blks) > * on user capacity consider only provisioned blocks > */ > pblk->rl.total_blocks = nr_free_blks; > - pblk->rl.nr_secs = nr_free_blks * geo->sec_per_chk; > + pblk->rl.nr_secs = nr_free_blks * geo->clba; > > /* Consider sectors used for metadata */ > sec_meta = (lm->smeta_sec + lm->emeta_sec[0]) * l_mg->nr_free_lines; > - blk_meta = DIV_ROUND_UP(sec_meta, geo->sec_per_chk); > + blk_meta = DIV_ROUND_UP(sec_meta, geo->clba); > > - pblk->capacity = (provisioned - blk_meta) * geo->sec_per_chk; > + pblk->capacity = (provisioned - blk_meta) * geo->clba; > > atomic_set(&pblk->rl.free_blocks, nr_free_blks); > atomic_set(&pblk->rl.free_user_blocks, nr_free_blks); > @@ -711,10 +726,10 @@ static int pblk_lines_init(struct pblk *pblk) > void *chunk_log; > unsigned int smeta_len, emeta_len; > long nr_bad_blks = 0, nr_free_blks = 0; > - int bb_distance, max_write_ppas, mod; > + int bb_distance, max_write_ppas; > int i, ret; > > - pblk->min_write_pgs = geo->sec_per_pl * (geo->sec_size / PAGE_SIZE); > + pblk->min_write_pgs = geo->ws_opt * (geo->csecs / PAGE_SIZE); > max_write_ppas = pblk->min_write_pgs * geo->all_luns; > pblk->max_write_pgs = min_t(int, max_write_ppas, NVM_MAX_VLBA); > pblk_set_sec_per_write(pblk, pblk->min_write_pgs); > @@ -725,19 +740,13 @@ static int pblk_lines_init(struct pblk *pblk) > return -EINVAL; > } > > - div_u64_rem(geo->sec_per_chk, pblk->min_write_pgs, &mod); > - if (mod) { > - pr_err("pblk: bad configuration of sectors/pages\n"); > - return -EINVAL; > - } > - > l_mg->nr_lines = geo->nr_chks; > l_mg->log_line = l_mg->data_line = NULL; > l_mg->l_seq_nr = l_mg->d_seq_nr = 0; > l_mg->nr_free_lines = 0; > bitmap_zero(&l_mg->meta_bitmap, PBLK_DATA_LINES); > > - lm->sec_per_line = geo->sec_per_chk * geo->all_luns; > + lm->sec_per_line = geo->clba * geo->all_luns; > lm->blk_per_line = geo->all_luns; > lm->blk_bitmap_len = 
BITS_TO_LONGS(geo->all_luns) * sizeof(long); > lm->sec_bitmap_len = BITS_TO_LONGS(lm->sec_per_line) * sizeof(long); > @@ -751,8 +760,8 @@ static int pblk_lines_init(struct pblk *pblk) > */ > i = 1; > add_smeta_page: > - lm->smeta_sec = i * geo->sec_per_pl; > - lm->smeta_len = lm->smeta_sec * geo->sec_size; > + lm->smeta_sec = i * geo->ws_opt; > + lm->smeta_len = lm->smeta_sec * geo->csecs; > > smeta_len = sizeof(struct line_smeta) + lm->lun_bitmap_len; > if (smeta_len > lm->smeta_len) { > @@ -765,8 +774,8 @@ static int pblk_lines_init(struct pblk *pblk) > */ > i = 1; > add_emeta_page: > - lm->emeta_sec[0] = i * geo->sec_per_pl; > - lm->emeta_len[0] = lm->emeta_sec[0] * geo->sec_size; > + lm->emeta_sec[0] = i * geo->ws_opt; > + lm->emeta_len[0] = lm->emeta_sec[0] * geo->csecs; > > emeta_len = calc_emeta_len(pblk); > if (emeta_len > lm->emeta_len[0]) { > @@ -779,7 +788,7 @@ static int pblk_lines_init(struct pblk *pblk) > lm->min_blk_line = 1; > if (geo->all_luns > 1) > lm->min_blk_line += DIV_ROUND_UP(lm->smeta_sec + > - lm->emeta_sec[0], geo->sec_per_chk); > + lm->emeta_sec[0], geo->clba); > > if (lm->min_blk_line > lm->blk_per_line) { > pr_err("pblk: config. not supported. Min. LUN in line:%d\n", > @@ -803,9 +812,9 @@ static int pblk_lines_init(struct pblk *pblk) > goto fail_free_bb_template; > } > > - bb_distance = (geo->all_luns) * geo->sec_per_pl; > + bb_distance = (geo->all_luns) * geo->ws_opt; > for (i = 0; i < lm->sec_per_line; i += bb_distance) > - bitmap_set(l_mg->bb_template, i, geo->sec_per_pl); > + bitmap_set(l_mg->bb_template, i, geo->ws_opt); > > INIT_LIST_HEAD(&l_mg->free_list); > INIT_LIST_HEAD(&l_mg->corrupt_list); > @@ -982,9 +991,9 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk, > struct pblk *pblk; > int ret; > > - if (dev->identity.dom & NVM_RSP_L2P) { > + if (dev->geo.dom & NVM_RSP_L2P) { > pr_err("pblk: host-side L2P table not supported. 
(%x)\n", > - dev->identity.dom); > + dev->geo.dom); > return ERR_PTR(-EINVAL); > } > > @@ -1092,7 +1101,7 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk, > > blk_queue_write_cache(tqueue, true, false); > > - tqueue->limits.discard_granularity = geo->sec_per_chk * geo->sec_size; > + tqueue->limits.discard_granularity = geo->clba * geo->csecs; > tqueue->limits.discard_alignment = 0; > blk_queue_max_discard_sectors(tqueue, UINT_MAX >> 9); > queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, tqueue); > diff --git a/drivers/lightnvm/pblk-read.c b/drivers/lightnvm/pblk-read.c > index 2f761283f43e..9eee10f69df0 100644 > --- a/drivers/lightnvm/pblk-read.c > +++ b/drivers/lightnvm/pblk-read.c > @@ -563,7 +563,7 @@ int pblk_submit_read_gc(struct pblk *pblk, struct pblk_gc_rq *gc_rq) > if (!(gc_rq->secs_to_gc)) > goto out; > > - data_len = (gc_rq->secs_to_gc) * geo->sec_size; > + data_len = (gc_rq->secs_to_gc) * geo->csecs; > bio = pblk_bio_map_addr(pblk, gc_rq->data, gc_rq->secs_to_gc, data_len, > PBLK_VMALLOC_META, GFP_KERNEL); > if (IS_ERR(bio)) { > diff --git a/drivers/lightnvm/pblk-recovery.c b/drivers/lightnvm/pblk-recovery.c > index aaab9a5c17cc..26356429dc72 100644 > --- a/drivers/lightnvm/pblk-recovery.c > +++ b/drivers/lightnvm/pblk-recovery.c > @@ -184,7 +184,7 @@ static int pblk_calc_sec_in_line(struct pblk *pblk, struct pblk_line *line) > int nr_bb = bitmap_weight(line->blk_bitmap, lm->blk_per_line); > > return lm->sec_per_line - lm->smeta_sec - lm->emeta_sec[0] - > - nr_bb * geo->sec_per_chk; > + nr_bb * geo->clba; > } > > struct pblk_recov_alloc { > @@ -232,7 +232,7 @@ static int pblk_recov_read_oob(struct pblk *pblk, struct pblk_line *line, > rq_ppas = pblk_calc_secs(pblk, left_ppas, 0); > if (!rq_ppas) > rq_ppas = pblk->min_write_pgs; > - rq_len = rq_ppas * geo->sec_size; > + rq_len = rq_ppas * geo->csecs; > > bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL); > if (IS_ERR(bio)) > @@ -351,7 +351,7 @@ static int pblk_recov_pad_oob(struct 
pblk *pblk, struct pblk_line *line, > if (!pad_rq) > return -ENOMEM; > > - data = vzalloc(pblk->max_write_pgs * geo->sec_size); > + data = vzalloc(pblk->max_write_pgs * geo->csecs); > if (!data) { > ret = -ENOMEM; > goto free_rq; > @@ -368,7 +368,7 @@ static int pblk_recov_pad_oob(struct pblk *pblk, struct pblk_line *line, > goto fail_free_pad; > } > > - rq_len = rq_ppas * geo->sec_size; > + rq_len = rq_ppas * geo->csecs; > > meta_list = nvm_dev_dma_alloc(dev->parent, GFP_KERNEL, &dma_meta_list); > if (!meta_list) { > @@ -509,7 +509,7 @@ static int pblk_recov_scan_all_oob(struct pblk *pblk, struct pblk_line *line, > rq_ppas = pblk_calc_secs(pblk, left_ppas, 0); > if (!rq_ppas) > rq_ppas = pblk->min_write_pgs; > - rq_len = rq_ppas * geo->sec_size; > + rq_len = rq_ppas * geo->csecs; > > bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL); > if (IS_ERR(bio)) > @@ -640,7 +640,7 @@ static int pblk_recov_scan_oob(struct pblk *pblk, struct pblk_line *line, > rq_ppas = pblk_calc_secs(pblk, left_ppas, 0); > if (!rq_ppas) > rq_ppas = pblk->min_write_pgs; > - rq_len = rq_ppas * geo->sec_size; > + rq_len = rq_ppas * geo->csecs; > > bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL); > if (IS_ERR(bio)) > @@ -745,7 +745,7 @@ static int pblk_recov_l2p_from_oob(struct pblk *pblk, struct pblk_line *line) > ppa_list = (void *)(meta_list) + pblk_dma_meta_size; > dma_ppa_list = dma_meta_list + pblk_dma_meta_size; > > - data = kcalloc(pblk->max_write_pgs, geo->sec_size, GFP_KERNEL); > + data = kcalloc(pblk->max_write_pgs, geo->csecs, GFP_KERNEL); > if (!data) { > ret = -ENOMEM; > goto free_meta_list; > diff --git a/drivers/lightnvm/pblk-rl.c b/drivers/lightnvm/pblk-rl.c > index 0d457b162f23..883a7113b19d 100644 > --- a/drivers/lightnvm/pblk-rl.c > +++ b/drivers/lightnvm/pblk-rl.c > @@ -200,7 +200,7 @@ void pblk_rl_init(struct pblk_rl *rl, int budget) > > /* Consider sectors used for metadata */ > sec_meta = (lm->smeta_sec + lm->emeta_sec[0]) * l_mg->nr_free_lines; > - blk_meta = 
DIV_ROUND_UP(sec_meta, geo->sec_per_chk); > + blk_meta = DIV_ROUND_UP(sec_meta, geo->clba); > > rl->high = pblk->op_blks - blk_meta - lm->blk_per_line; > rl->high_pw = get_count_order(rl->high); > diff --git a/drivers/lightnvm/pblk-sysfs.c b/drivers/lightnvm/pblk-sysfs.c > index 1680ce0a828d..33199c6af267 100644 > --- a/drivers/lightnvm/pblk-sysfs.c > +++ b/drivers/lightnvm/pblk-sysfs.c > @@ -113,26 +113,31 @@ static ssize_t pblk_sysfs_ppaf(struct pblk *pblk, char *page) > { > struct nvm_tgt_dev *dev = pblk->dev; > struct nvm_geo *geo = &dev->geo; > + struct nvm_addr_format_12 *ppaf; > + struct nvm_addr_format_12 *geo_ppaf; > ssize_t sz = 0; > > - sz = snprintf(page, PAGE_SIZE - sz, > - "g:(b:%d)blk:%d/%d,pg:%d/%d,lun:%d/%d,ch:%d/%d,pl:%d/%d,sec:%d/%d\n", > - pblk->ppaf_bitsize, > - pblk->ppaf.blk_offset, geo->ppaf.blk_len, > - pblk->ppaf.pg_offset, geo->ppaf.pg_len, > - pblk->ppaf.lun_offset, geo->ppaf.lun_len, > - pblk->ppaf.ch_offset, geo->ppaf.ch_len, > - pblk->ppaf.pln_offset, geo->ppaf.pln_len, > - pblk->ppaf.sec_offset, geo->ppaf.sect_len); > + ppaf = (struct nvm_addr_format_12 *)&pblk->ppaf; > + geo_ppaf = (struct nvm_addr_format_12 *)&geo->addrf; > + > + sz = snprintf(page, PAGE_SIZE, > + "pblk:(s:%d)ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n", > + pblk->ppaf_bitsize, > + ppaf->ch_offset, ppaf->ch_len, > + ppaf->lun_offset, ppaf->lun_len, > + ppaf->blk_offset, ppaf->blk_len, > + ppaf->pg_offset, ppaf->pg_len, > + ppaf->pln_offset, ppaf->pln_len, > + ppaf->sect_offset, ppaf->sect_len); Is it on purpose here that the code breaks user-space by changing the sysfs print format? 
> > sz += snprintf(page + sz, PAGE_SIZE - sz, > - "d:blk:%d/%d,pg:%d/%d,lun:%d/%d,ch:%d/%d,pl:%d/%d,sec:%d/%d\n", > - geo->ppaf.blk_offset, geo->ppaf.blk_len, > - geo->ppaf.pg_offset, geo->ppaf.pg_len, > - geo->ppaf.lun_offset, geo->ppaf.lun_len, > - geo->ppaf.ch_offset, geo->ppaf.ch_len, > - geo->ppaf.pln_offset, geo->ppaf.pln_len, > - geo->ppaf.sect_offset, geo->ppaf.sect_len); > + "device:ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n", > + geo_ppaf->ch_offset, geo_ppaf->ch_len, > + geo_ppaf->lun_offset, geo_ppaf->lun_len, > + geo_ppaf->blk_offset, geo_ppaf->blk_len, > + geo_ppaf->pg_offset, geo_ppaf->pg_len, > + geo_ppaf->pln_offset, geo_ppaf->pln_len, > + geo_ppaf->sect_offset, geo_ppaf->sect_len); Similarly here. > > return sz; > } > @@ -288,7 +293,7 @@ static ssize_t pblk_sysfs_lines_info(struct pblk *pblk, char *page) > "blk_line:%d, sec_line:%d, sec_blk:%d\n", > lm->blk_per_line, > lm->sec_per_line, > - geo->sec_per_chk); > + geo->clba); > > return sz; > } > diff --git a/drivers/lightnvm/pblk-write.c b/drivers/lightnvm/pblk-write.c > index aae86ed60b98..3e6f1ebd743a 100644 > --- a/drivers/lightnvm/pblk-write.c > +++ b/drivers/lightnvm/pblk-write.c > @@ -333,7 +333,7 @@ int pblk_submit_meta_io(struct pblk *pblk, struct pblk_line *meta_line) > m_ctx = nvm_rq_to_pdu(rqd); > m_ctx->private = meta_line; > > - rq_len = rq_ppas * geo->sec_size; > + rq_len = rq_ppas * geo->csecs; > data = ((void *)emeta->buf) + emeta->mem; > > bio = pblk_bio_map_addr(pblk, data, rq_ppas, rq_len, > diff --git a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk.h > index f0309d8172c0..b29c1e6698aa 100644 > --- a/drivers/lightnvm/pblk.h > +++ b/drivers/lightnvm/pblk.h > @@ -551,21 +551,6 @@ struct pblk_line_meta { > unsigned int meta_distance; /* Distance between data and metadata */ > }; > > -struct pblk_addr_format { > - u64 ch_mask; > - u64 lun_mask; > - u64 pln_mask; > - u64 blk_mask; > - u64 pg_mask; > - u64 sec_mask; > - u8 ch_offset; > - u8 lun_offset; > - u8 
pln_offset; > - u8 blk_offset; > - u8 pg_offset; > - u8 sec_offset; > -}; > - > enum { > PBLK_STATE_RUNNING = 0, > PBLK_STATE_STOPPING = 1, > @@ -585,8 +570,8 @@ struct pblk { > struct pblk_line_mgmt l_mg; /* Line management */ > struct pblk_line_meta lm; /* Line metadata */ > > + struct nvm_addr_format ppaf; > int ppaf_bitsize; > - struct pblk_addr_format ppaf; > > struct pblk_rb rwb; > > @@ -941,14 +926,12 @@ static inline int pblk_line_vsc(struct pblk_line *line) > return le32_to_cpu(*line->vsc); > } > > -#define NVM_MEM_PAGE_WRITE (8) > - > static inline int pblk_pad_distance(struct pblk *pblk) > { > struct nvm_tgt_dev *dev = pblk->dev; > struct nvm_geo *geo = &dev->geo; > > - return NVM_MEM_PAGE_WRITE * geo->all_luns * geo->sec_per_pl; > + return geo->mw_cunits * geo->all_luns * geo->ws_opt; > } > > static inline int pblk_ppa_to_line(struct ppa_addr p) > @@ -964,15 +947,17 @@ static inline int pblk_ppa_to_pos(struct nvm_geo *geo, struct ppa_addr p) > static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr, > u64 line_id) > { > + struct nvm_addr_format_12 *ppaf = > + (struct nvm_addr_format_12 *)&pblk->ppaf; > struct ppa_addr ppa; > > ppa.ppa = 0; > ppa.g.blk = line_id; > - ppa.g.pg = (paddr & pblk->ppaf.pg_mask) >> pblk->ppaf.pg_offset; > - ppa.g.lun = (paddr & pblk->ppaf.lun_mask) >> pblk->ppaf.lun_offset; > - ppa.g.ch = (paddr & pblk->ppaf.ch_mask) >> pblk->ppaf.ch_offset; > - ppa.g.pl = (paddr & pblk->ppaf.pln_mask) >> pblk->ppaf.pln_offset; > - ppa.g.sec = (paddr & pblk->ppaf.sec_mask) >> pblk->ppaf.sec_offset; > + ppa.g.pg = (paddr & ppaf->pg_mask) >> ppaf->pg_offset; > + ppa.g.lun = (paddr & ppaf->lun_mask) >> ppaf->lun_offset; > + ppa.g.ch = (paddr & ppaf->ch_mask) >> ppaf->ch_offset; > + ppa.g.pl = (paddr & ppaf->pln_mask) >> ppaf->pln_offset; > + ppa.g.sec = (paddr & ppaf->sec_mask) >> ppaf->sect_offset; > > return ppa; > } > @@ -980,13 +965,15 @@ static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr, > static 
inline u64 pblk_dev_ppa_to_line_addr(struct pblk *pblk, > struct ppa_addr p) > { > + struct nvm_addr_format_12 *ppaf = > + (struct nvm_addr_format_12 *)&pblk->ppaf; > u64 paddr; > > - paddr = (u64)p.g.pg << pblk->ppaf.pg_offset; > - paddr |= (u64)p.g.lun << pblk->ppaf.lun_offset; > - paddr |= (u64)p.g.ch << pblk->ppaf.ch_offset; > - paddr |= (u64)p.g.pl << pblk->ppaf.pln_offset; > - paddr |= (u64)p.g.sec << pblk->ppaf.sec_offset; > + paddr = (u64)p.g.ch << ppaf->ch_offset; > + paddr |= (u64)p.g.lun << ppaf->lun_offset; > + paddr |= (u64)p.g.pg << ppaf->pg_offset; > + paddr |= (u64)p.g.pl << ppaf->pln_offset; > + paddr |= (u64)p.g.sec << ppaf->sect_offset; > > return paddr; > } > @@ -1003,18 +990,15 @@ static inline struct ppa_addr pblk_ppa32_to_ppa64(struct pblk *pblk, u32 ppa32) > ppa64.c.line = ppa32 & ((~0U) >> 1); > ppa64.c.is_cached = 1; > } else { > - ppa64.g.blk = (ppa32 & pblk->ppaf.blk_mask) >> > - pblk->ppaf.blk_offset; > - ppa64.g.pg = (ppa32 & pblk->ppaf.pg_mask) >> > - pblk->ppaf.pg_offset; > - ppa64.g.lun = (ppa32 & pblk->ppaf.lun_mask) >> > - pblk->ppaf.lun_offset; > - ppa64.g.ch = (ppa32 & pblk->ppaf.ch_mask) >> > - pblk->ppaf.ch_offset; > - ppa64.g.pl = (ppa32 & pblk->ppaf.pln_mask) >> > - pblk->ppaf.pln_offset; > - ppa64.g.sec = (ppa32 & pblk->ppaf.sec_mask) >> > - pblk->ppaf.sec_offset; > + struct nvm_addr_format_12 *ppaf = > + (struct nvm_addr_format_12 *)&pblk->ppaf; > + > + ppa64.g.ch = (ppa32 & ppaf->ch_mask) >> ppaf->ch_offset; > + ppa64.g.lun = (ppa32 & ppaf->lun_mask) >> ppaf->lun_offset; > + ppa64.g.blk = (ppa32 & ppaf->blk_mask) >> ppaf->blk_offset; > + ppa64.g.pg = (ppa32 & ppaf->pg_mask) >> ppaf->pg_offset; > + ppa64.g.pl = (ppa32 & ppaf->pln_mask) >> ppaf->pln_offset; > + ppa64.g.sec = (ppa32 & ppaf->sec_mask) >> ppaf->sect_offset; > } > > return ppa64; > @@ -1030,12 +1014,15 @@ static inline u32 pblk_ppa64_to_ppa32(struct pblk *pblk, struct ppa_addr ppa64) > ppa32 |= ppa64.c.line; > ppa32 |= 1U << 31; > } else { > - ppa32 |= 
ppa64.g.blk << pblk->ppaf.blk_offset; > - ppa32 |= ppa64.g.pg << pblk->ppaf.pg_offset; > - ppa32 |= ppa64.g.lun << pblk->ppaf.lun_offset; > - ppa32 |= ppa64.g.ch << pblk->ppaf.ch_offset; > - ppa32 |= ppa64.g.pl << pblk->ppaf.pln_offset; > - ppa32 |= ppa64.g.sec << pblk->ppaf.sec_offset; > + struct nvm_addr_format_12 *ppaf = > + (struct nvm_addr_format_12 *)&pblk->ppaf; > + > + ppa32 |= ppa64.g.ch << ppaf->ch_offset; > + ppa32 |= ppa64.g.lun << ppaf->lun_offset; > + ppa32 |= ppa64.g.blk << ppaf->blk_offset; > + ppa32 |= ppa64.g.pg << ppaf->pg_offset; > + ppa32 |= ppa64.g.pl << ppaf->pln_offset; > + ppa32 |= ppa64.g.sec << ppaf->sect_offset; > } > > return ppa32; > @@ -1229,10 +1216,10 @@ static inline int pblk_boundary_ppa_checks(struct nvm_tgt_dev *tgt_dev, > if (!ppa->c.is_cached && > ppa->g.ch < geo->nr_chnls && > ppa->g.lun < geo->nr_luns && > - ppa->g.pl < geo->nr_planes && > + ppa->g.pl < geo->num_pln && > ppa->g.blk < geo->nr_chks && > - ppa->g.pg < geo->ws_per_chk && > - ppa->g.sec < geo->sec_per_pg) > + ppa->g.pg < geo->num_pg && > + ppa->g.sec < geo->ws_min) > continue; > > print_ppa(ppa, "boundary", i); > diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c > index 839c0b96466a..e276ace28c64 100644 > --- a/drivers/nvme/host/lightnvm.c > +++ b/drivers/nvme/host/lightnvm.c > @@ -152,8 +152,8 @@ struct nvme_nvm_id12_addrf { > __u8 blk_len; > __u8 pg_offset; > __u8 pg_len; > - __u8 sect_offset; > - __u8 sect_len; > + __u8 sec_offset; > + __u8 sec_len; > __u8 res[4]; > } __packed; > > @@ -254,106 +254,161 @@ static inline void _nvme_nvm_check_size(void) > BUILD_BUG_ON(sizeof(struct nvme_nvm_id20) != NVME_IDENTIFY_DATA_SIZE); > } > > -static int init_grp(struct nvm_id *nvm_id, struct nvme_nvm_id12 *id12) > +static void nvme_nvm_set_addr_12(struct nvm_addr_format_12 *dst, > + struct nvme_nvm_id12_addrf *src) > +{ > + dst->ch_len = src->ch_len; > + dst->lun_len = src->lun_len; > + dst->blk_len = src->blk_len; > + dst->pg_len = src->pg_len; > + 
dst->pln_len = src->pln_len; > + dst->sect_len = src->sec_len; > + > + dst->ch_offset = src->ch_offset; > + dst->lun_offset = src->lun_offset; > + dst->blk_offset = src->blk_offset; > + dst->pg_offset = src->pg_offset; > + dst->pln_offset = src->pln_offset; > + dst->sect_offset = src->sec_offset; > + > + dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset; > + dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset; > + dst->blk_mask = ((1ULL << dst->blk_len) - 1) << dst->blk_offset; > + dst->pg_mask = ((1ULL << dst->pg_len) - 1) << dst->pg_offset; > + dst->pln_mask = ((1ULL << dst->pln_len) - 1) << dst->pln_offset; > + dst->sec_mask = ((1ULL << dst->sect_len) - 1) << dst->sect_offset; > +} > + > +static int nvme_nvm_setup_12(struct nvme_nvm_id12 *id, > + struct nvm_geo *geo) > { > struct nvme_nvm_id12_grp *src; > int sec_per_pg, sec_per_pl, pg_per_blk; > > - if (id12->cgrps != 1) > + if (id->cgrps != 1) > return -EINVAL; > > - src = &id12->grp; > + src = &id->grp; > > - nvm_id->mtype = src->mtype; > - nvm_id->fmtype = src->fmtype; > + if (src->mtype != 0) { > + pr_err("nvm: memory type not supported\n"); > + return -EINVAL; > + } > + > + geo->ver_id = id->ver_id; > + > + geo->nr_chnls = src->num_ch; > + geo->nr_luns = src->num_lun; > + geo->all_luns = geo->nr_chnls * geo->nr_luns; > > - nvm_id->num_ch = src->num_ch; > - nvm_id->num_lun = src->num_lun; > + geo->nr_chks = le16_to_cpu(src->num_chk); > > - nvm_id->num_chk = le16_to_cpu(src->num_chk); > - nvm_id->csecs = le16_to_cpu(src->csecs); > - nvm_id->sos = le16_to_cpu(src->sos); > + geo->csecs = le16_to_cpu(src->csecs); > + geo->sos = le16_to_cpu(src->sos); > > pg_per_blk = le16_to_cpu(src->num_pg); > - sec_per_pg = le16_to_cpu(src->fpg_sz) / nvm_id->csecs; > + sec_per_pg = le16_to_cpu(src->fpg_sz) / geo->csecs; > sec_per_pl = sec_per_pg * src->num_pln; > - nvm_id->clba = sec_per_pl * pg_per_blk; > - nvm_id->ws_per_chk = pg_per_blk; > - > - nvm_id->mpos = le32_to_cpu(src->mpos); > - nvm_id->cpar 
= le16_to_cpu(src->cpar); > - nvm_id->mccap = le32_to_cpu(src->mccap); > - > - nvm_id->ws_opt = nvm_id->ws_min = sec_per_pg; > - nvm_id->ws_seq = NVM_IO_SNGL_ACCESS; > - > - if (nvm_id->mpos & 0x020202) { > - nvm_id->ws_seq = NVM_IO_DUAL_ACCESS; > - nvm_id->ws_opt <<= 1; > - } else if (nvm_id->mpos & 0x040404) { > - nvm_id->ws_seq = NVM_IO_QUAD_ACCESS; > - nvm_id->ws_opt <<= 2; > + geo->clba = sec_per_pl * pg_per_blk; > + > + geo->all_chunks = geo->all_luns * geo->nr_chks; > + geo->total_secs = geo->clba * geo->all_chunks; > + > + geo->ws_min = sec_per_pg; > + geo->ws_opt = sec_per_pg; > + geo->mw_cunits = geo->ws_opt << 3; /* default to MLC safe values */ > + > + geo->mccap = le32_to_cpu(src->mccap); > + > + geo->trdt = le32_to_cpu(src->trdt); > + geo->trdm = le32_to_cpu(src->trdm); > + geo->tprt = le32_to_cpu(src->tprt); > + geo->tprm = le32_to_cpu(src->tprm); > + geo->tbet = le32_to_cpu(src->tbet); > + geo->tbem = le32_to_cpu(src->tbem); > + > + /* 1.2 compatibility */ > + geo->vmnt = id->vmnt; > + geo->cap = le32_to_cpu(id->cap); > + geo->dom = le32_to_cpu(id->dom); > + > + geo->mtype = src->mtype; > + geo->fmtype = src->fmtype; > + > + geo->cpar = le16_to_cpu(src->cpar); > + geo->mpos = le32_to_cpu(src->mpos); > + > + geo->plane_mode = NVM_PLANE_SINGLE; > + > + if (geo->mpos & 0x020202) { > + geo->plane_mode = NVM_PLANE_DOUBLE; > + geo->ws_opt <<= 1; > + } else if (geo->mpos & 0x040404) { > + geo->plane_mode = NVM_PLANE_QUAD; > + geo->ws_opt <<= 2; > } > > - nvm_id->trdt = le32_to_cpu(src->trdt); > - nvm_id->trdm = le32_to_cpu(src->trdm); > - nvm_id->tprt = le32_to_cpu(src->tprt); > - nvm_id->tprm = le32_to_cpu(src->tprm); > - nvm_id->tbet = le32_to_cpu(src->tbet); > - nvm_id->tbem = le32_to_cpu(src->tbem); > - > - /* 1.2 compatibility */ > - nvm_id->num_pln = src->num_pln; > - nvm_id->num_pg = le16_to_cpu(src->num_pg); > - nvm_id->fpg_sz = le16_to_cpu(src->fpg_sz); > + geo->num_pln = src->num_pln; > + geo->num_pg = le16_to_cpu(src->num_pg); > + geo->fpg_sz = 
le16_to_cpu(src->fpg_sz); > + > + nvme_nvm_set_addr_12((struct nvm_addr_format_12 *)&geo->addrf, > + &id->ppaf); > > return 0; > } > > -static int nvme_nvm_setup_12(struct nvm_dev *nvmdev, struct nvm_id *nvm_id, > - struct nvme_nvm_id12 *id) > +static void nvme_nvm_set_addr_20(struct nvm_addr_format *dst, > + struct nvme_nvm_id20_addrf *src) > { > - nvm_id->ver_id = id->ver_id; > - nvm_id->vmnt = id->vmnt; > - nvm_id->cap = le32_to_cpu(id->cap); > - nvm_id->dom = le32_to_cpu(id->dom); > - memcpy(&nvm_id->ppaf, &id->ppaf, > - sizeof(struct nvm_addr_format)); > - > - return init_grp(nvm_id, id); > + dst->ch_len = src->grp_len; > + dst->lun_len = src->pu_len; > + dst->chk_len = src->chk_len; > + dst->sec_len = src->lba_len; > + > + dst->sec_offset = 0; > + dst->chk_offset = dst->sec_len; > + dst->lun_offset = dst->chk_offset + dst->chk_len; > + dst->ch_offset = dst->lun_offset + dst->lun_len; > + > + dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset; > + dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset; > + dst->chk_mask = ((1ULL << dst->chk_len) - 1) << dst->chk_offset; > + dst->sec_mask = ((1ULL << dst->sec_len) - 1) << dst->sec_offset; > } > > -static int nvme_nvm_setup_20(struct nvm_dev *nvmdev, struct nvm_id *nvm_id, > - struct nvme_nvm_id20 *id) > +static int nvme_nvm_setup_20(struct nvme_nvm_id20 *id, > + struct nvm_geo *geo) > { > - nvm_id->ver_id = id->mjr; > + geo->ver_id = id->mjr; > + > + geo->nr_chnls = le16_to_cpu(id->num_grp); > + geo->nr_luns = le16_to_cpu(id->num_pu); > + geo->all_luns = geo->nr_chnls * geo->nr_luns; > > - nvm_id->num_ch = le16_to_cpu(id->num_grp); > - nvm_id->num_lun = le16_to_cpu(id->num_pu); > - nvm_id->num_chk = le32_to_cpu(id->num_chk); > - nvm_id->clba = le32_to_cpu(id->clba); > + geo->nr_chks = le32_to_cpu(id->num_chk); > + geo->clba = le32_to_cpu(id->clba); > > - nvm_id->ws_min = le32_to_cpu(id->ws_min); > - nvm_id->ws_opt = le32_to_cpu(id->ws_opt); > - nvm_id->mw_cunits = le32_to_cpu(id->mw_cunits); 
> + geo->all_chunks = geo->all_luns * geo->nr_chks; > + geo->total_secs = geo->clba * geo->all_chunks; > > - nvm_id->trdt = le32_to_cpu(id->trdt); > - nvm_id->trdm = le32_to_cpu(id->trdm); > - nvm_id->tprt = le32_to_cpu(id->twrt); > - nvm_id->tprm = le32_to_cpu(id->twrm); > - nvm_id->tbet = le32_to_cpu(id->tcrst); > - nvm_id->tbem = le32_to_cpu(id->tcrsm); > + geo->ws_min = le32_to_cpu(id->ws_min); > + geo->ws_opt = le32_to_cpu(id->ws_opt); > + geo->mw_cunits = le32_to_cpu(id->mw_cunits); > > - /* calculated values */ > - nvm_id->ws_per_chk = nvm_id->clba / nvm_id->ws_min; > + geo->trdt = le32_to_cpu(id->trdt); > + geo->trdm = le32_to_cpu(id->trdm); > + geo->tprt = le32_to_cpu(id->twrt); > + geo->tprm = le32_to_cpu(id->twrm); > + geo->tbet = le32_to_cpu(id->tcrst); > + geo->tbem = le32_to_cpu(id->tcrsm); > > - /* 1.2 compatibility */ > - nvm_id->ws_seq = NVM_IO_SNGL_ACCESS; > + nvme_nvm_set_addr_20(&geo->addrf, &id->lbaf); > > return 0; > } > > -static int nvme_nvm_identity(struct nvm_dev *nvmdev, struct nvm_id *nvm_id) > +static int nvme_nvm_identity(struct nvm_dev *nvmdev) > { > struct nvme_ns *ns = nvmdev->q->queuedata; > struct nvme_nvm_id12 *id; > @@ -380,18 +435,18 @@ static int nvme_nvm_identity(struct nvm_dev *nvmdev, struct nvm_id *nvm_id) > */ > switch (id->ver_id) { > case 1: > - ret = nvme_nvm_setup_12(nvmdev, nvm_id, id); > + ret = nvme_nvm_setup_12(id, &nvmdev->geo); > break; > case 2: > - ret = nvme_nvm_setup_20(nvmdev, nvm_id, > - (struct nvme_nvm_id20 *)id); > + ret = nvme_nvm_setup_20((struct nvme_nvm_id20 *)id, > + &nvmdev->geo); > break; > default: > - dev_err(ns->ctrl->device, > - "OCSSD revision not supported (%d)\n", > - nvm_id->ver_id); > + dev_err(ns->ctrl->device, "OCSSD revision not supported (%d)\n", > + id->ver_id); > ret = -EINVAL; > } > + > out: > kfree(id); > return ret; > @@ -406,7 +461,7 @@ static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa, > struct nvme_ctrl *ctrl = ns->ctrl; > struct nvme_nvm_command c = 
{}; > struct nvme_nvm_bb_tbl *bb_tbl; > - int nr_blks = geo->nr_chks * geo->plane_mode; > + int nr_blks = geo->nr_chks * geo->num_pln; > int tblsz = sizeof(struct nvme_nvm_bb_tbl) + nr_blks; > int ret = 0; > > @@ -447,7 +502,7 @@ static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa, > goto out; > } > > - memcpy(blks, bb_tbl->blk, geo->nr_chks * geo->plane_mode); > + memcpy(blks, bb_tbl->blk, geo->nr_chks * geo->num_pln); > out: > kfree(bb_tbl); > return ret; > @@ -815,9 +870,10 @@ int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg) > void nvme_nvm_update_nvm_info(struct nvme_ns *ns) > { > struct nvm_dev *ndev = ns->ndev; > + struct nvm_geo *geo = &ndev->geo; > > - ndev->identity.csecs = ndev->geo.sec_size = 1 << ns->lba_shift; > - ndev->identity.sos = ndev->geo.oob_size = ns->ms; > + geo->csecs = 1 << ns->lba_shift; > + geo->sos = ns->ms; > } > > int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node) > @@ -850,23 +906,22 @@ static ssize_t nvm_dev_attr_show(struct device *dev, > { > struct nvme_ns *ns = nvme_get_ns_from_dev(dev); > struct nvm_dev *ndev = ns->ndev; > - struct nvm_id *id; > + struct nvm_geo *geo = &ndev->geo; > struct attribute *attr; > > if (!ndev) > return 0; > > - id = &ndev->identity; > attr = &dattr->attr; > > if (strcmp(attr->name, "version") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->ver_id); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->ver_id); > } else if (strcmp(attr->name, "capabilities") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->cap); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->cap); > } else if (strcmp(attr->name, "read_typ") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->trdt); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->trdt); > } else if (strcmp(attr->name, "read_max") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->trdm); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->trdm); > } else { > return 
scnprintf(page, > PAGE_SIZE, > @@ -875,75 +930,79 @@ static ssize_t nvm_dev_attr_show(struct device *dev, > } > } > > +static ssize_t nvm_dev_attr_show_ppaf(struct nvm_addr_format_12 *ppaf, > + char *page) > +{ > + return scnprintf(page, PAGE_SIZE, > + "0x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x\n", > + ppaf->ch_offset, ppaf->ch_len, > + ppaf->lun_offset, ppaf->lun_len, > + ppaf->pln_offset, ppaf->pln_len, > + ppaf->blk_offset, ppaf->blk_len, > + ppaf->pg_offset, ppaf->pg_len, > + ppaf->sect_offset, ppaf->sect_len); > +} > + > static ssize_t nvm_dev_attr_show_12(struct device *dev, > struct device_attribute *dattr, char *page) > { > struct nvme_ns *ns = nvme_get_ns_from_dev(dev); > struct nvm_dev *ndev = ns->ndev; > - struct nvm_id *id; > + struct nvm_geo *geo = &ndev->geo; > struct attribute *attr; > > if (!ndev) > return 0; > > - id = &ndev->identity; > attr = &dattr->attr; > > if (strcmp(attr->name, "vendor_opcode") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->vmnt); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->vmnt); > } else if (strcmp(attr->name, "device_mode") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->dom); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->dom); > /* kept for compatibility */ > } else if (strcmp(attr->name, "media_manager") == 0) { > return scnprintf(page, PAGE_SIZE, "%s\n", "gennvm"); > } else if (strcmp(attr->name, "ppa_format") == 0) { > - return scnprintf(page, PAGE_SIZE, > - "0x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x\n", > - id->ppaf.ch_offset, id->ppaf.ch_len, > - id->ppaf.lun_offset, id->ppaf.lun_len, > - id->ppaf.pln_offset, id->ppaf.pln_len, > - id->ppaf.blk_offset, id->ppaf.blk_len, > - id->ppaf.pg_offset, id->ppaf.pg_len, > - id->ppaf.sect_offset, id->ppaf.sect_len); > + return nvm_dev_attr_show_ppaf((void *)&geo->addrf, page); Why does the code here cast to void *, and not to the address format data structure?
Have you thought about doing the cast directly here, instead of making a function for it? > } else if (strcmp(attr->name, "media_type") == 0) { /* u8 */ > - return scnprintf(page, PAGE_SIZE, "%u\n", id->mtype); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->mtype); > } else if (strcmp(attr->name, "flash_media_type") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->fmtype); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->fmtype); > } else if (strcmp(attr->name, "num_channels") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_ch); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chnls); > } else if (strcmp(attr->name, "num_luns") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_lun); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_luns); > } else if (strcmp(attr->name, "num_planes") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_pln); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->num_pln); > } else if (strcmp(attr->name, "num_blocks") == 0) { /* u16 */ > - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_chk); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chks); > } else if (strcmp(attr->name, "num_pages") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_pg); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->num_pg); > } else if (strcmp(attr->name, "page_size") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->fpg_sz); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->fpg_sz); > } else if (strcmp(attr->name, "hw_sector_size") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->csecs); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->csecs); > } else if (strcmp(attr->name, "oob_sector_size") == 0) {/* u32 */ > - return scnprintf(page, PAGE_SIZE, "%u\n", id->sos); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->sos); > } else if (strcmp(attr->name, "prog_typ") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprt); > + return scnprintf(page, 
PAGE_SIZE, "%u\n", geo->tprt); > } else if (strcmp(attr->name, "prog_max") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprm); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tprm); > } else if (strcmp(attr->name, "erase_typ") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbet); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tbet); > } else if (strcmp(attr->name, "erase_max") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbem); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tbem); > } else if (strcmp(attr->name, "multiplane_modes") == 0) { > - return scnprintf(page, PAGE_SIZE, "0x%08x\n", id->mpos); > + return scnprintf(page, PAGE_SIZE, "0x%08x\n", geo->mpos); > } else if (strcmp(attr->name, "media_capabilities") == 0) { > - return scnprintf(page, PAGE_SIZE, "0x%08x\n", id->mccap); > + return scnprintf(page, PAGE_SIZE, "0x%08x\n", geo->mccap); > } else if (strcmp(attr->name, "max_phys_secs") == 0) { > return scnprintf(page, PAGE_SIZE, "%u\n", NVM_MAX_VLBA); > } else { > - return scnprintf(page, > - PAGE_SIZE, > - "Unhandled attr(%s) in `nvm_dev_attr_show_12`\n", > - attr->name); > + return scnprintf(page, PAGE_SIZE, > + "Unhandled attr(%s) in `nvm_dev_attr_show_12`\n", > + attr->name); > } > } > > @@ -952,42 +1011,40 @@ static ssize_t nvm_dev_attr_show_20(struct device *dev, > { > struct nvme_ns *ns = nvme_get_ns_from_dev(dev); > struct nvm_dev *ndev = ns->ndev; > - struct nvm_id *id; > + struct nvm_geo *geo = &ndev->geo; > struct attribute *attr; > > if (!ndev) > return 0; > > - id = &ndev->identity; > attr = &dattr->attr; > > if (strcmp(attr->name, "groups") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_ch); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chnls); > } else if (strcmp(attr->name, "punits") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_lun); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_luns); > } else if (strcmp(attr->name, "chunks") == 0) { > - return 
scnprintf(page, PAGE_SIZE, "%u\n", id->num_chk); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chks); > } else if (strcmp(attr->name, "clba") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->clba); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->clba); > } else if (strcmp(attr->name, "ws_min") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->ws_min); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->ws_min); > } else if (strcmp(attr->name, "ws_opt") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->ws_opt); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->ws_opt); > } else if (strcmp(attr->name, "mw_cunits") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->mw_cunits); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->mw_cunits); > } else if (strcmp(attr->name, "write_typ") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprt); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tprt); > } else if (strcmp(attr->name, "write_max") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprm); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tprm); > } else if (strcmp(attr->name, "reset_typ") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbet); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tbet); > } else if (strcmp(attr->name, "reset_max") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbem); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tbem); > } else { > - return scnprintf(page, > - PAGE_SIZE, > - "Unhandled attr(%s) in `nvm_dev_attr_show_20`\n", > - attr->name); > + return scnprintf(page, PAGE_SIZE, > + "Unhandled attr(%s) in `nvm_dev_attr_show_20`\n", > + attr->name); > } > } > > @@ -1106,10 +1163,13 @@ static const struct attribute_group nvm_dev_attr_group_20 = { > > int nvme_nvm_register_sysfs(struct nvme_ns *ns) > { > - if (!ns->ndev) > + struct nvm_dev *ndev = ns->ndev; > + struct nvm_geo *geo = &ndev->geo; > + > + if (!ndev) > return -EINVAL; > > - switch 
(ns->ndev->identity.ver_id) { > + switch (geo->ver_id) { > case 1: > return sysfs_create_group(&disk_to_dev(ns->disk)->kobj, > &nvm_dev_attr_group_12); > @@ -1123,7 +1183,10 @@ int nvme_nvm_register_sysfs(struct nvme_ns *ns) > > void nvme_nvm_unregister_sysfs(struct nvme_ns *ns) > { > - switch (ns->ndev->identity.ver_id) { > + struct nvm_dev *ndev = ns->ndev; > + struct nvm_geo *geo = &ndev->geo; > + > + switch (geo->ver_id) { > case 1: > sysfs_remove_group(&disk_to_dev(ns->disk)->kobj, > &nvm_dev_attr_group_12); > diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h > index e55b10573c99..16255fcd5250 100644 > --- a/include/linux/lightnvm.h > +++ b/include/linux/lightnvm.h > @@ -50,7 +50,7 @@ struct nvm_id; > struct nvm_dev; > struct nvm_tgt_dev; > > -typedef int (nvm_id_fn)(struct nvm_dev *, struct nvm_id *); > +typedef int (nvm_id_fn)(struct nvm_dev *); > typedef int (nvm_op_bb_tbl_fn)(struct nvm_dev *, struct ppa_addr, u8 *); > typedef int (nvm_op_set_bb_fn)(struct nvm_dev *, struct ppa_addr *, int, int); > typedef int (nvm_submit_io_fn)(struct nvm_dev *, struct nvm_rq *); > @@ -152,62 +152,48 @@ struct nvm_id_lp_tbl { > struct nvm_id_lp_mlc mlc; > }; > > -struct nvm_addr_format { > - u8 ch_offset; > +struct nvm_addr_format_12 { I can see in a couple of places that a statement has to span two lines due to the length of writing out nvm_addr_format_12. Would it make sense to shorten it to nvm_addrf_12?
> u8 ch_len; > - u8 lun_offset; > u8 lun_len; > - u8 pln_offset; > + u8 blk_len; > + u8 pg_len; > u8 pln_len; > + u8 sect_len; > + > + u8 ch_offset; > + u8 lun_offset; > u8 blk_offset; > - u8 blk_len; > u8 pg_offset; > - u8 pg_len; > + u8 pln_offset; > u8 sect_offset; > - u8 sect_len; > -}; > - > -struct nvm_id { > - u8 ver_id; > - u8 vmnt; > - u32 cap; > - u32 dom; > - > - struct nvm_addr_format ppaf; > - > - u8 num_ch; > - u8 num_lun; > - u16 num_chk; > - u16 clba; > - u16 csecs; > - u16 sos; > - > - u32 ws_min; > - u32 ws_opt; > - u32 mw_cunits; > > - u32 trdt; > - u32 trdm; > - u32 tprt; > - u32 tprm; > - u32 tbet; > - u32 tbem; > - u32 mpos; > - u32 mccap; > - u16 cpar; > - > - /* calculated values */ > - u16 ws_seq; > - u16 ws_per_chk; > - > - /* 1.2 compatibility */ > - u8 mtype; > - u8 fmtype; > + u64 ch_mask; > + u64 lun_mask; > + u64 blk_mask; > + u64 pg_mask; > + u64 pln_mask; > + u64 sec_mask; > +}; > > - u8 num_pln; > - u16 num_pg; > - u16 fpg_sz; > -} __packed; > +struct nvm_addr_format { > + u8 ch_len; > + u8 lun_len; > + u8 chk_len; > + u8 sec_len; > + u8 rsv_len[2]; > + > + u8 ch_offset; > + u8 lun_offset; > + u8 chk_offset; > + u8 sec_offset; > + u8 rsv_off[2]; > + > + u64 ch_mask; > + u64 lun_mask; > + u64 chk_mask; > + u64 sec_mask; > + u64 rsv_mask[2]; > +}; > > struct nvm_target { > struct list_head list; > @@ -274,36 +260,63 @@ enum { > NVM_BLK_ST_BAD = 0x8, /* Bad block */ > }; > > - > -/* Device generic information */ > +/* Instance geometry */ > struct nvm_geo { > - /* generic geometry */ > + /* device reported version */ > + u8 ver_id; > + > + /* instance specific geometry */ > int nr_chnls; > - int all_luns; /* across channels */ > - int nr_luns; /* per channel */ > - int nr_chks; /* per lun */ > + int nr_luns; /* per channel */ > > - int sec_size; > - int oob_size; > - int mccap; > + /* calculated values */ > + int all_luns; /* across channels */ > + int all_chunks; /* across channels */ > > - int sec_per_chk; > - int sec_per_lun; > + 
int op; /* over-provision in instance */ > > - int ws_min; > - int ws_opt; > - int ws_seq; > - int ws_per_chk; > + sector_t total_secs; /* across channels */ > > - int op; > + /* chunk geometry */ > + u32 nr_chks; /* chunks per lun */ > + u32 clba; /* sectors per chunk */ > + u16 csecs; /* sector size */ > + u16 sos; /* out-of-band area size */ > > - struct nvm_addr_format ppaf; > + /* device write constraints */ > + u32 ws_min; /* minimum write size */ > + u32 ws_opt; /* optimal write size */ > + u32 mw_cunits; /* distance required for successful read */ > > - /* Legacy 1.2 specific geometry */ > - int plane_mode; /* drive device in single, double or quad mode */ > - int nr_planes; > - int sec_per_pg; /* only sectors for a single page */ > - int sec_per_pl; /* all sectors across planes */ > + /* device capabilities */ > + u32 mccap; > + > + /* device timings */ > + u32 trdt; /* Avg. Tread (ns) */ > + u32 trdm; /* Max Tread (ns) */ > + u32 tprt; /* Avg. Tprog (ns) */ > + u32 tprm; /* Max Tprog (ns) */ > + u32 tbet; /* Avg.
Terase (ns) */ > + u32 tbem; /* Max Terase (ns) */ > + > + /* generic address format */ > + struct nvm_addr_format addrf; > + > + /* 1.2 compatibility */ > + u8 vmnt; > + u32 cap; > + u32 dom; > + > + u8 mtype; > + u8 fmtype; > + > + u16 cpar; > + u32 mpos; > + > + u8 num_pln; > + u8 plane_mode; > + u16 num_pg; > + u16 fpg_sz; > }; > > /* sub-device structure */ > @@ -314,9 +327,6 @@ struct nvm_tgt_dev { > /* Base ppas for target LUNs */ > struct ppa_addr *luns; > > - sector_t total_secs; > - > - struct nvm_id identity; > struct request_queue *q; > > struct nvm_dev *parent; > @@ -331,13 +341,9 @@ struct nvm_dev { > /* Device information */ > struct nvm_geo geo; > > - unsigned long total_secs; > - > unsigned long *lun_map; > void *dma_pool; > > - struct nvm_id identity; > - > /* Backend device */ > struct request_queue *q; > char name[DISK_NAME_LEN]; > @@ -357,14 +363,16 @@ static inline struct ppa_addr generic_to_dev_addr(struct nvm_tgt_dev *tgt_dev, > struct ppa_addr r) > { > struct nvm_geo *geo = &tgt_dev->geo; > + struct nvm_addr_format_12 *ppaf = > + (struct nvm_addr_format_12 *)&geo->addrf; > struct ppa_addr l; > > - l.ppa = ((u64)r.g.blk) << geo->ppaf.blk_offset; > - l.ppa |= ((u64)r.g.pg) << geo->ppaf.pg_offset; > - l.ppa |= ((u64)r.g.sec) << geo->ppaf.sect_offset; > - l.ppa |= ((u64)r.g.pl) << geo->ppaf.pln_offset; > - l.ppa |= ((u64)r.g.lun) << geo->ppaf.lun_offset; > - l.ppa |= ((u64)r.g.ch) << geo->ppaf.ch_offset; > + l.ppa = ((u64)r.g.ch) << ppaf->ch_offset; > + l.ppa |= ((u64)r.g.lun) << ppaf->lun_offset; > + l.ppa |= ((u64)r.g.blk) << ppaf->blk_offset; > + l.ppa |= ((u64)r.g.pg) << ppaf->pg_offset; > + l.ppa |= ((u64)r.g.pl) << ppaf->pln_offset; > + l.ppa |= ((u64)r.g.sec) << ppaf->sect_offset; > > return l; > } > @@ -373,24 +381,18 @@ static inline struct ppa_addr dev_to_generic_addr(struct nvm_tgt_dev *tgt_dev, > struct ppa_addr r) > { > struct nvm_geo *geo = &tgt_dev->geo; > + struct nvm_addr_format_12 *ppaf = > + (struct nvm_addr_format_12 
*)&geo->addrf; > struct ppa_addr l; > > l.ppa = 0; > - /* > - * (r.ppa << X offset) & X len bitmask. X eq. blk, pg, etc. > - */ > - l.g.blk = (r.ppa >> geo->ppaf.blk_offset) & > - (((1 << geo->ppaf.blk_len) - 1)); > - l.g.pg |= (r.ppa >> geo->ppaf.pg_offset) & > - (((1 << geo->ppaf.pg_len) - 1)); > - l.g.sec |= (r.ppa >> geo->ppaf.sect_offset) & > - (((1 << geo->ppaf.sect_len) - 1)); > - l.g.pl |= (r.ppa >> geo->ppaf.pln_offset) & > - (((1 << geo->ppaf.pln_len) - 1)); > - l.g.lun |= (r.ppa >> geo->ppaf.lun_offset) & > - (((1 << geo->ppaf.lun_len) - 1)); > - l.g.ch |= (r.ppa >> geo->ppaf.ch_offset) & > - (((1 << geo->ppaf.ch_len) - 1)); > + > + l.g.ch = (r.ppa & ppaf->ch_mask) >> ppaf->ch_offset; > + l.g.lun = (r.ppa & ppaf->lun_mask) >> ppaf->lun_offset; > + l.g.blk = (r.ppa & ppaf->blk_mask) >> ppaf->blk_offset; > + l.g.pg = (r.ppa & ppaf->pg_mask) >> ppaf->pg_offset; > + l.g.pl = (r.ppa & ppaf->pln_mask) >> ppaf->pln_offset; > + l.g.sec = (r.ppa & ppaf->sec_mask) >> ppaf->sect_offset; > > return l; > } > Looks good to me,
* [PATCH 01/15] lightnvm: simplify geometry structure. @ 2018-03-01 10:22 ` Matias Bjørling From: Matias Bjørling @ 2018-03-01 10:22 UTC On 02/28/2018 04:49 PM, Javier González wrote: > Currently, the device geometry is stored redundantly in the nvm_id and > nvm_geo structures at a device level. Moreover, when instantiating > targets on a specific number of LUNs, these structures are replicated > and manually modified to fit the instance channel and LUN partitioning. > > Instead, create a generic geometry around nvm_geo, which can be used by > (i) the underlying device to describe the geometry of the whole device, > and (ii) instances to describe their geometry independently. > > Signed-off-by: Javier González <javier@cnexlabs.com> > --- > drivers/lightnvm/core.c | 70 +++----- > drivers/lightnvm/pblk-core.c | 16 +- > drivers/lightnvm/pblk-gc.c | 2 +- > drivers/lightnvm/pblk-init.c | 119 +++++++------- > drivers/lightnvm/pblk-read.c | 2 +- > drivers/lightnvm/pblk-recovery.c | 14 +- > drivers/lightnvm/pblk-rl.c | 2 +- > drivers/lightnvm/pblk-sysfs.c | 39 +++-- > drivers/lightnvm/pblk-write.c | 2 +- > drivers/lightnvm/pblk.h | 87 +++++----- > drivers/nvme/host/lightnvm.c | 341 +++++++++++++++++++++++---------------- > include/linux/lightnvm.h | 200 +++++++++++------------ > 12 files changed, 465 insertions(+), 429 deletions(-) > > diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c > index 19c46ebb1b91..9a417d9cdf0c 100644 > --- a/drivers/lightnvm/core.c > +++ b/drivers/lightnvm/core.c > @@ -155,7 +155,7 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev, > int blun = lun_begin % dev->geo.nr_luns; > int lunid = 0; > int lun_balanced = 1; > - int prev_nr_luns; > + int sec_per_lun, prev_nr_luns; > int i, j; > > nr_chnls = (nr_chnls_mod == 0) ?
nr_chnls : nr_chnls + 1; > @@ -215,18 +215,23 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev, > if (!tgt_dev) > goto err_ch; > > + /* Inherit device geometry from parent */ > memcpy(&tgt_dev->geo, &dev->geo, sizeof(struct nvm_geo)); > + > /* Target device only owns a portion of the physical device */ > tgt_dev->geo.nr_chnls = nr_chnls; > - tgt_dev->geo.all_luns = nr_luns; > tgt_dev->geo.nr_luns = (lun_balanced) ? prev_nr_luns : -1; > + tgt_dev->geo.all_luns = nr_luns; > + tgt_dev->geo.all_chunks = nr_luns * dev->geo.nr_chks; > + > tgt_dev->geo.op = op; > - tgt_dev->total_secs = nr_luns * tgt_dev->geo.sec_per_lun; > + > + sec_per_lun = dev->geo.clba * dev->geo.nr_chks; > + tgt_dev->geo.total_secs = nr_luns * sec_per_lun; > + > tgt_dev->q = dev->q; > tgt_dev->map = dev_map; > tgt_dev->luns = luns; > - memcpy(&tgt_dev->identity, &dev->identity, sizeof(struct nvm_id)); > - > tgt_dev->parent = dev; > > return tgt_dev; > @@ -296,8 +301,6 @@ static int __nvm_config_simple(struct nvm_dev *dev, > static int __nvm_config_extended(struct nvm_dev *dev, > struct nvm_ioctl_create_extended *e) > { > - struct nvm_geo *geo = &dev->geo; > - > if (e->lun_begin == 0xFFFF && e->lun_end == 0xFFFF) { > e->lun_begin = 0; > e->lun_end = dev->geo.all_luns - 1; > @@ -311,7 +314,7 @@ static int __nvm_config_extended(struct nvm_dev *dev, > return -EINVAL; > } > > - return nvm_config_check_luns(geo, e->lun_begin, e->lun_end); > + return nvm_config_check_luns(&dev->geo, e->lun_begin, e->lun_end); > } > > static int nvm_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create) > @@ -406,7 +409,7 @@ static int nvm_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create) > tqueue->queuedata = targetdata; > > blk_queue_max_hw_sectors(tqueue, > - (dev->geo.sec_size >> 9) * NVM_MAX_VLBA); > + (dev->geo.csecs >> 9) * NVM_MAX_VLBA); > > set_capacity(tdisk, tt->capacity(targetdata)); > add_disk(tdisk); > @@ -841,40 +844,9 @@ EXPORT_SYMBOL(nvm_get_tgt_bb_tbl); > > 
static int nvm_core_init(struct nvm_dev *dev) > { > - struct nvm_id *id = &dev->identity; > struct nvm_geo *geo = &dev->geo; > int ret; > > - memcpy(&geo->ppaf, &id->ppaf, sizeof(struct nvm_addr_format)); > - > - if (id->mtype != 0) { > - pr_err("nvm: memory type not supported\n"); > - return -EINVAL; > - } > - > - /* Whole device values */ > - geo->nr_chnls = id->num_ch; > - geo->nr_luns = id->num_lun; > - > - /* Generic device geometry values */ > - geo->ws_min = id->ws_min; > - geo->ws_opt = id->ws_opt; > - geo->ws_seq = id->ws_seq; > - geo->ws_per_chk = id->ws_per_chk; > - geo->nr_chks = id->num_chk; > - geo->mccap = id->mccap; > - > - geo->sec_per_chk = id->clba; > - geo->sec_per_lun = geo->sec_per_chk * geo->nr_chks; > - geo->all_luns = geo->nr_luns * geo->nr_chnls; > - > - /* 1.2 spec device geometry values */ > - geo->plane_mode = 1 << geo->ws_seq; > - geo->nr_planes = geo->ws_opt / geo->ws_min; > - geo->sec_per_pg = geo->ws_min; > - geo->sec_per_pl = geo->sec_per_pg * geo->nr_planes; > - > - dev->total_secs = geo->all_luns * geo->sec_per_lun; > dev->lun_map = kcalloc(BITS_TO_LONGS(geo->all_luns), > sizeof(unsigned long), GFP_KERNEL); > if (!dev->lun_map) > @@ -913,16 +885,14 @@ static int nvm_init(struct nvm_dev *dev) > struct nvm_geo *geo = &dev->geo; > int ret = -EINVAL; > > - if (dev->ops->identity(dev, &dev->identity)) { > + if (dev->ops->identity(dev)) { > pr_err("nvm: device could not be identified\n"); > goto err; > } > > - if (dev->identity.ver_id != 1 && dev->identity.ver_id != 2) { > - pr_err("nvm: device ver_id %d not supported by kernel.\n", > - dev->identity.ver_id); > - goto err; > - } > + pr_debug("nvm: ver:%u nvm_vendor:%x\n", > + geo->ver_id, > + geo->vmnt); > > ret = nvm_core_init(dev); > if (ret) { > @@ -930,10 +900,10 @@ static int nvm_init(struct nvm_dev *dev) > goto err; > } > > - pr_info("nvm: registered %s [%u/%u/%u/%u/%u/%u]\n", > - dev->name, geo->sec_per_pg, geo->nr_planes, > - geo->ws_per_chk, geo->nr_chks, > - geo->all_luns, 
geo->nr_chnls); > + pr_info("nvm: registered %s [%u/%u/%u/%u/%u]\n", > + dev->name, geo->ws_min, geo->ws_opt, > + geo->nr_chks, geo->all_luns, > + geo->nr_chnls); > return 0; > err: > pr_err("nvm: failed to initialize nvm\n"); > diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c > index 8848443a0721..169589ddd457 100644 > --- a/drivers/lightnvm/pblk-core.c > +++ b/drivers/lightnvm/pblk-core.c > @@ -613,7 +613,7 @@ static int pblk_line_submit_emeta_io(struct pblk *pblk, struct pblk_line *line, > memset(&rqd, 0, sizeof(struct nvm_rq)); > > rq_ppas = pblk_calc_secs(pblk, left_ppas, 0); > - rq_len = rq_ppas * geo->sec_size; > + rq_len = rq_ppas * geo->csecs; > > bio = pblk_bio_map_addr(pblk, emeta_buf, rq_ppas, rq_len, > l_mg->emeta_alloc_type, GFP_KERNEL); > @@ -722,7 +722,7 @@ u64 pblk_line_smeta_start(struct pblk *pblk, struct pblk_line *line) > if (bit >= lm->blk_per_line) > return -1; > > - return bit * geo->sec_per_pl; > + return bit * geo->ws_opt; > } > > static int pblk_line_submit_smeta_io(struct pblk *pblk, struct pblk_line *line, > @@ -1035,19 +1035,19 @@ static int pblk_line_init_bb(struct pblk *pblk, struct pblk_line *line, > /* Capture bad block information on line mapping bitmaps */ > while ((bit = find_next_bit(line->blk_bitmap, lm->blk_per_line, > bit + 1)) < lm->blk_per_line) { > - off = bit * geo->sec_per_pl; > + off = bit * geo->ws_opt; > bitmap_shift_left(l_mg->bb_aux, l_mg->bb_template, off, > lm->sec_per_line); > bitmap_or(line->map_bitmap, line->map_bitmap, l_mg->bb_aux, > lm->sec_per_line); > - line->sec_in_line -= geo->sec_per_chk; > + line->sec_in_line -= geo->clba; > if (bit >= lm->emeta_bb) > nr_bb++; > } > > /* Mark smeta metadata sectors as bad sectors */ > bit = find_first_zero_bit(line->blk_bitmap, lm->blk_per_line); > - off = bit * geo->sec_per_pl; > + off = bit * geo->ws_opt; > bitmap_set(line->map_bitmap, off, lm->smeta_sec); > line->sec_in_line -= lm->smeta_sec; > line->smeta_ssec = off; > @@ -1066,10 +1066,10 
@@ static int pblk_line_init_bb(struct pblk *pblk, struct pblk_line *line, > emeta_secs = lm->emeta_sec[0]; > off = lm->sec_per_line; > while (emeta_secs) { > - off -= geo->sec_per_pl; > + off -= geo->ws_opt; > if (!test_bit(off, line->invalid_bitmap)) { > - bitmap_set(line->invalid_bitmap, off, geo->sec_per_pl); > - emeta_secs -= geo->sec_per_pl; > + bitmap_set(line->invalid_bitmap, off, geo->ws_opt); > + emeta_secs -= geo->ws_opt; > } > } > > diff --git a/drivers/lightnvm/pblk-gc.c b/drivers/lightnvm/pblk-gc.c > index 320f99af99e9..6851a5c67189 100644 > --- a/drivers/lightnvm/pblk-gc.c > +++ b/drivers/lightnvm/pblk-gc.c > @@ -88,7 +88,7 @@ static void pblk_gc_line_ws(struct work_struct *work) > > up(&gc->gc_sem); > > - gc_rq->data = vmalloc(gc_rq->nr_secs * geo->sec_size); > + gc_rq->data = vmalloc(gc_rq->nr_secs * geo->csecs); > if (!gc_rq->data) { > pr_err("pblk: could not GC line:%d (%d/%d)\n", > line->id, *line->vsc, gc_rq->nr_secs); > diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c > index 25fc70ca07f7..9b5ee05c3028 100644 > --- a/drivers/lightnvm/pblk-init.c > +++ b/drivers/lightnvm/pblk-init.c > @@ -146,7 +146,7 @@ static int pblk_rwb_init(struct pblk *pblk) > return -ENOMEM; > > power_size = get_count_order(nr_entries); > - power_seg_sz = get_count_order(geo->sec_size); > + power_seg_sz = get_count_order(geo->csecs); > > return pblk_rb_init(&pblk->rwb, entries, power_size, power_seg_sz); > } > @@ -154,11 +154,11 @@ static int pblk_rwb_init(struct pblk *pblk) > /* Minimum pages needed within a lun */ > #define ADDR_POOL_SIZE 64 > > -static int pblk_set_ppaf(struct pblk *pblk) > +static int pblk_set_addrf_12(struct nvm_geo *geo, > + struct nvm_addr_format_12 *dst) > { > - struct nvm_tgt_dev *dev = pblk->dev; > - struct nvm_geo *geo = &dev->geo; > - struct nvm_addr_format ppaf = geo->ppaf; > + struct nvm_addr_format_12 *src = > + (struct nvm_addr_format_12 *)&geo->addrf; > int power_len; > > /* Re-calculate channel and lun format to 
adapt to configuration */ > @@ -167,34 +167,50 @@ static int pblk_set_ppaf(struct pblk *pblk) > pr_err("pblk: supports only power-of-two channel config.\n"); > return -EINVAL; > } > - ppaf.ch_len = power_len; > + dst->ch_len = power_len; > > power_len = get_count_order(geo->nr_luns); > if (1 << power_len != geo->nr_luns) { > pr_err("pblk: supports only power-of-two LUN config.\n"); > return -EINVAL; > } > - ppaf.lun_len = power_len; > + dst->lun_len = power_len; > > - pblk->ppaf.sec_offset = 0; > - pblk->ppaf.pln_offset = ppaf.sect_len; > - pblk->ppaf.ch_offset = pblk->ppaf.pln_offset + ppaf.pln_len; > - pblk->ppaf.lun_offset = pblk->ppaf.ch_offset + ppaf.ch_len; > - pblk->ppaf.pg_offset = pblk->ppaf.lun_offset + ppaf.lun_len; > - pblk->ppaf.blk_offset = pblk->ppaf.pg_offset + ppaf.pg_len; > - pblk->ppaf.sec_mask = (1ULL << ppaf.sect_len) - 1; > - pblk->ppaf.pln_mask = ((1ULL << ppaf.pln_len) - 1) << > - pblk->ppaf.pln_offset; > - pblk->ppaf.ch_mask = ((1ULL << ppaf.ch_len) - 1) << > - pblk->ppaf.ch_offset; > - pblk->ppaf.lun_mask = ((1ULL << ppaf.lun_len) - 1) << > - pblk->ppaf.lun_offset; > - pblk->ppaf.pg_mask = ((1ULL << ppaf.pg_len) - 1) << > - pblk->ppaf.pg_offset; > - pblk->ppaf.blk_mask = ((1ULL << ppaf.blk_len) - 1) << > - pblk->ppaf.blk_offset; > + dst->blk_len = src->blk_len; > + dst->pg_len = src->pg_len; > + dst->pln_len = src->pln_len; > + dst->sect_len = src->sect_len; > > - pblk->ppaf_bitsize = pblk->ppaf.blk_offset + ppaf.blk_len; > + dst->sect_offset = 0; > + dst->pln_offset = dst->sect_len; > + dst->ch_offset = dst->pln_offset + dst->pln_len; > + dst->lun_offset = dst->ch_offset + dst->ch_len; > + dst->pg_offset = dst->lun_offset + dst->lun_len; > + dst->blk_offset = dst->pg_offset + dst->pg_len; > + > + dst->sec_mask = ((1ULL << dst->sect_len) - 1) << dst->sect_offset; > + dst->pln_mask = ((1ULL << dst->pln_len) - 1) << dst->pln_offset; > + dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset; > + dst->lun_mask = ((1ULL << dst->lun_len) 
- 1) << dst->lun_offset; > + dst->pg_mask = ((1ULL << dst->pg_len) - 1) << dst->pg_offset; > + dst->blk_mask = ((1ULL << dst->blk_len) - 1) << dst->blk_offset; > + > + return dst->blk_offset + src->blk_len; > +} > + > +static int pblk_set_ppaf(struct pblk *pblk) > +{ > + struct nvm_tgt_dev *dev = pblk->dev; > + struct nvm_geo *geo = &dev->geo; > + int mod; > + > + div_u64_rem(geo->clba, pblk->min_write_pgs, &mod); > + if (mod) { > + pr_err("pblk: bad configuration of sectors/pages\n"); > + return -EINVAL; > + } > + > + pblk->ppaf_bitsize = pblk_set_addrf_12(geo, (void *)&pblk->ppaf); > > return 0; > } > @@ -253,8 +269,7 @@ static int pblk_core_init(struct pblk *pblk) > struct nvm_tgt_dev *dev = pblk->dev; > struct nvm_geo *geo = &dev->geo; > > - pblk->pgs_in_buffer = NVM_MEM_PAGE_WRITE * geo->sec_per_pg * > - geo->nr_planes * geo->all_luns; > + pblk->pgs_in_buffer = geo->mw_cunits * geo->all_luns; > > if (pblk_init_global_caches(pblk)) > return -ENOMEM; > @@ -552,18 +567,18 @@ static unsigned int calc_emeta_len(struct pblk *pblk) > /* Round to sector size so that lba_list starts on its own sector */ > lm->emeta_sec[1] = DIV_ROUND_UP( > sizeof(struct line_emeta) + lm->blk_bitmap_len + > - sizeof(struct wa_counters), geo->sec_size); > - lm->emeta_len[1] = lm->emeta_sec[1] * geo->sec_size; > + sizeof(struct wa_counters), geo->csecs); > + lm->emeta_len[1] = lm->emeta_sec[1] * geo->csecs; > > /* Round to sector size so that vsc_list starts on its own sector */ > lm->dsec_per_line = lm->sec_per_line - lm->emeta_sec[0]; > lm->emeta_sec[2] = DIV_ROUND_UP(lm->dsec_per_line * sizeof(u64), > - geo->sec_size); > - lm->emeta_len[2] = lm->emeta_sec[2] * geo->sec_size; > + geo->csecs); > + lm->emeta_len[2] = lm->emeta_sec[2] * geo->csecs; > > lm->emeta_sec[3] = DIV_ROUND_UP(l_mg->nr_lines * sizeof(u32), > - geo->sec_size); > - lm->emeta_len[3] = lm->emeta_sec[3] * geo->sec_size; > + geo->csecs); > + lm->emeta_len[3] = lm->emeta_sec[3] * geo->csecs; > > lm->vsc_list_len = 
l_mg->nr_lines * sizeof(u32); > > @@ -594,13 +609,13 @@ static void pblk_set_provision(struct pblk *pblk, long nr_free_blks) > * on user capacity consider only provisioned blocks > */ > pblk->rl.total_blocks = nr_free_blks; > - pblk->rl.nr_secs = nr_free_blks * geo->sec_per_chk; > + pblk->rl.nr_secs = nr_free_blks * geo->clba; > > /* Consider sectors used for metadata */ > sec_meta = (lm->smeta_sec + lm->emeta_sec[0]) * l_mg->nr_free_lines; > - blk_meta = DIV_ROUND_UP(sec_meta, geo->sec_per_chk); > + blk_meta = DIV_ROUND_UP(sec_meta, geo->clba); > > - pblk->capacity = (provisioned - blk_meta) * geo->sec_per_chk; > + pblk->capacity = (provisioned - blk_meta) * geo->clba; > > atomic_set(&pblk->rl.free_blocks, nr_free_blks); > atomic_set(&pblk->rl.free_user_blocks, nr_free_blks); > @@ -711,10 +726,10 @@ static int pblk_lines_init(struct pblk *pblk) > void *chunk_log; > unsigned int smeta_len, emeta_len; > long nr_bad_blks = 0, nr_free_blks = 0; > - int bb_distance, max_write_ppas, mod; > + int bb_distance, max_write_ppas; > int i, ret; > > - pblk->min_write_pgs = geo->sec_per_pl * (geo->sec_size / PAGE_SIZE); > + pblk->min_write_pgs = geo->ws_opt * (geo->csecs / PAGE_SIZE); > max_write_ppas = pblk->min_write_pgs * geo->all_luns; > pblk->max_write_pgs = min_t(int, max_write_ppas, NVM_MAX_VLBA); > pblk_set_sec_per_write(pblk, pblk->min_write_pgs); > @@ -725,19 +740,13 @@ static int pblk_lines_init(struct pblk *pblk) > return -EINVAL; > } > > - div_u64_rem(geo->sec_per_chk, pblk->min_write_pgs, &mod); > - if (mod) { > - pr_err("pblk: bad configuration of sectors/pages\n"); > - return -EINVAL; > - } > - > l_mg->nr_lines = geo->nr_chks; > l_mg->log_line = l_mg->data_line = NULL; > l_mg->l_seq_nr = l_mg->d_seq_nr = 0; > l_mg->nr_free_lines = 0; > bitmap_zero(&l_mg->meta_bitmap, PBLK_DATA_LINES); > > - lm->sec_per_line = geo->sec_per_chk * geo->all_luns; > + lm->sec_per_line = geo->clba * geo->all_luns; > lm->blk_per_line = geo->all_luns; > lm->blk_bitmap_len = 
BITS_TO_LONGS(geo->all_luns) * sizeof(long); > lm->sec_bitmap_len = BITS_TO_LONGS(lm->sec_per_line) * sizeof(long); > @@ -751,8 +760,8 @@ static int pblk_lines_init(struct pblk *pblk) > */ > i = 1; > add_smeta_page: > - lm->smeta_sec = i * geo->sec_per_pl; > - lm->smeta_len = lm->smeta_sec * geo->sec_size; > + lm->smeta_sec = i * geo->ws_opt; > + lm->smeta_len = lm->smeta_sec * geo->csecs; > > smeta_len = sizeof(struct line_smeta) + lm->lun_bitmap_len; > if (smeta_len > lm->smeta_len) { > @@ -765,8 +774,8 @@ static int pblk_lines_init(struct pblk *pblk) > */ > i = 1; > add_emeta_page: > - lm->emeta_sec[0] = i * geo->sec_per_pl; > - lm->emeta_len[0] = lm->emeta_sec[0] * geo->sec_size; > + lm->emeta_sec[0] = i * geo->ws_opt; > + lm->emeta_len[0] = lm->emeta_sec[0] * geo->csecs; > > emeta_len = calc_emeta_len(pblk); > if (emeta_len > lm->emeta_len[0]) { > @@ -779,7 +788,7 @@ static int pblk_lines_init(struct pblk *pblk) > lm->min_blk_line = 1; > if (geo->all_luns > 1) > lm->min_blk_line += DIV_ROUND_UP(lm->smeta_sec + > - lm->emeta_sec[0], geo->sec_per_chk); > + lm->emeta_sec[0], geo->clba); > > if (lm->min_blk_line > lm->blk_per_line) { > pr_err("pblk: config. not supported. Min. LUN in line:%d\n", > @@ -803,9 +812,9 @@ static int pblk_lines_init(struct pblk *pblk) > goto fail_free_bb_template; > } > > - bb_distance = (geo->all_luns) * geo->sec_per_pl; > + bb_distance = (geo->all_luns) * geo->ws_opt; > for (i = 0; i < lm->sec_per_line; i += bb_distance) > - bitmap_set(l_mg->bb_template, i, geo->sec_per_pl); > + bitmap_set(l_mg->bb_template, i, geo->ws_opt); > > INIT_LIST_HEAD(&l_mg->free_list); > INIT_LIST_HEAD(&l_mg->corrupt_list); > @@ -982,9 +991,9 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk, > struct pblk *pblk; > int ret; > > - if (dev->identity.dom & NVM_RSP_L2P) { > + if (dev->geo.dom & NVM_RSP_L2P) { > pr_err("pblk: host-side L2P table not supported. 
(%x)\n", > - dev->identity.dom); > + dev->geo.dom); > return ERR_PTR(-EINVAL); > } > > @@ -1092,7 +1101,7 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk, > > blk_queue_write_cache(tqueue, true, false); > > - tqueue->limits.discard_granularity = geo->sec_per_chk * geo->sec_size; > + tqueue->limits.discard_granularity = geo->clba * geo->csecs; > tqueue->limits.discard_alignment = 0; > blk_queue_max_discard_sectors(tqueue, UINT_MAX >> 9); > queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, tqueue); > diff --git a/drivers/lightnvm/pblk-read.c b/drivers/lightnvm/pblk-read.c > index 2f761283f43e..9eee10f69df0 100644 > --- a/drivers/lightnvm/pblk-read.c > +++ b/drivers/lightnvm/pblk-read.c > @@ -563,7 +563,7 @@ int pblk_submit_read_gc(struct pblk *pblk, struct pblk_gc_rq *gc_rq) > if (!(gc_rq->secs_to_gc)) > goto out; > > - data_len = (gc_rq->secs_to_gc) * geo->sec_size; > + data_len = (gc_rq->secs_to_gc) * geo->csecs; > bio = pblk_bio_map_addr(pblk, gc_rq->data, gc_rq->secs_to_gc, data_len, > PBLK_VMALLOC_META, GFP_KERNEL); > if (IS_ERR(bio)) { > diff --git a/drivers/lightnvm/pblk-recovery.c b/drivers/lightnvm/pblk-recovery.c > index aaab9a5c17cc..26356429dc72 100644 > --- a/drivers/lightnvm/pblk-recovery.c > +++ b/drivers/lightnvm/pblk-recovery.c > @@ -184,7 +184,7 @@ static int pblk_calc_sec_in_line(struct pblk *pblk, struct pblk_line *line) > int nr_bb = bitmap_weight(line->blk_bitmap, lm->blk_per_line); > > return lm->sec_per_line - lm->smeta_sec - lm->emeta_sec[0] - > - nr_bb * geo->sec_per_chk; > + nr_bb * geo->clba; > } > > struct pblk_recov_alloc { > @@ -232,7 +232,7 @@ static int pblk_recov_read_oob(struct pblk *pblk, struct pblk_line *line, > rq_ppas = pblk_calc_secs(pblk, left_ppas, 0); > if (!rq_ppas) > rq_ppas = pblk->min_write_pgs; > - rq_len = rq_ppas * geo->sec_size; > + rq_len = rq_ppas * geo->csecs; > > bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL); > if (IS_ERR(bio)) > @@ -351,7 +351,7 @@ static int pblk_recov_pad_oob(struct 
pblk *pblk, struct pblk_line *line, > if (!pad_rq) > return -ENOMEM; > > - data = vzalloc(pblk->max_write_pgs * geo->sec_size); > + data = vzalloc(pblk->max_write_pgs * geo->csecs); > if (!data) { > ret = -ENOMEM; > goto free_rq; > @@ -368,7 +368,7 @@ static int pblk_recov_pad_oob(struct pblk *pblk, struct pblk_line *line, > goto fail_free_pad; > } > > - rq_len = rq_ppas * geo->sec_size; > + rq_len = rq_ppas * geo->csecs; > > meta_list = nvm_dev_dma_alloc(dev->parent, GFP_KERNEL, &dma_meta_list); > if (!meta_list) { > @@ -509,7 +509,7 @@ static int pblk_recov_scan_all_oob(struct pblk *pblk, struct pblk_line *line, > rq_ppas = pblk_calc_secs(pblk, left_ppas, 0); > if (!rq_ppas) > rq_ppas = pblk->min_write_pgs; > - rq_len = rq_ppas * geo->sec_size; > + rq_len = rq_ppas * geo->csecs; > > bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL); > if (IS_ERR(bio)) > @@ -640,7 +640,7 @@ static int pblk_recov_scan_oob(struct pblk *pblk, struct pblk_line *line, > rq_ppas = pblk_calc_secs(pblk, left_ppas, 0); > if (!rq_ppas) > rq_ppas = pblk->min_write_pgs; > - rq_len = rq_ppas * geo->sec_size; > + rq_len = rq_ppas * geo->csecs; > > bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL); > if (IS_ERR(bio)) > @@ -745,7 +745,7 @@ static int pblk_recov_l2p_from_oob(struct pblk *pblk, struct pblk_line *line) > ppa_list = (void *)(meta_list) + pblk_dma_meta_size; > dma_ppa_list = dma_meta_list + pblk_dma_meta_size; > > - data = kcalloc(pblk->max_write_pgs, geo->sec_size, GFP_KERNEL); > + data = kcalloc(pblk->max_write_pgs, geo->csecs, GFP_KERNEL); > if (!data) { > ret = -ENOMEM; > goto free_meta_list; > diff --git a/drivers/lightnvm/pblk-rl.c b/drivers/lightnvm/pblk-rl.c > index 0d457b162f23..883a7113b19d 100644 > --- a/drivers/lightnvm/pblk-rl.c > +++ b/drivers/lightnvm/pblk-rl.c > @@ -200,7 +200,7 @@ void pblk_rl_init(struct pblk_rl *rl, int budget) > > /* Consider sectors used for metadata */ > sec_meta = (lm->smeta_sec + lm->emeta_sec[0]) * l_mg->nr_free_lines; > - blk_meta = 
DIV_ROUND_UP(sec_meta, geo->sec_per_chk); > + blk_meta = DIV_ROUND_UP(sec_meta, geo->clba); > > rl->high = pblk->op_blks - blk_meta - lm->blk_per_line; > rl->high_pw = get_count_order(rl->high); > diff --git a/drivers/lightnvm/pblk-sysfs.c b/drivers/lightnvm/pblk-sysfs.c > index 1680ce0a828d..33199c6af267 100644 > --- a/drivers/lightnvm/pblk-sysfs.c > +++ b/drivers/lightnvm/pblk-sysfs.c > @@ -113,26 +113,31 @@ static ssize_t pblk_sysfs_ppaf(struct pblk *pblk, char *page) > { > struct nvm_tgt_dev *dev = pblk->dev; > struct nvm_geo *geo = &dev->geo; > + struct nvm_addr_format_12 *ppaf; > + struct nvm_addr_format_12 *geo_ppaf; > ssize_t sz = 0; > > - sz = snprintf(page, PAGE_SIZE - sz, > - "g:(b:%d)blk:%d/%d,pg:%d/%d,lun:%d/%d,ch:%d/%d,pl:%d/%d,sec:%d/%d\n", > - pblk->ppaf_bitsize, > - pblk->ppaf.blk_offset, geo->ppaf.blk_len, > - pblk->ppaf.pg_offset, geo->ppaf.pg_len, > - pblk->ppaf.lun_offset, geo->ppaf.lun_len, > - pblk->ppaf.ch_offset, geo->ppaf.ch_len, > - pblk->ppaf.pln_offset, geo->ppaf.pln_len, > - pblk->ppaf.sec_offset, geo->ppaf.sect_len); > + ppaf = (struct nvm_addr_format_12 *)&pblk->ppaf; > + geo_ppaf = (struct nvm_addr_format_12 *)&geo->addrf; > + > + sz = snprintf(page, PAGE_SIZE, > + "pblk:(s:%d)ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n", > + pblk->ppaf_bitsize, > + ppaf->ch_offset, ppaf->ch_len, > + ppaf->lun_offset, ppaf->lun_len, > + ppaf->blk_offset, ppaf->blk_len, > + ppaf->pg_offset, ppaf->pg_len, > + ppaf->pln_offset, ppaf->pln_len, > + ppaf->sect_offset, ppaf->sect_len); Is it on purpose here that the code breaks user-space by changing the sysfs print format? 
> > sz += snprintf(page + sz, PAGE_SIZE - sz, > - "d:blk:%d/%d,pg:%d/%d,lun:%d/%d,ch:%d/%d,pl:%d/%d,sec:%d/%d\n", > - geo->ppaf.blk_offset, geo->ppaf.blk_len, > - geo->ppaf.pg_offset, geo->ppaf.pg_len, > - geo->ppaf.lun_offset, geo->ppaf.lun_len, > - geo->ppaf.ch_offset, geo->ppaf.ch_len, > - geo->ppaf.pln_offset, geo->ppaf.pln_len, > - geo->ppaf.sect_offset, geo->ppaf.sect_len); > + "device:ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n", > + geo_ppaf->ch_offset, geo_ppaf->ch_len, > + geo_ppaf->lun_offset, geo_ppaf->lun_len, > + geo_ppaf->blk_offset, geo_ppaf->blk_len, > + geo_ppaf->pg_offset, geo_ppaf->pg_len, > + geo_ppaf->pln_offset, geo_ppaf->pln_len, > + geo_ppaf->sect_offset, geo_ppaf->sect_len); Similarly here. > > return sz; > } > @@ -288,7 +293,7 @@ static ssize_t pblk_sysfs_lines_info(struct pblk *pblk, char *page) > "blk_line:%d, sec_line:%d, sec_blk:%d\n", > lm->blk_per_line, > lm->sec_per_line, > - geo->sec_per_chk); > + geo->clba); > > return sz; > } > diff --git a/drivers/lightnvm/pblk-write.c b/drivers/lightnvm/pblk-write.c > index aae86ed60b98..3e6f1ebd743a 100644 > --- a/drivers/lightnvm/pblk-write.c > +++ b/drivers/lightnvm/pblk-write.c > @@ -333,7 +333,7 @@ int pblk_submit_meta_io(struct pblk *pblk, struct pblk_line *meta_line) > m_ctx = nvm_rq_to_pdu(rqd); > m_ctx->private = meta_line; > > - rq_len = rq_ppas * geo->sec_size; > + rq_len = rq_ppas * geo->csecs; > data = ((void *)emeta->buf) + emeta->mem; > > bio = pblk_bio_map_addr(pblk, data, rq_ppas, rq_len, > diff --git a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk.h > index f0309d8172c0..b29c1e6698aa 100644 > --- a/drivers/lightnvm/pblk.h > +++ b/drivers/lightnvm/pblk.h > @@ -551,21 +551,6 @@ struct pblk_line_meta { > unsigned int meta_distance; /* Distance between data and metadata */ > }; > > -struct pblk_addr_format { > - u64 ch_mask; > - u64 lun_mask; > - u64 pln_mask; > - u64 blk_mask; > - u64 pg_mask; > - u64 sec_mask; > - u8 ch_offset; > - u8 lun_offset; > - u8
pln_offset; > - u8 blk_offset; > - u8 pg_offset; > - u8 sec_offset; > -}; > - > enum { > PBLK_STATE_RUNNING = 0, > PBLK_STATE_STOPPING = 1, > @@ -585,8 +570,8 @@ struct pblk { > struct pblk_line_mgmt l_mg; /* Line management */ > struct pblk_line_meta lm; /* Line metadata */ > > + struct nvm_addr_format ppaf; > int ppaf_bitsize; > - struct pblk_addr_format ppaf; > > struct pblk_rb rwb; > > @@ -941,14 +926,12 @@ static inline int pblk_line_vsc(struct pblk_line *line) > return le32_to_cpu(*line->vsc); > } > > -#define NVM_MEM_PAGE_WRITE (8) > - > static inline int pblk_pad_distance(struct pblk *pblk) > { > struct nvm_tgt_dev *dev = pblk->dev; > struct nvm_geo *geo = &dev->geo; > > - return NVM_MEM_PAGE_WRITE * geo->all_luns * geo->sec_per_pl; > + return geo->mw_cunits * geo->all_luns * geo->ws_opt; > } > > static inline int pblk_ppa_to_line(struct ppa_addr p) > @@ -964,15 +947,17 @@ static inline int pblk_ppa_to_pos(struct nvm_geo *geo, struct ppa_addr p) > static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr, > u64 line_id) > { > + struct nvm_addr_format_12 *ppaf = > + (struct nvm_addr_format_12 *)&pblk->ppaf; > struct ppa_addr ppa; > > ppa.ppa = 0; > ppa.g.blk = line_id; > - ppa.g.pg = (paddr & pblk->ppaf.pg_mask) >> pblk->ppaf.pg_offset; > - ppa.g.lun = (paddr & pblk->ppaf.lun_mask) >> pblk->ppaf.lun_offset; > - ppa.g.ch = (paddr & pblk->ppaf.ch_mask) >> pblk->ppaf.ch_offset; > - ppa.g.pl = (paddr & pblk->ppaf.pln_mask) >> pblk->ppaf.pln_offset; > - ppa.g.sec = (paddr & pblk->ppaf.sec_mask) >> pblk->ppaf.sec_offset; > + ppa.g.pg = (paddr & ppaf->pg_mask) >> ppaf->pg_offset; > + ppa.g.lun = (paddr & ppaf->lun_mask) >> ppaf->lun_offset; > + ppa.g.ch = (paddr & ppaf->ch_mask) >> ppaf->ch_offset; > + ppa.g.pl = (paddr & ppaf->pln_mask) >> ppaf->pln_offset; > + ppa.g.sec = (paddr & ppaf->sec_mask) >> ppaf->sect_offset; > > return ppa; > } > @@ -980,13 +965,15 @@ static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr, > static 
inline u64 pblk_dev_ppa_to_line_addr(struct pblk *pblk, > struct ppa_addr p) > { > + struct nvm_addr_format_12 *ppaf = > + (struct nvm_addr_format_12 *)&pblk->ppaf; > u64 paddr; > > - paddr = (u64)p.g.pg << pblk->ppaf.pg_offset; > - paddr |= (u64)p.g.lun << pblk->ppaf.lun_offset; > - paddr |= (u64)p.g.ch << pblk->ppaf.ch_offset; > - paddr |= (u64)p.g.pl << pblk->ppaf.pln_offset; > - paddr |= (u64)p.g.sec << pblk->ppaf.sec_offset; > + paddr = (u64)p.g.ch << ppaf->ch_offset; > + paddr |= (u64)p.g.lun << ppaf->lun_offset; > + paddr |= (u64)p.g.pg << ppaf->pg_offset; > + paddr |= (u64)p.g.pl << ppaf->pln_offset; > + paddr |= (u64)p.g.sec << ppaf->sect_offset; > > return paddr; > } > @@ -1003,18 +990,15 @@ static inline struct ppa_addr pblk_ppa32_to_ppa64(struct pblk *pblk, u32 ppa32) > ppa64.c.line = ppa32 & ((~0U) >> 1); > ppa64.c.is_cached = 1; > } else { > - ppa64.g.blk = (ppa32 & pblk->ppaf.blk_mask) >> > - pblk->ppaf.blk_offset; > - ppa64.g.pg = (ppa32 & pblk->ppaf.pg_mask) >> > - pblk->ppaf.pg_offset; > - ppa64.g.lun = (ppa32 & pblk->ppaf.lun_mask) >> > - pblk->ppaf.lun_offset; > - ppa64.g.ch = (ppa32 & pblk->ppaf.ch_mask) >> > - pblk->ppaf.ch_offset; > - ppa64.g.pl = (ppa32 & pblk->ppaf.pln_mask) >> > - pblk->ppaf.pln_offset; > - ppa64.g.sec = (ppa32 & pblk->ppaf.sec_mask) >> > - pblk->ppaf.sec_offset; > + struct nvm_addr_format_12 *ppaf = > + (struct nvm_addr_format_12 *)&pblk->ppaf; > + > + ppa64.g.ch = (ppa32 & ppaf->ch_mask) >> ppaf->ch_offset; > + ppa64.g.lun = (ppa32 & ppaf->lun_mask) >> ppaf->lun_offset; > + ppa64.g.blk = (ppa32 & ppaf->blk_mask) >> ppaf->blk_offset; > + ppa64.g.pg = (ppa32 & ppaf->pg_mask) >> ppaf->pg_offset; > + ppa64.g.pl = (ppa32 & ppaf->pln_mask) >> ppaf->pln_offset; > + ppa64.g.sec = (ppa32 & ppaf->sec_mask) >> ppaf->sect_offset; > } > > return ppa64; > @@ -1030,12 +1014,15 @@ static inline u32 pblk_ppa64_to_ppa32(struct pblk *pblk, struct ppa_addr ppa64) > ppa32 |= ppa64.c.line; > ppa32 |= 1U << 31; > } else { > - ppa32 |= 
ppa64.g.blk << pblk->ppaf.blk_offset; > - ppa32 |= ppa64.g.pg << pblk->ppaf.pg_offset; > - ppa32 |= ppa64.g.lun << pblk->ppaf.lun_offset; > - ppa32 |= ppa64.g.ch << pblk->ppaf.ch_offset; > - ppa32 |= ppa64.g.pl << pblk->ppaf.pln_offset; > - ppa32 |= ppa64.g.sec << pblk->ppaf.sec_offset; > + struct nvm_addr_format_12 *ppaf = > + (struct nvm_addr_format_12 *)&pblk->ppaf; > + > + ppa32 |= ppa64.g.ch << ppaf->ch_offset; > + ppa32 |= ppa64.g.lun << ppaf->lun_offset; > + ppa32 |= ppa64.g.blk << ppaf->blk_offset; > + ppa32 |= ppa64.g.pg << ppaf->pg_offset; > + ppa32 |= ppa64.g.pl << ppaf->pln_offset; > + ppa32 |= ppa64.g.sec << ppaf->sect_offset; > } > > return ppa32; > @@ -1229,10 +1216,10 @@ static inline int pblk_boundary_ppa_checks(struct nvm_tgt_dev *tgt_dev, > if (!ppa->c.is_cached && > ppa->g.ch < geo->nr_chnls && > ppa->g.lun < geo->nr_luns && > - ppa->g.pl < geo->nr_planes && > + ppa->g.pl < geo->num_pln && > ppa->g.blk < geo->nr_chks && > - ppa->g.pg < geo->ws_per_chk && > - ppa->g.sec < geo->sec_per_pg) > + ppa->g.pg < geo->num_pg && > + ppa->g.sec < geo->ws_min) > continue; > > print_ppa(ppa, "boundary", i); > diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c > index 839c0b96466a..e276ace28c64 100644 > --- a/drivers/nvme/host/lightnvm.c > +++ b/drivers/nvme/host/lightnvm.c > @@ -152,8 +152,8 @@ struct nvme_nvm_id12_addrf { > __u8 blk_len; > __u8 pg_offset; > __u8 pg_len; > - __u8 sect_offset; > - __u8 sect_len; > + __u8 sec_offset; > + __u8 sec_len; > __u8 res[4]; > } __packed; > > @@ -254,106 +254,161 @@ static inline void _nvme_nvm_check_size(void) > BUILD_BUG_ON(sizeof(struct nvme_nvm_id20) != NVME_IDENTIFY_DATA_SIZE); > } > > -static int init_grp(struct nvm_id *nvm_id, struct nvme_nvm_id12 *id12) > +static void nvme_nvm_set_addr_12(struct nvm_addr_format_12 *dst, > + struct nvme_nvm_id12_addrf *src) > +{ > + dst->ch_len = src->ch_len; > + dst->lun_len = src->lun_len; > + dst->blk_len = src->blk_len; > + dst->pg_len = src->pg_len; > + 
dst->pln_len = src->pln_len; > + dst->sect_len = src->sec_len; > + > + dst->ch_offset = src->ch_offset; > + dst->lun_offset = src->lun_offset; > + dst->blk_offset = src->blk_offset; > + dst->pg_offset = src->pg_offset; > + dst->pln_offset = src->pln_offset; > + dst->sect_offset = src->sec_offset; > + > + dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset; > + dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset; > + dst->blk_mask = ((1ULL << dst->blk_len) - 1) << dst->blk_offset; > + dst->pg_mask = ((1ULL << dst->pg_len) - 1) << dst->pg_offset; > + dst->pln_mask = ((1ULL << dst->pln_len) - 1) << dst->pln_offset; > + dst->sec_mask = ((1ULL << dst->sect_len) - 1) << dst->sect_offset; > +} > + > +static int nvme_nvm_setup_12(struct nvme_nvm_id12 *id, > + struct nvm_geo *geo) > { > struct nvme_nvm_id12_grp *src; > int sec_per_pg, sec_per_pl, pg_per_blk; > > - if (id12->cgrps != 1) > + if (id->cgrps != 1) > return -EINVAL; > > - src = &id12->grp; > + src = &id->grp; > > - nvm_id->mtype = src->mtype; > - nvm_id->fmtype = src->fmtype; > + if (src->mtype != 0) { > + pr_err("nvm: memory type not supported\n"); > + return -EINVAL; > + } > + > + geo->ver_id = id->ver_id; > + > + geo->nr_chnls = src->num_ch; > + geo->nr_luns = src->num_lun; > + geo->all_luns = geo->nr_chnls * geo->nr_luns; > > - nvm_id->num_ch = src->num_ch; > - nvm_id->num_lun = src->num_lun; > + geo->nr_chks = le16_to_cpu(src->num_chk); > > - nvm_id->num_chk = le16_to_cpu(src->num_chk); > - nvm_id->csecs = le16_to_cpu(src->csecs); > - nvm_id->sos = le16_to_cpu(src->sos); > + geo->csecs = le16_to_cpu(src->csecs); > + geo->sos = le16_to_cpu(src->sos); > > pg_per_blk = le16_to_cpu(src->num_pg); > - sec_per_pg = le16_to_cpu(src->fpg_sz) / nvm_id->csecs; > + sec_per_pg = le16_to_cpu(src->fpg_sz) / geo->csecs; > sec_per_pl = sec_per_pg * src->num_pln; > - nvm_id->clba = sec_per_pl * pg_per_blk; > - nvm_id->ws_per_chk = pg_per_blk; > - > - nvm_id->mpos = le32_to_cpu(src->mpos); > - nvm_id->cpar 
= le16_to_cpu(src->cpar); > - nvm_id->mccap = le32_to_cpu(src->mccap); > - > - nvm_id->ws_opt = nvm_id->ws_min = sec_per_pg; > - nvm_id->ws_seq = NVM_IO_SNGL_ACCESS; > - > - if (nvm_id->mpos & 0x020202) { > - nvm_id->ws_seq = NVM_IO_DUAL_ACCESS; > - nvm_id->ws_opt <<= 1; > - } else if (nvm_id->mpos & 0x040404) { > - nvm_id->ws_seq = NVM_IO_QUAD_ACCESS; > - nvm_id->ws_opt <<= 2; > + geo->clba = sec_per_pl * pg_per_blk; > + > + geo->all_chunks = geo->all_luns * geo->nr_chks; > + geo->total_secs = geo->clba * geo->all_chunks; > + > + geo->ws_min = sec_per_pg; > + geo->ws_opt = sec_per_pg; > + geo->mw_cunits = geo->ws_opt << 3; /* default to MLC safe values */ > + > + geo->mccap = le32_to_cpu(src->mccap); > + > + geo->trdt = le32_to_cpu(src->trdt); > + geo->trdm = le32_to_cpu(src->trdm); > + geo->tprt = le32_to_cpu(src->tprt); > + geo->tprm = le32_to_cpu(src->tprm); > + geo->tbet = le32_to_cpu(src->tbet); > + geo->tbem = le32_to_cpu(src->tbem); > + > + /* 1.2 compatibility */ > + geo->vmnt = id->vmnt; > + geo->cap = le32_to_cpu(id->cap); > + geo->dom = le32_to_cpu(id->dom); > + > + geo->mtype = src->mtype; > + geo->fmtype = src->fmtype; > + > + geo->cpar = le16_to_cpu(src->cpar); > + geo->mpos = le32_to_cpu(src->mpos); > + > + geo->plane_mode = NVM_PLANE_SINGLE; > + > + if (geo->mpos & 0x020202) { > + geo->plane_mode = NVM_PLANE_DOUBLE; > + geo->ws_opt <<= 1; > + } else if (geo->mpos & 0x040404) { > + geo->plane_mode = NVM_PLANE_QUAD; > + geo->ws_opt <<= 2; > } > > - nvm_id->trdt = le32_to_cpu(src->trdt); > - nvm_id->trdm = le32_to_cpu(src->trdm); > - nvm_id->tprt = le32_to_cpu(src->tprt); > - nvm_id->tprm = le32_to_cpu(src->tprm); > - nvm_id->tbet = le32_to_cpu(src->tbet); > - nvm_id->tbem = le32_to_cpu(src->tbem); > - > - /* 1.2 compatibility */ > - nvm_id->num_pln = src->num_pln; > - nvm_id->num_pg = le16_to_cpu(src->num_pg); > - nvm_id->fpg_sz = le16_to_cpu(src->fpg_sz); > + geo->num_pln = src->num_pln; > + geo->num_pg = le16_to_cpu(src->num_pg); > + geo->fpg_sz = 
le16_to_cpu(src->fpg_sz); > + > + nvme_nvm_set_addr_12((struct nvm_addr_format_12 *)&geo->addrf, > + &id->ppaf); > > return 0; > } > > -static int nvme_nvm_setup_12(struct nvm_dev *nvmdev, struct nvm_id *nvm_id, > - struct nvme_nvm_id12 *id) > +static void nvme_nvm_set_addr_20(struct nvm_addr_format *dst, > + struct nvme_nvm_id20_addrf *src) > { > - nvm_id->ver_id = id->ver_id; > - nvm_id->vmnt = id->vmnt; > - nvm_id->cap = le32_to_cpu(id->cap); > - nvm_id->dom = le32_to_cpu(id->dom); > - memcpy(&nvm_id->ppaf, &id->ppaf, > - sizeof(struct nvm_addr_format)); > - > - return init_grp(nvm_id, id); > + dst->ch_len = src->grp_len; > + dst->lun_len = src->pu_len; > + dst->chk_len = src->chk_len; > + dst->sec_len = src->lba_len; > + > + dst->sec_offset = 0; > + dst->chk_offset = dst->sec_len; > + dst->lun_offset = dst->chk_offset + dst->chk_len; > + dst->ch_offset = dst->lun_offset + dst->lun_len; > + > + dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset; > + dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset; > + dst->chk_mask = ((1ULL << dst->chk_len) - 1) << dst->chk_offset; > + dst->sec_mask = ((1ULL << dst->sec_len) - 1) << dst->sec_offset; > } > > -static int nvme_nvm_setup_20(struct nvm_dev *nvmdev, struct nvm_id *nvm_id, > - struct nvme_nvm_id20 *id) > +static int nvme_nvm_setup_20(struct nvme_nvm_id20 *id, > + struct nvm_geo *geo) > { > - nvm_id->ver_id = id->mjr; > + geo->ver_id = id->mjr; > + > + geo->nr_chnls = le16_to_cpu(id->num_grp); > + geo->nr_luns = le16_to_cpu(id->num_pu); > + geo->all_luns = geo->nr_chnls * geo->nr_luns; > > - nvm_id->num_ch = le16_to_cpu(id->num_grp); > - nvm_id->num_lun = le16_to_cpu(id->num_pu); > - nvm_id->num_chk = le32_to_cpu(id->num_chk); > - nvm_id->clba = le32_to_cpu(id->clba); > + geo->nr_chks = le32_to_cpu(id->num_chk); > + geo->clba = le32_to_cpu(id->clba); > > - nvm_id->ws_min = le32_to_cpu(id->ws_min); > - nvm_id->ws_opt = le32_to_cpu(id->ws_opt); > - nvm_id->mw_cunits = le32_to_cpu(id->mw_cunits); 
> + geo->all_chunks = geo->all_luns * geo->nr_chks; > + geo->total_secs = geo->clba * geo->all_chunks; > > - nvm_id->trdt = le32_to_cpu(id->trdt); > - nvm_id->trdm = le32_to_cpu(id->trdm); > - nvm_id->tprt = le32_to_cpu(id->twrt); > - nvm_id->tprm = le32_to_cpu(id->twrm); > - nvm_id->tbet = le32_to_cpu(id->tcrst); > - nvm_id->tbem = le32_to_cpu(id->tcrsm); > + geo->ws_min = le32_to_cpu(id->ws_min); > + geo->ws_opt = le32_to_cpu(id->ws_opt); > + geo->mw_cunits = le32_to_cpu(id->mw_cunits); > > - /* calculated values */ > - nvm_id->ws_per_chk = nvm_id->clba / nvm_id->ws_min; > + geo->trdt = le32_to_cpu(id->trdt); > + geo->trdm = le32_to_cpu(id->trdm); > + geo->tprt = le32_to_cpu(id->twrt); > + geo->tprm = le32_to_cpu(id->twrm); > + geo->tbet = le32_to_cpu(id->tcrst); > + geo->tbem = le32_to_cpu(id->tcrsm); > > - /* 1.2 compatibility */ > - nvm_id->ws_seq = NVM_IO_SNGL_ACCESS; > + nvme_nvm_set_addr_20(&geo->addrf, &id->lbaf); > > return 0; > } > > -static int nvme_nvm_identity(struct nvm_dev *nvmdev, struct nvm_id *nvm_id) > +static int nvme_nvm_identity(struct nvm_dev *nvmdev) > { > struct nvme_ns *ns = nvmdev->q->queuedata; > struct nvme_nvm_id12 *id; > @@ -380,18 +435,18 @@ static int nvme_nvm_identity(struct nvm_dev *nvmdev, struct nvm_id *nvm_id) > */ > switch (id->ver_id) { > case 1: > - ret = nvme_nvm_setup_12(nvmdev, nvm_id, id); > + ret = nvme_nvm_setup_12(id, &nvmdev->geo); > break; > case 2: > - ret = nvme_nvm_setup_20(nvmdev, nvm_id, > - (struct nvme_nvm_id20 *)id); > + ret = nvme_nvm_setup_20((struct nvme_nvm_id20 *)id, > + &nvmdev->geo); > break; > default: > - dev_err(ns->ctrl->device, > - "OCSSD revision not supported (%d)\n", > - nvm_id->ver_id); > + dev_err(ns->ctrl->device, "OCSSD revision not supported (%d)\n", > + id->ver_id); > ret = -EINVAL; > } > + > out: > kfree(id); > return ret; > @@ -406,7 +461,7 @@ static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa, > struct nvme_ctrl *ctrl = ns->ctrl; > struct nvme_nvm_command c = 
{}; > struct nvme_nvm_bb_tbl *bb_tbl; > - int nr_blks = geo->nr_chks * geo->plane_mode; > + int nr_blks = geo->nr_chks * geo->num_pln; > int tblsz = sizeof(struct nvme_nvm_bb_tbl) + nr_blks; > int ret = 0; > > @@ -447,7 +502,7 @@ static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa, > goto out; > } > > - memcpy(blks, bb_tbl->blk, geo->nr_chks * geo->plane_mode); > + memcpy(blks, bb_tbl->blk, geo->nr_chks * geo->num_pln); > out: > kfree(bb_tbl); > return ret; > @@ -815,9 +870,10 @@ int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg) > void nvme_nvm_update_nvm_info(struct nvme_ns *ns) > { > struct nvm_dev *ndev = ns->ndev; > + struct nvm_geo *geo = &ndev->geo; > > - ndev->identity.csecs = ndev->geo.sec_size = 1 << ns->lba_shift; > - ndev->identity.sos = ndev->geo.oob_size = ns->ms; > + geo->csecs = 1 << ns->lba_shift; > + geo->sos = ns->ms; > } > > int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node) > @@ -850,23 +906,22 @@ static ssize_t nvm_dev_attr_show(struct device *dev, > { > struct nvme_ns *ns = nvme_get_ns_from_dev(dev); > struct nvm_dev *ndev = ns->ndev; > - struct nvm_id *id; > + struct nvm_geo *geo = &ndev->geo; > struct attribute *attr; > > if (!ndev) > return 0; > > - id = &ndev->identity; > attr = &dattr->attr; > > if (strcmp(attr->name, "version") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->ver_id); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->ver_id); > } else if (strcmp(attr->name, "capabilities") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->cap); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->cap); > } else if (strcmp(attr->name, "read_typ") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->trdt); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->trdt); > } else if (strcmp(attr->name, "read_max") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->trdm); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->trdm); > } else { > return 
scnprintf(page, > PAGE_SIZE, > @@ -875,75 +930,79 @@ static ssize_t nvm_dev_attr_show(struct device *dev, > } > } > > +static ssize_t nvm_dev_attr_show_ppaf(struct nvm_addr_format_12 *ppaf, > + char *page) > +{ > + return scnprintf(page, PAGE_SIZE, > + "0x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x\n", > + ppaf->ch_offset, ppaf->ch_len, > + ppaf->lun_offset, ppaf->lun_len, > + ppaf->pln_offset, ppaf->pln_len, > + ppaf->blk_offset, ppaf->blk_len, > + ppaf->pg_offset, ppaf->pg_len, > + ppaf->sect_offset, ppaf->sect_len); > +} > + > static ssize_t nvm_dev_attr_show_12(struct device *dev, > struct device_attribute *dattr, char *page) > { > struct nvme_ns *ns = nvme_get_ns_from_dev(dev); > struct nvm_dev *ndev = ns->ndev; > - struct nvm_id *id; > + struct nvm_geo *geo = &ndev->geo; > struct attribute *attr; > > if (!ndev) > return 0; > > - id = &ndev->identity; > attr = &dattr->attr; > > if (strcmp(attr->name, "vendor_opcode") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->vmnt); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->vmnt); > } else if (strcmp(attr->name, "device_mode") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->dom); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->dom); > /* kept for compatibility */ > } else if (strcmp(attr->name, "media_manager") == 0) { > return scnprintf(page, PAGE_SIZE, "%s\n", "gennvm"); > } else if (strcmp(attr->name, "ppa_format") == 0) { > - return scnprintf(page, PAGE_SIZE, > - "0x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x\n", > - id->ppaf.ch_offset, id->ppaf.ch_len, > - id->ppaf.lun_offset, id->ppaf.lun_len, > - id->ppaf.pln_offset, id->ppaf.pln_len, > - id->ppaf.blk_offset, id->ppaf.blk_len, > - id->ppaf.pg_offset, id->ppaf.pg_len, > - id->ppaf.sect_offset, id->ppaf.sect_len); > + return nvm_dev_attr_show_ppaf((void *)&geo->addrf, page); Why does the code cast to void * here, and not to the address format data structure?
Have you thought about doing the cast directly here, instead of making a function for it? > } else if (strcmp(attr->name, "media_type") == 0) { /* u8 */ > - return scnprintf(page, PAGE_SIZE, "%u\n", id->mtype); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->mtype); > } else if (strcmp(attr->name, "flash_media_type") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->fmtype); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->fmtype); > } else if (strcmp(attr->name, "num_channels") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_ch); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chnls); > } else if (strcmp(attr->name, "num_luns") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_lun); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_luns); > } else if (strcmp(attr->name, "num_planes") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_pln); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->num_pln); > } else if (strcmp(attr->name, "num_blocks") == 0) { /* u16 */ > - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_chk); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chks); > } else if (strcmp(attr->name, "num_pages") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_pg); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->num_pg); > } else if (strcmp(attr->name, "page_size") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->fpg_sz); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->fpg_sz); > } else if (strcmp(attr->name, "hw_sector_size") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->csecs); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->csecs); > } else if (strcmp(attr->name, "oob_sector_size") == 0) {/* u32 */ > - return scnprintf(page, PAGE_SIZE, "%u\n", id->sos); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->sos); > } else if (strcmp(attr->name, "prog_typ") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprt); > + return scnprintf(page, 
PAGE_SIZE, "%u\n", geo->tprt); > } else if (strcmp(attr->name, "prog_max") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprm); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tprm); > } else if (strcmp(attr->name, "erase_typ") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbet); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tbet); > } else if (strcmp(attr->name, "erase_max") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbem); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tbem); > } else if (strcmp(attr->name, "multiplane_modes") == 0) { > - return scnprintf(page, PAGE_SIZE, "0x%08x\n", id->mpos); > + return scnprintf(page, PAGE_SIZE, "0x%08x\n", geo->mpos); > } else if (strcmp(attr->name, "media_capabilities") == 0) { > - return scnprintf(page, PAGE_SIZE, "0x%08x\n", id->mccap); > + return scnprintf(page, PAGE_SIZE, "0x%08x\n", geo->mccap); > } else if (strcmp(attr->name, "max_phys_secs") == 0) { > return scnprintf(page, PAGE_SIZE, "%u\n", NVM_MAX_VLBA); > } else { > - return scnprintf(page, > - PAGE_SIZE, > - "Unhandled attr(%s) in `nvm_dev_attr_show_12`\n", > - attr->name); > + return scnprintf(page, PAGE_SIZE, > + "Unhandled attr(%s) in `nvm_dev_attr_show_12`\n", > + attr->name); > } > } > > @@ -952,42 +1011,40 @@ static ssize_t nvm_dev_attr_show_20(struct device *dev, > { > struct nvme_ns *ns = nvme_get_ns_from_dev(dev); > struct nvm_dev *ndev = ns->ndev; > - struct nvm_id *id; > + struct nvm_geo *geo = &ndev->geo; > struct attribute *attr; > > if (!ndev) > return 0; > > - id = &ndev->identity; > attr = &dattr->attr; > > if (strcmp(attr->name, "groups") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_ch); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chnls); > } else if (strcmp(attr->name, "punits") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_lun); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_luns); > } else if (strcmp(attr->name, "chunks") == 0) { > - return 
scnprintf(page, PAGE_SIZE, "%u\n", id->num_chk); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chks); > } else if (strcmp(attr->name, "clba") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->clba); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->clba); > } else if (strcmp(attr->name, "ws_min") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->ws_min); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->ws_min); > } else if (strcmp(attr->name, "ws_opt") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->ws_opt); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->ws_opt); > } else if (strcmp(attr->name, "mw_cunits") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->mw_cunits); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->mw_cunits); > } else if (strcmp(attr->name, "write_typ") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprt); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tprt); > } else if (strcmp(attr->name, "write_max") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprm); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tprm); > } else if (strcmp(attr->name, "reset_typ") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbet); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tbet); > } else if (strcmp(attr->name, "reset_max") == 0) { > - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbem); > + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tbem); > } else { > - return scnprintf(page, > - PAGE_SIZE, > - "Unhandled attr(%s) in `nvm_dev_attr_show_20`\n", > - attr->name); > + return scnprintf(page, PAGE_SIZE, > + "Unhandled attr(%s) in `nvm_dev_attr_show_20`\n", > + attr->name); > } > } > > @@ -1106,10 +1163,13 @@ static const struct attribute_group nvm_dev_attr_group_20 = { > > int nvme_nvm_register_sysfs(struct nvme_ns *ns) > { > - if (!ns->ndev) > + struct nvm_dev *ndev = ns->ndev; > + struct nvm_geo *geo = &ndev->geo; > + > + if (!ndev) > return -EINVAL; > > - switch 
(ns->ndev->identity.ver_id) { > + switch (geo->ver_id) { > case 1: > return sysfs_create_group(&disk_to_dev(ns->disk)->kobj, > &nvm_dev_attr_group_12); > @@ -1123,7 +1183,10 @@ int nvme_nvm_register_sysfs(struct nvme_ns *ns) > > void nvme_nvm_unregister_sysfs(struct nvme_ns *ns) > { > - switch (ns->ndev->identity.ver_id) { > + struct nvm_dev *ndev = ns->ndev; > + struct nvm_geo *geo = &ndev->geo; > + > + switch (geo->ver_id) { > case 1: > sysfs_remove_group(&disk_to_dev(ns->disk)->kobj, > &nvm_dev_attr_group_12); > diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h > index e55b10573c99..16255fcd5250 100644 > --- a/include/linux/lightnvm.h > +++ b/include/linux/lightnvm.h > @@ -50,7 +50,7 @@ struct nvm_id; > struct nvm_dev; > struct nvm_tgt_dev; > > -typedef int (nvm_id_fn)(struct nvm_dev *, struct nvm_id *); > +typedef int (nvm_id_fn)(struct nvm_dev *); > typedef int (nvm_op_bb_tbl_fn)(struct nvm_dev *, struct ppa_addr, u8 *); > typedef int (nvm_op_set_bb_fn)(struct nvm_dev *, struct ppa_addr *, int, int); > typedef int (nvm_submit_io_fn)(struct nvm_dev *, struct nvm_rq *); > @@ -152,62 +152,48 @@ struct nvm_id_lp_tbl { > struct nvm_id_lp_mlc mlc; > }; > > -struct nvm_addr_format { > - u8 ch_offset; > +struct nvm_addr_format_12 { I can see a couple of places where a statement has to wrap onto two lines because of the length of nvm_addr_format_12. Would it make sense to shorten it to nvm_addrf_12?
> u8 ch_len; > - u8 lun_offset; > u8 lun_len; > - u8 pln_offset; > + u8 blk_len; > + u8 pg_len; > u8 pln_len; > + u8 sect_len; > + > + u8 ch_offset; > + u8 lun_offset; > u8 blk_offset; > - u8 blk_len; > u8 pg_offset; > - u8 pg_len; > + u8 pln_offset; > u8 sect_offset; > - u8 sect_len; > -}; > - > -struct nvm_id { > - u8 ver_id; > - u8 vmnt; > - u32 cap; > - u32 dom; > - > - struct nvm_addr_format ppaf; > - > - u8 num_ch; > - u8 num_lun; > - u16 num_chk; > - u16 clba; > - u16 csecs; > - u16 sos; > - > - u32 ws_min; > - u32 ws_opt; > - u32 mw_cunits; > > - u32 trdt; > - u32 trdm; > - u32 tprt; > - u32 tprm; > - u32 tbet; > - u32 tbem; > - u32 mpos; > - u32 mccap; > - u16 cpar; > - > - /* calculated values */ > - u16 ws_seq; > - u16 ws_per_chk; > - > - /* 1.2 compatibility */ > - u8 mtype; > - u8 fmtype; > + u64 ch_mask; > + u64 lun_mask; > + u64 blk_mask; > + u64 pg_mask; > + u64 pln_mask; > + u64 sec_mask; > +}; > > - u8 num_pln; > - u16 num_pg; > - u16 fpg_sz; > -} __packed; > +struct nvm_addr_format { > + u8 ch_len; > + u8 lun_len; > + u8 chk_len; > + u8 sec_len; > + u8 rsv_len[2]; > + > + u8 ch_offset; > + u8 lun_offset; > + u8 chk_offset; > + u8 sec_offset; > + u8 rsv_off[2]; > + > + u64 ch_mask; > + u64 lun_mask; > + u64 chk_mask; > + u64 sec_mask; > + u64 rsv_mask[2]; > +}; > > struct nvm_target { > struct list_head list; > @@ -274,36 +260,63 @@ enum { > NVM_BLK_ST_BAD = 0x8, /* Bad block */ > }; > > - > -/* Device generic information */ > +/* Instance geometry */ > struct nvm_geo { > - /* generic geometry */ > + /* device reported version */ > + u8 ver_id; > + > + /* instance specific geometry */ > int nr_chnls; > - int all_luns; /* across channels */ > - int nr_luns; /* per channel */ > - int nr_chks; /* per lun */ > + int nr_luns; /* per channel */ > > - int sec_size; > - int oob_size; > - int mccap; > + /* calculated values */ > + int all_luns; /* across channels */ > + int all_chunks; /* across channels */ > > - int sec_per_chk; > - int sec_per_lun; > + 
int op; /* over-provision in instance */ > > - int ws_min; > - int ws_opt; > - int ws_seq; > - int ws_per_chk; > + sector_t total_secs; /* across channels */ > > - int op; > + /* chunk geometry */ > + u32 nr_chks; /* chunks per lun */ > + u32 clba; /* sectors per chunk */ > + u16 csecs; /* sector size */ > + u16 sos; /* out-of-band area size */ > > - struct nvm_addr_format ppaf; > + /* device write constraints */ > + u32 ws_min; /* minimum write size */ > + u32 ws_opt; /* optimal write size */ > + u32 mw_cunits; /* distance required for successful read */ > > - /* Legacy 1.2 specific geometry */ > - int plane_mode; /* drive device in single, double or quad mode */ > - int nr_planes; > - int sec_per_pg; /* only sectors for a single page */ > - int sec_per_pl; /* all sectors across planes */ > + /* device capabilities */ > + u32 mccap; > + > + /* device timings */ > + u32 trdt; /* Avg. Tread (ns) */ > + u32 trdm; /* Max Tread (ns) */ > + u32 tprt; /* Avg. Tprog (ns) */ > + u32 tprm; /* Max Tprog (ns) */ > + u32 tbet; /* Avg.
Terase (ns) */ > + u32 tbem; /* Max Terase (ns) */ > + > + /* generic address format */ > + struct nvm_addr_format addrf; > + > + /* 1.2 compatibility */ > + u8 vmnt; > + u32 cap; > + u32 dom; > + > + u8 mtype; > + u8 fmtype; > + > + u16 cpar; > + u32 mpos; > + > + u8 num_pln; > + u8 plane_mode; > + u16 num_pg; > + u16 fpg_sz; > }; > > /* sub-device structure */ > @@ -314,9 +327,6 @@ struct nvm_tgt_dev { > /* Base ppas for target LUNs */ > struct ppa_addr *luns; > > - sector_t total_secs; > - > - struct nvm_id identity; > struct request_queue *q; > > struct nvm_dev *parent; > @@ -331,13 +341,9 @@ struct nvm_dev { > /* Device information */ > struct nvm_geo geo; > > - unsigned long total_secs; > - > unsigned long *lun_map; > void *dma_pool; > > - struct nvm_id identity; > - > /* Backend device */ > struct request_queue *q; > char name[DISK_NAME_LEN]; > @@ -357,14 +363,16 @@ static inline struct ppa_addr generic_to_dev_addr(struct nvm_tgt_dev *tgt_dev, > struct ppa_addr r) > { > struct nvm_geo *geo = &tgt_dev->geo; > + struct nvm_addr_format_12 *ppaf = > + (struct nvm_addr_format_12 *)&geo->addrf; > struct ppa_addr l; > > - l.ppa = ((u64)r.g.blk) << geo->ppaf.blk_offset; > - l.ppa |= ((u64)r.g.pg) << geo->ppaf.pg_offset; > - l.ppa |= ((u64)r.g.sec) << geo->ppaf.sect_offset; > - l.ppa |= ((u64)r.g.pl) << geo->ppaf.pln_offset; > - l.ppa |= ((u64)r.g.lun) << geo->ppaf.lun_offset; > - l.ppa |= ((u64)r.g.ch) << geo->ppaf.ch_offset; > + l.ppa = ((u64)r.g.ch) << ppaf->ch_offset; > + l.ppa |= ((u64)r.g.lun) << ppaf->lun_offset; > + l.ppa |= ((u64)r.g.blk) << ppaf->blk_offset; > + l.ppa |= ((u64)r.g.pg) << ppaf->pg_offset; > + l.ppa |= ((u64)r.g.pl) << ppaf->pln_offset; > + l.ppa |= ((u64)r.g.sec) << ppaf->sect_offset; > > return l; > } > @@ -373,24 +381,18 @@ static inline struct ppa_addr dev_to_generic_addr(struct nvm_tgt_dev *tgt_dev, > struct ppa_addr r) > { > struct nvm_geo *geo = &tgt_dev->geo; > + struct nvm_addr_format_12 *ppaf = > + (struct nvm_addr_format_12 
*)&geo->addrf; > struct ppa_addr l; > > l.ppa = 0; > - /* > - * (r.ppa << X offset) & X len bitmask. X eq. blk, pg, etc. > - */ > - l.g.blk = (r.ppa >> geo->ppaf.blk_offset) & > - (((1 << geo->ppaf.blk_len) - 1)); > - l.g.pg |= (r.ppa >> geo->ppaf.pg_offset) & > - (((1 << geo->ppaf.pg_len) - 1)); > - l.g.sec |= (r.ppa >> geo->ppaf.sect_offset) & > - (((1 << geo->ppaf.sect_len) - 1)); > - l.g.pl |= (r.ppa >> geo->ppaf.pln_offset) & > - (((1 << geo->ppaf.pln_len) - 1)); > - l.g.lun |= (r.ppa >> geo->ppaf.lun_offset) & > - (((1 << geo->ppaf.lun_len) - 1)); > - l.g.ch |= (r.ppa >> geo->ppaf.ch_offset) & > - (((1 << geo->ppaf.ch_len) - 1)); > + > + l.g.ch = (r.ppa & ppaf->ch_mask) >> ppaf->ch_offset; > + l.g.lun = (r.ppa & ppaf->lun_mask) >> ppaf->lun_offset; > + l.g.blk = (r.ppa & ppaf->blk_mask) >> ppaf->blk_offset; > + l.g.pg = (r.ppa & ppaf->pg_mask) >> ppaf->pg_offset; > + l.g.pl = (r.ppa & ppaf->pln_mask) >> ppaf->pln_offset; > + l.g.sec = (r.ppa & ppaf->sec_mask) >> ppaf->sect_offset; > > return l; > } > Looks good to me,
* Re: [PATCH 01/15] lightnvm: simplify geometry structure. 2018-03-01 10:22 ` Matias Bjørling @ 2018-03-02 11:15 ` Javier González -1 siblings, 0 replies; 71+ messages in thread From: Javier González @ 2018-03-02 11:15 UTC (permalink / raw) To: Matias Bjørling; +Cc: linux-block, linux-kernel, linux-nvme [-- Attachment #1: Type: text/plain, Size: 61360 bytes --] > On 1 Mar 2018, at 11.22, Matias Bjørling <mb@lightnvm.io> wrote: > > On 02/28/2018 04:49 PM, Javier González wrote: >> Currently, the device geometry is stored redundantly in the nvm_id and >> nvm_geo structures at a device level. Moreover, when instantiating >> targets on a specific number of LUNs, these structures are replicated >> and manually modified to fit the instance channel and LUN partitioning. >> Instead, create a generic geometry around nvm_geo, which can be used by >> (i) the underlying device to describe the geometry of the whole device, >> and (ii) instances to describe their geometry independently. >> Signed-off-by: Javier González <javier@cnexlabs.com> >> --- >> drivers/lightnvm/core.c | 70 +++----- >> drivers/lightnvm/pblk-core.c | 16 +- >> drivers/lightnvm/pblk-gc.c | 2 +- >> drivers/lightnvm/pblk-init.c | 119 +++++++------- >> drivers/lightnvm/pblk-read.c | 2 +- >> drivers/lightnvm/pblk-recovery.c | 14 +- >> drivers/lightnvm/pblk-rl.c | 2 +- >> drivers/lightnvm/pblk-sysfs.c | 39 +++-- >> drivers/lightnvm/pblk-write.c | 2 +- >> drivers/lightnvm/pblk.h | 87 +++++----- >> drivers/nvme/host/lightnvm.c | 341 +++++++++++++++++++++++---------------- >> include/linux/lightnvm.h | 200 +++++++++++------------ >> 12 files changed, 465 insertions(+), 429 deletions(-) >> diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c >> index 19c46ebb1b91..9a417d9cdf0c 100644 >> --- a/drivers/lightnvm/core.c >> +++ b/drivers/lightnvm/core.c >> @@ -155,7 +155,7 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev, >> int blun = lun_begin % dev->geo.nr_luns; >> int lunid = 0; >> int 
lun_balanced = 1; >> - int prev_nr_luns; >> + int sec_per_lun, prev_nr_luns; >> int i, j; >> nr_chnls = (nr_chnls_mod == 0) ? nr_chnls : nr_chnls + 1; >> @@ -215,18 +215,23 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev, >> if (!tgt_dev) >> goto err_ch; >> + /* Inherit device geometry from parent */ >> memcpy(&tgt_dev->geo, &dev->geo, sizeof(struct nvm_geo)); >> + >> /* Target device only owns a portion of the physical device */ >> tgt_dev->geo.nr_chnls = nr_chnls; >> - tgt_dev->geo.all_luns = nr_luns; >> tgt_dev->geo.nr_luns = (lun_balanced) ? prev_nr_luns : -1; >> + tgt_dev->geo.all_luns = nr_luns; >> + tgt_dev->geo.all_chunks = nr_luns * dev->geo.nr_chks; >> + >> tgt_dev->geo.op = op; >> - tgt_dev->total_secs = nr_luns * tgt_dev->geo.sec_per_lun; >> + >> + sec_per_lun = dev->geo.clba * dev->geo.nr_chks; >> + tgt_dev->geo.total_secs = nr_luns * sec_per_lun; >> + >> tgt_dev->q = dev->q; >> tgt_dev->map = dev_map; >> tgt_dev->luns = luns; >> - memcpy(&tgt_dev->identity, &dev->identity, sizeof(struct nvm_id)); >> - >> tgt_dev->parent = dev; >> return tgt_dev; >> @@ -296,8 +301,6 @@ static int __nvm_config_simple(struct nvm_dev *dev, >> static int __nvm_config_extended(struct nvm_dev *dev, >> struct nvm_ioctl_create_extended *e) >> { >> - struct nvm_geo *geo = &dev->geo; >> - >> if (e->lun_begin == 0xFFFF && e->lun_end == 0xFFFF) { >> e->lun_begin = 0; >> e->lun_end = dev->geo.all_luns - 1; >> @@ -311,7 +314,7 @@ static int __nvm_config_extended(struct nvm_dev *dev, >> return -EINVAL; >> } >> - return nvm_config_check_luns(geo, e->lun_begin, e->lun_end); >> + return nvm_config_check_luns(&dev->geo, e->lun_begin, e->lun_end); >> } >> static int nvm_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create) >> @@ -406,7 +409,7 @@ static int nvm_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create) >> tqueue->queuedata = targetdata; >> blk_queue_max_hw_sectors(tqueue, >> - (dev->geo.sec_size >> 9) * NVM_MAX_VLBA); >> + 
(dev->geo.csecs >> 9) * NVM_MAX_VLBA); >> set_capacity(tdisk, tt->capacity(targetdata)); >> add_disk(tdisk); >> @@ -841,40 +844,9 @@ EXPORT_SYMBOL(nvm_get_tgt_bb_tbl); >> static int nvm_core_init(struct nvm_dev *dev) >> { >> - struct nvm_id *id = &dev->identity; >> struct nvm_geo *geo = &dev->geo; >> int ret; >> - memcpy(&geo->ppaf, &id->ppaf, sizeof(struct nvm_addr_format)); >> - >> - if (id->mtype != 0) { >> - pr_err("nvm: memory type not supported\n"); >> - return -EINVAL; >> - } >> - >> - /* Whole device values */ >> - geo->nr_chnls = id->num_ch; >> - geo->nr_luns = id->num_lun; >> - >> - /* Generic device geometry values */ >> - geo->ws_min = id->ws_min; >> - geo->ws_opt = id->ws_opt; >> - geo->ws_seq = id->ws_seq; >> - geo->ws_per_chk = id->ws_per_chk; >> - geo->nr_chks = id->num_chk; >> - geo->mccap = id->mccap; >> - >> - geo->sec_per_chk = id->clba; >> - geo->sec_per_lun = geo->sec_per_chk * geo->nr_chks; >> - geo->all_luns = geo->nr_luns * geo->nr_chnls; >> - >> - /* 1.2 spec device geometry values */ >> - geo->plane_mode = 1 << geo->ws_seq; >> - geo->nr_planes = geo->ws_opt / geo->ws_min; >> - geo->sec_per_pg = geo->ws_min; >> - geo->sec_per_pl = geo->sec_per_pg * geo->nr_planes; >> - >> - dev->total_secs = geo->all_luns * geo->sec_per_lun; >> dev->lun_map = kcalloc(BITS_TO_LONGS(geo->all_luns), >> sizeof(unsigned long), GFP_KERNEL); >> if (!dev->lun_map) >> @@ -913,16 +885,14 @@ static int nvm_init(struct nvm_dev *dev) >> struct nvm_geo *geo = &dev->geo; >> int ret = -EINVAL; >> - if (dev->ops->identity(dev, &dev->identity)) { >> + if (dev->ops->identity(dev)) { >> pr_err("nvm: device could not be identified\n"); >> goto err; >> } >> - if (dev->identity.ver_id != 1 && dev->identity.ver_id != 2) { >> - pr_err("nvm: device ver_id %d not supported by kernel.\n", >> - dev->identity.ver_id); >> - goto err; >> - } >> + pr_debug("nvm: ver:%u nvm_vendor:%x\n", >> + geo->ver_id, >> + geo->vmnt); >> ret = nvm_core_init(dev); >> if (ret) { >> @@ -930,10 +900,10 @@ 
static int nvm_init(struct nvm_dev *dev) >> goto err; >> } >> - pr_info("nvm: registered %s [%u/%u/%u/%u/%u/%u]\n", >> - dev->name, geo->sec_per_pg, geo->nr_planes, >> - geo->ws_per_chk, geo->nr_chks, >> - geo->all_luns, geo->nr_chnls); >> + pr_info("nvm: registered %s [%u/%u/%u/%u/%u]\n", >> + dev->name, geo->ws_min, geo->ws_opt, >> + geo->nr_chks, geo->all_luns, >> + geo->nr_chnls); >> return 0; >> err: >> pr_err("nvm: failed to initialize nvm\n"); >> diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c >> index 8848443a0721..169589ddd457 100644 >> --- a/drivers/lightnvm/pblk-core.c >> +++ b/drivers/lightnvm/pblk-core.c >> @@ -613,7 +613,7 @@ static int pblk_line_submit_emeta_io(struct pblk *pblk, struct pblk_line *line, >> memset(&rqd, 0, sizeof(struct nvm_rq)); >> rq_ppas = pblk_calc_secs(pblk, left_ppas, 0); >> - rq_len = rq_ppas * geo->sec_size; >> + rq_len = rq_ppas * geo->csecs; >> bio = pblk_bio_map_addr(pblk, emeta_buf, rq_ppas, rq_len, >> l_mg->emeta_alloc_type, GFP_KERNEL); >> @@ -722,7 +722,7 @@ u64 pblk_line_smeta_start(struct pblk *pblk, struct pblk_line *line) >> if (bit >= lm->blk_per_line) >> return -1; >> - return bit * geo->sec_per_pl; >> + return bit * geo->ws_opt; >> } >> static int pblk_line_submit_smeta_io(struct pblk *pblk, struct pblk_line *line, >> @@ -1035,19 +1035,19 @@ static int pblk_line_init_bb(struct pblk *pblk, struct pblk_line *line, >> /* Capture bad block information on line mapping bitmaps */ >> while ((bit = find_next_bit(line->blk_bitmap, lm->blk_per_line, >> bit + 1)) < lm->blk_per_line) { >> - off = bit * geo->sec_per_pl; >> + off = bit * geo->ws_opt; >> bitmap_shift_left(l_mg->bb_aux, l_mg->bb_template, off, >> lm->sec_per_line); >> bitmap_or(line->map_bitmap, line->map_bitmap, l_mg->bb_aux, >> lm->sec_per_line); >> - line->sec_in_line -= geo->sec_per_chk; >> + line->sec_in_line -= geo->clba; >> if (bit >= lm->emeta_bb) >> nr_bb++; >> } >> /* Mark smeta metadata sectors as bad sectors */ >> bit = 
find_first_zero_bit(line->blk_bitmap, lm->blk_per_line); >> - off = bit * geo->sec_per_pl; >> + off = bit * geo->ws_opt; >> bitmap_set(line->map_bitmap, off, lm->smeta_sec); >> line->sec_in_line -= lm->smeta_sec; >> line->smeta_ssec = off; >> @@ -1066,10 +1066,10 @@ static int pblk_line_init_bb(struct pblk *pblk, struct pblk_line *line, >> emeta_secs = lm->emeta_sec[0]; >> off = lm->sec_per_line; >> while (emeta_secs) { >> - off -= geo->sec_per_pl; >> + off -= geo->ws_opt; >> if (!test_bit(off, line->invalid_bitmap)) { >> - bitmap_set(line->invalid_bitmap, off, geo->sec_per_pl); >> - emeta_secs -= geo->sec_per_pl; >> + bitmap_set(line->invalid_bitmap, off, geo->ws_opt); >> + emeta_secs -= geo->ws_opt; >> } >> } >> diff --git a/drivers/lightnvm/pblk-gc.c b/drivers/lightnvm/pblk-gc.c >> index 320f99af99e9..6851a5c67189 100644 >> --- a/drivers/lightnvm/pblk-gc.c >> +++ b/drivers/lightnvm/pblk-gc.c >> @@ -88,7 +88,7 @@ static void pblk_gc_line_ws(struct work_struct *work) >> up(&gc->gc_sem); >> - gc_rq->data = vmalloc(gc_rq->nr_secs * geo->sec_size); >> + gc_rq->data = vmalloc(gc_rq->nr_secs * geo->csecs); >> if (!gc_rq->data) { >> pr_err("pblk: could not GC line:%d (%d/%d)\n", >> line->id, *line->vsc, gc_rq->nr_secs); >> diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c >> index 25fc70ca07f7..9b5ee05c3028 100644 >> --- a/drivers/lightnvm/pblk-init.c >> +++ b/drivers/lightnvm/pblk-init.c >> @@ -146,7 +146,7 @@ static int pblk_rwb_init(struct pblk *pblk) >> return -ENOMEM; >> power_size = get_count_order(nr_entries); >> - power_seg_sz = get_count_order(geo->sec_size); >> + power_seg_sz = get_count_order(geo->csecs); >> return pblk_rb_init(&pblk->rwb, entries, power_size, power_seg_sz); >> } >> @@ -154,11 +154,11 @@ static int pblk_rwb_init(struct pblk *pblk) >> /* Minimum pages needed within a lun */ >> #define ADDR_POOL_SIZE 64 >> -static int pblk_set_ppaf(struct pblk *pblk) >> +static int pblk_set_addrf_12(struct nvm_geo *geo, >> + struct 
nvm_addr_format_12 *dst) >> { >> - struct nvm_tgt_dev *dev = pblk->dev; >> - struct nvm_geo *geo = &dev->geo; >> - struct nvm_addr_format ppaf = geo->ppaf; >> + struct nvm_addr_format_12 *src = >> + (struct nvm_addr_format_12 *)&geo->addrf; >> int power_len; >> /* Re-calculate channel and lun format to adapt to configuration */ >> @@ -167,34 +167,50 @@ static int pblk_set_ppaf(struct pblk *pblk) >> pr_err("pblk: supports only power-of-two channel config.\n"); >> return -EINVAL; >> } >> - ppaf.ch_len = power_len; >> + dst->ch_len = power_len; >> power_len = get_count_order(geo->nr_luns); >> if (1 << power_len != geo->nr_luns) { >> pr_err("pblk: supports only power-of-two LUN config.\n"); >> return -EINVAL; >> } >> - ppaf.lun_len = power_len; >> + dst->lun_len = power_len; >> - pblk->ppaf.sec_offset = 0; >> - pblk->ppaf.pln_offset = ppaf.sect_len; >> - pblk->ppaf.ch_offset = pblk->ppaf.pln_offset + ppaf.pln_len; >> - pblk->ppaf.lun_offset = pblk->ppaf.ch_offset + ppaf.ch_len; >> - pblk->ppaf.pg_offset = pblk->ppaf.lun_offset + ppaf.lun_len; >> - pblk->ppaf.blk_offset = pblk->ppaf.pg_offset + ppaf.pg_len; >> - pblk->ppaf.sec_mask = (1ULL << ppaf.sect_len) - 1; >> - pblk->ppaf.pln_mask = ((1ULL << ppaf.pln_len) - 1) << >> - pblk->ppaf.pln_offset; >> - pblk->ppaf.ch_mask = ((1ULL << ppaf.ch_len) - 1) << >> - pblk->ppaf.ch_offset; >> - pblk->ppaf.lun_mask = ((1ULL << ppaf.lun_len) - 1) << >> - pblk->ppaf.lun_offset; >> - pblk->ppaf.pg_mask = ((1ULL << ppaf.pg_len) - 1) << >> - pblk->ppaf.pg_offset; >> - pblk->ppaf.blk_mask = ((1ULL << ppaf.blk_len) - 1) << >> - pblk->ppaf.blk_offset; >> + dst->blk_len = src->blk_len; >> + dst->pg_len = src->pg_len; >> + dst->pln_len = src->pln_len; >> + dst->sect_len = src->sect_len; >> - pblk->ppaf_bitsize = pblk->ppaf.blk_offset + ppaf.blk_len; >> + dst->sect_offset = 0; >> + dst->pln_offset = dst->sect_len; >> + dst->ch_offset = dst->pln_offset + dst->pln_len; >> + dst->lun_offset = dst->ch_offset + dst->ch_len; >> + dst->pg_offset = 
dst->lun_offset + dst->lun_len; >> + dst->blk_offset = dst->pg_offset + dst->pg_len; >> + >> + dst->sec_mask = ((1ULL << dst->sect_len) - 1) << dst->sect_offset; >> + dst->pln_mask = ((1ULL << dst->pln_len) - 1) << dst->pln_offset; >> + dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset; >> + dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset; >> + dst->pg_mask = ((1ULL << dst->pg_len) - 1) << dst->pg_offset; >> + dst->blk_mask = ((1ULL << dst->blk_len) - 1) << dst->blk_offset; >> + >> + return dst->blk_offset + src->blk_len; >> +} >> + >> +static int pblk_set_ppaf(struct pblk *pblk) >> +{ >> + struct nvm_tgt_dev *dev = pblk->dev; >> + struct nvm_geo *geo = &dev->geo; >> + int mod; >> + >> + div_u64_rem(geo->clba, pblk->min_write_pgs, &mod); >> + if (mod) { >> + pr_err("pblk: bad configuration of sectors/pages\n"); >> + return -EINVAL; >> + } >> + >> + pblk->ppaf_bitsize = pblk_set_addrf_12(geo, (void *)&pblk->ppaf); >> return 0; >> } >> @@ -253,8 +269,7 @@ static int pblk_core_init(struct pblk *pblk) >> struct nvm_tgt_dev *dev = pblk->dev; >> struct nvm_geo *geo = &dev->geo; >> - pblk->pgs_in_buffer = NVM_MEM_PAGE_WRITE * geo->sec_per_pg * >> - geo->nr_planes * geo->all_luns; >> + pblk->pgs_in_buffer = geo->mw_cunits * geo->all_luns; >> if (pblk_init_global_caches(pblk)) >> return -ENOMEM; >> @@ -552,18 +567,18 @@ static unsigned int calc_emeta_len(struct pblk *pblk) >> /* Round to sector size so that lba_list starts on its own sector */ >> lm->emeta_sec[1] = DIV_ROUND_UP( >> sizeof(struct line_emeta) + lm->blk_bitmap_len + >> - sizeof(struct wa_counters), geo->sec_size); >> - lm->emeta_len[1] = lm->emeta_sec[1] * geo->sec_size; >> + sizeof(struct wa_counters), geo->csecs); >> + lm->emeta_len[1] = lm->emeta_sec[1] * geo->csecs; >> /* Round to sector size so that vsc_list starts on its own sector */ >> lm->dsec_per_line = lm->sec_per_line - lm->emeta_sec[0]; >> lm->emeta_sec[2] = DIV_ROUND_UP(lm->dsec_per_line * sizeof(u64), >> - 
geo->sec_size); >> - lm->emeta_len[2] = lm->emeta_sec[2] * geo->sec_size; >> + geo->csecs); >> + lm->emeta_len[2] = lm->emeta_sec[2] * geo->csecs; >> lm->emeta_sec[3] = DIV_ROUND_UP(l_mg->nr_lines * sizeof(u32), >> - geo->sec_size); >> - lm->emeta_len[3] = lm->emeta_sec[3] * geo->sec_size; >> + geo->csecs); >> + lm->emeta_len[3] = lm->emeta_sec[3] * geo->csecs; >> lm->vsc_list_len = l_mg->nr_lines * sizeof(u32); >> @@ -594,13 +609,13 @@ static void pblk_set_provision(struct pblk *pblk, long nr_free_blks) >> * on user capacity consider only provisioned blocks >> */ >> pblk->rl.total_blocks = nr_free_blks; >> - pblk->rl.nr_secs = nr_free_blks * geo->sec_per_chk; >> + pblk->rl.nr_secs = nr_free_blks * geo->clba; >> /* Consider sectors used for metadata */ >> sec_meta = (lm->smeta_sec + lm->emeta_sec[0]) * l_mg->nr_free_lines; >> - blk_meta = DIV_ROUND_UP(sec_meta, geo->sec_per_chk); >> + blk_meta = DIV_ROUND_UP(sec_meta, geo->clba); >> - pblk->capacity = (provisioned - blk_meta) * geo->sec_per_chk; >> + pblk->capacity = (provisioned - blk_meta) * geo->clba; >> atomic_set(&pblk->rl.free_blocks, nr_free_blks); >> atomic_set(&pblk->rl.free_user_blocks, nr_free_blks); >> @@ -711,10 +726,10 @@ static int pblk_lines_init(struct pblk *pblk) >> void *chunk_log; >> unsigned int smeta_len, emeta_len; >> long nr_bad_blks = 0, nr_free_blks = 0; >> - int bb_distance, max_write_ppas, mod; >> + int bb_distance, max_write_ppas; >> int i, ret; >> - pblk->min_write_pgs = geo->sec_per_pl * (geo->sec_size / PAGE_SIZE); >> + pblk->min_write_pgs = geo->ws_opt * (geo->csecs / PAGE_SIZE); >> max_write_ppas = pblk->min_write_pgs * geo->all_luns; >> pblk->max_write_pgs = min_t(int, max_write_ppas, NVM_MAX_VLBA); >> pblk_set_sec_per_write(pblk, pblk->min_write_pgs); >> @@ -725,19 +740,13 @@ static int pblk_lines_init(struct pblk *pblk) >> return -EINVAL; >> } >> - div_u64_rem(geo->sec_per_chk, pblk->min_write_pgs, &mod); >> - if (mod) { >> - pr_err("pblk: bad configuration of sectors/pages\n"); 
>> - return -EINVAL; >> - } >> - >> l_mg->nr_lines = geo->nr_chks; >> l_mg->log_line = l_mg->data_line = NULL; >> l_mg->l_seq_nr = l_mg->d_seq_nr = 0; >> l_mg->nr_free_lines = 0; >> bitmap_zero(&l_mg->meta_bitmap, PBLK_DATA_LINES); >> - lm->sec_per_line = geo->sec_per_chk * geo->all_luns; >> + lm->sec_per_line = geo->clba * geo->all_luns; >> lm->blk_per_line = geo->all_luns; >> lm->blk_bitmap_len = BITS_TO_LONGS(geo->all_luns) * sizeof(long); >> lm->sec_bitmap_len = BITS_TO_LONGS(lm->sec_per_line) * sizeof(long); >> @@ -751,8 +760,8 @@ static int pblk_lines_init(struct pblk *pblk) >> */ >> i = 1; >> add_smeta_page: >> - lm->smeta_sec = i * geo->sec_per_pl; >> - lm->smeta_len = lm->smeta_sec * geo->sec_size; >> + lm->smeta_sec = i * geo->ws_opt; >> + lm->smeta_len = lm->smeta_sec * geo->csecs; >> smeta_len = sizeof(struct line_smeta) + lm->lun_bitmap_len; >> if (smeta_len > lm->smeta_len) { >> @@ -765,8 +774,8 @@ static int pblk_lines_init(struct pblk *pblk) >> */ >> i = 1; >> add_emeta_page: >> - lm->emeta_sec[0] = i * geo->sec_per_pl; >> - lm->emeta_len[0] = lm->emeta_sec[0] * geo->sec_size; >> + lm->emeta_sec[0] = i * geo->ws_opt; >> + lm->emeta_len[0] = lm->emeta_sec[0] * geo->csecs; >> emeta_len = calc_emeta_len(pblk); >> if (emeta_len > lm->emeta_len[0]) { >> @@ -779,7 +788,7 @@ static int pblk_lines_init(struct pblk *pblk) >> lm->min_blk_line = 1; >> if (geo->all_luns > 1) >> lm->min_blk_line += DIV_ROUND_UP(lm->smeta_sec + >> - lm->emeta_sec[0], geo->sec_per_chk); >> + lm->emeta_sec[0], geo->clba); >> if (lm->min_blk_line > lm->blk_per_line) { >> pr_err("pblk: config. not supported. Min. 
LUN in line:%d\n", >> @@ -803,9 +812,9 @@ static int pblk_lines_init(struct pblk *pblk) >> goto fail_free_bb_template; >> } >> - bb_distance = (geo->all_luns) * geo->sec_per_pl; >> + bb_distance = (geo->all_luns) * geo->ws_opt; >> for (i = 0; i < lm->sec_per_line; i += bb_distance) >> - bitmap_set(l_mg->bb_template, i, geo->sec_per_pl); >> + bitmap_set(l_mg->bb_template, i, geo->ws_opt); >> INIT_LIST_HEAD(&l_mg->free_list); >> INIT_LIST_HEAD(&l_mg->corrupt_list); >> @@ -982,9 +991,9 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk, >> struct pblk *pblk; >> int ret; >> - if (dev->identity.dom & NVM_RSP_L2P) { >> + if (dev->geo.dom & NVM_RSP_L2P) { >> pr_err("pblk: host-side L2P table not supported. (%x)\n", >> - dev->identity.dom); >> + dev->geo.dom); >> return ERR_PTR(-EINVAL); >> } >> @@ -1092,7 +1101,7 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk, >> blk_queue_write_cache(tqueue, true, false); >> - tqueue->limits.discard_granularity = geo->sec_per_chk * geo->sec_size; >> + tqueue->limits.discard_granularity = geo->clba * geo->csecs; >> tqueue->limits.discard_alignment = 0; >> blk_queue_max_discard_sectors(tqueue, UINT_MAX >> 9); >> queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, tqueue); >> diff --git a/drivers/lightnvm/pblk-read.c b/drivers/lightnvm/pblk-read.c >> index 2f761283f43e..9eee10f69df0 100644 >> --- a/drivers/lightnvm/pblk-read.c >> +++ b/drivers/lightnvm/pblk-read.c >> @@ -563,7 +563,7 @@ int pblk_submit_read_gc(struct pblk *pblk, struct pblk_gc_rq *gc_rq) >> if (!(gc_rq->secs_to_gc)) >> goto out; >> - data_len = (gc_rq->secs_to_gc) * geo->sec_size; >> + data_len = (gc_rq->secs_to_gc) * geo->csecs; >> bio = pblk_bio_map_addr(pblk, gc_rq->data, gc_rq->secs_to_gc, data_len, >> PBLK_VMALLOC_META, GFP_KERNEL); >> if (IS_ERR(bio)) { >> diff --git a/drivers/lightnvm/pblk-recovery.c b/drivers/lightnvm/pblk-recovery.c >> index aaab9a5c17cc..26356429dc72 100644 >> --- a/drivers/lightnvm/pblk-recovery.c >> +++ 
b/drivers/lightnvm/pblk-recovery.c >> @@ -184,7 +184,7 @@ static int pblk_calc_sec_in_line(struct pblk *pblk, struct pblk_line *line) >> int nr_bb = bitmap_weight(line->blk_bitmap, lm->blk_per_line); >> return lm->sec_per_line - lm->smeta_sec - lm->emeta_sec[0] - >> - nr_bb * geo->sec_per_chk; >> + nr_bb * geo->clba; >> } >> struct pblk_recov_alloc { >> @@ -232,7 +232,7 @@ static int pblk_recov_read_oob(struct pblk *pblk, struct pblk_line *line, >> rq_ppas = pblk_calc_secs(pblk, left_ppas, 0); >> if (!rq_ppas) >> rq_ppas = pblk->min_write_pgs; >> - rq_len = rq_ppas * geo->sec_size; >> + rq_len = rq_ppas * geo->csecs; >> bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL); >> if (IS_ERR(bio)) >> @@ -351,7 +351,7 @@ static int pblk_recov_pad_oob(struct pblk *pblk, struct pblk_line *line, >> if (!pad_rq) >> return -ENOMEM; >> - data = vzalloc(pblk->max_write_pgs * geo->sec_size); >> + data = vzalloc(pblk->max_write_pgs * geo->csecs); >> if (!data) { >> ret = -ENOMEM; >> goto free_rq; >> @@ -368,7 +368,7 @@ static int pblk_recov_pad_oob(struct pblk *pblk, struct pblk_line *line, >> goto fail_free_pad; >> } >> - rq_len = rq_ppas * geo->sec_size; >> + rq_len = rq_ppas * geo->csecs; >> meta_list = nvm_dev_dma_alloc(dev->parent, GFP_KERNEL, &dma_meta_list); >> if (!meta_list) { >> @@ -509,7 +509,7 @@ static int pblk_recov_scan_all_oob(struct pblk *pblk, struct pblk_line *line, >> rq_ppas = pblk_calc_secs(pblk, left_ppas, 0); >> if (!rq_ppas) >> rq_ppas = pblk->min_write_pgs; >> - rq_len = rq_ppas * geo->sec_size; >> + rq_len = rq_ppas * geo->csecs; >> bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL); >> if (IS_ERR(bio)) >> @@ -640,7 +640,7 @@ static int pblk_recov_scan_oob(struct pblk *pblk, struct pblk_line *line, >> rq_ppas = pblk_calc_secs(pblk, left_ppas, 0); >> if (!rq_ppas) >> rq_ppas = pblk->min_write_pgs; >> - rq_len = rq_ppas * geo->sec_size; >> + rq_len = rq_ppas * geo->csecs; >> bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL); >> if (IS_ERR(bio)) >> @@ 
-745,7 +745,7 @@ static int pblk_recov_l2p_from_oob(struct pblk *pblk, struct pblk_line *line) >> ppa_list = (void *)(meta_list) + pblk_dma_meta_size; >> dma_ppa_list = dma_meta_list + pblk_dma_meta_size; >> - data = kcalloc(pblk->max_write_pgs, geo->sec_size, GFP_KERNEL); >> + data = kcalloc(pblk->max_write_pgs, geo->csecs, GFP_KERNEL); >> if (!data) { >> ret = -ENOMEM; >> goto free_meta_list; >> diff --git a/drivers/lightnvm/pblk-rl.c b/drivers/lightnvm/pblk-rl.c >> index 0d457b162f23..883a7113b19d 100644 >> --- a/drivers/lightnvm/pblk-rl.c >> +++ b/drivers/lightnvm/pblk-rl.c >> @@ -200,7 +200,7 @@ void pblk_rl_init(struct pblk_rl *rl, int budget) >> /* Consider sectors used for metadata */ >> sec_meta = (lm->smeta_sec + lm->emeta_sec[0]) * l_mg->nr_free_lines; >> - blk_meta = DIV_ROUND_UP(sec_meta, geo->sec_per_chk); >> + blk_meta = DIV_ROUND_UP(sec_meta, geo->clba); >> rl->high = pblk->op_blks - blk_meta - lm->blk_per_line; >> rl->high_pw = get_count_order(rl->high); >> diff --git a/drivers/lightnvm/pblk-sysfs.c b/drivers/lightnvm/pblk-sysfs.c >> index 1680ce0a828d..33199c6af267 100644 >> --- a/drivers/lightnvm/pblk-sysfs.c >> +++ b/drivers/lightnvm/pblk-sysfs.c >> @@ -113,26 +113,31 @@ static ssize_t pblk_sysfs_ppaf(struct pblk *pblk, char *page) >> { >> struct nvm_tgt_dev *dev = pblk->dev; >> struct nvm_geo *geo = &dev->geo; >> + struct nvm_addr_format_12 *ppaf; >> + struct nvm_addr_format_12 *geo_ppaf; >> ssize_t sz = 0; >> - sz = snprintf(page, PAGE_SIZE - sz, >> - "g:(b:%d)blk:%d/%d,pg:%d/%d,lun:%d/%d,ch:%d/%d,pl:%d/%d,sec:%d/%d\n", >> - pblk->ppaf_bitsize, >> - pblk->ppaf.blk_offset, geo->ppaf.blk_len, >> - pblk->ppaf.pg_offset, geo->ppaf.pg_len, >> - pblk->ppaf.lun_offset, geo->ppaf.lun_len, >> - pblk->ppaf.ch_offset, geo->ppaf.ch_len, >> - pblk->ppaf.pln_offset, geo->ppaf.pln_len, >> - pblk->ppaf.sec_offset, geo->ppaf.sect_len); >> + ppaf = (struct nvm_addr_format_12 *)&pblk->ppaf; >> + geo_ppaf = (struct nvm_addr_format_12 *)&geo->addrf; >> + >> + sz = 
snprintf(page, PAGE_SIZE,
>> +		"pblk:(s:%d)ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n",
>> +		pblk->ppaf_bitsize,
>> +		ppaf->ch_offset, ppaf->ch_len,
>> +		ppaf->lun_offset, ppaf->lun_len,
>> +		ppaf->blk_offset, ppaf->blk_len,
>> +		ppaf->pg_offset, ppaf->pg_len,
>> +		ppaf->pln_offset, ppaf->pln_len,
>> +		ppaf->sect_offset, ppaf->sect_len);
>
> Is it on purpose here that the code breaks user-space by changing the
> sysfs print format?

Fixed.

>
>> 	sz += snprintf(page + sz, PAGE_SIZE - sz,
>> -		"d:blk:%d/%d,pg:%d/%d,lun:%d/%d,ch:%d/%d,pl:%d/%d,sec:%d/%d\n",
>> -		geo->ppaf.blk_offset, geo->ppaf.blk_len,
>> -		geo->ppaf.pg_offset, geo->ppaf.pg_len,
>> -		geo->ppaf.lun_offset, geo->ppaf.lun_len,
>> -		geo->ppaf.ch_offset, geo->ppaf.ch_len,
>> -		geo->ppaf.pln_offset, geo->ppaf.pln_len,
>> -		geo->ppaf.sect_offset, geo->ppaf.sect_len);
>> +		"device:ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n",
>> +		geo_ppaf->ch_offset, geo_ppaf->ch_len,
>> +		geo_ppaf->lun_offset, geo_ppaf->lun_len,
>> +		geo_ppaf->blk_offset, geo_ppaf->blk_len,
>> +		geo_ppaf->pg_offset, geo_ppaf->pg_len,
>> +		geo_ppaf->pln_offset, geo_ppaf->pln_len,
>> +		geo_ppaf->sect_offset, geo_ppaf->sect_len);
>
> Similarly here.

Fixed.
> >> return sz; >> } >> @@ -288,7 +293,7 @@ static ssize_t pblk_sysfs_lines_info(struct pblk *pblk, char *page) >> "blk_line:%d, sec_line:%d, sec_blk:%d\n", >> lm->blk_per_line, >> lm->sec_per_line, >> - geo->sec_per_chk); >> + geo->clba); >> return sz; >> } >> diff --git a/drivers/lightnvm/pblk-write.c b/drivers/lightnvm/pblk-write.c >> index aae86ed60b98..3e6f1ebd743a 100644 >> --- a/drivers/lightnvm/pblk-write.c >> +++ b/drivers/lightnvm/pblk-write.c >> @@ -333,7 +333,7 @@ int pblk_submit_meta_io(struct pblk *pblk, struct pblk_line *meta_line) >> m_ctx = nvm_rq_to_pdu(rqd); >> m_ctx->private = meta_line; >> - rq_len = rq_ppas * geo->sec_size; >> + rq_len = rq_ppas * geo->csecs; >> data = ((void *)emeta->buf) + emeta->mem; >> bio = pblk_bio_map_addr(pblk, data, rq_ppas, rq_len, >> diff --git a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk.h >> index f0309d8172c0..b29c1e6698aa 100644 >> --- a/drivers/lightnvm/pblk.h >> +++ b/drivers/lightnvm/pblk.h >> @@ -551,21 +551,6 @@ struct pblk_line_meta { >> unsigned int meta_distance; /* Distance between data and metadata */ >> }; >> -struct pblk_addr_format { >> - u64 ch_mask; >> - u64 lun_mask; >> - u64 pln_mask; >> - u64 blk_mask; >> - u64 pg_mask; >> - u64 sec_mask; >> - u8 ch_offset; >> - u8 lun_offset; >> - u8 pln_offset; >> - u8 blk_offset; >> - u8 pg_offset; >> - u8 sec_offset; >> -}; >> - >> enum { >> PBLK_STATE_RUNNING = 0, >> PBLK_STATE_STOPPING = 1, >> @@ -585,8 +570,8 @@ struct pblk { >> struct pblk_line_mgmt l_mg; /* Line management */ >> struct pblk_line_meta lm; /* Line metadata */ >> + struct nvm_addr_format ppaf; >> int ppaf_bitsize; >> - struct pblk_addr_format ppaf; >> struct pblk_rb rwb; >> @@ -941,14 +926,12 @@ static inline int pblk_line_vsc(struct pblk_line *line) >> return le32_to_cpu(*line->vsc); >> } >> -#define NVM_MEM_PAGE_WRITE (8) >> - >> static inline int pblk_pad_distance(struct pblk *pblk) >> { >> struct nvm_tgt_dev *dev = pblk->dev; >> struct nvm_geo *geo = &dev->geo; >> - return 
NVM_MEM_PAGE_WRITE * geo->all_luns * geo->sec_per_pl; >> + return geo->mw_cunits * geo->all_luns * geo->ws_opt; >> } >> static inline int pblk_ppa_to_line(struct ppa_addr p) >> @@ -964,15 +947,17 @@ static inline int pblk_ppa_to_pos(struct nvm_geo *geo, struct ppa_addr p) >> static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr, >> u64 line_id) >> { >> + struct nvm_addr_format_12 *ppaf = >> + (struct nvm_addr_format_12 *)&pblk->ppaf; >> struct ppa_addr ppa; >> ppa.ppa = 0; >> ppa.g.blk = line_id; >> - ppa.g.pg = (paddr & pblk->ppaf.pg_mask) >> pblk->ppaf.pg_offset; >> - ppa.g.lun = (paddr & pblk->ppaf.lun_mask) >> pblk->ppaf.lun_offset; >> - ppa.g.ch = (paddr & pblk->ppaf.ch_mask) >> pblk->ppaf.ch_offset; >> - ppa.g.pl = (paddr & pblk->ppaf.pln_mask) >> pblk->ppaf.pln_offset; >> - ppa.g.sec = (paddr & pblk->ppaf.sec_mask) >> pblk->ppaf.sec_offset; >> + ppa.g.pg = (paddr & ppaf->pg_mask) >> ppaf->pg_offset; >> + ppa.g.lun = (paddr & ppaf->lun_mask) >> ppaf->lun_offset; >> + ppa.g.ch = (paddr & ppaf->ch_mask) >> ppaf->ch_offset; >> + ppa.g.pl = (paddr & ppaf->pln_mask) >> ppaf->pln_offset; >> + ppa.g.sec = (paddr & ppaf->sec_mask) >> ppaf->sect_offset; >> return ppa; >> } >> @@ -980,13 +965,15 @@ static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr, >> static inline u64 pblk_dev_ppa_to_line_addr(struct pblk *pblk, >> struct ppa_addr p) >> { >> + struct nvm_addr_format_12 *ppaf = >> + (struct nvm_addr_format_12 *)&pblk->ppaf; >> u64 paddr; >> - paddr = (u64)p.g.pg << pblk->ppaf.pg_offset; >> - paddr |= (u64)p.g.lun << pblk->ppaf.lun_offset; >> - paddr |= (u64)p.g.ch << pblk->ppaf.ch_offset; >> - paddr |= (u64)p.g.pl << pblk->ppaf.pln_offset; >> - paddr |= (u64)p.g.sec << pblk->ppaf.sec_offset; >> + paddr = (u64)p.g.ch << ppaf->ch_offset; >> + paddr |= (u64)p.g.lun << ppaf->lun_offset; >> + paddr |= (u64)p.g.pg << ppaf->pg_offset; >> + paddr |= (u64)p.g.pl << ppaf->pln_offset; >> + paddr |= (u64)p.g.sec << ppaf->sect_offset; 
>> return paddr; >> } >> @@ -1003,18 +990,15 @@ static inline struct ppa_addr pblk_ppa32_to_ppa64(struct pblk *pblk, u32 ppa32) >> ppa64.c.line = ppa32 & ((~0U) >> 1); >> ppa64.c.is_cached = 1; >> } else { >> - ppa64.g.blk = (ppa32 & pblk->ppaf.blk_mask) >> >> - pblk->ppaf.blk_offset; >> - ppa64.g.pg = (ppa32 & pblk->ppaf.pg_mask) >> >> - pblk->ppaf.pg_offset; >> - ppa64.g.lun = (ppa32 & pblk->ppaf.lun_mask) >> >> - pblk->ppaf.lun_offset; >> - ppa64.g.ch = (ppa32 & pblk->ppaf.ch_mask) >> >> - pblk->ppaf.ch_offset; >> - ppa64.g.pl = (ppa32 & pblk->ppaf.pln_mask) >> >> - pblk->ppaf.pln_offset; >> - ppa64.g.sec = (ppa32 & pblk->ppaf.sec_mask) >> >> - pblk->ppaf.sec_offset; >> + struct nvm_addr_format_12 *ppaf = >> + (struct nvm_addr_format_12 *)&pblk->ppaf; >> + >> + ppa64.g.ch = (ppa32 & ppaf->ch_mask) >> ppaf->ch_offset; >> + ppa64.g.lun = (ppa32 & ppaf->lun_mask) >> ppaf->lun_offset; >> + ppa64.g.blk = (ppa32 & ppaf->blk_mask) >> ppaf->blk_offset; >> + ppa64.g.pg = (ppa32 & ppaf->pg_mask) >> ppaf->pg_offset; >> + ppa64.g.pl = (ppa32 & ppaf->pln_mask) >> ppaf->pln_offset; >> + ppa64.g.sec = (ppa32 & ppaf->sec_mask) >> ppaf->sect_offset; >> } >> return ppa64; >> @@ -1030,12 +1014,15 @@ static inline u32 pblk_ppa64_to_ppa32(struct pblk *pblk, struct ppa_addr ppa64) >> ppa32 |= ppa64.c.line; >> ppa32 |= 1U << 31; >> } else { >> - ppa32 |= ppa64.g.blk << pblk->ppaf.blk_offset; >> - ppa32 |= ppa64.g.pg << pblk->ppaf.pg_offset; >> - ppa32 |= ppa64.g.lun << pblk->ppaf.lun_offset; >> - ppa32 |= ppa64.g.ch << pblk->ppaf.ch_offset; >> - ppa32 |= ppa64.g.pl << pblk->ppaf.pln_offset; >> - ppa32 |= ppa64.g.sec << pblk->ppaf.sec_offset; >> + struct nvm_addr_format_12 *ppaf = >> + (struct nvm_addr_format_12 *)&pblk->ppaf; >> + >> + ppa32 |= ppa64.g.ch << ppaf->ch_offset; >> + ppa32 |= ppa64.g.lun << ppaf->lun_offset; >> + ppa32 |= ppa64.g.blk << ppaf->blk_offset; >> + ppa32 |= ppa64.g.pg << ppaf->pg_offset; >> + ppa32 |= ppa64.g.pl << ppaf->pln_offset; >> + ppa32 |= ppa64.g.sec << 
ppaf->sect_offset; >> } >> return ppa32; >> @@ -1229,10 +1216,10 @@ static inline int pblk_boundary_ppa_checks(struct nvm_tgt_dev *tgt_dev, >> if (!ppa->c.is_cached && >> ppa->g.ch < geo->nr_chnls && >> ppa->g.lun < geo->nr_luns && >> - ppa->g.pl < geo->nr_planes && >> + ppa->g.pl < geo->num_pln && >> ppa->g.blk < geo->nr_chks && >> - ppa->g.pg < geo->ws_per_chk && >> - ppa->g.sec < geo->sec_per_pg) >> + ppa->g.pg < geo->num_pg && >> + ppa->g.sec < geo->ws_min) >> continue; >> print_ppa(ppa, "boundary", i); >> diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c >> index 839c0b96466a..e276ace28c64 100644 >> --- a/drivers/nvme/host/lightnvm.c >> +++ b/drivers/nvme/host/lightnvm.c >> @@ -152,8 +152,8 @@ struct nvme_nvm_id12_addrf { >> __u8 blk_len; >> __u8 pg_offset; >> __u8 pg_len; >> - __u8 sect_offset; >> - __u8 sect_len; >> + __u8 sec_offset; >> + __u8 sec_len; >> __u8 res[4]; >> } __packed; >> @@ -254,106 +254,161 @@ static inline void _nvme_nvm_check_size(void) >> BUILD_BUG_ON(sizeof(struct nvme_nvm_id20) != NVME_IDENTIFY_DATA_SIZE); >> } >> -static int init_grp(struct nvm_id *nvm_id, struct nvme_nvm_id12 *id12) >> +static void nvme_nvm_set_addr_12(struct nvm_addr_format_12 *dst, >> + struct nvme_nvm_id12_addrf *src) >> +{ >> + dst->ch_len = src->ch_len; >> + dst->lun_len = src->lun_len; >> + dst->blk_len = src->blk_len; >> + dst->pg_len = src->pg_len; >> + dst->pln_len = src->pln_len; >> + dst->sect_len = src->sec_len; >> + >> + dst->ch_offset = src->ch_offset; >> + dst->lun_offset = src->lun_offset; >> + dst->blk_offset = src->blk_offset; >> + dst->pg_offset = src->pg_offset; >> + dst->pln_offset = src->pln_offset; >> + dst->sect_offset = src->sec_offset; >> + >> + dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset; >> + dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset; >> + dst->blk_mask = ((1ULL << dst->blk_len) - 1) << dst->blk_offset; >> + dst->pg_mask = ((1ULL << dst->pg_len) - 1) << dst->pg_offset; >> + 
dst->pln_mask = ((1ULL << dst->pln_len) - 1) << dst->pln_offset; >> + dst->sec_mask = ((1ULL << dst->sect_len) - 1) << dst->sect_offset; >> +} >> + >> +static int nvme_nvm_setup_12(struct nvme_nvm_id12 *id, >> + struct nvm_geo *geo) >> { >> struct nvme_nvm_id12_grp *src; >> int sec_per_pg, sec_per_pl, pg_per_blk; >> - if (id12->cgrps != 1) >> + if (id->cgrps != 1) >> return -EINVAL; >> - src = &id12->grp; >> + src = &id->grp; >> - nvm_id->mtype = src->mtype; >> - nvm_id->fmtype = src->fmtype; >> + if (src->mtype != 0) { >> + pr_err("nvm: memory type not supported\n"); >> + return -EINVAL; >> + } >> + >> + geo->ver_id = id->ver_id; >> + >> + geo->nr_chnls = src->num_ch; >> + geo->nr_luns = src->num_lun; >> + geo->all_luns = geo->nr_chnls * geo->nr_luns; >> - nvm_id->num_ch = src->num_ch; >> - nvm_id->num_lun = src->num_lun; >> + geo->nr_chks = le16_to_cpu(src->num_chk); >> - nvm_id->num_chk = le16_to_cpu(src->num_chk); >> - nvm_id->csecs = le16_to_cpu(src->csecs); >> - nvm_id->sos = le16_to_cpu(src->sos); >> + geo->csecs = le16_to_cpu(src->csecs); >> + geo->sos = le16_to_cpu(src->sos); >> pg_per_blk = le16_to_cpu(src->num_pg); >> - sec_per_pg = le16_to_cpu(src->fpg_sz) / nvm_id->csecs; >> + sec_per_pg = le16_to_cpu(src->fpg_sz) / geo->csecs; >> sec_per_pl = sec_per_pg * src->num_pln; >> - nvm_id->clba = sec_per_pl * pg_per_blk; >> - nvm_id->ws_per_chk = pg_per_blk; >> - >> - nvm_id->mpos = le32_to_cpu(src->mpos); >> - nvm_id->cpar = le16_to_cpu(src->cpar); >> - nvm_id->mccap = le32_to_cpu(src->mccap); >> - >> - nvm_id->ws_opt = nvm_id->ws_min = sec_per_pg; >> - nvm_id->ws_seq = NVM_IO_SNGL_ACCESS; >> - >> - if (nvm_id->mpos & 0x020202) { >> - nvm_id->ws_seq = NVM_IO_DUAL_ACCESS; >> - nvm_id->ws_opt <<= 1; >> - } else if (nvm_id->mpos & 0x040404) { >> - nvm_id->ws_seq = NVM_IO_QUAD_ACCESS; >> - nvm_id->ws_opt <<= 2; >> + geo->clba = sec_per_pl * pg_per_blk; >> + >> + geo->all_chunks = geo->all_luns * geo->nr_chks; >> + geo->total_secs = geo->clba * geo->all_chunks; 
>> + >> + geo->ws_min = sec_per_pg; >> + geo->ws_opt = sec_per_pg; >> + geo->mw_cunits = geo->ws_opt << 3; /* default to MLC safe values */ >> + >> + geo->mccap = le32_to_cpu(src->mccap); >> + >> + geo->trdt = le32_to_cpu(src->trdt); >> + geo->trdm = le32_to_cpu(src->trdm); >> + geo->tprt = le32_to_cpu(src->tprt); >> + geo->tprm = le32_to_cpu(src->tprm); >> + geo->tbet = le32_to_cpu(src->tbet); >> + geo->tbem = le32_to_cpu(src->tbem); >> + >> + /* 1.2 compatibility */ >> + geo->vmnt = id->vmnt; >> + geo->cap = le32_to_cpu(id->cap); >> + geo->dom = le32_to_cpu(id->dom); >> + >> + geo->mtype = src->mtype; >> + geo->fmtype = src->fmtype; >> + >> + geo->cpar = le16_to_cpu(src->cpar); >> + geo->mpos = le32_to_cpu(src->mpos); >> + >> + geo->plane_mode = NVM_PLANE_SINGLE; >> + >> + if (geo->mpos & 0x020202) { >> + geo->plane_mode = NVM_PLANE_DOUBLE; >> + geo->ws_opt <<= 1; >> + } else if (geo->mpos & 0x040404) { >> + geo->plane_mode = NVM_PLANE_QUAD; >> + geo->ws_opt <<= 2; >> } >> - nvm_id->trdt = le32_to_cpu(src->trdt); >> - nvm_id->trdm = le32_to_cpu(src->trdm); >> - nvm_id->tprt = le32_to_cpu(src->tprt); >> - nvm_id->tprm = le32_to_cpu(src->tprm); >> - nvm_id->tbet = le32_to_cpu(src->tbet); >> - nvm_id->tbem = le32_to_cpu(src->tbem); >> - >> - /* 1.2 compatibility */ >> - nvm_id->num_pln = src->num_pln; >> - nvm_id->num_pg = le16_to_cpu(src->num_pg); >> - nvm_id->fpg_sz = le16_to_cpu(src->fpg_sz); >> + geo->num_pln = src->num_pln; >> + geo->num_pg = le16_to_cpu(src->num_pg); >> + geo->fpg_sz = le16_to_cpu(src->fpg_sz); >> + >> + nvme_nvm_set_addr_12((struct nvm_addr_format_12 *)&geo->addrf, >> + &id->ppaf); >> return 0; >> } >> -static int nvme_nvm_setup_12(struct nvm_dev *nvmdev, struct nvm_id *nvm_id, >> - struct nvme_nvm_id12 *id) >> +static void nvme_nvm_set_addr_20(struct nvm_addr_format *dst, >> + struct nvme_nvm_id20_addrf *src) >> { >> - nvm_id->ver_id = id->ver_id; >> - nvm_id->vmnt = id->vmnt; >> - nvm_id->cap = le32_to_cpu(id->cap); >> - nvm_id->dom = 
le32_to_cpu(id->dom); >> - memcpy(&nvm_id->ppaf, &id->ppaf, >> - sizeof(struct nvm_addr_format)); >> - >> - return init_grp(nvm_id, id); >> + dst->ch_len = src->grp_len; >> + dst->lun_len = src->pu_len; >> + dst->chk_len = src->chk_len; >> + dst->sec_len = src->lba_len; >> + >> + dst->sec_offset = 0; >> + dst->chk_offset = dst->sec_len; >> + dst->lun_offset = dst->chk_offset + dst->chk_len; >> + dst->ch_offset = dst->lun_offset + dst->lun_len; >> + >> + dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset; >> + dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset; >> + dst->chk_mask = ((1ULL << dst->chk_len) - 1) << dst->chk_offset; >> + dst->sec_mask = ((1ULL << dst->sec_len) - 1) << dst->sec_offset; >> } >> -static int nvme_nvm_setup_20(struct nvm_dev *nvmdev, struct nvm_id *nvm_id, >> - struct nvme_nvm_id20 *id) >> +static int nvme_nvm_setup_20(struct nvme_nvm_id20 *id, >> + struct nvm_geo *geo) >> { >> - nvm_id->ver_id = id->mjr; >> + geo->ver_id = id->mjr; >> + >> + geo->nr_chnls = le16_to_cpu(id->num_grp); >> + geo->nr_luns = le16_to_cpu(id->num_pu); >> + geo->all_luns = geo->nr_chnls * geo->nr_luns; >> - nvm_id->num_ch = le16_to_cpu(id->num_grp); >> - nvm_id->num_lun = le16_to_cpu(id->num_pu); >> - nvm_id->num_chk = le32_to_cpu(id->num_chk); >> - nvm_id->clba = le32_to_cpu(id->clba); >> + geo->nr_chks = le32_to_cpu(id->num_chk); >> + geo->clba = le32_to_cpu(id->clba); >> - nvm_id->ws_min = le32_to_cpu(id->ws_min); >> - nvm_id->ws_opt = le32_to_cpu(id->ws_opt); >> - nvm_id->mw_cunits = le32_to_cpu(id->mw_cunits); >> + geo->all_chunks = geo->all_luns * geo->nr_chks; >> + geo->total_secs = geo->clba * geo->all_chunks; >> - nvm_id->trdt = le32_to_cpu(id->trdt); >> - nvm_id->trdm = le32_to_cpu(id->trdm); >> - nvm_id->tprt = le32_to_cpu(id->twrt); >> - nvm_id->tprm = le32_to_cpu(id->twrm); >> - nvm_id->tbet = le32_to_cpu(id->tcrst); >> - nvm_id->tbem = le32_to_cpu(id->tcrsm); >> + geo->ws_min = le32_to_cpu(id->ws_min); >> + geo->ws_opt = 
le32_to_cpu(id->ws_opt); >> + geo->mw_cunits = le32_to_cpu(id->mw_cunits); >> - /* calculated values */ >> - nvm_id->ws_per_chk = nvm_id->clba / nvm_id->ws_min; >> + geo->trdt = le32_to_cpu(id->trdt); >> + geo->trdm = le32_to_cpu(id->trdm); >> + geo->tprt = le32_to_cpu(id->twrt); >> + geo->tprm = le32_to_cpu(id->twrm); >> + geo->tbet = le32_to_cpu(id->tcrst); >> + geo->tbem = le32_to_cpu(id->tcrsm); >> - /* 1.2 compatibility */ >> - nvm_id->ws_seq = NVM_IO_SNGL_ACCESS; >> + nvme_nvm_set_addr_20(&geo->addrf, &id->lbaf); >> return 0; >> } >> -static int nvme_nvm_identity(struct nvm_dev *nvmdev, struct nvm_id *nvm_id) >> +static int nvme_nvm_identity(struct nvm_dev *nvmdev) >> { >> struct nvme_ns *ns = nvmdev->q->queuedata; >> struct nvme_nvm_id12 *id; >> @@ -380,18 +435,18 @@ static int nvme_nvm_identity(struct nvm_dev *nvmdev, struct nvm_id *nvm_id) >> */ >> switch (id->ver_id) { >> case 1: >> - ret = nvme_nvm_setup_12(nvmdev, nvm_id, id); >> + ret = nvme_nvm_setup_12(id, &nvmdev->geo); >> break; >> case 2: >> - ret = nvme_nvm_setup_20(nvmdev, nvm_id, >> - (struct nvme_nvm_id20 *)id); >> + ret = nvme_nvm_setup_20((struct nvme_nvm_id20 *)id, >> + &nvmdev->geo); >> break; >> default: >> - dev_err(ns->ctrl->device, >> - "OCSSD revision not supported (%d)\n", >> - nvm_id->ver_id); >> + dev_err(ns->ctrl->device, "OCSSD revision not supported (%d)\n", >> + id->ver_id); >> ret = -EINVAL; >> } >> + >> out: >> kfree(id); >> return ret; >> @@ -406,7 +461,7 @@ static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa, >> struct nvme_ctrl *ctrl = ns->ctrl; >> struct nvme_nvm_command c = {}; >> struct nvme_nvm_bb_tbl *bb_tbl; >> - int nr_blks = geo->nr_chks * geo->plane_mode; >> + int nr_blks = geo->nr_chks * geo->num_pln; >> int tblsz = sizeof(struct nvme_nvm_bb_tbl) + nr_blks; >> int ret = 0; >> @@ -447,7 +502,7 @@ static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa, >> goto out; >> } >> - memcpy(blks, bb_tbl->blk, geo->nr_chks * 
geo->plane_mode); >> + memcpy(blks, bb_tbl->blk, geo->nr_chks * geo->num_pln); >> out: >> kfree(bb_tbl); >> return ret; >> @@ -815,9 +870,10 @@ int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg) >> void nvme_nvm_update_nvm_info(struct nvme_ns *ns) >> { >> struct nvm_dev *ndev = ns->ndev; >> + struct nvm_geo *geo = &ndev->geo; >> - ndev->identity.csecs = ndev->geo.sec_size = 1 << ns->lba_shift; >> - ndev->identity.sos = ndev->geo.oob_size = ns->ms; >> + geo->csecs = 1 << ns->lba_shift; >> + geo->sos = ns->ms; >> } >> int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node) >> @@ -850,23 +906,22 @@ static ssize_t nvm_dev_attr_show(struct device *dev, >> { >> struct nvme_ns *ns = nvme_get_ns_from_dev(dev); >> struct nvm_dev *ndev = ns->ndev; >> - struct nvm_id *id; >> + struct nvm_geo *geo = &ndev->geo; >> struct attribute *attr; >> if (!ndev) >> return 0; >> - id = &ndev->identity; >> attr = &dattr->attr; >> if (strcmp(attr->name, "version") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->ver_id); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->ver_id); >> } else if (strcmp(attr->name, "capabilities") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->cap); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->cap); >> } else if (strcmp(attr->name, "read_typ") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->trdt); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->trdt); >> } else if (strcmp(attr->name, "read_max") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->trdm); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->trdm); >> } else { >> return scnprintf(page, >> PAGE_SIZE, >> @@ -875,75 +930,79 @@ static ssize_t nvm_dev_attr_show(struct device *dev, >> } >> } >> +static ssize_t nvm_dev_attr_show_ppaf(struct nvm_addr_format_12 *ppaf, >> + char *page) >> +{ >> + return scnprintf(page, PAGE_SIZE, >> + "0x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x\n", >> + ppaf->ch_offset, 
ppaf->ch_len,
>> +			ppaf->lun_offset, ppaf->lun_len,
>> +			ppaf->pln_offset, ppaf->pln_len,
>> +			ppaf->blk_offset, ppaf->blk_len,
>> +			ppaf->pg_offset, ppaf->pg_len,
>> +			ppaf->sect_offset, ppaf->sect_len);
>> +}
>> +
>> static ssize_t nvm_dev_attr_show_12(struct device *dev,
>> 		struct device_attribute *dattr, char *page)
>> {
>> 	struct nvme_ns *ns = nvme_get_ns_from_dev(dev);
>> 	struct nvm_dev *ndev = ns->ndev;
>> -	struct nvm_id *id;
>> +	struct nvm_geo *geo = &ndev->geo;
>> 	struct attribute *attr;
>> 	if (!ndev)
>> 		return 0;
>> -	id = &ndev->identity;
>> 	attr = &dattr->attr;
>> 	if (strcmp(attr->name, "vendor_opcode") == 0) {
>> -		return scnprintf(page, PAGE_SIZE, "%u\n", id->vmnt);
>> +		return scnprintf(page, PAGE_SIZE, "%u\n", geo->vmnt);
>> 	} else if (strcmp(attr->name, "device_mode") == 0) {
>> -		return scnprintf(page, PAGE_SIZE, "%u\n", id->dom);
>> +		return scnprintf(page, PAGE_SIZE, "%u\n", geo->dom);
>> 	/* kept for compatibility */
>> 	} else if (strcmp(attr->name, "media_manager") == 0) {
>> 		return scnprintf(page, PAGE_SIZE, "%s\n", "gennvm");
>> 	} else if (strcmp(attr->name, "ppa_format") == 0) {
>> -		return scnprintf(page, PAGE_SIZE,
>> -			"0x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x\n",
>> -			id->ppaf.ch_offset, id->ppaf.ch_len,
>> -			id->ppaf.lun_offset, id->ppaf.lun_len,
>> -			id->ppaf.pln_offset, id->ppaf.pln_len,
>> -			id->ppaf.blk_offset, id->ppaf.blk_len,
>> -			id->ppaf.pg_offset, id->ppaf.pg_len,
>> -			id->ppaf.sect_offset, id->ppaf.sect_len);
>> +		return nvm_dev_attr_show_ppaf((void *)&geo->addrf, page);
>
> Why does the code here cast to void *, and not to the address format
> data structure?
>
> Have you thought about doing the cast directly here, instead of making
> a function for it?

I like it better to be tight instead of having to break the line for
this. Same point you make below.
> >> } else if (strcmp(attr->name, "media_type") == 0) { /* u8 */ >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->mtype); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->mtype); >> } else if (strcmp(attr->name, "flash_media_type") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->fmtype); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->fmtype); >> } else if (strcmp(attr->name, "num_channels") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_ch); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chnls); >> } else if (strcmp(attr->name, "num_luns") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_lun); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_luns); >> } else if (strcmp(attr->name, "num_planes") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_pln); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->num_pln); >> } else if (strcmp(attr->name, "num_blocks") == 0) { /* u16 */ >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_chk); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chks); >> } else if (strcmp(attr->name, "num_pages") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_pg); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->num_pg); >> } else if (strcmp(attr->name, "page_size") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->fpg_sz); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->fpg_sz); >> } else if (strcmp(attr->name, "hw_sector_size") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->csecs); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->csecs); >> } else if (strcmp(attr->name, "oob_sector_size") == 0) {/* u32 */ >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->sos); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->sos); >> } else if (strcmp(attr->name, "prog_typ") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprt); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tprt); >> } else if 
(strcmp(attr->name, "prog_max") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprm); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tprm); >> } else if (strcmp(attr->name, "erase_typ") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbet); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tbet); >> } else if (strcmp(attr->name, "erase_max") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbem); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tbem); >> } else if (strcmp(attr->name, "multiplane_modes") == 0) { >> - return scnprintf(page, PAGE_SIZE, "0x%08x\n", id->mpos); >> + return scnprintf(page, PAGE_SIZE, "0x%08x\n", geo->mpos); >> } else if (strcmp(attr->name, "media_capabilities") == 0) { >> - return scnprintf(page, PAGE_SIZE, "0x%08x\n", id->mccap); >> + return scnprintf(page, PAGE_SIZE, "0x%08x\n", geo->mccap); >> } else if (strcmp(attr->name, "max_phys_secs") == 0) { >> return scnprintf(page, PAGE_SIZE, "%u\n", NVM_MAX_VLBA); >> } else { >> - return scnprintf(page, >> - PAGE_SIZE, >> - "Unhandled attr(%s) in `nvm_dev_attr_show_12`\n", >> - attr->name); >> + return scnprintf(page, PAGE_SIZE, >> + "Unhandled attr(%s) in `nvm_dev_attr_show_12`\n", >> + attr->name); >> } >> } >> @@ -952,42 +1011,40 @@ static ssize_t nvm_dev_attr_show_20(struct device *dev, >> { >> struct nvme_ns *ns = nvme_get_ns_from_dev(dev); >> struct nvm_dev *ndev = ns->ndev; >> - struct nvm_id *id; >> + struct nvm_geo *geo = &ndev->geo; >> struct attribute *attr; >> if (!ndev) >> return 0; >> - id = &ndev->identity; >> attr = &dattr->attr; >> if (strcmp(attr->name, "groups") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_ch); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chnls); >> } else if (strcmp(attr->name, "punits") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_lun); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_luns); >> } else if (strcmp(attr->name, "chunks") == 0) { >> - return 
scnprintf(page, PAGE_SIZE, "%u\n", id->num_chk); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chks); >> } else if (strcmp(attr->name, "clba") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->clba); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->clba); >> } else if (strcmp(attr->name, "ws_min") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->ws_min); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->ws_min); >> } else if (strcmp(attr->name, "ws_opt") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->ws_opt); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->ws_opt); >> } else if (strcmp(attr->name, "mw_cunits") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->mw_cunits); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->mw_cunits); >> } else if (strcmp(attr->name, "write_typ") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprt); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tprt); >> } else if (strcmp(attr->name, "write_max") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprm); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tprm); >> } else if (strcmp(attr->name, "reset_typ") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbet); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tbet); >> } else if (strcmp(attr->name, "reset_max") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbem); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tbem); >> } else { >> - return scnprintf(page, >> - PAGE_SIZE, >> - "Unhandled attr(%s) in `nvm_dev_attr_show_20`\n", >> - attr->name); >> + return scnprintf(page, PAGE_SIZE, >> + "Unhandled attr(%s) in `nvm_dev_attr_show_20`\n", >> + attr->name); >> } >> } >> @@ -1106,10 +1163,13 @@ static const struct attribute_group nvm_dev_attr_group_20 = { >> int nvme_nvm_register_sysfs(struct nvme_ns *ns) >> { >> - if (!ns->ndev) >> + struct nvm_dev *ndev = ns->ndev; >> + struct nvm_geo *geo = &ndev->geo; >> + >> + if (!ndev) 
>> return -EINVAL; >> - switch (ns->ndev->identity.ver_id) { >> + switch (geo->ver_id) { >> case 1: >> return sysfs_create_group(&disk_to_dev(ns->disk)->kobj, >> &nvm_dev_attr_group_12); >> @@ -1123,7 +1183,10 @@ int nvme_nvm_register_sysfs(struct nvme_ns *ns) >> void nvme_nvm_unregister_sysfs(struct nvme_ns *ns) >> { >> - switch (ns->ndev->identity.ver_id) { >> + struct nvm_dev *ndev = ns->ndev; >> + struct nvm_geo *geo = &ndev->geo; >> + >> + switch (geo->ver_id) { >> case 1: >> sysfs_remove_group(&disk_to_dev(ns->disk)->kobj, >> &nvm_dev_attr_group_12); >> diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h >> index e55b10573c99..16255fcd5250 100644 >> --- a/include/linux/lightnvm.h >> +++ b/include/linux/lightnvm.h >> @@ -50,7 +50,7 @@ struct nvm_id; >> struct nvm_dev; >> struct nvm_tgt_dev; >> -typedef int (nvm_id_fn)(struct nvm_dev *, struct nvm_id *); >> +typedef int (nvm_id_fn)(struct nvm_dev *); >> typedef int (nvm_op_bb_tbl_fn)(struct nvm_dev *, struct ppa_addr, u8 *); >> typedef int (nvm_op_set_bb_fn)(struct nvm_dev *, struct ppa_addr *, int, int); >> typedef int (nvm_submit_io_fn)(struct nvm_dev *, struct nvm_rq *); >> @@ -152,62 +152,48 @@ struct nvm_id_lp_tbl { >> struct nvm_id_lp_mlc mlc; >> }; >> -struct nvm_addr_format { >> - u8 ch_offset; >> +struct nvm_addr_format_12 { > > I can see in a couple of places a statement has to be over two lines due > to the length of writing out nvm_addr_format_12, would it make sense to > shorthand it to nvm_addrf_12? Good idea. 
> >> u8 ch_len; >> - u8 lun_offset; >> u8 lun_len; >> - u8 pln_offset; >> + u8 blk_len; >> + u8 pg_len; >> u8 pln_len; >> + u8 sect_len; >> + >> + u8 ch_offset; >> + u8 lun_offset; >> u8 blk_offset; >> - u8 blk_len; >> u8 pg_offset; >> - u8 pg_len; >> + u8 pln_offset; >> u8 sect_offset; >> - u8 sect_len; >> -}; >> - >> -struct nvm_id { >> - u8 ver_id; >> - u8 vmnt; >> - u32 cap; >> - u32 dom; >> - >> - struct nvm_addr_format ppaf; >> - >> - u8 num_ch; >> - u8 num_lun; >> - u16 num_chk; >> - u16 clba; >> - u16 csecs; >> - u16 sos; >> - >> - u32 ws_min; >> - u32 ws_opt; >> - u32 mw_cunits; >> - u32 trdt; >> - u32 trdm; >> - u32 tprt; >> - u32 tprm; >> - u32 tbet; >> - u32 tbem; >> - u32 mpos; >> - u32 mccap; >> - u16 cpar; >> - >> - /* calculated values */ >> - u16 ws_seq; >> - u16 ws_per_chk; >> - >> - /* 1.2 compatibility */ >> - u8 mtype; >> - u8 fmtype; >> + u64 ch_mask; >> + u64 lun_mask; >> + u64 blk_mask; >> + u64 pg_mask; >> + u64 pln_mask; >> + u64 sec_mask; >> +}; >> - u8 num_pln; >> - u16 num_pg; >> - u16 fpg_sz; >> -} __packed; >> +struct nvm_addr_format { >> + u8 ch_len; >> + u8 lun_len; >> + u8 chk_len; >> + u8 sec_len; >> + u8 rsv_len[2]; >> + >> + u8 ch_offset; >> + u8 lun_offset; >> + u8 chk_offset; >> + u8 sec_offset; >> + u8 rsv_off[2]; >> + >> + u64 ch_mask; >> + u64 lun_mask; >> + u64 chk_mask; >> + u64 sec_mask; >> + u64 rsv_mask[2]; >> +}; >> struct nvm_target { >> struct list_head list; >> @@ -274,36 +260,63 @@ enum { >> NVM_BLK_ST_BAD = 0x8, /* Bad block */ >> }; >> - >> -/* Device generic information */ >> +/* Instance geometry */ >> struct nvm_geo { >> - /* generic geometry */ >> + /* device reported version */ >> + u8 ver_id; >> + >> + /* instance specific geometry */ >> int nr_chnls; >> - int all_luns; /* across channels */ >> - int nr_luns; /* per channel */ >> - int nr_chks; /* per lun */ >> + int nr_luns; /* per channel */ >> - int sec_size; >> - int oob_size; >> - int mccap; >> + /* calculated values */ >> + int all_luns; /* across 
channels */ >> + int all_chunks; /* across channels */ >> - int sec_per_chk; >> - int sec_per_lun; >> + int op; /* over-provision in instance */ >> - int ws_min; >> - int ws_opt; >> - int ws_seq; >> - int ws_per_chk; >> + sector_t total_secs; /* across channels */ >> - int op; >> + /* chunk geometry */ >> + u32 nr_chks; /* chunks per lun */ >> + u32 clba; /* sectors per chunk */ >> + u16 csecs; /* sector size */ >> + u16 sos; /* out-of-band area size */ >> - struct nvm_addr_format ppaf; >> + /* device write constrains */ >> + u32 ws_min; /* minimum write size */ >> + u32 ws_opt; /* optimal write size */ >> + u32 mw_cunits; /* distance required for successful read */ >> - /* Legacy 1.2 specific geometry */ >> - int plane_mode; /* drive device in single, double or quad mode */ >> - int nr_planes; >> - int sec_per_pg; /* only sectors for a single page */ >> - int sec_per_pl; /* all sectors across planes */ >> + /* device capabilities */ >> + u32 mccap; >> + >> + /* device timings */ >> + u32 trdt; /* Avg. Tread (ns) */ >> + u32 trdm; /* Max Tread (ns) */ >> + u32 tprt; /* Avg. Tprog (ns) */ >> + u32 tprm; /* Max Tprog (ns) */ >> + u32 tbet; /* Avg. 
Terase (ns) */ >> + u32 tbem; /* Max Terase (ns) */ >> + >> + /* generic address format */ >> + struct nvm_addr_format addrf; >> + >> + /* 1.2 compatibility */ >> + u8 vmnt; >> + u32 cap; >> + u32 dom; >> + >> + u8 mtype; >> + u8 fmtype; >> + >> + u16 cpar; >> + u32 mpos; >> + >> + u8 num_pln; >> + u8 plane_mode; >> + u16 num_pg; >> + u16 fpg_sz; >> }; >> /* sub-device structure */ >> @@ -314,9 +327,6 @@ struct nvm_tgt_dev { >> /* Base ppas for target LUNs */ >> struct ppa_addr *luns; >> - sector_t total_secs; >> - >> - struct nvm_id identity; >> struct request_queue *q; >> struct nvm_dev *parent; >> @@ -331,13 +341,9 @@ struct nvm_dev { >> /* Device information */ >> struct nvm_geo geo; >> - unsigned long total_secs; >> - >> unsigned long *lun_map; >> void *dma_pool; >> - struct nvm_id identity; >> - >> /* Backend device */ >> struct request_queue *q; >> char name[DISK_NAME_LEN]; >> @@ -357,14 +363,16 @@ static inline struct ppa_addr generic_to_dev_addr(struct nvm_tgt_dev *tgt_dev, >> struct ppa_addr r) >> { >> struct nvm_geo *geo = &tgt_dev->geo; >> + struct nvm_addr_format_12 *ppaf = >> + (struct nvm_addr_format_12 *)&geo->addrf; >> struct ppa_addr l; >> - l.ppa = ((u64)r.g.blk) << geo->ppaf.blk_offset; >> - l.ppa |= ((u64)r.g.pg) << geo->ppaf.pg_offset; >> - l.ppa |= ((u64)r.g.sec) << geo->ppaf.sect_offset; >> - l.ppa |= ((u64)r.g.pl) << geo->ppaf.pln_offset; >> - l.ppa |= ((u64)r.g.lun) << geo->ppaf.lun_offset; >> - l.ppa |= ((u64)r.g.ch) << geo->ppaf.ch_offset; >> + l.ppa = ((u64)r.g.ch) << ppaf->ch_offset; >> + l.ppa |= ((u64)r.g.lun) << ppaf->lun_offset; >> + l.ppa |= ((u64)r.g.blk) << ppaf->blk_offset; >> + l.ppa |= ((u64)r.g.pg) << ppaf->pg_offset; >> + l.ppa |= ((u64)r.g.pl) << ppaf->pln_offset; >> + l.ppa |= ((u64)r.g.sec) << ppaf->sect_offset; >> return l; >> } >> @@ -373,24 +381,18 @@ static inline struct ppa_addr dev_to_generic_addr(struct nvm_tgt_dev *tgt_dev, >> struct ppa_addr r) >> { >> struct nvm_geo *geo = &tgt_dev->geo; >> + struct 
nvm_addr_format_12 *ppaf = >> + (struct nvm_addr_format_12 *)&geo->addrf; >> struct ppa_addr l; >> l.ppa = 0; >> - /* >> - * (r.ppa << X offset) & X len bitmask. X eq. blk, pg, etc. >> - */ >> - l.g.blk = (r.ppa >> geo->ppaf.blk_offset) & >> - (((1 << geo->ppaf.blk_len) - 1)); >> - l.g.pg |= (r.ppa >> geo->ppaf.pg_offset) & >> - (((1 << geo->ppaf.pg_len) - 1)); >> - l.g.sec |= (r.ppa >> geo->ppaf.sect_offset) & >> - (((1 << geo->ppaf.sect_len) - 1)); >> - l.g.pl |= (r.ppa >> geo->ppaf.pln_offset) & >> - (((1 << geo->ppaf.pln_len) - 1)); >> - l.g.lun |= (r.ppa >> geo->ppaf.lun_offset) & >> - (((1 << geo->ppaf.lun_len) - 1)); >> - l.g.ch |= (r.ppa >> geo->ppaf.ch_offset) & >> - (((1 << geo->ppaf.ch_len) - 1)); >> + >> + l.g.ch = (r.ppa & ppaf->ch_mask) >> ppaf->ch_offset; >> + l.g.lun = (r.ppa & ppaf->lun_mask) >> ppaf->lun_offset; >> + l.g.blk = (r.ppa & ppaf->blk_mask) >> ppaf->blk_offset; >> + l.g.pg = (r.ppa & ppaf->pg_mask) >> ppaf->pg_offset; >> + l.g.pl = (r.ppa & ppaf->pln_mask) >> ppaf->pln_offset; >> + l.g.sec = (r.ppa & ppaf->sec_mask) >> ppaf->sect_offset; >> return l; >> } > > Looks good to me, [-- Attachment #2: Message signed with OpenPGP --] [-- Type: application/pgp-signature, Size: 833 bytes --] ^ permalink raw reply [flat|nested] 71+ messages in thread
* [PATCH 01/15] lightnvm: simplify geometry structure. @ 2018-03-02 11:15 ` Javier González 0 siblings, 0 replies; 71+ messages in thread From: Javier González @ 2018-03-02 11:15 UTC (permalink / raw) > On 1 Mar 2018, at 11.22, Matias Bjørling <mb@lightnvm.io> wrote: > > On 02/28/2018 04:49 PM, Javier González wrote: >> Currently, the device geometry is stored redundantly in the nvm_id and >> nvm_geo structures at a device level. Moreover, when instantiating >> targets on a specific number of LUNs, these structures are replicated >> and manually modified to fit the instance channel and LUN partitioning. >> Instead, create a generic geometry around nvm_geo, which can be used by >> (i) the underlying device to describe the geometry of the whole device, >> and (ii) instances to describe their geometry independently. >> Signed-off-by: Javier González <javier at cnexlabs.com> >> --- >> drivers/lightnvm/core.c | 70 +++----- >> drivers/lightnvm/pblk-core.c | 16 +- >> drivers/lightnvm/pblk-gc.c | 2 +- >> drivers/lightnvm/pblk-init.c | 119 +++++++------- >> drivers/lightnvm/pblk-read.c | 2 +- >> drivers/lightnvm/pblk-recovery.c | 14 +- >> drivers/lightnvm/pblk-rl.c | 2 +- >> drivers/lightnvm/pblk-sysfs.c | 39 +++-- >> drivers/lightnvm/pblk-write.c | 2 +- >> drivers/lightnvm/pblk.h | 87 +++++----- >> drivers/nvme/host/lightnvm.c | 341 +++++++++++++++++++++++---------------- >> include/linux/lightnvm.h | 200 +++++++++++------------ >> 12 files changed, 465 insertions(+), 429 deletions(-) >> diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c >> index 19c46ebb1b91..9a417d9cdf0c 100644 >> --- a/drivers/lightnvm/core.c >> +++ b/drivers/lightnvm/core.c >> @@ -155,7 +155,7 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev, >> int blun = lun_begin % dev->geo.nr_luns; >> int lunid = 0; >> int lun_balanced = 1; >> - int prev_nr_luns; >> + int sec_per_lun, prev_nr_luns; >> int i, j; >> nr_chnls = (nr_chnls_mod == 0) ?
nr_chnls : nr_chnls + 1; >> @@ -215,18 +215,23 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev, >> if (!tgt_dev) >> goto err_ch; >> + /* Inherit device geometry from parent */ >> memcpy(&tgt_dev->geo, &dev->geo, sizeof(struct nvm_geo)); >> + >> /* Target device only owns a portion of the physical device */ >> tgt_dev->geo.nr_chnls = nr_chnls; >> - tgt_dev->geo.all_luns = nr_luns; >> tgt_dev->geo.nr_luns = (lun_balanced) ? prev_nr_luns : -1; >> + tgt_dev->geo.all_luns = nr_luns; >> + tgt_dev->geo.all_chunks = nr_luns * dev->geo.nr_chks; >> + >> tgt_dev->geo.op = op; >> - tgt_dev->total_secs = nr_luns * tgt_dev->geo.sec_per_lun; >> + >> + sec_per_lun = dev->geo.clba * dev->geo.nr_chks; >> + tgt_dev->geo.total_secs = nr_luns * sec_per_lun; >> + >> tgt_dev->q = dev->q; >> tgt_dev->map = dev_map; >> tgt_dev->luns = luns; >> - memcpy(&tgt_dev->identity, &dev->identity, sizeof(struct nvm_id)); >> - >> tgt_dev->parent = dev; >> return tgt_dev; >> @@ -296,8 +301,6 @@ static int __nvm_config_simple(struct nvm_dev *dev, >> static int __nvm_config_extended(struct nvm_dev *dev, >> struct nvm_ioctl_create_extended *e) >> { >> - struct nvm_geo *geo = &dev->geo; >> - >> if (e->lun_begin == 0xFFFF && e->lun_end == 0xFFFF) { >> e->lun_begin = 0; >> e->lun_end = dev->geo.all_luns - 1; >> @@ -311,7 +314,7 @@ static int __nvm_config_extended(struct nvm_dev *dev, >> return -EINVAL; >> } >> - return nvm_config_check_luns(geo, e->lun_begin, e->lun_end); >> + return nvm_config_check_luns(&dev->geo, e->lun_begin, e->lun_end); >> } >> static int nvm_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create) >> @@ -406,7 +409,7 @@ static int nvm_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create) >> tqueue->queuedata = targetdata; >> blk_queue_max_hw_sectors(tqueue, >> - (dev->geo.sec_size >> 9) * NVM_MAX_VLBA); >> + (dev->geo.csecs >> 9) * NVM_MAX_VLBA); >> set_capacity(tdisk, tt->capacity(targetdata)); >> add_disk(tdisk); >> @@ -841,40 +844,9 @@ 
EXPORT_SYMBOL(nvm_get_tgt_bb_tbl); >> static int nvm_core_init(struct nvm_dev *dev) >> { >> - struct nvm_id *id = &dev->identity; >> struct nvm_geo *geo = &dev->geo; >> int ret; >> - memcpy(&geo->ppaf, &id->ppaf, sizeof(struct nvm_addr_format)); >> - >> - if (id->mtype != 0) { >> - pr_err("nvm: memory type not supported\n"); >> - return -EINVAL; >> - } >> - >> - /* Whole device values */ >> - geo->nr_chnls = id->num_ch; >> - geo->nr_luns = id->num_lun; >> - >> - /* Generic device geometry values */ >> - geo->ws_min = id->ws_min; >> - geo->ws_opt = id->ws_opt; >> - geo->ws_seq = id->ws_seq; >> - geo->ws_per_chk = id->ws_per_chk; >> - geo->nr_chks = id->num_chk; >> - geo->mccap = id->mccap; >> - >> - geo->sec_per_chk = id->clba; >> - geo->sec_per_lun = geo->sec_per_chk * geo->nr_chks; >> - geo->all_luns = geo->nr_luns * geo->nr_chnls; >> - >> - /* 1.2 spec device geometry values */ >> - geo->plane_mode = 1 << geo->ws_seq; >> - geo->nr_planes = geo->ws_opt / geo->ws_min; >> - geo->sec_per_pg = geo->ws_min; >> - geo->sec_per_pl = geo->sec_per_pg * geo->nr_planes; >> - >> - dev->total_secs = geo->all_luns * geo->sec_per_lun; >> dev->lun_map = kcalloc(BITS_TO_LONGS(geo->all_luns), >> sizeof(unsigned long), GFP_KERNEL); >> if (!dev->lun_map) >> @@ -913,16 +885,14 @@ static int nvm_init(struct nvm_dev *dev) >> struct nvm_geo *geo = &dev->geo; >> int ret = -EINVAL; >> - if (dev->ops->identity(dev, &dev->identity)) { >> + if (dev->ops->identity(dev)) { >> pr_err("nvm: device could not be identified\n"); >> goto err; >> } >> - if (dev->identity.ver_id != 1 && dev->identity.ver_id != 2) { >> - pr_err("nvm: device ver_id %d not supported by kernel.\n", >> - dev->identity.ver_id); >> - goto err; >> - } >> + pr_debug("nvm: ver:%u nvm_vendor:%x\n", >> + geo->ver_id, >> + geo->vmnt); >> ret = nvm_core_init(dev); >> if (ret) { >> @@ -930,10 +900,10 @@ static int nvm_init(struct nvm_dev *dev) >> goto err; >> } >> - pr_info("nvm: registered %s [%u/%u/%u/%u/%u/%u]\n", >> - dev->name, 
geo->sec_per_pg, geo->nr_planes, >> - geo->ws_per_chk, geo->nr_chks, >> - geo->all_luns, geo->nr_chnls); >> + pr_info("nvm: registered %s [%u/%u/%u/%u/%u]\n", >> + dev->name, geo->ws_min, geo->ws_opt, >> + geo->nr_chks, geo->all_luns, >> + geo->nr_chnls); >> return 0; >> err: >> pr_err("nvm: failed to initialize nvm\n"); >> diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c >> index 8848443a0721..169589ddd457 100644 >> --- a/drivers/lightnvm/pblk-core.c >> +++ b/drivers/lightnvm/pblk-core.c >> @@ -613,7 +613,7 @@ static int pblk_line_submit_emeta_io(struct pblk *pblk, struct pblk_line *line, >> memset(&rqd, 0, sizeof(struct nvm_rq)); >> rq_ppas = pblk_calc_secs(pblk, left_ppas, 0); >> - rq_len = rq_ppas * geo->sec_size; >> + rq_len = rq_ppas * geo->csecs; >> bio = pblk_bio_map_addr(pblk, emeta_buf, rq_ppas, rq_len, >> l_mg->emeta_alloc_type, GFP_KERNEL); >> @@ -722,7 +722,7 @@ u64 pblk_line_smeta_start(struct pblk *pblk, struct pblk_line *line) >> if (bit >= lm->blk_per_line) >> return -1; >> - return bit * geo->sec_per_pl; >> + return bit * geo->ws_opt; >> } >> static int pblk_line_submit_smeta_io(struct pblk *pblk, struct pblk_line *line, >> @@ -1035,19 +1035,19 @@ static int pblk_line_init_bb(struct pblk *pblk, struct pblk_line *line, >> /* Capture bad block information on line mapping bitmaps */ >> while ((bit = find_next_bit(line->blk_bitmap, lm->blk_per_line, >> bit + 1)) < lm->blk_per_line) { >> - off = bit * geo->sec_per_pl; >> + off = bit * geo->ws_opt; >> bitmap_shift_left(l_mg->bb_aux, l_mg->bb_template, off, >> lm->sec_per_line); >> bitmap_or(line->map_bitmap, line->map_bitmap, l_mg->bb_aux, >> lm->sec_per_line); >> - line->sec_in_line -= geo->sec_per_chk; >> + line->sec_in_line -= geo->clba; >> if (bit >= lm->emeta_bb) >> nr_bb++; >> } >> /* Mark smeta metadata sectors as bad sectors */ >> bit = find_first_zero_bit(line->blk_bitmap, lm->blk_per_line); >> - off = bit * geo->sec_per_pl; >> + off = bit * geo->ws_opt; >> 
bitmap_set(line->map_bitmap, off, lm->smeta_sec); >> line->sec_in_line -= lm->smeta_sec; >> line->smeta_ssec = off; >> @@ -1066,10 +1066,10 @@ static int pblk_line_init_bb(struct pblk *pblk, struct pblk_line *line, >> emeta_secs = lm->emeta_sec[0]; >> off = lm->sec_per_line; >> while (emeta_secs) { >> - off -= geo->sec_per_pl; >> + off -= geo->ws_opt; >> if (!test_bit(off, line->invalid_bitmap)) { >> - bitmap_set(line->invalid_bitmap, off, geo->sec_per_pl); >> - emeta_secs -= geo->sec_per_pl; >> + bitmap_set(line->invalid_bitmap, off, geo->ws_opt); >> + emeta_secs -= geo->ws_opt; >> } >> } >> diff --git a/drivers/lightnvm/pblk-gc.c b/drivers/lightnvm/pblk-gc.c >> index 320f99af99e9..6851a5c67189 100644 >> --- a/drivers/lightnvm/pblk-gc.c >> +++ b/drivers/lightnvm/pblk-gc.c >> @@ -88,7 +88,7 @@ static void pblk_gc_line_ws(struct work_struct *work) >> up(&gc->gc_sem); >> - gc_rq->data = vmalloc(gc_rq->nr_secs * geo->sec_size); >> + gc_rq->data = vmalloc(gc_rq->nr_secs * geo->csecs); >> if (!gc_rq->data) { >> pr_err("pblk: could not GC line:%d (%d/%d)\n", >> line->id, *line->vsc, gc_rq->nr_secs); >> diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c >> index 25fc70ca07f7..9b5ee05c3028 100644 >> --- a/drivers/lightnvm/pblk-init.c >> +++ b/drivers/lightnvm/pblk-init.c >> @@ -146,7 +146,7 @@ static int pblk_rwb_init(struct pblk *pblk) >> return -ENOMEM; >> power_size = get_count_order(nr_entries); >> - power_seg_sz = get_count_order(geo->sec_size); >> + power_seg_sz = get_count_order(geo->csecs); >> return pblk_rb_init(&pblk->rwb, entries, power_size, power_seg_sz); >> } >> @@ -154,11 +154,11 @@ static int pblk_rwb_init(struct pblk *pblk) >> /* Minimum pages needed within a lun */ >> #define ADDR_POOL_SIZE 64 >> -static int pblk_set_ppaf(struct pblk *pblk) >> +static int pblk_set_addrf_12(struct nvm_geo *geo, >> + struct nvm_addr_format_12 *dst) >> { >> - struct nvm_tgt_dev *dev = pblk->dev; >> - struct nvm_geo *geo = &dev->geo; >> - struct 
nvm_addr_format ppaf = geo->ppaf; >> + struct nvm_addr_format_12 *src = >> + (struct nvm_addr_format_12 *)&geo->addrf; >> int power_len; >> /* Re-calculate channel and lun format to adapt to configuration */ >> @@ -167,34 +167,50 @@ static int pblk_set_ppaf(struct pblk *pblk) >> pr_err("pblk: supports only power-of-two channel config.\n"); >> return -EINVAL; >> } >> - ppaf.ch_len = power_len; >> + dst->ch_len = power_len; >> power_len = get_count_order(geo->nr_luns); >> if (1 << power_len != geo->nr_luns) { >> pr_err("pblk: supports only power-of-two LUN config.\n"); >> return -EINVAL; >> } >> - ppaf.lun_len = power_len; >> + dst->lun_len = power_len; >> - pblk->ppaf.sec_offset = 0; >> - pblk->ppaf.pln_offset = ppaf.sect_len; >> - pblk->ppaf.ch_offset = pblk->ppaf.pln_offset + ppaf.pln_len; >> - pblk->ppaf.lun_offset = pblk->ppaf.ch_offset + ppaf.ch_len; >> - pblk->ppaf.pg_offset = pblk->ppaf.lun_offset + ppaf.lun_len; >> - pblk->ppaf.blk_offset = pblk->ppaf.pg_offset + ppaf.pg_len; >> - pblk->ppaf.sec_mask = (1ULL << ppaf.sect_len) - 1; >> - pblk->ppaf.pln_mask = ((1ULL << ppaf.pln_len) - 1) << >> - pblk->ppaf.pln_offset; >> - pblk->ppaf.ch_mask = ((1ULL << ppaf.ch_len) - 1) << >> - pblk->ppaf.ch_offset; >> - pblk->ppaf.lun_mask = ((1ULL << ppaf.lun_len) - 1) << >> - pblk->ppaf.lun_offset; >> - pblk->ppaf.pg_mask = ((1ULL << ppaf.pg_len) - 1) << >> - pblk->ppaf.pg_offset; >> - pblk->ppaf.blk_mask = ((1ULL << ppaf.blk_len) - 1) << >> - pblk->ppaf.blk_offset; >> + dst->blk_len = src->blk_len; >> + dst->pg_len = src->pg_len; >> + dst->pln_len = src->pln_len; >> + dst->sect_len = src->sect_len; >> - pblk->ppaf_bitsize = pblk->ppaf.blk_offset + ppaf.blk_len; >> + dst->sect_offset = 0; >> + dst->pln_offset = dst->sect_len; >> + dst->ch_offset = dst->pln_offset + dst->pln_len; >> + dst->lun_offset = dst->ch_offset + dst->ch_len; >> + dst->pg_offset = dst->lun_offset + dst->lun_len; >> + dst->blk_offset = dst->pg_offset + dst->pg_len; >> + >> + dst->sec_mask = ((1ULL << 
dst->sect_len) - 1) << dst->sect_offset; >> + dst->pln_mask = ((1ULL << dst->pln_len) - 1) << dst->pln_offset; >> + dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset; >> + dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset; >> + dst->pg_mask = ((1ULL << dst->pg_len) - 1) << dst->pg_offset; >> + dst->blk_mask = ((1ULL << dst->blk_len) - 1) << dst->blk_offset; >> + >> + return dst->blk_offset + src->blk_len; >> +} >> + >> +static int pblk_set_ppaf(struct pblk *pblk) >> +{ >> + struct nvm_tgt_dev *dev = pblk->dev; >> + struct nvm_geo *geo = &dev->geo; >> + int mod; >> + >> + div_u64_rem(geo->clba, pblk->min_write_pgs, &mod); >> + if (mod) { >> + pr_err("pblk: bad configuration of sectors/pages\n"); >> + return -EINVAL; >> + } >> + >> + pblk->ppaf_bitsize = pblk_set_addrf_12(geo, (void *)&pblk->ppaf); >> return 0; >> } >> @@ -253,8 +269,7 @@ static int pblk_core_init(struct pblk *pblk) >> struct nvm_tgt_dev *dev = pblk->dev; >> struct nvm_geo *geo = &dev->geo; >> - pblk->pgs_in_buffer = NVM_MEM_PAGE_WRITE * geo->sec_per_pg * >> - geo->nr_planes * geo->all_luns; >> + pblk->pgs_in_buffer = geo->mw_cunits * geo->all_luns; >> if (pblk_init_global_caches(pblk)) >> return -ENOMEM; >> @@ -552,18 +567,18 @@ static unsigned int calc_emeta_len(struct pblk *pblk) >> /* Round to sector size so that lba_list starts on its own sector */ >> lm->emeta_sec[1] = DIV_ROUND_UP( >> sizeof(struct line_emeta) + lm->blk_bitmap_len + >> - sizeof(struct wa_counters), geo->sec_size); >> - lm->emeta_len[1] = lm->emeta_sec[1] * geo->sec_size; >> + sizeof(struct wa_counters), geo->csecs); >> + lm->emeta_len[1] = lm->emeta_sec[1] * geo->csecs; >> /* Round to sector size so that vsc_list starts on its own sector */ >> lm->dsec_per_line = lm->sec_per_line - lm->emeta_sec[0]; >> lm->emeta_sec[2] = DIV_ROUND_UP(lm->dsec_per_line * sizeof(u64), >> - geo->sec_size); >> - lm->emeta_len[2] = lm->emeta_sec[2] * geo->sec_size; >> + geo->csecs); >> + lm->emeta_len[2] = lm->emeta_sec[2] 
* geo->csecs; >> lm->emeta_sec[3] = DIV_ROUND_UP(l_mg->nr_lines * sizeof(u32), >> - geo->sec_size); >> - lm->emeta_len[3] = lm->emeta_sec[3] * geo->sec_size; >> + geo->csecs); >> + lm->emeta_len[3] = lm->emeta_sec[3] * geo->csecs; >> lm->vsc_list_len = l_mg->nr_lines * sizeof(u32); >> @@ -594,13 +609,13 @@ static void pblk_set_provision(struct pblk *pblk, long nr_free_blks) >> * on user capacity consider only provisioned blocks >> */ >> pblk->rl.total_blocks = nr_free_blks; >> - pblk->rl.nr_secs = nr_free_blks * geo->sec_per_chk; >> + pblk->rl.nr_secs = nr_free_blks * geo->clba; >> /* Consider sectors used for metadata */ >> sec_meta = (lm->smeta_sec + lm->emeta_sec[0]) * l_mg->nr_free_lines; >> - blk_meta = DIV_ROUND_UP(sec_meta, geo->sec_per_chk); >> + blk_meta = DIV_ROUND_UP(sec_meta, geo->clba); >> - pblk->capacity = (provisioned - blk_meta) * geo->sec_per_chk; >> + pblk->capacity = (provisioned - blk_meta) * geo->clba; >> atomic_set(&pblk->rl.free_blocks, nr_free_blks); >> atomic_set(&pblk->rl.free_user_blocks, nr_free_blks); >> @@ -711,10 +726,10 @@ static int pblk_lines_init(struct pblk *pblk) >> void *chunk_log; >> unsigned int smeta_len, emeta_len; >> long nr_bad_blks = 0, nr_free_blks = 0; >> - int bb_distance, max_write_ppas, mod; >> + int bb_distance, max_write_ppas; >> int i, ret; >> - pblk->min_write_pgs = geo->sec_per_pl * (geo->sec_size / PAGE_SIZE); >> + pblk->min_write_pgs = geo->ws_opt * (geo->csecs / PAGE_SIZE); >> max_write_ppas = pblk->min_write_pgs * geo->all_luns; >> pblk->max_write_pgs = min_t(int, max_write_ppas, NVM_MAX_VLBA); >> pblk_set_sec_per_write(pblk, pblk->min_write_pgs); >> @@ -725,19 +740,13 @@ static int pblk_lines_init(struct pblk *pblk) >> return -EINVAL; >> } >> - div_u64_rem(geo->sec_per_chk, pblk->min_write_pgs, &mod); >> - if (mod) { >> - pr_err("pblk: bad configuration of sectors/pages\n"); >> - return -EINVAL; >> - } >> - >> l_mg->nr_lines = geo->nr_chks; >> l_mg->log_line = l_mg->data_line = NULL; >> l_mg->l_seq_nr = 
l_mg->d_seq_nr = 0; >> l_mg->nr_free_lines = 0; >> bitmap_zero(&l_mg->meta_bitmap, PBLK_DATA_LINES); >> - lm->sec_per_line = geo->sec_per_chk * geo->all_luns; >> + lm->sec_per_line = geo->clba * geo->all_luns; >> lm->blk_per_line = geo->all_luns; >> lm->blk_bitmap_len = BITS_TO_LONGS(geo->all_luns) * sizeof(long); >> lm->sec_bitmap_len = BITS_TO_LONGS(lm->sec_per_line) * sizeof(long); >> @@ -751,8 +760,8 @@ static int pblk_lines_init(struct pblk *pblk) >> */ >> i = 1; >> add_smeta_page: >> - lm->smeta_sec = i * geo->sec_per_pl; >> - lm->smeta_len = lm->smeta_sec * geo->sec_size; >> + lm->smeta_sec = i * geo->ws_opt; >> + lm->smeta_len = lm->smeta_sec * geo->csecs; >> smeta_len = sizeof(struct line_smeta) + lm->lun_bitmap_len; >> if (smeta_len > lm->smeta_len) { >> @@ -765,8 +774,8 @@ static int pblk_lines_init(struct pblk *pblk) >> */ >> i = 1; >> add_emeta_page: >> - lm->emeta_sec[0] = i * geo->sec_per_pl; >> - lm->emeta_len[0] = lm->emeta_sec[0] * geo->sec_size; >> + lm->emeta_sec[0] = i * geo->ws_opt; >> + lm->emeta_len[0] = lm->emeta_sec[0] * geo->csecs; >> emeta_len = calc_emeta_len(pblk); >> if (emeta_len > lm->emeta_len[0]) { >> @@ -779,7 +788,7 @@ static int pblk_lines_init(struct pblk *pblk) >> lm->min_blk_line = 1; >> if (geo->all_luns > 1) >> lm->min_blk_line += DIV_ROUND_UP(lm->smeta_sec + >> - lm->emeta_sec[0], geo->sec_per_chk); >> + lm->emeta_sec[0], geo->clba); >> if (lm->min_blk_line > lm->blk_per_line) { >> pr_err("pblk: config. not supported. Min. 
LUN in line:%d\n", >> @@ -803,9 +812,9 @@ static int pblk_lines_init(struct pblk *pblk) >> goto fail_free_bb_template; >> } >> - bb_distance = (geo->all_luns) * geo->sec_per_pl; >> + bb_distance = (geo->all_luns) * geo->ws_opt; >> for (i = 0; i < lm->sec_per_line; i += bb_distance) >> - bitmap_set(l_mg->bb_template, i, geo->sec_per_pl); >> + bitmap_set(l_mg->bb_template, i, geo->ws_opt); >> INIT_LIST_HEAD(&l_mg->free_list); >> INIT_LIST_HEAD(&l_mg->corrupt_list); >> @@ -982,9 +991,9 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk, >> struct pblk *pblk; >> int ret; >> - if (dev->identity.dom & NVM_RSP_L2P) { >> + if (dev->geo.dom & NVM_RSP_L2P) { >> pr_err("pblk: host-side L2P table not supported. (%x)\n", >> - dev->identity.dom); >> + dev->geo.dom); >> return ERR_PTR(-EINVAL); >> } >> @@ -1092,7 +1101,7 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk, >> blk_queue_write_cache(tqueue, true, false); >> - tqueue->limits.discard_granularity = geo->sec_per_chk * geo->sec_size; >> + tqueue->limits.discard_granularity = geo->clba * geo->csecs; >> tqueue->limits.discard_alignment = 0; >> blk_queue_max_discard_sectors(tqueue, UINT_MAX >> 9); >> queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, tqueue); >> diff --git a/drivers/lightnvm/pblk-read.c b/drivers/lightnvm/pblk-read.c >> index 2f761283f43e..9eee10f69df0 100644 >> --- a/drivers/lightnvm/pblk-read.c >> +++ b/drivers/lightnvm/pblk-read.c >> @@ -563,7 +563,7 @@ int pblk_submit_read_gc(struct pblk *pblk, struct pblk_gc_rq *gc_rq) >> if (!(gc_rq->secs_to_gc)) >> goto out; >> - data_len = (gc_rq->secs_to_gc) * geo->sec_size; >> + data_len = (gc_rq->secs_to_gc) * geo->csecs; >> bio = pblk_bio_map_addr(pblk, gc_rq->data, gc_rq->secs_to_gc, data_len, >> PBLK_VMALLOC_META, GFP_KERNEL); >> if (IS_ERR(bio)) { >> diff --git a/drivers/lightnvm/pblk-recovery.c b/drivers/lightnvm/pblk-recovery.c >> index aaab9a5c17cc..26356429dc72 100644 >> --- a/drivers/lightnvm/pblk-recovery.c >> +++ 
b/drivers/lightnvm/pblk-recovery.c >> @@ -184,7 +184,7 @@ static int pblk_calc_sec_in_line(struct pblk *pblk, struct pblk_line *line) >> int nr_bb = bitmap_weight(line->blk_bitmap, lm->blk_per_line); >> return lm->sec_per_line - lm->smeta_sec - lm->emeta_sec[0] - >> - nr_bb * geo->sec_per_chk; >> + nr_bb * geo->clba; >> } >> struct pblk_recov_alloc { >> @@ -232,7 +232,7 @@ static int pblk_recov_read_oob(struct pblk *pblk, struct pblk_line *line, >> rq_ppas = pblk_calc_secs(pblk, left_ppas, 0); >> if (!rq_ppas) >> rq_ppas = pblk->min_write_pgs; >> - rq_len = rq_ppas * geo->sec_size; >> + rq_len = rq_ppas * geo->csecs; >> bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL); >> if (IS_ERR(bio)) >> @@ -351,7 +351,7 @@ static int pblk_recov_pad_oob(struct pblk *pblk, struct pblk_line *line, >> if (!pad_rq) >> return -ENOMEM; >> - data = vzalloc(pblk->max_write_pgs * geo->sec_size); >> + data = vzalloc(pblk->max_write_pgs * geo->csecs); >> if (!data) { >> ret = -ENOMEM; >> goto free_rq; >> @@ -368,7 +368,7 @@ static int pblk_recov_pad_oob(struct pblk *pblk, struct pblk_line *line, >> goto fail_free_pad; >> } >> - rq_len = rq_ppas * geo->sec_size; >> + rq_len = rq_ppas * geo->csecs; >> meta_list = nvm_dev_dma_alloc(dev->parent, GFP_KERNEL, &dma_meta_list); >> if (!meta_list) { >> @@ -509,7 +509,7 @@ static int pblk_recov_scan_all_oob(struct pblk *pblk, struct pblk_line *line, >> rq_ppas = pblk_calc_secs(pblk, left_ppas, 0); >> if (!rq_ppas) >> rq_ppas = pblk->min_write_pgs; >> - rq_len = rq_ppas * geo->sec_size; >> + rq_len = rq_ppas * geo->csecs; >> bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL); >> if (IS_ERR(bio)) >> @@ -640,7 +640,7 @@ static int pblk_recov_scan_oob(struct pblk *pblk, struct pblk_line *line, >> rq_ppas = pblk_calc_secs(pblk, left_ppas, 0); >> if (!rq_ppas) >> rq_ppas = pblk->min_write_pgs; >> - rq_len = rq_ppas * geo->sec_size; >> + rq_len = rq_ppas * geo->csecs; >> bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL); >> if (IS_ERR(bio)) >> @@ 
-745,7 +745,7 @@ static int pblk_recov_l2p_from_oob(struct pblk *pblk, struct pblk_line *line) >> ppa_list = (void *)(meta_list) + pblk_dma_meta_size; >> dma_ppa_list = dma_meta_list + pblk_dma_meta_size; >> - data = kcalloc(pblk->max_write_pgs, geo->sec_size, GFP_KERNEL); >> + data = kcalloc(pblk->max_write_pgs, geo->csecs, GFP_KERNEL); >> if (!data) { >> ret = -ENOMEM; >> goto free_meta_list; >> diff --git a/drivers/lightnvm/pblk-rl.c b/drivers/lightnvm/pblk-rl.c >> index 0d457b162f23..883a7113b19d 100644 >> --- a/drivers/lightnvm/pblk-rl.c >> +++ b/drivers/lightnvm/pblk-rl.c >> @@ -200,7 +200,7 @@ void pblk_rl_init(struct pblk_rl *rl, int budget) >> /* Consider sectors used for metadata */ >> sec_meta = (lm->smeta_sec + lm->emeta_sec[0]) * l_mg->nr_free_lines; >> - blk_meta = DIV_ROUND_UP(sec_meta, geo->sec_per_chk); >> + blk_meta = DIV_ROUND_UP(sec_meta, geo->clba); >> rl->high = pblk->op_blks - blk_meta - lm->blk_per_line; >> rl->high_pw = get_count_order(rl->high); >> diff --git a/drivers/lightnvm/pblk-sysfs.c b/drivers/lightnvm/pblk-sysfs.c >> index 1680ce0a828d..33199c6af267 100644 >> --- a/drivers/lightnvm/pblk-sysfs.c >> +++ b/drivers/lightnvm/pblk-sysfs.c >> @@ -113,26 +113,31 @@ static ssize_t pblk_sysfs_ppaf(struct pblk *pblk, char *page) >> { >> struct nvm_tgt_dev *dev = pblk->dev; >> struct nvm_geo *geo = &dev->geo; >> + struct nvm_addr_format_12 *ppaf; >> + struct nvm_addr_format_12 *geo_ppaf; >> ssize_t sz = 0; >> - sz = snprintf(page, PAGE_SIZE - sz, >> - "g:(b:%d)blk:%d/%d,pg:%d/%d,lun:%d/%d,ch:%d/%d,pl:%d/%d,sec:%d/%d\n", >> - pblk->ppaf_bitsize, >> - pblk->ppaf.blk_offset, geo->ppaf.blk_len, >> - pblk->ppaf.pg_offset, geo->ppaf.pg_len, >> - pblk->ppaf.lun_offset, geo->ppaf.lun_len, >> - pblk->ppaf.ch_offset, geo->ppaf.ch_len, >> - pblk->ppaf.pln_offset, geo->ppaf.pln_len, >> - pblk->ppaf.sec_offset, geo->ppaf.sect_len); >> + ppaf = (struct nvm_addr_format_12 *)&pblk->ppaf; >> + geo_ppaf = (struct nvm_addr_format_12 *)&geo->addrf; >> + >> + sz = 
snprintf(page, PAGE_SIZE, >> + "pblk:(s:%d)ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n", >> + pblk->ppaf_bitsize, >> + ppaf->ch_offset, ppaf->ch_len, >> + ppaf->lun_offset, ppaf->lun_len, >> + ppaf->blk_offset, ppaf->blk_len, >> + ppaf->pg_offset, ppaf->pg_len, >> + ppaf->pln_offset, ppaf->pln_len, >> + ppaf->sect_offset, ppaf->sect_len); > > Is it on purpose here that the code breaks user-space by changing the sysfs print format? Fixed. > >> sz += snprintf(page + sz, PAGE_SIZE - sz, >> - "d:blk:%d/%d,pg:%d/%d,lun:%d/%d,ch:%d/%d,pl:%d/%d,sec:%d/%d\n", >> - geo->ppaf.blk_offset, geo->ppaf.blk_len, >> - geo->ppaf.pg_offset, geo->ppaf.pg_len, >> - geo->ppaf.lun_offset, geo->ppaf.lun_len, >> - geo->ppaf.ch_offset, geo->ppaf.ch_len, >> - geo->ppaf.pln_offset, geo->ppaf.pln_len, >> - geo->ppaf.sect_offset, geo->ppaf.sect_len); >> + "device:ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n", >> + geo_ppaf->ch_offset, geo_ppaf->ch_len, >> + geo_ppaf->lun_offset, geo_ppaf->lun_len, >> + geo_ppaf->blk_offset, geo_ppaf->blk_len, >> + geo_ppaf->pg_offset, geo_ppaf->pg_len, >> + geo_ppaf->pln_offset, geo_ppaf->pln_len, >> + geo_ppaf->sect_offset, geo_ppaf->sect_len); > > Similarly here. Fixed.
> >> return sz; >> } >> @@ -288,7 +293,7 @@ static ssize_t pblk_sysfs_lines_info(struct pblk *pblk, char *page) >> "blk_line:%d, sec_line:%d, sec_blk:%d\n", >> lm->blk_per_line, >> lm->sec_per_line, >> - geo->sec_per_chk); >> + geo->clba); >> return sz; >> } >> diff --git a/drivers/lightnvm/pblk-write.c b/drivers/lightnvm/pblk-write.c >> index aae86ed60b98..3e6f1ebd743a 100644 >> --- a/drivers/lightnvm/pblk-write.c >> +++ b/drivers/lightnvm/pblk-write.c >> @@ -333,7 +333,7 @@ int pblk_submit_meta_io(struct pblk *pblk, struct pblk_line *meta_line) >> m_ctx = nvm_rq_to_pdu(rqd); >> m_ctx->private = meta_line; >> - rq_len = rq_ppas * geo->sec_size; >> + rq_len = rq_ppas * geo->csecs; >> data = ((void *)emeta->buf) + emeta->mem; >> bio = pblk_bio_map_addr(pblk, data, rq_ppas, rq_len, >> diff --git a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk.h >> index f0309d8172c0..b29c1e6698aa 100644 >> --- a/drivers/lightnvm/pblk.h >> +++ b/drivers/lightnvm/pblk.h >> @@ -551,21 +551,6 @@ struct pblk_line_meta { >> unsigned int meta_distance; /* Distance between data and metadata */ >> }; >> -struct pblk_addr_format { >> - u64 ch_mask; >> - u64 lun_mask; >> - u64 pln_mask; >> - u64 blk_mask; >> - u64 pg_mask; >> - u64 sec_mask; >> - u8 ch_offset; >> - u8 lun_offset; >> - u8 pln_offset; >> - u8 blk_offset; >> - u8 pg_offset; >> - u8 sec_offset; >> -}; >> - >> enum { >> PBLK_STATE_RUNNING = 0, >> PBLK_STATE_STOPPING = 1, >> @@ -585,8 +570,8 @@ struct pblk { >> struct pblk_line_mgmt l_mg; /* Line management */ >> struct pblk_line_meta lm; /* Line metadata */ >> + struct nvm_addr_format ppaf; >> int ppaf_bitsize; >> - struct pblk_addr_format ppaf; >> struct pblk_rb rwb; >> @@ -941,14 +926,12 @@ static inline int pblk_line_vsc(struct pblk_line *line) >> return le32_to_cpu(*line->vsc); >> } >> -#define NVM_MEM_PAGE_WRITE (8) >> - >> static inline int pblk_pad_distance(struct pblk *pblk) >> { >> struct nvm_tgt_dev *dev = pblk->dev; >> struct nvm_geo *geo = &dev->geo; >> - return 
NVM_MEM_PAGE_WRITE * geo->all_luns * geo->sec_per_pl; >> + return geo->mw_cunits * geo->all_luns * geo->ws_opt; >> } >> static inline int pblk_ppa_to_line(struct ppa_addr p) >> @@ -964,15 +947,17 @@ static inline int pblk_ppa_to_pos(struct nvm_geo *geo, struct ppa_addr p) >> static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr, >> u64 line_id) >> { >> + struct nvm_addr_format_12 *ppaf = >> + (struct nvm_addr_format_12 *)&pblk->ppaf; >> struct ppa_addr ppa; >> ppa.ppa = 0; >> ppa.g.blk = line_id; >> - ppa.g.pg = (paddr & pblk->ppaf.pg_mask) >> pblk->ppaf.pg_offset; >> - ppa.g.lun = (paddr & pblk->ppaf.lun_mask) >> pblk->ppaf.lun_offset; >> - ppa.g.ch = (paddr & pblk->ppaf.ch_mask) >> pblk->ppaf.ch_offset; >> - ppa.g.pl = (paddr & pblk->ppaf.pln_mask) >> pblk->ppaf.pln_offset; >> - ppa.g.sec = (paddr & pblk->ppaf.sec_mask) >> pblk->ppaf.sec_offset; >> + ppa.g.pg = (paddr & ppaf->pg_mask) >> ppaf->pg_offset; >> + ppa.g.lun = (paddr & ppaf->lun_mask) >> ppaf->lun_offset; >> + ppa.g.ch = (paddr & ppaf->ch_mask) >> ppaf->ch_offset; >> + ppa.g.pl = (paddr & ppaf->pln_mask) >> ppaf->pln_offset; >> + ppa.g.sec = (paddr & ppaf->sec_mask) >> ppaf->sect_offset; >> return ppa; >> } >> @@ -980,13 +965,15 @@ static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr, >> static inline u64 pblk_dev_ppa_to_line_addr(struct pblk *pblk, >> struct ppa_addr p) >> { >> + struct nvm_addr_format_12 *ppaf = >> + (struct nvm_addr_format_12 *)&pblk->ppaf; >> u64 paddr; >> - paddr = (u64)p.g.pg << pblk->ppaf.pg_offset; >> - paddr |= (u64)p.g.lun << pblk->ppaf.lun_offset; >> - paddr |= (u64)p.g.ch << pblk->ppaf.ch_offset; >> - paddr |= (u64)p.g.pl << pblk->ppaf.pln_offset; >> - paddr |= (u64)p.g.sec << pblk->ppaf.sec_offset; >> + paddr = (u64)p.g.ch << ppaf->ch_offset; >> + paddr |= (u64)p.g.lun << ppaf->lun_offset; >> + paddr |= (u64)p.g.pg << ppaf->pg_offset; >> + paddr |= (u64)p.g.pl << ppaf->pln_offset; >> + paddr |= (u64)p.g.sec << ppaf->sect_offset; 
>> return paddr; >> } >> @@ -1003,18 +990,15 @@ static inline struct ppa_addr pblk_ppa32_to_ppa64(struct pblk *pblk, u32 ppa32) >> ppa64.c.line = ppa32 & ((~0U) >> 1); >> ppa64.c.is_cached = 1; >> } else { >> - ppa64.g.blk = (ppa32 & pblk->ppaf.blk_mask) >> >> - pblk->ppaf.blk_offset; >> - ppa64.g.pg = (ppa32 & pblk->ppaf.pg_mask) >> >> - pblk->ppaf.pg_offset; >> - ppa64.g.lun = (ppa32 & pblk->ppaf.lun_mask) >> >> - pblk->ppaf.lun_offset; >> - ppa64.g.ch = (ppa32 & pblk->ppaf.ch_mask) >> >> - pblk->ppaf.ch_offset; >> - ppa64.g.pl = (ppa32 & pblk->ppaf.pln_mask) >> >> - pblk->ppaf.pln_offset; >> - ppa64.g.sec = (ppa32 & pblk->ppaf.sec_mask) >> >> - pblk->ppaf.sec_offset; >> + struct nvm_addr_format_12 *ppaf = >> + (struct nvm_addr_format_12 *)&pblk->ppaf; >> + >> + ppa64.g.ch = (ppa32 & ppaf->ch_mask) >> ppaf->ch_offset; >> + ppa64.g.lun = (ppa32 & ppaf->lun_mask) >> ppaf->lun_offset; >> + ppa64.g.blk = (ppa32 & ppaf->blk_mask) >> ppaf->blk_offset; >> + ppa64.g.pg = (ppa32 & ppaf->pg_mask) >> ppaf->pg_offset; >> + ppa64.g.pl = (ppa32 & ppaf->pln_mask) >> ppaf->pln_offset; >> + ppa64.g.sec = (ppa32 & ppaf->sec_mask) >> ppaf->sect_offset; >> } >> return ppa64; >> @@ -1030,12 +1014,15 @@ static inline u32 pblk_ppa64_to_ppa32(struct pblk *pblk, struct ppa_addr ppa64) >> ppa32 |= ppa64.c.line; >> ppa32 |= 1U << 31; >> } else { >> - ppa32 |= ppa64.g.blk << pblk->ppaf.blk_offset; >> - ppa32 |= ppa64.g.pg << pblk->ppaf.pg_offset; >> - ppa32 |= ppa64.g.lun << pblk->ppaf.lun_offset; >> - ppa32 |= ppa64.g.ch << pblk->ppaf.ch_offset; >> - ppa32 |= ppa64.g.pl << pblk->ppaf.pln_offset; >> - ppa32 |= ppa64.g.sec << pblk->ppaf.sec_offset; >> + struct nvm_addr_format_12 *ppaf = >> + (struct nvm_addr_format_12 *)&pblk->ppaf; >> + >> + ppa32 |= ppa64.g.ch << ppaf->ch_offset; >> + ppa32 |= ppa64.g.lun << ppaf->lun_offset; >> + ppa32 |= ppa64.g.blk << ppaf->blk_offset; >> + ppa32 |= ppa64.g.pg << ppaf->pg_offset; >> + ppa32 |= ppa64.g.pl << ppaf->pln_offset; >> + ppa32 |= ppa64.g.sec << 
ppaf->sect_offset; >> } >> return ppa32; >> @@ -1229,10 +1216,10 @@ static inline int pblk_boundary_ppa_checks(struct nvm_tgt_dev *tgt_dev, >> if (!ppa->c.is_cached && >> ppa->g.ch < geo->nr_chnls && >> ppa->g.lun < geo->nr_luns && >> - ppa->g.pl < geo->nr_planes && >> + ppa->g.pl < geo->num_pln && >> ppa->g.blk < geo->nr_chks && >> - ppa->g.pg < geo->ws_per_chk && >> - ppa->g.sec < geo->sec_per_pg) >> + ppa->g.pg < geo->num_pg && >> + ppa->g.sec < geo->ws_min) >> continue; >> print_ppa(ppa, "boundary", i); >> diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c >> index 839c0b96466a..e276ace28c64 100644 >> --- a/drivers/nvme/host/lightnvm.c >> +++ b/drivers/nvme/host/lightnvm.c >> @@ -152,8 +152,8 @@ struct nvme_nvm_id12_addrf { >> __u8 blk_len; >> __u8 pg_offset; >> __u8 pg_len; >> - __u8 sect_offset; >> - __u8 sect_len; >> + __u8 sec_offset; >> + __u8 sec_len; >> __u8 res[4]; >> } __packed; >> @@ -254,106 +254,161 @@ static inline void _nvme_nvm_check_size(void) >> BUILD_BUG_ON(sizeof(struct nvme_nvm_id20) != NVME_IDENTIFY_DATA_SIZE); >> } >> -static int init_grp(struct nvm_id *nvm_id, struct nvme_nvm_id12 *id12) >> +static void nvme_nvm_set_addr_12(struct nvm_addr_format_12 *dst, >> + struct nvme_nvm_id12_addrf *src) >> +{ >> + dst->ch_len = src->ch_len; >> + dst->lun_len = src->lun_len; >> + dst->blk_len = src->blk_len; >> + dst->pg_len = src->pg_len; >> + dst->pln_len = src->pln_len; >> + dst->sect_len = src->sec_len; >> + >> + dst->ch_offset = src->ch_offset; >> + dst->lun_offset = src->lun_offset; >> + dst->blk_offset = src->blk_offset; >> + dst->pg_offset = src->pg_offset; >> + dst->pln_offset = src->pln_offset; >> + dst->sect_offset = src->sec_offset; >> + >> + dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset; >> + dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset; >> + dst->blk_mask = ((1ULL << dst->blk_len) - 1) << dst->blk_offset; >> + dst->pg_mask = ((1ULL << dst->pg_len) - 1) << dst->pg_offset; >> + 
dst->pln_mask = ((1ULL << dst->pln_len) - 1) << dst->pln_offset; >> + dst->sec_mask = ((1ULL << dst->sect_len) - 1) << dst->sect_offset; >> +} >> + >> +static int nvme_nvm_setup_12(struct nvme_nvm_id12 *id, >> + struct nvm_geo *geo) >> { >> struct nvme_nvm_id12_grp *src; >> int sec_per_pg, sec_per_pl, pg_per_blk; >> - if (id12->cgrps != 1) >> + if (id->cgrps != 1) >> return -EINVAL; >> - src = &id12->grp; >> + src = &id->grp; >> - nvm_id->mtype = src->mtype; >> - nvm_id->fmtype = src->fmtype; >> + if (src->mtype != 0) { >> + pr_err("nvm: memory type not supported\n"); >> + return -EINVAL; >> + } >> + >> + geo->ver_id = id->ver_id; >> + >> + geo->nr_chnls = src->num_ch; >> + geo->nr_luns = src->num_lun; >> + geo->all_luns = geo->nr_chnls * geo->nr_luns; >> - nvm_id->num_ch = src->num_ch; >> - nvm_id->num_lun = src->num_lun; >> + geo->nr_chks = le16_to_cpu(src->num_chk); >> - nvm_id->num_chk = le16_to_cpu(src->num_chk); >> - nvm_id->csecs = le16_to_cpu(src->csecs); >> - nvm_id->sos = le16_to_cpu(src->sos); >> + geo->csecs = le16_to_cpu(src->csecs); >> + geo->sos = le16_to_cpu(src->sos); >> pg_per_blk = le16_to_cpu(src->num_pg); >> - sec_per_pg = le16_to_cpu(src->fpg_sz) / nvm_id->csecs; >> + sec_per_pg = le16_to_cpu(src->fpg_sz) / geo->csecs; >> sec_per_pl = sec_per_pg * src->num_pln; >> - nvm_id->clba = sec_per_pl * pg_per_blk; >> - nvm_id->ws_per_chk = pg_per_blk; >> - >> - nvm_id->mpos = le32_to_cpu(src->mpos); >> - nvm_id->cpar = le16_to_cpu(src->cpar); >> - nvm_id->mccap = le32_to_cpu(src->mccap); >> - >> - nvm_id->ws_opt = nvm_id->ws_min = sec_per_pg; >> - nvm_id->ws_seq = NVM_IO_SNGL_ACCESS; >> - >> - if (nvm_id->mpos & 0x020202) { >> - nvm_id->ws_seq = NVM_IO_DUAL_ACCESS; >> - nvm_id->ws_opt <<= 1; >> - } else if (nvm_id->mpos & 0x040404) { >> - nvm_id->ws_seq = NVM_IO_QUAD_ACCESS; >> - nvm_id->ws_opt <<= 2; >> + geo->clba = sec_per_pl * pg_per_blk; >> + >> + geo->all_chunks = geo->all_luns * geo->nr_chks; >> + geo->total_secs = geo->clba * geo->all_chunks; 
>> + >> + geo->ws_min = sec_per_pg; >> + geo->ws_opt = sec_per_pg; >> + geo->mw_cunits = geo->ws_opt << 3; /* default to MLC safe values */ >> + >> + geo->mccap = le32_to_cpu(src->mccap); >> + >> + geo->trdt = le32_to_cpu(src->trdt); >> + geo->trdm = le32_to_cpu(src->trdm); >> + geo->tprt = le32_to_cpu(src->tprt); >> + geo->tprm = le32_to_cpu(src->tprm); >> + geo->tbet = le32_to_cpu(src->tbet); >> + geo->tbem = le32_to_cpu(src->tbem); >> + >> + /* 1.2 compatibility */ >> + geo->vmnt = id->vmnt; >> + geo->cap = le32_to_cpu(id->cap); >> + geo->dom = le32_to_cpu(id->dom); >> + >> + geo->mtype = src->mtype; >> + geo->fmtype = src->fmtype; >> + >> + geo->cpar = le16_to_cpu(src->cpar); >> + geo->mpos = le32_to_cpu(src->mpos); >> + >> + geo->plane_mode = NVM_PLANE_SINGLE; >> + >> + if (geo->mpos & 0x020202) { >> + geo->plane_mode = NVM_PLANE_DOUBLE; >> + geo->ws_opt <<= 1; >> + } else if (geo->mpos & 0x040404) { >> + geo->plane_mode = NVM_PLANE_QUAD; >> + geo->ws_opt <<= 2; >> } >> - nvm_id->trdt = le32_to_cpu(src->trdt); >> - nvm_id->trdm = le32_to_cpu(src->trdm); >> - nvm_id->tprt = le32_to_cpu(src->tprt); >> - nvm_id->tprm = le32_to_cpu(src->tprm); >> - nvm_id->tbet = le32_to_cpu(src->tbet); >> - nvm_id->tbem = le32_to_cpu(src->tbem); >> - >> - /* 1.2 compatibility */ >> - nvm_id->num_pln = src->num_pln; >> - nvm_id->num_pg = le16_to_cpu(src->num_pg); >> - nvm_id->fpg_sz = le16_to_cpu(src->fpg_sz); >> + geo->num_pln = src->num_pln; >> + geo->num_pg = le16_to_cpu(src->num_pg); >> + geo->fpg_sz = le16_to_cpu(src->fpg_sz); >> + >> + nvme_nvm_set_addr_12((struct nvm_addr_format_12 *)&geo->addrf, >> + &id->ppaf); >> return 0; >> } >> -static int nvme_nvm_setup_12(struct nvm_dev *nvmdev, struct nvm_id *nvm_id, >> - struct nvme_nvm_id12 *id) >> +static void nvme_nvm_set_addr_20(struct nvm_addr_format *dst, >> + struct nvme_nvm_id20_addrf *src) >> { >> - nvm_id->ver_id = id->ver_id; >> - nvm_id->vmnt = id->vmnt; >> - nvm_id->cap = le32_to_cpu(id->cap); >> - nvm_id->dom = 
le32_to_cpu(id->dom); >> - memcpy(&nvm_id->ppaf, &id->ppaf, >> - sizeof(struct nvm_addr_format)); >> - >> - return init_grp(nvm_id, id); >> + dst->ch_len = src->grp_len; >> + dst->lun_len = src->pu_len; >> + dst->chk_len = src->chk_len; >> + dst->sec_len = src->lba_len; >> + >> + dst->sec_offset = 0; >> + dst->chk_offset = dst->sec_len; >> + dst->lun_offset = dst->chk_offset + dst->chk_len; >> + dst->ch_offset = dst->lun_offset + dst->lun_len; >> + >> + dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset; >> + dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset; >> + dst->chk_mask = ((1ULL << dst->chk_len) - 1) << dst->chk_offset; >> + dst->sec_mask = ((1ULL << dst->sec_len) - 1) << dst->sec_offset; >> } >> -static int nvme_nvm_setup_20(struct nvm_dev *nvmdev, struct nvm_id *nvm_id, >> - struct nvme_nvm_id20 *id) >> +static int nvme_nvm_setup_20(struct nvme_nvm_id20 *id, >> + struct nvm_geo *geo) >> { >> - nvm_id->ver_id = id->mjr; >> + geo->ver_id = id->mjr; >> + >> + geo->nr_chnls = le16_to_cpu(id->num_grp); >> + geo->nr_luns = le16_to_cpu(id->num_pu); >> + geo->all_luns = geo->nr_chnls * geo->nr_luns; >> - nvm_id->num_ch = le16_to_cpu(id->num_grp); >> - nvm_id->num_lun = le16_to_cpu(id->num_pu); >> - nvm_id->num_chk = le32_to_cpu(id->num_chk); >> - nvm_id->clba = le32_to_cpu(id->clba); >> + geo->nr_chks = le32_to_cpu(id->num_chk); >> + geo->clba = le32_to_cpu(id->clba); >> - nvm_id->ws_min = le32_to_cpu(id->ws_min); >> - nvm_id->ws_opt = le32_to_cpu(id->ws_opt); >> - nvm_id->mw_cunits = le32_to_cpu(id->mw_cunits); >> + geo->all_chunks = geo->all_luns * geo->nr_chks; >> + geo->total_secs = geo->clba * geo->all_chunks; >> - nvm_id->trdt = le32_to_cpu(id->trdt); >> - nvm_id->trdm = le32_to_cpu(id->trdm); >> - nvm_id->tprt = le32_to_cpu(id->twrt); >> - nvm_id->tprm = le32_to_cpu(id->twrm); >> - nvm_id->tbet = le32_to_cpu(id->tcrst); >> - nvm_id->tbem = le32_to_cpu(id->tcrsm); >> + geo->ws_min = le32_to_cpu(id->ws_min); >> + geo->ws_opt = 
le32_to_cpu(id->ws_opt); >> + geo->mw_cunits = le32_to_cpu(id->mw_cunits); >> - /* calculated values */ >> - nvm_id->ws_per_chk = nvm_id->clba / nvm_id->ws_min; >> + geo->trdt = le32_to_cpu(id->trdt); >> + geo->trdm = le32_to_cpu(id->trdm); >> + geo->tprt = le32_to_cpu(id->twrt); >> + geo->tprm = le32_to_cpu(id->twrm); >> + geo->tbet = le32_to_cpu(id->tcrst); >> + geo->tbem = le32_to_cpu(id->tcrsm); >> - /* 1.2 compatibility */ >> - nvm_id->ws_seq = NVM_IO_SNGL_ACCESS; >> + nvme_nvm_set_addr_20(&geo->addrf, &id->lbaf); >> return 0; >> } >> -static int nvme_nvm_identity(struct nvm_dev *nvmdev, struct nvm_id *nvm_id) >> +static int nvme_nvm_identity(struct nvm_dev *nvmdev) >> { >> struct nvme_ns *ns = nvmdev->q->queuedata; >> struct nvme_nvm_id12 *id; >> @@ -380,18 +435,18 @@ static int nvme_nvm_identity(struct nvm_dev *nvmdev, struct nvm_id *nvm_id) >> */ >> switch (id->ver_id) { >> case 1: >> - ret = nvme_nvm_setup_12(nvmdev, nvm_id, id); >> + ret = nvme_nvm_setup_12(id, &nvmdev->geo); >> break; >> case 2: >> - ret = nvme_nvm_setup_20(nvmdev, nvm_id, >> - (struct nvme_nvm_id20 *)id); >> + ret = nvme_nvm_setup_20((struct nvme_nvm_id20 *)id, >> + &nvmdev->geo); >> break; >> default: >> - dev_err(ns->ctrl->device, >> - "OCSSD revision not supported (%d)\n", >> - nvm_id->ver_id); >> + dev_err(ns->ctrl->device, "OCSSD revision not supported (%d)\n", >> + id->ver_id); >> ret = -EINVAL; >> } >> + >> out: >> kfree(id); >> return ret; >> @@ -406,7 +461,7 @@ static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa, >> struct nvme_ctrl *ctrl = ns->ctrl; >> struct nvme_nvm_command c = {}; >> struct nvme_nvm_bb_tbl *bb_tbl; >> - int nr_blks = geo->nr_chks * geo->plane_mode; >> + int nr_blks = geo->nr_chks * geo->num_pln; >> int tblsz = sizeof(struct nvme_nvm_bb_tbl) + nr_blks; >> int ret = 0; >> @@ -447,7 +502,7 @@ static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa, >> goto out; >> } >> - memcpy(blks, bb_tbl->blk, geo->nr_chks * 
geo->plane_mode); >> + memcpy(blks, bb_tbl->blk, geo->nr_chks * geo->num_pln); >> out: >> kfree(bb_tbl); >> return ret; >> @@ -815,9 +870,10 @@ int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg) >> void nvme_nvm_update_nvm_info(struct nvme_ns *ns) >> { >> struct nvm_dev *ndev = ns->ndev; >> + struct nvm_geo *geo = &ndev->geo; >> - ndev->identity.csecs = ndev->geo.sec_size = 1 << ns->lba_shift; >> - ndev->identity.sos = ndev->geo.oob_size = ns->ms; >> + geo->csecs = 1 << ns->lba_shift; >> + geo->sos = ns->ms; >> } >> int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node) >> @@ -850,23 +906,22 @@ static ssize_t nvm_dev_attr_show(struct device *dev, >> { >> struct nvme_ns *ns = nvme_get_ns_from_dev(dev); >> struct nvm_dev *ndev = ns->ndev; >> - struct nvm_id *id; >> + struct nvm_geo *geo = &ndev->geo; >> struct attribute *attr; >> if (!ndev) >> return 0; >> - id = &ndev->identity; >> attr = &dattr->attr; >> if (strcmp(attr->name, "version") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->ver_id); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->ver_id); >> } else if (strcmp(attr->name, "capabilities") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->cap); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->cap); >> } else if (strcmp(attr->name, "read_typ") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->trdt); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->trdt); >> } else if (strcmp(attr->name, "read_max") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->trdm); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->trdm); >> } else { >> return scnprintf(page, >> PAGE_SIZE, >> @@ -875,75 +930,79 @@ static ssize_t nvm_dev_attr_show(struct device *dev, >> } >> } >> +static ssize_t nvm_dev_attr_show_ppaf(struct nvm_addr_format_12 *ppaf, >> + char *page) >> +{ >> + return scnprintf(page, PAGE_SIZE, >> + "0x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x\n", >> + ppaf->ch_offset, 
ppaf->ch_len, >> + ppaf->lun_offset, ppaf->lun_len, >> + ppaf->pln_offset, ppaf->pln_len, >> + ppaf->blk_offset, ppaf->blk_len, >> + ppaf->pg_offset, ppaf->pg_len, >> + ppaf->sect_offset, ppaf->sect_len); >> +} >> + >> static ssize_t nvm_dev_attr_show_12(struct device *dev, >> struct device_attribute *dattr, char *page) >> { >> struct nvme_ns *ns = nvme_get_ns_from_dev(dev); >> struct nvm_dev *ndev = ns->ndev; >> - struct nvm_id *id; >> + struct nvm_geo *geo = &ndev->geo; >> struct attribute *attr; >> if (!ndev) >> return 0; >> - id = &ndev->identity; >> attr = &dattr->attr; >> if (strcmp(attr->name, "vendor_opcode") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->vmnt); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->vmnt); >> } else if (strcmp(attr->name, "device_mode") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->dom); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->dom); >> /* kept for compatibility */ >> } else if (strcmp(attr->name, "media_manager") == 0) { >> return scnprintf(page, PAGE_SIZE, "%s\n", "gennvm"); >> } else if (strcmp(attr->name, "ppa_format") == 0) { >> - return scnprintf(page, PAGE_SIZE, >> - "0x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x\n", >> - id->ppaf.ch_offset, id->ppaf.ch_len, >> - id->ppaf.lun_offset, id->ppaf.lun_len, >> - id->ppaf.pln_offset, id->ppaf.pln_len, >> - id->ppaf.blk_offset, id->ppaf.blk_len, >> - id->ppaf.pg_offset, id->ppaf.pg_len, >> - id->ppaf.sect_offset, id->ppaf.sect_len); >> + return nvm_dev_attr_show_ppaf((void *)&geo->addrf, page); > > Why does the code here cast to void *, and not to the address format data structure? > > Have you thought about doing the cast directly here, instead of making a function for it? I like it better to be tight instead of having to break the line for this. Same point you make below. 
> >> } else if (strcmp(attr->name, "media_type") == 0) { /* u8 */ >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->mtype); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->mtype); >> } else if (strcmp(attr->name, "flash_media_type") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->fmtype); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->fmtype); >> } else if (strcmp(attr->name, "num_channels") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_ch); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chnls); >> } else if (strcmp(attr->name, "num_luns") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_lun); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_luns); >> } else if (strcmp(attr->name, "num_planes") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_pln); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->num_pln); >> } else if (strcmp(attr->name, "num_blocks") == 0) { /* u16 */ >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_chk); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chks); >> } else if (strcmp(attr->name, "num_pages") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_pg); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->num_pg); >> } else if (strcmp(attr->name, "page_size") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->fpg_sz); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->fpg_sz); >> } else if (strcmp(attr->name, "hw_sector_size") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->csecs); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->csecs); >> } else if (strcmp(attr->name, "oob_sector_size") == 0) {/* u32 */ >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->sos); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->sos); >> } else if (strcmp(attr->name, "prog_typ") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprt); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tprt); >> } else if 
(strcmp(attr->name, "prog_max") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprm); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tprm); >> } else if (strcmp(attr->name, "erase_typ") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbet); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tbet); >> } else if (strcmp(attr->name, "erase_max") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbem); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tbem); >> } else if (strcmp(attr->name, "multiplane_modes") == 0) { >> - return scnprintf(page, PAGE_SIZE, "0x%08x\n", id->mpos); >> + return scnprintf(page, PAGE_SIZE, "0x%08x\n", geo->mpos); >> } else if (strcmp(attr->name, "media_capabilities") == 0) { >> - return scnprintf(page, PAGE_SIZE, "0x%08x\n", id->mccap); >> + return scnprintf(page, PAGE_SIZE, "0x%08x\n", geo->mccap); >> } else if (strcmp(attr->name, "max_phys_secs") == 0) { >> return scnprintf(page, PAGE_SIZE, "%u\n", NVM_MAX_VLBA); >> } else { >> - return scnprintf(page, >> - PAGE_SIZE, >> - "Unhandled attr(%s) in `nvm_dev_attr_show_12`\n", >> - attr->name); >> + return scnprintf(page, PAGE_SIZE, >> + "Unhandled attr(%s) in `nvm_dev_attr_show_12`\n", >> + attr->name); >> } >> } >> @@ -952,42 +1011,40 @@ static ssize_t nvm_dev_attr_show_20(struct device *dev, >> { >> struct nvme_ns *ns = nvme_get_ns_from_dev(dev); >> struct nvm_dev *ndev = ns->ndev; >> - struct nvm_id *id; >> + struct nvm_geo *geo = &ndev->geo; >> struct attribute *attr; >> if (!ndev) >> return 0; >> - id = &ndev->identity; >> attr = &dattr->attr; >> if (strcmp(attr->name, "groups") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_ch); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chnls); >> } else if (strcmp(attr->name, "punits") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_lun); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_luns); >> } else if (strcmp(attr->name, "chunks") == 0) { >> - return 
scnprintf(page, PAGE_SIZE, "%u\n", id->num_chk); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chks); >> } else if (strcmp(attr->name, "clba") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->clba); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->clba); >> } else if (strcmp(attr->name, "ws_min") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->ws_min); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->ws_min); >> } else if (strcmp(attr->name, "ws_opt") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->ws_opt); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->ws_opt); >> } else if (strcmp(attr->name, "mw_cunits") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->mw_cunits); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->mw_cunits); >> } else if (strcmp(attr->name, "write_typ") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprt); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tprt); >> } else if (strcmp(attr->name, "write_max") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprm); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tprm); >> } else if (strcmp(attr->name, "reset_typ") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbet); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tbet); >> } else if (strcmp(attr->name, "reset_max") == 0) { >> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbem); >> + return scnprintf(page, PAGE_SIZE, "%u\n", geo->tbem); >> } else { >> - return scnprintf(page, >> - PAGE_SIZE, >> - "Unhandled attr(%s) in `nvm_dev_attr_show_20`\n", >> - attr->name); >> + return scnprintf(page, PAGE_SIZE, >> + "Unhandled attr(%s) in `nvm_dev_attr_show_20`\n", >> + attr->name); >> } >> } >> @@ -1106,10 +1163,13 @@ static const struct attribute_group nvm_dev_attr_group_20 = { >> int nvme_nvm_register_sysfs(struct nvme_ns *ns) >> { >> - if (!ns->ndev) >> + struct nvm_dev *ndev = ns->ndev; >> + struct nvm_geo *geo = &ndev->geo; >> + >> + if (!ndev) 
>> return -EINVAL; >> - switch (ns->ndev->identity.ver_id) { >> + switch (geo->ver_id) { >> case 1: >> return sysfs_create_group(&disk_to_dev(ns->disk)->kobj, >> &nvm_dev_attr_group_12); >> @@ -1123,7 +1183,10 @@ int nvme_nvm_register_sysfs(struct nvme_ns *ns) >> void nvme_nvm_unregister_sysfs(struct nvme_ns *ns) >> { >> - switch (ns->ndev->identity.ver_id) { >> + struct nvm_dev *ndev = ns->ndev; >> + struct nvm_geo *geo = &ndev->geo; >> + >> + switch (geo->ver_id) { >> case 1: >> sysfs_remove_group(&disk_to_dev(ns->disk)->kobj, >> &nvm_dev_attr_group_12); >> diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h >> index e55b10573c99..16255fcd5250 100644 >> --- a/include/linux/lightnvm.h >> +++ b/include/linux/lightnvm.h >> @@ -50,7 +50,7 @@ struct nvm_id; >> struct nvm_dev; >> struct nvm_tgt_dev; >> -typedef int (nvm_id_fn)(struct nvm_dev *, struct nvm_id *); >> +typedef int (nvm_id_fn)(struct nvm_dev *); >> typedef int (nvm_op_bb_tbl_fn)(struct nvm_dev *, struct ppa_addr, u8 *); >> typedef int (nvm_op_set_bb_fn)(struct nvm_dev *, struct ppa_addr *, int, int); >> typedef int (nvm_submit_io_fn)(struct nvm_dev *, struct nvm_rq *); >> @@ -152,62 +152,48 @@ struct nvm_id_lp_tbl { >> struct nvm_id_lp_mlc mlc; >> }; >> -struct nvm_addr_format { >> - u8 ch_offset; >> +struct nvm_addr_format_12 { > > I can see in a couple of places a statement has to be over two lines due > to the length of writing out nvm_addr_format_12, would it make sense to > shorthand it to nvm_addrf_12? Good idea. 
> >> u8 ch_len; >> - u8 lun_offset; >> u8 lun_len; >> - u8 pln_offset; >> + u8 blk_len; >> + u8 pg_len; >> u8 pln_len; >> + u8 sect_len; >> + >> + u8 ch_offset; >> + u8 lun_offset; >> u8 blk_offset; >> - u8 blk_len; >> u8 pg_offset; >> - u8 pg_len; >> + u8 pln_offset; >> u8 sect_offset; >> - u8 sect_len; >> -}; >> - >> -struct nvm_id { >> - u8 ver_id; >> - u8 vmnt; >> - u32 cap; >> - u32 dom; >> - >> - struct nvm_addr_format ppaf; >> - >> - u8 num_ch; >> - u8 num_lun; >> - u16 num_chk; >> - u16 clba; >> - u16 csecs; >> - u16 sos; >> - >> - u32 ws_min; >> - u32 ws_opt; >> - u32 mw_cunits; >> - u32 trdt; >> - u32 trdm; >> - u32 tprt; >> - u32 tprm; >> - u32 tbet; >> - u32 tbem; >> - u32 mpos; >> - u32 mccap; >> - u16 cpar; >> - >> - /* calculated values */ >> - u16 ws_seq; >> - u16 ws_per_chk; >> - >> - /* 1.2 compatibility */ >> - u8 mtype; >> - u8 fmtype; >> + u64 ch_mask; >> + u64 lun_mask; >> + u64 blk_mask; >> + u64 pg_mask; >> + u64 pln_mask; >> + u64 sec_mask; >> +}; >> - u8 num_pln; >> - u16 num_pg; >> - u16 fpg_sz; >> -} __packed; >> +struct nvm_addr_format { >> + u8 ch_len; >> + u8 lun_len; >> + u8 chk_len; >> + u8 sec_len; >> + u8 rsv_len[2]; >> + >> + u8 ch_offset; >> + u8 lun_offset; >> + u8 chk_offset; >> + u8 sec_offset; >> + u8 rsv_off[2]; >> + >> + u64 ch_mask; >> + u64 lun_mask; >> + u64 chk_mask; >> + u64 sec_mask; >> + u64 rsv_mask[2]; >> +}; >> struct nvm_target { >> struct list_head list; >> @@ -274,36 +260,63 @@ enum { >> NVM_BLK_ST_BAD = 0x8, /* Bad block */ >> }; >> - >> -/* Device generic information */ >> +/* Instance geometry */ >> struct nvm_geo { >> - /* generic geometry */ >> + /* device reported version */ >> + u8 ver_id; >> + >> + /* instance specific geometry */ >> int nr_chnls; >> - int all_luns; /* across channels */ >> - int nr_luns; /* per channel */ >> - int nr_chks; /* per lun */ >> + int nr_luns; /* per channel */ >> - int sec_size; >> - int oob_size; >> - int mccap; >> + /* calculated values */ >> + int all_luns; /* across 
channels */ >> + int all_chunks; /* across channels */ >> - int sec_per_chk; >> - int sec_per_lun; >> + int op; /* over-provision in instance */ >> - int ws_min; >> - int ws_opt; >> - int ws_seq; >> - int ws_per_chk; >> + sector_t total_secs; /* across channels */ >> - int op; >> + /* chunk geometry */ >> + u32 nr_chks; /* chunks per lun */ >> + u32 clba; /* sectors per chunk */ >> + u16 csecs; /* sector size */ >> + u16 sos; /* out-of-band area size */ >> - struct nvm_addr_format ppaf; >> + /* device write constrains */ >> + u32 ws_min; /* minimum write size */ >> + u32 ws_opt; /* optimal write size */ >> + u32 mw_cunits; /* distance required for successful read */ >> - /* Legacy 1.2 specific geometry */ >> - int plane_mode; /* drive device in single, double or quad mode */ >> - int nr_planes; >> - int sec_per_pg; /* only sectors for a single page */ >> - int sec_per_pl; /* all sectors across planes */ >> + /* device capabilities */ >> + u32 mccap; >> + >> + /* device timings */ >> + u32 trdt; /* Avg. Tread (ns) */ >> + u32 trdm; /* Max Tread (ns) */ >> + u32 tprt; /* Avg. Tprog (ns) */ >> + u32 tprm; /* Max Tprog (ns) */ >> + u32 tbet; /* Avg. 
Terase (ns) */ >> + u32 tbem; /* Max Terase (ns) */ >> + >> + /* generic address format */ >> + struct nvm_addr_format addrf; >> + >> + /* 1.2 compatibility */ >> + u8 vmnt; >> + u32 cap; >> + u32 dom; >> + >> + u8 mtype; >> + u8 fmtype; >> + >> + u16 cpar; >> + u32 mpos; >> + >> + u8 num_pln; >> + u8 plane_mode; >> + u16 num_pg; >> + u16 fpg_sz; >> }; >> /* sub-device structure */ >> @@ -314,9 +327,6 @@ struct nvm_tgt_dev { >> /* Base ppas for target LUNs */ >> struct ppa_addr *luns; >> - sector_t total_secs; >> - >> - struct nvm_id identity; >> struct request_queue *q; >> struct nvm_dev *parent; >> @@ -331,13 +341,9 @@ struct nvm_dev { >> /* Device information */ >> struct nvm_geo geo; >> - unsigned long total_secs; >> - >> unsigned long *lun_map; >> void *dma_pool; >> - struct nvm_id identity; >> - >> /* Backend device */ >> struct request_queue *q; >> char name[DISK_NAME_LEN]; >> @@ -357,14 +363,16 @@ static inline struct ppa_addr generic_to_dev_addr(struct nvm_tgt_dev *tgt_dev, >> struct ppa_addr r) >> { >> struct nvm_geo *geo = &tgt_dev->geo; >> + struct nvm_addr_format_12 *ppaf = >> + (struct nvm_addr_format_12 *)&geo->addrf; >> struct ppa_addr l; >> - l.ppa = ((u64)r.g.blk) << geo->ppaf.blk_offset; >> - l.ppa |= ((u64)r.g.pg) << geo->ppaf.pg_offset; >> - l.ppa |= ((u64)r.g.sec) << geo->ppaf.sect_offset; >> - l.ppa |= ((u64)r.g.pl) << geo->ppaf.pln_offset; >> - l.ppa |= ((u64)r.g.lun) << geo->ppaf.lun_offset; >> - l.ppa |= ((u64)r.g.ch) << geo->ppaf.ch_offset; >> + l.ppa = ((u64)r.g.ch) << ppaf->ch_offset; >> + l.ppa |= ((u64)r.g.lun) << ppaf->lun_offset; >> + l.ppa |= ((u64)r.g.blk) << ppaf->blk_offset; >> + l.ppa |= ((u64)r.g.pg) << ppaf->pg_offset; >> + l.ppa |= ((u64)r.g.pl) << ppaf->pln_offset; >> + l.ppa |= ((u64)r.g.sec) << ppaf->sect_offset; >> return l; >> } >> @@ -373,24 +381,18 @@ static inline struct ppa_addr dev_to_generic_addr(struct nvm_tgt_dev *tgt_dev, >> struct ppa_addr r) >> { >> struct nvm_geo *geo = &tgt_dev->geo; >> + struct 
nvm_addr_format_12 *ppaf = >> + (struct nvm_addr_format_12 *)&geo->addrf; >> struct ppa_addr l; >> l.ppa = 0; >> - /* >> - * (r.ppa << X offset) & X len bitmask. X eq. blk, pg, etc. >> - */ >> - l.g.blk = (r.ppa >> geo->ppaf.blk_offset) & >> - (((1 << geo->ppaf.blk_len) - 1)); >> - l.g.pg |= (r.ppa >> geo->ppaf.pg_offset) & >> - (((1 << geo->ppaf.pg_len) - 1)); >> - l.g.sec |= (r.ppa >> geo->ppaf.sect_offset) & >> - (((1 << geo->ppaf.sect_len) - 1)); >> - l.g.pl |= (r.ppa >> geo->ppaf.pln_offset) & >> - (((1 << geo->ppaf.pln_len) - 1)); >> - l.g.lun |= (r.ppa >> geo->ppaf.lun_offset) & >> - (((1 << geo->ppaf.lun_len) - 1)); >> - l.g.ch |= (r.ppa >> geo->ppaf.ch_offset) & >> - (((1 << geo->ppaf.ch_len) - 1)); >> + >> + l.g.ch = (r.ppa & ppaf->ch_mask) >> ppaf->ch_offset; >> + l.g.lun = (r.ppa & ppaf->lun_mask) >> ppaf->lun_offset; >> + l.g.blk = (r.ppa & ppaf->blk_mask) >> ppaf->blk_offset; >> + l.g.pg = (r.ppa & ppaf->pg_mask) >> ppaf->pg_offset; >> + l.g.pl = (r.ppa & ppaf->pln_mask) >> ppaf->pln_offset; >> + l.g.sec = (r.ppa & ppaf->sec_mask) >> ppaf->sect_offset; >> return l; >> } > > Looks good to me, -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: <http://lists.infradead.org/pipermail/linux-nvme/attachments/20180302/f92b0d87/attachment-0001.sig> ^ permalink raw reply [flat|nested] 71+ messages in thread
* [PATCH 02/15] lightnvm: add controller capabilities to 2.0 @ 2018-02-28 15:49 ` Javier González 0 siblings, 0 replies; 71+ messages in thread From: Javier González @ 2018-02-28 15:49 UTC (permalink / raw) To: mb; +Cc: linux-block, linux-kernel, linux-nvme, Javier González Assign missing mccap value on 2.0 path Signed-off-by: Javier González <javier@cnexlabs.com> --- drivers/nvme/host/lightnvm.c | 4 +++- include/linux/lightnvm.h | 8 +++++--- 2 files changed, 8 insertions(+), 4 deletions(-) diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c index e276ace28c64..5b2024ebac76 100644 --- a/drivers/nvme/host/lightnvm.c +++ b/drivers/nvme/host/lightnvm.c @@ -318,7 +318,7 @@ static int nvme_nvm_setup_12(struct nvme_nvm_id12 *id, geo->ws_opt = sec_per_pg; geo->mw_cunits = geo->ws_opt << 3; /* default to MLC safe values */ - geo->mccap = le32_to_cpu(src->mccap); + geo->cap = le32_to_cpu(src->mccap); geo->trdt = le32_to_cpu(src->trdt); geo->trdm = le32_to_cpu(src->trdm); @@ -396,6 +396,8 @@ static int nvme_nvm_setup_20(struct nvme_nvm_id20 *id, geo->ws_opt = le32_to_cpu(id->ws_opt); geo->mw_cunits = le32_to_cpu(id->mw_cunits); + geo->cap = le32_to_cpu(id->mccap); + geo->trdt = le32_to_cpu(id->trdt); geo->trdm = le32_to_cpu(id->trdm); geo->tprt = le32_to_cpu(id->twrt); diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h index 16255fcd5250..b9f0d2070de9 100644 --- a/include/linux/lightnvm.h +++ b/include/linux/lightnvm.h @@ -288,8 +288,10 @@ struct nvm_geo { u32 ws_opt; /* optimal write size */ u32 mw_cunits; /* distance required for successful read */ - /* device capabilities */ - u32 mccap; + /* device capabilities. Note that this represents capabilities in 1.2 + * and media and controller capabilities in 2.0 + */ + u32 cap; /* device timings */ u32 trdt; /* Avg. 
Tread (ns) */ @@ -304,7 +306,7 @@ struct nvm_geo { /* 1.2 compatibility */ u8 vmnt; - u32 cap; + u32 mccap; u32 dom; u8 mtype; -- 2.7.4 ^ permalink raw reply related [flat|nested] 71+ messages in thread
* Re: [PATCH 02/15] lightnvm: add controller capabilities to 2.0 @ 2018-03-01 10:33 ` Matias Bjørling 0 siblings, 0 replies; 71+ messages in thread From: Matias Bjørling @ 2018-03-01 10:33 UTC (permalink / raw) To: Javier González Cc: linux-block, linux-kernel, linux-nvme, Javier González On 02/28/2018 04:49 PM, Javier González wrote: > Assign missing mccap value on 2.0 path > > Signed-off-by: Javier González <javier@cnexlabs.com> > --- > drivers/nvme/host/lightnvm.c | 4 +++- > include/linux/lightnvm.h | 8 +++++--- > 2 files changed, 8 insertions(+), 4 deletions(-) > > diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c > index e276ace28c64..5b2024ebac76 100644 > --- a/drivers/nvme/host/lightnvm.c > +++ b/drivers/nvme/host/lightnvm.c > @@ -318,7 +318,7 @@ static int nvme_nvm_setup_12(struct nvme_nvm_id12 *id, > geo->ws_opt = sec_per_pg; > geo->mw_cunits = geo->ws_opt << 3; /* default to MLC safe values */ > > - geo->mccap = le32_to_cpu(src->mccap); > + geo->cap = le32_to_cpu(src->mccap); > > geo->trdt = le32_to_cpu(src->trdt); > geo->trdm = le32_to_cpu(src->trdm); > @@ -396,6 +396,8 @@ static int nvme_nvm_setup_20(struct nvme_nvm_id20 *id, > geo->ws_opt = le32_to_cpu(id->ws_opt); > geo->mw_cunits = le32_to_cpu(id->mw_cunits); > > + geo->cap = le32_to_cpu(id->mccap); > + > geo->trdt = le32_to_cpu(id->trdt); > geo->trdm = le32_to_cpu(id->trdm); > geo->tprt = le32_to_cpu(id->twrt); > diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h > index 16255fcd5250..b9f0d2070de9 100644 > --- a/include/linux/lightnvm.h > +++ b/include/linux/lightnvm.h > @@ -288,8 +288,10 @@ struct nvm_geo { > u32 ws_opt; /* optimal write size */ > u32 mw_cunits; /* distance required for successful read */ > > - /* device capabilities */ > - u32 mccap; > + /* device capabilities. 
Note that this represents capabilities in 1.2 > + * and media and controller capabilities in 2.0 > + */ > + u32 cap; Here is a list of capabilities: 1.2 Bad block mgmt Hybrid command support 2.0 Vector copy Double reset The way I was thinking it would be implemented is to split the upper cap bits to 2.0, and let the lower bits be reserved for 1.2. Such that one would define the following: enum { NVM_CAP_BBM 1 << 0; NVM_CAP_HCS 1 << 1; NVM_CAP_VCPY 1 << 16; NVM_CAP_DRST 1 << 17; }; That way, the assignment from 2.0 can easily be done with cap = le32_to_cpu(id->mccap) << 16; and targets and other don't need to understand the difference between 1.2 and 2.0 format. > > /* device timings */ > u32 trdt; /* Avg. Tread (ns) */ > @@ -304,7 +306,7 @@ struct nvm_geo { > > /* 1.2 compatibility */ > u8 vmnt; > - u32 cap; > + u32 mccap; > u32 dom; > > u8 mtype; > ^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 02/15] lightnvm: add controller capabilities to 2.0 2018-03-01 10:33 ` Matias Bjørling @ 2018-03-02 11:59 ` Javier González -1 siblings, 0 replies; 71+ messages in thread From: Javier González @ 2018-03-02 11:59 UTC (permalink / raw) To: Matias Bjørling; +Cc: linux-block, linux-kernel, linux-nvme [-- Attachment #1: Type: text/plain, Size: 2598 bytes --] > On 1 Mar 2018, at 11.33, Matias Bjørling <mb@lightnvm.io> wrote: > > On 02/28/2018 04:49 PM, Javier González wrote: >> Assign missing mccap value on 2.0 path >> Signed-off-by: Javier González <javier@cnexlabs.com> >> --- >> drivers/nvme/host/lightnvm.c | 4 +++- >> include/linux/lightnvm.h | 8 +++++--- >> 2 files changed, 8 insertions(+), 4 deletions(-) >> diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c >> index e276ace28c64..5b2024ebac76 100644 >> --- a/drivers/nvme/host/lightnvm.c >> +++ b/drivers/nvme/host/lightnvm.c >> @@ -318,7 +318,7 @@ static int nvme_nvm_setup_12(struct nvme_nvm_id12 *id, >> geo->ws_opt = sec_per_pg; >> geo->mw_cunits = geo->ws_opt << 3; /* default to MLC safe values */ >> - geo->mccap = le32_to_cpu(src->mccap); >> + geo->cap = le32_to_cpu(src->mccap); >> geo->trdt = le32_to_cpu(src->trdt); >> geo->trdm = le32_to_cpu(src->trdm); >> @@ -396,6 +396,8 @@ static int nvme_nvm_setup_20(struct nvme_nvm_id20 *id, >> geo->ws_opt = le32_to_cpu(id->ws_opt); >> geo->mw_cunits = le32_to_cpu(id->mw_cunits); >> + geo->cap = le32_to_cpu(id->mccap); >> + >> geo->trdt = le32_to_cpu(id->trdt); >> geo->trdm = le32_to_cpu(id->trdm); >> geo->tprt = le32_to_cpu(id->twrt); >> diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h >> index 16255fcd5250..b9f0d2070de9 100644 >> --- a/include/linux/lightnvm.h >> +++ b/include/linux/lightnvm.h >> @@ -288,8 +288,10 @@ struct nvm_geo { >> u32 ws_opt; /* optimal write size */ >> u32 mw_cunits; /* distance required for successful read */ >> - /* device capabilities */ >> - u32 mccap; >> + /* device capabilities. 
Note that this represents capabilities in 1.2 >> + * and media and controller capabilities in 2.0 >> + */ >> + u32 cap; > > Here is a list of capabilities: > > 1.2 > Bad block mgmt > Hybrid command support > > 2.0 > > Vector copy > Double reset > > The way I was thinking it would be implemented is to split the upper cap bits to 2.0, and let the lower bits be reserved for 1.2. > > Such that one would define the following: > > enum { > NVM_CAP_BBM 1 << 0; > NVM_CAP_HCS 1 << 1; > > NVM_CAP_VCPY 1 << 16; > NVM_CAP_DRST 1 << 17; > }; > > That way, the assignment from 2.0 can easily be done with cap = le32_to_cpu(id->mccap) << 16; > > and targets and other don't need to understand the difference between 1.2 and 2.0 format. I can see that you already have a way to do it in mind. I'll remove this patch and you can implement it later on. Javier [-- Attachment #2: Message signed with OpenPGP --] [-- Type: application/pgp-signature, Size: 833 bytes --] ^ permalink raw reply [flat|nested] 71+ messages in thread
* [PATCH 03/15] lightnvm: add minor version to generic geometry @ 2018-02-28 15:49 ` Javier González 0 siblings, 0 replies; 71+ messages in thread From: Javier González @ 2018-02-28 15:49 UTC (permalink / raw) To: mb; +Cc: linux-block, linux-kernel, linux-nvme, Javier González Separate the version between major and minor on the generic geometry and represent it through sysfs in the 2.0 path. The 1.2 path only shows the major version to preserve the existing user space interface. Signed-off-by: Javier González <javier@cnexlabs.com> --- drivers/lightnvm/core.c | 4 ++-- drivers/nvme/host/lightnvm.c | 25 ++++++++++++++++++++----- include/linux/lightnvm.h | 3 ++- 3 files changed, 24 insertions(+), 8 deletions(-) diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c index 9a417d9cdf0c..c4f72fbad2bf 100644 --- a/drivers/lightnvm/core.c +++ b/drivers/lightnvm/core.c @@ -890,8 +890,8 @@ static int nvm_init(struct nvm_dev *dev) goto err; } - pr_debug("nvm: ver:%u nvm_vendor:%x\n", - geo->ver_id, + pr_debug("nvm: ver:%u.%u nvm_vendor:%x\n", + geo->major_ver_id, geo->minor_ver_id, geo->vmnt); ret = nvm_core_init(dev); diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c index 5b2024ebac76..a600e70b6e6b 100644 --- a/drivers/nvme/host/lightnvm.c +++ b/drivers/nvme/host/lightnvm.c @@ -295,7 +295,9 @@ static int nvme_nvm_setup_12(struct nvme_nvm_id12 *id, return -EINVAL; } - geo->ver_id = id->ver_id; + /* 1.2 spec. 
only reports a single version id - unfold */ + geo->major_ver_id = id->ver_id; + geo->minor_ver_id = 2; geo->nr_chnls = src->num_ch; geo->nr_luns = src->num_lun; @@ -380,7 +382,14 @@ static void nvme_nvm_set_addr_20(struct nvm_addr_format *dst, static int nvme_nvm_setup_20(struct nvme_nvm_id20 *id, struct nvm_geo *geo) { - geo->ver_id = id->mjr; + geo->major_ver_id = id->mjr; + geo->minor_ver_id = id->mnr; + + if (!(geo->major_ver_id == 2 && geo->minor_ver_id == 0)) { + pr_err("nvm: OCSSD version not supported (v%d.%d)\n", + geo->major_ver_id, geo->minor_ver_id); + return -EINVAL; + } geo->nr_chnls = le16_to_cpu(id->num_grp); geo->nr_luns = le16_to_cpu(id->num_pu); @@ -917,7 +926,13 @@ static ssize_t nvm_dev_attr_show(struct device *dev, attr = &dattr->attr; if (strcmp(attr->name, "version") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", geo->ver_id); + if (geo->major_ver_id == 1) + return scnprintf(page, PAGE_SIZE, "%u\n", + geo->major_ver_id); + else + return scnprintf(page, PAGE_SIZE, "%u.%u\n", + geo->major_ver_id, + geo->minor_ver_id); } else if (strcmp(attr->name, "capabilities") == 0) { return scnprintf(page, PAGE_SIZE, "%u\n", geo->cap); } else if (strcmp(attr->name, "read_typ") == 0) { @@ -1171,7 +1186,7 @@ int nvme_nvm_register_sysfs(struct nvme_ns *ns) if (!ndev) return -EINVAL; - switch (geo->ver_id) { + switch (geo->major_ver_id) { case 1: return sysfs_create_group(&disk_to_dev(ns->disk)->kobj, &nvm_dev_attr_group_12); @@ -1188,7 +1203,7 @@ void nvme_nvm_unregister_sysfs(struct nvme_ns *ns) struct nvm_dev *ndev = ns->ndev; struct nvm_geo *geo = &ndev->geo; - switch (geo->ver_id) { + switch (geo->major_ver_id) { case 1: sysfs_remove_group(&disk_to_dev(ns->disk)->kobj, &nvm_dev_attr_group_12); diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h index b9f0d2070de9..4b2ecbf45fd9 100644 --- a/include/linux/lightnvm.h +++ b/include/linux/lightnvm.h @@ -263,7 +263,8 @@ enum { /* Instance geometry */ struct nvm_geo { /* device reported 
version */ - u8 ver_id; + u8 major_ver_id; + u8 minor_ver_id; /* instance specific geometry */ int nr_chnls; -- 2.7.4 ^ permalink raw reply related [flat|nested] 71+ messages in thread
* [PATCH 04/15] lightnvm: add shorten OCSSD version in geo 2018-02-28 15:49 ` Javier González @ 2018-02-28 15:49 ` Javier González -1 siblings, 0 replies; 71+ messages in thread From: Javier González @ 2018-02-28 15:49 UTC (permalink / raw) To: mb; +Cc: linux-block, linux-kernel, linux-nvme, Javier González Create a shortened version to use in the generic geometry. Signed-off-by: Javier González <javier@cnexlabs.com> --- drivers/nvme/host/lightnvm.c | 6 ++++++ include/linux/lightnvm.h | 8 ++++++++ 2 files changed, 14 insertions(+) diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c index a600e70b6e6b..85f336a79cda 100644 --- a/drivers/nvme/host/lightnvm.c +++ b/drivers/nvme/host/lightnvm.c @@ -299,6 +299,9 @@ static int nvme_nvm_setup_12(struct nvme_nvm_id12 *id, geo->major_ver_id = id->ver_id; geo->minor_ver_id = 2; + /* Set compacted version for upper layers */ + geo->version = NVM_OCSSD_SPEC_12; + geo->nr_chnls = src->num_ch; geo->nr_luns = src->num_lun; geo->all_luns = geo->nr_chnls * geo->nr_luns; @@ -385,6 +388,9 @@ static int nvme_nvm_setup_20(struct nvme_nvm_id20 *id, geo->major_ver_id = id->mjr; geo->minor_ver_id = id->mnr; + /* Set compacted version for upper layers */ + geo->version = NVM_OCSSD_SPEC_20; + if (!(geo->major_ver_id == 2 && geo->minor_ver_id == 0)) { pr_err("nvm: OCSSD version not supported (v%d.%d)\n", geo->major_ver_id, geo->minor_ver_id); return -EINVAL; } diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h index 4b2ecbf45fd9..b8bc158a2472 100644 --- a/include/linux/lightnvm.h +++ b/include/linux/lightnvm.h @@ -23,6 +23,11 @@ enum { #define NVM_LUN_BITS (8) #define NVM_CH_BITS (7) +enum { + NVM_OCSSD_SPEC_12 = 12, + NVM_OCSSD_SPEC_20 = 20, +}; + struct ppa_addr { /* Generic structure for all addresses */ union { @@ -266,6 +271,9 @@ struct nvm_geo { u8 major_ver_id; u8 minor_ver_id; + /* kernel short version */ + u8 version; + /* instance specific geometry */ int nr_chnls; int nr_luns; /* per channel */ -- 2.7.4 ^ permalink
raw reply related [flat|nested] 71+ messages in thread
* [PATCH 05/15] lightnvm: complete geo structure with maxoc* @ 2018-02-28 15:49 ` Javier González 0 siblings, 0 replies; 71+ messages in thread From: Javier González @ 2018-02-28 15:49 UTC (permalink / raw) To: mb; +Cc: linux-block, linux-kernel, linux-nvme, Javier González Complete the generic geometry structure with the maxoc and maxocpu fields, present in the 2.0 spec. Also, expose them through sysfs. Signed-off-by: Javier González <javier@cnexlabs.com> --- drivers/nvme/host/lightnvm.c | 17 +++++++++++++++++ include/linux/lightnvm.h | 2 ++ 2 files changed, 19 insertions(+) diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c index 85f336a79cda..afb5f883f8c8 100644 --- a/drivers/nvme/host/lightnvm.c +++ b/drivers/nvme/host/lightnvm.c @@ -323,6 +323,13 @@ static int nvme_nvm_setup_12(struct nvme_nvm_id12 *id, geo->ws_opt = sec_per_pg; geo->mw_cunits = geo->ws_opt << 3; /* default to MLC safe values */ + /* Do not impose values for maximum number of open blocks as it is + * unspecified in 1.2. Users of 1.2 must be aware of this and eventually + * specify these values through a quirk if restrictions apply.
+ */ + geo->maxoc = geo->all_luns * geo->nr_chks; + geo->maxocpu = geo->nr_chks; + geo->cap = le32_to_cpu(src->mccap); geo->trdt = le32_to_cpu(src->trdt); @@ -410,6 +417,8 @@ static int nvme_nvm_setup_20(struct nvme_nvm_id20 *id, geo->ws_min = le32_to_cpu(id->ws_min); geo->ws_opt = le32_to_cpu(id->ws_opt); geo->mw_cunits = le32_to_cpu(id->mw_cunits); + geo->maxoc = le32_to_cpu(id->maxoc); + geo->maxocpu = le32_to_cpu(id->maxocpu); geo->cap = le32_to_cpu(id->mccap); @@ -1054,6 +1063,10 @@ static ssize_t nvm_dev_attr_show_20(struct device *dev, return scnprintf(page, PAGE_SIZE, "%u\n", geo->ws_min); } else if (strcmp(attr->name, "ws_opt") == 0) { return scnprintf(page, PAGE_SIZE, "%u\n", geo->ws_opt); + } else if (strcmp(attr->name, "maxoc") == 0) { + return scnprintf(page, PAGE_SIZE, "%u\n", geo->maxoc); + } else if (strcmp(attr->name, "maxocpu") == 0) { + return scnprintf(page, PAGE_SIZE, "%u\n", geo->maxocpu); } else if (strcmp(attr->name, "mw_cunits") == 0) { return scnprintf(page, PAGE_SIZE, "%u\n", geo->mw_cunits); } else if (strcmp(attr->name, "write_typ") == 0) { @@ -1151,6 +1164,8 @@ static NVM_DEV_ATTR_20_RO(chunks); static NVM_DEV_ATTR_20_RO(clba); static NVM_DEV_ATTR_20_RO(ws_min); static NVM_DEV_ATTR_20_RO(ws_opt); +static NVM_DEV_ATTR_20_RO(maxoc); +static NVM_DEV_ATTR_20_RO(maxocpu); static NVM_DEV_ATTR_20_RO(mw_cunits); static NVM_DEV_ATTR_20_RO(write_typ); static NVM_DEV_ATTR_20_RO(write_max); @@ -1167,6 +1182,8 @@ static struct attribute *nvm_dev_attrs_20[] = { &dev_attr_clba.attr, &dev_attr_ws_min.attr, &dev_attr_ws_opt.attr, + &dev_attr_maxoc.attr, + &dev_attr_maxocpu.attr, &dev_attr_mw_cunits.attr, &dev_attr_read_typ.attr, diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h index b8bc158a2472..2102b092c7eb 100644 --- a/include/linux/lightnvm.h +++ b/include/linux/lightnvm.h @@ -296,6 +296,8 @@ struct nvm_geo { u32 ws_min; /* minimum write size */ u32 ws_opt; /* optimal write size */ u32 mw_cunits; /* distance required for successful 
read */ + u32 maxoc; /* maximum open chunks */ + u32 maxocpu; /* maximum open chunks per parallel unit */ /* device capabilities. Note that this represents capabilities in 1.2 * and media and controller capabilities in 2.0 -- 2.7.4 ^ permalink raw reply related [flat|nested] 71+ messages in thread
* [PATCH 06/15] lightnvm: normalize geometry nomenclature 2018-02-28 15:49 ` Javier González (?) @ 2018-02-28 15:49 ` Javier González -1 siblings, 0 replies; 71+ messages in thread From: Javier González @ 2018-02-28 15:49 UTC (permalink / raw) To: mb; +Cc: linux-block, Javier González, linux-kernel, linux-nvme Normalize nomenclature for naming channels, luns, chunks, planes and sectors as well as derivations in order to improve readability. Signed-off-by: Javier González <javier@cnexlabs.com> --- drivers/lightnvm/core.c | 89 +++++++++++++++++++++---------------------- drivers/lightnvm/pblk-core.c | 4 +- drivers/lightnvm/pblk-init.c | 30 +++++++-------- drivers/lightnvm/pblk-sysfs.c | 4 +- drivers/lightnvm/pblk.h | 20 +++++----- drivers/nvme/host/lightnvm.c | 54 +++++++++++++------------- include/linux/lightnvm.h | 16 ++++---- 7 files changed, 108 insertions(+), 109 deletions(-)
ZnNldCwgcHBhZi0+c2VjX2xlbik7CiAKIAlzeiArPSBzbnByaW50ZihwYWdlICsgc3osIFBBR0Vf U0laRSAtIHN6LAogCQkiZGV2aWNlOmNoOiVkLyVkLGx1bjolZC8lZCxibGs6JWQvJWQscGc6JWQv JWQscGw6JWQvJWQsc2VjOiVkLyVkXG4iLApAQCAtMTM3LDcgKzEzNyw3IEBAIHN0YXRpYyBzc2l6 ZV90IHBibGtfc3lzZnNfcHBhZihzdHJ1Y3QgcGJsayAqcGJsaywgY2hhciAqcGFnZSkKIAkJCWdl b19wcGFmLT5ibGtfb2Zmc2V0LCBnZW9fcHBhZi0+YmxrX2xlbiwKIAkJCWdlb19wcGFmLT5wZ19v ZmZzZXQsIGdlb19wcGFmLT5wZ19sZW4sCiAJCQlnZW9fcHBhZi0+cGxuX29mZnNldCwgZ2VvX3Bw YWYtPnBsbl9sZW4sCi0JCQlnZW9fcHBhZi0+c2VjdF9vZmZzZXQsIGdlb19wcGFmLT5zZWN0X2xl bik7CisJCQlnZW9fcHBhZi0+c2VjX29mZnNldCwgZ2VvX3BwYWYtPnNlY19sZW4pOwogCiAJcmV0 dXJuIHN6OwogfQpkaWZmIC0tZ2l0IGEvZHJpdmVycy9saWdodG52bS9wYmxrLmggYi9kcml2ZXJz L2xpZ2h0bnZtL3BibGsuaAppbmRleCBiMjljMWU2Njk4YWEuLmJhZTJjYzc1OGRlOCAxMDA2NDQK LS0tIGEvZHJpdmVycy9saWdodG52bS9wYmxrLmgKKysrIGIvZHJpdmVycy9saWdodG52bS9wYmxr LmgKQEAgLTk0MSw3ICs5NDEsNyBAQCBzdGF0aWMgaW5saW5lIGludCBwYmxrX3BwYV90b19saW5l KHN0cnVjdCBwcGFfYWRkciBwKQogCiBzdGF0aWMgaW5saW5lIGludCBwYmxrX3BwYV90b19wb3Mo c3RydWN0IG52bV9nZW8gKmdlbywgc3RydWN0IHBwYV9hZGRyIHApCiB7Ci0JcmV0dXJuIHAuZy5s dW4gKiBnZW8tPm5yX2NobmxzICsgcC5nLmNoOworCXJldHVybiBwLmcubHVuICogZ2VvLT5udW1f Y2ggKyBwLmcuY2g7CiB9CiAKIHN0YXRpYyBpbmxpbmUgc3RydWN0IHBwYV9hZGRyIGFkZHJfdG9f Z2VuX3BwYShzdHJ1Y3QgcGJsayAqcGJsaywgdTY0IHBhZGRyLApAQCAtOTU3LDcgKzk1Nyw3IEBA IHN0YXRpYyBpbmxpbmUgc3RydWN0IHBwYV9hZGRyIGFkZHJfdG9fZ2VuX3BwYShzdHJ1Y3QgcGJs ayAqcGJsaywgdTY0IHBhZGRyLAogCXBwYS5nLmx1biA9IChwYWRkciAmIHBwYWYtPmx1bl9tYXNr KSA+PiBwcGFmLT5sdW5fb2Zmc2V0OwogCXBwYS5nLmNoID0gKHBhZGRyICYgcHBhZi0+Y2hfbWFz aykgPj4gcHBhZi0+Y2hfb2Zmc2V0OwogCXBwYS5nLnBsID0gKHBhZGRyICYgcHBhZi0+cGxuX21h c2spID4+IHBwYWYtPnBsbl9vZmZzZXQ7Ci0JcHBhLmcuc2VjID0gKHBhZGRyICYgcHBhZi0+c2Vj X21hc2spID4+IHBwYWYtPnNlY3Rfb2Zmc2V0OworCXBwYS5nLnNlYyA9IChwYWRkciAmIHBwYWYt PnNlY19tYXNrKSA+PiBwcGFmLT5zZWNfb2Zmc2V0OwogCiAJcmV0dXJuIHBwYTsKIH0KQEAgLTk3 Myw3ICs5NzMsNyBAQCBzdGF0aWMgaW5saW5lIHU2NCBwYmxrX2Rldl9wcGFfdG9fbGluZV9hZGRy 
KHN0cnVjdCBwYmxrICpwYmxrLAogCXBhZGRyIHw9ICh1NjQpcC5nLmx1biA8PCBwcGFmLT5sdW5f b2Zmc2V0OwogCXBhZGRyIHw9ICh1NjQpcC5nLnBnIDw8IHBwYWYtPnBnX29mZnNldDsKIAlwYWRk ciB8PSAodTY0KXAuZy5wbCA8PCBwcGFmLT5wbG5fb2Zmc2V0OwotCXBhZGRyIHw9ICh1NjQpcC5n LnNlYyA8PCBwcGFmLT5zZWN0X29mZnNldDsKKwlwYWRkciB8PSAodTY0KXAuZy5zZWMgPDwgcHBh Zi0+c2VjX29mZnNldDsKIAogCXJldHVybiBwYWRkcjsKIH0KQEAgLTk5OCw3ICs5OTgsNyBAQCBz dGF0aWMgaW5saW5lIHN0cnVjdCBwcGFfYWRkciBwYmxrX3BwYTMyX3RvX3BwYTY0KHN0cnVjdCBw YmxrICpwYmxrLCB1MzIgcHBhMzIpCiAJCXBwYTY0LmcuYmxrID0gKHBwYTMyICYgcHBhZi0+Ymxr X21hc2spID4+IHBwYWYtPmJsa19vZmZzZXQ7CiAJCXBwYTY0LmcucGcgPSAocHBhMzIgJiBwcGFm LT5wZ19tYXNrKSA+PiBwcGFmLT5wZ19vZmZzZXQ7CiAJCXBwYTY0LmcucGwgPSAocHBhMzIgJiBw cGFmLT5wbG5fbWFzaykgPj4gcHBhZi0+cGxuX29mZnNldDsKLQkJcHBhNjQuZy5zZWMgPSAocHBh MzIgJiBwcGFmLT5zZWNfbWFzaykgPj4gcHBhZi0+c2VjdF9vZmZzZXQ7CisJCXBwYTY0Lmcuc2Vj ID0gKHBwYTMyICYgcHBhZi0+c2VjX21hc2spID4+IHBwYWYtPnNlY19vZmZzZXQ7CiAJfQogCiAJ cmV0dXJuIHBwYTY0OwpAQCAtMTAyMiw3ICsxMDIyLDcgQEAgc3RhdGljIGlubGluZSB1MzIgcGJs a19wcGE2NF90b19wcGEzMihzdHJ1Y3QgcGJsayAqcGJsaywgc3RydWN0IHBwYV9hZGRyIHBwYTY0 KQogCQlwcGEzMiB8PSBwcGE2NC5nLmJsayA8PCBwcGFmLT5ibGtfb2Zmc2V0OwogCQlwcGEzMiB8 PSBwcGE2NC5nLnBnIDw8IHBwYWYtPnBnX29mZnNldDsKIAkJcHBhMzIgfD0gcHBhNjQuZy5wbCA8 PCBwcGFmLT5wbG5fb2Zmc2V0OwotCQlwcGEzMiB8PSBwcGE2NC5nLnNlYyA8PCBwcGFmLT5zZWN0 X29mZnNldDsKKwkJcHBhMzIgfD0gcHBhNjQuZy5zZWMgPDwgcHBhZi0+c2VjX29mZnNldDsKIAl9 CiAKIAlyZXR1cm4gcHBhMzI7CkBAIC0xMTQwLDcgKzExNDAsNyBAQCBzdGF0aWMgaW5saW5lIGlu dCBwYmxrX3NldF9wcm9ncl9tb2RlKHN0cnVjdCBwYmxrICpwYmxrLCBpbnQgdHlwZSkKIAlzdHJ1 Y3QgbnZtX2dlbyAqZ2VvID0gJmRldi0+Z2VvOwogCWludCBmbGFnczsKIAotCWZsYWdzID0gZ2Vv LT5wbGFuZV9tb2RlID4+IDE7CisJZmxhZ3MgPSBnZW8tPnBsbl9tb2RlID4+IDE7CiAKIAlpZiAo dHlwZSA9PSBQQkxLX1dSSVRFKQogCQlmbGFncyB8PSBOVk1fSU9fU0NSQU1CTEVfRU5BQkxFOwpA QCAtMTE2MSw3ICsxMTYxLDcgQEAgc3RhdGljIGlubGluZSBpbnQgcGJsa19zZXRfcmVhZF9tb2Rl KHN0cnVjdCBwYmxrICpwYmxrLCBpbnQgdHlwZSkKIAogCWZsYWdzID0gTlZNX0lPX1NVU1BFTkQg 
fCBOVk1fSU9fU0NSQU1CTEVfRU5BQkxFOwogCWlmICh0eXBlID09IFBCTEtfUkVBRF9TRVFVRU5U SUFMKQotCQlmbGFncyB8PSBnZW8tPnBsYW5lX21vZGUgPj4gMTsKKwkJZmxhZ3MgfD0gZ2VvLT5w bG5fbW9kZSA+PiAxOwogCiAJcmV0dXJuIGZsYWdzOwogfQpAQCAtMTIxNCwxMCArMTIxNCwxMCBA QCBzdGF0aWMgaW5saW5lIGludCBwYmxrX2JvdW5kYXJ5X3BwYV9jaGVja3Moc3RydWN0IG52bV90 Z3RfZGV2ICp0Z3RfZGV2LAogCQlwcGEgPSAmcHBhc1tpXTsKIAogCQlpZiAoIXBwYS0+Yy5pc19j YWNoZWQgJiYKLQkJCQlwcGEtPmcuY2ggPCBnZW8tPm5yX2NobmxzICYmCi0JCQkJcHBhLT5nLmx1 biA8IGdlby0+bnJfbHVucyAmJgorCQkJCXBwYS0+Zy5jaCA8IGdlby0+bnVtX2NoICYmCisJCQkJ cHBhLT5nLmx1biA8IGdlby0+bnVtX2x1biAmJgogCQkJCXBwYS0+Zy5wbCA8IGdlby0+bnVtX3Bs biAmJgotCQkJCXBwYS0+Zy5ibGsgPCBnZW8tPm5yX2Noa3MgJiYKKwkJCQlwcGEtPmcuYmxrIDwg Z2VvLT5udW1fY2hrICYmCiAJCQkJcHBhLT5nLnBnIDwgZ2VvLT5udW1fcGcgJiYKIAkJCQlwcGEt Pmcuc2VjIDwgZ2VvLT53c19taW4pCiAJCQljb250aW51ZTsKZGlmZiAtLWdpdCBhL2RyaXZlcnMv bnZtZS9ob3N0L2xpZ2h0bnZtLmMgYi9kcml2ZXJzL252bWUvaG9zdC9saWdodG52bS5jCmluZGV4 IGFmYjVmODgzZjhjOC4uZjcxMzU2NTlmOTE4IDEwMDY0NAotLS0gYS9kcml2ZXJzL252bWUvaG9z dC9saWdodG52bS5jCisrKyBiL2RyaXZlcnMvbnZtZS9ob3N0L2xpZ2h0bnZtLmMKQEAgLTI2Miwy MSArMjYyLDIxIEBAIHN0YXRpYyB2b2lkIG52bWVfbnZtX3NldF9hZGRyXzEyKHN0cnVjdCBudm1f YWRkcl9mb3JtYXRfMTIgKmRzdCwKIAlkc3QtPmJsa19sZW4gPSBzcmMtPmJsa19sZW47CiAJZHN0 LT5wZ19sZW4gPSBzcmMtPnBnX2xlbjsKIAlkc3QtPnBsbl9sZW4gPSBzcmMtPnBsbl9sZW47Ci0J ZHN0LT5zZWN0X2xlbiA9IHNyYy0+c2VjX2xlbjsKKwlkc3QtPnNlY19sZW4gPSBzcmMtPnNlY19s ZW47CiAKIAlkc3QtPmNoX29mZnNldCA9IHNyYy0+Y2hfb2Zmc2V0OwogCWRzdC0+bHVuX29mZnNl dCA9IHNyYy0+bHVuX29mZnNldDsKIAlkc3QtPmJsa19vZmZzZXQgPSBzcmMtPmJsa19vZmZzZXQ7 CiAJZHN0LT5wZ19vZmZzZXQgPSBzcmMtPnBnX29mZnNldDsKIAlkc3QtPnBsbl9vZmZzZXQgPSBz cmMtPnBsbl9vZmZzZXQ7Ci0JZHN0LT5zZWN0X29mZnNldCA9IHNyYy0+c2VjX29mZnNldDsKKwlk c3QtPnNlY19vZmZzZXQgPSBzcmMtPnNlY19vZmZzZXQ7CiAKIAlkc3QtPmNoX21hc2sgPSAoKDFV TEwgPDwgZHN0LT5jaF9sZW4pIC0gMSkgPDwgZHN0LT5jaF9vZmZzZXQ7CiAJZHN0LT5sdW5fbWFz ayA9ICgoMVVMTCA8PCBkc3QtPmx1bl9sZW4pIC0gMSkgPDwgZHN0LT5sdW5fb2Zmc2V0OwogCWRz 
dC0+YmxrX21hc2sgPSAoKDFVTEwgPDwgZHN0LT5ibGtfbGVuKSAtIDEpIDw8IGRzdC0+YmxrX29m ZnNldDsKIAlkc3QtPnBnX21hc2sgPSAoKDFVTEwgPDwgZHN0LT5wZ19sZW4pIC0gMSkgPDwgZHN0 LT5wZ19vZmZzZXQ7CiAJZHN0LT5wbG5fbWFzayA9ICgoMVVMTCA8PCBkc3QtPnBsbl9sZW4pIC0g MSkgPDwgZHN0LT5wbG5fb2Zmc2V0OwotCWRzdC0+c2VjX21hc2sgPSAoKDFVTEwgPDwgZHN0LT5z ZWN0X2xlbikgLSAxKSA8PCBkc3QtPnNlY3Rfb2Zmc2V0OworCWRzdC0+c2VjX21hc2sgPSAoKDFV TEwgPDwgZHN0LT5zZWNfbGVuKSAtIDEpIDw8IGRzdC0+c2VjX29mZnNldDsKIH0KIAogc3RhdGlj IGludCBudm1lX252bV9zZXR1cF8xMihzdHJ1Y3QgbnZtZV9udm1faWQxMiAqaWQsCkBAIC0zMDIs MTEgKzMwMiwxMSBAQCBzdGF0aWMgaW50IG52bWVfbnZtX3NldHVwXzEyKHN0cnVjdCBudm1lX252 bV9pZDEyICppZCwKIAkvKiBTZXQgY29tcGFjdGVkIHZlcnNpb24gZm9yIHVwcGVyIGxheWVycyAq LwogCWdlby0+dmVyc2lvbiA9IE5WTV9PQ1NTRF9TUEVDXzEyOwogCi0JZ2VvLT5ucl9jaG5scyA9 IHNyYy0+bnVtX2NoOwotCWdlby0+bnJfbHVucyA9IHNyYy0+bnVtX2x1bjsKLQlnZW8tPmFsbF9s dW5zID0gZ2VvLT5ucl9jaG5scyAqIGdlby0+bnJfbHVuczsKKwlnZW8tPm51bV9jaCA9IHNyYy0+ bnVtX2NoOworCWdlby0+bnVtX2x1biA9IHNyYy0+bnVtX2x1bjsKKwlnZW8tPmFsbF9sdW5zID0g Z2VvLT5udW1fY2ggKiBnZW8tPm51bV9sdW47CiAKLQlnZW8tPm5yX2Noa3MgPSBsZTE2X3RvX2Nw dShzcmMtPm51bV9jaGspOworCWdlby0+bnVtX2NoayA9IGxlMTZfdG9fY3B1KHNyYy0+bnVtX2No ayk7CiAKIAlnZW8tPmNzZWNzID0gbGUxNl90b19jcHUoc3JjLT5jc2Vjcyk7CiAJZ2VvLT5zb3Mg PSBsZTE2X3RvX2NwdShzcmMtPnNvcyk7CkBAIC0zMTYsNyArMzE2LDcgQEAgc3RhdGljIGludCBu dm1lX252bV9zZXR1cF8xMihzdHJ1Y3QgbnZtZV9udm1faWQxMiAqaWQsCiAJc2VjX3Blcl9wbCA9 IHNlY19wZXJfcGcgKiBzcmMtPm51bV9wbG47CiAJZ2VvLT5jbGJhID0gc2VjX3Blcl9wbCAqIHBn X3Blcl9ibGs7CiAKLQlnZW8tPmFsbF9jaHVua3MgPSBnZW8tPmFsbF9sdW5zICogZ2VvLT5ucl9j aGtzOworCWdlby0+YWxsX2NodW5rcyA9IGdlby0+YWxsX2x1bnMgKiBnZW8tPm51bV9jaGs7CiAJ Z2VvLT50b3RhbF9zZWNzID0gZ2VvLT5jbGJhICogZ2VvLT5hbGxfY2h1bmtzOwogCiAJZ2VvLT53 c19taW4gPSBzZWNfcGVyX3BnOwpAQCAtMzI3LDggKzMyNyw4IEBAIHN0YXRpYyBpbnQgbnZtZV9u dm1fc2V0dXBfMTIoc3RydWN0IG52bWVfbnZtX2lkMTIgKmlkLAogCSAqIHVuc3BlY2lmaWVkIGlu IDEuMi4gVXNlcnMgb2YgMS4yIG11c3QgYmUgYXdhcmUgb2YgdGhpcyBhbmQgZXZlbnR1YWxseQog 
CSAqIHNwZWNpZnkgdGhlc2UgdmFsdWVzIHRocm91Z2ggYSBxdWlyayBpZiByZXN0cmljdGlvbnMg YXBwbHkuCiAJICovCi0JZ2VvLT5tYXhvYyA9IGdlby0+YWxsX2x1bnMgKiBnZW8tPm5yX2Noa3M7 Ci0JZ2VvLT5tYXhvY3B1ID0gZ2VvLT5ucl9jaGtzOworCWdlby0+bWF4b2MgPSBnZW8tPmFsbF9s dW5zICogZ2VvLT5udW1fY2hrOworCWdlby0+bWF4b2NwdSA9IGdlby0+bnVtX2NoazsKIAogCWdl by0+Y2FwID0gbGUzMl90b19jcHUoc3JjLT5tY2NhcCk7CiAKQEAgLTM1MCwxMyArMzUwLDEzIEBA IHN0YXRpYyBpbnQgbnZtZV9udm1fc2V0dXBfMTIoc3RydWN0IG52bWVfbnZtX2lkMTIgKmlkLAog CWdlby0+Y3BhciA9IGxlMTZfdG9fY3B1KHNyYy0+Y3Bhcik7CiAJZ2VvLT5tcG9zID0gbGUzMl90 b19jcHUoc3JjLT5tcG9zKTsKIAotCWdlby0+cGxhbmVfbW9kZSA9IE5WTV9QTEFORV9TSU5HTEU7 CisJZ2VvLT5wbG5fbW9kZSA9IE5WTV9QTEFORV9TSU5HTEU7CiAKIAlpZiAoZ2VvLT5tcG9zICYg MHgwMjAyMDIpIHsKLQkJZ2VvLT5wbGFuZV9tb2RlID0gTlZNX1BMQU5FX0RPVUJMRTsKKwkJZ2Vv LT5wbG5fbW9kZSA9IE5WTV9QTEFORV9ET1VCTEU7CiAJCWdlby0+d3Nfb3B0IDw8PSAxOwogCX0g ZWxzZSBpZiAoZ2VvLT5tcG9zICYgMHgwNDA0MDQpIHsKLQkJZ2VvLT5wbGFuZV9tb2RlID0gTlZN X1BMQU5FX1FVQUQ7CisJCWdlby0+cGxuX21vZGUgPSBOVk1fUExBTkVfUVVBRDsKIAkJZ2VvLT53 c19vcHQgPDw9IDI7CiAJfQogCkBAIC00MDQsMTQgKzQwNCwxNCBAQCBzdGF0aWMgaW50IG52bWVf bnZtX3NldHVwXzIwKHN0cnVjdCBudm1lX252bV9pZDIwICppZCwKIAkJcmV0dXJuIC1FSU5WQUw7 CiAJfQogCi0JZ2VvLT5ucl9jaG5scyA9IGxlMTZfdG9fY3B1KGlkLT5udW1fZ3JwKTsKLQlnZW8t Pm5yX2x1bnMgPSBsZTE2X3RvX2NwdShpZC0+bnVtX3B1KTsKLQlnZW8tPmFsbF9sdW5zID0gZ2Vv LT5ucl9jaG5scyAqIGdlby0+bnJfbHVuczsKKwlnZW8tPm51bV9jaCA9IGxlMTZfdG9fY3B1KGlk LT5udW1fZ3JwKTsKKwlnZW8tPm51bV9sdW4gPSBsZTE2X3RvX2NwdShpZC0+bnVtX3B1KTsKKwln ZW8tPmFsbF9sdW5zID0gZ2VvLT5udW1fY2ggKiBnZW8tPm51bV9sdW47CiAKLQlnZW8tPm5yX2No a3MgPSBsZTMyX3RvX2NwdShpZC0+bnVtX2Noayk7CisJZ2VvLT5udW1fY2hrID0gbGUzMl90b19j cHUoaWQtPm51bV9jaGspOwogCWdlby0+Y2xiYSA9IGxlMzJfdG9fY3B1KGlkLT5jbGJhKTsKIAot CWdlby0+YWxsX2NodW5rcyA9IGdlby0+YWxsX2x1bnMgKiBnZW8tPm5yX2Noa3M7CisJZ2VvLT5h bGxfY2h1bmtzID0gZ2VvLT5hbGxfbHVucyAqIGdlby0+bnVtX2NoazsKIAlnZW8tPnRvdGFsX3Nl Y3MgPSBnZW8tPmNsYmEgKiBnZW8tPmFsbF9jaHVua3M7CiAKIAlnZW8tPndzX21pbiA9IGxlMzJf 
dG9fY3B1KGlkLT53c19taW4pOwpAQCAtNDg3LDcgKzQ4Nyw3IEBAIHN0YXRpYyBpbnQgbnZtZV9u dm1fZ2V0X2JiX3RibChzdHJ1Y3QgbnZtX2RldiAqbnZtZGV2LCBzdHJ1Y3QgcHBhX2FkZHIgcHBh LAogCXN0cnVjdCBudm1lX2N0cmwgKmN0cmwgPSBucy0+Y3RybDsKIAlzdHJ1Y3QgbnZtZV9udm1f Y29tbWFuZCBjID0ge307CiAJc3RydWN0IG52bWVfbnZtX2JiX3RibCAqYmJfdGJsOwotCWludCBu cl9ibGtzID0gZ2VvLT5ucl9jaGtzICogZ2VvLT5udW1fcGxuOworCWludCBucl9ibGtzID0gZ2Vv LT5udW1fY2hrICogZ2VvLT5udW1fcGxuOwogCWludCB0YmxzeiA9IHNpemVvZihzdHJ1Y3QgbnZt ZV9udm1fYmJfdGJsKSArIG5yX2Jsa3M7CiAJaW50IHJldCA9IDA7CiAKQEAgLTUyOCw3ICs1Mjgs NyBAQCBzdGF0aWMgaW50IG52bWVfbnZtX2dldF9iYl90Ymwoc3RydWN0IG52bV9kZXYgKm52bWRl diwgc3RydWN0IHBwYV9hZGRyIHBwYSwKIAkJZ290byBvdXQ7CiAJfQogCi0JbWVtY3B5KGJsa3Ms IGJiX3RibC0+YmxrLCBnZW8tPm5yX2Noa3MgKiBnZW8tPm51bV9wbG4pOworCW1lbWNweShibGtz LCBiYl90YmwtPmJsaywgZ2VvLT5udW1fY2hrICogZ2VvLT5udW1fcGxuKTsKIG91dDoKIAlrZnJl ZShiYl90YmwpOwogCXJldHVybiByZXQ7CkBAIC05NzIsNyArOTcyLDcgQEAgc3RhdGljIHNzaXpl X3QgbnZtX2Rldl9hdHRyX3Nob3dfcHBhZihzdHJ1Y3QgbnZtX2FkZHJfZm9ybWF0XzEyICpwcGFm LAogCQkJCXBwYWYtPnBsbl9vZmZzZXQsIHBwYWYtPnBsbl9sZW4sCiAJCQkJcHBhZi0+YmxrX29m ZnNldCwgcHBhZi0+YmxrX2xlbiwKIAkJCQlwcGFmLT5wZ19vZmZzZXQsIHBwYWYtPnBnX2xlbiwK LQkJCQlwcGFmLT5zZWN0X29mZnNldCwgcHBhZi0+c2VjdF9sZW4pOworCQkJCXBwYWYtPnNlY19v ZmZzZXQsIHBwYWYtPnNlY19sZW4pOwogfQogCiBzdGF0aWMgc3NpemVfdCBudm1fZGV2X2F0dHJf c2hvd18xMihzdHJ1Y3QgZGV2aWNlICpkZXYsCkBAIC0xMDAyLDEzICsxMDAyLDEzIEBAIHN0YXRp YyBzc2l6ZV90IG52bV9kZXZfYXR0cl9zaG93XzEyKHN0cnVjdCBkZXZpY2UgKmRldiwKIAl9IGVs c2UgaWYgKHN0cmNtcChhdHRyLT5uYW1lLCAiZmxhc2hfbWVkaWFfdHlwZSIpID09IDApIHsKIAkJ cmV0dXJuIHNjbnByaW50ZihwYWdlLCBQQUdFX1NJWkUsICIldVxuIiwgZ2VvLT5mbXR5cGUpOwog CX0gZWxzZSBpZiAoc3RyY21wKGF0dHItPm5hbWUsICJudW1fY2hhbm5lbHMiKSA9PSAwKSB7Ci0J CXJldHVybiBzY25wcmludGYocGFnZSwgUEFHRV9TSVpFLCAiJXVcbiIsIGdlby0+bnJfY2hubHMp OworCQlyZXR1cm4gc2NucHJpbnRmKHBhZ2UsIFBBR0VfU0laRSwgIiV1XG4iLCBnZW8tPm51bV9j aCk7CiAJfSBlbHNlIGlmIChzdHJjbXAoYXR0ci0+bmFtZSwgIm51bV9sdW5zIikgPT0gMCkgewot 
CQlyZXR1cm4gc2NucHJpbnRmKHBhZ2UsIFBBR0VfU0laRSwgIiV1XG4iLCBnZW8tPm5yX2x1bnMp OworCQlyZXR1cm4gc2NucHJpbnRmKHBhZ2UsIFBBR0VfU0laRSwgIiV1XG4iLCBnZW8tPm51bV9s dW4pOwogCX0gZWxzZSBpZiAoc3RyY21wKGF0dHItPm5hbWUsICJudW1fcGxhbmVzIikgPT0gMCkg ewogCQlyZXR1cm4gc2NucHJpbnRmKHBhZ2UsIFBBR0VfU0laRSwgIiV1XG4iLCBnZW8tPm51bV9w bG4pOwogCX0gZWxzZSBpZiAoc3RyY21wKGF0dHItPm5hbWUsICJudW1fYmxvY2tzIikgPT0gMCkg ewkvKiB1MTYgKi8KLQkJcmV0dXJuIHNjbnByaW50ZihwYWdlLCBQQUdFX1NJWkUsICIldVxuIiwg Z2VvLT5ucl9jaGtzKTsKKwkJcmV0dXJuIHNjbnByaW50ZihwYWdlLCBQQUdFX1NJWkUsICIldVxu IiwgZ2VvLT5udW1fY2hrKTsKIAl9IGVsc2UgaWYgKHN0cmNtcChhdHRyLT5uYW1lLCAibnVtX3Bh Z2VzIikgPT0gMCkgewogCQlyZXR1cm4gc2NucHJpbnRmKHBhZ2UsIFBBR0VfU0laRSwgIiV1XG4i LCBnZW8tPm51bV9wZyk7CiAJfSBlbHNlIGlmIChzdHJjbXAoYXR0ci0+bmFtZSwgInBhZ2Vfc2l6 ZSIpID09IDApIHsKQEAgLTEwNTIsMTEgKzEwNTIsMTEgQEAgc3RhdGljIHNzaXplX3QgbnZtX2Rl dl9hdHRyX3Nob3dfMjAoc3RydWN0IGRldmljZSAqZGV2LAogCWF0dHIgPSAmZGF0dHItPmF0dHI7 CiAKIAlpZiAoc3RyY21wKGF0dHItPm5hbWUsICJncm91cHMiKSA9PSAwKSB7Ci0JCXJldHVybiBz Y25wcmludGYocGFnZSwgUEFHRV9TSVpFLCAiJXVcbiIsIGdlby0+bnJfY2hubHMpOworCQlyZXR1 cm4gc2NucHJpbnRmKHBhZ2UsIFBBR0VfU0laRSwgIiV1XG4iLCBnZW8tPm51bV9jaCk7CiAJfSBl bHNlIGlmIChzdHJjbXAoYXR0ci0+bmFtZSwgInB1bml0cyIpID09IDApIHsKLQkJcmV0dXJuIHNj bnByaW50ZihwYWdlLCBQQUdFX1NJWkUsICIldVxuIiwgZ2VvLT5ucl9sdW5zKTsKKwkJcmV0dXJu IHNjbnByaW50ZihwYWdlLCBQQUdFX1NJWkUsICIldVxuIiwgZ2VvLT5udW1fbHVuKTsKIAl9IGVs c2UgaWYgKHN0cmNtcChhdHRyLT5uYW1lLCAiY2h1bmtzIikgPT0gMCkgewotCQlyZXR1cm4gc2Nu cHJpbnRmKHBhZ2UsIFBBR0VfU0laRSwgIiV1XG4iLCBnZW8tPm5yX2Noa3MpOworCQlyZXR1cm4g c2NucHJpbnRmKHBhZ2UsIFBBR0VfU0laRSwgIiV1XG4iLCBnZW8tPm51bV9jaGspOwogCX0gZWxz ZSBpZiAoc3RyY21wKGF0dHItPm5hbWUsICJjbGJhIikgPT0gMCkgewogCQlyZXR1cm4gc2NucHJp bnRmKHBhZ2UsIFBBR0VfU0laRSwgIiV1XG4iLCBnZW8tPmNsYmEpOwogCX0gZWxzZSBpZiAoc3Ry Y21wKGF0dHItPm5hbWUsICJ3c19taW4iKSA9PSAwKSB7CmRpZmYgLS1naXQgYS9pbmNsdWRlL2xp bnV4L2xpZ2h0bnZtLmggYi9pbmNsdWRlL2xpbnV4L2xpZ2h0bnZtLmgKaW5kZXggMjEwMmIwOTJj 
N2ViLi40Zjg4ZTNkYzRkOGMgMTAwNjQ0Ci0tLSBhL2luY2x1ZGUvbGludXgvbGlnaHRudm0uaAor KysgYi9pbmNsdWRlL2xpbnV4L2xpZ2h0bnZtLmgKQEAgLTE2MywxNCArMTYzLDE0IEBAIHN0cnVj dCBudm1fYWRkcl9mb3JtYXRfMTIgewogCXU4CWJsa19sZW47CiAJdTgJcGdfbGVuOwogCXU4CXBs bl9sZW47Ci0JdTgJc2VjdF9sZW47CisJdTgJc2VjX2xlbjsKIAogCXU4CWNoX29mZnNldDsKIAl1 OAlsdW5fb2Zmc2V0OwogCXU4CWJsa19vZmZzZXQ7CiAJdTgJcGdfb2Zmc2V0OwogCXU4CXBsbl9v ZmZzZXQ7Ci0JdTgJc2VjdF9vZmZzZXQ7CisJdTgJc2VjX29mZnNldDsKIAogCXU2NAljaF9tYXNr OwogCXU2NAlsdW5fbWFzazsKQEAgLTI3NSw4ICsyNzUsOCBAQCBzdHJ1Y3QgbnZtX2dlbyB7CiAJ dTgJdmVyc2lvbjsKIAogCS8qIGluc3RhbmNlIHNwZWNpZmljIGdlb21ldHJ5ICovCi0JaW50IG5y X2NobmxzOwotCWludCBucl9sdW5zOwkJLyogcGVyIGNoYW5uZWwgKi8KKwlpbnQgbnVtX2NoOwor CWludCBudW1fbHVuOwkJLyogcGVyIGNoYW5uZWwgKi8KIAogCS8qIGNhbGN1bGF0ZWQgdmFsdWVz ICovCiAJaW50IGFsbF9sdW5zOwkJLyogYWNyb3NzIGNoYW5uZWxzICovCkBAIC0yODcsNyArMjg3 LDcgQEAgc3RydWN0IG52bV9nZW8gewogCXNlY3Rvcl90IHRvdGFsX3NlY3M7CS8qIGFjcm9zcyBj aGFubmVscyAqLwogCiAJLyogY2h1bmsgZ2VvbWV0cnkgKi8KLQl1MzIJbnJfY2hrczsJLyogY2h1 bmtzIHBlciBsdW4gKi8KKwl1MzIJbnVtX2NoazsJLyogY2h1bmtzIHBlciBsdW4gKi8KIAl1MzIJ Y2xiYTsJCS8qIHNlY3RvcnMgcGVyIGNodW5rICovCiAJdTE2CWNzZWNzOwkJLyogc2VjdG9yIHNp emUgKi8KIAl1MTYJc29zOwkJLyogb3V0LW9mLWJhbmQgYXJlYSBzaXplICovCkBAIC0zMjcsNyAr MzI3LDcgQEAgc3RydWN0IG52bV9nZW8gewogCXUzMgltcG9zOwogCiAJdTgJbnVtX3BsbjsKLQl1 OAlwbGFuZV9tb2RlOworCXU4CXBsbl9tb2RlOwogCXUxNgludW1fcGc7CiAJdTE2CWZwZ19zejsK IH07CkBAIC0zODUsNyArMzg1LDcgQEAgc3RhdGljIGlubGluZSBzdHJ1Y3QgcHBhX2FkZHIgZ2Vu ZXJpY190b19kZXZfYWRkcihzdHJ1Y3QgbnZtX3RndF9kZXYgKnRndF9kZXYsCiAJbC5wcGEgfD0g KCh1NjQpci5nLmJsaykgPDwgcHBhZi0+YmxrX29mZnNldDsKIAlsLnBwYSB8PSAoKHU2NClyLmcu cGcpIDw8IHBwYWYtPnBnX29mZnNldDsKIAlsLnBwYSB8PSAoKHU2NClyLmcucGwpIDw8IHBwYWYt PnBsbl9vZmZzZXQ7Ci0JbC5wcGEgfD0gKCh1NjQpci5nLnNlYykgPDwgcHBhZi0+c2VjdF9vZmZz ZXQ7CisJbC5wcGEgfD0gKCh1NjQpci5nLnNlYykgPDwgcHBhZi0+c2VjX29mZnNldDsKIAogCXJl dHVybiBsOwogfQpAQCAtNDA1LDcgKzQwNSw3IEBAIHN0YXRpYyBpbmxpbmUgc3RydWN0IHBwYV9h 
ZGRyIGRldl90b19nZW5lcmljX2FkZHIoc3RydWN0IG52bV90Z3RfZGV2ICp0Z3RfZGV2LAogCWwu Zy5ibGsgPSAoci5wcGEgJiBwcGFmLT5ibGtfbWFzaykgPj4gcHBhZi0+YmxrX29mZnNldDsKIAls LmcucGcgPSAoci5wcGEgJiBwcGFmLT5wZ19tYXNrKSA+PiBwcGFmLT5wZ19vZmZzZXQ7CiAJbC5n LnBsID0gKHIucHBhICYgcHBhZi0+cGxuX21hc2spID4+IHBwYWYtPnBsbl9vZmZzZXQ7Ci0JbC5n LnNlYyA9IChyLnBwYSAmIHBwYWYtPnNlY19tYXNrKSA+PiBwcGFmLT5zZWN0X29mZnNldDsKKwls Lmcuc2VjID0gKHIucHBhICYgcHBhZi0+c2VjX21hc2spID4+IHBwYWYtPnNlY19vZmZzZXQ7CiAK IAlyZXR1cm4gbDsKIH0KLS0gCjIuNy40CgoKX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f X19fX19fX19fX19fX19fX18KTGludXgtbnZtZSBtYWlsaW5nIGxpc3QKTGludXgtbnZtZUBsaXN0 cy5pbmZyYWRlYWQub3JnCmh0dHA6Ly9saXN0cy5pbmZyYWRlYWQub3JnL21haWxtYW4vbGlzdGlu Zm8vbGludXgtbnZtZQo= ^ permalink raw reply [flat|nested] 71+ messages in thread
* [PATCH 06/15] lightnvm: normalize geometry nomenclature
@ 2018-02-28 15:49 ` Javier González
0 siblings, 0 replies; 71+ messages in thread
From: Javier González @ 2018-02-28 15:49 UTC (permalink / raw)

Normalize the nomenclature used for naming channels, luns, chunks,
planes and sectors, as well as their derivations, in order to improve
readability.

Signed-off-by: Javier González <javier@cnexlabs.com>
---
 drivers/lightnvm/core.c       | 89 +++++++++++++++++++++----------------------
 drivers/lightnvm/pblk-core.c  |  4 +-
 drivers/lightnvm/pblk-init.c  | 30 +++++++--------
 drivers/lightnvm/pblk-sysfs.c |  4 +-
 drivers/lightnvm/pblk.h       | 20 +++++-----
 drivers/nvme/host/lightnvm.c  | 54 +++++++++++++-------------
 include/linux/lightnvm.h      | 16 ++++----
 7 files changed, 108 insertions(+), 109 deletions(-)

diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index c4f72fbad2bf..b869e3051265 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -36,13 +36,13 @@ static DECLARE_RWSEM(nvm_lock);
 /* Map between virtual and physical channel and lun */
 struct nvm_ch_map {
 	int ch_off;
-	int nr_luns;
+	int num_lun;
 	int *lun_offs;
 };
 
 struct nvm_dev_map {
 	struct nvm_ch_map *chnls;
-	int nr_chnls;
+	int num_ch;
 };
 
 static struct nvm_target *nvm_find_target(struct nvm_dev *dev, const char *name)
@@ -114,15 +114,15 @@ static void nvm_remove_tgt_dev(struct nvm_tgt_dev *tgt_dev, int clear)
 	struct nvm_dev_map *dev_map = tgt_dev->map;
 	int i, j;
 
-	for (i = 0; i < dev_map->nr_chnls; i++) {
+	for (i = 0; i < dev_map->num_ch; i++) {
 		struct nvm_ch_map *ch_map = &dev_map->chnls[i];
 		int *lun_offs = ch_map->lun_offs;
 		int ch = i + ch_map->ch_off;
 
 		if (clear) {
-			for (j = 0; j < ch_map->nr_luns; j++) {
+			for (j = 0; j < ch_map->num_lun; j++) {
 				int lun = j + lun_offs[j];
-				int lunid = (ch * dev->geo.nr_luns) + lun;
+				int lunid = (ch * dev->geo.num_lun) + lun;
 
 				WARN_ON(!test_and_clear_bit(lunid,
 						dev->lun_map));
@@ -147,47 +147,46 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev,
 	struct nvm_dev_map *dev_rmap = dev->rmap;
 	struct nvm_dev_map *dev_map;
 	struct ppa_addr *luns;
-	int nr_luns = lun_end - lun_begin + 1;
-	int luns_left = nr_luns;
-	int nr_chnls = nr_luns / dev->geo.nr_luns;
-	int nr_chnls_mod = nr_luns % dev->geo.nr_luns;
-	int bch = lun_begin / dev->geo.nr_luns;
-	int blun = lun_begin % dev->geo.nr_luns;
+	int num_lun = lun_end - lun_begin + 1;
+	int luns_left = num_lun;
+	int num_ch = num_lun / dev->geo.num_lun;
+	int num_ch_mod = num_lun % dev->geo.num_lun;
+	int bch = lun_begin / dev->geo.num_lun;
+	int blun = lun_begin % dev->geo.num_lun;
 	int lunid = 0;
 	int lun_balanced = 1;
-	int sec_per_lun, prev_nr_luns;
+	int sec_per_lun, prev_num_lun;
 	int i, j;
 
-	nr_chnls = (nr_chnls_mod == 0) ? nr_chnls : nr_chnls + 1;
+	num_ch = (num_ch_mod == 0) ? num_ch : num_ch + 1;
 
 	dev_map = kmalloc(sizeof(struct nvm_dev_map), GFP_KERNEL);
 	if (!dev_map)
 		goto err_dev;
 
-	dev_map->chnls = kcalloc(nr_chnls, sizeof(struct nvm_ch_map),
-								GFP_KERNEL);
+	dev_map->chnls = kcalloc(num_ch, sizeof(struct nvm_ch_map), GFP_KERNEL);
 	if (!dev_map->chnls)
 		goto err_chnls;
 
-	luns = kcalloc(nr_luns, sizeof(struct ppa_addr), GFP_KERNEL);
+	luns = kcalloc(num_lun, sizeof(struct ppa_addr), GFP_KERNEL);
 	if (!luns)
 		goto err_luns;
 
-	prev_nr_luns = (luns_left > dev->geo.nr_luns) ?
-					dev->geo.nr_luns : luns_left;
-	for (i = 0; i < nr_chnls; i++) {
+	prev_num_lun = (luns_left > dev->geo.num_lun) ?
+					dev->geo.num_lun : luns_left;
+	for (i = 0; i < num_ch; i++) {
 		struct nvm_ch_map *ch_rmap = &dev_rmap->chnls[i + bch];
 		int *lun_roffs = ch_rmap->lun_offs;
 		struct nvm_ch_map *ch_map = &dev_map->chnls[i];
 		int *lun_offs;
-		int luns_in_chnl = (luns_left > dev->geo.nr_luns) ?
-					dev->geo.nr_luns : luns_left;
+		int luns_in_chnl = (luns_left > dev->geo.num_lun) ?
+					dev->geo.num_lun : luns_left;
 
-		if (lun_balanced && prev_nr_luns != luns_in_chnl)
+		if (lun_balanced && prev_num_lun != luns_in_chnl)
 			lun_balanced = 0;
 
 		ch_map->ch_off = ch_rmap->ch_off = bch;
-		ch_map->nr_luns = luns_in_chnl;
+		ch_map->num_lun = luns_in_chnl;
 
 		lun_offs = kcalloc(luns_in_chnl, sizeof(int), GFP_KERNEL);
 		if (!lun_offs)
@@ -209,7 +208,7 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev,
 		luns_left -= luns_in_chnl;
 	}
 
-	dev_map->nr_chnls = nr_chnls;
+	dev_map->num_ch = num_ch;
 
 	tgt_dev = kmalloc(sizeof(struct nvm_tgt_dev), GFP_KERNEL);
 	if (!tgt_dev)
@@ -219,15 +218,15 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev,
 	memcpy(&tgt_dev->geo, &dev->geo, sizeof(struct nvm_geo));
 
 	/* Target device only owns a portion of the physical device */
-	tgt_dev->geo.nr_chnls = nr_chnls;
-	tgt_dev->geo.nr_luns = (lun_balanced) ? prev_nr_luns : -1;
-	tgt_dev->geo.all_luns = nr_luns;
-	tgt_dev->geo.all_chunks = nr_luns * dev->geo.nr_chks;
+	tgt_dev->geo.num_ch = num_ch;
+	tgt_dev->geo.num_lun = (lun_balanced) ? prev_num_lun : -1;
+	tgt_dev->geo.all_luns = num_lun;
+	tgt_dev->geo.all_chunks = num_lun * dev->geo.num_chk;
 
 	tgt_dev->geo.op = op;
 
-	sec_per_lun = dev->geo.clba * dev->geo.nr_chks;
-	tgt_dev->geo.total_secs = nr_luns * sec_per_lun;
+	sec_per_lun = dev->geo.clba * dev->geo.num_chk;
+	tgt_dev->geo.total_secs = num_lun * sec_per_lun;
 
 	tgt_dev->q = dev->q;
 	tgt_dev->map = dev_map;
@@ -505,20 +504,20 @@ static int nvm_register_map(struct nvm_dev *dev)
 	if (!rmap)
 		goto err_rmap;
 
-	rmap->chnls = kcalloc(dev->geo.nr_chnls, sizeof(struct nvm_ch_map),
+	rmap->chnls = kcalloc(dev->geo.num_ch, sizeof(struct nvm_ch_map),
 								GFP_KERNEL);
 	if (!rmap->chnls)
 		goto err_chnls;
 
-	for (i = 0; i < dev->geo.nr_chnls; i++) {
+	for (i = 0; i < dev->geo.num_ch; i++) {
 		struct nvm_ch_map *ch_rmap;
 		int *lun_roffs;
-		int luns_in_chnl = dev->geo.nr_luns;
+		int luns_in_chnl = dev->geo.num_lun;
 
 		ch_rmap = &rmap->chnls[i];
 
 		ch_rmap->ch_off = -1;
-		ch_rmap->nr_luns = luns_in_chnl;
+		ch_rmap->num_lun = luns_in_chnl;
 
 		lun_roffs = kcalloc(luns_in_chnl, sizeof(int), GFP_KERNEL);
 		if (!lun_roffs)
@@ -547,7 +546,7 @@ static void nvm_unregister_map(struct nvm_dev *dev)
 	struct nvm_dev_map *rmap = dev->rmap;
 	int i;
 
-	for (i = 0; i < dev->geo.nr_chnls; i++)
+	for (i = 0; i < dev->geo.num_ch; i++)
 		kfree(rmap->chnls[i].lun_offs);
 
 	kfree(rmap->chnls);
@@ -676,7 +675,7 @@ static int nvm_set_rqd_ppalist(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd,
 	int i, plane_cnt, pl_idx;
 	struct ppa_addr ppa;
 
-	if (geo->plane_mode == NVM_PLANE_SINGLE && nr_ppas == 1) {
+	if (geo->pln_mode == NVM_PLANE_SINGLE && nr_ppas == 1) {
 		rqd->nr_ppas = nr_ppas;
 		rqd->ppa_addr = ppas[0];
 
@@ -690,7 +689,7 @@ static int nvm_set_rqd_ppalist(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd,
 		return -ENOMEM;
 	}
 
-	plane_cnt = geo->plane_mode;
+	plane_cnt = geo->pln_mode;
 	rqd->nr_ppas *= plane_cnt;
 
 	for (i = 0; i < nr_ppas; i++) {
@@ -808,15 +807,15 @@ int nvm_bb_tbl_fold(struct nvm_dev *dev, u8 *blks, int nr_blks)
 	struct nvm_geo *geo = &dev->geo;
 	int blk, offset, pl, blktype;
 
-	if (nr_blks != geo->nr_chks * geo->plane_mode)
+	if (nr_blks != geo->num_chk * geo->pln_mode)
 		return -EINVAL;
 
-	for (blk = 0; blk < geo->nr_chks; blk++) {
-		offset = blk * geo->plane_mode;
+	for (blk = 0; blk < geo->num_chk; blk++) {
+		offset = blk * geo->pln_mode;
 		blktype = blks[offset];
 
 		/* Bad blocks on any planes take precedence over other types */
-		for (pl = 0; pl < geo->plane_mode; pl++) {
+		for (pl = 0; pl < geo->pln_mode; pl++) {
 			if (blks[offset + pl] &
 					(NVM_BLK_T_BAD|NVM_BLK_T_GRWN_BAD)) {
 				blktype = blks[offset + pl];
@@ -827,7 +826,7 @@ int nvm_bb_tbl_fold(struct nvm_dev *dev, u8 *blks, int nr_blks)
 		blks[blk] = blktype;
 	}
 
-	return geo->nr_chks;
+	return geo->num_chk;
 }
 EXPORT_SYMBOL(nvm_bb_tbl_fold);
 
@@ -901,9 +900,9 @@ static int nvm_init(struct nvm_dev *dev)
 	}
 
 	pr_info("nvm: registered %s [%u/%u/%u/%u/%u]\n",
-			dev->name, geo->ws_min, geo->ws_opt,
-			geo->nr_chks, geo->all_luns,
-			geo->nr_chnls);
+			dev->name, dev->geo.ws_min, dev->geo.ws_opt,
+			dev->geo.num_chk, dev->geo.all_luns,
+			dev->geo.num_ch);
 	return 0;
 err:
 	pr_err("nvm: failed to initialize nvm\n");
diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
index 169589ddd457..7d0bd33f11d9 100644
--- a/drivers/lightnvm/pblk-core.c
+++ b/drivers/lightnvm/pblk-core.c
@@ -1745,10 +1745,10 @@ void pblk_up_rq(struct pblk *pblk, struct ppa_addr *ppa_list, int nr_ppas,
 	struct nvm_tgt_dev *dev = pblk->dev;
 	struct nvm_geo *geo = &dev->geo;
 	struct pblk_lun *rlun;
-	int nr_luns = geo->all_luns;
+	int num_lun = geo->all_luns;
 	int bit = -1;
 
-	while ((bit = find_next_bit(lun_bitmap, nr_luns, bit + 1)) < nr_luns) {
+	while ((bit = find_next_bit(lun_bitmap, num_lun, bit + 1)) < num_lun) {
 		rlun = &pblk->luns[bit];
 		up(&rlun->wr_sem);
 	}
diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
index 9b5ee05c3028..11424beb214c 100644
--- a/drivers/lightnvm/pblk-init.c
+++ b/drivers/lightnvm/pblk-init.c
@@ -162,15 +162,15 @@ static int pblk_set_addrf_12(struct nvm_geo *geo,
 	int power_len;
 
 	/* Re-calculate channel and lun format to adapt to configuration */
-	power_len = get_count_order(geo->nr_chnls);
-	if (1 << power_len != geo->nr_chnls) {
+	power_len = get_count_order(geo->num_ch);
+	if (1 << power_len != geo->num_ch) {
 		pr_err("pblk: supports only power-of-two channel config.\n");
 		return -EINVAL;
 	}
 	dst->ch_len = power_len;
 
-	power_len = get_count_order(geo->nr_luns);
-	if (1 << power_len != geo->nr_luns) {
+	power_len = get_count_order(geo->num_lun);
+	if (1 << power_len != geo->num_lun) {
 		pr_err("pblk: supports only power-of-two LUN config.\n");
 		return -EINVAL;
 	}
@@ -179,16 +179,16 @@ static int pblk_set_addrf_12(struct nvm_geo *geo,
 	dst->blk_len = src->blk_len;
 	dst->pg_len = src->pg_len;
 	dst->pln_len = src->pln_len;
-	dst->sect_len = src->sect_len;
+	dst->sec_len = src->sec_len;
 
-	dst->sect_offset = 0;
-	dst->pln_offset = dst->sect_len;
+	dst->sec_offset = 0;
+	dst->pln_offset = dst->sec_len;
 	dst->ch_offset = dst->pln_offset + dst->pln_len;
 	dst->lun_offset = dst->ch_offset + dst->ch_len;
 	dst->pg_offset = dst->lun_offset + dst->lun_len;
 	dst->blk_offset = dst->pg_offset + dst->pg_len;
 
-	dst->sec_mask = ((1ULL << dst->sect_len) - 1) << dst->sect_offset;
+	dst->sec_mask = ((1ULL << dst->sec_len) - 1) << dst->sec_offset;
 	dst->pln_mask = ((1ULL << dst->pln_len) - 1) << dst->pln_offset;
 	dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset;
 	dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset;
@@ -448,7 +448,7 @@ static void *pblk_bb_get_log(struct pblk *pblk)
 	int i, nr_blks, blk_per_lun;
 	int ret;
 
-	blk_per_lun = geo->nr_chks * geo->plane_mode;
+	blk_per_lun = geo->num_chk * geo->pln_mode;
 	nr_blks = blk_per_lun * geo->all_luns;
 
 	log = kmalloc(nr_blks, GFP_KERNEL);
@@ -475,7 +475,7 @@ static int pblk_bb_line(struct pblk *pblk, struct pblk_line *line,
 	struct nvm_tgt_dev *dev = pblk->dev;
 	struct nvm_geo *geo = &dev->geo;
 	int i, bb_cnt = 0;
-	int blk_per_lun = geo->nr_chks * geo->plane_mode;
+	int blk_per_lun = geo->num_chk * geo->pln_mode;
 
 	for (i = 0; i < blk_per_line; i++) {
 		struct pblk_lun *rlun = &pblk->luns[i];
@@ -499,7 +499,7 @@ static int pblk_luns_init(struct pblk *pblk, struct ppa_addr *luns)
 	int i;
 
 	/* TODO: Implement unbalanced LUN support */
-	if (geo->nr_luns < 0) {
+	if (geo->num_lun < 0) {
 		pr_err("pblk: unbalanced LUN config.\n");
 		return -EINVAL;
 	}
@@ -511,9 +511,9 @@ static int pblk_luns_init(struct pblk *pblk, struct ppa_addr *luns)
 
 	for (i = 0; i < geo->all_luns; i++) {
 		/* Stripe across channels */
-		int ch = i % geo->nr_chnls;
-		int lun_raw = i / geo->nr_chnls;
-		int lunid = lun_raw + ch * geo->nr_luns;
+		int ch = i % geo->num_ch;
+		int lun_raw = i / geo->num_ch;
+		int lunid = lun_raw + ch * geo->num_lun;
 
 		rlun = &pblk->luns[i];
 		rlun->bppa = luns[lunid];
@@ -740,7 +740,7 @@ static int pblk_lines_init(struct pblk *pblk)
 		return -EINVAL;
 	}
 
-	l_mg->nr_lines = geo->nr_chks;
+	l_mg->nr_lines = geo->num_chk;
 	l_mg->log_line = l_mg->data_line = NULL;
 	l_mg->l_seq_nr = l_mg->d_seq_nr = 0;
 	l_mg->nr_free_lines = 0;
diff --git a/drivers/lightnvm/pblk-sysfs.c b/drivers/lightnvm/pblk-sysfs.c
index 33199c6af267..462a787893d5 100644
--- a/drivers/lightnvm/pblk-sysfs.c
+++ b/drivers/lightnvm/pblk-sysfs.c
@@ -128,7 +128,7 @@ static ssize_t pblk_sysfs_ppaf(struct pblk *pblk, char *page)
 			ppaf->blk_offset, ppaf->blk_len,
 			ppaf->pg_offset, ppaf->pg_len,
 			ppaf->pln_offset, ppaf->pln_len,
-			ppaf->sect_offset, ppaf->sect_len);
+			ppaf->sec_offset, ppaf->sec_len);
 
 	sz += snprintf(page + sz, PAGE_SIZE - sz,
 		"device:ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n",
@@ -137,7 +137,7 @@ static ssize_t pblk_sysfs_ppaf(struct pblk *pblk, char *page)
 			geo_ppaf->blk_offset, geo_ppaf->blk_len,
 			geo_ppaf->pg_offset, geo_ppaf->pg_len,
 			geo_ppaf->pln_offset, geo_ppaf->pln_len,
-			geo_ppaf->sect_offset, geo_ppaf->sect_len);
+			geo_ppaf->sec_offset, geo_ppaf->sec_len);
 
 	return sz;
 }
diff --git a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk.h
index b29c1e6698aa..bae2cc758de8 100644
--- a/drivers/lightnvm/pblk.h
+++ b/drivers/lightnvm/pblk.h
@@ -941,7 +941,7 @@ static inline int pblk_ppa_to_line(struct ppa_addr p)
 
 static inline int pblk_ppa_to_pos(struct nvm_geo *geo, struct ppa_addr p)
 {
-	return p.g.lun * geo->nr_chnls + p.g.ch;
+	return p.g.lun * geo->num_ch + p.g.ch;
 }
 
 static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr,
@@ -957,7 +957,7 @@ static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr,
 	ppa.g.lun = (paddr & ppaf->lun_mask) >> ppaf->lun_offset;
 	ppa.g.ch = (paddr & ppaf->ch_mask) >> ppaf->ch_offset;
 	ppa.g.pl = (paddr & ppaf->pln_mask) >> ppaf->pln_offset;
-	ppa.g.sec = (paddr & ppaf->sec_mask) >> ppaf->sect_offset;
+	ppa.g.sec = (paddr & ppaf->sec_mask) >> ppaf->sec_offset;
 
 	return ppa;
 }
@@ -973,7 +973,7 @@ static inline u64 pblk_dev_ppa_to_line_addr(struct pblk *pblk,
 	paddr |= (u64)p.g.lun << ppaf->lun_offset;
 	paddr |= (u64)p.g.pg << ppaf->pg_offset;
 	paddr |= (u64)p.g.pl << ppaf->pln_offset;
-	paddr |= (u64)p.g.sec << ppaf->sect_offset;
+	paddr |= (u64)p.g.sec << ppaf->sec_offset;
 
 	return paddr;
 }
@@ -998,7 +998,7 @@ static inline struct ppa_addr pblk_ppa32_to_ppa64(struct pblk *pblk, u32 ppa32)
 		ppa64.g.blk = (ppa32 & ppaf->blk_mask) >> ppaf->blk_offset;
 		ppa64.g.pg = (ppa32 & ppaf->pg_mask) >> ppaf->pg_offset;
 		ppa64.g.pl = (ppa32 & ppaf->pln_mask) >> ppaf->pln_offset;
-		ppa64.g.sec = (ppa32 & ppaf->sec_mask) >> ppaf->sect_offset;
+		ppa64.g.sec = (ppa32 & ppaf->sec_mask) >> ppaf->sec_offset;
 	}
 
 	return ppa64;
@@ -1022,7 +1022,7 @@ static inline u32 pblk_ppa64_to_ppa32(struct pblk *pblk, struct ppa_addr ppa64)
 		ppa32 |= ppa64.g.blk << ppaf->blk_offset;
 		ppa32 |= ppa64.g.pg << ppaf->pg_offset;
 		ppa32 |= ppa64.g.pl << ppaf->pln_offset;
-		ppa32 |= ppa64.g.sec << ppaf->sect_offset;
+		ppa32 |= ppa64.g.sec << ppaf->sec_offset;
 	}
 
 	return ppa32;
@@ -1140,7 +1140,7 @@ static inline int pblk_set_progr_mode(struct pblk *pblk, int type)
 	struct nvm_geo *geo = &dev->geo;
 	int flags;
 
-	flags = geo->plane_mode >> 1;
+
flags = geo->pln_mode >> 1; if (type == PBLK_WRITE) flags |= NVM_IO_SCRAMBLE_ENABLE; @@ -1161,7 +1161,7 @@ static inline int pblk_set_read_mode(struct pblk *pblk, int type) flags = NVM_IO_SUSPEND | NVM_IO_SCRAMBLE_ENABLE; if (type == PBLK_READ_SEQUENTIAL) - flags |= geo->plane_mode >> 1; + flags |= geo->pln_mode >> 1; return flags; } @@ -1214,10 +1214,10 @@ static inline int pblk_boundary_ppa_checks(struct nvm_tgt_dev *tgt_dev, ppa = &ppas[i]; if (!ppa->c.is_cached && - ppa->g.ch < geo->nr_chnls && - ppa->g.lun < geo->nr_luns && + ppa->g.ch < geo->num_ch && + ppa->g.lun < geo->num_lun && ppa->g.pl < geo->num_pln && - ppa->g.blk < geo->nr_chks && + ppa->g.blk < geo->num_chk && ppa->g.pg < geo->num_pg && ppa->g.sec < geo->ws_min) continue; diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c index afb5f883f8c8..f7135659f918 100644 --- a/drivers/nvme/host/lightnvm.c +++ b/drivers/nvme/host/lightnvm.c @@ -262,21 +262,21 @@ static void nvme_nvm_set_addr_12(struct nvm_addr_format_12 *dst, dst->blk_len = src->blk_len; dst->pg_len = src->pg_len; dst->pln_len = src->pln_len; - dst->sect_len = src->sec_len; + dst->sec_len = src->sec_len; dst->ch_offset = src->ch_offset; dst->lun_offset = src->lun_offset; dst->blk_offset = src->blk_offset; dst->pg_offset = src->pg_offset; dst->pln_offset = src->pln_offset; - dst->sect_offset = src->sec_offset; + dst->sec_offset = src->sec_offset; dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset; dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset; dst->blk_mask = ((1ULL << dst->blk_len) - 1) << dst->blk_offset; dst->pg_mask = ((1ULL << dst->pg_len) - 1) << dst->pg_offset; dst->pln_mask = ((1ULL << dst->pln_len) - 1) << dst->pln_offset; - dst->sec_mask = ((1ULL << dst->sect_len) - 1) << dst->sect_offset; + dst->sec_mask = ((1ULL << dst->sec_len) - 1) << dst->sec_offset; } static int nvme_nvm_setup_12(struct nvme_nvm_id12 *id, @@ -302,11 +302,11 @@ static int nvme_nvm_setup_12(struct nvme_nvm_id12 
*id, /* Set compacted version for upper layers */ geo->version = NVM_OCSSD_SPEC_12; - geo->nr_chnls = src->num_ch; - geo->nr_luns = src->num_lun; - geo->all_luns = geo->nr_chnls * geo->nr_luns; + geo->num_ch = src->num_ch; + geo->num_lun = src->num_lun; + geo->all_luns = geo->num_ch * geo->num_lun; - geo->nr_chks = le16_to_cpu(src->num_chk); + geo->num_chk = le16_to_cpu(src->num_chk); geo->csecs = le16_to_cpu(src->csecs); geo->sos = le16_to_cpu(src->sos); @@ -316,7 +316,7 @@ static int nvme_nvm_setup_12(struct nvme_nvm_id12 *id, sec_per_pl = sec_per_pg * src->num_pln; geo->clba = sec_per_pl * pg_per_blk; - geo->all_chunks = geo->all_luns * geo->nr_chks; + geo->all_chunks = geo->all_luns * geo->num_chk; geo->total_secs = geo->clba * geo->all_chunks; geo->ws_min = sec_per_pg; @@ -327,8 +327,8 @@ static int nvme_nvm_setup_12(struct nvme_nvm_id12 *id, * unspecified in 1.2. Users of 1.2 must be aware of this and eventually * specify these values through a quirk if restrictions apply. */ - geo->maxoc = geo->all_luns * geo->nr_chks; - geo->maxocpu = geo->nr_chks; + geo->maxoc = geo->all_luns * geo->num_chk; + geo->maxocpu = geo->num_chk; geo->cap = le32_to_cpu(src->mccap); @@ -350,13 +350,13 @@ static int nvme_nvm_setup_12(struct nvme_nvm_id12 *id, geo->cpar = le16_to_cpu(src->cpar); geo->mpos = le32_to_cpu(src->mpos); - geo->plane_mode = NVM_PLANE_SINGLE; + geo->pln_mode = NVM_PLANE_SINGLE; if (geo->mpos & 0x020202) { - geo->plane_mode = NVM_PLANE_DOUBLE; + geo->pln_mode = NVM_PLANE_DOUBLE; geo->ws_opt <<= 1; } else if (geo->mpos & 0x040404) { - geo->plane_mode = NVM_PLANE_QUAD; + geo->pln_mode = NVM_PLANE_QUAD; geo->ws_opt <<= 2; } @@ -404,14 +404,14 @@ static int nvme_nvm_setup_20(struct nvme_nvm_id20 *id, return -EINVAL; } - geo->nr_chnls = le16_to_cpu(id->num_grp); - geo->nr_luns = le16_to_cpu(id->num_pu); - geo->all_luns = geo->nr_chnls * geo->nr_luns; + geo->num_ch = le16_to_cpu(id->num_grp); + geo->num_lun = le16_to_cpu(id->num_pu); + geo->all_luns = geo->num_ch * 
geo->num_lun; - geo->nr_chks = le32_to_cpu(id->num_chk); + geo->num_chk = le32_to_cpu(id->num_chk); geo->clba = le32_to_cpu(id->clba); - geo->all_chunks = geo->all_luns * geo->nr_chks; + geo->all_chunks = geo->all_luns * geo->num_chk; geo->total_secs = geo->clba * geo->all_chunks; geo->ws_min = le32_to_cpu(id->ws_min); @@ -487,7 +487,7 @@ static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa, struct nvme_ctrl *ctrl = ns->ctrl; struct nvme_nvm_command c = {}; struct nvme_nvm_bb_tbl *bb_tbl; - int nr_blks = geo->nr_chks * geo->num_pln; + int nr_blks = geo->num_chk * geo->num_pln; int tblsz = sizeof(struct nvme_nvm_bb_tbl) + nr_blks; int ret = 0; @@ -528,7 +528,7 @@ static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa, goto out; } - memcpy(blks, bb_tbl->blk, geo->nr_chks * geo->num_pln); + memcpy(blks, bb_tbl->blk, geo->num_chk * geo->num_pln); out: kfree(bb_tbl); return ret; @@ -972,7 +972,7 @@ static ssize_t nvm_dev_attr_show_ppaf(struct nvm_addr_format_12 *ppaf, ppaf->pln_offset, ppaf->pln_len, ppaf->blk_offset, ppaf->blk_len, ppaf->pg_offset, ppaf->pg_len, - ppaf->sect_offset, ppaf->sect_len); + ppaf->sec_offset, ppaf->sec_len); } static ssize_t nvm_dev_attr_show_12(struct device *dev, @@ -1002,13 +1002,13 @@ static ssize_t nvm_dev_attr_show_12(struct device *dev, } else if (strcmp(attr->name, "flash_media_type") == 0) { return scnprintf(page, PAGE_SIZE, "%u\n", geo->fmtype); } else if (strcmp(attr->name, "num_channels") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chnls); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->num_ch); } else if (strcmp(attr->name, "num_luns") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_luns); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->num_lun); } else if (strcmp(attr->name, "num_planes") == 0) { return scnprintf(page, PAGE_SIZE, "%u\n", geo->num_pln); } else if (strcmp(attr->name, "num_blocks") == 0) { /* u16 */ - return scnprintf(page, PAGE_SIZE, "%u\n", 
geo->nr_chks); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->num_chk); } else if (strcmp(attr->name, "num_pages") == 0) { return scnprintf(page, PAGE_SIZE, "%u\n", geo->num_pg); } else if (strcmp(attr->name, "page_size") == 0) { @@ -1052,11 +1052,11 @@ static ssize_t nvm_dev_attr_show_20(struct device *dev, attr = &dattr->attr; if (strcmp(attr->name, "groups") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chnls); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->num_ch); } else if (strcmp(attr->name, "punits") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_luns); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->num_lun); } else if (strcmp(attr->name, "chunks") == 0) { - return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chks); + return scnprintf(page, PAGE_SIZE, "%u\n", geo->num_chk); } else if (strcmp(attr->name, "clba") == 0) { return scnprintf(page, PAGE_SIZE, "%u\n", geo->clba); } else if (strcmp(attr->name, "ws_min") == 0) { diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h index 2102b092c7eb..4f88e3dc4d8c 100644 --- a/include/linux/lightnvm.h +++ b/include/linux/lightnvm.h @@ -163,14 +163,14 @@ struct nvm_addr_format_12 { u8 blk_len; u8 pg_len; u8 pln_len; - u8 sect_len; + u8 sec_len; u8 ch_offset; u8 lun_offset; u8 blk_offset; u8 pg_offset; u8 pln_offset; - u8 sect_offset; + u8 sec_offset; u64 ch_mask; u64 lun_mask; @@ -275,8 +275,8 @@ struct nvm_geo { u8 version; /* instance specific geometry */ - int nr_chnls; - int nr_luns; /* per channel */ + int num_ch; + int num_lun; /* per channel */ /* calculated values */ int all_luns; /* across channels */ @@ -287,7 +287,7 @@ struct nvm_geo { sector_t total_secs; /* across channels */ /* chunk geometry */ - u32 nr_chks; /* chunks per lun */ + u32 num_chk; /* chunks per lun */ u32 clba; /* sectors per chunk */ u16 csecs; /* sector size */ u16 sos; /* out-of-band area size */ @@ -327,7 +327,7 @@ struct nvm_geo { u32 mpos; u8 num_pln; - u8 plane_mode; + u8 pln_mode; u16 
num_pg; u16 fpg_sz; }; @@ -385,7 +385,7 @@ static inline struct ppa_addr generic_to_dev_addr(struct nvm_tgt_dev *tgt_dev, l.ppa |= ((u64)r.g.blk) << ppaf->blk_offset; l.ppa |= ((u64)r.g.pg) << ppaf->pg_offset; l.ppa |= ((u64)r.g.pl) << ppaf->pln_offset; - l.ppa |= ((u64)r.g.sec) << ppaf->sect_offset; + l.ppa |= ((u64)r.g.sec) << ppaf->sec_offset; return l; } @@ -405,7 +405,7 @@ static inline struct ppa_addr dev_to_generic_addr(struct nvm_tgt_dev *tgt_dev, l.g.blk = (r.ppa & ppaf->blk_mask) >> ppaf->blk_offset; l.g.pg = (r.ppa & ppaf->pg_mask) >> ppaf->pg_offset; l.g.pl = (r.ppa & ppaf->pln_mask) >> ppaf->pln_offset; - l.g.sec = (r.ppa & ppaf->sec_mask) >> ppaf->sect_offset; + l.g.sec = (r.ppa & ppaf->sec_mask) >> ppaf->sec_offset; return l; } -- 2.7.4 ^ permalink raw reply related [flat|nested] 71+ messages in thread
* [PATCH 06/15] lightnvm: normalize geometry nomenclature
From: Javier González @ 2018-02-28 15:49 UTC
To: mb; +Cc: linux-block, linux-kernel, linux-nvme, Javier González

Normalize the nomenclature used to name channels, luns, chunks, planes
and sectors, as well as the values derived from them, in order to
improve readability.

Signed-off-by: Javier González <javier@cnexlabs.com>
---
 drivers/lightnvm/core.c       | 89 +++++++++++++++++++++----------------------
 drivers/lightnvm/pblk-core.c  |  4 +-
 drivers/lightnvm/pblk-init.c  | 30 +++++++--------
 drivers/lightnvm/pblk-sysfs.c |  4 +-
 drivers/lightnvm/pblk.h       | 20 +++++-----
 drivers/nvme/host/lightnvm.c  | 54 +++++++++++++-------------
 include/linux/lightnvm.h      | 16 ++++----
 7 files changed, 108 insertions(+), 109 deletions(-)

diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index c4f72fbad2bf..b869e3051265 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -36,13 +36,13 @@ static DECLARE_RWSEM(nvm_lock);
 /* Map between virtual and physical channel and lun */
 struct nvm_ch_map {
 	int ch_off;
-	int nr_luns;
+	int num_lun;
 	int *lun_offs;
 };

 struct nvm_dev_map {
 	struct nvm_ch_map *chnls;
-	int nr_chnls;
+	int num_ch;
 };

 static struct nvm_target *nvm_find_target(struct nvm_dev *dev, const char *name)
@@ -114,15 +114,15 @@ static void nvm_remove_tgt_dev(struct nvm_tgt_dev *tgt_dev, int clear)
 	struct nvm_dev_map *dev_map = tgt_dev->map;
 	int i, j;

-	for (i = 0; i < dev_map->nr_chnls; i++) {
+	for (i = 0; i < dev_map->num_ch; i++) {
 		struct nvm_ch_map *ch_map = &dev_map->chnls[i];
 		int *lun_offs = ch_map->lun_offs;
 		int ch = i + ch_map->ch_off;

 		if (clear) {
-			for (j = 0; j < ch_map->nr_luns; j++) {
+			for (j = 0; j < ch_map->num_lun; j++) {
 				int lun = j + lun_offs[j];
-				int lunid = (ch * dev->geo.nr_luns) + lun;
+				int lunid = (ch * dev->geo.num_lun) + lun;

 				WARN_ON(!test_and_clear_bit(lunid, dev->lun_map));
@@ -147,47 +147,46 @@ static
struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev, struct nvm_dev_map *dev_rmap = dev->rmap; struct nvm_dev_map *dev_map; struct ppa_addr *luns; - int nr_luns = lun_end - lun_begin + 1; - int luns_left = nr_luns; - int nr_chnls = nr_luns / dev->geo.nr_luns; - int nr_chnls_mod = nr_luns % dev->geo.nr_luns; - int bch = lun_begin / dev->geo.nr_luns; - int blun = lun_begin % dev->geo.nr_luns; + int num_lun = lun_end - lun_begin + 1; + int luns_left = num_lun; + int num_ch = num_lun / dev->geo.num_lun; + int num_ch_mod = num_lun % dev->geo.num_lun; + int bch = lun_begin / dev->geo.num_lun; + int blun = lun_begin % dev->geo.num_lun; int lunid = 0; int lun_balanced = 1; - int sec_per_lun, prev_nr_luns; + int sec_per_lun, prev_num_lun; int i, j; - nr_chnls = (nr_chnls_mod == 0) ? nr_chnls : nr_chnls + 1; + num_ch = (num_ch_mod == 0) ? num_ch : num_ch + 1; dev_map = kmalloc(sizeof(struct nvm_dev_map), GFP_KERNEL); if (!dev_map) goto err_dev; - dev_map->chnls = kcalloc(nr_chnls, sizeof(struct nvm_ch_map), - GFP_KERNEL); + dev_map->chnls = kcalloc(num_ch, sizeof(struct nvm_ch_map), GFP_KERNEL); if (!dev_map->chnls) goto err_chnls; - luns = kcalloc(nr_luns, sizeof(struct ppa_addr), GFP_KERNEL); + luns = kcalloc(num_lun, sizeof(struct ppa_addr), GFP_KERNEL); if (!luns) goto err_luns; - prev_nr_luns = (luns_left > dev->geo.nr_luns) ? - dev->geo.nr_luns : luns_left; - for (i = 0; i < nr_chnls; i++) { + prev_num_lun = (luns_left > dev->geo.num_lun) ? + dev->geo.num_lun : luns_left; + for (i = 0; i < num_ch; i++) { struct nvm_ch_map *ch_rmap = &dev_rmap->chnls[i + bch]; int *lun_roffs = ch_rmap->lun_offs; struct nvm_ch_map *ch_map = &dev_map->chnls[i]; int *lun_offs; - int luns_in_chnl = (luns_left > dev->geo.nr_luns) ? - dev->geo.nr_luns : luns_left; + int luns_in_chnl = (luns_left > dev->geo.num_lun) ? 
+ dev->geo.num_lun : luns_left; - if (lun_balanced && prev_nr_luns != luns_in_chnl) + if (lun_balanced && prev_num_lun != luns_in_chnl) lun_balanced = 0; ch_map->ch_off = ch_rmap->ch_off = bch; - ch_map->nr_luns = luns_in_chnl; + ch_map->num_lun = luns_in_chnl; lun_offs = kcalloc(luns_in_chnl, sizeof(int), GFP_KERNEL); if (!lun_offs) @@ -209,7 +208,7 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev, luns_left -= luns_in_chnl; } - dev_map->nr_chnls = nr_chnls; + dev_map->num_ch = num_ch; tgt_dev = kmalloc(sizeof(struct nvm_tgt_dev), GFP_KERNEL); if (!tgt_dev) @@ -219,15 +218,15 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev, memcpy(&tgt_dev->geo, &dev->geo, sizeof(struct nvm_geo)); /* Target device only owns a portion of the physical device */ - tgt_dev->geo.nr_chnls = nr_chnls; - tgt_dev->geo.nr_luns = (lun_balanced) ? prev_nr_luns : -1; - tgt_dev->geo.all_luns = nr_luns; - tgt_dev->geo.all_chunks = nr_luns * dev->geo.nr_chks; + tgt_dev->geo.num_ch = num_ch; + tgt_dev->geo.num_lun = (lun_balanced) ? 
prev_num_lun : -1; + tgt_dev->geo.all_luns = num_lun; + tgt_dev->geo.all_chunks = num_lun * dev->geo.num_chk; tgt_dev->geo.op = op; - sec_per_lun = dev->geo.clba * dev->geo.nr_chks; - tgt_dev->geo.total_secs = nr_luns * sec_per_lun; + sec_per_lun = dev->geo.clba * dev->geo.num_chk; + tgt_dev->geo.total_secs = num_lun * sec_per_lun; tgt_dev->q = dev->q; tgt_dev->map = dev_map; @@ -505,20 +504,20 @@ static int nvm_register_map(struct nvm_dev *dev) if (!rmap) goto err_rmap; - rmap->chnls = kcalloc(dev->geo.nr_chnls, sizeof(struct nvm_ch_map), + rmap->chnls = kcalloc(dev->geo.num_ch, sizeof(struct nvm_ch_map), GFP_KERNEL); if (!rmap->chnls) goto err_chnls; - for (i = 0; i < dev->geo.nr_chnls; i++) { + for (i = 0; i < dev->geo.num_ch; i++) { struct nvm_ch_map *ch_rmap; int *lun_roffs; - int luns_in_chnl = dev->geo.nr_luns; + int luns_in_chnl = dev->geo.num_lun; ch_rmap = &rmap->chnls[i]; ch_rmap->ch_off = -1; - ch_rmap->nr_luns = luns_in_chnl; + ch_rmap->num_lun = luns_in_chnl; lun_roffs = kcalloc(luns_in_chnl, sizeof(int), GFP_KERNEL); if (!lun_roffs) @@ -547,7 +546,7 @@ static void nvm_unregister_map(struct nvm_dev *dev) struct nvm_dev_map *rmap = dev->rmap; int i; - for (i = 0; i < dev->geo.nr_chnls; i++) + for (i = 0; i < dev->geo.num_ch; i++) kfree(rmap->chnls[i].lun_offs); kfree(rmap->chnls); @@ -676,7 +675,7 @@ static int nvm_set_rqd_ppalist(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd, int i, plane_cnt, pl_idx; struct ppa_addr ppa; - if (geo->plane_mode == NVM_PLANE_SINGLE && nr_ppas == 1) { + if (geo->pln_mode == NVM_PLANE_SINGLE && nr_ppas == 1) { rqd->nr_ppas = nr_ppas; rqd->ppa_addr = ppas[0]; @@ -690,7 +689,7 @@ static int nvm_set_rqd_ppalist(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd, return -ENOMEM; } - plane_cnt = geo->plane_mode; + plane_cnt = geo->pln_mode; rqd->nr_ppas *= plane_cnt; for (i = 0; i < nr_ppas; i++) { @@ -808,15 +807,15 @@ int nvm_bb_tbl_fold(struct nvm_dev *dev, u8 *blks, int nr_blks) struct nvm_geo *geo = &dev->geo; int blk, 
offset, pl, blktype; - if (nr_blks != geo->nr_chks * geo->plane_mode) + if (nr_blks != geo->num_chk * geo->pln_mode) return -EINVAL; - for (blk = 0; blk < geo->nr_chks; blk++) { - offset = blk * geo->plane_mode; + for (blk = 0; blk < geo->num_chk; blk++) { + offset = blk * geo->pln_mode; blktype = blks[offset]; /* Bad blocks on any planes take precedence over other types */ - for (pl = 0; pl < geo->plane_mode; pl++) { + for (pl = 0; pl < geo->pln_mode; pl++) { if (blks[offset + pl] & (NVM_BLK_T_BAD|NVM_BLK_T_GRWN_BAD)) { blktype = blks[offset + pl]; @@ -827,7 +826,7 @@ int nvm_bb_tbl_fold(struct nvm_dev *dev, u8 *blks, int nr_blks) blks[blk] = blktype; } - return geo->nr_chks; + return geo->num_chk; } EXPORT_SYMBOL(nvm_bb_tbl_fold); @@ -901,9 +900,9 @@ static int nvm_init(struct nvm_dev *dev) } pr_info("nvm: registered %s [%u/%u/%u/%u/%u]\n", - dev->name, geo->ws_min, geo->ws_opt, - geo->nr_chks, geo->all_luns, - geo->nr_chnls); + dev->name, dev->geo.ws_min, dev->geo.ws_opt, + dev->geo.num_chk, dev->geo.all_luns, + dev->geo.num_ch); return 0; err: pr_err("nvm: failed to initialize nvm\n"); diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c index 169589ddd457..7d0bd33f11d9 100644 --- a/drivers/lightnvm/pblk-core.c +++ b/drivers/lightnvm/pblk-core.c @@ -1745,10 +1745,10 @@ void pblk_up_rq(struct pblk *pblk, struct ppa_addr *ppa_list, int nr_ppas, struct nvm_tgt_dev *dev = pblk->dev; struct nvm_geo *geo = &dev->geo; struct pblk_lun *rlun; - int nr_luns = geo->all_luns; + int num_lun = geo->all_luns; int bit = -1; - while ((bit = find_next_bit(lun_bitmap, nr_luns, bit + 1)) < nr_luns) { + while ((bit = find_next_bit(lun_bitmap, num_lun, bit + 1)) < num_lun) { rlun = &pblk->luns[bit]; up(&rlun->wr_sem); } diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c index 9b5ee05c3028..11424beb214c 100644 --- a/drivers/lightnvm/pblk-init.c +++ b/drivers/lightnvm/pblk-init.c @@ -162,15 +162,15 @@ static int pblk_set_addrf_12(struct 
nvm_geo *geo, int power_len; /* Re-calculate channel and lun format to adapt to configuration */ - power_len = get_count_order(geo->nr_chnls); - if (1 << power_len != geo->nr_chnls) { + power_len = get_count_order(geo->num_ch); + if (1 << power_len != geo->num_ch) { pr_err("pblk: supports only power-of-two channel config.\n"); return -EINVAL; } dst->ch_len = power_len; - power_len = get_count_order(geo->nr_luns); - if (1 << power_len != geo->nr_luns) { + power_len = get_count_order(geo->num_lun); + if (1 << power_len != geo->num_lun) { pr_err("pblk: supports only power-of-two LUN config.\n"); return -EINVAL; } @@ -179,16 +179,16 @@ static int pblk_set_addrf_12(struct nvm_geo *geo, dst->blk_len = src->blk_len; dst->pg_len = src->pg_len; dst->pln_len = src->pln_len; - dst->sect_len = src->sect_len; + dst->sec_len = src->sec_len; - dst->sect_offset = 0; - dst->pln_offset = dst->sect_len; + dst->sec_offset = 0; + dst->pln_offset = dst->sec_len; dst->ch_offset = dst->pln_offset + dst->pln_len; dst->lun_offset = dst->ch_offset + dst->ch_len; dst->pg_offset = dst->lun_offset + dst->lun_len; dst->blk_offset = dst->pg_offset + dst->pg_len; - dst->sec_mask = ((1ULL << dst->sect_len) - 1) << dst->sect_offset; + dst->sec_mask = ((1ULL << dst->sec_len) - 1) << dst->sec_offset; dst->pln_mask = ((1ULL << dst->pln_len) - 1) << dst->pln_offset; dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset; dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset; @@ -448,7 +448,7 @@ static void *pblk_bb_get_log(struct pblk *pblk) int i, nr_blks, blk_per_lun; int ret; - blk_per_lun = geo->nr_chks * geo->plane_mode; + blk_per_lun = geo->num_chk * geo->pln_mode; nr_blks = blk_per_lun * geo->all_luns; log = kmalloc(nr_blks, GFP_KERNEL); @@ -475,7 +475,7 @@ static int pblk_bb_line(struct pblk *pblk, struct pblk_line *line, struct nvm_tgt_dev *dev = pblk->dev; struct nvm_geo *geo = &dev->geo; int i, bb_cnt = 0; - int blk_per_lun = geo->nr_chks * geo->plane_mode; + int blk_per_lun = 
geo->num_chk * geo->pln_mode; for (i = 0; i < blk_per_line; i++) { struct pblk_lun *rlun = &pblk->luns[i]; @@ -499,7 +499,7 @@ static int pblk_luns_init(struct pblk *pblk, struct ppa_addr *luns) int i; /* TODO: Implement unbalanced LUN support */ - if (geo->nr_luns < 0) { + if (geo->num_lun < 0) { pr_err("pblk: unbalanced LUN config.\n"); return -EINVAL; } @@ -511,9 +511,9 @@ static int pblk_luns_init(struct pblk *pblk, struct ppa_addr *luns) for (i = 0; i < geo->all_luns; i++) { /* Stripe across channels */ - int ch = i % geo->nr_chnls; - int lun_raw = i / geo->nr_chnls; - int lunid = lun_raw + ch * geo->nr_luns; + int ch = i % geo->num_ch; + int lun_raw = i / geo->num_ch; + int lunid = lun_raw + ch * geo->num_lun; rlun = &pblk->luns[i]; rlun->bppa = luns[lunid]; @@ -740,7 +740,7 @@ static int pblk_lines_init(struct pblk *pblk) return -EINVAL; } - l_mg->nr_lines = geo->nr_chks; + l_mg->nr_lines = geo->num_chk; l_mg->log_line = l_mg->data_line = NULL; l_mg->l_seq_nr = l_mg->d_seq_nr = 0; l_mg->nr_free_lines = 0; diff --git a/drivers/lightnvm/pblk-sysfs.c b/drivers/lightnvm/pblk-sysfs.c index 33199c6af267..462a787893d5 100644 --- a/drivers/lightnvm/pblk-sysfs.c +++ b/drivers/lightnvm/pblk-sysfs.c @@ -128,7 +128,7 @@ static ssize_t pblk_sysfs_ppaf(struct pblk *pblk, char *page) ppaf->blk_offset, ppaf->blk_len, ppaf->pg_offset, ppaf->pg_len, ppaf->pln_offset, ppaf->pln_len, - ppaf->sect_offset, ppaf->sect_len); + ppaf->sec_offset, ppaf->sec_len); sz += snprintf(page + sz, PAGE_SIZE - sz, "device:ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n", @@ -137,7 +137,7 @@ static ssize_t pblk_sysfs_ppaf(struct pblk *pblk, char *page) geo_ppaf->blk_offset, geo_ppaf->blk_len, geo_ppaf->pg_offset, geo_ppaf->pg_len, geo_ppaf->pln_offset, geo_ppaf->pln_len, - geo_ppaf->sect_offset, geo_ppaf->sect_len); + geo_ppaf->sec_offset, geo_ppaf->sec_len); return sz; } diff --git a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk.h index b29c1e6698aa..bae2cc758de8 100644 --- 
a/drivers/lightnvm/pblk.h +++ b/drivers/lightnvm/pblk.h @@ -941,7 +941,7 @@ static inline int pblk_ppa_to_line(struct ppa_addr p) static inline int pblk_ppa_to_pos(struct nvm_geo *geo, struct ppa_addr p) { - return p.g.lun * geo->nr_chnls + p.g.ch; + return p.g.lun * geo->num_ch + p.g.ch; } static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr, @@ -957,7 +957,7 @@ static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr, ppa.g.lun = (paddr & ppaf->lun_mask) >> ppaf->lun_offset; ppa.g.ch = (paddr & ppaf->ch_mask) >> ppaf->ch_offset; ppa.g.pl = (paddr & ppaf->pln_mask) >> ppaf->pln_offset; - ppa.g.sec = (paddr & ppaf->sec_mask) >> ppaf->sect_offset; + ppa.g.sec = (paddr & ppaf->sec_mask) >> ppaf->sec_offset; return ppa; } @@ -973,7 +973,7 @@ static inline u64 pblk_dev_ppa_to_line_addr(struct pblk *pblk, paddr |= (u64)p.g.lun << ppaf->lun_offset; paddr |= (u64)p.g.pg << ppaf->pg_offset; paddr |= (u64)p.g.pl << ppaf->pln_offset; - paddr |= (u64)p.g.sec << ppaf->sect_offset; + paddr |= (u64)p.g.sec << ppaf->sec_offset; return paddr; } @@ -998,7 +998,7 @@ static inline struct ppa_addr pblk_ppa32_to_ppa64(struct pblk *pblk, u32 ppa32) ppa64.g.blk = (ppa32 & ppaf->blk_mask) >> ppaf->blk_offset; ppa64.g.pg = (ppa32 & ppaf->pg_mask) >> ppaf->pg_offset; ppa64.g.pl = (ppa32 & ppaf->pln_mask) >> ppaf->pln_offset; - ppa64.g.sec = (ppa32 & ppaf->sec_mask) >> ppaf->sect_offset; + ppa64.g.sec = (ppa32 & ppaf->sec_mask) >> ppaf->sec_offset; } return ppa64; @@ -1022,7 +1022,7 @@ static inline u32 pblk_ppa64_to_ppa32(struct pblk *pblk, struct ppa_addr ppa64) ppa32 |= ppa64.g.blk << ppaf->blk_offset; ppa32 |= ppa64.g.pg << ppaf->pg_offset; ppa32 |= ppa64.g.pl << ppaf->pln_offset; - ppa32 |= ppa64.g.sec << ppaf->sect_offset; + ppa32 |= ppa64.g.sec << ppaf->sec_offset; } return ppa32; @@ -1140,7 +1140,7 @@ static inline int pblk_set_progr_mode(struct pblk *pblk, int type) struct nvm_geo *geo = &dev->geo; int flags; - flags = geo->plane_mode >> 1; + 
flags = geo->pln_mode >> 1; if (type == PBLK_WRITE) flags |= NVM_IO_SCRAMBLE_ENABLE; @@ -1161,7 +1161,7 @@ static inline int pblk_set_read_mode(struct pblk *pblk, int type) flags = NVM_IO_SUSPEND | NVM_IO_SCRAMBLE_ENABLE; if (type == PBLK_READ_SEQUENTIAL) - flags |= geo->plane_mode >> 1; + flags |= geo->pln_mode >> 1; return flags; } @@ -1214,10 +1214,10 @@ static inline int pblk_boundary_ppa_checks(struct nvm_tgt_dev *tgt_dev, ppa = &ppas[i]; if (!ppa->c.is_cached && - ppa->g.ch < geo->nr_chnls && - ppa->g.lun < geo->nr_luns && + ppa->g.ch < geo->num_ch && + ppa->g.lun < geo->num_lun && ppa->g.pl < geo->num_pln && - ppa->g.blk < geo->nr_chks && + ppa->g.blk < geo->num_chk && ppa->g.pg < geo->num_pg && ppa->g.sec < geo->ws_min) continue; diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c index afb5f883f8c8..f7135659f918 100644 --- a/drivers/nvme/host/lightnvm.c +++ b/drivers/nvme/host/lightnvm.c @@ -262,21 +262,21 @@ static void nvme_nvm_set_addr_12(struct nvm_addr_format_12 *dst, dst->blk_len = src->blk_len; dst->pg_len = src->pg_len; dst->pln_len = src->pln_len; - dst->sect_len = src->sec_len; + dst->sec_len = src->sec_len; dst->ch_offset = src->ch_offset; dst->lun_offset = src->lun_offset; dst->blk_offset = src->blk_offset; dst->pg_offset = src->pg_offset; dst->pln_offset = src->pln_offset; - dst->sect_offset = src->sec_offset; + dst->sec_offset = src->sec_offset; dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset; dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset; dst->blk_mask = ((1ULL << dst->blk_len) - 1) << dst->blk_offset; dst->pg_mask = ((1ULL << dst->pg_len) - 1) << dst->pg_offset; dst->pln_mask = ((1ULL << dst->pln_len) - 1) << dst->pln_offset; - dst->sec_mask = ((1ULL << dst->sect_len) - 1) << dst->sect_offset; + dst->sec_mask = ((1ULL << dst->sec_len) - 1) << dst->sec_offset; } static int nvme_nvm_setup_12(struct nvme_nvm_id12 *id, @@ -302,11 +302,11 @@ static int nvme_nvm_setup_12(struct nvme_nvm_id12 
*id, /* Set compacted version for upper layers */ geo->version = NVM_OCSSD_SPEC_12; - geo->nr_chnls = src->num_ch; - geo->nr_luns = src->num_lun; - geo->all_luns = geo->nr_chnls * geo->nr_luns; + geo->num_ch = src->num_ch; + geo->num_lun = src->num_lun; + geo->all_luns = geo->num_ch * geo->num_lun; - geo->nr_chks = le16_to_cpu(src->num_chk); + geo->num_chk = le16_to_cpu(src->num_chk); geo->csecs = le16_to_cpu(src->csecs); geo->sos = le16_to_cpu(src->sos); @@ -316,7 +316,7 @@ static int nvme_nvm_setup_12(struct nvme_nvm_id12 *id, sec_per_pl = sec_per_pg * src->num_pln; geo->clba = sec_per_pl * pg_per_blk; - geo->all_chunks = geo->all_luns * geo->nr_chks; + geo->all_chunks = geo->all_luns * geo->num_chk; geo->total_secs = geo->clba * geo->all_chunks; geo->ws_min = sec_per_pg; @@ -327,8 +327,8 @@ static int nvme_nvm_setup_12(struct nvme_nvm_id12 *id, * unspecified in 1.2. Users of 1.2 must be aware of this and eventually * specify these values through a quirk if restrictions apply. */ - geo->maxoc = geo->all_luns * geo->nr_chks; - geo->maxocpu = geo->nr_chks; + geo->maxoc = geo->all_luns * geo->num_chk; + geo->maxocpu = geo->num_chk; geo->cap = le32_to_cpu(src->mccap); @@ -350,13 +350,13 @@ static int nvme_nvm_setup_12(struct nvme_nvm_id12 *id, geo->cpar = le16_to_cpu(src->cpar); geo->mpos = le32_to_cpu(src->mpos); - geo->plane_mode = NVM_PLANE_SINGLE; + geo->pln_mode = NVM_PLANE_SINGLE; if (geo->mpos & 0x020202) { - geo->plane_mode = NVM_PLANE_DOUBLE; + geo->pln_mode = NVM_PLANE_DOUBLE; geo->ws_opt <<= 1; } else if (geo->mpos & 0x040404) { - geo->plane_mode = NVM_PLANE_QUAD; + geo->pln_mode = NVM_PLANE_QUAD; geo->ws_opt <<= 2; } @@ -404,14 +404,14 @@ static int nvme_nvm_setup_20(struct nvme_nvm_id20 *id, return -EINVAL; } - geo->nr_chnls = le16_to_cpu(id->num_grp); - geo->nr_luns = le16_to_cpu(id->num_pu); - geo->all_luns = geo->nr_chnls * geo->nr_luns; + geo->num_ch = le16_to_cpu(id->num_grp); + geo->num_lun = le16_to_cpu(id->num_pu); + geo->all_luns = geo->num_ch * 
geo->num_lun;
-	geo->nr_chks = le32_to_cpu(id->num_chk);
+	geo->num_chk = le32_to_cpu(id->num_chk);
 	geo->clba = le32_to_cpu(id->clba);
-	geo->all_chunks = geo->all_luns * geo->nr_chks;
+	geo->all_chunks = geo->all_luns * geo->num_chk;
 	geo->total_secs = geo->clba * geo->all_chunks;
 
 	geo->ws_min = le32_to_cpu(id->ws_min);
@@ -487,7 +487,7 @@ static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa,
 	struct nvme_ctrl *ctrl = ns->ctrl;
 	struct nvme_nvm_command c = {};
 	struct nvme_nvm_bb_tbl *bb_tbl;
-	int nr_blks = geo->nr_chks * geo->num_pln;
+	int nr_blks = geo->num_chk * geo->num_pln;
 	int tblsz = sizeof(struct nvme_nvm_bb_tbl) + nr_blks;
 	int ret = 0;
 
@@ -528,7 +528,7 @@ static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa,
 		goto out;
 	}
 
-	memcpy(blks, bb_tbl->blk, geo->nr_chks * geo->num_pln);
+	memcpy(blks, bb_tbl->blk, geo->num_chk * geo->num_pln);
 out:
 	kfree(bb_tbl);
 	return ret;
@@ -972,7 +972,7 @@ static ssize_t nvm_dev_attr_show_ppaf(struct nvm_addr_format_12 *ppaf,
 			ppaf->pln_offset, ppaf->pln_len,
 			ppaf->blk_offset, ppaf->blk_len,
 			ppaf->pg_offset, ppaf->pg_len,
-			ppaf->sect_offset, ppaf->sect_len);
+			ppaf->sec_offset, ppaf->sec_len);
 }
 
 static ssize_t nvm_dev_attr_show_12(struct device *dev,
@@ -1002,13 +1002,13 @@ static ssize_t nvm_dev_attr_show_12(struct device *dev,
 	} else if (strcmp(attr->name, "flash_media_type") == 0) {
 		return scnprintf(page, PAGE_SIZE, "%u\n", geo->fmtype);
 	} else if (strcmp(attr->name, "num_channels") == 0) {
-		return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chnls);
+		return scnprintf(page, PAGE_SIZE, "%u\n", geo->num_ch);
 	} else if (strcmp(attr->name, "num_luns") == 0) {
-		return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_luns);
+		return scnprintf(page, PAGE_SIZE, "%u\n", geo->num_lun);
 	} else if (strcmp(attr->name, "num_planes") == 0) {
 		return scnprintf(page, PAGE_SIZE, "%u\n", geo->num_pln);
 	} else if (strcmp(attr->name, "num_blocks") == 0) {	/* u16 */
-		return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chks);
+		return scnprintf(page, PAGE_SIZE, "%u\n", geo->num_chk);
 	} else if (strcmp(attr->name, "num_pages") == 0) {
 		return scnprintf(page, PAGE_SIZE, "%u\n", geo->num_pg);
 	} else if (strcmp(attr->name, "page_size") == 0) {
@@ -1052,11 +1052,11 @@ static ssize_t nvm_dev_attr_show_20(struct device *dev,
 	attr = &dattr->attr;
 
 	if (strcmp(attr->name, "groups") == 0) {
-		return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chnls);
+		return scnprintf(page, PAGE_SIZE, "%u\n", geo->num_ch);
 	} else if (strcmp(attr->name, "punits") == 0) {
-		return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_luns);
+		return scnprintf(page, PAGE_SIZE, "%u\n", geo->num_lun);
 	} else if (strcmp(attr->name, "chunks") == 0) {
-		return scnprintf(page, PAGE_SIZE, "%u\n", geo->nr_chks);
+		return scnprintf(page, PAGE_SIZE, "%u\n", geo->num_chk);
 	} else if (strcmp(attr->name, "clba") == 0) {
 		return scnprintf(page, PAGE_SIZE, "%u\n", geo->clba);
 	} else if (strcmp(attr->name, "ws_min") == 0) {
diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
index 2102b092c7eb..4f88e3dc4d8c 100644
--- a/include/linux/lightnvm.h
+++ b/include/linux/lightnvm.h
@@ -163,14 +163,14 @@ struct nvm_addr_format_12 {
 	u8	blk_len;
 	u8	pg_len;
 	u8	pln_len;
-	u8	sect_len;
+	u8	sec_len;
 
 	u8	ch_offset;
 	u8	lun_offset;
 	u8	blk_offset;
 	u8	pg_offset;
 	u8	pln_offset;
-	u8	sect_offset;
+	u8	sec_offset;
 
 	u64	ch_mask;
 	u64	lun_mask;
@@ -275,8 +275,8 @@ struct nvm_geo {
 	u8	version;
 
 	/* instance specific geometry */
-	int nr_chnls;
-	int nr_luns;		/* per channel */
+	int num_ch;
+	int num_lun;		/* per channel */
 
 	/* calculated values */
 	int all_luns;		/* across channels */
@@ -287,7 +287,7 @@ struct nvm_geo {
 	sector_t total_secs;	/* across channels */
 
 	/* chunk geometry */
-	u32 nr_chks;		/* chunks per lun */
+	u32 num_chk;		/* chunks per lun */
 	u32 clba;		/* sectors per chunk */
 	u16 csecs;		/* sector size */
 	u16 sos;		/* out-of-band area size */
@@ -327,7 +327,7 @@ struct nvm_geo {
 	u32	mpos;
 
 	u8	num_pln;
-	u8	plane_mode;
+	u8	pln_mode;
 	u16	num_pg;
 	u16	fpg_sz;
 };
@@ -385,7 +385,7 @@ static inline struct ppa_addr generic_to_dev_addr(struct nvm_tgt_dev *tgt_dev,
 	l.ppa |= ((u64)r.g.blk) << ppaf->blk_offset;
 	l.ppa |= ((u64)r.g.pg) << ppaf->pg_offset;
 	l.ppa |= ((u64)r.g.pl) << ppaf->pln_offset;
-	l.ppa |= ((u64)r.g.sec) << ppaf->sect_offset;
+	l.ppa |= ((u64)r.g.sec) << ppaf->sec_offset;
 
 	return l;
 }
@@ -405,7 +405,7 @@ static inline struct ppa_addr dev_to_generic_addr(struct nvm_tgt_dev *tgt_dev,
 	l.g.blk = (r.ppa & ppaf->blk_mask) >> ppaf->blk_offset;
 	l.g.pg = (r.ppa & ppaf->pg_mask) >> ppaf->pg_offset;
 	l.g.pl = (r.ppa & ppaf->pln_mask) >> ppaf->pln_offset;
-	l.g.sec = (r.ppa & ppaf->sec_mask) >> ppaf->sect_offset;
+	l.g.sec = (r.ppa & ppaf->sec_mask) >> ppaf->sec_offset;
 
 	return l;
 }
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 71+ messages in thread
* [PATCH 07/15] lightnvm: add support for 2.0 address format
@ 2018-02-28 15:49   ` Javier González
  0 siblings, 0 replies; 71+ messages in thread
From: Javier González @ 2018-02-28 15:49 UTC (permalink / raw)
  To: mb; +Cc: linux-block, linux-kernel, linux-nvme, Javier González

Add support for the 2.0 address format. Also, align the address bits for
1.2 and 2.0 so that channels and LUNs can be operated on without
requiring a format conversion. Use a generic address format for this
purpose.

Signed-off-by: Javier González <javier@cnexlabs.com>
---
 drivers/lightnvm/core.c  |  20 ++++-----
 include/linux/lightnvm.h | 105 ++++++++++++++++++++++++++++++++++-------------
 2 files changed, 86 insertions(+), 39 deletions(-)

diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index b869e3051265..36d76de22dfc 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -194,8 +194,8 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev,
 
 		for (j = 0; j < luns_in_chnl; j++) {
 			luns[lunid].ppa = 0;
-			luns[lunid].g.ch = i;
-			luns[lunid++].g.lun = j;
+			luns[lunid].a.ch = i;
+			luns[lunid++].a.lun = j;
 
 			lun_offs[j] = blun;
 			lun_roffs[j + blun] = blun;
@@ -556,22 +556,22 @@ static void nvm_unregister_map(struct nvm_dev *dev)
 static void nvm_map_to_dev(struct nvm_tgt_dev *tgt_dev, struct ppa_addr *p)
 {
 	struct nvm_dev_map *dev_map = tgt_dev->map;
-	struct nvm_ch_map *ch_map = &dev_map->chnls[p->g.ch];
-	int lun_off = ch_map->lun_offs[p->g.lun];
+	struct nvm_ch_map *ch_map = &dev_map->chnls[p->a.ch];
+	int lun_off = ch_map->lun_offs[p->a.lun];
 
-	p->g.ch += ch_map->ch_off;
-	p->g.lun += lun_off;
+	p->a.ch += ch_map->ch_off;
+	p->a.lun += lun_off;
 }
 
 static void nvm_map_to_tgt(struct nvm_tgt_dev *tgt_dev, struct ppa_addr *p)
 {
 	struct nvm_dev *dev = tgt_dev->parent;
 	struct nvm_dev_map *dev_rmap = dev->rmap;
-	struct nvm_ch_map *ch_rmap = &dev_rmap->chnls[p->g.ch];
-	int lun_roff = ch_rmap->lun_offs[p->g.lun];
+	struct nvm_ch_map *ch_rmap = &dev_rmap->chnls[p->a.ch];
+	int lun_roff = ch_rmap->lun_offs[p->a.lun];
 
-	p->g.ch -= ch_rmap->ch_off;
-	p->g.lun -= lun_roff;
+	p->a.ch -= ch_rmap->ch_off;
+	p->a.lun -= lun_roff;
 }
 
 static void nvm_ppa_tgt_to_dev(struct nvm_tgt_dev *tgt_dev,
diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
index 4f88e3dc4d8c..73110adf27ad 100644
--- a/include/linux/lightnvm.h
+++ b/include/linux/lightnvm.h
@@ -16,12 +16,21 @@ enum {
 	NVM_IOTYPE_GC = 1,
 };
 
-#define NVM_BLK_BITS (16)
-#define NVM_PG_BITS  (16)
-#define NVM_SEC_BITS (8)
-#define NVM_PL_BITS  (8)
-#define NVM_LUN_BITS (8)
-#define NVM_CH_BITS  (7)
+/* common format */
+#define NVM_GEN_CH_BITS  (8)
+#define NVM_GEN_LUN_BITS (8)
+#define NVM_GEN_BLK_BITS (16)
+#define NVM_GEN_RESERVED (32)
+
+/* 1.2 format */
+#define NVM_12_PG_BITS  (16)
+#define NVM_12_PL_BITS  (4)
+#define NVM_12_SEC_BITS (4)
+#define NVM_12_RESERVED (8)
+
+/* 2.0 format */
+#define NVM_20_SEC_BITS (24)
+#define NVM_20_RESERVED (8)
 
 enum {
 	NVM_OCSSD_SPEC_12 = 12,
@@ -31,16 +40,34 @@ enum {
 struct ppa_addr {
 	/* Generic structure for all addresses */
 	union {
+		/* generic device format */
 		struct {
-			u64 blk		: NVM_BLK_BITS;
-			u64 pg		: NVM_PG_BITS;
-			u64 sec		: NVM_SEC_BITS;
-			u64 pl		: NVM_PL_BITS;
-			u64 lun		: NVM_LUN_BITS;
-			u64 ch		: NVM_CH_BITS;
-			u64 reserved	: 1;
+			u64 ch		: NVM_GEN_CH_BITS;
+			u64 lun		: NVM_GEN_LUN_BITS;
+			u64 blk		: NVM_GEN_BLK_BITS;
+			u64 reserved	: NVM_GEN_RESERVED;
+		} a;
+
+		/* 1.2 device format */
+		struct {
+			u64 ch		: NVM_GEN_CH_BITS;
+			u64 lun		: NVM_GEN_LUN_BITS;
+			u64 blk		: NVM_GEN_BLK_BITS;
+			u64 pg		: NVM_12_PG_BITS;
+			u64 pl		: NVM_12_PL_BITS;
+			u64 sec		: NVM_12_SEC_BITS;
+			u64 reserved	: NVM_12_RESERVED;
 		} g;
 
+		/* 2.0 device format */
+		struct {
+			u64 grp		: NVM_GEN_CH_BITS;
+			u64 pu		: NVM_GEN_LUN_BITS;
+			u64 chk		: NVM_GEN_BLK_BITS;
+			u64 sec		: NVM_20_SEC_BITS;
+			u64 reserved	: NVM_20_RESERVED;
+		} m;
+
 		struct {
 			u64 line	: 63;
 			u64 is_cached	: 1;
@@ -376,16 +403,26 @@ static inline struct ppa_addr generic_to_dev_addr(struct nvm_tgt_dev *tgt_dev,
 						  struct ppa_addr r)
 {
 	struct nvm_geo *geo = &tgt_dev->geo;
-	struct nvm_addr_format_12 *ppaf =
-				(struct nvm_addr_format_12 *)&geo->addrf;
 	struct ppa_addr l;
 
-	l.ppa = ((u64)r.g.ch) << ppaf->ch_offset;
-	l.ppa |= ((u64)r.g.lun) << ppaf->lun_offset;
-	l.ppa |= ((u64)r.g.blk) << ppaf->blk_offset;
-	l.ppa |= ((u64)r.g.pg) << ppaf->pg_offset;
-	l.ppa |= ((u64)r.g.pl) << ppaf->pln_offset;
-	l.ppa |= ((u64)r.g.sec) << ppaf->sec_offset;
+	if (geo->version == NVM_OCSSD_SPEC_12) {
+		struct nvm_addr_format_12 *ppaf =
+				(struct nvm_addr_format_12 *)&geo->addrf;
+
+		l.ppa = ((u64)r.g.ch) << ppaf->ch_offset;
+		l.ppa |= ((u64)r.g.lun) << ppaf->lun_offset;
+		l.ppa |= ((u64)r.g.blk) << ppaf->blk_offset;
+		l.ppa |= ((u64)r.g.pg) << ppaf->pg_offset;
+		l.ppa |= ((u64)r.g.pl) << ppaf->pln_offset;
+		l.ppa |= ((u64)r.g.sec) << ppaf->sec_offset;
+	} else {
+		struct nvm_addr_format *lbaf = &geo->addrf;
+
+		l.ppa = ((u64)r.m.grp) << lbaf->ch_offset;
+		l.ppa |= ((u64)r.m.pu) << lbaf->lun_offset;
+		l.ppa |= ((u64)r.m.chk) << lbaf->chk_offset;
+		l.ppa |= ((u64)r.m.sec) << lbaf->sec_offset;
+	}
 
 	return l;
 }
@@ -394,18 +431,28 @@ static inline struct ppa_addr dev_to_generic_addr(struct nvm_tgt_dev *tgt_dev,
 						  struct ppa_addr r)
 {
 	struct nvm_geo *geo = &tgt_dev->geo;
-	struct nvm_addr_format_12 *ppaf =
-				(struct nvm_addr_format_12 *)&geo->addrf;
 	struct ppa_addr l;
 
 	l.ppa = 0;
 
-	l.g.ch = (r.ppa & ppaf->ch_mask) >> ppaf->ch_offset;
-	l.g.lun = (r.ppa & ppaf->lun_mask) >> ppaf->lun_offset;
-	l.g.blk = (r.ppa & ppaf->blk_mask) >> ppaf->blk_offset;
-	l.g.pg = (r.ppa & ppaf->pg_mask) >> ppaf->pg_offset;
-	l.g.pl = (r.ppa & ppaf->pln_mask) >> ppaf->pln_offset;
-	l.g.sec = (r.ppa & ppaf->sec_mask) >> ppaf->sec_offset;
+	if (geo->version == NVM_OCSSD_SPEC_12) {
+		struct nvm_addr_format_12 *ppaf =
+				(struct nvm_addr_format_12 *)&geo->addrf;
+
+		l.g.ch = (r.ppa & ppaf->ch_mask) >> ppaf->ch_offset;
+		l.g.lun = (r.ppa & ppaf->lun_mask) >> ppaf->lun_offset;
+		l.g.blk = (r.ppa & ppaf->blk_mask) >> ppaf->blk_offset;
+		l.g.pg = (r.ppa & ppaf->pg_mask) >> ppaf->pg_offset;
+		l.g.pl = (r.ppa & ppaf->pln_mask) >> ppaf->pln_offset;
+		l.g.sec = (r.ppa & ppaf->sec_mask) >> ppaf->sec_offset;
+	} else {
+		struct nvm_addr_format *lbaf = &geo->addrf;
+
+		l.m.grp = (r.ppa & lbaf->ch_mask) >> lbaf->ch_offset;
+		l.m.pu = (r.ppa & lbaf->lun_mask) >> lbaf->lun_offset;
+		l.m.chk = (r.ppa & lbaf->chk_mask) >> lbaf->chk_offset;
+		l.m.sec = (r.ppa & lbaf->sec_mask) >> lbaf->sec_offset;
+	}
 
 	return l;
 }
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 71+ messages in thread
* [PATCH 08/15] lightnvm: make address conversions depend on generic device
@ 2018-02-28 15:49   ` Javier González
  0 siblings, 0 replies; 71+ messages in thread
From: Javier González @ 2018-02-28 15:49 UTC (permalink / raw)
  To: mb; +Cc: linux-block, linux-kernel, linux-nvme, Javier González

For address conversions, use the generic device instead of the target
device. This allows conversions to be used outside of the target's
realm.

Signed-off-by: Javier González <javier@cnexlabs.com>
---
 drivers/lightnvm/core.c  | 4 ++--
 include/linux/lightnvm.h | 8 ++++----
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index 36d76de22dfc..ed33e0b11788 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -581,7 +581,7 @@ static void nvm_ppa_tgt_to_dev(struct nvm_tgt_dev *tgt_dev,
 
 	for (i = 0; i < nr_ppas; i++) {
 		nvm_map_to_dev(tgt_dev, &ppa_list[i]);
-		ppa_list[i] = generic_to_dev_addr(tgt_dev, ppa_list[i]);
+		ppa_list[i] = generic_to_dev_addr(tgt_dev->parent, ppa_list[i]);
 	}
 }
 
@@ -591,7 +591,7 @@ static void nvm_ppa_dev_to_tgt(struct nvm_tgt_dev *tgt_dev,
 	int i;
 
 	for (i = 0; i < nr_ppas; i++) {
-		ppa_list[i] = dev_to_generic_addr(tgt_dev, ppa_list[i]);
+		ppa_list[i] = dev_to_generic_addr(tgt_dev->parent, ppa_list[i]);
 		nvm_map_to_tgt(tgt_dev, &ppa_list[i]);
 	}
 }
diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
index 73110adf27ad..e878b95aeec4 100644
--- a/include/linux/lightnvm.h
+++ b/include/linux/lightnvm.h
@@ -399,10 +399,10 @@ struct nvm_dev {
 	struct list_head targets;
 };
 
-static inline struct ppa_addr generic_to_dev_addr(struct nvm_tgt_dev *tgt_dev,
+static inline struct ppa_addr generic_to_dev_addr(struct nvm_dev *dev,
 						  struct ppa_addr r)
 {
-	struct nvm_geo *geo = &tgt_dev->geo;
+	struct nvm_geo *geo = &dev->geo;
 	struct ppa_addr l;
 
 	if (geo->version == NVM_OCSSD_SPEC_12) {
@@ -427,10 +427,10 @@ static inline struct ppa_addr generic_to_dev_addr(struct nvm_tgt_dev *tgt_dev,
 	return l;
 }
 
-static inline struct ppa_addr dev_to_generic_addr(struct nvm_tgt_dev *tgt_dev,
+static inline struct ppa_addr dev_to_generic_addr(struct nvm_dev *dev,
 						  struct ppa_addr r)
 {
-	struct nvm_geo *geo = &tgt_dev->geo;
+	struct nvm_geo *geo = &dev->geo;
 	struct ppa_addr l;
 
 	l.ppa = 0;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 71+ messages in thread
* [PATCH 09/15] lightnvm: implement get log report chunk helpers
@ 2018-02-28 15:49   ` Javier González
  0 siblings, 0 replies; 71+ messages in thread
From: Javier González @ 2018-02-28 15:49 UTC (permalink / raw)
To: mb; +Cc: linux-block, linux-kernel, linux-nvme, Javier González

The 2.0 spec provides a report chunk log page that can be retrieved
using the standard nvme get log page. This replaces the dedicated
get/put bad block table in 1.2.

This patch implements the helper functions to allow targets to retrieve
the chunk metadata using get log page. It makes nvme_get_log_ext
available outside of nvme core so that we can use it from lightnvm.

Signed-off-by: Javier González <javier@cnexlabs.com>
---
 drivers/lightnvm/core.c      | 11 +++++++
 drivers/nvme/host/core.c     |  6 ++--
 drivers/nvme/host/lightnvm.c | 74 ++++++++++++++++++++++++++++++++++++++++++++
 drivers/nvme/host/nvme.h     |  3 ++
 include/linux/lightnvm.h     | 24 ++++++++++++++
 5 files changed, 115 insertions(+), 3 deletions(-)

diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index ed33e0b11788..4141871f460d 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -712,6 +712,17 @@ static void nvm_free_rqd_ppalist(struct nvm_tgt_dev *tgt_dev,
 	nvm_dev_dma_free(tgt_dev->parent, rqd->ppa_list, rqd->dma_ppa_list);
 }
 
+int nvm_get_chunk_meta(struct nvm_tgt_dev *tgt_dev, struct nvm_chk_meta *meta,
+		struct ppa_addr ppa, int nchks)
+{
+	struct nvm_dev *dev = tgt_dev->parent;
+
+	nvm_ppa_tgt_to_dev(tgt_dev, &ppa, 1);
+
+	return dev->ops->get_chk_meta(tgt_dev->parent, meta,
+						(sector_t)ppa.ppa, nchks);
+}
+EXPORT_SYMBOL(nvm_get_chunk_meta);
 
 int nvm_set_tgt_bb_tbl(struct nvm_tgt_dev *tgt_dev, struct ppa_addr *ppas,
 		       int nr_ppas, int type)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 2e9e9f973a75..af642ce6ba69 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2127,9 +2127,9 @@ static int nvme_init_subsystem(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
 	return ret;
 }
 
-static int nvme_get_log_ext(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
-			    u8 log_page, void *log,
-			    size_t size, size_t offset)
+int nvme_get_log_ext(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
+		     u8 log_page, void *log,
+		     size_t size, size_t offset)
 {
 	struct nvme_command c = { };
 	unsigned long dwlen = size / 4 - 1;
diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index f7135659f918..a1796241040f 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -35,6 +35,10 @@ enum nvme_nvm_admin_opcode {
 	nvme_nvm_admin_set_bb_tbl	= 0xf1,
 };
 
+enum nvme_nvm_log_page {
+	NVME_NVM_LOG_REPORT_CHUNK	= 0xca,
+};
+
 struct nvme_nvm_ph_rw {
 	__u8			opcode;
 	__u8			flags;
@@ -236,6 +240,16 @@ struct nvme_nvm_id20 {
 	__u8			vs[1024];
 };
 
+struct nvme_nvm_chk_meta {
+	__u8	state;
+	__u8	type;
+	__u8	wi;
+	__u8	rsvd[5];
+	__le64	slba;
+	__le64	cnlb;
+	__le64	wp;
+};
+
 /*
  * Check we didn't inadvertently grow the command struct
  */
@@ -252,6 +266,9 @@ static inline void _nvme_nvm_check_size(void)
 	BUILD_BUG_ON(sizeof(struct nvme_nvm_bb_tbl) != 64);
 	BUILD_BUG_ON(sizeof(struct nvme_nvm_id20_addrf) != 8);
 	BUILD_BUG_ON(sizeof(struct nvme_nvm_id20) != NVME_IDENTIFY_DATA_SIZE);
+	BUILD_BUG_ON(sizeof(struct nvme_nvm_chk_meta) != 32);
+	BUILD_BUG_ON(sizeof(struct nvme_nvm_chk_meta) !=
+						sizeof(struct nvm_chk_meta));
 }
 
 static void nvme_nvm_set_addr_12(struct nvm_addr_format_12 *dst,
@@ -555,6 +572,61 @@ static int nvme_nvm_set_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr *ppas,
 	return ret;
 }
 
+/*
+ * Expect the lba in device format
+ */
+static int nvme_nvm_get_chk_meta(struct nvm_dev *ndev,
+				 struct nvm_chk_meta *meta,
+				 sector_t slba, int nchks)
+{
+	struct nvm_geo *geo = &ndev->geo;
+	struct nvme_ns *ns = ndev->q->queuedata;
+	struct nvme_ctrl *ctrl = ns->ctrl;
+	struct nvme_nvm_chk_meta *dev_meta = (struct nvme_nvm_chk_meta *)meta;
+	struct ppa_addr ppa;
+	size_t left = nchks * sizeof(struct nvme_nvm_chk_meta);
+	size_t log_pos, offset, len;
+	int ret, i;
+
+	/* Normalize lba address space to obtain log offset */
+	ppa.ppa = slba;
+	ppa = dev_to_generic_addr(ndev, ppa);
+
+	log_pos = ppa.m.chk;
+	log_pos += ppa.m.pu * geo->num_chk;
+	log_pos += ppa.m.grp * geo->num_lun * geo->num_chk;
+
+	offset = log_pos * sizeof(struct nvme_nvm_chk_meta);
+
+	while (left) {
+		len = min_t(unsigned int, left, ctrl->max_hw_sectors << 9);
+
+		ret = nvme_get_log_ext(ctrl, ns, NVME_NVM_LOG_REPORT_CHUNK,
+				dev_meta, len, offset);
+		if (ret) {
+			dev_err(ctrl->device, "Get REPORT CHUNK log error\n");
+			break;
+		}
+
+		for (i = 0; i < len; i += sizeof(struct nvme_nvm_chk_meta)) {
+			meta->state = dev_meta->state;
+			meta->type = dev_meta->type;
+			meta->wi = dev_meta->wi;
+			meta->slba = le64_to_cpu(dev_meta->slba);
+			meta->cnlb = le64_to_cpu(dev_meta->cnlb);
+			meta->wp = le64_to_cpu(dev_meta->wp);
+
+			meta++;
+			dev_meta++;
+		}
+
+		offset += len;
+		left -= len;
+	}
+
+	return ret;
+}
+
 static inline void nvme_nvm_rqtocmd(struct nvm_rq *rqd, struct nvme_ns *ns,
 				    struct nvme_nvm_command *c)
 {
@@ -686,6 +758,8 @@ static struct nvm_dev_ops nvme_nvm_dev_ops = {
 	.get_bb_tbl		= nvme_nvm_get_bb_tbl,
 	.set_bb_tbl		= nvme_nvm_set_bb_tbl,
 
+	.get_chk_meta		= nvme_nvm_get_chk_meta,
+
 	.submit_io		= nvme_nvm_submit_io,
 	.submit_io_sync		= nvme_nvm_submit_io_sync,
 
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 1ca08f4993ba..505f797f8c6c 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -396,6 +396,9 @@ int nvme_reset_ctrl(struct nvme_ctrl *ctrl);
 int nvme_delete_ctrl(struct nvme_ctrl *ctrl);
 int nvme_delete_ctrl_sync(struct nvme_ctrl *ctrl);
 
+int nvme_get_log_ext(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
+		     u8 log_page, void *log, size_t size, size_t offset);
+
 extern const struct attribute_group nvme_ns_id_attr_group;
 extern const struct block_device_operations nvme_ns_head_ops;
 
diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
index e878b95aeec4..9fe37f7e8185 100644
--- a/include/linux/lightnvm.h
+++ b/include/linux/lightnvm.h
@@ -81,10 +81,13 @@ struct nvm_rq;
 struct nvm_id;
 struct nvm_dev;
 struct nvm_tgt_dev;
+struct nvm_chk_meta;
 
 typedef int (nvm_id_fn)(struct nvm_dev *);
 typedef int (nvm_op_bb_tbl_fn)(struct nvm_dev *, struct ppa_addr, u8 *);
 typedef int (nvm_op_set_bb_fn)(struct nvm_dev *, struct ppa_addr *, int, int);
+typedef int (nvm_get_chk_meta_fn)(struct nvm_dev *, struct nvm_chk_meta *,
+								sector_t, int);
 typedef int (nvm_submit_io_fn)(struct nvm_dev *, struct nvm_rq *);
 typedef int (nvm_submit_io_sync_fn)(struct nvm_dev *, struct nvm_rq *);
 typedef void *(nvm_create_dma_pool_fn)(struct nvm_dev *, char *);
@@ -98,6 +101,8 @@ struct nvm_dev_ops {
 	nvm_op_bb_tbl_fn	*get_bb_tbl;
 	nvm_op_set_bb_fn	*set_bb_tbl;
 
+	nvm_get_chk_meta_fn	*get_chk_meta;
+
 	nvm_submit_io_fn	*submit_io;
 	nvm_submit_io_sync_fn	*submit_io_sync;
 
@@ -227,6 +232,20 @@ struct nvm_addr_format {
 	u64	rsv_mask[2];
 };
 
+/*
+ * Note: The structure size is linked to nvme_nvm_chk_meta such that the same
+ * buffer can be used when converting from little endian to cpu addressing.
+ */
+struct nvm_chk_meta {
+	u8	state;
+	u8	type;
+	u8	wi;
+	u8	rsvd[5];
+	u64	slba;
+	u64	cnlb;
+	u64	wp;
+};
+
 struct nvm_target {
 	struct list_head list;
 	struct nvm_tgt_dev *dev;
@@ -496,6 +515,11 @@ extern struct nvm_dev *nvm_alloc_dev(int);
 extern int nvm_register(struct nvm_dev *);
 extern void nvm_unregister(struct nvm_dev *);
 
+
+extern int nvm_get_chunk_meta(struct nvm_tgt_dev *tgt_dev,
+			      struct nvm_chk_meta *meta, struct ppa_addr ppa,
+			      int nchks);
+
 extern int nvm_set_tgt_bb_tbl(struct nvm_tgt_dev *, struct ppa_addr *,
 			      int, int);
 extern int nvm_submit_io(struct nvm_tgt_dev *, struct nvm_rq *);
-- 
2.7.4
* Re: [PATCH 09/15] lightnvm: implement get log report chunk helpers
  2018-02-28 15:49 ` Javier González
@ 2018-03-01 10:40   ` Matias Bjørling
  -1 siblings, 0 replies; 71+ messages in thread
From: Matias Bjørling @ 2018-03-01 10:40 UTC (permalink / raw)
To: Javier González
Cc: linux-block, linux-kernel, linux-nvme, Javier González

On 02/28/2018 04:49 PM, Javier González wrote:
> The 2.0 spec provides a report chunk log page that can be retrieved
> using the standard nvme get log page. This replaces the dedicated
> get/put bad block table in 1.2.
>
> This patch implements the helper functions to allow targets to retrieve
> the chunk metadata using get log page. It makes nvme_get_log_ext
> available outside of nvme core so that we can use it from lightnvm.
>
> Signed-off-by: Javier González <javier@cnexlabs.com>
> ---
>  drivers/lightnvm/core.c      | 11 +++++++
>  drivers/nvme/host/core.c     |  6 ++--
>  drivers/nvme/host/lightnvm.c | 74 ++++++++++++++++++++++++++++++++++++++++++++
>  drivers/nvme/host/nvme.h     |  3 ++
>  include/linux/lightnvm.h     | 24 ++++++++++++++
>  5 files changed, 115 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
> index ed33e0b11788..4141871f460d 100644
> --- a/drivers/lightnvm/core.c
> +++ b/drivers/lightnvm/core.c
> @@ -712,6 +712,17 @@ static void nvm_free_rqd_ppalist(struct nvm_tgt_dev *tgt_dev,
>  	nvm_dev_dma_free(tgt_dev->parent, rqd->ppa_list, rqd->dma_ppa_list);
>  }
>  
> +int nvm_get_chunk_meta(struct nvm_tgt_dev *tgt_dev, struct nvm_chk_meta *meta,
> +		struct ppa_addr ppa, int nchks)
> +{
> +	struct nvm_dev *dev = tgt_dev->parent;
> +
> +	nvm_ppa_tgt_to_dev(tgt_dev, &ppa, 1);
> +
> +	return dev->ops->get_chk_meta(tgt_dev->parent, meta,
> +						(sector_t)ppa.ppa, nchks);
> +}
> +EXPORT_SYMBOL(nvm_get_chunk_meta);
>  
>  int nvm_set_tgt_bb_tbl(struct nvm_tgt_dev *tgt_dev, struct ppa_addr *ppas,
>  		       int nr_ppas, int type)
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 2e9e9f973a75..af642ce6ba69 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -2127,9 +2127,9 @@ static int nvme_init_subsystem(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
>  	return ret;
>  }
>  
> -static int nvme_get_log_ext(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
> -			    u8 log_page, void *log,
> -			    size_t size, size_t offset)
> +int nvme_get_log_ext(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
> +		     u8 log_page, void *log,
> +		     size_t size, size_t offset)
>  {
>  	struct nvme_command c = { };
>  	unsigned long dwlen = size / 4 - 1;
> diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
> index f7135659f918..a1796241040f 100644
> --- a/drivers/nvme/host/lightnvm.c
> +++ b/drivers/nvme/host/lightnvm.c
> @@ -35,6 +35,10 @@ enum nvme_nvm_admin_opcode {
>  	nvme_nvm_admin_set_bb_tbl	= 0xf1,
>  };
>  
> +enum nvme_nvm_log_page {
> +	NVME_NVM_LOG_REPORT_CHUNK	= 0xca,
> +};
> +
>  struct nvme_nvm_ph_rw {
>  	__u8			opcode;
>  	__u8			flags;
> @@ -236,6 +240,16 @@ struct nvme_nvm_id20 {
>  	__u8			vs[1024];
>  };
>  
> +struct nvme_nvm_chk_meta {
> +	__u8	state;
> +	__u8	type;
> +	__u8	wi;
> +	__u8	rsvd[5];
> +	__le64	slba;
> +	__le64	cnlb;
> +	__le64	wp;
> +};
> +
>  /*
>   * Check we didn't inadvertently grow the command struct
>   */
> @@ -252,6 +266,9 @@ static inline void _nvme_nvm_check_size(void)
>  	BUILD_BUG_ON(sizeof(struct nvme_nvm_bb_tbl) != 64);
>  	BUILD_BUG_ON(sizeof(struct nvme_nvm_id20_addrf) != 8);
>  	BUILD_BUG_ON(sizeof(struct nvme_nvm_id20) != NVME_IDENTIFY_DATA_SIZE);
> +	BUILD_BUG_ON(sizeof(struct nvme_nvm_chk_meta) != 32);
> +	BUILD_BUG_ON(sizeof(struct nvme_nvm_chk_meta) !=
> +					sizeof(struct nvm_chk_meta));
>  }
>  
>  static void nvme_nvm_set_addr_12(struct nvm_addr_format_12 *dst,
> @@ -555,6 +572,61 @@ static int nvme_nvm_set_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr *ppas,
>  	return ret;
>  }
>  
> +/*
> + * Expect the lba in device format
> + */
> +static int nvme_nvm_get_chk_meta(struct nvm_dev *ndev,
> +				 struct nvm_chk_meta *meta,
> +				 sector_t slba, int nchks)
> +{
> +	struct nvm_geo *geo = &ndev->geo;
> +	struct nvme_ns *ns = ndev->q->queuedata;
> +	struct nvme_ctrl *ctrl = ns->ctrl;
> +	struct nvme_nvm_chk_meta *dev_meta = (struct nvme_nvm_chk_meta *)meta;
> +	struct ppa_addr ppa;
> +	size_t left = nchks * sizeof(struct nvme_nvm_chk_meta);
> +	size_t log_pos, offset, len;
> +	int ret, i;
> +
> +	/* Normalize lba address space to obtain log offset */
> +	ppa.ppa = slba;
> +	ppa = dev_to_generic_addr(ndev, ppa);
> +
> +	log_pos = ppa.m.chk;
> +	log_pos += ppa.m.pu * geo->num_chk;
> +	log_pos += ppa.m.grp * geo->num_lun * geo->num_chk;

Why is this done?
* Re: [PATCH 09/15] lightnvm: implement get log report chunk helpers 2018-03-01 10:40 ` Matias Bjørling @ 2018-03-01 11:02 ` Javier Gonzalez -1 siblings, 0 replies; 71+ messages in thread From: Javier Gonzalez @ 2018-03-01 11:02 UTC (permalink / raw) To: Matias Bjørling; +Cc: linux-block, linux-kernel, linux-nvme [-- Attachment #1: Type: text/plain, Size: 4993 bytes --] > On 1 Mar 2018, at 11.40, Matias Bjørling <mb@lightnvm.io> wrote: > > On 02/28/2018 04:49 PM, Javier González wrote: >> The 2.0 spec provides a report chunk log page that can be retrieved >> using the stangard nvme get log page. This replaces the dedicated >> get/put bad block table in 1.2. >> This patch implements the helper functions to allow targets retrieve the >> chunk metadata using get log page. It makes nvme_get_log_ext available >> outside of nvme core so that we can use it form lightnvm. >> Signed-off-by: Javier González <javier@cnexlabs.com> >> --- >> drivers/lightnvm/core.c | 11 +++++++ >> drivers/nvme/host/core.c | 6 ++-- >> drivers/nvme/host/lightnvm.c | 74 ++++++++++++++++++++++++++++++++++++++++++++ >> drivers/nvme/host/nvme.h | 3 ++ >> include/linux/lightnvm.h | 24 ++++++++++++++ >> 5 files changed, 115 insertions(+), 3 deletions(-) >> diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c >> index ed33e0b11788..4141871f460d 100644 >> --- a/drivers/lightnvm/core.c >> +++ b/drivers/lightnvm/core.c >> @@ -712,6 +712,17 @@ static void nvm_free_rqd_ppalist(struct nvm_tgt_dev *tgt_dev, >> nvm_dev_dma_free(tgt_dev->parent, rqd->ppa_list, rqd->dma_ppa_list); >> } >> +int nvm_get_chunk_meta(struct nvm_tgt_dev *tgt_dev, struct nvm_chk_meta *meta, >> + struct ppa_addr ppa, int nchks) >> +{ >> + struct nvm_dev *dev = tgt_dev->parent; >> + >> + nvm_ppa_tgt_to_dev(tgt_dev, &ppa, 1); >> + >> + return dev->ops->get_chk_meta(tgt_dev->parent, meta, >> + (sector_t)ppa.ppa, nchks); >> +} >> +EXPORT_SYMBOL(nvm_get_chunk_meta); >> int nvm_set_tgt_bb_tbl(struct nvm_tgt_dev *tgt_dev, struct 
ppa_addr *ppas, >> int nr_ppas, int type) >> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c >> index 2e9e9f973a75..af642ce6ba69 100644 >> --- a/drivers/nvme/host/core.c >> +++ b/drivers/nvme/host/core.c >> @@ -2127,9 +2127,9 @@ static int nvme_init_subsystem(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id) >> return ret; >> } >> -static int nvme_get_log_ext(struct nvme_ctrl *ctrl, struct nvme_ns *ns, >> - u8 log_page, void *log, >> - size_t size, size_t offset) >> +int nvme_get_log_ext(struct nvme_ctrl *ctrl, struct nvme_ns *ns, >> + u8 log_page, void *log, >> + size_t size, size_t offset) >> { >> struct nvme_command c = { }; >> unsigned long dwlen = size / 4 - 1; >> diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c >> index f7135659f918..a1796241040f 100644 >> --- a/drivers/nvme/host/lightnvm.c >> +++ b/drivers/nvme/host/lightnvm.c >> @@ -35,6 +35,10 @@ enum nvme_nvm_admin_opcode { >> nvme_nvm_admin_set_bb_tbl = 0xf1, >> }; >> +enum nvme_nvm_log_page { >> + NVME_NVM_LOG_REPORT_CHUNK = 0xca, >> +}; >> + >> struct nvme_nvm_ph_rw { >> __u8 opcode; >> __u8 flags; >> @@ -236,6 +240,16 @@ struct nvme_nvm_id20 { >> __u8 vs[1024]; >> }; >> +struct nvme_nvm_chk_meta { >> + __u8 state; >> + __u8 type; >> + __u8 wi; >> + __u8 rsvd[5]; >> + __le64 slba; >> + __le64 cnlb; >> + __le64 wp; >> +}; >> + >> /* >> * Check we didn't inadvertently grow the command struct >> */ >> @@ -252,6 +266,9 @@ static inline void _nvme_nvm_check_size(void) >> BUILD_BUG_ON(sizeof(struct nvme_nvm_bb_tbl) != 64); >> BUILD_BUG_ON(sizeof(struct nvme_nvm_id20_addrf) != 8); >> BUILD_BUG_ON(sizeof(struct nvme_nvm_id20) != NVME_IDENTIFY_DATA_SIZE); >> + BUILD_BUG_ON(sizeof(struct nvme_nvm_chk_meta) != 32); >> + BUILD_BUG_ON(sizeof(struct nvme_nvm_chk_meta) != >> + sizeof(struct nvm_chk_meta)); >> } >> static void nvme_nvm_set_addr_12(struct nvm_addr_format_12 *dst, >> @@ -555,6 +572,61 @@ static int nvme_nvm_set_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr *ppas, 
>> return ret; >> } >> +/* >> + * Expect the lba in device format >> + */ >> +static int nvme_nvm_get_chk_meta(struct nvm_dev *ndev, >> + struct nvm_chk_meta *meta, >> + sector_t slba, int nchks) >> +{ >> + struct nvm_geo *geo = &ndev->geo; >> + struct nvme_ns *ns = ndev->q->queuedata; >> + struct nvme_ctrl *ctrl = ns->ctrl; >> + struct nvme_nvm_chk_meta *dev_meta = (struct nvme_nvm_chk_meta *)meta; >> + struct ppa_addr ppa; >> + size_t left = nchks * sizeof(struct nvme_nvm_chk_meta); >> + size_t log_pos, offset, len; >> + int ret, i; >> + >> + /* Normalize lba address space to obtain log offset */ >> + ppa.ppa = slba; >> + ppa = dev_to_generic_addr(ndev, ppa); >> + >> + log_pos = ppa.m.chk; >> + log_pos += ppa.m.pu * geo->num_chk; >> + log_pos += ppa.m.grp * geo->num_lun * geo->num_chk; > > Why is this done? The log page does not map to the lba space. You need to convert it to get one chunk at a time in the format. GRP:PU:CHK I can see why taking a lba as argument is better than a ppa, since users might use the lbas directly, but the conversion needs to be done somewhere. Javier [-- Attachment #2: Message signed with OpenPGP --] [-- Type: application/pgp-signature, Size: 833 bytes --] ^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 09/15] lightnvm: implement get log report chunk helpers 2018-03-01 11:02 ` Javier Gonzalez @ 2018-03-01 11:51 ` Matias Bjørling -1 siblings, 0 replies; 71+ messages in thread From: Matias Bjørling @ 2018-03-01 11:51 UTC (permalink / raw) To: Javier Gonzalez; +Cc: linux-block, linux-kernel, linux-nvme On 03/01/2018 12:02 PM, Javier Gonzalez wrote: >> On 1 Mar 2018, at 11.40, Matias Bjørling <mb@lightnvm.io> wrote: >> >> On 02/28/2018 04:49 PM, Javier González wrote: >>> The 2.0 spec provides a report chunk log page that can be retrieved >>> using the stangard nvme get log page. This replaces the dedicated >>> get/put bad block table in 1.2. >>> This patch implements the helper functions to allow targets retrieve the >>> chunk metadata using get log page. It makes nvme_get_log_ext available >>> outside of nvme core so that we can use it form lightnvm. >>> Signed-off-by: Javier González <javier@cnexlabs.com> >>> --- >>> drivers/lightnvm/core.c | 11 +++++++ >>> drivers/nvme/host/core.c | 6 ++-- >>> drivers/nvme/host/lightnvm.c | 74 ++++++++++++++++++++++++++++++++++++++++++++ >>> drivers/nvme/host/nvme.h | 3 ++ >>> include/linux/lightnvm.h | 24 ++++++++++++++ >>> 5 files changed, 115 insertions(+), 3 deletions(-) >>> diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c >>> index ed33e0b11788..4141871f460d 100644 >>> --- a/drivers/lightnvm/core.c >>> +++ b/drivers/lightnvm/core.c >>> @@ -712,6 +712,17 @@ static void nvm_free_rqd_ppalist(struct nvm_tgt_dev *tgt_dev, >>> nvm_dev_dma_free(tgt_dev->parent, rqd->ppa_list, rqd->dma_ppa_list); >>> } >>> +int nvm_get_chunk_meta(struct nvm_tgt_dev *tgt_dev, struct nvm_chk_meta *meta, >>> + struct ppa_addr ppa, int nchks) >>> +{ >>> + struct nvm_dev *dev = tgt_dev->parent; >>> + >>> + nvm_ppa_tgt_to_dev(tgt_dev, &ppa, 1); >>> + >>> + return dev->ops->get_chk_meta(tgt_dev->parent, meta, >>> + (sector_t)ppa.ppa, nchks); >>> +} >>> +EXPORT_SYMBOL(nvm_get_chunk_meta); >>> int nvm_set_tgt_bb_tbl(struct 
nvm_tgt_dev *tgt_dev, struct ppa_addr *ppas, >>> int nr_ppas, int type) >>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c >>> index 2e9e9f973a75..af642ce6ba69 100644 >>> --- a/drivers/nvme/host/core.c >>> +++ b/drivers/nvme/host/core.c >>> @@ -2127,9 +2127,9 @@ static int nvme_init_subsystem(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id) >>> return ret; >>> } >>> -static int nvme_get_log_ext(struct nvme_ctrl *ctrl, struct nvme_ns *ns, >>> - u8 log_page, void *log, >>> - size_t size, size_t offset) >>> +int nvme_get_log_ext(struct nvme_ctrl *ctrl, struct nvme_ns *ns, >>> + u8 log_page, void *log, >>> + size_t size, size_t offset) >>> { >>> struct nvme_command c = { }; >>> unsigned long dwlen = size / 4 - 1; >>> diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c >>> index f7135659f918..a1796241040f 100644 >>> --- a/drivers/nvme/host/lightnvm.c >>> +++ b/drivers/nvme/host/lightnvm.c >>> @@ -35,6 +35,10 @@ enum nvme_nvm_admin_opcode { >>> nvme_nvm_admin_set_bb_tbl = 0xf1, >>> }; >>> +enum nvme_nvm_log_page { >>> + NVME_NVM_LOG_REPORT_CHUNK = 0xca, >>> +}; >>> + >>> struct nvme_nvm_ph_rw { >>> __u8 opcode; >>> __u8 flags; >>> @@ -236,6 +240,16 @@ struct nvme_nvm_id20 { >>> __u8 vs[1024]; >>> }; >>> +struct nvme_nvm_chk_meta { >>> + __u8 state; >>> + __u8 type; >>> + __u8 wi; >>> + __u8 rsvd[5]; >>> + __le64 slba; >>> + __le64 cnlb; >>> + __le64 wp; >>> +}; >>> + >>> /* >>> * Check we didn't inadvertently grow the command struct >>> */ >>> @@ -252,6 +266,9 @@ static inline void _nvme_nvm_check_size(void) >>> BUILD_BUG_ON(sizeof(struct nvme_nvm_bb_tbl) != 64); >>> BUILD_BUG_ON(sizeof(struct nvme_nvm_id20_addrf) != 8); >>> BUILD_BUG_ON(sizeof(struct nvme_nvm_id20) != NVME_IDENTIFY_DATA_SIZE); >>> + BUILD_BUG_ON(sizeof(struct nvme_nvm_chk_meta) != 32); >>> + BUILD_BUG_ON(sizeof(struct nvme_nvm_chk_meta) != >>> + sizeof(struct nvm_chk_meta)); >>> } >>> static void nvme_nvm_set_addr_12(struct nvm_addr_format_12 *dst, >>> @@ -555,6 
+572,61 @@ static int nvme_nvm_set_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr *ppas, >>> return ret; >>> } >>> +/* >>> + * Expect the lba in device format >>> + */ >>> +static int nvme_nvm_get_chk_meta(struct nvm_dev *ndev, >>> + struct nvm_chk_meta *meta, >>> + sector_t slba, int nchks) >>> +{ >>> + struct nvm_geo *geo = &ndev->geo; >>> + struct nvme_ns *ns = ndev->q->queuedata; >>> + struct nvme_ctrl *ctrl = ns->ctrl; >>> + struct nvme_nvm_chk_meta *dev_meta = (struct nvme_nvm_chk_meta *)meta; >>> + struct ppa_addr ppa; >>> + size_t left = nchks * sizeof(struct nvme_nvm_chk_meta); >>> + size_t log_pos, offset, len; >>> + int ret, i; >>> + >>> + /* Normalize lba address space to obtain log offset */ >>> + ppa.ppa = slba; >>> + ppa = dev_to_generic_addr(ndev, ppa); >>> + >>> + log_pos = ppa.m.chk; >>> + log_pos += ppa.m.pu * geo->num_chk; >>> + log_pos += ppa.m.grp * geo->num_lun * geo->num_chk; >> >> Why is this done? > > The log page does not map to the lba space. You need to convert it to > get one chunk at a time in the format. > > GRP:PU:CHK > > I can see why taking a lba as argument is better than a ppa, since users > might use the lbas directly, but the conversion needs to be done > somewhere. > Good point. I guess this is clash between the two APIs. Chunk metadata being laid out sequentially, while the address space is sparse. I'm good with the conversion being in the fn. ^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 09/15] lightnvm: implement get log report chunk helpers 2018-03-01 11:51 ` Matias Bjørling @ 2018-03-01 11:54 ` Javier Gonzalez -1 siblings, 0 replies; 71+ messages in thread From: Javier Gonzalez @ 2018-03-01 11:54 UTC (permalink / raw) To: Matias Bjørling; +Cc: linux-block, linux-kernel, linux-nvme [-- Attachment #1: Type: text/plain, Size: 5752 bytes --] > On 1 Mar 2018, at 12.51, Matias Bjørling <mb@lightnvm.io> wrote: > > On 03/01/2018 12:02 PM, Javier Gonzalez wrote: >>> On 1 Mar 2018, at 11.40, Matias Bjørling <mb@lightnvm.io> wrote: >>> >>> On 02/28/2018 04:49 PM, Javier González wrote: >>>> The 2.0 spec provides a report chunk log page that can be retrieved >>>> using the stangard nvme get log page. This replaces the dedicated >>>> get/put bad block table in 1.2. >>>> This patch implements the helper functions to allow targets retrieve the >>>> chunk metadata using get log page. It makes nvme_get_log_ext available >>>> outside of nvme core so that we can use it form lightnvm. 
>>>> Signed-off-by: Javier González <javier@cnexlabs.com> >>>> --- >>>> drivers/lightnvm/core.c | 11 +++++++ >>>> drivers/nvme/host/core.c | 6 ++-- >>>> drivers/nvme/host/lightnvm.c | 74 ++++++++++++++++++++++++++++++++++++++++++++ >>>> drivers/nvme/host/nvme.h | 3 ++ >>>> include/linux/lightnvm.h | 24 ++++++++++++++ >>>> 5 files changed, 115 insertions(+), 3 deletions(-) >>>> diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c >>>> index ed33e0b11788..4141871f460d 100644 >>>> --- a/drivers/lightnvm/core.c >>>> +++ b/drivers/lightnvm/core.c >>>> @@ -712,6 +712,17 @@ static void nvm_free_rqd_ppalist(struct nvm_tgt_dev *tgt_dev, >>>> nvm_dev_dma_free(tgt_dev->parent, rqd->ppa_list, rqd->dma_ppa_list); >>>> } >>>> +int nvm_get_chunk_meta(struct nvm_tgt_dev *tgt_dev, struct nvm_chk_meta *meta, >>>> + struct ppa_addr ppa, int nchks) >>>> +{ >>>> + struct nvm_dev *dev = tgt_dev->parent; >>>> + >>>> + nvm_ppa_tgt_to_dev(tgt_dev, &ppa, 1); >>>> + >>>> + return dev->ops->get_chk_meta(tgt_dev->parent, meta, >>>> + (sector_t)ppa.ppa, nchks); >>>> +} >>>> +EXPORT_SYMBOL(nvm_get_chunk_meta); >>>> int nvm_set_tgt_bb_tbl(struct nvm_tgt_dev *tgt_dev, struct ppa_addr *ppas, >>>> int nr_ppas, int type) >>>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c >>>> index 2e9e9f973a75..af642ce6ba69 100644 >>>> --- a/drivers/nvme/host/core.c >>>> +++ b/drivers/nvme/host/core.c >>>> @@ -2127,9 +2127,9 @@ static int nvme_init_subsystem(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id) >>>> return ret; >>>> } >>>> -static int nvme_get_log_ext(struct nvme_ctrl *ctrl, struct nvme_ns *ns, >>>> - u8 log_page, void *log, >>>> - size_t size, size_t offset) >>>> +int nvme_get_log_ext(struct nvme_ctrl *ctrl, struct nvme_ns *ns, >>>> + u8 log_page, void *log, >>>> + size_t size, size_t offset) >>>> { >>>> struct nvme_command c = { }; >>>> unsigned long dwlen = size / 4 - 1; >>>> diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c >>>> index 
f7135659f918..a1796241040f 100644 >>>> --- a/drivers/nvme/host/lightnvm.c >>>> +++ b/drivers/nvme/host/lightnvm.c >>>> @@ -35,6 +35,10 @@ enum nvme_nvm_admin_opcode { >>>> nvme_nvm_admin_set_bb_tbl = 0xf1, >>>> }; >>>> +enum nvme_nvm_log_page { >>>> + NVME_NVM_LOG_REPORT_CHUNK = 0xca, >>>> +}; >>>> + >>>> struct nvme_nvm_ph_rw { >>>> __u8 opcode; >>>> __u8 flags; >>>> @@ -236,6 +240,16 @@ struct nvme_nvm_id20 { >>>> __u8 vs[1024]; >>>> }; >>>> +struct nvme_nvm_chk_meta { >>>> + __u8 state; >>>> + __u8 type; >>>> + __u8 wi; >>>> + __u8 rsvd[5]; >>>> + __le64 slba; >>>> + __le64 cnlb; >>>> + __le64 wp; >>>> +}; >>>> + >>>> /* >>>> * Check we didn't inadvertently grow the command struct >>>> */ >>>> @@ -252,6 +266,9 @@ static inline void _nvme_nvm_check_size(void) >>>> BUILD_BUG_ON(sizeof(struct nvme_nvm_bb_tbl) != 64); >>>> BUILD_BUG_ON(sizeof(struct nvme_nvm_id20_addrf) != 8); >>>> BUILD_BUG_ON(sizeof(struct nvme_nvm_id20) != NVME_IDENTIFY_DATA_SIZE); >>>> + BUILD_BUG_ON(sizeof(struct nvme_nvm_chk_meta) != 32); >>>> + BUILD_BUG_ON(sizeof(struct nvme_nvm_chk_meta) != >>>> + sizeof(struct nvm_chk_meta)); >>>> } >>>> static void nvme_nvm_set_addr_12(struct nvm_addr_format_12 *dst, >>>> @@ -555,6 +572,61 @@ static int nvme_nvm_set_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr *ppas, >>>> return ret; >>>> } >>>> +/* >>>> + * Expect the lba in device format >>>> + */ >>>> +static int nvme_nvm_get_chk_meta(struct nvm_dev *ndev, >>>> + struct nvm_chk_meta *meta, >>>> + sector_t slba, int nchks) >>>> +{ >>>> + struct nvm_geo *geo = &ndev->geo; >>>> + struct nvme_ns *ns = ndev->q->queuedata; >>>> + struct nvme_ctrl *ctrl = ns->ctrl; >>>> + struct nvme_nvm_chk_meta *dev_meta = (struct nvme_nvm_chk_meta *)meta; >>>> + struct ppa_addr ppa; >>>> + size_t left = nchks * sizeof(struct nvme_nvm_chk_meta); >>>> + size_t log_pos, offset, len; >>>> + int ret, i; >>>> + >>>> + /* Normalize lba address space to obtain log offset */ >>>> + ppa.ppa = slba; >>>> + ppa = 
dev_to_generic_addr(ndev, ppa); >>>> + >>>> + log_pos = ppa.m.chk; >>>> + log_pos += ppa.m.pu * geo->num_chk; >>>> + log_pos += ppa.m.grp * geo->num_lun * geo->num_chk; >>> >>> Why is this done? >> The log page does not map to the lba space. You need to convert it to >> get one chunk at a time in the format. >> GRP:PU:CHK >> I can see why taking a lba as argument is better than a ppa, since users >> might use the lbas directly, but the conversion needs to be done >> somewhere. > > Good point. I guess this is clash between the two APIs. Chunk metadata > being laid out sequentially, while the address space is sparse. Exactly. > I'm good with the conversion being in the fn. Cool. I think it is good here too, as it hides the ppa format from the upper layers. It requires a double conversion from pblk, but it is not on the fast path anyway... Javier [-- Attachment #2: Message signed with OpenPGP --] [-- Type: application/pgp-signature, Size: 833 bytes --] ^ permalink raw reply [flat|nested] 71+ messages in thread
* [PATCH 09/15] lightnvm: implement get log report chunk helpers @ 2018-03-01 11:54 ` Javier Gonzalez 0 siblings, 0 replies; 71+ messages in thread From: Javier Gonzalez @ 2018-03-01 11:54 UTC (permalink / raw) > On 1 Mar 2018,@12.51, Matias Bj?rling <mb@lightnvm.io> wrote: > > On 03/01/2018 12:02 PM, Javier Gonzalez wrote: >>> On 1 Mar 2018,@11.40, Matias Bj?rling <mb@lightnvm.io> wrote: >>> >>> On 02/28/2018 04:49 PM, Javier Gonz?lez wrote: >>>> The 2.0 spec provides a report chunk log page that can be retrieved >>>> using the stangard nvme get log page. This replaces the dedicated >>>> get/put bad block table in 1.2. >>>> This patch implements the helper functions to allow targets retrieve the >>>> chunk metadata using get log page. It makes nvme_get_log_ext available >>>> outside of nvme core so that we can use it form lightnvm. >>>> Signed-off-by: Javier Gonz?lez <javier at cnexlabs.com> >>>> --- >>>> drivers/lightnvm/core.c | 11 +++++++ >>>> drivers/nvme/host/core.c | 6 ++-- >>>> drivers/nvme/host/lightnvm.c | 74 ++++++++++++++++++++++++++++++++++++++++++++ >>>> drivers/nvme/host/nvme.h | 3 ++ >>>> include/linux/lightnvm.h | 24 ++++++++++++++ >>>> 5 files changed, 115 insertions(+), 3 deletions(-) >>>> diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c >>>> index ed33e0b11788..4141871f460d 100644 >>>> --- a/drivers/lightnvm/core.c >>>> +++ b/drivers/lightnvm/core.c >>>> @@ -712,6 +712,17 @@ static void nvm_free_rqd_ppalist(struct nvm_tgt_dev *tgt_dev, >>>> nvm_dev_dma_free(tgt_dev->parent, rqd->ppa_list, rqd->dma_ppa_list); >>>> } >>>> +int nvm_get_chunk_meta(struct nvm_tgt_dev *tgt_dev, struct nvm_chk_meta *meta, >>>> + struct ppa_addr ppa, int nchks) >>>> +{ >>>> + struct nvm_dev *dev = tgt_dev->parent; >>>> + >>>> + nvm_ppa_tgt_to_dev(tgt_dev, &ppa, 1); >>>> + >>>> + return dev->ops->get_chk_meta(tgt_dev->parent, meta, >>>> + (sector_t)ppa.ppa, nchks); >>>> +} >>>> +EXPORT_SYMBOL(nvm_get_chunk_meta); >>>> int nvm_set_tgt_bb_tbl(struct 
nvm_tgt_dev *tgt_dev, struct ppa_addr *ppas,
>>>> 			int nr_ppas, int type)
>>>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>>>> index 2e9e9f973a75..af642ce6ba69 100644
>>>> --- a/drivers/nvme/host/core.c
>>>> +++ b/drivers/nvme/host/core.c
>>>> @@ -2127,9 +2127,9 @@ static int nvme_init_subsystem(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
>>>> 	return ret;
>>>> }
>>>> -static int nvme_get_log_ext(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
>>>> -		u8 log_page, void *log,
>>>> -		size_t size, size_t offset)
>>>> +int nvme_get_log_ext(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
>>>> +		u8 log_page, void *log,
>>>> +		size_t size, size_t offset)
>>>> {
>>>> 	struct nvme_command c = { };
>>>> 	unsigned long dwlen = size / 4 - 1;
>>>> diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
>>>> index f7135659f918..a1796241040f 100644
>>>> --- a/drivers/nvme/host/lightnvm.c
>>>> +++ b/drivers/nvme/host/lightnvm.c
>>>> @@ -35,6 +35,10 @@ enum nvme_nvm_admin_opcode {
>>>> 	nvme_nvm_admin_set_bb_tbl	= 0xf1,
>>>> };
>>>> +enum nvme_nvm_log_page {
>>>> +	NVME_NVM_LOG_REPORT_CHUNK	= 0xca,
>>>> +};
>>>> +
>>>> struct nvme_nvm_ph_rw {
>>>> 	__u8			opcode;
>>>> 	__u8			flags;
>>>> @@ -236,6 +240,16 @@ struct nvme_nvm_id20 {
>>>> 	__u8			vs[1024];
>>>> };
>>>> +struct nvme_nvm_chk_meta {
>>>> +	__u8	state;
>>>> +	__u8	type;
>>>> +	__u8	wi;
>>>> +	__u8	rsvd[5];
>>>> +	__le64	slba;
>>>> +	__le64	cnlb;
>>>> +	__le64	wp;
>>>> +};
>>>> +
>>>> /*
>>>>  * Check we didn't inadvertently grow the command struct
>>>>  */
>>>> @@ -252,6 +266,9 @@ static inline void _nvme_nvm_check_size(void)
>>>> 	BUILD_BUG_ON(sizeof(struct nvme_nvm_bb_tbl) != 64);
>>>> 	BUILD_BUG_ON(sizeof(struct nvme_nvm_id20_addrf) != 8);
>>>> 	BUILD_BUG_ON(sizeof(struct nvme_nvm_id20) != NVME_IDENTIFY_DATA_SIZE);
>>>> +	BUILD_BUG_ON(sizeof(struct nvme_nvm_chk_meta) != 32);
>>>> +	BUILD_BUG_ON(sizeof(struct nvme_nvm_chk_meta) !=
>>>> +			sizeof(struct nvm_chk_meta));
>>>> }
>>>> static void nvme_nvm_set_addr_12(struct nvm_addr_format_12 *dst,
>>>> @@ -555,6 +572,61 @@ static int nvme_nvm_set_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr *ppas,
>>>> 	return ret;
>>>> }
>>>> +/*
>>>> + * Expect the lba in device format
>>>> + */
>>>> +static int nvme_nvm_get_chk_meta(struct nvm_dev *ndev,
>>>> +				 struct nvm_chk_meta *meta,
>>>> +				 sector_t slba, int nchks)
>>>> +{
>>>> +	struct nvm_geo *geo = &ndev->geo;
>>>> +	struct nvme_ns *ns = ndev->q->queuedata;
>>>> +	struct nvme_ctrl *ctrl = ns->ctrl;
>>>> +	struct nvme_nvm_chk_meta *dev_meta = (struct nvme_nvm_chk_meta *)meta;
>>>> +	struct ppa_addr ppa;
>>>> +	size_t left = nchks * sizeof(struct nvme_nvm_chk_meta);
>>>> +	size_t log_pos, offset, len;
>>>> +	int ret, i;
>>>> +
>>>> +	/* Normalize lba address space to obtain log offset */
>>>> +	ppa.ppa = slba;
>>>> +	ppa = dev_to_generic_addr(ndev, ppa);
>>>> +
>>>> +	log_pos = ppa.m.chk;
>>>> +	log_pos += ppa.m.pu * geo->num_chk;
>>>> +	log_pos += ppa.m.grp * geo->num_lun * geo->num_chk;
>>>
>>> Why is this done?
>>
>> The log page does not map to the lba space. You need to convert it to
>> get one chunk at a time in the format
>>
>>   GRP:PU:CHK
>>
>> I can see why taking a lba as argument is better than a ppa, since users
>> might use the lbas directly, but the conversion needs to be done
>> somewhere.
>
> Good point. I guess this is a clash between the two APIs. Chunk metadata
> being laid out sequentially, while the address space is sparse.

Exactly.

> I'm good with the conversion being in the fn.

Cool. I think it is good here too, as it hides the ppa format from the
upper layers. It requires a double conversion from pblk, but it is not
on the fast path anyway...

Javier

^ permalink raw reply	[flat|nested] 71+ messages in thread
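The GRP:PU:CHK linearization discussed above can be checked in isolation. A minimal user-space sketch follows; the struct and field names are illustrative stand-ins for the kernel's `nvm_geo` and `ppa_addr.m` types, and the geometry values used in it are made up:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Sketch of the offset math in nvme_nvm_get_chk_meta(): the chunk
 * report log page is laid out linearly as GRP:PU:CHK, one fixed-size
 * descriptor per chunk, while the ppa address space is sparse.
 */
struct geo {
	int num_lun;	/* parallel units per group */
	int num_chk;	/* chunks per parallel unit */
};

struct chk_addr {
	int grp, pu, chk;
};

#define CHK_META_SZ 32	/* one descriptor, sizeof(struct nvme_nvm_chk_meta) */

static size_t chk_log_offset(const struct geo *geo, struct chk_addr a)
{
	size_t log_pos = a.chk;

	log_pos += (size_t)a.pu * geo->num_chk;
	log_pos += (size_t)a.grp * geo->num_lun * geo->num_chk;

	return log_pos * CHK_META_SZ;	/* byte offset into the log page */
}
```

This is exactly the "double conversion" trade-off mentioned in the thread: the sparse ppa is first decoded into (grp, pu, chk), then re-linearized into a log-page offset.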
* [PATCH 10/15] lightnvm: pblk: check for supported version
  2018-02-28 15:49 ` Javier González
@ 2018-02-28 15:49   ` Javier González
  -1 siblings, 0 replies; 71+ messages in thread
From: Javier González @ 2018-02-28 15:49 UTC (permalink / raw)
  To: mb; +Cc: linux-block, linux-kernel, linux-nvme, Javier González

At this point, only 1.2 spec is supported, thus check for it. Also,
since device-side L2P is only supported in the 1.2 spec, make sure to
only check its value under 1.2.

Signed-off-by: Javier González <javier@cnexlabs.com>
---
 drivers/lightnvm/pblk-init.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
index 11424beb214c..b67b5b11ae16 100644
--- a/drivers/lightnvm/pblk-init.c
+++ b/drivers/lightnvm/pblk-init.c
@@ -991,9 +991,15 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk,
 	struct pblk *pblk;
 	int ret;
 
-	if (dev->geo.dom & NVM_RSP_L2P) {
+	if (geo->version != NVM_OCSSD_SPEC_12) {
+		pr_err("pblk: OCSSD version not supported (%u)\n",
+							geo->version);
+		return ERR_PTR(-EINVAL);
+	}
+
+	if (geo->version == NVM_OCSSD_SPEC_12 && geo->dom & NVM_RSP_L2P) {
 		pr_err("pblk: host-side L2P table not supported. (%x)\n",
-							dev->geo.dom);
+							geo->dom);
 		return ERR_PTR(-EINVAL);
 	}
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 71+ messages in thread
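The guard above can be exercised outside the kernel with a small sketch. The constants are illustrative stand-ins for `NVM_OCSSD_SPEC_12`/`NVM_OCSSD_SPEC_20` and `NVM_RSP_L2P`, and the helper returns -1 where the kernel returns `ERR_PTR(-EINVAL)`:

```c
#include <assert.h>

#define SPEC_12	1		/* stand-in for NVM_OCSSD_SPEC_12 */
#define SPEC_20	2		/* stand-in for NVM_OCSSD_SPEC_20 */
#define RSP_L2P	(1 << 0)	/* stand-in for NVM_RSP_L2P */

/* Returns 0 when pblk can attach, mirroring the order of checks above */
static int check_geo(unsigned int version, unsigned int dom)
{
	if (version != SPEC_12)
		return -1;	/* only the 1.2 spec is handled */

	/* dom is only meaningful under 1.2, so it is checked second */
	if (dom & RSP_L2P)
		return -1;	/* device-side L2P not supported by pblk */

	return 0;
}
```

Note the ordering: because the first check already rejects everything that is not 1.2, the `dom` flag is only ever inspected for 1.2 devices, which is what the commit message asks for.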
* [PATCH 11/15] lightnvm: pblk: rename ppaf* to addrf*
@ 2018-02-28 15:49   ` Javier González
  0 siblings, 0 replies; 71+ messages in thread
From: Javier González @ 2018-02-28 15:49 UTC (permalink / raw)
  To: mb; +Cc: linux-block, linux-kernel, linux-nvme, Javier González

In preparation for 2.0 support in pblk, rename variables referring to
the address format to addrf and reserve ppaf for the 1.2 path.

Signed-off-by: Javier González <javier@cnexlabs.com>
---
 drivers/lightnvm/pblk-init.c  |  8 ++++----
 drivers/lightnvm/pblk-sysfs.c |  4 ++--
 drivers/lightnvm/pblk.h       | 16 ++++++++--------
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
index b67b5b11ae16..73b221c69cfd 100644
--- a/drivers/lightnvm/pblk-init.c
+++ b/drivers/lightnvm/pblk-init.c
@@ -80,7 +80,7 @@ static size_t pblk_trans_map_size(struct pblk *pblk)
 {
 	int entry_size = 8;
 
-	if (pblk->ppaf_bitsize < 32)
+	if (pblk->addrf_len < 32)
 		entry_size = 4;
 
 	return entry_size * pblk->rl.nr_secs;
@@ -198,7 +198,7 @@ static int pblk_set_addrf_12(struct nvm_geo *geo,
 	return dst->blk_offset + src->blk_len;
 }
 
-static int pblk_set_ppaf(struct pblk *pblk)
+static int pblk_set_addrf(struct pblk *pblk)
 {
 	struct nvm_tgt_dev *dev = pblk->dev;
 	struct nvm_geo *geo = &dev->geo;
@@ -210,7 +210,7 @@ static int pblk_set_ppaf(struct pblk *pblk)
 		return -EINVAL;
 	}
 
-	pblk->ppaf_bitsize = pblk_set_addrf_12(geo, (void *)&pblk->ppaf);
+	pblk->addrf_len = pblk_set_addrf_12(geo, (void *)&pblk->addrf);
 
 	return 0;
 }
@@ -319,7 +319,7 @@ static int pblk_core_init(struct pblk *pblk)
 	if (!pblk->r_end_wq)
 		goto free_bb_wq;
 
-	if (pblk_set_ppaf(pblk))
+	if (pblk_set_addrf(pblk))
 		goto free_r_end_wq;
 
 	if (pblk_rwb_init(pblk))
diff --git a/drivers/lightnvm/pblk-sysfs.c b/drivers/lightnvm/pblk-sysfs.c
index 462a787893d5..cbb5b6edb7bf 100644
--- a/drivers/lightnvm/pblk-sysfs.c
+++ b/drivers/lightnvm/pblk-sysfs.c
@@ -117,12 +117,12 @@ static ssize_t pblk_sysfs_ppaf(struct pblk *pblk, char *page)
 	struct nvm_addr_format_12 *geo_ppaf;
 	ssize_t sz = 0;
 
-	ppaf = (struct nvm_addr_format_12 *)&pblk->ppaf;
+	ppaf = (struct nvm_addr_format_12 *)&pblk->addrf;
 	geo_ppaf = (struct nvm_addr_format_12 *)&geo->addrf;
 
 	sz = snprintf(page, PAGE_SIZE,
 		"pblk:(s:%d)ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n",
-			pblk->ppaf_bitsize,
+			pblk->addrf_len,
 			ppaf->ch_offset, ppaf->ch_len,
 			ppaf->lun_offset, ppaf->lun_len,
 			ppaf->blk_offset, ppaf->blk_len,
diff --git a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk.h
index bae2cc758de8..dd0089fe62b9 100644
--- a/drivers/lightnvm/pblk.h
+++ b/drivers/lightnvm/pblk.h
@@ -570,8 +570,8 @@ struct pblk {
 	struct pblk_line_mgmt l_mg;		/* Line management */
 	struct pblk_line_meta lm;		/* Line metadata */
 
-	struct nvm_addr_format ppaf;
-	int ppaf_bitsize;
+	struct nvm_addr_format addrf;
+	int addrf_len;
 
 	struct pblk_rb rwb;
 
@@ -948,7 +948,7 @@ static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr,
 					      u64 line_id)
 {
 	struct nvm_addr_format_12 *ppaf =
-				(struct nvm_addr_format_12 *)&pblk->ppaf;
+				(struct nvm_addr_format_12 *)&pblk->addrf;
 	struct ppa_addr ppa;
 
 	ppa.ppa = 0;
@@ -966,7 +966,7 @@ static inline u64 pblk_dev_ppa_to_line_addr(struct pblk *pblk,
 							struct ppa_addr p)
 {
 	struct nvm_addr_format_12 *ppaf =
-				(struct nvm_addr_format_12 *)&pblk->ppaf;
+				(struct nvm_addr_format_12 *)&pblk->addrf;
 	u64 paddr;
 
 	paddr = (u64)p.g.ch << ppaf->ch_offset;
@@ -991,7 +991,7 @@ static inline struct ppa_addr pblk_ppa32_to_ppa64(struct pblk *pblk, u32 ppa32)
 		ppa64.c.is_cached = 1;
 	} else {
 		struct nvm_addr_format_12 *ppaf =
-				(struct nvm_addr_format_12 *)&pblk->ppaf;
+				(struct nvm_addr_format_12 *)&pblk->addrf;
 
 		ppa64.g.ch = (ppa32 & ppaf->ch_mask) >> ppaf->ch_offset;
 		ppa64.g.lun = (ppa32 & ppaf->lun_mask) >> ppaf->lun_offset;
@@ -1015,7 +1015,7 @@ static inline u32 pblk_ppa64_to_ppa32(struct pblk *pblk, struct ppa_addr ppa64)
 		ppa32 |= 1U << 31;
 	} else {
 		struct nvm_addr_format_12 *ppaf =
-				(struct nvm_addr_format_12 *)&pblk->ppaf;
+				(struct nvm_addr_format_12 *)&pblk->addrf;
 
 		ppa32 |= ppa64.g.ch << ppaf->ch_offset;
 		ppa32 |= ppa64.g.lun << ppaf->lun_offset;
@@ -1033,7 +1033,7 @@ static inline struct ppa_addr pblk_trans_map_get(struct pblk *pblk,
 {
 	struct ppa_addr ppa;
 
-	if (pblk->ppaf_bitsize < 32) {
+	if (pblk->addrf_len < 32) {
 		u32 *map = (u32 *)pblk->trans_map;
 
 		ppa = pblk_ppa32_to_ppa64(pblk, map[lba]);
@@ -1049,7 +1049,7 @@ static inline struct ppa_addr pblk_trans_map_get(struct pblk *pblk,
 static inline void pblk_trans_map_set(struct pblk *pblk, sector_t lba,
 						struct ppa_addr ppa)
 {
-	if (pblk->ppaf_bitsize < 32) {
+	if (pblk->addrf_len < 32) {
 		u32 *map = (u32 *)pblk->trans_map;
 
 		map[lba] = pblk_ppa64_to_ppa32(pblk, ppa);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 71+ messages in thread
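The renamed `addrf`/`addrf_len` fields drive pblk's compressed 32-bit mapping entries (used when `addrf_len < 32`). The mask/offset packing that `pblk_ppa64_to_ppa32()` and `pblk_ppa32_to_ppa64()` perform can be sketched in isolation; the offset/length values used below are illustrative, not derived from a real geometry (the kernel computes them in `pblk_set_addrf_12()`):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative subset of the 1.2 address format descriptor */
struct addrf12 {
	int ch_offset, ch_len;
	int lun_offset, lun_len;
	int blk_offset, blk_len;
};

/* Build the bitmask covering a field of `len` bits at bit `off` */
static uint32_t fmask(int off, int len)
{
	return ((1u << len) - 1) << off;
}

/* Pack (ch, lun, blk) into one 32-bit ppa, as pblk_ppa64_to_ppa32() does */
static uint32_t ppa32_pack(const struct addrf12 *f, uint32_t ch,
			   uint32_t lun, uint32_t blk)
{
	uint32_t ppa32 = 0;

	ppa32 |= ch << f->ch_offset;
	ppa32 |= lun << f->lun_offset;
	ppa32 |= blk << f->blk_offset;

	return ppa32;
}

/* Extract fields back out, as pblk_ppa32_to_ppa64() does */
static uint32_t ppa32_ch(const struct addrf12 *f, uint32_t ppa32)
{
	return (ppa32 & fmask(f->ch_offset, f->ch_len)) >> f->ch_offset;
}

static uint32_t ppa32_lun(const struct addrf12 *f, uint32_t ppa32)
{
	return (ppa32 & fmask(f->lun_offset, f->lun_len)) >> f->lun_offset;
}
```

This is also why `pblk_trans_map_size()` above can shrink each L2P entry to 4 bytes: when all fields fit below bit 32, the packed form round-trips losslessly.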
* [PATCH 12/15] lightnvm: pblk: use generic address format
@ 2018-02-28 15:49   ` Javier González
  0 siblings, 0 replies; 71+ messages in thread
From: Javier González @ 2018-02-28 15:49 UTC (permalink / raw)
  To: mb; +Cc: linux-block, linux-kernel, linux-nvme, Javier González

Use the generic address format on common address manipulations.

Signed-off-by: Javier González <javier@cnexlabs.com>
---
 drivers/lightnvm/pblk-core.c  | 10 +++++-----
 drivers/lightnvm/pblk-map.c   |  4 ++--
 drivers/lightnvm/pblk-sysfs.c |  4 ++--
 drivers/lightnvm/pblk.h       |  4 ++--
 4 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
index 7d0bd33f11d9..2e10b18b61e3 100644
--- a/drivers/lightnvm/pblk-core.c
+++ b/drivers/lightnvm/pblk-core.c
@@ -885,7 +885,7 @@ int pblk_line_erase(struct pblk *pblk, struct pblk_line *line)
 	}
 
 	ppa = pblk->luns[bit].bppa; /* set ch and lun */
-	ppa.g.blk = line->id;
+	ppa.a.blk = line->id;
 
 	atomic_dec(&line->left_eblks);
 	WARN_ON(test_and_set_bit(bit, line->erase_bitmap));
@@ -1686,8 +1686,8 @@ static void __pblk_down_page(struct pblk *pblk, struct ppa_addr *ppa_list,
 	int i;
 
 	for (i = 1; i < nr_ppas; i++)
-		WARN_ON(ppa_list[0].g.lun != ppa_list[i].g.lun ||
-				ppa_list[0].g.ch != ppa_list[i].g.ch);
+		WARN_ON(ppa_list[0].a.lun != ppa_list[i].a.lun ||
+				ppa_list[0].a.ch != ppa_list[i].a.ch);
 #endif
 
 	ret = down_timeout(&rlun->wr_sem, msecs_to_jiffies(30000));
@@ -1731,8 +1731,8 @@ void pblk_up_page(struct pblk *pblk, struct ppa_addr *ppa_list, int nr_ppas)
 	int i;
 
 	for (i = 1; i < nr_ppas; i++)
-		WARN_ON(ppa_list[0].g.lun != ppa_list[i].g.lun ||
-				ppa_list[0].g.ch != ppa_list[i].g.ch);
+		WARN_ON(ppa_list[0].a.lun != ppa_list[i].a.lun ||
+				ppa_list[0].a.ch != ppa_list[i].a.ch);
 #endif
 
 	rlun = &pblk->luns[pos];
diff --git a/drivers/lightnvm/pblk-map.c b/drivers/lightnvm/pblk-map.c
index 04e08d76ea5f..20dbaa89c9df 100644
--- a/drivers/lightnvm/pblk-map.c
+++ b/drivers/lightnvm/pblk-map.c
@@ -127,7 +127,7 @@ void pblk_map_erase_rq(struct pblk *pblk, struct nvm_rq *rqd,
 			atomic_dec(&e_line->left_eblks);
 
 			*erase_ppa = rqd->ppa_list[i];
-			erase_ppa->g.blk = e_line->id;
+			erase_ppa->a.blk = e_line->id;
 
 			spin_unlock(&e_line->lock);
 
@@ -168,6 +168,6 @@ void pblk_map_erase_rq(struct pblk *pblk, struct nvm_rq *rqd,
 		set_bit(bit, e_line->erase_bitmap);
 		atomic_dec(&e_line->left_eblks);
 		*erase_ppa = pblk->luns[bit].bppa; /* set ch and lun */
-		erase_ppa->g.blk = e_line->id;
+		erase_ppa->a.blk = e_line->id;
 	}
 }
diff --git a/drivers/lightnvm/pblk-sysfs.c b/drivers/lightnvm/pblk-sysfs.c
index cbb5b6edb7bf..a643dc623731 100644
--- a/drivers/lightnvm/pblk-sysfs.c
+++ b/drivers/lightnvm/pblk-sysfs.c
@@ -39,8 +39,8 @@ static ssize_t pblk_sysfs_luns_show(struct pblk *pblk, char *page)
 		sz += snprintf(page + sz, PAGE_SIZE - sz,
 				"pblk: pos:%d, ch:%d, lun:%d - %d\n",
 					i,
-					rlun->bppa.g.ch,
-					rlun->bppa.g.lun,
+					rlun->bppa.a.ch,
+					rlun->bppa.a.lun,
 					active);
 	}
 
diff --git a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk.h
index dd0089fe62b9..6ac64d9eb57e 100644
--- a/drivers/lightnvm/pblk.h
+++ b/drivers/lightnvm/pblk.h
@@ -936,12 +936,12 @@ static inline int pblk_pad_distance(struct pblk *pblk)
 
 static inline int pblk_ppa_to_line(struct ppa_addr p)
 {
-	return p.g.blk;
+	return p.a.blk;
 }
 
 static inline int pblk_ppa_to_pos(struct nvm_geo *geo, struct ppa_addr p)
 {
-	return p.g.lun * geo->num_ch + p.g.ch;
+	return p.a.lun * geo->num_ch + p.a.ch;
 }
 
 static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr,
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 71+ messages in thread
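The last hunk above flattens a generic (lun, ch) pair into a LUN position. A standalone sketch of that mapping (the `num_ch` value in the test is illustrative; the kernel reads it from `nvm_geo`):

```c
#include <assert.h>

/*
 * Sketch of pblk_ppa_to_pos(): LUN positions are laid out with the
 * channel index varying fastest within each lun index, so the pair
 * (lun, ch) maps to a dense, zero-based position.
 */
static int ppa_to_pos(int num_ch, int lun, int ch)
{
	return lun * num_ch + ch;
}
```

Because the generic `.a` view keeps ch/lun/blk at the same bit positions across the 1.2 and 2.0 formats, helpers like this one no longer need a per-spec variant.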
* Re: [PATCH 12/15] lightnvm: pblk: use generic address format
  2018-02-28 15:49 ` Javier González
@ 2018-03-01 10:41   ` Matias Bjørling
  -1 siblings, 0 replies; 71+ messages in thread
From: Matias Bjørling @ 2018-03-01 10:41 UTC (permalink / raw)
  To: Javier González
  Cc: linux-block, linux-kernel, linux-nvme, Javier González

On 02/28/2018 04:49 PM, Javier González wrote:
> Use the generic address format on common address manipulations.
>
> Signed-off-by: Javier González <javier@cnexlabs.com>
> ---
>   drivers/lightnvm/pblk-core.c  | 10 +++++-----
>   drivers/lightnvm/pblk-map.c   |  4 ++--
>   drivers/lightnvm/pblk-sysfs.c |  4 ++--
>   drivers/lightnvm/pblk.h       |  4 ++--
>   4 files changed, 11 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
> index 7d0bd33f11d9..2e10b18b61e3 100644
> --- a/drivers/lightnvm/pblk-core.c
> +++ b/drivers/lightnvm/pblk-core.c
> @@ -885,7 +885,7 @@ int pblk_line_erase(struct pblk *pblk, struct pblk_line *line)
>   	}
>
>   	ppa = pblk->luns[bit].bppa; /* set ch and lun */
> -	ppa.g.blk = line->id;
> +	ppa.a.blk = line->id;
>
>   	atomic_dec(&line->left_eblks);
>   	WARN_ON(test_and_set_bit(bit, line->erase_bitmap));
> @@ -1686,8 +1686,8 @@ static void __pblk_down_page(struct pblk *pblk, struct ppa_addr *ppa_list,
>   	int i;
>
>   	for (i = 1; i < nr_ppas; i++)
> -		WARN_ON(ppa_list[0].g.lun != ppa_list[i].g.lun ||
> -				ppa_list[0].g.ch != ppa_list[i].g.ch);
> +		WARN_ON(ppa_list[0].a.lun != ppa_list[i].a.lun ||
> +				ppa_list[0].a.ch != ppa_list[i].a.ch);
> #endif
>
>   	ret = down_timeout(&rlun->wr_sem, msecs_to_jiffies(30000));
> @@ -1731,8 +1731,8 @@ void pblk_up_page(struct pblk *pblk, struct ppa_addr *ppa_list, int nr_ppas)
>   	int i;
>
>   	for (i = 1; i < nr_ppas; i++)
> -		WARN_ON(ppa_list[0].g.lun != ppa_list[i].g.lun ||
> -				ppa_list[0].g.ch != ppa_list[i].g.ch);
> +		WARN_ON(ppa_list[0].a.lun != ppa_list[i].a.lun ||
> +				ppa_list[0].a.ch != ppa_list[i].a.ch);
> #endif
>
>   	rlun = &pblk->luns[pos];
> diff --git a/drivers/lightnvm/pblk-map.c b/drivers/lightnvm/pblk-map.c
> index 04e08d76ea5f..20dbaa89c9df 100644
> --- a/drivers/lightnvm/pblk-map.c
> +++ b/drivers/lightnvm/pblk-map.c
> @@ -127,7 +127,7 @@ void pblk_map_erase_rq(struct pblk *pblk, struct nvm_rq *rqd,
>   			atomic_dec(&e_line->left_eblks);
>
>   			*erase_ppa = rqd->ppa_list[i];
> -			erase_ppa->g.blk = e_line->id;
> +			erase_ppa->a.blk = e_line->id;
>
>   			spin_unlock(&e_line->lock);
>
> @@ -168,6 +168,6 @@ void pblk_map_erase_rq(struct pblk *pblk, struct nvm_rq *rqd,
>   		set_bit(bit, e_line->erase_bitmap);
>   		atomic_dec(&e_line->left_eblks);
>   		*erase_ppa = pblk->luns[bit].bppa; /* set ch and lun */
> -		erase_ppa->g.blk = e_line->id;
> +		erase_ppa->a.blk = e_line->id;
>   	}
>   }
> diff --git a/drivers/lightnvm/pblk-sysfs.c b/drivers/lightnvm/pblk-sysfs.c
> index cbb5b6edb7bf..a643dc623731 100644
> --- a/drivers/lightnvm/pblk-sysfs.c
> +++ b/drivers/lightnvm/pblk-sysfs.c
> @@ -39,8 +39,8 @@ static ssize_t pblk_sysfs_luns_show(struct pblk *pblk, char *page)
>   		sz += snprintf(page + sz, PAGE_SIZE - sz,
>   				"pblk: pos:%d, ch:%d, lun:%d - %d\n",
>   					i,
> -					rlun->bppa.g.ch,
> -					rlun->bppa.g.lun,
> +					rlun->bppa.a.ch,
> +					rlun->bppa.a.lun,
>   					active);
>   	}
>
> diff --git a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk.h
> index dd0089fe62b9..6ac64d9eb57e 100644
> --- a/drivers/lightnvm/pblk.h
> +++ b/drivers/lightnvm/pblk.h
> @@ -936,12 +936,12 @@ static inline int pblk_pad_distance(struct pblk *pblk)
>
> static inline int pblk_ppa_to_line(struct ppa_addr p)
> {
> -	return p.g.blk;
> +	return p.a.blk;
> }
>
> static inline int pblk_ppa_to_pos(struct nvm_geo *geo, struct ppa_addr p)
> {
> -	return p.g.lun * geo->num_ch + p.g.ch;
> +	return p.a.lun * geo->num_ch + p.a.ch;
> }
>
> static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr,
>

Would it make sense to merge this with 7/15?

^ permalink raw reply	[flat|nested] 71+ messages in thread
* Re: [PATCH 12/15] lightnvn: pblk: use generic address format 2018-03-01 10:41 ` Matias Bjørling @ 2018-03-01 11:05 ` Javier González -1 siblings, 0 replies; 71+ messages in thread From: Javier González @ 2018-03-01 11:05 UTC (permalink / raw) To: Matias Bjørling; +Cc: linux-block, linux-kernel, linux-nvme [-- Attachment #1: Type: text/plain, Size: 4021 bytes --] > On 1 Mar 2018, at 11.41, Matias Bjørling <mb@lightnvm.io> wrote: > > On 02/28/2018 04:49 PM, Javier González wrote: >> Use the generic address format on common address manipulations. >> Signed-off-by: Javier González <javier@cnexlabs.com> >> --- >> drivers/lightnvm/pblk-core.c | 10 +++++----- >> drivers/lightnvm/pblk-map.c | 4 ++-- >> drivers/lightnvm/pblk-sysfs.c | 4 ++-- >> drivers/lightnvm/pblk.h | 4 ++-- >> 4 files changed, 11 insertions(+), 11 deletions(-) >> diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c >> index 7d0bd33f11d9..2e10b18b61e3 100644 >> --- a/drivers/lightnvm/pblk-core.c >> +++ b/drivers/lightnvm/pblk-core.c >> @@ -885,7 +885,7 @@ int pblk_line_erase(struct pblk *pblk, struct pblk_line *line) >> } >> ppa = pblk->luns[bit].bppa; /* set ch and lun */ >> - ppa.g.blk = line->id; >> + ppa.a.blk = line->id; >> atomic_dec(&line->left_eblks); >> WARN_ON(test_and_set_bit(bit, line->erase_bitmap)); >> @@ -1686,8 +1686,8 @@ static void __pblk_down_page(struct pblk *pblk, struct ppa_addr *ppa_list, >> int i; >> for (i = 1; i < nr_ppas; i++) >> - WARN_ON(ppa_list[0].g.lun != ppa_list[i].g.lun || >> - ppa_list[0].g.ch != ppa_list[i].g.ch); >> + WARN_ON(ppa_list[0].a.lun != ppa_list[i].a.lun || >> + ppa_list[0].a.ch != ppa_list[i].a.ch); >> #endif >> ret = down_timeout(&rlun->wr_sem, msecs_to_jiffies(30000)); >> @@ -1731,8 +1731,8 @@ void pblk_up_page(struct pblk *pblk, struct ppa_addr *ppa_list, int nr_ppas) >> int i; >> for (i = 1; i < nr_ppas; i++) >> - WARN_ON(ppa_list[0].g.lun != ppa_list[i].g.lun || >> - ppa_list[0].g.ch != ppa_list[i].g.ch); >> + 
WARN_ON(ppa_list[0].a.lun != ppa_list[i].a.lun || >> + ppa_list[0].a.ch != ppa_list[i].a.ch); >> #endif >> rlun = &pblk->luns[pos]; >> diff --git a/drivers/lightnvm/pblk-map.c b/drivers/lightnvm/pblk-map.c >> index 04e08d76ea5f..20dbaa89c9df 100644 >> --- a/drivers/lightnvm/pblk-map.c >> +++ b/drivers/lightnvm/pblk-map.c >> @@ -127,7 +127,7 @@ void pblk_map_erase_rq(struct pblk *pblk, struct nvm_rq *rqd, >> atomic_dec(&e_line->left_eblks); >> *erase_ppa = rqd->ppa_list[i]; >> - erase_ppa->g.blk = e_line->id; >> + erase_ppa->a.blk = e_line->id; >> spin_unlock(&e_line->lock); >> @@ -168,6 +168,6 @@ void pblk_map_erase_rq(struct pblk *pblk, struct nvm_rq *rqd, >> set_bit(bit, e_line->erase_bitmap); >> atomic_dec(&e_line->left_eblks); >> *erase_ppa = pblk->luns[bit].bppa; /* set ch and lun */ >> - erase_ppa->g.blk = e_line->id; >> + erase_ppa->a.blk = e_line->id; >> } >> } >> diff --git a/drivers/lightnvm/pblk-sysfs.c b/drivers/lightnvm/pblk-sysfs.c >> index cbb5b6edb7bf..a643dc623731 100644 >> --- a/drivers/lightnvm/pblk-sysfs.c >> +++ b/drivers/lightnvm/pblk-sysfs.c >> @@ -39,8 +39,8 @@ static ssize_t pblk_sysfs_luns_show(struct pblk *pblk, char *page) >> sz += snprintf(page + sz, PAGE_SIZE - sz, >> "pblk: pos:%d, ch:%d, lun:%d - %d\n", >> i, >> - rlun->bppa.g.ch, >> - rlun->bppa.g.lun, >> + rlun->bppa.a.ch, >> + rlun->bppa.a.lun, >> active); >> } >> diff --git a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk.h >> index dd0089fe62b9..6ac64d9eb57e 100644 >> --- a/drivers/lightnvm/pblk.h >> +++ b/drivers/lightnvm/pblk.h >> @@ -936,12 +936,12 @@ static inline int pblk_pad_distance(struct pblk *pblk) >> static inline int pblk_ppa_to_line(struct ppa_addr p) >> { >> - return p.g.blk; >> + return p.a.blk; >> } >> static inline int pblk_ppa_to_pos(struct nvm_geo *geo, struct ppa_addr p) >> { >> - return p.g.lun * geo->num_ch + p.g.ch; >> + return p.a.lun * geo->num_ch + p.a.ch; >> } >> static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr, > > Would it 
make sense to merge this with 7/15? Sure. I've tried to decouple pblk and lightnvm core patches, but they can go together. I'll merge in V5. Javier [-- Attachment #2: Message signed with OpenPGP --] [-- Type: application/pgp-signature, Size: 833 bytes --] ^ permalink raw reply [flat|nested] 71+ messages in thread
* [PATCH 13/15] lightnvm: pblk: implement get log report chunk 2018-02-28 15:49 ` Javier González (?) @ 2018-02-28 15:49 ` Javier González -1 siblings, 0 replies; 71+ messages in thread From: Javier González @ 2018-02-28 15:49 UTC (permalink / raw) To: mb; +Cc: linux-block, Javier González, linux-kernel, linux-nvme SW4gcHJlcGFyYXRpb24gb2YgcGJsayBzdXBwb3J0aW5nIDIuMCwgaW1wbGVtZW50IHRoZSBnZXQg bG9nIHJlcG9ydApjaHVuayBpbiBwYmxrLiBBbHNvLCBkZWZpbmUgdGhlIGNodW5rIHN0YXRlcyBh cyBnaXZlbiBpbiB0aGUgMi4wIHNwZWMuCgpTaWduZWQtb2ZmLWJ5OiBKYXZpZXIgR29uesOhbGV6 IDxqYXZpZXJAY25leGxhYnMuY29tPgotLS0KIGRyaXZlcnMvbGlnaHRudm0vcGJsay1jb3JlLmMg fCAxMzkgKysrKysrKysrKysrKysrKysrKysrKystLS0tCiBkcml2ZXJzL2xpZ2h0bnZtL3BibGst aW5pdC5jIHwgMjIzICsrKysrKysrKysrKysrKysrKysrKysrKysrKysrKystLS0tLS0tLS0tLS0K IGRyaXZlcnMvbGlnaHRudm0vcGJsay5oICAgICAgfCAgIDcgKysKIGluY2x1ZGUvbGludXgvbGln aHRudm0uaCAgICAgfCAgMTMgKysrCiA0IGZpbGVzIGNoYW5nZWQsIDMwMSBpbnNlcnRpb25zKCsp LCA4MSBkZWxldGlvbnMoLSkKCmRpZmYgLS1naXQgYS9kcml2ZXJzL2xpZ2h0bnZtL3BibGstY29y ZS5jIGIvZHJpdmVycy9saWdodG52bS9wYmxrLWNvcmUuYwppbmRleCAyZTEwYjE4YjYxZTMuLmNk NjYzODU1ZWU4OCAxMDA2NDQKLS0tIGEvZHJpdmVycy9saWdodG52bS9wYmxrLWNvcmUuYworKysg Yi9kcml2ZXJzL2xpZ2h0bnZtL3BibGstY29yZS5jCkBAIC00NCwxMSArNDQsMTIgQEAgc3RhdGlj IHZvaWQgcGJsa19saW5lX21hcmtfYmIoc3RydWN0IHdvcmtfc3RydWN0ICp3b3JrKQogfQogCiBz dGF0aWMgdm9pZCBwYmxrX21hcmtfYmIoc3RydWN0IHBibGsgKnBibGssIHN0cnVjdCBwYmxrX2xp bmUgKmxpbmUsCi0JCQkgc3RydWN0IHBwYV9hZGRyICpwcGEpCisJCQkgc3RydWN0IHBwYV9hZGRy IHBwYV9hZGRyKQogewogCXN0cnVjdCBudm1fdGd0X2RldiAqZGV2ID0gcGJsay0+ZGV2OwogCXN0 cnVjdCBudm1fZ2VvICpnZW8gPSAmZGV2LT5nZW87Ci0JaW50IHBvcyA9IHBibGtfcHBhX3RvX3Bv cyhnZW8sICpwcGEpOworCXN0cnVjdCBwcGFfYWRkciAqcHBhOworCWludCBwb3MgPSBwYmxrX3Bw YV90b19wb3MoZ2VvLCBwcGFfYWRkcik7CiAKIAlwcl9kZWJ1ZygicGJsazogZXJhc2UgZmFpbGVk OiBsaW5lOiVkLCBwb3M6JWRcbiIsIGxpbmUtPmlkLCBwb3MpOwogCWF0b21pY19sb25nX2luYygm cGJsay0+ZXJhc2VfZmFpbGVkKTsKQEAgLTU4LDI2ICs1OSwzOCBAQCBzdGF0aWMgdm9pZCBwYmxr 
X21hcmtfYmIoc3RydWN0IHBibGsgKnBibGssIHN0cnVjdCBwYmxrX2xpbmUgKmxpbmUsCiAJCXBy X2VycigicGJsazogYXR0ZW1wdGVkIHRvIGVyYXNlIGJiOiBsaW5lOiVkLCBwb3M6JWRcbiIsCiAJ CQkJCQkJbGluZS0+aWQsIHBvcyk7CiAKKwkvKiBOb3QgbmVjZXNzYXJ5IHRvIG1hcmsgYmFkIGJs b2NrcyBvbiAyLjAgc3BlYy4gKi8KKwlpZiAoZ2VvLT52ZXJzaW9uID09IE5WTV9PQ1NTRF9TUEVD XzIwKQorCQlyZXR1cm47CisKKwlwcGEgPSBrbWFsbG9jKHNpemVvZihzdHJ1Y3QgcHBhX2FkZHIp LCBHRlBfQVRPTUlDKTsKKwlpZiAoIXBwYSkKKwkJcmV0dXJuOworCisJKnBwYSA9IHBwYV9hZGRy OwogCXBibGtfZ2VuX3J1bl93cyhwYmxrLCBOVUxMLCBwcGEsIHBibGtfbGluZV9tYXJrX2JiLAog CQkJCQkJR0ZQX0FUT01JQywgcGJsay0+YmJfd3EpOwogfQogCiBzdGF0aWMgdm9pZCBfX3BibGtf ZW5kX2lvX2VyYXNlKHN0cnVjdCBwYmxrICpwYmxrLCBzdHJ1Y3QgbnZtX3JxICpycWQpCiB7CisJ c3RydWN0IG52bV90Z3RfZGV2ICpkZXYgPSBwYmxrLT5kZXY7CisJc3RydWN0IG52bV9nZW8gKmdl byA9ICZkZXYtPmdlbzsKKwlzdHJ1Y3QgbnZtX2Noa19tZXRhICpjaHVuazsKIAlzdHJ1Y3QgcGJs a19saW5lICpsaW5lOworCWludCBwb3M7CiAKIAlsaW5lID0gJnBibGstPmxpbmVzW3BibGtfcHBh X3RvX2xpbmUocnFkLT5wcGFfYWRkcildOworCXBvcyA9IHBibGtfcHBhX3RvX3BvcyhnZW8sIHJx ZC0+cHBhX2FkZHIpOworCWNodW5rID0gJmxpbmUtPmNoa3NbcG9zXTsKKwogCWF0b21pY19kZWMo JmxpbmUtPmxlZnRfc2VibGtzKTsKIAogCWlmIChycWQtPmVycm9yKSB7Ci0JCXN0cnVjdCBwcGFf YWRkciAqcHBhOwotCi0JCXBwYSA9IGttYWxsb2Moc2l6ZW9mKHN0cnVjdCBwcGFfYWRkciksIEdG UF9BVE9NSUMpOwotCQlpZiAoIXBwYSkKLQkJCXJldHVybjsKLQotCQkqcHBhID0gcnFkLT5wcGFf YWRkcjsKLQkJcGJsa19tYXJrX2JiKHBibGssIGxpbmUsIHBwYSk7CisJCWNodW5rLT5zdGF0ZSA9 IE5WTV9DSEtfU1RfT0ZGTElORTsKKwkJcGJsa19tYXJrX2JiKHBibGssIGxpbmUsIHJxZC0+cHBh X2FkZHIpOworCX0gZWxzZSB7CisJCWNodW5rLT5zdGF0ZSA9IE5WTV9DSEtfU1RfRlJFRTsKIAl9 CiAKIAlhdG9taWNfZGVjKCZwYmxrLT5pbmZsaWdodF9pbyk7CkBAIC05Miw2ICsxMDUsNTAgQEAg c3RhdGljIHZvaWQgcGJsa19lbmRfaW9fZXJhc2Uoc3RydWN0IG52bV9ycSAqcnFkKQogCW1lbXBv b2xfZnJlZShycWQsIHBibGstPmVfcnFfcG9vbCk7CiB9CiAKKy8qCisgKiBHZXQgaW5mb3JtYXRp b24gZm9yIGFsbCBjaHVua3MgZnJvbSB0aGUgZGV2aWNlLgorICoKKyAqIFRoZSBjYWxsZXIgaXMg cmVzcG9uc2libGUgZm9yIGZyZWVpbmcgdGhlIHJldHVybmVkIHN0cnVjdHVyZQorICovCitzdHJ1 
Y3QgbnZtX2Noa19tZXRhICpwYmxrX2NodW5rX2dldF9pbmZvKHN0cnVjdCBwYmxrICpwYmxrKQor eworCXN0cnVjdCBudm1fdGd0X2RldiAqZGV2ID0gcGJsay0+ZGV2OworCXN0cnVjdCBudm1fZ2Vv ICpnZW8gPSAmZGV2LT5nZW87CisJc3RydWN0IG52bV9jaGtfbWV0YSAqbWV0YTsKKwlzdHJ1Y3Qg cHBhX2FkZHIgcHBhOworCXVuc2lnbmVkIGxvbmcgbGVuOworCWludCByZXQ7CisKKwlwcGEucHBh ID0gMDsKKworCWxlbiA9IGdlby0+YWxsX2NodW5rcyAqIHNpemVvZigqbWV0YSk7CisJbWV0YSA9 IGt6YWxsb2MobGVuLCBHRlBfS0VSTkVMKTsKKwlpZiAoIW1ldGEpCisJCXJldHVybiBFUlJfUFRS KC1FTk9NRU0pOworCisJcmV0ID0gbnZtX2dldF9jaHVua19tZXRhKGRldiwgbWV0YSwgcHBhLCBn ZW8tPmFsbF9jaHVua3MpOworCWlmIChyZXQpIHsKKwkJcHJfZXJyKCJwYmxrOiBjb3VsZCBub3Qg Z2V0IGNodW5rIG1ldGFkYXRhICglZClcbiIsIHJldCk7CisJCWtmcmVlKG1ldGEpOworCQlyZXR1 cm4gRVJSX1BUUigtRUlPKTsKKwl9CisKKwlyZXR1cm4gbWV0YTsKK30KKworc3RydWN0IG52bV9j aGtfbWV0YSAqcGJsa19jaHVua19nZXRfb2ZmKHN0cnVjdCBwYmxrICpwYmxrLAorCQkJCQkgICAg ICBzdHJ1Y3QgbnZtX2Noa19tZXRhICptZXRhLAorCQkJCQkgICAgICBzdHJ1Y3QgcHBhX2FkZHIg cHBhKQoreworCXN0cnVjdCBudm1fdGd0X2RldiAqZGV2ID0gcGJsay0+ZGV2OworCXN0cnVjdCBu dm1fZ2VvICpnZW8gPSAmZGV2LT5nZW87CisJaW50IGNoX29mZiA9IHBwYS5tLmdycCAqIGdlby0+ bnVtX2NoayAqIGdlby0+bnVtX2x1bjsKKwlpbnQgbHVuX29mZiA9IHBwYS5tLnB1ICogZ2VvLT5u dW1fY2hrOworCWludCBjaGtfb2ZmID0gcHBhLm0uY2hrOworCisJcmV0dXJuIG1ldGEgKyBjaF9v ZmYgKyBsdW5fb2ZmICsgY2hrX29mZjsKK30KKwogdm9pZCBfX3BibGtfbWFwX2ludmFsaWRhdGUo c3RydWN0IHBibGsgKnBibGssIHN0cnVjdCBwYmxrX2xpbmUgKmxpbmUsCiAJCQkgICB1NjQgcGFk ZHIpCiB7CkBAIC0xMDk0LDEwICsxMTUxLDM0IEBAIHN0YXRpYyBpbnQgcGJsa19saW5lX2luaXRf YmIoc3RydWN0IHBibGsgKnBibGssIHN0cnVjdCBwYmxrX2xpbmUgKmxpbmUsCiAJcmV0dXJuIDE7 CiB9CiAKK3N0YXRpYyBpbnQgcGJsa19wcmVwYXJlX25ld19saW5lKHN0cnVjdCBwYmxrICpwYmxr LCBzdHJ1Y3QgcGJsa19saW5lICpsaW5lKQoreworCXN0cnVjdCBwYmxrX2xpbmVfbWV0YSAqbG0g PSAmcGJsay0+bG07CisJc3RydWN0IG52bV90Z3RfZGV2ICpkZXYgPSBwYmxrLT5kZXY7CisJc3Ry dWN0IG52bV9nZW8gKmdlbyA9ICZkZXYtPmdlbzsKKwlpbnQgYmxrX3RvX2VyYXNlID0gYXRvbWlj X3JlYWQoJmxpbmUtPmJsa19pbl9saW5lKTsKKwlpbnQgaTsKKworCWZvciAoaSA9IDA7IGkgPCBs 
bS0+YmxrX3Blcl9saW5lOyBpKyspIHsKKwkJc3RydWN0IHBibGtfbHVuICpybHVuID0gJnBibGst Pmx1bnNbaV07CisJCWludCBwb3MgPSBwYmxrX3BwYV90b19wb3MoZ2VvLCBybHVuLT5icHBhKTsK KwkJaW50IHN0YXRlID0gbGluZS0+Y2hrc1twb3NdLnN0YXRlOworCisJCS8qIEZyZWUgY2h1bmtz IHNob3VsZCBub3QgYmUgZXJhc2VkICovCisJCWlmIChzdGF0ZSAmIE5WTV9DSEtfU1RfRlJFRSkg eworCQkJc2V0X2JpdChwYmxrX3BwYV90b19wb3MoZ2VvLCBybHVuLT5icHBhKSwKKwkJCQkJCQls aW5lLT5lcmFzZV9iaXRtYXApOworCQkJYmxrX3RvX2VyYXNlLS07CisJCX0KKwl9CisKKwlyZXR1 cm4gYmxrX3RvX2VyYXNlOworfQorCiBzdGF0aWMgaW50IHBibGtfbGluZV9wcmVwYXJlKHN0cnVj dCBwYmxrICpwYmxrLCBzdHJ1Y3QgcGJsa19saW5lICpsaW5lKQogewogCXN0cnVjdCBwYmxrX2xp bmVfbWV0YSAqbG0gPSAmcGJsay0+bG07Ci0JaW50IGJsa19pbl9saW5lID0gYXRvbWljX3JlYWQo JmxpbmUtPmJsa19pbl9saW5lKTsKKwlpbnQgYmxrX3RvX2VyYXNlOwogCiAJbGluZS0+bWFwX2Jp dG1hcCA9IGt6YWxsb2MobG0tPnNlY19iaXRtYXBfbGVuLCBHRlBfQVRPTUlDKTsKIAlpZiAoIWxp bmUtPm1hcF9iaXRtYXApCkBAIC0xMTEwLDcgKzExOTEsMjEgQEAgc3RhdGljIGludCBwYmxrX2xp bmVfcHJlcGFyZShzdHJ1Y3QgcGJsayAqcGJsaywgc3RydWN0IHBibGtfbGluZSAqbGluZSkKIAkJ cmV0dXJuIC1FTk9NRU07CiAJfQogCisJLyogQmFkIGJsb2NrcyBkbyBub3QgbmVlZCB0byBiZSBl cmFzZWQgKi8KKwliaXRtYXBfY29weShsaW5lLT5lcmFzZV9iaXRtYXAsIGxpbmUtPmJsa19iaXRt YXAsIGxtLT5ibGtfcGVyX2xpbmUpOworCiAJc3Bpbl9sb2NrKCZsaW5lLT5sb2NrKTsKKworCS8q IElmIHdlIGhhdmUgbm90IHdyaXR0ZW4gdG8gdGhpcyBsaW5lLCB3ZSBuZWVkIHRvIG1hcmsgdXAg ZnJlZSBjaHVua3MKKwkgKiBhcyBhbHJlYWR5IGVyYXNlZAorCSAqLworCWlmIChsaW5lLT5zdGF0 ZSA9PSBQQkxLX0xJTkVTVEFURV9ORVcpIHsKKwkJYmxrX3RvX2VyYXNlID0gcGJsa19wcmVwYXJl X25ld19saW5lKHBibGssIGxpbmUpOworCQlsaW5lLT5zdGF0ZSA9IFBCTEtfTElORVNUQVRFX0ZS RUU7CisJfSBlbHNlIHsKKwkJYmxrX3RvX2VyYXNlID0gYXRvbWljX3JlYWQoJmxpbmUtPmJsa19p bl9saW5lKTsKKwl9CisKIAlpZiAobGluZS0+c3RhdGUgIT0gUEJMS19MSU5FU1RBVEVfRlJFRSkg ewogCQlrZnJlZShsaW5lLT5tYXBfYml0bWFwKTsKIAkJa2ZyZWUobGluZS0+aW52YWxpZF9iaXRt YXApOwpAQCAtMTEyMiwxNSArMTIxNywxMiBAQCBzdGF0aWMgaW50IHBibGtfbGluZV9wcmVwYXJl KHN0cnVjdCBwYmxrICpwYmxrLCBzdHJ1Y3QgcGJsa19saW5lICpsaW5lKQogCiAJbGluZS0+c3Rh 
dGUgPSBQQkxLX0xJTkVTVEFURV9PUEVOOwogCi0JYXRvbWljX3NldCgmbGluZS0+bGVmdF9lYmxr cywgYmxrX2luX2xpbmUpOwotCWF0b21pY19zZXQoJmxpbmUtPmxlZnRfc2VibGtzLCBibGtfaW5f bGluZSk7CisJYXRvbWljX3NldCgmbGluZS0+bGVmdF9lYmxrcywgYmxrX3RvX2VyYXNlKTsKKwlh dG9taWNfc2V0KCZsaW5lLT5sZWZ0X3NlYmxrcywgYmxrX3RvX2VyYXNlKTsKIAogCWxpbmUtPm1l dGFfZGlzdGFuY2UgPSBsbS0+bWV0YV9kaXN0YW5jZTsKIAlzcGluX3VubG9jaygmbGluZS0+bG9j ayk7CiAKLQkvKiBCYWQgYmxvY2tzIGRvIG5vdCBuZWVkIHRvIGJlIGVyYXNlZCAqLwotCWJpdG1h cF9jb3B5KGxpbmUtPmVyYXNlX2JpdG1hcCwgbGluZS0+YmxrX2JpdG1hcCwgbG0tPmJsa19wZXJf bGluZSk7Ci0KIAlrcmVmX2luaXQoJmxpbmUtPnJlZik7CiAKIAlyZXR1cm4gMDsKQEAgLTE1ODYs MTIgKzE2NzgsMTQgQEAgc3RhdGljIHZvaWQgcGJsa19saW5lX3Nob3VsZF9zeW5jX21ldGEoc3Ry dWN0IHBibGsgKnBibGspCiAKIHZvaWQgcGJsa19saW5lX2Nsb3NlKHN0cnVjdCBwYmxrICpwYmxr LCBzdHJ1Y3QgcGJsa19saW5lICpsaW5lKQogeworCXN0cnVjdCBudm1fdGd0X2RldiAqZGV2ID0g cGJsay0+ZGV2OworCXN0cnVjdCBudm1fZ2VvICpnZW8gPSAmZGV2LT5nZW87CisJc3RydWN0IHBi bGtfbGluZV9tZXRhICpsbSA9ICZwYmxrLT5sbTsKIAlzdHJ1Y3QgcGJsa19saW5lX21nbXQgKmxf bWcgPSAmcGJsay0+bF9tZzsKIAlzdHJ1Y3QgbGlzdF9oZWFkICptb3ZlX2xpc3Q7CisJaW50IGk7 CiAKICNpZmRlZiBDT05GSUdfTlZNX0RFQlVHCi0Jc3RydWN0IHBibGtfbGluZV9tZXRhICpsbSA9 ICZwYmxrLT5sbTsKLQogCVdBUk4oIWJpdG1hcF9mdWxsKGxpbmUtPm1hcF9iaXRtYXAsIGxtLT5z ZWNfcGVyX2xpbmUpLAogCQkJCSJwYmxrOiBjb3JydXB0IGNsb3NlZCBsaW5lICVkXG4iLCBsaW5l LT5pZCk7CiAjZW5kaWYKQEAgLTE2MTMsNiArMTcwNywxNSBAQCB2b2lkIHBibGtfbGluZV9jbG9z ZShzdHJ1Y3QgcGJsayAqcGJsaywgc3RydWN0IHBibGtfbGluZSAqbGluZSkKIAlsaW5lLT5zbWV0 YSA9IE5VTEw7CiAJbGluZS0+ZW1ldGEgPSBOVUxMOwogCisJZm9yIChpID0gMDsgaSA8IGxtLT5i bGtfcGVyX2xpbmU7IGkrKykgeworCQlzdHJ1Y3QgcGJsa19sdW4gKnJsdW4gPSAmcGJsay0+bHVu c1tpXTsKKwkJaW50IHBvcyA9IHBibGtfcHBhX3RvX3BvcyhnZW8sIHJsdW4tPmJwcGEpOworCQlp bnQgc3RhdGUgPSBsaW5lLT5jaGtzW3Bvc10uc3RhdGU7CisKKwkJaWYgKCEoc3RhdGUgJiBOVk1f Q0hLX1NUX09GRkxJTkUpKQorCQkJc3RhdGUgPSBOVk1fQ0hLX1NUX0NMT1NFRDsKKwl9CisKIAlz cGluX3VubG9jaygmbGluZS0+bG9jayk7CiAJc3Bpbl91bmxvY2soJmxfbWctPmdjX2xvY2spOwog 
fQpkaWZmIC0tZ2l0IGEvZHJpdmVycy9saWdodG52bS9wYmxrLWluaXQuYyBiL2RyaXZlcnMvbGln aHRudm0vcGJsay1pbml0LmMKaW5kZXggNzNiMjIxYzY5Y2ZkLi5iZDI1OTJmYzMzNzggMTAwNjQ0 Ci0tLSBhL2RyaXZlcnMvbGlnaHRudm0vcGJsay1pbml0LmMKKysrIGIvZHJpdmVycy9saWdodG52 bS9wYmxrLWluaXQuYwpAQCAtNDAxLDYgKzQwMSw3IEBAIHN0YXRpYyB2b2lkIHBibGtfbGluZV9t ZXRhX2ZyZWUoc3RydWN0IHBibGtfbGluZSAqbGluZSkKIHsKIAlrZnJlZShsaW5lLT5ibGtfYml0 bWFwKTsKIAlrZnJlZShsaW5lLT5lcmFzZV9iaXRtYXApOworCWtmcmVlKGxpbmUtPmNoa3MpOwog fQogCiBzdGF0aWMgdm9pZCBwYmxrX2xpbmVzX2ZyZWUoc3RydWN0IHBibGsgKnBibGspCkBAIC00 NDAsNTUgKzQ0MSw0NCBAQCBzdGF0aWMgaW50IHBibGtfYmJfZ2V0X3RibChzdHJ1Y3QgbnZtX3Rn dF9kZXYgKmRldiwgc3RydWN0IHBibGtfbHVuICpybHVuLAogCXJldHVybiAwOwogfQogCi1zdGF0 aWMgdm9pZCAqcGJsa19iYl9nZXRfbG9nKHN0cnVjdCBwYmxrICpwYmxrKQorc3RhdGljIHZvaWQg KnBibGtfYmJfZ2V0X21ldGEoc3RydWN0IHBibGsgKnBibGspCiB7CiAJc3RydWN0IG52bV90Z3Rf ZGV2ICpkZXYgPSBwYmxrLT5kZXY7CiAJc3RydWN0IG52bV9nZW8gKmdlbyA9ICZkZXYtPmdlbzsK LQl1OCAqbG9nOworCXU4ICptZXRhOwogCWludCBpLCBucl9ibGtzLCBibGtfcGVyX2x1bjsKIAlp bnQgcmV0OwogCiAJYmxrX3Blcl9sdW4gPSBnZW8tPm51bV9jaGsgKiBnZW8tPnBsbl9tb2RlOwog CW5yX2Jsa3MgPSBibGtfcGVyX2x1biAqIGdlby0+YWxsX2x1bnM7CiAKLQlsb2cgPSBrbWFsbG9j KG5yX2Jsa3MsIEdGUF9LRVJORUwpOwotCWlmICghbG9nKQorCW1ldGEgPSBrbWFsbG9jKG5yX2Js a3MsIEdGUF9LRVJORUwpOworCWlmICghbWV0YSkKIAkJcmV0dXJuIEVSUl9QVFIoLUVOT01FTSk7 CiAKIAlmb3IgKGkgPSAwOyBpIDwgZ2VvLT5hbGxfbHVuczsgaSsrKSB7CiAJCXN0cnVjdCBwYmxr X2x1biAqcmx1biA9ICZwYmxrLT5sdW5zW2ldOwotCQl1OCAqbG9nX3BvcyA9IGxvZyArIGkgKiBi bGtfcGVyX2x1bjsKKwkJdTggKm1ldGFfcG9zID0gbWV0YSArIGkgKiBibGtfcGVyX2x1bjsKIAot CQlyZXQgPSBwYmxrX2JiX2dldF90YmwoZGV2LCBybHVuLCBsb2dfcG9zLCBibGtfcGVyX2x1bik7 CisJCXJldCA9IHBibGtfYmJfZ2V0X3RibChkZXYsIHJsdW4sIG1ldGFfcG9zLCBibGtfcGVyX2x1 bik7CiAJCWlmIChyZXQpIHsKLQkJCWtmcmVlKGxvZyk7CisJCQlrZnJlZShtZXRhKTsKIAkJCXJl dHVybiBFUlJfUFRSKC1FSU8pOwogCQl9CiAJfQogCi0JcmV0dXJuIGxvZzsKKwlyZXR1cm4gbWV0 YTsKIH0KIAotc3RhdGljIGludCBwYmxrX2JiX2xpbmUoc3RydWN0IHBibGsgKnBibGssIHN0cnVj 
dCBwYmxrX2xpbmUgKmxpbmUsCi0JCQl1OCAqYmJfbG9nLCBpbnQgYmxrX3Blcl9saW5lKQorc3Rh dGljIHZvaWQgKnBibGtfY2h1bmtfZ2V0X21ldGEoc3RydWN0IHBibGsgKnBibGspCiB7CiAJc3Ry dWN0IG52bV90Z3RfZGV2ICpkZXYgPSBwYmxrLT5kZXY7CiAJc3RydWN0IG52bV9nZW8gKmdlbyA9 ICZkZXYtPmdlbzsKLQlpbnQgaSwgYmJfY250ID0gMDsKLQlpbnQgYmxrX3Blcl9sdW4gPSBnZW8t Pm51bV9jaGsgKiBnZW8tPnBsbl9tb2RlOwogCi0JZm9yIChpID0gMDsgaSA8IGJsa19wZXJfbGlu ZTsgaSsrKSB7Ci0JCXN0cnVjdCBwYmxrX2x1biAqcmx1biA9ICZwYmxrLT5sdW5zW2ldOwotCQl1 OCAqbHVuX2JiX2xvZyA9IGJiX2xvZyArIGkgKiBibGtfcGVyX2x1bjsKLQotCQlpZiAobHVuX2Ji X2xvZ1tsaW5lLT5pZF0gPT0gTlZNX0JMS19UX0ZSRUUpCi0JCQljb250aW51ZTsKLQotCQlzZXRf Yml0KHBibGtfcHBhX3RvX3BvcyhnZW8sIHJsdW4tPmJwcGEpLCBsaW5lLT5ibGtfYml0bWFwKTsK LQkJYmJfY250Kys7Ci0JfQotCi0JcmV0dXJuIGJiX2NudDsKKwlpZiAoZ2VvLT52ZXJzaW9uID09 IE5WTV9PQ1NTRF9TUEVDXzEyKQorCQlyZXR1cm4gcGJsa19iYl9nZXRfbWV0YShwYmxrKTsKKwll bHNlCisJCXJldHVybiBwYmxrX2NodW5rX2dldF9pbmZvKHBibGspOwogfQogCiBzdGF0aWMgaW50 IHBibGtfbHVuc19pbml0KHN0cnVjdCBwYmxrICpwYmxrLCBzdHJ1Y3QgcHBhX2FkZHIgKmx1bnMp CkBAIC02OTYsOCArNjg2LDEzMSBAQCBzdGF0aWMgaW50IHBibGtfbGluZXNfYWxsb2NfbWV0YWRh dGEoc3RydWN0IHBibGsgKnBibGspCiAJcmV0dXJuIC1FTk9NRU07CiB9CiAKLXN0YXRpYyBpbnQg cGJsa19zZXR1cF9saW5lX21ldGEoc3RydWN0IHBibGsgKnBibGssIHN0cnVjdCBwYmxrX2xpbmUg KmxpbmUsCi0JCQkJdm9pZCAqY2h1bmtfbG9nLCBsb25nICpucl9iYWRfYmxrcykKK3N0YXRpYyBp bnQgcGJsa19zZXR1cF9saW5lX21ldGFfMTIoc3RydWN0IHBibGsgKnBibGssIHN0cnVjdCBwYmxr X2xpbmUgKmxpbmUsCisJCQkJICAgdm9pZCAqY2h1bmtfbWV0YSkKK3sKKwlzdHJ1Y3QgbnZtX3Rn dF9kZXYgKmRldiA9IHBibGstPmRldjsKKwlzdHJ1Y3QgbnZtX2dlbyAqZ2VvID0gJmRldi0+Z2Vv OworCXN0cnVjdCBwYmxrX2xpbmVfbWV0YSAqbG0gPSAmcGJsay0+bG07CisJaW50IGksIGNoa19w ZXJfbHVuLCBucl9iYWRfY2hrcyA9IDA7CisKKwljaGtfcGVyX2x1biA9IGdlby0+bnVtX2NoayAq IGdlby0+cGxuX21vZGU7CisKKwlmb3IgKGkgPSAwOyBpIDwgbG0tPmJsa19wZXJfbGluZTsgaSsr KSB7CisJCXN0cnVjdCBwYmxrX2x1biAqcmx1biA9ICZwYmxrLT5sdW5zW2ldOworCQlzdHJ1Y3Qg bnZtX2Noa19tZXRhICpjaHVuazsKKwkJaW50IHBvcyA9IHBibGtfcHBhX3RvX3BvcyhnZW8sIHJs 
dW4tPmJwcGEpOworCQl1OCAqbHVuX2JiX21ldGEgPSBjaHVua19tZXRhICsgcG9zICogY2hrX3Bl cl9sdW47CisKKwkJY2h1bmsgPSAmbGluZS0+Y2hrc1twb3NdOworCisJCS8qCisJCSAqIEluIDEu MiBzcGVjLiBjaHVuayBzdGF0ZSBpcyBub3QgcGVyc2lzdGVkIGJ5IHRoZSBkZXZpY2UuIFRodXMK KwkJICogc29tZSBvZiB0aGUgdmFsdWVzIGFyZSByZXNldCBlYWNoIHRpbWUgcGJsayBpcyBpbnN0 YW50aWF0ZWQuCisJCSAqLworCQlpZiAobHVuX2JiX21ldGFbbGluZS0+aWRdID09IE5WTV9CTEtf VF9GUkVFKQorCQkJY2h1bmstPnN0YXRlID0gIE5WTV9DSEtfU1RfRlJFRTsKKwkJZWxzZQorCQkJ Y2h1bmstPnN0YXRlID0gTlZNX0NIS19TVF9PRkZMSU5FOworCisJCWNodW5rLT50eXBlID0gTlZN X0NIS19UUF9XX1NFUTsKKwkJY2h1bmstPndpID0gMDsKKwkJY2h1bmstPnNsYmEgPSAtMTsKKwkJ Y2h1bmstPmNubGIgPSBnZW8tPmNsYmE7CisJCWNodW5rLT53cCA9IDA7CisKKwkJaWYgKCEoY2h1 bmstPnN0YXRlICYgTlZNX0NIS19TVF9PRkZMSU5FKSkKKwkJCWNvbnRpbnVlOworCisJCXNldF9i aXQocG9zLCBsaW5lLT5ibGtfYml0bWFwKTsKKwkJbnJfYmFkX2Noa3MrKzsKKwl9CisKKwlyZXR1 cm4gbnJfYmFkX2Noa3M7Cit9CisKK3N0YXRpYyBpbnQgcGJsa19zZXR1cF9saW5lX21ldGFfMjAo c3RydWN0IHBibGsgKnBibGssIHN0cnVjdCBwYmxrX2xpbmUgKmxpbmUsCisJCQkJICAgc3RydWN0 IG52bV9jaGtfbWV0YSAqbWV0YSkKK3sKKwlzdHJ1Y3QgbnZtX3RndF9kZXYgKmRldiA9IHBibGst PmRldjsKKwlzdHJ1Y3QgbnZtX2dlbyAqZ2VvID0gJmRldi0+Z2VvOworCXN0cnVjdCBwYmxrX2xp bmVfbWV0YSAqbG0gPSAmcGJsay0+bG07CisJaW50IGksIG5yX2JhZF9jaGtzID0gMDsKKworCWZv ciAoaSA9IDA7IGkgPCBsbS0+YmxrX3Blcl9saW5lOyBpKyspIHsKKwkJc3RydWN0IHBibGtfbHVu ICpybHVuID0gJnBibGstPmx1bnNbaV07CisJCXN0cnVjdCBudm1fY2hrX21ldGEgKmNodW5rOwor CQlzdHJ1Y3QgbnZtX2Noa19tZXRhICpjaHVua19tZXRhOworCQlzdHJ1Y3QgcHBhX2FkZHIgcHBh OworCQlpbnQgcG9zOworCisJCXBwYSA9IHJsdW4tPmJwcGE7CisJCXBvcyA9IHBibGtfcHBhX3Rv X3BvcyhnZW8sIHBwYSk7CisJCWNodW5rID0gJmxpbmUtPmNoa3NbcG9zXTsKKworCQlwcGEubS5j aGsgPSBsaW5lLT5pZDsKKwkJY2h1bmtfbWV0YSA9IHBibGtfY2h1bmtfZ2V0X29mZihwYmxrLCBt ZXRhLCBwcGEpOworCisJCWNodW5rLT5zdGF0ZSA9IGNodW5rX21ldGEtPnN0YXRlOworCQljaHVu ay0+dHlwZSA9IGNodW5rX21ldGEtPnR5cGU7CisJCWNodW5rLT53aSA9IGNodW5rX21ldGEtPndp OworCQljaHVuay0+c2xiYSA9IGNodW5rX21ldGEtPnNsYmE7CisJCWNodW5rLT5jbmxiID0gY2h1 
* [PATCH 13/15] lightnvm: pblk: implement get log report chunk
@ 2018-02-28 15:49 ` Javier González
  0 siblings, 0 replies; 71+ messages in thread
From: Javier González @ 2018-02-28 15:49 UTC (permalink / raw)
To: mb; +Cc: linux-block, linux-kernel, linux-nvme, Javier González

In preparation for pblk supporting 2.0, implement the get log report
chunk in pblk. Also, define the chunk states as given in the 2.0 spec.

Signed-off-by: Javier González <javier@cnexlabs.com>
---
 drivers/lightnvm/pblk-core.c | 139 +++++++++++++++++++++++----
 drivers/lightnvm/pblk-init.c | 223 +++++++++++++++++++++++++++++++------------
 drivers/lightnvm/pblk.h      |   7 ++
 include/linux/lightnvm.h     |  13 +++
 4 files changed, 301 insertions(+), 81 deletions(-)

diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
index 2e10b18b61e3..cd663855ee88 100644
--- a/drivers/lightnvm/pblk-core.c
+++ b/drivers/lightnvm/pblk-core.c
@@ -44,11 +44,12 @@ static void pblk_line_mark_bb(struct work_struct *work)
 }
 
 static void pblk_mark_bb(struct pblk *pblk, struct pblk_line *line,
-			 struct ppa_addr *ppa)
+			 struct ppa_addr ppa_addr)
 {
 	struct nvm_tgt_dev *dev = pblk->dev;
 	struct nvm_geo *geo = &dev->geo;
-	int pos = pblk_ppa_to_pos(geo, *ppa);
+	struct ppa_addr *ppa;
+	int pos = pblk_ppa_to_pos(geo, ppa_addr);
 
 	pr_debug("pblk: erase failed: line:%d, pos:%d\n", line->id, pos);
 	atomic_long_inc(&pblk->erase_failed);
@@ -58,26 +59,38 @@ static void pblk_mark_bb(struct pblk *pblk, struct pblk_line *line,
 		pr_err("pblk: attempted to erase bb: line:%d, pos:%d\n",
 							line->id, pos);
 
+	/* Not necessary to mark bad blocks on 2.0 spec. */
+	if (geo->version == NVM_OCSSD_SPEC_20)
+		return;
+
+	ppa = kmalloc(sizeof(struct ppa_addr), GFP_ATOMIC);
+	if (!ppa)
+		return;
+
+	*ppa = ppa_addr;
 	pblk_gen_run_ws(pblk, NULL, ppa, pblk_line_mark_bb,
 						GFP_ATOMIC, pblk->bb_wq);
 }
 
 static void __pblk_end_io_erase(struct pblk *pblk, struct nvm_rq *rqd)
 {
+	struct nvm_tgt_dev *dev = pblk->dev;
+	struct nvm_geo *geo = &dev->geo;
+	struct nvm_chk_meta *chunk;
 	struct pblk_line *line;
+	int pos;
 
 	line = &pblk->lines[pblk_ppa_to_line(rqd->ppa_addr)];
+	pos = pblk_ppa_to_pos(geo, rqd->ppa_addr);
+	chunk = &line->chks[pos];
+
 	atomic_dec(&line->left_seblks);
 
 	if (rqd->error) {
-		struct ppa_addr *ppa;
-
-		ppa = kmalloc(sizeof(struct ppa_addr), GFP_ATOMIC);
-		if (!ppa)
-			return;
-
-		*ppa = rqd->ppa_addr;
-		pblk_mark_bb(pblk, line, ppa);
+		chunk->state = NVM_CHK_ST_OFFLINE;
+		pblk_mark_bb(pblk, line, rqd->ppa_addr);
+	} else {
+		chunk->state = NVM_CHK_ST_FREE;
 	}
 
 	atomic_dec(&pblk->inflight_io);
@@ -92,6 +105,50 @@ static void pblk_end_io_erase(struct nvm_rq *rqd)
 	mempool_free(rqd, pblk->e_rq_pool);
 }
 
+/*
+ * Get information for all chunks from the device.
+ *
+ * The caller is responsible for freeing the returned structure
+ */
+struct nvm_chk_meta *pblk_chunk_get_info(struct pblk *pblk)
+{
+	struct nvm_tgt_dev *dev = pblk->dev;
+	struct nvm_geo *geo = &dev->geo;
+	struct nvm_chk_meta *meta;
+	struct ppa_addr ppa;
+	unsigned long len;
+	int ret;
+
+	ppa.ppa = 0;
+
+	len = geo->all_chunks * sizeof(*meta);
+	meta = kzalloc(len, GFP_KERNEL);
+	if (!meta)
+		return ERR_PTR(-ENOMEM);
+
+	ret = nvm_get_chunk_meta(dev, meta, ppa, geo->all_chunks);
+	if (ret) {
+		pr_err("pblk: could not get chunk metadata (%d)\n", ret);
+		kfree(meta);
+		return ERR_PTR(-EIO);
+	}
+
+	return meta;
+}
+
+struct nvm_chk_meta *pblk_chunk_get_off(struct pblk *pblk,
+					struct nvm_chk_meta *meta,
+					struct ppa_addr ppa)
+{
+	struct nvm_tgt_dev *dev = pblk->dev;
+	struct nvm_geo *geo = &dev->geo;
+	int ch_off = ppa.m.grp * geo->num_chk * geo->num_lun;
+	int lun_off = ppa.m.pu * geo->num_chk;
+	int chk_off = ppa.m.chk;
+
+	return meta + ch_off + lun_off + chk_off;
+}
+
 void __pblk_map_invalidate(struct pblk *pblk, struct pblk_line *line,
 			   u64 paddr)
 {
@@ -1094,10 +1151,34 @@ static int pblk_line_init_bb(struct pblk *pblk, struct pblk_line *line,
 	return 1;
 }
 
+static int pblk_prepare_new_line(struct pblk *pblk, struct pblk_line *line)
+{
+	struct pblk_line_meta *lm = &pblk->lm;
+	struct nvm_tgt_dev *dev = pblk->dev;
+	struct nvm_geo *geo = &dev->geo;
+	int blk_to_erase = atomic_read(&line->blk_in_line);
+	int i;
+
+	for (i = 0; i < lm->blk_per_line; i++) {
+		struct pblk_lun *rlun = &pblk->luns[i];
+		int pos = pblk_ppa_to_pos(geo, rlun->bppa);
+		int state = line->chks[pos].state;
+
+		/* Free chunks should not be erased */
+		if (state & NVM_CHK_ST_FREE) {
+			set_bit(pblk_ppa_to_pos(geo, rlun->bppa),
+							line->erase_bitmap);
+			blk_to_erase--;
+		}
+	}
+
+	return blk_to_erase;
+}
+
 static int pblk_line_prepare(struct pblk *pblk, struct pblk_line *line)
 {
 	struct pblk_line_meta *lm = &pblk->lm;
-	int blk_in_line = atomic_read(&line->blk_in_line);
+	int blk_to_erase;
 
 	line->map_bitmap = kzalloc(lm->sec_bitmap_len, GFP_ATOMIC);
 	if (!line->map_bitmap)
@@ -1110,7 +1191,21 @@ static int pblk_line_prepare(struct pblk *pblk, struct pblk_line *line)
 		return -ENOMEM;
 	}
 
+	/* Bad blocks do not need to be erased */
+	bitmap_copy(line->erase_bitmap, line->blk_bitmap, lm->blk_per_line);
+
 	spin_lock(&line->lock);
+
+	/* If we have not written to this line, we need to mark up free chunks
+	 * as already erased
+	 */
+	if (line->state == PBLK_LINESTATE_NEW) {
+		blk_to_erase = pblk_prepare_new_line(pblk, line);
+		line->state = PBLK_LINESTATE_FREE;
+	} else {
+		blk_to_erase = atomic_read(&line->blk_in_line);
+	}
+
 	if (line->state != PBLK_LINESTATE_FREE) {
 		kfree(line->map_bitmap);
 		kfree(line->invalid_bitmap);
@@ -1122,15 +1217,12 @@ static int pblk_line_prepare(struct pblk *pblk, struct pblk_line *line)
 
 	line->state = PBLK_LINESTATE_OPEN;
 
-	atomic_set(&line->left_eblks, blk_in_line);
-	atomic_set(&line->left_seblks, blk_in_line);
+	atomic_set(&line->left_eblks, blk_to_erase);
+	atomic_set(&line->left_seblks, blk_to_erase);
 
 	line->meta_distance = lm->meta_distance;
 	spin_unlock(&line->lock);
 
-	/* Bad blocks do not need to be erased */
-	bitmap_copy(line->erase_bitmap, line->blk_bitmap, lm->blk_per_line);
-
 	kref_init(&line->ref);
 
 	return 0;
@@ -1586,12 +1678,14 @@ static void pblk_line_should_sync_meta(struct pblk *pblk)
 
 void pblk_line_close(struct pblk *pblk, struct pblk_line *line)
 {
+	struct nvm_tgt_dev *dev = pblk->dev;
+	struct nvm_geo *geo = &dev->geo;
+	struct pblk_line_meta *lm = &pblk->lm;
 	struct pblk_line_mgmt *l_mg = &pblk->l_mg;
 	struct list_head *move_list;
+	int i;
 
 #ifdef CONFIG_NVM_DEBUG
-	struct pblk_line_meta *lm = &pblk->lm;
-
 	WARN(!bitmap_full(line->map_bitmap, lm->sec_per_line),
 				"pblk: corrupt closed line %d\n", line->id);
 #endif
@@ -1613,6 +1707,15 @@ void pblk_line_close(struct pblk *pblk, struct pblk_line *line)
 	line->smeta = NULL;
 	line->emeta = NULL;
 
+	for (i = 0; i < lm->blk_per_line; i++) {
+		struct pblk_lun *rlun = &pblk->luns[i];
+		int pos = pblk_ppa_to_pos(geo, rlun->bppa);
+		struct nvm_chk_meta *chunk = &line->chks[pos];
+
+		if (!(chunk->state & NVM_CHK_ST_OFFLINE))
+			chunk->state = NVM_CHK_ST_CLOSED;
+	}
+
 	spin_unlock(&line->lock);
 	spin_unlock(&l_mg->gc_lock);
 }
diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
index 73b221c69cfd..bd2592fc3378 100644
--- a/drivers/lightnvm/pblk-init.c
+++ b/drivers/lightnvm/pblk-init.c
@@ -401,6 +401,7 @@ static void pblk_line_meta_free(struct pblk_line *line)
 {
 	kfree(line->blk_bitmap);
 	kfree(line->erase_bitmap);
+	kfree(line->chks);
 }
 
 static void pblk_lines_free(struct pblk *pblk)
@@ -440,55 +441,44 @@ static int pblk_bb_get_tbl(struct nvm_tgt_dev *dev, struct pblk_lun *rlun,
 	return 0;
 }
 
-static void *pblk_bb_get_log(struct pblk *pblk)
+static void *pblk_bb_get_meta(struct pblk *pblk)
 {
 	struct nvm_tgt_dev *dev = pblk->dev;
 	struct nvm_geo *geo = &dev->geo;
-	u8 *log;
+	u8 *meta;
 	int i, nr_blks, blk_per_lun;
 	int ret;
 
 	blk_per_lun = geo->num_chk * geo->pln_mode;
 	nr_blks = blk_per_lun * geo->all_luns;
 
-	log = kmalloc(nr_blks, GFP_KERNEL);
-	if (!log)
+	meta = kmalloc(nr_blks, GFP_KERNEL);
+	if (!meta)
 		return ERR_PTR(-ENOMEM);
 
 	for (i = 0; i < geo->all_luns; i++) {
 		struct pblk_lun *rlun = &pblk->luns[i];
-		u8 *log_pos = log + i * blk_per_lun;
+		u8 *meta_pos = meta + i * blk_per_lun;
 
-		ret = pblk_bb_get_tbl(dev, rlun, log_pos, blk_per_lun);
+		ret = pblk_bb_get_tbl(dev, rlun, meta_pos, blk_per_lun);
 		if (ret) {
-			kfree(log);
+			kfree(meta);
 			return ERR_PTR(-EIO);
 		}
 	}
 
-	return log;
+	return meta;
 }
 
-static int pblk_bb_line(struct pblk *pblk, struct pblk_line *line,
-			u8 *bb_log, int blk_per_line)
+static void *pblk_chunk_get_meta(struct pblk *pblk)
 {
 	struct nvm_tgt_dev *dev = pblk->dev;
 	struct nvm_geo *geo = &dev->geo;
-	int i, bb_cnt = 0;
-	int blk_per_lun = geo->num_chk * geo->pln_mode;
 
-	for (i = 0; i < blk_per_line; i++) {
-		struct pblk_lun *rlun = &pblk->luns[i];
-		u8 *lun_bb_log = bb_log + i * blk_per_lun;
-
-		if (lun_bb_log[line->id] == NVM_BLK_T_FREE)
-			continue;
-
-		set_bit(pblk_ppa_to_pos(geo, rlun->bppa), line->blk_bitmap);
-		bb_cnt++;
-	}
-
-	return bb_cnt;
+	if (geo->version == NVM_OCSSD_SPEC_12)
+		return pblk_bb_get_meta(pblk);
+	else
+		return pblk_chunk_get_info(pblk);
 }
 
 static int pblk_luns_init(struct pblk *pblk, struct ppa_addr *luns)
@@ -696,8 +686,131 @@ static int pblk_lines_alloc_metadata(struct pblk *pblk)
 	return -ENOMEM;
 }
 
-static int pblk_setup_line_meta(struct pblk *pblk, struct pblk_line *line,
-				void *chunk_log, long *nr_bad_blks)
+static int pblk_setup_line_meta_12(struct pblk *pblk, struct pblk_line *line,
+				   void *chunk_meta)
+{
+	struct nvm_tgt_dev *dev = pblk->dev;
+	struct nvm_geo *geo = &dev->geo;
+	struct pblk_line_meta *lm = &pblk->lm;
+	int i, chk_per_lun, nr_bad_chks = 0;
+
+	chk_per_lun = geo->num_chk * geo->pln_mode;
+
+	for (i = 0; i < lm->blk_per_line; i++) {
+		struct pblk_lun *rlun = &pblk->luns[i];
+		struct nvm_chk_meta *chunk;
+		int pos = pblk_ppa_to_pos(geo, rlun->bppa);
+		u8 *lun_bb_meta = chunk_meta + pos * chk_per_lun;
+
+		chunk = &line->chks[pos];
+
+		/*
+		 * In 1.2 spec. chunk state is not persisted by the device. Thus
+		 * some of the values are reset each time pblk is instantiated.
+		 */
+		if (lun_bb_meta[line->id] == NVM_BLK_T_FREE)
+			chunk->state = NVM_CHK_ST_FREE;
+		else
+			chunk->state = NVM_CHK_ST_OFFLINE;
+
+		chunk->type = NVM_CHK_TP_W_SEQ;
+		chunk->wi = 0;
+		chunk->slba = -1;
+		chunk->cnlb = geo->clba;
+		chunk->wp = 0;
+
+		if (!(chunk->state & NVM_CHK_ST_OFFLINE))
+			continue;
+
+		set_bit(pos, line->blk_bitmap);
+		nr_bad_chks++;
+	}
+
+	return nr_bad_chks;
+}
+
+static int pblk_setup_line_meta_20(struct pblk *pblk, struct pblk_line *line,
+				   struct nvm_chk_meta *meta)
+{
+	struct nvm_tgt_dev *dev = pblk->dev;
+	struct nvm_geo *geo = &dev->geo;
+	struct pblk_line_meta *lm = &pblk->lm;
+	int i, nr_bad_chks = 0;
+
+	for (i = 0; i < lm->blk_per_line; i++) {
+		struct pblk_lun *rlun = &pblk->luns[i];
+		struct nvm_chk_meta *chunk;
+		struct nvm_chk_meta *chunk_meta;
+		struct ppa_addr ppa;
+		int pos;
+
+		ppa = rlun->bppa;
+		pos = pblk_ppa_to_pos(geo, ppa);
+		chunk = &line->chks[pos];
+
+		ppa.m.chk = line->id;
+		chunk_meta = pblk_chunk_get_off(pblk, meta, ppa);
+
+		chunk->state = chunk_meta->state;
+		chunk->type = chunk_meta->type;
+		chunk->wi = chunk_meta->wi;
+		chunk->slba = chunk_meta->slba;
+		chunk->cnlb = chunk_meta->cnlb;
+		chunk->wp = chunk_meta->wp;
+
+		if (!(chunk->state & NVM_CHK_ST_OFFLINE))
+			continue;
+
+		if (chunk->type & NVM_CHK_TP_SZ_SPEC) {
+			WARN_ONCE(1, "pblk: custom-sized chunks unsupported\n");
+			continue;
+		}
+
+		set_bit(pos, line->blk_bitmap);
+		nr_bad_chks++;
+	}
+
+	return nr_bad_chks;
+}
+
+static long pblk_setup_line_meta(struct pblk *pblk, struct pblk_line *line,
+				 void *chunk_meta, int line_id)
+{
+	struct nvm_tgt_dev *dev = pblk->dev;
+	struct nvm_geo *geo = &dev->geo;
+	struct pblk_line_mgmt *l_mg = &pblk->l_mg;
+	struct pblk_line_meta *lm = &pblk->lm;
+	long nr_bad_chks, chk_in_line;
+
+	line->pblk = pblk;
+	line->id = line_id;
+	line->type = PBLK_LINETYPE_FREE;
+	line->state = PBLK_LINESTATE_NEW;
+	line->gc_group = PBLK_LINEGC_NONE;
+	line->vsc = &l_mg->vsc_list[line_id];
+	spin_lock_init(&line->lock);
+
+	if (geo->version == NVM_OCSSD_SPEC_12)
+		nr_bad_chks = pblk_setup_line_meta_12(pblk, line, chunk_meta);
+	else
+		nr_bad_chks = pblk_setup_line_meta_20(pblk, line, chunk_meta);
+
+	chk_in_line = lm->blk_per_line - nr_bad_chks;
+	if (nr_bad_chks < 0 || nr_bad_chks > lm->blk_per_line ||
+					chk_in_line < lm->min_blk_line) {
+		line->state = PBLK_LINESTATE_BAD;
+		list_add_tail(&line->list, &l_mg->bad_list);
+		return 0;
+	}
+
+	atomic_set(&line->blk_in_line, chk_in_line);
+	list_add_tail(&line->list, &l_mg->free_list);
+	l_mg->nr_free_lines++;
+
+	return chk_in_line;
+}
+
+static int pblk_alloc_line_meta(struct pblk *pblk, struct pblk_line *line)
 {
 	struct pblk_line_meta *lm = &pblk->lm;
 
@@ -711,7 +824,13 @@ static int pblk_setup_line_meta(struct pblk *pblk, struct pblk_line *line,
 		return -ENOMEM;
 	}
 
-	*nr_bad_blks = pblk_bb_line(pblk, line, chunk_log, lm->blk_per_line);
+	line->chks = kmalloc(lm->blk_per_line * sizeof(struct nvm_chk_meta),
+								GFP_KERNEL);
+	if (!line->chks) {
+		kfree(line->erase_bitmap);
+		kfree(line->blk_bitmap);
+		return -ENOMEM;
+	}
 
 	return 0;
 }
@@ -723,9 +842,9 @@ static int pblk_lines_init(struct pblk *pblk)
 	struct pblk_line_mgmt *l_mg = &pblk->l_mg;
 	struct pblk_line_meta *lm = &pblk->lm;
 	struct pblk_line *line;
-	void *chunk_log;
+	void *chunk_meta;
 	unsigned int smeta_len, emeta_len;
-	long nr_bad_blks = 0, nr_free_blks = 0;
+	long nr_free_chks = 0;
 	int bb_distance, max_write_ppas;
 	int i, ret;
 
@@ -842,53 +961,31 @@ static int pblk_lines_init(struct pblk *pblk)
 		goto fail_free_bb_aux;
 	}
 
-	chunk_log = pblk_bb_get_log(pblk);
-	if (IS_ERR(chunk_log)) {
-		pr_err("pblk: could not get bad block log (%lu)\n",
-							PTR_ERR(chunk_log));
-		ret = PTR_ERR(chunk_log);
+	chunk_meta = pblk_chunk_get_meta(pblk);
+	if (IS_ERR(chunk_meta)) {
+		pr_err("pblk: could not get chunk log (%lu)\n",
							PTR_ERR(chunk_meta));
+		ret = PTR_ERR(chunk_meta);
 		goto fail_free_lines;
 	}
 
 	for (i = 0; i < l_mg->nr_lines; i++) {
-		int chk_in_line;
-
 		line = &pblk->lines[i];
 
-		line->pblk = pblk;
-		line->id = i;
-		line->type = PBLK_LINETYPE_FREE;
-		line->state = PBLK_LINESTATE_FREE;
-		line->gc_group = PBLK_LINEGC_NONE;
-		line->vsc = &l_mg->vsc_list[i];
-		spin_lock_init(&line->lock);
-
-		ret = pblk_setup_line_meta(pblk, line, chunk_log, &nr_bad_blks);
+		ret = pblk_alloc_line_meta(pblk, line);
 		if (ret)
-			goto fail_free_chunk_log;
+			goto fail_free_chunk_meta;
 
-		chk_in_line = lm->blk_per_line - nr_bad_blks;
-		if (nr_bad_blks < 0 || nr_bad_blks > lm->blk_per_line ||
-					chk_in_line < lm->min_blk_line) {
-			line->state = PBLK_LINESTATE_BAD;
-			list_add_tail(&line->list, &l_mg->bad_list);
-			continue;
-		}
-
-		nr_free_blks += chk_in_line;
-		atomic_set(&line->blk_in_line, chk_in_line);
-
-		l_mg->nr_free_lines++;
-		list_add_tail(&line->list, &l_mg->free_list);
+		nr_free_chks += pblk_setup_line_meta(pblk, line, chunk_meta, i);
 	}
 
-	pblk_set_provision(pblk, nr_free_blks);
+	pblk_set_provision(pblk, nr_free_chks);
 
-	kfree(chunk_log);
+	kfree(chunk_meta);
 	return 0;
 
-fail_free_chunk_log:
-	kfree(chunk_log);
+fail_free_chunk_meta:
+	kfree(chunk_meta);
 	while (--i >= 0)
 		pblk_line_meta_free(&pblk->lines[i]);
 fail_free_lines:
diff --git a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk.h
index 6ac64d9eb57e..ee149766b7a0 100644
--- a/drivers/lightnvm/pblk.h
+++ b/drivers/lightnvm/pblk.h
@@ -297,6 +297,7 @@ enum {
 	PBLK_LINETYPE_DATA = 2,
 
 	/* Line state */
+	PBLK_LINESTATE_NEW = 9,
 	PBLK_LINESTATE_FREE = 10,
 	PBLK_LINESTATE_OPEN = 11,
 	PBLK_LINESTATE_CLOSED = 12,
@@ -426,6 +427,8 @@ struct pblk_line {
 
 	unsigned long *lun_bitmap;	/* Bitmap for LUNs mapped in line */
 
+	struct nvm_chk_meta *chks;	/* Chunks forming line */
+
 	struct pblk_smeta *smeta;	/* Start metadata */
 	struct pblk_emeta *emeta;	/* End medatada */
 
@@ -729,6 +732,10 @@ void pblk_set_sec_per_write(struct pblk *pblk, int sec_per_write);
 int pblk_setup_w_rec_rq(struct pblk *pblk, struct nvm_rq *rqd,
 			struct pblk_c_ctx *c_ctx);
 void pblk_discard(struct pblk *pblk, struct bio *bio);
+struct nvm_chk_meta *pblk_chunk_get_info(struct pblk *pblk);
+struct nvm_chk_meta *pblk_chunk_get_off(struct pblk *pblk,
+					      struct nvm_chk_meta *lp,
+					      struct ppa_addr ppa);
 void pblk_log_write_err(struct pblk *pblk, struct nvm_rq *rqd);
 void pblk_log_read_err(struct pblk *pblk, struct nvm_rq *rqd);
 int pblk_submit_io(struct pblk *pblk, struct nvm_rq *rqd);
diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
index 9fe37f7e8185..c120b2243758 100644
--- a/include/linux/lightnvm.h
+++ b/include/linux/lightnvm.h
@@ -232,6 +232,19 @@ struct nvm_addr_format {
 	u64	rsv_mask[2];
 };
 
+enum {
+	/* Chunk states */
+	NVM_CHK_ST_FREE =	1 << 0,
+	NVM_CHK_ST_CLOSED =	1 << 1,
+	NVM_CHK_ST_OPEN =	1 << 2,
+	NVM_CHK_ST_OFFLINE =	1 << 3,
+
+	/* Chunk types */
+	NVM_CHK_TP_W_SEQ =	1 << 0,
+	NVM_CHK_TP_W_RAN =	1 << 1,
+	NVM_CHK_TP_SZ_SPEC =	1 << 4,
+};
+
 /*
  * Note: The structure size is linked to nvme_nvm_chk_meta such that the same
  * buffer can be used when converting from little endian to cpu addressing.
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 71+ messages in thread
*pblk); +struct nvm_chk_meta *pblk_chunk_get_off(struct pblk *pblk, + struct nvm_chk_meta *lp, + struct ppa_addr ppa); void pblk_log_write_err(struct pblk *pblk, struct nvm_rq *rqd); void pblk_log_read_err(struct pblk *pblk, struct nvm_rq *rqd); int pblk_submit_io(struct pblk *pblk, struct nvm_rq *rqd); diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h index 9fe37f7e8185..c120b2243758 100644 --- a/include/linux/lightnvm.h +++ b/include/linux/lightnvm.h @@ -232,6 +232,19 @@ struct nvm_addr_format { u64 rsv_mask[2]; }; +enum { + /* Chunk states */ + NVM_CHK_ST_FREE = 1 << 0, + NVM_CHK_ST_CLOSED = 1 << 1, + NVM_CHK_ST_OPEN = 1 << 2, + NVM_CHK_ST_OFFLINE = 1 << 3, + + /* Chunk types */ + NVM_CHK_TP_W_SEQ = 1 << 0, + NVM_CHK_TP_W_RAN = 1 << 1, + NVM_CHK_TP_SZ_SPEC = 1 << 4, +}; + /* * Note: The structure size is linked to nvme_nvm_chk_meta such that the same * buffer can be used when converting from little endian to cpu addressing. -- 2.7.4
* Re: [PATCH 13/15] lightnvm: pblk: implement get log report chunk 2018-02-28 15:49 ` Javier González @ 2018-03-01 10:45 ` Matias Bjørling -1 siblings, 0 replies; 71+ messages in thread From: Matias Bjørling @ 2018-03-01 10:45 UTC (permalink / raw) To: Javier González Cc: linux-block, linux-kernel, linux-nvme, Javier González On 02/28/2018 04:49 PM, Javier González wrote: > In preparation of pblk supporting 2.0, implement the get log report > chunk in pblk. Also, define the chunk states as given in the 2.0 spec. > > Signed-off-by: Javier González <javier@cnexlabs.com> > --- > drivers/lightnvm/pblk-core.c | 139 +++++++++++++++++++++++---- > drivers/lightnvm/pblk-init.c | 223 +++++++++++++++++++++++++++++++------------ > drivers/lightnvm/pblk.h | 7 ++ > include/linux/lightnvm.h | 13 +++ > 4 files changed, 301 insertions(+), 81 deletions(-) > > diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c > index 2e10b18b61e3..cd663855ee88 100644 > --- a/drivers/lightnvm/pblk-core.c > +++ b/drivers/lightnvm/pblk-core.c > @@ -44,11 +44,12 @@ static void pblk_line_mark_bb(struct work_struct *work) > } > > static void pblk_mark_bb(struct pblk *pblk, struct pblk_line *line, > - struct ppa_addr *ppa) > + struct ppa_addr ppa_addr) > { > struct nvm_tgt_dev *dev = pblk->dev; > struct nvm_geo *geo = &dev->geo; > - int pos = pblk_ppa_to_pos(geo, *ppa); > + struct ppa_addr *ppa; > + int pos = pblk_ppa_to_pos(geo, ppa_addr); > > pr_debug("pblk: erase failed: line:%d, pos:%d\n", line->id, pos); > atomic_long_inc(&pblk->erase_failed); > @@ -58,26 +59,38 @@ static void pblk_mark_bb(struct pblk *pblk, struct pblk_line *line, > pr_err("pblk: attempted to erase bb: line:%d, pos:%d\n", > line->id, pos); > > + /* Not necessary to mark bad blocks on 2.0 spec. 
*/ > + if (geo->version == NVM_OCSSD_SPEC_20) > + return; > + > + ppa = kmalloc(sizeof(struct ppa_addr), GFP_ATOMIC); > + if (!ppa) > + return; > + > + *ppa = ppa_addr; > pblk_gen_run_ws(pblk, NULL, ppa, pblk_line_mark_bb, > GFP_ATOMIC, pblk->bb_wq); > } > > static void __pblk_end_io_erase(struct pblk *pblk, struct nvm_rq *rqd) > { > + struct nvm_tgt_dev *dev = pblk->dev; > + struct nvm_geo *geo = &dev->geo; > + struct nvm_chk_meta *chunk; > struct pblk_line *line; > + int pos; > > line = &pblk->lines[pblk_ppa_to_line(rqd->ppa_addr)]; > + pos = pblk_ppa_to_pos(geo, rqd->ppa_addr); > + chunk = &line->chks[pos]; > + > atomic_dec(&line->left_seblks); > > if (rqd->error) { > - struct ppa_addr *ppa; > - > - ppa = kmalloc(sizeof(struct ppa_addr), GFP_ATOMIC); > - if (!ppa) > - return; > - > - *ppa = rqd->ppa_addr; > - pblk_mark_bb(pblk, line, ppa); > + chunk->state = NVM_CHK_ST_OFFLINE; > + pblk_mark_bb(pblk, line, rqd->ppa_addr); > + } else { > + chunk->state = NVM_CHK_ST_FREE; > } > > atomic_dec(&pblk->inflight_io); > @@ -92,6 +105,50 @@ static void pblk_end_io_erase(struct nvm_rq *rqd) > mempool_free(rqd, pblk->e_rq_pool); > } > > +/* > + * Get information for all chunks from the device. > + * > + * The caller is responsible for freeing the returned structure > + */ > +struct nvm_chk_meta *pblk_chunk_get_info(struct pblk *pblk) > +{ > + struct nvm_tgt_dev *dev = pblk->dev; > + struct nvm_geo *geo = &dev->geo; > + struct nvm_chk_meta *meta; > + struct ppa_addr ppa; > + unsigned long len; > + int ret; > + > + ppa.ppa = 0; > + > + len = geo->all_chunks * sizeof(*meta); > + meta = kzalloc(len, GFP_KERNEL); > + if (!meta) > + return ERR_PTR(-ENOMEM); > + > + ret = nvm_get_chunk_meta(dev, meta, ppa, geo->all_chunks); > + if (ret) { > + pr_err("pblk: could not get chunk metadata (%d)\n", ret); The error message can be omitted here. If there is an error, nvme_nvm_get_chk_meta will already have barfed. 
> + kfree(meta); > + return ERR_PTR(-EIO); > + } > + > + return meta; > +} > + > +struct nvm_chk_meta *pblk_chunk_get_off(struct pblk *pblk, > + struct nvm_chk_meta *meta, > + struct ppa_addr ppa) > +{ > + struct nvm_tgt_dev *dev = pblk->dev; > + struct nvm_geo *geo = &dev->geo; > + int ch_off = ppa.m.grp * geo->num_chk * geo->num_lun; > + int lun_off = ppa.m.pu * geo->num_chk; > + int chk_off = ppa.m.chk; > + > + return meta + ch_off + lun_off + chk_off; > +} > + > void __pblk_map_invalidate(struct pblk *pblk, struct pblk_line *line, > u64 paddr) > { > @@ -1094,10 +1151,34 @@ static int pblk_line_init_bb(struct pblk *pblk, struct pblk_line *line, > return 1; > } > > +static int pblk_prepare_new_line(struct pblk *pblk, struct pblk_line *line) > +{ > + struct pblk_line_meta *lm = &pblk->lm; > + struct nvm_tgt_dev *dev = pblk->dev; > + struct nvm_geo *geo = &dev->geo; > + int blk_to_erase = atomic_read(&line->blk_in_line); > + int i; > + > + for (i = 0; i < lm->blk_per_line; i++) { > + struct pblk_lun *rlun = &pblk->luns[i]; > + int pos = pblk_ppa_to_pos(geo, rlun->bppa); > + int state = line->chks[pos].state; > + > + /* Free chunks should not be erased */ > + if (state & NVM_CHK_ST_FREE) { > + set_bit(pblk_ppa_to_pos(geo, rlun->bppa), > + line->erase_bitmap); > + blk_to_erase--; > + } > + } > + > + return blk_to_erase; > +} > + > static int pblk_line_prepare(struct pblk *pblk, struct pblk_line *line) > { > struct pblk_line_meta *lm = &pblk->lm; > - int blk_in_line = atomic_read(&line->blk_in_line); > + int blk_to_erase; > > line->map_bitmap = kzalloc(lm->sec_bitmap_len, GFP_ATOMIC); > if (!line->map_bitmap) > @@ -1110,7 +1191,21 @@ static int pblk_line_prepare(struct pblk *pblk, struct pblk_line *line) > return -ENOMEM; > } > > + /* Bad blocks do not need to be erased */ > + bitmap_copy(line->erase_bitmap, line->blk_bitmap, lm->blk_per_line); > + > spin_lock(&line->lock); > + > + /* If we have not written to this line, we need to mark up free chunks > + * as already 
erased > + */ > + if (line->state == PBLK_LINESTATE_NEW) { > + blk_to_erase = pblk_prepare_new_line(pblk, line); > + line->state = PBLK_LINESTATE_FREE; > + } else { > + blk_to_erase = atomic_read(&line->blk_in_line); > + } > + > if (line->state != PBLK_LINESTATE_FREE) { > kfree(line->map_bitmap); > kfree(line->invalid_bitmap); > @@ -1122,15 +1217,12 @@ static int pblk_line_prepare(struct pblk *pblk, struct pblk_line *line) > > line->state = PBLK_LINESTATE_OPEN; > > - atomic_set(&line->left_eblks, blk_in_line); > - atomic_set(&line->left_seblks, blk_in_line); > + atomic_set(&line->left_eblks, blk_to_erase); > + atomic_set(&line->left_seblks, blk_to_erase); > > line->meta_distance = lm->meta_distance; > spin_unlock(&line->lock); > > - /* Bad blocks do not need to be erased */ > - bitmap_copy(line->erase_bitmap, line->blk_bitmap, lm->blk_per_line); > - > kref_init(&line->ref); > > return 0; > @@ -1586,12 +1678,14 @@ static void pblk_line_should_sync_meta(struct pblk *pblk) > > void pblk_line_close(struct pblk *pblk, struct pblk_line *line) > { > + struct nvm_tgt_dev *dev = pblk->dev; > + struct nvm_geo *geo = &dev->geo; > + struct pblk_line_meta *lm = &pblk->lm; > struct pblk_line_mgmt *l_mg = &pblk->l_mg; > struct list_head *move_list; > + int i; > > #ifdef CONFIG_NVM_DEBUG > - struct pblk_line_meta *lm = &pblk->lm; > - > WARN(!bitmap_full(line->map_bitmap, lm->sec_per_line), > "pblk: corrupt closed line %d\n", line->id); > #endif > @@ -1613,6 +1707,15 @@ void pblk_line_close(struct pblk *pblk, struct pblk_line *line) > line->smeta = NULL; > line->emeta = NULL; > > + for (i = 0; i < lm->blk_per_line; i++) { > + struct pblk_lun *rlun = &pblk->luns[i]; > + int pos = pblk_ppa_to_pos(geo, rlun->bppa); > + int state = line->chks[pos].state; > + > + if (!(state & NVM_CHK_ST_OFFLINE)) > + state = NVM_CHK_ST_CLOSED; > + } > + > spin_unlock(&line->lock); > spin_unlock(&l_mg->gc_lock); > } > diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c > index 
73b221c69cfd..bd2592fc3378 100644 > --- a/drivers/lightnvm/pblk-init.c > +++ b/drivers/lightnvm/pblk-init.c > @@ -401,6 +401,7 @@ static void pblk_line_meta_free(struct pblk_line *line) > { > kfree(line->blk_bitmap); > kfree(line->erase_bitmap); > + kfree(line->chks); > } > > static void pblk_lines_free(struct pblk *pblk) > @@ -440,55 +441,44 @@ static int pblk_bb_get_tbl(struct nvm_tgt_dev *dev, struct pblk_lun *rlun, > return 0; > } > > -static void *pblk_bb_get_log(struct pblk *pblk) > +static void *pblk_bb_get_meta(struct pblk *pblk) > { > struct nvm_tgt_dev *dev = pblk->dev; > struct nvm_geo *geo = &dev->geo; > - u8 *log; > + u8 *meta; > int i, nr_blks, blk_per_lun; > int ret; > > blk_per_lun = geo->num_chk * geo->pln_mode; > nr_blks = blk_per_lun * geo->all_luns; > > - log = kmalloc(nr_blks, GFP_KERNEL); > - if (!log) > + meta = kmalloc(nr_blks, GFP_KERNEL); > + if (!meta) > return ERR_PTR(-ENOMEM); > > for (i = 0; i < geo->all_luns; i++) { > struct pblk_lun *rlun = &pblk->luns[i]; > - u8 *log_pos = log + i * blk_per_lun; > + u8 *meta_pos = meta + i * blk_per_lun; > > - ret = pblk_bb_get_tbl(dev, rlun, log_pos, blk_per_lun); > + ret = pblk_bb_get_tbl(dev, rlun, meta_pos, blk_per_lun); > if (ret) { > - kfree(log); > + kfree(meta); > return ERR_PTR(-EIO); > } > } > > - return log; > + return meta; > } > > -static int pblk_bb_line(struct pblk *pblk, struct pblk_line *line, > - u8 *bb_log, int blk_per_line) > +static void *pblk_chunk_get_meta(struct pblk *pblk) > { > struct nvm_tgt_dev *dev = pblk->dev; > struct nvm_geo *geo = &dev->geo; > - int i, bb_cnt = 0; > - int blk_per_lun = geo->num_chk * geo->pln_mode; > > - for (i = 0; i < blk_per_line; i++) { > - struct pblk_lun *rlun = &pblk->luns[i]; > - u8 *lun_bb_log = bb_log + i * blk_per_lun; > - > - if (lun_bb_log[line->id] == NVM_BLK_T_FREE) > - continue; > - > - set_bit(pblk_ppa_to_pos(geo, rlun->bppa), line->blk_bitmap); > - bb_cnt++; > - } > - > - return bb_cnt; > + if (geo->version == NVM_OCSSD_SPEC_12) > + 
return pblk_bb_get_meta(pblk); > + else > + return pblk_chunk_get_info(pblk); > } > > static int pblk_luns_init(struct pblk *pblk, struct ppa_addr *luns) > @@ -696,8 +686,131 @@ static int pblk_lines_alloc_metadata(struct pblk *pblk) > return -ENOMEM; > } > > -static int pblk_setup_line_meta(struct pblk *pblk, struct pblk_line *line, > - void *chunk_log, long *nr_bad_blks) > +static int pblk_setup_line_meta_12(struct pblk *pblk, struct pblk_line *line, > + void *chunk_meta) > +{ > + struct nvm_tgt_dev *dev = pblk->dev; > + struct nvm_geo *geo = &dev->geo; > + struct pblk_line_meta *lm = &pblk->lm; > + int i, chk_per_lun, nr_bad_chks = 0; > + > + chk_per_lun = geo->num_chk * geo->pln_mode; > + > + for (i = 0; i < lm->blk_per_line; i++) { > + struct pblk_lun *rlun = &pblk->luns[i]; > + struct nvm_chk_meta *chunk; > + int pos = pblk_ppa_to_pos(geo, rlun->bppa); > + u8 *lun_bb_meta = chunk_meta + pos * chk_per_lun; > + > + chunk = &line->chks[pos]; > + > + /* > + * In 1.2 spec. chunk state is not persisted by the device. Thus > + * some of the values are reset each time pblk is instantiated. 
> + */ > + if (lun_bb_meta[line->id] == NVM_BLK_T_FREE) > + chunk->state = NVM_CHK_ST_FREE; > + else > + chunk->state = NVM_CHK_ST_OFFLINE; > + > + chunk->type = NVM_CHK_TP_W_SEQ; > + chunk->wi = 0; > + chunk->slba = -1; > + chunk->cnlb = geo->clba; > + chunk->wp = 0; > + > + if (!(chunk->state & NVM_CHK_ST_OFFLINE)) > + continue; > + > + set_bit(pos, line->blk_bitmap); > + nr_bad_chks++; > + } > + > + return nr_bad_chks; > +} > + > +static int pblk_setup_line_meta_20(struct pblk *pblk, struct pblk_line *line, > + struct nvm_chk_meta *meta) > +{ > + struct nvm_tgt_dev *dev = pblk->dev; > + struct nvm_geo *geo = &dev->geo; > + struct pblk_line_meta *lm = &pblk->lm; > + int i, nr_bad_chks = 0; > + > + for (i = 0; i < lm->blk_per_line; i++) { > + struct pblk_lun *rlun = &pblk->luns[i]; > + struct nvm_chk_meta *chunk; > + struct nvm_chk_meta *chunk_meta; > + struct ppa_addr ppa; > + int pos; > + > + ppa = rlun->bppa; > + pos = pblk_ppa_to_pos(geo, ppa); > + chunk = &line->chks[pos]; > + > + ppa.m.chk = line->id; > + chunk_meta = pblk_chunk_get_off(pblk, meta, ppa); > + > + chunk->state = chunk_meta->state; > + chunk->type = chunk_meta->type; > + chunk->wi = chunk_meta->wi; > + chunk->slba = chunk_meta->slba; > + chunk->cnlb = chunk_meta->cnlb; > + chunk->wp = chunk_meta->wp; > + > + if (!(chunk->state & NVM_CHK_ST_OFFLINE)) > + continue; > + > + if (chunk->type & NVM_CHK_TP_SZ_SPEC) { > + WARN_ONCE(1, "pblk: custom-sized chunks unsupported\n"); > + continue; > + } > + > + set_bit(pos, line->blk_bitmap); > + nr_bad_chks++; > + } > + > + return nr_bad_chks; > +} > + > +static long pblk_setup_line_meta(struct pblk *pblk, struct pblk_line *line, > + void *chunk_meta, int line_id) > +{ > + struct nvm_tgt_dev *dev = pblk->dev; > + struct nvm_geo *geo = &dev->geo; > + struct pblk_line_mgmt *l_mg = &pblk->l_mg; > + struct pblk_line_meta *lm = &pblk->lm; > + long nr_bad_chks, chk_in_line; > + > + line->pblk = pblk; > + line->id = line_id; > + line->type = PBLK_LINETYPE_FREE; > 
+ line->state = PBLK_LINESTATE_NEW; > + line->gc_group = PBLK_LINEGC_NONE; > + line->vsc = &l_mg->vsc_list[line_id]; > + spin_lock_init(&line->lock); > + > + if (geo->version == NVM_OCSSD_SPEC_12) > + nr_bad_chks = pblk_setup_line_meta_12(pblk, line, chunk_meta); > + else > + nr_bad_chks = pblk_setup_line_meta_20(pblk, line, chunk_meta); > + > + chk_in_line = lm->blk_per_line - nr_bad_chks; > + if (nr_bad_chks < 0 || nr_bad_chks > lm->blk_per_line || > + chk_in_line < lm->min_blk_line) { > + line->state = PBLK_LINESTATE_BAD; > + list_add_tail(&line->list, &l_mg->bad_list); > + return 0; > + } > + > + atomic_set(&line->blk_in_line, chk_in_line); > + list_add_tail(&line->list, &l_mg->free_list); > + l_mg->nr_free_lines++; > + > + return chk_in_line; > +} > + > +static int pblk_alloc_line_meta(struct pblk *pblk, struct pblk_line *line) > { > struct pblk_line_meta *lm = &pblk->lm; > > @@ -711,7 +824,13 @@ static int pblk_setup_line_meta(struct pblk *pblk, struct pblk_line *line, > return -ENOMEM; > } > > - *nr_bad_blks = pblk_bb_line(pblk, line, chunk_log, lm->blk_per_line); > + line->chks = kmalloc(lm->blk_per_line * sizeof(struct nvm_chk_meta), > + GFP_KERNEL); > + if (!line->chks) { > + kfree(line->erase_bitmap); > + kfree(line->blk_bitmap); > + return -ENOMEM; > + } > > return 0; > } > @@ -723,9 +842,9 @@ static int pblk_lines_init(struct pblk *pblk) > struct pblk_line_mgmt *l_mg = &pblk->l_mg; > struct pblk_line_meta *lm = &pblk->lm; > struct pblk_line *line; > - void *chunk_log; > + void *chunk_meta; > unsigned int smeta_len, emeta_len; > - long nr_bad_blks = 0, nr_free_blks = 0; > + long nr_free_chks = 0; > int bb_distance, max_write_ppas; > int i, ret; > > @@ -842,53 +961,31 @@ static int pblk_lines_init(struct pblk *pblk) > goto fail_free_bb_aux; > } > > - chunk_log = pblk_bb_get_log(pblk); > - if (IS_ERR(chunk_log)) { > - pr_err("pblk: could not get bad block log (%lu)\n", > - PTR_ERR(chunk_log)); > - ret = PTR_ERR(chunk_log); > + chunk_meta = 
pblk_chunk_get_meta(pblk); > + if (IS_ERR(chunk_meta)) { > + pr_err("pblk: could not get chunk log (%lu)\n", > + PTR_ERR(chunk_meta)); > + ret = PTR_ERR(chunk_meta); > goto fail_free_lines; > } > > for (i = 0; i < l_mg->nr_lines; i++) { > - int chk_in_line; > - > line = &pblk->lines[i]; > > - line->pblk = pblk; > - line->id = i; > - line->type = PBLK_LINETYPE_FREE; > - line->state = PBLK_LINESTATE_FREE; > - line->gc_group = PBLK_LINEGC_NONE; > - line->vsc = &l_mg->vsc_list[i]; > - spin_lock_init(&line->lock); > - > - ret = pblk_setup_line_meta(pblk, line, chunk_log, &nr_bad_blks); > + ret = pblk_alloc_line_meta(pblk, line); > if (ret) > - goto fail_free_chunk_log; > + goto fail_free_chunk_meta; > > - chk_in_line = lm->blk_per_line - nr_bad_blks; > - if (nr_bad_blks < 0 || nr_bad_blks > lm->blk_per_line || > - chk_in_line < lm->min_blk_line) { > - line->state = PBLK_LINESTATE_BAD; > - list_add_tail(&line->list, &l_mg->bad_list); > - continue; > - } > - > - nr_free_blks += chk_in_line; > - atomic_set(&line->blk_in_line, chk_in_line); > - > - l_mg->nr_free_lines++; > - list_add_tail(&line->list, &l_mg->free_list); > + nr_free_chks += pblk_setup_line_meta(pblk, line, chunk_meta, i); > } > > - pblk_set_provision(pblk, nr_free_blks); > + pblk_set_provision(pblk, nr_free_chks); > > - kfree(chunk_log); > + kfree(chunk_meta); > return 0; > > -fail_free_chunk_log: > - kfree(chunk_log); > +fail_free_chunk_meta: > + kfree(chunk_meta); > while (--i >= 0) > pblk_line_meta_free(&pblk->lines[i]); > fail_free_lines: > diff --git a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk.h > index 6ac64d9eb57e..ee149766b7a0 100644 > --- a/drivers/lightnvm/pblk.h > +++ b/drivers/lightnvm/pblk.h > @@ -297,6 +297,7 @@ enum { > PBLK_LINETYPE_DATA = 2, > > /* Line state */ > + PBLK_LINESTATE_NEW = 9, > PBLK_LINESTATE_FREE = 10, > PBLK_LINESTATE_OPEN = 11, > PBLK_LINESTATE_CLOSED = 12, > @@ -426,6 +427,8 @@ struct pblk_line { > > unsigned long *lun_bitmap; /* Bitmap for LUNs mapped in line */ > > 
+ struct nvm_chk_meta *chks; /* Chunks forming line */ > + > struct pblk_smeta *smeta; /* Start metadata */ > struct pblk_emeta *emeta; /* End medatada */ > > @@ -729,6 +732,10 @@ void pblk_set_sec_per_write(struct pblk *pblk, int sec_per_write); > int pblk_setup_w_rec_rq(struct pblk *pblk, struct nvm_rq *rqd, > struct pblk_c_ctx *c_ctx); > void pblk_discard(struct pblk *pblk, struct bio *bio); > +struct nvm_chk_meta *pblk_chunk_get_info(struct pblk *pblk); > +struct nvm_chk_meta *pblk_chunk_get_off(struct pblk *pblk, > + struct nvm_chk_meta *lp, > + struct ppa_addr ppa); > void pblk_log_write_err(struct pblk *pblk, struct nvm_rq *rqd); > void pblk_log_read_err(struct pblk *pblk, struct nvm_rq *rqd); > int pblk_submit_io(struct pblk *pblk, struct nvm_rq *rqd); > diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h > index 9fe37f7e8185..c120b2243758 100644 > --- a/include/linux/lightnvm.h > +++ b/include/linux/lightnvm.h > @@ -232,6 +232,19 @@ struct nvm_addr_format { > u64 rsv_mask[2]; > }; > > +enum { > + /* Chunk states */ > + NVM_CHK_ST_FREE = 1 << 0, > + NVM_CHK_ST_CLOSED = 1 << 1, > + NVM_CHK_ST_OPEN = 1 << 2, > + NVM_CHK_ST_OFFLINE = 1 << 3, > + > + /* Chunk types */ > + NVM_CHK_TP_W_SEQ = 1 << 0, > + NVM_CHK_TP_W_RAN = 1 << 1, > + NVM_CHK_TP_SZ_SPEC = 1 << 4, > +}; > + > /* > * Note: The structure size is linked to nvme_nvm_chk_meta such that the same > * buffer can be used when converting from little endian to cpu addressing. >
+ line->state = PBLK_LINESTATE_NEW; > + line->gc_group = PBLK_LINEGC_NONE; > + line->vsc = &l_mg->vsc_list[line_id]; > + spin_lock_init(&line->lock); > + > + if (geo->version == NVM_OCSSD_SPEC_12) > + nr_bad_chks = pblk_setup_line_meta_12(pblk, line, chunk_meta); > + else > + nr_bad_chks = pblk_setup_line_meta_20(pblk, line, chunk_meta); > + > + chk_in_line = lm->blk_per_line - nr_bad_chks; > + if (nr_bad_chks < 0 || nr_bad_chks > lm->blk_per_line || > + chk_in_line < lm->min_blk_line) { > + line->state = PBLK_LINESTATE_BAD; > + list_add_tail(&line->list, &l_mg->bad_list); > + return 0; > + } > + > + atomic_set(&line->blk_in_line, chk_in_line); > + list_add_tail(&line->list, &l_mg->free_list); > + l_mg->nr_free_lines++; > + > + return chk_in_line; > +} > + > +static int pblk_alloc_line_meta(struct pblk *pblk, struct pblk_line *line) > { > struct pblk_line_meta *lm = &pblk->lm; > > @@ -711,7 +824,13 @@ static int pblk_setup_line_meta(struct pblk *pblk, struct pblk_line *line, > return -ENOMEM; > } > > - *nr_bad_blks = pblk_bb_line(pblk, line, chunk_log, lm->blk_per_line); > + line->chks = kmalloc(lm->blk_per_line * sizeof(struct nvm_chk_meta), > + GFP_KERNEL); > + if (!line->chks) { > + kfree(line->erase_bitmap); > + kfree(line->blk_bitmap); > + return -ENOMEM; > + } > > return 0; > } > @@ -723,9 +842,9 @@ static int pblk_lines_init(struct pblk *pblk) > struct pblk_line_mgmt *l_mg = &pblk->l_mg; > struct pblk_line_meta *lm = &pblk->lm; > struct pblk_line *line; > - void *chunk_log; > + void *chunk_meta; > unsigned int smeta_len, emeta_len; > - long nr_bad_blks = 0, nr_free_blks = 0; > + long nr_free_chks = 0; > int bb_distance, max_write_ppas; > int i, ret; > > @@ -842,53 +961,31 @@ static int pblk_lines_init(struct pblk *pblk) > goto fail_free_bb_aux; > } > > - chunk_log = pblk_bb_get_log(pblk); > - if (IS_ERR(chunk_log)) { > - pr_err("pblk: could not get bad block log (%lu)\n", > - PTR_ERR(chunk_log)); > - ret = PTR_ERR(chunk_log); > + chunk_meta = 
pblk_chunk_get_meta(pblk); > + if (IS_ERR(chunk_meta)) { > + pr_err("pblk: could not get chunk log (%lu)\n", > + PTR_ERR(chunk_meta)); > + ret = PTR_ERR(chunk_meta); > goto fail_free_lines; > } > > for (i = 0; i < l_mg->nr_lines; i++) { > - int chk_in_line; > - > line = &pblk->lines[i]; > > - line->pblk = pblk; > - line->id = i; > - line->type = PBLK_LINETYPE_FREE; > - line->state = PBLK_LINESTATE_FREE; > - line->gc_group = PBLK_LINEGC_NONE; > - line->vsc = &l_mg->vsc_list[i]; > - spin_lock_init(&line->lock); > - > - ret = pblk_setup_line_meta(pblk, line, chunk_log, &nr_bad_blks); > + ret = pblk_alloc_line_meta(pblk, line); > if (ret) > - goto fail_free_chunk_log; > + goto fail_free_chunk_meta; > > - chk_in_line = lm->blk_per_line - nr_bad_blks; > - if (nr_bad_blks < 0 || nr_bad_blks > lm->blk_per_line || > - chk_in_line < lm->min_blk_line) { > - line->state = PBLK_LINESTATE_BAD; > - list_add_tail(&line->list, &l_mg->bad_list); > - continue; > - } > - > - nr_free_blks += chk_in_line; > - atomic_set(&line->blk_in_line, chk_in_line); > - > - l_mg->nr_free_lines++; > - list_add_tail(&line->list, &l_mg->free_list); > + nr_free_chks += pblk_setup_line_meta(pblk, line, chunk_meta, i); > } > > - pblk_set_provision(pblk, nr_free_blks); > + pblk_set_provision(pblk, nr_free_chks); > > - kfree(chunk_log); > + kfree(chunk_meta); > return 0; > > -fail_free_chunk_log: > - kfree(chunk_log); > +fail_free_chunk_meta: > + kfree(chunk_meta); > while (--i >= 0) > pblk_line_meta_free(&pblk->lines[i]); > fail_free_lines: > diff --git a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk.h > index 6ac64d9eb57e..ee149766b7a0 100644 > --- a/drivers/lightnvm/pblk.h > +++ b/drivers/lightnvm/pblk.h > @@ -297,6 +297,7 @@ enum { > PBLK_LINETYPE_DATA = 2, > > /* Line state */ > + PBLK_LINESTATE_NEW = 9, > PBLK_LINESTATE_FREE = 10, > PBLK_LINESTATE_OPEN = 11, > PBLK_LINESTATE_CLOSED = 12, > @@ -426,6 +427,8 @@ struct pblk_line { > > unsigned long *lun_bitmap; /* Bitmap for LUNs mapped in line */ > > 
+ struct nvm_chk_meta *chks; /* Chunks forming line */ > + > struct pblk_smeta *smeta; /* Start metadata */ > struct pblk_emeta *emeta; /* End medatada */ > > @@ -729,6 +732,10 @@ void pblk_set_sec_per_write(struct pblk *pblk, int sec_per_write); > int pblk_setup_w_rec_rq(struct pblk *pblk, struct nvm_rq *rqd, > struct pblk_c_ctx *c_ctx); > void pblk_discard(struct pblk *pblk, struct bio *bio); > +struct nvm_chk_meta *pblk_chunk_get_info(struct pblk *pblk); > +struct nvm_chk_meta *pblk_chunk_get_off(struct pblk *pblk, > + struct nvm_chk_meta *lp, > + struct ppa_addr ppa); > void pblk_log_write_err(struct pblk *pblk, struct nvm_rq *rqd); > void pblk_log_read_err(struct pblk *pblk, struct nvm_rq *rqd); > int pblk_submit_io(struct pblk *pblk, struct nvm_rq *rqd); > diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h > index 9fe37f7e8185..c120b2243758 100644 > --- a/include/linux/lightnvm.h > +++ b/include/linux/lightnvm.h > @@ -232,6 +232,19 @@ struct nvm_addr_format { > u64 rsv_mask[2]; > }; > > +enum { > + /* Chunk states */ > + NVM_CHK_ST_FREE = 1 << 0, > + NVM_CHK_ST_CLOSED = 1 << 1, > + NVM_CHK_ST_OPEN = 1 << 2, > + NVM_CHK_ST_OFFLINE = 1 << 3, > + > + /* Chunk types */ > + NVM_CHK_TP_W_SEQ = 1 << 0, > + NVM_CHK_TP_W_RAN = 1 << 1, > + NVM_CHK_TP_SZ_SPEC = 1 << 4, > +}; > + > /* > * Note: The structure size is linked to nvme_nvm_chk_meta such that the same > * buffer can be used when converting from little endian to cpu addressing. > ^ permalink raw reply [flat|nested] 71+ messages in thread
* [PATCH 14/15] lightnvm: pblk: refactor init/exit sequences 2018-02-28 15:49 ` Javier González (?) @ 2018-02-28 15:49 ` Javier González -1 siblings, 0 replies; 71+ messages in thread From: Javier González @ 2018-02-28 15:49 UTC (permalink / raw) To: mb; +Cc: linux-block, Javier González, linux-kernel, linux-nvme UmVmYWN0b3IgaW5pdCBhbmQgZXhpdCBzZXF1ZW5jZXMgdG8gaW1wcm92ZSByZWFkYWJpbGl0eS4g SW4gdGhlIHdheSwgZml4CmJhZCBmcmVlIG9yZGVyaW5nIG9uIHRoZSBpbml0IGVycm9yIHBhdGgu CgpTaWduZWQtb2ZmLWJ5OiBKYXZpZXIgR29uesOhbGV6IDxqYXZpZXJAY25leGxhYnMuY29tPgot LS0KIGRyaXZlcnMvbGlnaHRudm0vcGJsay1pbml0LmMgfCA1MzMgKysrKysrKysrKysrKysrKysr KysrLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogMSBmaWxlIGNoYW5nZWQsIDI2NiBpbnNlcnRpb25z KCspLCAyNjcgZGVsZXRpb25zKC0pCgpkaWZmIC0tZ2l0IGEvZHJpdmVycy9saWdodG52bS9wYmxr LWluaXQuYyBiL2RyaXZlcnMvbGlnaHRudm0vcGJsay1pbml0LmMKaW5kZXggYmQyNTkyZmMzMzc4 Li5iM2UxNWVmNjNkZjMgMTAwNjQ0Ci0tLSBhL2RyaXZlcnMvbGlnaHRudm0vcGJsay1pbml0LmMK KysrIGIvZHJpdmVycy9saWdodG52bS9wYmxrLWluaXQuYwpAQCAtMTAzLDcgKzEwMyw0MCBAQCBz dGF0aWMgdm9pZCBwYmxrX2wycF9mcmVlKHN0cnVjdCBwYmxrICpwYmxrKQogCXZmcmVlKHBibGst PnRyYW5zX21hcCk7CiB9CiAKLXN0YXRpYyBpbnQgcGJsa19sMnBfaW5pdChzdHJ1Y3QgcGJsayAq cGJsaykKK3N0YXRpYyBpbnQgcGJsa19sMnBfcmVjb3ZlcihzdHJ1Y3QgcGJsayAqcGJsaywgYm9v bCBmYWN0b3J5X2luaXQpCit7CisJc3RydWN0IHBibGtfbGluZSAqbGluZSA9IE5VTEw7CisKKwlp ZiAoZmFjdG9yeV9pbml0KSB7CisJCXBibGtfc2V0dXBfdXVpZChwYmxrKTsKKwl9IGVsc2Ugewor CQlsaW5lID0gcGJsa19yZWNvdl9sMnAocGJsayk7CisJCWlmIChJU19FUlIobGluZSkpIHsKKwkJ CXByX2VycigicGJsazogY291bGQgbm90IHJlY292ZXIgbDJwIHRhYmxlXG4iKTsKKwkJCXJldHVy biAtRUZBVUxUOworCQl9CisJfQorCisjaWZkZWYgQ09ORklHX05WTV9ERUJVRworCXByX2luZm8o InBibGsgaW5pdDogTDJQIENSQzogJXhcbiIsIHBibGtfbDJwX2NyYyhwYmxrKSk7CisjZW5kaWYK KworCS8qIEZyZWUgZnVsbCBsaW5lcyBkaXJlY3RseSBhcyBHQyBoYXMgbm90IGJlZW4gc3RhcnRl ZCB5ZXQgKi8KKwlwYmxrX2djX2ZyZWVfZnVsbF9saW5lcyhwYmxrKTsKKworCWlmICghbGluZSkg eworCQkvKiBDb25maWd1cmUgbmV4dCBsaW5lIGZvciB1c2VyIGRhdGEgKi8KKwkJbGluZSA9IHBi 
bGtfbGluZV9nZXRfZmlyc3RfZGF0YShwYmxrKTsKKwkJaWYgKCFsaW5lKSB7CisJCQlwcl9lcnIo InBibGs6IGxpbmUgbGlzdCBjb3JydXB0ZWRcbiIpOworCQkJcmV0dXJuIC1FRkFVTFQ7CisJCX0K Kwl9CisKKwlyZXR1cm4gMDsKK30KKworc3RhdGljIGludCBwYmxrX2wycF9pbml0KHN0cnVjdCBw YmxrICpwYmxrLCBib29sIGZhY3RvcnlfaW5pdCkKIHsKIAlzZWN0b3JfdCBpOwogCXN0cnVjdCBw cGFfYWRkciBwcGE7CkBAIC0xMTksNyArMTUyLDcgQEAgc3RhdGljIGludCBwYmxrX2wycF9pbml0 KHN0cnVjdCBwYmxrICpwYmxrKQogCWZvciAoaSA9IDA7IGkgPCBwYmxrLT5ybC5ucl9zZWNzOyBp KyspCiAJCXBibGtfdHJhbnNfbWFwX3NldChwYmxrLCBpLCBwcGEpOwogCi0JcmV0dXJuIDA7CisJ cmV0dXJuIHBibGtfbDJwX3JlY292ZXIocGJsaywgZmFjdG9yeV9pbml0KTsKIH0KIAogc3RhdGlj IHZvaWQgcGJsa19yd2JfZnJlZShzdHJ1Y3QgcGJsayAqcGJsaykKQEAgLTI2OCw4NiArMzAxLDEx MyBAQCBzdGF0aWMgaW50IHBibGtfY29yZV9pbml0KHN0cnVjdCBwYmxrICpwYmxrKQogewogCXN0 cnVjdCBudm1fdGd0X2RldiAqZGV2ID0gcGJsay0+ZGV2OwogCXN0cnVjdCBudm1fZ2VvICpnZW8g PSAmZGV2LT5nZW87CisJaW50IG1heF93cml0ZV9wcGFzOworCisJYXRvbWljNjRfc2V0KCZwYmxr LT51c2VyX3dhLCAwKTsKKwlhdG9taWM2NF9zZXQoJnBibGstPnBhZF93YSwgMCk7CisJYXRvbWlj NjRfc2V0KCZwYmxrLT5nY193YSwgMCk7CisJcGJsay0+dXNlcl9yc3Rfd2EgPSAwOworCXBibGst PnBhZF9yc3Rfd2EgPSAwOworCXBibGstPmdjX3JzdF93YSA9IDA7CisKKwlhdG9taWNfbG9uZ19z ZXQoJnBibGstPm5yX2ZsdXNoLCAwKTsKKwlwYmxrLT5ucl9mbHVzaF9yc3QgPSAwOwogCiAJcGJs ay0+cGdzX2luX2J1ZmZlciA9IGdlby0+bXdfY3VuaXRzICogZ2VvLT5hbGxfbHVuczsKIAorCXBi bGstPm1pbl93cml0ZV9wZ3MgPSBnZW8tPndzX29wdCAqIChnZW8tPmNzZWNzIC8gUEFHRV9TSVpF KTsKKwltYXhfd3JpdGVfcHBhcyA9IHBibGstPm1pbl93cml0ZV9wZ3MgKiBnZW8tPmFsbF9sdW5z OworCXBibGstPm1heF93cml0ZV9wZ3MgPSAobWF4X3dyaXRlX3BwYXMgPCBOVk1fTUFYX1ZMQkEp ID8KKwkJCQltYXhfd3JpdGVfcHBhcyA6IE5WTV9NQVhfVkxCQTsKKwlwYmxrX3NldF9zZWNfcGVy X3dyaXRlKHBibGssIHBibGstPm1pbl93cml0ZV9wZ3MpOworCisJaWYgKHBibGstPm1heF93cml0 ZV9wZ3MgPiBQQkxLX01BWF9SRVFfQUREUlMpIHsKKwkJcHJfZXJyKCJwYmxrOiBjYW5ub3Qgc3Vw cG9ydCBkZXZpY2UgbWF4X3BoeXNfc2VjdFxuIik7CisJCXJldHVybiAtRUlOVkFMOworCX0KKwor CXBibGstPnBhZF9kaXN0ID0ga3phbGxvYygocGJsay0+bWluX3dyaXRlX3BncyAtIDEpICogc2l6 
ZW9mKGF0b21pYzY0X3QpLAorCQkJCQkJCQlHRlBfS0VSTkVMKTsKKwlpZiAoIXBibGstPnBhZF9k aXN0KQorCQlyZXR1cm4gLUVOT01FTTsKKwogCWlmIChwYmxrX2luaXRfZ2xvYmFsX2NhY2hlcyhw YmxrKSkKLQkJcmV0dXJuIC1FTk9NRU07CisJCWdvdG8gZmFpbF9mcmVlX3BhZF9kaXN0OwogCiAJ LyogSW50ZXJuYWwgYmlvcyBjYW4gYmUgYXQgbW9zdCB0aGUgc2VjdG9ycyBzaWduYWxlZCBieSB0 aGUgZGV2aWNlLiAqLwogCXBibGstPnBhZ2VfYmlvX3Bvb2wgPSBtZW1wb29sX2NyZWF0ZV9wYWdl X3Bvb2woTlZNX01BWF9WTEJBLCAwKTsKIAlpZiAoIXBibGstPnBhZ2VfYmlvX3Bvb2wpCi0JCWdv dG8gZnJlZV9nbG9iYWxfY2FjaGVzOworCQlnb3RvIGZhaWxfZnJlZV9nbG9iYWxfY2FjaGVzOwog CiAJcGJsay0+Z2VuX3dzX3Bvb2wgPSBtZW1wb29sX2NyZWF0ZV9zbGFiX3Bvb2woUEJMS19HRU5f V1NfUE9PTF9TSVpFLAogCQkJCQkJCXBibGtfd3NfY2FjaGUpOwogCWlmICghcGJsay0+Z2VuX3dz X3Bvb2wpCi0JCWdvdG8gZnJlZV9wYWdlX2Jpb19wb29sOworCQlnb3RvIGZhaWxfZnJlZV9wYWdl X2Jpb19wb29sOwogCiAJcGJsay0+cmVjX3Bvb2wgPSBtZW1wb29sX2NyZWF0ZV9zbGFiX3Bvb2wo Z2VvLT5hbGxfbHVucywKIAkJCQkJCQlwYmxrX3JlY19jYWNoZSk7CiAJaWYgKCFwYmxrLT5yZWNf cG9vbCkKLQkJZ290byBmcmVlX2dlbl93c19wb29sOworCQlnb3RvIGZhaWxfZnJlZV9nZW5fd3Nf cG9vbDsKIAogCXBibGstPnJfcnFfcG9vbCA9IG1lbXBvb2xfY3JlYXRlX3NsYWJfcG9vbChnZW8t PmFsbF9sdW5zLAogCQkJCQkJCXBibGtfZ19ycV9jYWNoZSk7CiAJaWYgKCFwYmxrLT5yX3JxX3Bv b2wpCi0JCWdvdG8gZnJlZV9yZWNfcG9vbDsKKwkJZ290byBmYWlsX2ZyZWVfcmVjX3Bvb2w7CiAK IAlwYmxrLT5lX3JxX3Bvb2wgPSBtZW1wb29sX2NyZWF0ZV9zbGFiX3Bvb2woZ2VvLT5hbGxfbHVu cywKIAkJCQkJCQlwYmxrX2dfcnFfY2FjaGUpOwogCWlmICghcGJsay0+ZV9ycV9wb29sKQotCQln b3RvIGZyZWVfcl9ycV9wb29sOworCQlnb3RvIGZhaWxfZnJlZV9yX3JxX3Bvb2w7CiAKIAlwYmxr LT53X3JxX3Bvb2wgPSBtZW1wb29sX2NyZWF0ZV9zbGFiX3Bvb2woZ2VvLT5hbGxfbHVucywKIAkJ CQkJCQlwYmxrX3dfcnFfY2FjaGUpOwogCWlmICghcGJsay0+d19ycV9wb29sKQotCQlnb3RvIGZy ZWVfZV9ycV9wb29sOworCQlnb3RvIGZhaWxfZnJlZV9lX3JxX3Bvb2w7CiAKIAlwYmxrLT5jbG9z ZV93cSA9IGFsbG9jX3dvcmtxdWV1ZSgicGJsay1jbG9zZS13cSIsCiAJCQlXUV9NRU1fUkVDTEFJ TSB8IFdRX1VOQk9VTkQsIFBCTEtfTlJfQ0xPU0VfSk9CUyk7CiAJaWYgKCFwYmxrLT5jbG9zZV93 cSkKLQkJZ290byBmcmVlX3dfcnFfcG9vbDsKKwkJZ290byBmYWlsX2ZyZWVfd19ycV9wb29sOwog 
CiAJcGJsay0+YmJfd3EgPSBhbGxvY193b3JrcXVldWUoInBibGstYmItd3EiLAogCQkJV1FfTUVN X1JFQ0xBSU0gfCBXUV9VTkJPVU5ELCAwKTsKIAlpZiAoIXBibGstPmJiX3dxKQotCQlnb3RvIGZy ZWVfY2xvc2Vfd3E7CisJCWdvdG8gZmFpbF9mcmVlX2Nsb3NlX3dxOwogCiAJcGJsay0+cl9lbmRf d3EgPSBhbGxvY193b3JrcXVldWUoInBibGstcmVhZC1lbmQtd3EiLAogCQkJV1FfTUVNX1JFQ0xB SU0gfCBXUV9VTkJPVU5ELCAwKTsKIAlpZiAoIXBibGstPnJfZW5kX3dxKQotCQlnb3RvIGZyZWVf YmJfd3E7CisJCWdvdG8gZmFpbF9mcmVlX2JiX3dxOwogCiAJaWYgKHBibGtfc2V0X2FkZHJmKHBi bGspKQotCQlnb3RvIGZyZWVfcl9lbmRfd3E7Ci0KLQlpZiAocGJsa19yd2JfaW5pdChwYmxrKSkK LQkJZ290byBmcmVlX3JfZW5kX3dxOworCQlnb3RvIGZhaWxfZnJlZV9yX2VuZF93cTsKIAogCUlO SVRfTElTVF9IRUFEKCZwYmxrLT5jb21wbF9saXN0KTsKKwogCXJldHVybiAwOwogCi1mcmVlX3Jf ZW5kX3dxOgorZmFpbF9mcmVlX3JfZW5kX3dxOgogCWRlc3Ryb3lfd29ya3F1ZXVlKHBibGstPnJf ZW5kX3dxKTsKLWZyZWVfYmJfd3E6CitmYWlsX2ZyZWVfYmJfd3E6CiAJZGVzdHJveV93b3JrcXVl dWUocGJsay0+YmJfd3EpOwotZnJlZV9jbG9zZV93cToKK2ZhaWxfZnJlZV9jbG9zZV93cToKIAlk ZXN0cm95X3dvcmtxdWV1ZShwYmxrLT5jbG9zZV93cSk7Ci1mcmVlX3dfcnFfcG9vbDoKK2ZhaWxf ZnJlZV93X3JxX3Bvb2w6CiAJbWVtcG9vbF9kZXN0cm95KHBibGstPndfcnFfcG9vbCk7Ci1mcmVl X2VfcnFfcG9vbDoKK2ZhaWxfZnJlZV9lX3JxX3Bvb2w6CiAJbWVtcG9vbF9kZXN0cm95KHBibGst PmVfcnFfcG9vbCk7Ci1mcmVlX3JfcnFfcG9vbDoKK2ZhaWxfZnJlZV9yX3JxX3Bvb2w6CiAJbWVt cG9vbF9kZXN0cm95KHBibGstPnJfcnFfcG9vbCk7Ci1mcmVlX3JlY19wb29sOgorZmFpbF9mcmVl X3JlY19wb29sOgogCW1lbXBvb2xfZGVzdHJveShwYmxrLT5yZWNfcG9vbCk7Ci1mcmVlX2dlbl93 c19wb29sOgorZmFpbF9mcmVlX2dlbl93c19wb29sOgogCW1lbXBvb2xfZGVzdHJveShwYmxrLT5n ZW5fd3NfcG9vbCk7Ci1mcmVlX3BhZ2VfYmlvX3Bvb2w6CitmYWlsX2ZyZWVfcGFnZV9iaW9fcG9v bDoKIAltZW1wb29sX2Rlc3Ryb3kocGJsay0+cGFnZV9iaW9fcG9vbCk7Ci1mcmVlX2dsb2JhbF9j YWNoZXM6CitmYWlsX2ZyZWVfZ2xvYmFsX2NhY2hlczoKIAlwYmxrX2ZyZWVfZ2xvYmFsX2NhY2hl cyhwYmxrKTsKK2ZhaWxfZnJlZV9wYWRfZGlzdDoKKwlrZnJlZShwYmxrLT5wYWRfZGlzdCk7CiAJ cmV0dXJuIC1FTk9NRU07CiB9CiAKQEAgLTM2OSwxNCArNDI5LDggQEAgc3RhdGljIHZvaWQgcGJs a19jb3JlX2ZyZWUoc3RydWN0IHBibGsgKnBibGspCiAJbWVtcG9vbF9kZXN0cm95KHBibGstPmVf 
cnFfcG9vbCk7CiAJbWVtcG9vbF9kZXN0cm95KHBibGstPndfcnFfcG9vbCk7CiAKLQlwYmxrX3J3 Yl9mcmVlKHBibGspOwotCiAJcGJsa19mcmVlX2dsb2JhbF9jYWNoZXMocGJsayk7Ci19Ci0KLXN0 YXRpYyB2b2lkIHBibGtfbHVuc19mcmVlKHN0cnVjdCBwYmxrICpwYmxrKQotewotCWtmcmVlKHBi bGstPmx1bnMpOworCWtmcmVlKHBibGstPnBhZF9kaXN0KTsKIH0KIAogc3RhdGljIHZvaWQgcGJs a19saW5lX21nX2ZyZWUoc3RydWN0IHBibGsgKnBibGspCkBAIC0zOTMsOCArNDQ3LDYgQEAgc3Rh dGljIHZvaWQgcGJsa19saW5lX21nX2ZyZWUoc3RydWN0IHBibGsgKnBibGspCiAJCXBibGtfbWZy ZWUobF9tZy0+ZWxpbmVfbWV0YVtpXS0+YnVmLCBsX21nLT5lbWV0YV9hbGxvY190eXBlKTsKIAkJ a2ZyZWUobF9tZy0+ZWxpbmVfbWV0YVtpXSk7CiAJfQotCi0Ja2ZyZWUocGJsay0+bGluZXMpOwog fQogCiBzdGF0aWMgdm9pZCBwYmxrX2xpbmVfbWV0YV9mcmVlKHN0cnVjdCBwYmxrX2xpbmUgKmxp bmUpCkBAIC00MTgsNiArNDcwLDExIEBAIHN0YXRpYyB2b2lkIHBibGtfbGluZXNfZnJlZShzdHJ1 Y3QgcGJsayAqcGJsaykKIAkJcGJsa19saW5lX21ldGFfZnJlZShsaW5lKTsKIAl9CiAJc3Bpbl91 bmxvY2soJmxfbWctPmZyZWVfbG9jayk7CisKKwlwYmxrX2xpbmVfbWdfZnJlZShwYmxrKTsKKwor CWtmcmVlKHBibGstPmx1bnMpOworCWtmcmVlKHBibGstPmxpbmVzKTsKIH0KIAogc3RhdGljIGlu dCBwYmxrX2JiX2dldF90Ymwoc3RydWN0IG52bV90Z3RfZGV2ICpkZXYsIHN0cnVjdCBwYmxrX2x1 biAqcmx1biwKQEAgLTQ4MSw3ICs1MzgsNyBAQCBzdGF0aWMgdm9pZCAqcGJsa19jaHVua19nZXRf bWV0YShzdHJ1Y3QgcGJsayAqcGJsaykKIAkJcmV0dXJuIHBibGtfY2h1bmtfZ2V0X2luZm8ocGJs ayk7CiB9CiAKLXN0YXRpYyBpbnQgcGJsa19sdW5zX2luaXQoc3RydWN0IHBibGsgKnBibGssIHN0 cnVjdCBwcGFfYWRkciAqbHVucykKK3N0YXRpYyBpbnQgcGJsa19sdW5zX2luaXQoc3RydWN0IHBi bGsgKnBibGspCiB7CiAJc3RydWN0IG52bV90Z3RfZGV2ICpkZXYgPSBwYmxrLT5kZXY7CiAJc3Ry dWN0IG52bV9nZW8gKmdlbyA9ICZkZXYtPmdlbzsKQEAgLTQ5NCwxMSArNTUxLDYgQEAgc3RhdGlj IGludCBwYmxrX2x1bnNfaW5pdChzdHJ1Y3QgcGJsayAqcGJsaywgc3RydWN0IHBwYV9hZGRyICps dW5zKQogCQlyZXR1cm4gLUVJTlZBTDsKIAl9CiAKLQlwYmxrLT5sdW5zID0ga2NhbGxvYyhnZW8t PmFsbF9sdW5zLCBzaXplb2Yoc3RydWN0IHBibGtfbHVuKSwKLQkJCQkJCQkJR0ZQX0tFUk5FTCk7 Ci0JaWYgKCFwYmxrLT5sdW5zKQotCQlyZXR1cm4gLUVOT01FTTsKLQogCWZvciAoaSA9IDA7IGkg PCBnZW8tPmFsbF9sdW5zOyBpKyspIHsKIAkJLyogU3RyaXBlIGFjcm9zcyBjaGFubmVscyAqLwog 
CQlpbnQgY2ggPSBpICUgZ2VvLT5udW1fY2g7CkBAIC01MDYsNyArNTU4LDcgQEAgc3RhdGljIGlu dCBwYmxrX2x1bnNfaW5pdChzdHJ1Y3QgcGJsayAqcGJsaywgc3RydWN0IHBwYV9hZGRyICpsdW5z KQogCQlpbnQgbHVuaWQgPSBsdW5fcmF3ICsgY2ggKiBnZW8tPm51bV9sdW47CiAKIAkJcmx1biA9 ICZwYmxrLT5sdW5zW2ldOwotCQlybHVuLT5icHBhID0gbHVuc1tsdW5pZF07CisJCXJsdW4tPmJw cGEgPSBkZXYtPmx1bnNbbHVuaWRdOwogCiAJCXNlbWFfaW5pdCgmcmx1bi0+d3Jfc2VtLCAxKTsK IAl9CkBAIC01MTQsMzggKzU2Niw2IEBAIHN0YXRpYyBpbnQgcGJsa19sdW5zX2luaXQoc3RydWN0 IHBibGsgKnBibGssIHN0cnVjdCBwcGFfYWRkciAqbHVucykKIAlyZXR1cm4gMDsKIH0KIAotc3Rh dGljIGludCBwYmxrX2xpbmVzX2NvbmZpZ3VyZShzdHJ1Y3QgcGJsayAqcGJsaywgaW50IGZsYWdz KQotewotCXN0cnVjdCBwYmxrX2xpbmUgKmxpbmUgPSBOVUxMOwotCWludCByZXQgPSAwOwotCi0J aWYgKCEoZmxhZ3MgJiBOVk1fVEFSR0VUX0ZBQ1RPUlkpKSB7Ci0JCWxpbmUgPSBwYmxrX3JlY292 X2wycChwYmxrKTsKLQkJaWYgKElTX0VSUihsaW5lKSkgewotCQkJcHJfZXJyKCJwYmxrOiBjb3Vs ZCBub3QgcmVjb3ZlciBsMnAgdGFibGVcbiIpOwotCQkJcmV0ID0gLUVGQVVMVDsKLQkJfQotCX0K LQotI2lmZGVmIENPTkZJR19OVk1fREVCVUcKLQlwcl9pbmZvKCJwYmxrIGluaXQ6IEwyUCBDUkM6 ICV4XG4iLCBwYmxrX2wycF9jcmMocGJsaykpOwotI2VuZGlmCi0KLQkvKiBGcmVlIGZ1bGwgbGlu ZXMgZGlyZWN0bHkgYXMgR0MgaGFzIG5vdCBiZWVuIHN0YXJ0ZWQgeWV0ICovCi0JcGJsa19nY19m cmVlX2Z1bGxfbGluZXMocGJsayk7Ci0KLQlpZiAoIWxpbmUpIHsKLQkJLyogQ29uZmlndXJlIG5l eHQgbGluZSBmb3IgdXNlciBkYXRhICovCi0JCWxpbmUgPSBwYmxrX2xpbmVfZ2V0X2ZpcnN0X2Rh dGEocGJsayk7Ci0JCWlmICghbGluZSkgewotCQkJcHJfZXJyKCJwYmxrOiBsaW5lIGxpc3QgY29y cnVwdGVkXG4iKTsKLQkJCXJldCA9IC1FRkFVTFQ7Ci0JCX0KLQl9Ci0KLQlyZXR1cm4gcmV0Owot fQotCiAvKiBTZWUgY29tbWVudCBvdmVyIHN0cnVjdCBsaW5lX2VtZXRhIGRlZmluaXRpb24gKi8K IHN0YXRpYyB1bnNpZ25lZCBpbnQgY2FsY19lbWV0YV9sZW4oc3RydWN0IHBibGsgKnBibGspCiB7 CkBAIC02MTEsODEgKzYzMSw2IEBAIHN0YXRpYyB2b2lkIHBibGtfc2V0X3Byb3Zpc2lvbihzdHJ1 Y3QgcGJsayAqcGJsaywgbG9uZyBucl9mcmVlX2Jsa3MpCiAJYXRvbWljX3NldCgmcGJsay0+cmwu ZnJlZV91c2VyX2Jsb2NrcywgbnJfZnJlZV9ibGtzKTsKIH0KIAotc3RhdGljIGludCBwYmxrX2xp bmVzX2FsbG9jX21ldGFkYXRhKHN0cnVjdCBwYmxrICpwYmxrKQotewotCXN0cnVjdCBwYmxrX2xp 
bmVfbWdtdCAqbF9tZyA9ICZwYmxrLT5sX21nOwotCXN0cnVjdCBwYmxrX2xpbmVfbWV0YSAqbG0g PSAmcGJsay0+bG07Ci0JaW50IGk7Ci0KLQkvKiBzbWV0YSBpcyBhbHdheXMgc21hbGwgZW5vdWdo IHRvIGZpdCBvbiBhIGttYWxsb2MgbWVtb3J5IGFsbG9jYXRpb24sCi0JICogZW1ldGEgZGVwZW5k cyBvbiB0aGUgbnVtYmVyIG9mIExVTnMgYWxsb2NhdGVkIHRvIHRoZSBwYmxrIGluc3RhbmNlCi0J ICovCi0JZm9yIChpID0gMDsgaSA8IFBCTEtfREFUQV9MSU5FUzsgaSsrKSB7Ci0JCWxfbWctPnNs aW5lX21ldGFbaV0gPSBrbWFsbG9jKGxtLT5zbWV0YV9sZW4sIEdGUF9LRVJORUwpOwotCQlpZiAo IWxfbWctPnNsaW5lX21ldGFbaV0pCi0JCQlnb3RvIGZhaWxfZnJlZV9zbWV0YTsKLQl9Ci0KLQkv KiBlbWV0YSBhbGxvY2F0ZXMgdGhyZWUgZGlmZmVyZW50IGJ1ZmZlcnMgZm9yIG1hbmFnaW5nIG1l dGFkYXRhIHdpdGgKLQkgKiBpbi1tZW1vcnkgYW5kIGluLW1lZGlhIGxheW91dHMKLQkgKi8KLQlm b3IgKGkgPSAwOyBpIDwgUEJMS19EQVRBX0xJTkVTOyBpKyspIHsKLQkJc3RydWN0IHBibGtfZW1l dGEgKmVtZXRhOwotCi0JCWVtZXRhID0ga21hbGxvYyhzaXplb2Yoc3RydWN0IHBibGtfZW1ldGEp LCBHRlBfS0VSTkVMKTsKLQkJaWYgKCFlbWV0YSkKLQkJCWdvdG8gZmFpbF9mcmVlX2VtZXRhOwot Ci0JCWlmIChsbS0+ZW1ldGFfbGVuWzBdID4gS01BTExPQ19NQVhfQ0FDSEVfU0laRSkgewotCQkJ bF9tZy0+ZW1ldGFfYWxsb2NfdHlwZSA9IFBCTEtfVk1BTExPQ19NRVRBOwotCi0JCQllbWV0YS0+ YnVmID0gdm1hbGxvYyhsbS0+ZW1ldGFfbGVuWzBdKTsKLQkJCWlmICghZW1ldGEtPmJ1Zikgewot CQkJCWtmcmVlKGVtZXRhKTsKLQkJCQlnb3RvIGZhaWxfZnJlZV9lbWV0YTsKLQkJCX0KLQotCQkJ ZW1ldGEtPm5yX2VudHJpZXMgPSBsbS0+ZW1ldGFfc2VjWzBdOwotCQkJbF9tZy0+ZWxpbmVfbWV0 YVtpXSA9IGVtZXRhOwotCQl9IGVsc2UgewotCQkJbF9tZy0+ZW1ldGFfYWxsb2NfdHlwZSA9IFBC TEtfS01BTExPQ19NRVRBOwotCi0JCQllbWV0YS0+YnVmID0ga21hbGxvYyhsbS0+ZW1ldGFfbGVu WzBdLCBHRlBfS0VSTkVMKTsKLQkJCWlmICghZW1ldGEtPmJ1ZikgewotCQkJCWtmcmVlKGVtZXRh KTsKLQkJCQlnb3RvIGZhaWxfZnJlZV9lbWV0YTsKLQkJCX0KLQotCQkJZW1ldGEtPm5yX2VudHJp ZXMgPSBsbS0+ZW1ldGFfc2VjWzBdOwotCQkJbF9tZy0+ZWxpbmVfbWV0YVtpXSA9IGVtZXRhOwot CQl9Ci0JfQotCi0JbF9tZy0+dnNjX2xpc3QgPSBrY2FsbG9jKGxfbWctPm5yX2xpbmVzLCBzaXpl b2YoX19sZTMyKSwgR0ZQX0tFUk5FTCk7Ci0JaWYgKCFsX21nLT52c2NfbGlzdCkKLQkJZ290byBm YWlsX2ZyZWVfZW1ldGE7Ci0KLQlmb3IgKGkgPSAwOyBpIDwgbF9tZy0+bnJfbGluZXM7IGkrKykK 
LQkJbF9tZy0+dnNjX2xpc3RbaV0gPSBjcHVfdG9fbGUzMihFTVBUWV9FTlRSWSk7Ci0KLQlyZXR1 cm4gMDsKLQotZmFpbF9mcmVlX2VtZXRhOgotCXdoaWxlICgtLWkgPj0gMCkgewotCQlpZiAobF9t Zy0+ZW1ldGFfYWxsb2NfdHlwZSA9PSBQQkxLX1ZNQUxMT0NfTUVUQSkKLQkJCXZmcmVlKGxfbWct PmVsaW5lX21ldGFbaV0tPmJ1Zik7Ci0JCWVsc2UKLQkJCWtmcmVlKGxfbWctPmVsaW5lX21ldGFb aV0tPmJ1Zik7Ci0JCWtmcmVlKGxfbWctPmVsaW5lX21ldGFbaV0pOwotCX0KLQotZmFpbF9mcmVl X3NtZXRhOgotCWZvciAoaSA9IDA7IGkgPCBQQkxLX0RBVEFfTElORVM7IGkrKykKLQkJa2ZyZWUo bF9tZy0+c2xpbmVfbWV0YVtpXSk7Ci0KLQlyZXR1cm4gLUVOT01FTTsKLX0KLQogc3RhdGljIGlu dCBwYmxrX3NldHVwX2xpbmVfbWV0YV8xMihzdHJ1Y3QgcGJsayAqcGJsaywgc3RydWN0IHBibGtf bGluZSAqbGluZSwKIAkJCQkgICB2b2lkICpjaHVua19tZXRhKQogewpAQCAtODM1LDI5ICs3ODAs MTMgQEAgc3RhdGljIGludCBwYmxrX2FsbG9jX2xpbmVfbWV0YShzdHJ1Y3QgcGJsayAqcGJsaywg c3RydWN0IHBibGtfbGluZSAqbGluZSkKIAlyZXR1cm4gMDsKIH0KIAotc3RhdGljIGludCBwYmxr X2xpbmVzX2luaXQoc3RydWN0IHBibGsgKnBibGspCitzdGF0aWMgaW50IHBibGtfbGluZV9tZ19p bml0KHN0cnVjdCBwYmxrICpwYmxrKQogewogCXN0cnVjdCBudm1fdGd0X2RldiAqZGV2ID0gcGJs ay0+ZGV2OwogCXN0cnVjdCBudm1fZ2VvICpnZW8gPSAmZGV2LT5nZW87CiAJc3RydWN0IHBibGtf bGluZV9tZ210ICpsX21nID0gJnBibGstPmxfbWc7CiAJc3RydWN0IHBibGtfbGluZV9tZXRhICps bSA9ICZwYmxrLT5sbTsKLQlzdHJ1Y3QgcGJsa19saW5lICpsaW5lOwotCXZvaWQgKmNodW5rX21l dGE7Ci0JdW5zaWduZWQgaW50IHNtZXRhX2xlbiwgZW1ldGFfbGVuOwotCWxvbmcgbnJfZnJlZV9j aGtzID0gMDsKLQlpbnQgYmJfZGlzdGFuY2UsIG1heF93cml0ZV9wcGFzOwotCWludCBpLCByZXQ7 Ci0KLQlwYmxrLT5taW5fd3JpdGVfcGdzID0gZ2VvLT53c19vcHQgKiAoZ2VvLT5jc2VjcyAvIFBB R0VfU0laRSk7Ci0JbWF4X3dyaXRlX3BwYXMgPSBwYmxrLT5taW5fd3JpdGVfcGdzICogZ2VvLT5h bGxfbHVuczsKLQlwYmxrLT5tYXhfd3JpdGVfcGdzID0gbWluX3QoaW50LCBtYXhfd3JpdGVfcHBh cywgTlZNX01BWF9WTEJBKTsKLQlwYmxrX3NldF9zZWNfcGVyX3dyaXRlKHBibGssIHBibGstPm1p bl93cml0ZV9wZ3MpOwotCi0JaWYgKHBibGstPm1heF93cml0ZV9wZ3MgPiBQQkxLX01BWF9SRVFf QUREUlMpIHsKLQkJcHJfZXJyKCJwYmxrOiB2ZWN0b3IgbGlzdCB0b28gYmlnKCV1ID4gJXUpXG4i LAotCQkJCXBibGstPm1heF93cml0ZV9wZ3MsIFBCTEtfTUFYX1JFUV9BRERSUyk7Ci0JCXJldHVy 
biAtRUlOVkFMOwotCX0KKwlpbnQgaSwgYmJfZGlzdGFuY2U7CiAKIAlsX21nLT5ucl9saW5lcyA9 IGdlby0+bnVtX2NoazsKIAlsX21nLT5sb2dfbGluZSA9IGxfbWctPmRhdGFfbGluZSA9IE5VTEw7 CkBAIC04NjUsNiArNzk0LDExOSBAQCBzdGF0aWMgaW50IHBibGtfbGluZXNfaW5pdChzdHJ1Y3Qg cGJsayAqcGJsaykKIAlsX21nLT5ucl9mcmVlX2xpbmVzID0gMDsKIAliaXRtYXBfemVybygmbF9t Zy0+bWV0YV9iaXRtYXAsIFBCTEtfREFUQV9MSU5FUyk7CiAKKwlJTklUX0xJU1RfSEVBRCgmbF9t Zy0+ZnJlZV9saXN0KTsKKwlJTklUX0xJU1RfSEVBRCgmbF9tZy0+Y29ycnVwdF9saXN0KTsKKwlJ TklUX0xJU1RfSEVBRCgmbF9tZy0+YmFkX2xpc3QpOworCUlOSVRfTElTVF9IRUFEKCZsX21nLT5n Y19mdWxsX2xpc3QpOworCUlOSVRfTElTVF9IRUFEKCZsX21nLT5nY19oaWdoX2xpc3QpOworCUlO SVRfTElTVF9IRUFEKCZsX21nLT5nY19taWRfbGlzdCk7CisJSU5JVF9MSVNUX0hFQUQoJmxfbWct PmdjX2xvd19saXN0KTsKKwlJTklUX0xJU1RfSEVBRCgmbF9tZy0+Z2NfZW1wdHlfbGlzdCk7CisK KwlJTklUX0xJU1RfSEVBRCgmbF9tZy0+ZW1ldGFfbGlzdCk7CisKKwlsX21nLT5nY19saXN0c1sw XSA9ICZsX21nLT5nY19oaWdoX2xpc3Q7CisJbF9tZy0+Z2NfbGlzdHNbMV0gPSAmbF9tZy0+Z2Nf bWlkX2xpc3Q7CisJbF9tZy0+Z2NfbGlzdHNbMl0gPSAmbF9tZy0+Z2NfbG93X2xpc3Q7CisKKwlz cGluX2xvY2tfaW5pdCgmbF9tZy0+ZnJlZV9sb2NrKTsKKwlzcGluX2xvY2tfaW5pdCgmbF9tZy0+ Y2xvc2VfbG9jayk7CisJc3Bpbl9sb2NrX2luaXQoJmxfbWctPmdjX2xvY2spOworCisJbF9tZy0+ dnNjX2xpc3QgPSBrY2FsbG9jKGxfbWctPm5yX2xpbmVzLCBzaXplb2YoX19sZTMyKSwgR0ZQX0tF Uk5FTCk7CisJaWYgKCFsX21nLT52c2NfbGlzdCkKKwkJZ290byBmYWlsOworCisJbF9tZy0+YmJf dGVtcGxhdGUgPSBremFsbG9jKGxtLT5zZWNfYml0bWFwX2xlbiwgR0ZQX0tFUk5FTCk7CisJaWYg KCFsX21nLT5iYl90ZW1wbGF0ZSkKKwkJZ290byBmYWlsX2ZyZWVfdnNjX2xpc3Q7CisKKwlsX21n LT5iYl9hdXggPSBremFsbG9jKGxtLT5zZWNfYml0bWFwX2xlbiwgR0ZQX0tFUk5FTCk7CisJaWYg KCFsX21nLT5iYl9hdXgpCisJCWdvdG8gZmFpbF9mcmVlX2JiX3RlbXBsYXRlOworCisJLyogc21l dGEgaXMgYWx3YXlzIHNtYWxsIGVub3VnaCB0byBmaXQgb24gYSBrbWFsbG9jIG1lbW9yeSBhbGxv Y2F0aW9uLAorCSAqIGVtZXRhIGRlcGVuZHMgb24gdGhlIG51bWJlciBvZiBMVU5zIGFsbG9jYXRl ZCB0byB0aGUgcGJsayBpbnN0YW5jZQorCSAqLworCWZvciAoaSA9IDA7IGkgPCBQQkxLX0RBVEFf TElORVM7IGkrKykgeworCQlsX21nLT5zbGluZV9tZXRhW2ldID0ga21hbGxvYyhsbS0+c21ldGFf 
bGVuLCBHRlBfS0VSTkVMKTsKKwkJaWYgKCFsX21nLT5zbGluZV9tZXRhW2ldKQorCQkJZ290byBm YWlsX2ZyZWVfc21ldGE7CisJfQorCisJLyogZW1ldGEgYWxsb2NhdGVzIHRocmVlIGRpZmZlcmVu dCBidWZmZXJzIGZvciBtYW5hZ2luZyBtZXRhZGF0YSB3aXRoCisJICogaW4tbWVtb3J5IGFuZCBp bi1tZWRpYSBsYXlvdXRzCisJICovCisJZm9yIChpID0gMDsgaSA8IFBCTEtfREFUQV9MSU5FUzsg aSsrKSB7CisJCXN0cnVjdCBwYmxrX2VtZXRhICplbWV0YTsKKworCQllbWV0YSA9IGttYWxsb2Mo c2l6ZW9mKHN0cnVjdCBwYmxrX2VtZXRhKSwgR0ZQX0tFUk5FTCk7CisJCWlmICghZW1ldGEpCisJ CQlnb3RvIGZhaWxfZnJlZV9lbWV0YTsKKworCQlpZiAobG0tPmVtZXRhX2xlblswXSA+IEtNQUxM T0NfTUFYX0NBQ0hFX1NJWkUpIHsKKwkJCWxfbWctPmVtZXRhX2FsbG9jX3R5cGUgPSBQQkxLX1ZN QUxMT0NfTUVUQTsKKworCQkJZW1ldGEtPmJ1ZiA9IHZtYWxsb2MobG0tPmVtZXRhX2xlblswXSk7 CisJCQlpZiAoIWVtZXRhLT5idWYpIHsKKwkJCQlrZnJlZShlbWV0YSk7CisJCQkJZ290byBmYWls X2ZyZWVfZW1ldGE7CisJCQl9CisKKwkJCWVtZXRhLT5ucl9lbnRyaWVzID0gbG0tPmVtZXRhX3Nl Y1swXTsKKwkJCWxfbWctPmVsaW5lX21ldGFbaV0gPSBlbWV0YTsKKwkJfSBlbHNlIHsKKwkJCWxf bWctPmVtZXRhX2FsbG9jX3R5cGUgPSBQQkxLX0tNQUxMT0NfTUVUQTsKKworCQkJZW1ldGEtPmJ1 ZiA9IGttYWxsb2MobG0tPmVtZXRhX2xlblswXSwgR0ZQX0tFUk5FTCk7CisJCQlpZiAoIWVtZXRh LT5idWYpIHsKKwkJCQlrZnJlZShlbWV0YSk7CisJCQkJZ290byBmYWlsX2ZyZWVfZW1ldGE7CisJ CQl9CisKKwkJCWVtZXRhLT5ucl9lbnRyaWVzID0gbG0tPmVtZXRhX3NlY1swXTsKKwkJCWxfbWct PmVsaW5lX21ldGFbaV0gPSBlbWV0YTsKKwkJfQorCX0KKworCWZvciAoaSA9IDA7IGkgPCBsX21n LT5ucl9saW5lczsgaSsrKQorCQlsX21nLT52c2NfbGlzdFtpXSA9IGNwdV90b19sZTMyKEVNUFRZ X0VOVFJZKTsKKworCWJiX2Rpc3RhbmNlID0gKGdlby0+YWxsX2x1bnMpICogZ2VvLT53c19vcHQ7 CisJZm9yIChpID0gMDsgaSA8IGxtLT5zZWNfcGVyX2xpbmU7IGkgKz0gYmJfZGlzdGFuY2UpCisJ CWJpdG1hcF9zZXQobF9tZy0+YmJfdGVtcGxhdGUsIGksIGdlby0+d3Nfb3B0KTsKKworCXJldHVy biAwOworCitmYWlsX2ZyZWVfZW1ldGE6CisJd2hpbGUgKC0taSA+PSAwKSB7CisJCWlmIChsX21n LT5lbWV0YV9hbGxvY190eXBlID09IFBCTEtfVk1BTExPQ19NRVRBKQorCQkJdmZyZWUobF9tZy0+ ZWxpbmVfbWV0YVtpXS0+YnVmKTsKKwkJZWxzZQorCQkJa2ZyZWUobF9tZy0+ZWxpbmVfbWV0YVtp XS0+YnVmKTsKKwkJa2ZyZWUobF9tZy0+ZWxpbmVfbWV0YVtpXSk7CisJfQorZmFpbF9mcmVlX3Nt 
ZXRhOgorCWtmcmVlKGxfbWctPmJiX2F1eCk7CisKKwlmb3IgKGkgPSAwOyBpIDwgUEJMS19EQVRB X0xJTkVTOyBpKyspCisJCWtmcmVlKGxfbWctPnNsaW5lX21ldGFbaV0pOworZmFpbF9mcmVlX2Ji X3RlbXBsYXRlOgorCWtmcmVlKGxfbWctPmJiX3RlbXBsYXRlKTsKK2ZhaWxfZnJlZV92c2NfbGlz dDoKKwlrZnJlZShsX21nLT52c2NfbGlzdCk7CitmYWlsOgorCXJldHVybiAtRU5PTUVNOworfQor CitzdGF0aWMgaW50IHBibGtfbGluZV9tZXRhX2luaXQoc3RydWN0IHBibGsgKnBibGspCit7CisJ c3RydWN0IG52bV90Z3RfZGV2ICpkZXYgPSBwYmxrLT5kZXY7CisJc3RydWN0IG52bV9nZW8gKmdl byA9ICZkZXYtPmdlbzsKKwlzdHJ1Y3QgcGJsa19saW5lX21ldGEgKmxtID0gJnBibGstPmxtOwor CXVuc2lnbmVkIGludCBzbWV0YV9sZW4sIGVtZXRhX2xlbjsKKwlpbnQgaTsKKwogCWxtLT5zZWNf cGVyX2xpbmUgPSBnZW8tPmNsYmEgKiBnZW8tPmFsbF9sdW5zOwogCWxtLT5ibGtfcGVyX2xpbmUg PSBnZW8tPmFsbF9sdW5zOwogCWxtLT5ibGtfYml0bWFwX2xlbiA9IEJJVFNfVE9fTE9OR1MoZ2Vv LT5hbGxfbHVucykgKiBzaXplb2YobG9uZyk7CkBAIC05MTUsNTggKzk1Nyw0OSBAQCBzdGF0aWMg aW50IHBibGtfbGluZXNfaW5pdChzdHJ1Y3QgcGJsayAqcGJsaykKIAkJcmV0dXJuIC1FSU5WQUw7 CiAJfQogCi0JcmV0ID0gcGJsa19saW5lc19hbGxvY19tZXRhZGF0YShwYmxrKTsKKwlyZXR1cm4g MDsKK30KKworc3RhdGljIGludCBwYmxrX2xpbmVzX2luaXQoc3RydWN0IHBibGsgKnBibGspCit7 CisJc3RydWN0IG52bV90Z3RfZGV2ICpkZXYgPSBwYmxrLT5kZXY7CisJc3RydWN0IG52bV9nZW8g KmdlbyA9ICZkZXYtPmdlbzsKKwlzdHJ1Y3QgcGJsa19saW5lX21nbXQgKmxfbWcgPSAmcGJsay0+ bF9tZzsKKwlzdHJ1Y3QgcGJsa19saW5lICpsaW5lOworCXZvaWQgKmNodW5rX21ldGE7CisJbG9u ZyBucl9mcmVlX2Noa3MgPSAwOworCWludCBpLCByZXQ7CisKKwlyZXQgPSBwYmxrX2xpbmVfbWV0 YV9pbml0KHBibGspOwogCWlmIChyZXQpCiAJCXJldHVybiByZXQ7CiAKLQlsX21nLT5iYl90ZW1w bGF0ZSA9IGt6YWxsb2MobG0tPnNlY19iaXRtYXBfbGVuLCBHRlBfS0VSTkVMKTsKLQlpZiAoIWxf bWctPmJiX3RlbXBsYXRlKSB7Ci0JCXJldCA9IC1FTk9NRU07Ci0JCWdvdG8gZmFpbF9mcmVlX21l dGE7Ci0JfQotCi0JbF9tZy0+YmJfYXV4ID0ga3phbGxvYyhsbS0+c2VjX2JpdG1hcF9sZW4sIEdG UF9LRVJORUwpOwotCWlmICghbF9tZy0+YmJfYXV4KSB7Ci0JCXJldCA9IC1FTk9NRU07Ci0JCWdv dG8gZmFpbF9mcmVlX2JiX3RlbXBsYXRlOwotCX0KLQotCWJiX2Rpc3RhbmNlID0gKGdlby0+YWxs X2x1bnMpICogZ2VvLT53c19vcHQ7Ci0JZm9yIChpID0gMDsgaSA8IGxtLT5zZWNfcGVyX2xpbmU7 
* [PATCH 14/15] lightnvm: pblk: refactor init/exit sequences @ 2018-02-28 15:49 ` Javier González 0 siblings, 0 replies; 71+ messages in thread From: Javier González @ 2018-02-28 15:49 UTC (permalink / raw) Refactor init and exit sequences to improve readability. Along the way, fix bad free ordering on the init error path. Signed-off-by: Javier González <javier@cnexlabs.com> --- drivers/lightnvm/pblk-init.c | 533 +++++++++++++++++++++---------------- 1 file changed, 266 insertions(+), 267 deletions(-) diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c index bd2592fc3378..b3e15ef63df3 100644 --- a/drivers/lightnvm/pblk-init.c +++ b/drivers/lightnvm/pblk-init.c @@ -103,7 +103,40 @@ static void pblk_l2p_free(struct pblk *pblk) vfree(pblk->trans_map); } -static int pblk_l2p_init(struct pblk *pblk) +static int pblk_l2p_recover(struct pblk *pblk, bool factory_init) +{ + struct pblk_line *line = NULL; + + if (factory_init) { + pblk_setup_uuid(pblk); + } else { + line = pblk_recov_l2p(pblk); + if (IS_ERR(line)) { + pr_err("pblk: could not recover l2p table\n"); + return -EFAULT; + } + } + +#ifdef CONFIG_NVM_DEBUG + pr_info("pblk init: L2P CRC: %x\n", pblk_l2p_crc(pblk)); +#endif + + /* Free full lines directly as GC has not been started yet */ + pblk_gc_free_full_lines(pblk); + + if (!line) { + /* Configure next line for user data */ + line = pblk_line_get_first_data(pblk); + if (!line) { + pr_err("pblk: line list corrupted\n"); + return -EFAULT; + } + } + + return 0; +} + +static int pblk_l2p_init(struct pblk *pblk, bool factory_init) { sector_t i; struct ppa_addr ppa; @@ -119,7 +152,7 @@ static int pblk_l2p_init(struct pblk *pblk) for (i = 0; i < pblk->rl.nr_secs; i++) pblk_trans_map_set(pblk, i, ppa); - return 0; + return pblk_l2p_recover(pblk, factory_init); } static void pblk_rwb_free(struct pblk *pblk) @@ -268,86 +301,113 @@ static int pblk_core_init(struct pblk *pblk) { struct nvm_tgt_dev *dev = pblk->dev; struct nvm_geo *geo = &dev->geo; 
+ int max_write_ppas; + + atomic64_set(&pblk->user_wa, 0); + atomic64_set(&pblk->pad_wa, 0); + atomic64_set(&pblk->gc_wa, 0); + pblk->user_rst_wa = 0; + pblk->pad_rst_wa = 0; + pblk->gc_rst_wa = 0; + + atomic_long_set(&pblk->nr_flush, 0); + pblk->nr_flush_rst = 0; pblk->pgs_in_buffer = geo->mw_cunits * geo->all_luns; + pblk->min_write_pgs = geo->ws_opt * (geo->csecs / PAGE_SIZE); + max_write_ppas = pblk->min_write_pgs * geo->all_luns; + pblk->max_write_pgs = (max_write_ppas < NVM_MAX_VLBA) ? + max_write_ppas : NVM_MAX_VLBA; + pblk_set_sec_per_write(pblk, pblk->min_write_pgs); + + if (pblk->max_write_pgs > PBLK_MAX_REQ_ADDRS) { + pr_err("pblk: cannot support device max_phys_sect\n"); + return -EINVAL; + } + + pblk->pad_dist = kzalloc((pblk->min_write_pgs - 1) * sizeof(atomic64_t), + GFP_KERNEL); + if (!pblk->pad_dist) + return -ENOMEM; + if (pblk_init_global_caches(pblk)) - return -ENOMEM; + goto fail_free_pad_dist; /* Internal bios can be at most the sectors signaled by the device. */ pblk->page_bio_pool = mempool_create_page_pool(NVM_MAX_VLBA, 0); if (!pblk->page_bio_pool) - goto free_global_caches; + goto fail_free_global_caches; pblk->gen_ws_pool = mempool_create_slab_pool(PBLK_GEN_WS_POOL_SIZE, pblk_ws_cache); if (!pblk->gen_ws_pool) - goto free_page_bio_pool; + goto fail_free_page_bio_pool; pblk->rec_pool = mempool_create_slab_pool(geo->all_luns, pblk_rec_cache); if (!pblk->rec_pool) - goto free_gen_ws_pool; + goto fail_free_gen_ws_pool; pblk->r_rq_pool = mempool_create_slab_pool(geo->all_luns, pblk_g_rq_cache); if (!pblk->r_rq_pool) - goto free_rec_pool; + goto fail_free_rec_pool; pblk->e_rq_pool = mempool_create_slab_pool(geo->all_luns, pblk_g_rq_cache); if (!pblk->e_rq_pool) - goto free_r_rq_pool; + goto fail_free_r_rq_pool; pblk->w_rq_pool = mempool_create_slab_pool(geo->all_luns, pblk_w_rq_cache); if (!pblk->w_rq_pool) - goto free_e_rq_pool; + goto fail_free_e_rq_pool; pblk->close_wq = alloc_workqueue("pblk-close-wq", WQ_MEM_RECLAIM | WQ_UNBOUND, 
PBLK_NR_CLOSE_JOBS); if (!pblk->close_wq) - goto free_w_rq_pool; + goto fail_free_w_rq_pool; pblk->bb_wq = alloc_workqueue("pblk-bb-wq", WQ_MEM_RECLAIM | WQ_UNBOUND, 0); if (!pblk->bb_wq) - goto free_close_wq; + goto fail_free_close_wq; pblk->r_end_wq = alloc_workqueue("pblk-read-end-wq", WQ_MEM_RECLAIM | WQ_UNBOUND, 0); if (!pblk->r_end_wq) - goto free_bb_wq; + goto fail_free_bb_wq; if (pblk_set_addrf(pblk)) - goto free_r_end_wq; - - if (pblk_rwb_init(pblk)) - goto free_r_end_wq; + goto fail_free_r_end_wq; INIT_LIST_HEAD(&pblk->compl_list); + return 0; -free_r_end_wq: +fail_free_r_end_wq: destroy_workqueue(pblk->r_end_wq); -free_bb_wq: +fail_free_bb_wq: destroy_workqueue(pblk->bb_wq); -free_close_wq: +fail_free_close_wq: destroy_workqueue(pblk->close_wq); -free_w_rq_pool: +fail_free_w_rq_pool: mempool_destroy(pblk->w_rq_pool); -free_e_rq_pool: +fail_free_e_rq_pool: mempool_destroy(pblk->e_rq_pool); -free_r_rq_pool: +fail_free_r_rq_pool: mempool_destroy(pblk->r_rq_pool); -free_rec_pool: +fail_free_rec_pool: mempool_destroy(pblk->rec_pool); -free_gen_ws_pool: +fail_free_gen_ws_pool: mempool_destroy(pblk->gen_ws_pool); -free_page_bio_pool: +fail_free_page_bio_pool: mempool_destroy(pblk->page_bio_pool); -free_global_caches: +fail_free_global_caches: pblk_free_global_caches(pblk); +fail_free_pad_dist: + kfree(pblk->pad_dist); return -ENOMEM; } @@ -369,14 +429,8 @@ static void pblk_core_free(struct pblk *pblk) mempool_destroy(pblk->e_rq_pool); mempool_destroy(pblk->w_rq_pool); - pblk_rwb_free(pblk); - pblk_free_global_caches(pblk); -} - -static void pblk_luns_free(struct pblk *pblk) -{ - kfree(pblk->luns); + kfree(pblk->pad_dist); } static void pblk_line_mg_free(struct pblk *pblk) @@ -393,8 +447,6 @@ static void pblk_line_mg_free(struct pblk *pblk) pblk_mfree(l_mg->eline_meta[i]->buf, l_mg->emeta_alloc_type); kfree(l_mg->eline_meta[i]); } - - kfree(pblk->lines); } static void pblk_line_meta_free(struct pblk_line *line) @@ -418,6 +470,11 @@ static void 
pblk_lines_free(struct pblk *pblk) pblk_line_meta_free(line); } spin_unlock(&l_mg->free_lock); + + pblk_line_mg_free(pblk); + + kfree(pblk->luns); + kfree(pblk->lines); } static int pblk_bb_get_tbl(struct nvm_tgt_dev *dev, struct pblk_lun *rlun, @@ -481,7 +538,7 @@ static void *pblk_chunk_get_meta(struct pblk *pblk) return pblk_chunk_get_info(pblk); } -static int pblk_luns_init(struct pblk *pblk, struct ppa_addr *luns) +static int pblk_luns_init(struct pblk *pblk) { struct nvm_tgt_dev *dev = pblk->dev; struct nvm_geo *geo = &dev->geo; @@ -494,11 +551,6 @@ static int pblk_luns_init(struct pblk *pblk, struct ppa_addr *luns) return -EINVAL; } - pblk->luns = kcalloc(geo->all_luns, sizeof(struct pblk_lun), - GFP_KERNEL); - if (!pblk->luns) - return -ENOMEM; - for (i = 0; i < geo->all_luns; i++) { /* Stripe across channels */ int ch = i % geo->num_ch; @@ -506,7 +558,7 @@ static int pblk_luns_init(struct pblk *pblk, struct ppa_addr *luns) int lunid = lun_raw + ch * geo->num_lun; rlun = &pblk->luns[i]; - rlun->bppa = luns[lunid]; + rlun->bppa = dev->luns[lunid]; sema_init(&rlun->wr_sem, 1); } @@ -514,38 +566,6 @@ static int pblk_luns_init(struct pblk *pblk, struct ppa_addr *luns) return 0; } -static int pblk_lines_configure(struct pblk *pblk, int flags) -{ - struct pblk_line *line = NULL; - int ret = 0; - - if (!(flags & NVM_TARGET_FACTORY)) { - line = pblk_recov_l2p(pblk); - if (IS_ERR(line)) { - pr_err("pblk: could not recover l2p table\n"); - ret = -EFAULT; - } - } - -#ifdef CONFIG_NVM_DEBUG - pr_info("pblk init: L2P CRC: %x\n", pblk_l2p_crc(pblk)); -#endif - - /* Free full lines directly as GC has not been started yet */ - pblk_gc_free_full_lines(pblk); - - if (!line) { - /* Configure next line for user data */ - line = pblk_line_get_first_data(pblk); - if (!line) { - pr_err("pblk: line list corrupted\n"); - ret = -EFAULT; - } - } - - return ret; -} - /* See comment over struct line_emeta definition */ static unsigned int calc_emeta_len(struct pblk *pblk) { @@ -611,81 
+631,6 @@ static void pblk_set_provision(struct pblk *pblk, long nr_free_blks) atomic_set(&pblk->rl.free_user_blocks, nr_free_blks); } -static int pblk_lines_alloc_metadata(struct pblk *pblk) -{ - struct pblk_line_mgmt *l_mg = &pblk->l_mg; - struct pblk_line_meta *lm = &pblk->lm; - int i; - - /* smeta is always small enough to fit on a kmalloc memory allocation, - * emeta depends on the number of LUNs allocated to the pblk instance - */ - for (i = 0; i < PBLK_DATA_LINES; i++) { - l_mg->sline_meta[i] = kmalloc(lm->smeta_len, GFP_KERNEL); - if (!l_mg->sline_meta[i]) - goto fail_free_smeta; - } - - /* emeta allocates three different buffers for managing metadata with - * in-memory and in-media layouts - */ - for (i = 0; i < PBLK_DATA_LINES; i++) { - struct pblk_emeta *emeta; - - emeta = kmalloc(sizeof(struct pblk_emeta), GFP_KERNEL); - if (!emeta) - goto fail_free_emeta; - - if (lm->emeta_len[0] > KMALLOC_MAX_CACHE_SIZE) { - l_mg->emeta_alloc_type = PBLK_VMALLOC_META; - - emeta->buf = vmalloc(lm->emeta_len[0]); - if (!emeta->buf) { - kfree(emeta); - goto fail_free_emeta; - } - - emeta->nr_entries = lm->emeta_sec[0]; - l_mg->eline_meta[i] = emeta; - } else { - l_mg->emeta_alloc_type = PBLK_KMALLOC_META; - - emeta->buf = kmalloc(lm->emeta_len[0], GFP_KERNEL); - if (!emeta->buf) { - kfree(emeta); - goto fail_free_emeta; - } - - emeta->nr_entries = lm->emeta_sec[0]; - l_mg->eline_meta[i] = emeta; - } - } - - l_mg->vsc_list = kcalloc(l_mg->nr_lines, sizeof(__le32), GFP_KERNEL); - if (!l_mg->vsc_list) - goto fail_free_emeta; - - for (i = 0; i < l_mg->nr_lines; i++) - l_mg->vsc_list[i] = cpu_to_le32(EMPTY_ENTRY); - - return 0; - -fail_free_emeta: - while (--i >= 0) { - if (l_mg->emeta_alloc_type == PBLK_VMALLOC_META) - vfree(l_mg->eline_meta[i]->buf); - else - kfree(l_mg->eline_meta[i]->buf); - kfree(l_mg->eline_meta[i]); - } - -fail_free_smeta: - for (i = 0; i < PBLK_DATA_LINES; i++) - kfree(l_mg->sline_meta[i]); - - return -ENOMEM; -} - static int 
pblk_setup_line_meta_12(struct pblk *pblk, struct pblk_line *line, void *chunk_meta) { @@ -835,29 +780,13 @@ static int pblk_alloc_line_meta(struct pblk *pblk, struct pblk_line *line) return 0; } -static int pblk_lines_init(struct pblk *pblk) +static int pblk_line_mg_init(struct pblk *pblk) { struct nvm_tgt_dev *dev = pblk->dev; struct nvm_geo *geo = &dev->geo; struct pblk_line_mgmt *l_mg = &pblk->l_mg; struct pblk_line_meta *lm = &pblk->lm; - struct pblk_line *line; - void *chunk_meta; - unsigned int smeta_len, emeta_len; - long nr_free_chks = 0; - int bb_distance, max_write_ppas; - int i, ret; - - pblk->min_write_pgs = geo->ws_opt * (geo->csecs / PAGE_SIZE); - max_write_ppas = pblk->min_write_pgs * geo->all_luns; - pblk->max_write_pgs = min_t(int, max_write_ppas, NVM_MAX_VLBA); - pblk_set_sec_per_write(pblk, pblk->min_write_pgs); - - if (pblk->max_write_pgs > PBLK_MAX_REQ_ADDRS) { - pr_err("pblk: vector list too big(%u > %u)\n", - pblk->max_write_pgs, PBLK_MAX_REQ_ADDRS); - return -EINVAL; - } + int i, bb_distance; l_mg->nr_lines = geo->num_chk; l_mg->log_line = l_mg->data_line = NULL; @@ -865,6 +794,119 @@ static int pblk_lines_init(struct pblk *pblk) l_mg->nr_free_lines = 0; bitmap_zero(&l_mg->meta_bitmap, PBLK_DATA_LINES); + INIT_LIST_HEAD(&l_mg->free_list); + INIT_LIST_HEAD(&l_mg->corrupt_list); + INIT_LIST_HEAD(&l_mg->bad_list); + INIT_LIST_HEAD(&l_mg->gc_full_list); + INIT_LIST_HEAD(&l_mg->gc_high_list); + INIT_LIST_HEAD(&l_mg->gc_mid_list); + INIT_LIST_HEAD(&l_mg->gc_low_list); + INIT_LIST_HEAD(&l_mg->gc_empty_list); + + INIT_LIST_HEAD(&l_mg->emeta_list); + + l_mg->gc_lists[0] = &l_mg->gc_high_list; + l_mg->gc_lists[1] = &l_mg->gc_mid_list; + l_mg->gc_lists[2] = &l_mg->gc_low_list; + + spin_lock_init(&l_mg->free_lock); + spin_lock_init(&l_mg->close_lock); + spin_lock_init(&l_mg->gc_lock); + + l_mg->vsc_list = kcalloc(l_mg->nr_lines, sizeof(__le32), GFP_KERNEL); + if (!l_mg->vsc_list) + goto fail; + + l_mg->bb_template = kzalloc(lm->sec_bitmap_len, 
GFP_KERNEL); + if (!l_mg->bb_template) + goto fail_free_vsc_list; + + l_mg->bb_aux = kzalloc(lm->sec_bitmap_len, GFP_KERNEL); + if (!l_mg->bb_aux) + goto fail_free_bb_template; + + /* smeta is always small enough to fit on a kmalloc memory allocation, + * emeta depends on the number of LUNs allocated to the pblk instance + */ + for (i = 0; i < PBLK_DATA_LINES; i++) { + l_mg->sline_meta[i] = kmalloc(lm->smeta_len, GFP_KERNEL); + if (!l_mg->sline_meta[i]) + goto fail_free_smeta; + } + + /* emeta allocates three different buffers for managing metadata with + * in-memory and in-media layouts + */ + for (i = 0; i < PBLK_DATA_LINES; i++) { + struct pblk_emeta *emeta; + + emeta = kmalloc(sizeof(struct pblk_emeta), GFP_KERNEL); + if (!emeta) + goto fail_free_emeta; + + if (lm->emeta_len[0] > KMALLOC_MAX_CACHE_SIZE) { + l_mg->emeta_alloc_type = PBLK_VMALLOC_META; + + emeta->buf = vmalloc(lm->emeta_len[0]); + if (!emeta->buf) { + kfree(emeta); + goto fail_free_emeta; + } + + emeta->nr_entries = lm->emeta_sec[0]; + l_mg->eline_meta[i] = emeta; + } else { + l_mg->emeta_alloc_type = PBLK_KMALLOC_META; + + emeta->buf = kmalloc(lm->emeta_len[0], GFP_KERNEL); + if (!emeta->buf) { + kfree(emeta); + goto fail_free_emeta; + } + + emeta->nr_entries = lm->emeta_sec[0]; + l_mg->eline_meta[i] = emeta; + } + } + + for (i = 0; i < l_mg->nr_lines; i++) + l_mg->vsc_list[i] = cpu_to_le32(EMPTY_ENTRY); + + bb_distance = (geo->all_luns) * geo->ws_opt; + for (i = 0; i < lm->sec_per_line; i += bb_distance) + bitmap_set(l_mg->bb_template, i, geo->ws_opt); + + return 0; + +fail_free_emeta: + while (--i >= 0) { + if (l_mg->emeta_alloc_type == PBLK_VMALLOC_META) + vfree(l_mg->eline_meta[i]->buf); + else + kfree(l_mg->eline_meta[i]->buf); + kfree(l_mg->eline_meta[i]); + } +fail_free_smeta: + kfree(l_mg->bb_aux); + + for (i = 0; i < PBLK_DATA_LINES; i++) + kfree(l_mg->sline_meta[i]); +fail_free_bb_template: + kfree(l_mg->bb_template); +fail_free_vsc_list: + kfree(l_mg->vsc_list); +fail: + return 
-ENOMEM; +} + +static int pblk_line_meta_init(struct pblk *pblk) +{ + struct nvm_tgt_dev *dev = pblk->dev; + struct nvm_geo *geo = &dev->geo; + struct pblk_line_meta *lm = &pblk->lm; + unsigned int smeta_len, emeta_len; + int i; + lm->sec_per_line = geo->clba * geo->all_luns; lm->blk_per_line = geo->all_luns; lm->blk_bitmap_len = BITS_TO_LONGS(geo->all_luns) * sizeof(long); @@ -915,58 +957,49 @@ static int pblk_lines_init(struct pblk *pblk) return -EINVAL; } - ret = pblk_lines_alloc_metadata(pblk); + return 0; +} + +static int pblk_lines_init(struct pblk *pblk) +{ + struct nvm_tgt_dev *dev = pblk->dev; + struct nvm_geo *geo = &dev->geo; + struct pblk_line_mgmt *l_mg = &pblk->l_mg; + struct pblk_line *line; + void *chunk_meta; + long nr_free_chks = 0; + int i, ret; + + ret = pblk_line_meta_init(pblk); if (ret) return ret; - l_mg->bb_template = kzalloc(lm->sec_bitmap_len, GFP_KERNEL); - if (!l_mg->bb_template) { - ret = -ENOMEM; - goto fail_free_meta; - } - - l_mg->bb_aux = kzalloc(lm->sec_bitmap_len, GFP_KERNEL); - if (!l_mg->bb_aux) { - ret = -ENOMEM; - goto fail_free_bb_template; - } - - bb_distance = (geo->all_luns) * geo->ws_opt; - for (i = 0; i < lm->sec_per_line; i += bb_distance) - bitmap_set(l_mg->bb_template, i, geo->ws_opt); - - INIT_LIST_HEAD(&l_mg->free_list); - INIT_LIST_HEAD(&l_mg->corrupt_list); - INIT_LIST_HEAD(&l_mg->bad_list); - INIT_LIST_HEAD(&l_mg->gc_full_list); - INIT_LIST_HEAD(&l_mg->gc_high_list); - INIT_LIST_HEAD(&l_mg->gc_mid_list); - INIT_LIST_HEAD(&l_mg->gc_low_list); - INIT_LIST_HEAD(&l_mg->gc_empty_list); - - INIT_LIST_HEAD(&l_mg->emeta_list); - - l_mg->gc_lists[0] = &l_mg->gc_high_list; - l_mg->gc_lists[1] = &l_mg->gc_mid_list; - l_mg->gc_lists[2] = &l_mg->gc_low_list; - - spin_lock_init(&l_mg->free_lock); - spin_lock_init(&l_mg->close_lock); - spin_lock_init(&l_mg->gc_lock); + ret = pblk_line_mg_init(pblk); + if (ret) + return ret; - pblk->lines = kcalloc(l_mg->nr_lines, sizeof(struct pblk_line), + pblk->luns = kcalloc(geo->all_luns, 
sizeof(struct pblk_lun), GFP_KERNEL); - if (!pblk->lines) { - ret = -ENOMEM; - goto fail_free_bb_aux; - } + if (!pblk->luns) + return -ENOMEM; + + ret = pblk_luns_init(pblk); + if (ret) + goto fail_free_luns; chunk_meta = pblk_chunk_get_meta(pblk); if (IS_ERR(chunk_meta)) { pr_err("pblk: could not get chunk log (%lu)\n", PTR_ERR(chunk_meta)); ret = PTR_ERR(chunk_meta); - goto fail_free_lines; + goto fail_free_meta; + } + + pblk->lines = kcalloc(l_mg->nr_lines, sizeof(struct pblk_line), + GFP_KERNEL); + if (!pblk->lines) { + ret = -ENOMEM; + goto fail_free_chunk_meta; } for (i = 0; i < l_mg->nr_lines; i++) { @@ -974,7 +1007,7 @@ static int pblk_lines_init(struct pblk *pblk) ret = pblk_alloc_line_meta(pblk, line); if (ret) - goto fail_free_chunk_meta; + goto fail_free_lines; nr_free_chks += pblk_setup_line_meta(pblk, line, chunk_meta, i); } @@ -984,18 +1017,16 @@ static int pblk_lines_init(struct pblk *pblk) kfree(chunk_meta); return 0; -fail_free_chunk_meta: - kfree(chunk_meta); +fail_free_lines: while (--i >= 0) pblk_line_meta_free(&pblk->lines[i]); -fail_free_lines: kfree(pblk->lines); -fail_free_bb_aux: - kfree(l_mg->bb_aux); -fail_free_bb_template: - kfree(l_mg->bb_template); +fail_free_chunk_meta: + kfree(chunk_meta); fail_free_meta: pblk_line_mg_free(pblk); +fail_free_luns: + kfree(pblk->luns); return ret; } @@ -1036,12 +1067,10 @@ static void pblk_writer_stop(struct pblk *pblk) static void pblk_free(struct pblk *pblk) { - pblk_luns_free(pblk); pblk_lines_free(pblk); - kfree(pblk->pad_dist); - pblk_line_mg_free(pblk); - pblk_core_free(pblk); pblk_l2p_free(pblk); + pblk_rwb_free(pblk); + pblk_core_free(pblk); kfree(pblk); } @@ -1112,19 +1141,6 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk, spin_lock_init(&pblk->trans_lock); spin_lock_init(&pblk->lock); - if (flags & NVM_TARGET_FACTORY) - pblk_setup_uuid(pblk); - - atomic64_set(&pblk->user_wa, 0); - atomic64_set(&pblk->pad_wa, 0); - atomic64_set(&pblk->gc_wa, 0); - pblk->user_rst_wa = 
0; - pblk->pad_rst_wa = 0; - pblk->gc_rst_wa = 0; - - atomic64_set(&pblk->nr_flush, 0); - pblk->nr_flush_rst = 0; - #ifdef CONFIG_NVM_DEBUG atomic_long_set(&pblk->inflight_writes, 0); atomic_long_set(&pblk->padded_writes, 0); @@ -1148,48 +1164,35 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk, atomic_long_set(&pblk->write_failed, 0); atomic_long_set(&pblk->erase_failed, 0); - ret = pblk_luns_init(pblk, dev->luns); - if (ret) { - pr_err("pblk: could not initialize luns\n"); - goto fail; - } - - ret = pblk_lines_init(pblk); - if (ret) { - pr_err("pblk: could not initialize lines\n"); - goto fail_free_luns; - } - - pblk->pad_dist = kzalloc((pblk->min_write_pgs - 1) * sizeof(atomic64_t), - GFP_KERNEL); - if (!pblk->pad_dist) { - ret = -ENOMEM; - goto fail_free_line_meta; - } - ret = pblk_core_init(pblk); if (ret) { pr_err("pblk: could not initialize core\n"); - goto fail_free_pad_dist; + goto fail; } - ret = pblk_l2p_init(pblk); + ret = pblk_lines_init(pblk); if (ret) { - pr_err("pblk: could not initialize maps\n"); + pr_err("pblk: could not initialize lines\n"); goto fail_free_core; } - ret = pblk_lines_configure(pblk, flags); + ret = pblk_rwb_init(pblk); if (ret) { - pr_err("pblk: could not configure lines\n"); - goto fail_free_l2p; + pr_err("pblk: could not initialize write buffer\n"); + goto fail_free_lines; + } + + ret = pblk_l2p_init(pblk, flags & NVM_TARGET_FACTORY); + if (ret) { + pr_err("pblk: could not initialize maps\n"); + goto fail_free_rwb; } ret = pblk_writer_init(pblk); if (ret) { if (ret != -EINTR) pr_err("pblk: could not initialize write thread\n"); - goto fail_free_lines; + goto fail_free_l2p; } ret = pblk_gc_init(pblk); @@ -1224,18 +1227,14 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk, fail_stop_writer: pblk_writer_stop(pblk); -fail_free_lines: - pblk_lines_free(pblk); fail_free_l2p: pblk_l2p_free(pblk); +fail_free_rwb: + pblk_rwb_free(pblk); +fail_free_lines: + pblk_lines_free(pblk); 
fail_free_core: pblk_core_free(pblk); -fail_free_pad_dist: - kfree(pblk->pad_dist); -fail_free_line_meta: - pblk_line_mg_free(pblk); -fail_free_luns: - pblk_luns_free(pblk); fail: kfree(pblk); return ERR_PTR(ret); -- 2.7.4 ^ permalink raw reply related [flat|nested] 71+ messages in thread
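The core of this patch is getting the init error path to tear stages down in strict reverse order of initialization (core → lines → rwb → l2p on the way up, l2p → rwb → lines → core on the way down), matching pblk_free(). The fall-through goto-label pattern can be sketched in isolation as below; this is a minimal stand-alone illustration, with hypothetical stage ids standing in for the pblk functions, not the pblk code itself:

```c
#include <assert.h>

/* Hypothetical stage ids standing in for the pblk init stages:
 * 0 = core, 1 = lines, 2 = rwb (write buffer), 3 = l2p. */
static int freed[4];
static int nfreed;

static void stage_free(int id)
{
	freed[nfreed++] = id;	/* record teardown order */
}

/* Mirrors the shape of pblk_init()'s error path after the patch:
 * a failure at any stage jumps to the label that frees the most
 * recently initialized stage, then falls through the remaining
 * labels so teardown runs in strict reverse order. fail_at selects
 * the failing stage (4 = no failure). */
static int demo_init(int fail_at)
{
	nfreed = 0;

	if (fail_at == 0)
		goto fail;		/* core init failed: nothing to free */
	if (fail_at == 1)
		goto fail_free_core;	/* lines init failed */
	if (fail_at == 2)
		goto fail_free_lines;	/* rwb init failed */
	if (fail_at == 3)
		goto fail_free_rwb;	/* l2p init failed */
	return 0;

fail_free_rwb:
	stage_free(2);
fail_free_lines:
	stage_free(1);
fail_free_core:
	stage_free(0);
fail:
	return -1;
}
```

With this shape, a failure in the l2p stage (demo_init(3)) unwinds rwb, then lines, then core, which is the same reverse ordering the exit path uses; reordering the labels this way is what the patch's "fix bad free ordering" refers to.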
l_mg->eline_meta[i] = emeta; + } else { + l_mg->emeta_alloc_type = PBLK_KMALLOC_META; + + emeta->buf = kmalloc(lm->emeta_len[0], GFP_KERNEL); + if (!emeta->buf) { + kfree(emeta); + goto fail_free_emeta; + } + + emeta->nr_entries = lm->emeta_sec[0]; + l_mg->eline_meta[i] = emeta; + } + } + + for (i = 0; i < l_mg->nr_lines; i++) + l_mg->vsc_list[i] = cpu_to_le32(EMPTY_ENTRY); + + bb_distance = (geo->all_luns) * geo->ws_opt; + for (i = 0; i < lm->sec_per_line; i += bb_distance) + bitmap_set(l_mg->bb_template, i, geo->ws_opt); + + return 0; + +fail_free_emeta: + while (--i >= 0) { + if (l_mg->emeta_alloc_type == PBLK_VMALLOC_META) + vfree(l_mg->eline_meta[i]->buf); + else + kfree(l_mg->eline_meta[i]->buf); + kfree(l_mg->eline_meta[i]); + } +fail_free_smeta: + kfree(l_mg->bb_aux); + + for (i = 0; i < PBLK_DATA_LINES; i++) + kfree(l_mg->sline_meta[i]); +fail_free_bb_template: + kfree(l_mg->bb_template); +fail_free_vsc_list: + kfree(l_mg->vsc_list); +fail: + return -ENOMEM; +} + +static int pblk_line_meta_init(struct pblk *pblk) +{ + struct nvm_tgt_dev *dev = pblk->dev; + struct nvm_geo *geo = &dev->geo; + struct pblk_line_meta *lm = &pblk->lm; + unsigned int smeta_len, emeta_len; + int i; + lm->sec_per_line = geo->clba * geo->all_luns; lm->blk_per_line = geo->all_luns; lm->blk_bitmap_len = BITS_TO_LONGS(geo->all_luns) * sizeof(long); @@ -915,58 +957,49 @@ static int pblk_lines_init(struct pblk *pblk) return -EINVAL; } - ret = pblk_lines_alloc_metadata(pblk); + return 0; +} + +static int pblk_lines_init(struct pblk *pblk) +{ + struct nvm_tgt_dev *dev = pblk->dev; + struct nvm_geo *geo = &dev->geo; + struct pblk_line_mgmt *l_mg = &pblk->l_mg; + struct pblk_line *line; + void *chunk_meta; + long nr_free_chks = 0; + int i, ret; + + ret = pblk_line_meta_init(pblk); if (ret) return ret; - l_mg->bb_template = kzalloc(lm->sec_bitmap_len, GFP_KERNEL); - if (!l_mg->bb_template) { - ret = -ENOMEM; - goto fail_free_meta; - } - - l_mg->bb_aux = kzalloc(lm->sec_bitmap_len, 
GFP_KERNEL); - if (!l_mg->bb_aux) { - ret = -ENOMEM; - goto fail_free_bb_template; - } - - bb_distance = (geo->all_luns) * geo->ws_opt; - for (i = 0; i < lm->sec_per_line; i += bb_distance) - bitmap_set(l_mg->bb_template, i, geo->ws_opt); - - INIT_LIST_HEAD(&l_mg->free_list); - INIT_LIST_HEAD(&l_mg->corrupt_list); - INIT_LIST_HEAD(&l_mg->bad_list); - INIT_LIST_HEAD(&l_mg->gc_full_list); - INIT_LIST_HEAD(&l_mg->gc_high_list); - INIT_LIST_HEAD(&l_mg->gc_mid_list); - INIT_LIST_HEAD(&l_mg->gc_low_list); - INIT_LIST_HEAD(&l_mg->gc_empty_list); - - INIT_LIST_HEAD(&l_mg->emeta_list); - - l_mg->gc_lists[0] = &l_mg->gc_high_list; - l_mg->gc_lists[1] = &l_mg->gc_mid_list; - l_mg->gc_lists[2] = &l_mg->gc_low_list; - - spin_lock_init(&l_mg->free_lock); - spin_lock_init(&l_mg->close_lock); - spin_lock_init(&l_mg->gc_lock); + ret = pblk_line_mg_init(pblk); + if (ret) + return ret; - pblk->lines = kcalloc(l_mg->nr_lines, sizeof(struct pblk_line), + pblk->luns = kcalloc(geo->all_luns, sizeof(struct pblk_lun), GFP_KERNEL); - if (!pblk->lines) { - ret = -ENOMEM; - goto fail_free_bb_aux; - } + if (!pblk->luns) + return -ENOMEM; + + ret = pblk_luns_init(pblk); + if (ret) + goto fail_free_luns; chunk_meta = pblk_chunk_get_meta(pblk); if (IS_ERR(chunk_meta)) { pr_err("pblk: could not get chunk log (%lu)\n", PTR_ERR(chunk_meta)); ret = PTR_ERR(chunk_meta); - goto fail_free_lines; + goto fail_free_meta; + } + + pblk->lines = kcalloc(l_mg->nr_lines, sizeof(struct pblk_line), + GFP_KERNEL); + if (!pblk->lines) { + ret = -ENOMEM; + goto fail_free_chunk_meta; } for (i = 0; i < l_mg->nr_lines; i++) { @@ -974,7 +1007,7 @@ static int pblk_lines_init(struct pblk *pblk) ret = pblk_alloc_line_meta(pblk, line); if (ret) - goto fail_free_chunk_meta; + goto fail_free_lines; nr_free_chks += pblk_setup_line_meta(pblk, line, chunk_meta, i); } @@ -984,18 +1017,16 @@ static int pblk_lines_init(struct pblk *pblk) kfree(chunk_meta); return 0; -fail_free_chunk_meta: - kfree(chunk_meta); +fail_free_lines: 
while (--i >= 0) pblk_line_meta_free(&pblk->lines[i]); -fail_free_lines: kfree(pblk->lines); -fail_free_bb_aux: - kfree(l_mg->bb_aux); -fail_free_bb_template: - kfree(l_mg->bb_template); +fail_free_chunk_meta: + kfree(chunk_meta); fail_free_meta: pblk_line_mg_free(pblk); +fail_free_luns: + kfree(pblk->luns); return ret; } @@ -1036,12 +1067,10 @@ static void pblk_writer_stop(struct pblk *pblk) static void pblk_free(struct pblk *pblk) { - pblk_luns_free(pblk); pblk_lines_free(pblk); - kfree(pblk->pad_dist); - pblk_line_mg_free(pblk); - pblk_core_free(pblk); pblk_l2p_free(pblk); + pblk_rwb_free(pblk); + pblk_core_free(pblk); kfree(pblk); } @@ -1112,19 +1141,6 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk, spin_lock_init(&pblk->trans_lock); spin_lock_init(&pblk->lock); - if (flags & NVM_TARGET_FACTORY) - pblk_setup_uuid(pblk); - - atomic64_set(&pblk->user_wa, 0); - atomic64_set(&pblk->pad_wa, 0); - atomic64_set(&pblk->gc_wa, 0); - pblk->user_rst_wa = 0; - pblk->pad_rst_wa = 0; - pblk->gc_rst_wa = 0; - - atomic64_set(&pblk->nr_flush, 0); - pblk->nr_flush_rst = 0; - #ifdef CONFIG_NVM_DEBUG atomic_long_set(&pblk->inflight_writes, 0); atomic_long_set(&pblk->padded_writes, 0); @@ -1148,48 +1164,35 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk, atomic_long_set(&pblk->write_failed, 0); atomic_long_set(&pblk->erase_failed, 0); - ret = pblk_luns_init(pblk, dev->luns); - if (ret) { - pr_err("pblk: could not initialize luns\n"); - goto fail; - } - - ret = pblk_lines_init(pblk); - if (ret) { - pr_err("pblk: could not initialize lines\n"); - goto fail_free_luns; - } - - pblk->pad_dist = kzalloc((pblk->min_write_pgs - 1) * sizeof(atomic64_t), - GFP_KERNEL); - if (!pblk->pad_dist) { - ret = -ENOMEM; - goto fail_free_line_meta; - } - ret = pblk_core_init(pblk); if (ret) { pr_err("pblk: could not initialize core\n"); - goto fail_free_pad_dist; + goto fail; } - ret = pblk_l2p_init(pblk); + ret = pblk_lines_init(pblk); if (ret) { - 
pr_err("pblk: could not initialize maps\n"); + pr_err("pblk: could not initialize lines\n"); goto fail_free_core; } - ret = pblk_lines_configure(pblk, flags); + ret = pblk_rwb_init(pblk); if (ret) { - pr_err("pblk: could not configure lines\n"); - goto fail_free_l2p; + pr_err("pblk: could not initialize write buffer\n"); + goto fail_free_lines; + } + + ret = pblk_l2p_init(pblk, flags & NVM_TARGET_FACTORY); + if (ret) { + pr_err("pblk: could not initialize maps\n"); + goto fail_free_rwb; } ret = pblk_writer_init(pblk); if (ret) { if (ret != -EINTR) pr_err("pblk: could not initialize write thread\n"); - goto fail_free_lines; + goto fail_free_l2p; } ret = pblk_gc_init(pblk); @@ -1224,18 +1227,14 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk, fail_stop_writer: pblk_writer_stop(pblk); -fail_free_lines: - pblk_lines_free(pblk); fail_free_l2p: pblk_l2p_free(pblk); +fail_free_rwb: + pblk_rwb_free(pblk); +fail_free_lines: + pblk_lines_free(pblk); fail_free_core: pblk_core_free(pblk); -fail_free_pad_dist: - kfree(pblk->pad_dist); -fail_free_line_meta: - pblk_line_mg_free(pblk); -fail_free_luns: - pblk_luns_free(pblk); fail: kfree(pblk); return ERR_PTR(ret); -- 2.7.4 ^ permalink raw reply related [flat|nested] 71+ messages in thread
* [PATCH 15/15] lightnvm: pblk: implement 2.0 support 2018-02-28 15:49 ` Javier González (?) @ 2018-02-28 15:49 ` Javier González -1 siblings, 0 replies; 71+ messages in thread From: Javier González @ 2018-02-28 15:49 UTC (permalink / raw) To: mb; +Cc: linux-block, Javier González, linux-kernel, linux-nvme ^ permalink raw reply [flat|nested] 71+ messages in thread
* [PATCH 15/15] lightnvm: pblk: implement 2.0 support @ 2018-02-28 15:49 ` Javier González 0 siblings, 0 replies; 71+ messages in thread From: Javier González @ 2018-02-28 15:49 UTC (permalink / raw) To: mb; +Cc: linux-block, linux-kernel, linux-nvme, Javier González Implement 2.0 support in pblk. This includes the address formatting and mapping paths, as well as the sysfs entries for them. Signed-off-by: Javier González <javier@cnexlabs.com> --- drivers/lightnvm/pblk-init.c | 57 ++++++++++-- drivers/lightnvm/pblk-sysfs.c | 36 ++++++-- drivers/lightnvm/pblk.h | 198 ++++++++++++++++++++++++++++++++---------- 3 files changed, 233 insertions(+), 58 deletions(-) diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c index b3e15ef63df3..474f3f047087 100644 --- a/drivers/lightnvm/pblk-init.c +++ b/drivers/lightnvm/pblk-init.c @@ -231,20 +231,63 @@ static int pblk_set_addrf_12(struct nvm_geo *geo, return dst->blk_offset + src->blk_len; } +static int pblk_set_addrf_20(struct nvm_geo *geo, + struct nvm_addr_format *adst, + struct pblk_addr_format *udst) +{ + struct nvm_addr_format *src = &geo->addrf; + + adst->ch_len = get_count_order(geo->num_ch); + adst->lun_len = get_count_order(geo->num_lun); + adst->chk_len = src->chk_len; + adst->sec_len = src->sec_len; + + adst->sec_offset = 0; + adst->ch_offset = adst->sec_len; + adst->lun_offset = adst->ch_offset + adst->ch_len; + adst->chk_offset = adst->lun_offset + adst->lun_len; + + adst->sec_mask = ((1ULL << adst->sec_len) - 1) << adst->sec_offset; + adst->chk_mask = ((1ULL << adst->chk_len) - 1) << adst->chk_offset; + adst->lun_mask = ((1ULL << adst->lun_len) - 1) << adst->lun_offset; + adst->ch_mask = ((1ULL << adst->ch_len) - 1) << adst->ch_offset; + + udst->sec_stripe = geo->ws_opt; + udst->ch_stripe = geo->num_ch; + udst->lun_stripe = geo->num_lun; + + udst->sec_lun_stripe = udst->sec_stripe * udst->ch_stripe; + udst->sec_ws_stripe = udst->sec_lun_stripe * udst->lun_stripe; + + return adst->chk_offset + 
adst->chk_len; +} + static int pblk_set_addrf(struct pblk *pblk) { struct nvm_tgt_dev *dev = pblk->dev; struct nvm_geo *geo = &dev->geo; int mod; - div_u64_rem(geo->clba, pblk->min_write_pgs, &mod); - if (mod) { - pr_err("pblk: bad configuration of sectors/pages\n"); + switch (geo->version) { + case NVM_OCSSD_SPEC_12: + div_u64_rem(geo->clba, pblk->min_write_pgs, &mod); + if (mod) { + pr_err("pblk: bad configuration of sectors/pages\n"); + return -EINVAL; + } + + pblk->addrf_len = pblk_set_addrf_12(geo, (void *)&pblk->addrf); + break; + case NVM_OCSSD_SPEC_20: + pblk->addrf_len = pblk_set_addrf_20(geo, (void *)&pblk->addrf, + &pblk->uaddrf); + break; + default: + pr_err("pblk: OCSSD revision not supported (%d)\n", + geo->version); return -EINVAL; } - pblk->addrf_len = pblk_set_addrf_12(geo, (void *)&pblk->addrf); - return 0; } @@ -1117,7 +1160,9 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk, struct pblk *pblk; int ret; - if (geo->version != NVM_OCSSD_SPEC_12) { + /* pblk supports 1.2 and 2.0 versions */ + if (!(geo->version == NVM_OCSSD_SPEC_12 || + geo->version == NVM_OCSSD_SPEC_20)) { pr_err("pblk: OCSSD version not supported (%u)\n", geo->version); return ERR_PTR(-EINVAL); diff --git a/drivers/lightnvm/pblk-sysfs.c b/drivers/lightnvm/pblk-sysfs.c index a643dc623731..391f865b02d9 100644 --- a/drivers/lightnvm/pblk-sysfs.c +++ b/drivers/lightnvm/pblk-sysfs.c @@ -113,15 +113,16 @@ static ssize_t pblk_sysfs_ppaf(struct pblk *pblk, char *page) { struct nvm_tgt_dev *dev = pblk->dev; struct nvm_geo *geo = &dev->geo; - struct nvm_addr_format_12 *ppaf; - struct nvm_addr_format_12 *geo_ppaf; ssize_t sz = 0; - ppaf = (struct nvm_addr_format_12 *)&pblk->addrf; - geo_ppaf = (struct nvm_addr_format_12 *)&geo->addrf; + if (geo->version == NVM_OCSSD_SPEC_12) { + struct nvm_addr_format_12 *ppaf = + (struct nvm_addr_format_12 *)&pblk->addrf; + struct nvm_addr_format_12 *geo_ppaf = + (struct nvm_addr_format_12 *)&geo->addrf; - sz = snprintf(page, 
PAGE_SIZE, - "pblk:(s:%d)ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n", + sz = snprintf(page, PAGE_SIZE, + "pblk:(s:%d)ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n", pblk->addrf_len, ppaf->ch_offset, ppaf->ch_len, ppaf->lun_offset, ppaf->lun_len, @@ -130,14 +131,33 @@ static ssize_t pblk_sysfs_ppaf(struct pblk *pblk, char *page) ppaf->pln_offset, ppaf->pln_len, ppaf->sec_offset, ppaf->sec_len); - sz += snprintf(page + sz, PAGE_SIZE - sz, - "device:ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n", + sz += snprintf(page + sz, PAGE_SIZE - sz, + "device:ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n", geo_ppaf->ch_offset, geo_ppaf->ch_len, geo_ppaf->lun_offset, geo_ppaf->lun_len, geo_ppaf->blk_offset, geo_ppaf->blk_len, geo_ppaf->pg_offset, geo_ppaf->pg_len, geo_ppaf->pln_offset, geo_ppaf->pln_len, geo_ppaf->sec_offset, geo_ppaf->sec_len); + } else { + struct nvm_addr_format *ppaf = &pblk->addrf; + struct nvm_addr_format *geo_ppaf = &geo->addrf; + + sz = snprintf(page, PAGE_SIZE, + "pblk:(s:%d)ch:%d/%d,lun:%d/%d,chk:%d/%d/sec:%d/%d\n", + pblk->addrf_len, + ppaf->ch_offset, ppaf->ch_len, + ppaf->lun_offset, ppaf->lun_len, + ppaf->chk_offset, ppaf->chk_len, + ppaf->sec_offset, ppaf->sec_len); + + sz += snprintf(page + sz, PAGE_SIZE - sz, + "device:ch:%d/%d,lun:%d/%d,chk:%d/%d,sec:%d/%d\n", + geo_ppaf->ch_offset, geo_ppaf->ch_len, + geo_ppaf->lun_offset, geo_ppaf->lun_len, + geo_ppaf->chk_offset, geo_ppaf->chk_len, + geo_ppaf->sec_offset, geo_ppaf->sec_len); + } return sz; } diff --git a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk.h index ee149766b7a0..1deddd38c0ac 100644 --- a/drivers/lightnvm/pblk.h +++ b/drivers/lightnvm/pblk.h @@ -561,6 +561,18 @@ enum { PBLK_STATE_STOPPED = 3, }; +/* Internal format to support not power-of-2 device formats */ +struct pblk_addr_format { + /* gen to dev */ + int sec_stripe; + int ch_stripe; + int lun_stripe; + + /* dev to gen */ + int sec_lun_stripe; + int sec_ws_stripe; +}; + 
struct pblk { struct nvm_tgt_dev *dev; struct gendisk *disk; @@ -573,7 +585,8 @@ struct pblk { struct pblk_line_mgmt l_mg; /* Line management */ struct pblk_line_meta lm; /* Line metadata */ - struct nvm_addr_format addrf; + struct nvm_addr_format addrf; /* Aligned address format */ + struct pblk_addr_format uaddrf; /* Unaligned address format */ int addrf_len; struct pblk_rb rwb; @@ -954,17 +967,43 @@ static inline int pblk_ppa_to_pos(struct nvm_geo *geo, struct ppa_addr p) static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr, u64 line_id) { - struct nvm_addr_format_12 *ppaf = - (struct nvm_addr_format_12 *)&pblk->addrf; + struct nvm_tgt_dev *dev = pblk->dev; + struct nvm_geo *geo = &dev->geo; struct ppa_addr ppa; - ppa.ppa = 0; - ppa.g.blk = line_id; - ppa.g.pg = (paddr & ppaf->pg_mask) >> ppaf->pg_offset; - ppa.g.lun = (paddr & ppaf->lun_mask) >> ppaf->lun_offset; - ppa.g.ch = (paddr & ppaf->ch_mask) >> ppaf->ch_offset; - ppa.g.pl = (paddr & ppaf->pln_mask) >> ppaf->pln_offset; - ppa.g.sec = (paddr & ppaf->sec_mask) >> ppaf->sec_offset; + if (geo->version == NVM_OCSSD_SPEC_12) { + struct nvm_addr_format_12 *ppaf = + (struct nvm_addr_format_12 *)&pblk->addrf; + + ppa.ppa = 0; + ppa.g.blk = line_id; + ppa.g.pg = (paddr & ppaf->pg_mask) >> ppaf->pg_offset; + ppa.g.lun = (paddr & ppaf->lun_mask) >> ppaf->lun_offset; + ppa.g.ch = (paddr & ppaf->ch_mask) >> ppaf->ch_offset; + ppa.g.pl = (paddr & ppaf->pln_mask) >> ppaf->pln_offset; + ppa.g.sec = (paddr & ppaf->sec_mask) >> ppaf->sec_offset; + } else { + struct pblk_addr_format *uaddrf = &pblk->uaddrf; + int secs, chnls, luns; + + ppa.ppa = 0; + + ppa.m.chk = line_id; + + div_u64_rem(paddr, uaddrf->sec_stripe, &secs); + ppa.m.sec = secs; + + sector_div(paddr, uaddrf->sec_stripe); + div_u64_rem(paddr, uaddrf->ch_stripe, &chnls); + ppa.m.grp = chnls; + + sector_div(paddr, uaddrf->ch_stripe); + div_u64_rem(paddr, uaddrf->lun_stripe, &luns); + ppa.m.pu = luns; + + sector_div(paddr, 
uaddrf->lun_stripe); + ppa.m.sec += uaddrf->sec_stripe * paddr; + } return ppa; } @@ -972,15 +1011,32 @@ static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr, static inline u64 pblk_dev_ppa_to_line_addr(struct pblk *pblk, struct ppa_addr p) { - struct nvm_addr_format_12 *ppaf = - (struct nvm_addr_format_12 *)&pblk->addrf; + struct nvm_tgt_dev *dev = pblk->dev; + struct nvm_geo *geo = &dev->geo; u64 paddr; - paddr = (u64)p.g.ch << ppaf->ch_offset; - paddr |= (u64)p.g.lun << ppaf->lun_offset; - paddr |= (u64)p.g.pg << ppaf->pg_offset; - paddr |= (u64)p.g.pl << ppaf->pln_offset; - paddr |= (u64)p.g.sec << ppaf->sec_offset; + if (geo->version == NVM_OCSSD_SPEC_12) { + struct nvm_addr_format_12 *ppaf = + (struct nvm_addr_format_12 *)&pblk->addrf; + + paddr = (u64)p.g.ch << ppaf->ch_offset; + paddr |= (u64)p.g.lun << ppaf->lun_offset; + paddr |= (u64)p.g.pg << ppaf->pg_offset; + paddr |= (u64)p.g.pl << ppaf->pln_offset; + paddr |= (u64)p.g.sec << ppaf->sec_offset; + } else { + struct pblk_addr_format *uaddrf = &pblk->uaddrf; + u64 secs = (u64)p.m.sec; + int sec_stripe; + + paddr = (u64)p.m.grp * uaddrf->sec_stripe; + paddr += (u64)p.m.pu * uaddrf->sec_lun_stripe; + + div_u64_rem(secs, uaddrf->sec_stripe, &sec_stripe); + sector_div(secs, uaddrf->sec_stripe); + paddr += secs * uaddrf->sec_ws_stripe; + paddr += sec_stripe; + } return paddr; } @@ -997,15 +1053,37 @@ static inline struct ppa_addr pblk_ppa32_to_ppa64(struct pblk *pblk, u32 ppa32) ppa64.c.line = ppa32 & ((~0U) >> 1); ppa64.c.is_cached = 1; } else { - struct nvm_addr_format_12 *ppaf = + struct nvm_tgt_dev *dev = pblk->dev; + struct nvm_geo *geo = &dev->geo; + + if (geo->version == NVM_OCSSD_SPEC_12) { + struct nvm_addr_format_12 *ppaf = (struct nvm_addr_format_12 *)&pblk->addrf; - ppa64.g.ch = (ppa32 & ppaf->ch_mask) >> ppaf->ch_offset; - ppa64.g.lun = (ppa32 & ppaf->lun_mask) >> ppaf->lun_offset; - ppa64.g.blk = (ppa32 & ppaf->blk_mask) >> ppaf->blk_offset; - ppa64.g.pg = (ppa32 & 
ppaf->pg_mask) >> ppaf->pg_offset; - ppa64.g.pl = (ppa32 & ppaf->pln_mask) >> ppaf->pln_offset; - ppa64.g.sec = (ppa32 & ppaf->sec_mask) >> ppaf->sec_offset; + ppa64.g.ch = (ppa32 & ppaf->ch_mask) >> + ppaf->ch_offset; + ppa64.g.lun = (ppa32 & ppaf->lun_mask) >> + ppaf->lun_offset; + ppa64.g.blk = (ppa32 & ppaf->blk_mask) >> + ppaf->blk_offset; + ppa64.g.pg = (ppa32 & ppaf->pg_mask) >> + ppaf->pg_offset; + ppa64.g.pl = (ppa32 & ppaf->pln_mask) >> + ppaf->pln_offset; + ppa64.g.sec = (ppa32 & ppaf->sec_mask) >> + ppaf->sec_offset; + } else { + struct nvm_addr_format *lbaf = &pblk->addrf; + + ppa64.m.grp = (ppa32 & lbaf->ch_mask) >> + lbaf->ch_offset; + ppa64.m.pu = (ppa32 & lbaf->lun_mask) >> + lbaf->lun_offset; + ppa64.m.chk = (ppa32 & lbaf->chk_mask) >> + lbaf->chk_offset; + ppa64.m.sec = (ppa32 & lbaf->sec_mask) >> + lbaf->sec_offset; + } } return ppa64; @@ -1021,15 +1099,27 @@ static inline u32 pblk_ppa64_to_ppa32(struct pblk *pblk, struct ppa_addr ppa64) ppa32 |= ppa64.c.line; ppa32 |= 1U << 31; } else { - struct nvm_addr_format_12 *ppaf = + struct nvm_tgt_dev *dev = pblk->dev; + struct nvm_geo *geo = &dev->geo; + + if (geo->version == NVM_OCSSD_SPEC_12) { + struct nvm_addr_format_12 *ppaf = (struct nvm_addr_format_12 *)&pblk->addrf; - ppa32 |= ppa64.g.ch << ppaf->ch_offset; - ppa32 |= ppa64.g.lun << ppaf->lun_offset; - ppa32 |= ppa64.g.blk << ppaf->blk_offset; - ppa32 |= ppa64.g.pg << ppaf->pg_offset; - ppa32 |= ppa64.g.pl << ppaf->pln_offset; - ppa32 |= ppa64.g.sec << ppaf->sec_offset; + ppa32 |= ppa64.g.ch << ppaf->ch_offset; + ppa32 |= ppa64.g.lun << ppaf->lun_offset; + ppa32 |= ppa64.g.blk << ppaf->blk_offset; + ppa32 |= ppa64.g.pg << ppaf->pg_offset; + ppa32 |= ppa64.g.pl << ppaf->pln_offset; + ppa32 |= ppa64.g.sec << ppaf->sec_offset; + } else { + struct nvm_addr_format *lbaf = &pblk->addrf; + + ppa32 |= ppa64.m.grp << lbaf->ch_offset; + ppa32 |= ppa64.m.pu << lbaf->lun_offset; + ppa32 |= ppa64.m.chk << lbaf->chk_offset; + ppa32 |= ppa64.m.sec << 
lbaf->sec_offset; + } } return ppa32; @@ -1147,6 +1237,9 @@ static inline int pblk_set_progr_mode(struct pblk *pblk, int type) struct nvm_geo *geo = &dev->geo; int flags; + if (geo->version == NVM_OCSSD_SPEC_20) + return 0; + flags = geo->pln_mode >> 1; if (type == PBLK_WRITE) @@ -1166,6 +1259,9 @@ static inline int pblk_set_read_mode(struct pblk *pblk, int type) struct nvm_geo *geo = &dev->geo; int flags; + if (geo->version == NVM_OCSSD_SPEC_20) + return 0; + flags = NVM_IO_SUSPEND | NVM_IO_SCRAMBLE_ENABLE; if (type == PBLK_READ_SEQUENTIAL) flags |= geo->pln_mode >> 1; @@ -1179,16 +1275,21 @@ static inline int pblk_io_aligned(struct pblk *pblk, int nr_secs) } #ifdef CONFIG_NVM_DEBUG -static inline void print_ppa(struct ppa_addr *p, char *msg, int error) +static inline void print_ppa(struct nvm_geo *geo, struct ppa_addr *p, + char *msg, int error) { if (p->c.is_cached) { pr_err("ppa: (%s: %x) cache line: %llu\n", msg, error, (u64)p->c.line); - } else { + } else if (geo->version == NVM_OCSSD_SPEC_12) { pr_err("ppa: (%s: %x):ch:%d,lun:%d,blk:%d,pg:%d,pl:%d,sec:%d\n", msg, error, p->g.ch, p->g.lun, p->g.blk, p->g.pg, p->g.pl, p->g.sec); + } else { + pr_err("ppa: (%s: %x):ch:%d,lun:%d,chk:%d,sec:%d\n", + msg, error, + p->m.grp, p->m.pu, p->m.chk, p->m.sec); } } @@ -1198,13 +1299,13 @@ static inline void pblk_print_failed_rqd(struct pblk *pblk, struct nvm_rq *rqd, int bit = -1; if (rqd->nr_ppas == 1) { - print_ppa(&rqd->ppa_addr, "rqd", error); + print_ppa(&pblk->dev->geo, &rqd->ppa_addr, "rqd", error); return; } while ((bit = find_next_bit((void *)&rqd->ppa_status, rqd->nr_ppas, bit + 1)) < rqd->nr_ppas) { - print_ppa(&rqd->ppa_list[bit], "rqd", error); + print_ppa(&pblk->dev->geo, &rqd->ppa_list[bit], "rqd", error); } pr_err("error:%d, ppa_status:%llx\n", error, rqd->ppa_status); @@ -1220,16 +1321,25 @@ static inline int pblk_boundary_ppa_checks(struct nvm_tgt_dev *tgt_dev, for (i = 0; i < nr_ppas; i++) { ppa = &ppas[i]; - if (!ppa->c.is_cached && - ppa->g.ch < 
geo->num_ch && - ppa->g.lun < geo->num_lun && - ppa->g.pl < geo->num_pln && - ppa->g.blk < geo->num_chk && - ppa->g.pg < geo->num_pg && - ppa->g.sec < geo->ws_min) - continue; + if (geo->version == NVM_OCSSD_SPEC_12) { + if (!ppa->c.is_cached && + ppa->g.ch < geo->num_ch && + ppa->g.lun < geo->num_lun && + ppa->g.pl < geo->num_pln && + ppa->g.blk < geo->num_chk && + ppa->g.pg < geo->num_pg && + ppa->g.sec < geo->ws_min) + continue; + } else { + if (!ppa->c.is_cached && + ppa->m.grp < geo->num_ch && + ppa->m.pu < geo->num_lun && + ppa->m.chk < geo->num_chk && + ppa->m.sec < geo->clba) + continue; + } - print_ppa(ppa, "boundary", i); + print_ppa(geo, ppa, "boundary", i); return 1; } -- 2.7.4
* Re: [PATCH 15/15] lightnvm: pblk: implement 2.0 support 2018-02-28 15:49 ` Javier González @ 2018-03-01 10:48 ` Matias Bjørling -1 siblings, 0 replies; 71+ messages in thread From: Matias Bjørling @ 2018-03-01 10:48 UTC (permalink / raw) To: Javier González Cc: linux-block, linux-kernel, linux-nvme, Javier González On 02/28/2018 04:49 PM, Javier González wrote: > Implement 2.0 support in pblk. This includes the address formatting and > mapping paths, as well as the sysfs entries for them. > > Signed-off-by: Javier González <javier@cnexlabs.com> > --- > drivers/lightnvm/pblk-init.c | 57 ++++++++++-- > drivers/lightnvm/pblk-sysfs.c | 36 ++++++-- > drivers/lightnvm/pblk.h | 198 ++++++++++++++++++++++++++++++++---------- > 3 files changed, 233 insertions(+), 58 deletions(-) > > diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c > index b3e15ef63df3..474f3f047087 100644 > --- a/drivers/lightnvm/pblk-init.c > +++ b/drivers/lightnvm/pblk-init.c > @@ -231,20 +231,63 @@ static int pblk_set_addrf_12(struct nvm_geo *geo, > return dst->blk_offset + src->blk_len; > } > > +static int pblk_set_addrf_20(struct nvm_geo *geo, > + struct nvm_addr_format *adst, > + struct pblk_addr_format *udst) > +{ > + struct nvm_addr_format *src = &geo->addrf; > + > + adst->ch_len = get_count_order(geo->num_ch); > + adst->lun_len = get_count_order(geo->num_lun); > + adst->chk_len = src->chk_len; > + adst->sec_len = src->sec_len; > + > + adst->sec_offset = 0; > + adst->ch_offset = adst->sec_len; > + adst->lun_offset = adst->ch_offset + adst->ch_len; > + adst->chk_offset = adst->lun_offset + adst->lun_len; > + > + adst->sec_mask = ((1ULL << adst->sec_len) - 1) << adst->sec_offset; > + adst->chk_mask = ((1ULL << adst->chk_len) - 1) << adst->chk_offset; > + adst->lun_mask = ((1ULL << adst->lun_len) - 1) << adst->lun_offset; > + adst->ch_mask = ((1ULL << adst->ch_len) - 1) << adst->ch_offset; > + > + udst->sec_stripe = geo->ws_opt; > + udst->ch_stripe = geo->num_ch; > + 
udst->lun_stripe = geo->num_lun; > + > + udst->sec_lun_stripe = udst->sec_stripe * udst->ch_stripe; > + udst->sec_ws_stripe = udst->sec_lun_stripe * udst->lun_stripe; > + > + return adst->chk_offset + adst->chk_len; > +} > + > static int pblk_set_addrf(struct pblk *pblk) > { > struct nvm_tgt_dev *dev = pblk->dev; > struct nvm_geo *geo = &dev->geo; > int mod; > > - div_u64_rem(geo->clba, pblk->min_write_pgs, &mod); > - if (mod) { > - pr_err("pblk: bad configuration of sectors/pages\n"); > + switch (geo->version) { > + case NVM_OCSSD_SPEC_12: > + div_u64_rem(geo->clba, pblk->min_write_pgs, &mod); > + if (mod) { > + pr_err("pblk: bad configuration of sectors/pages\n"); > + return -EINVAL; > + } > + > + pblk->addrf_len = pblk_set_addrf_12(geo, (void *)&pblk->addrf); > + break; > + case NVM_OCSSD_SPEC_20: > + pblk->addrf_len = pblk_set_addrf_20(geo, (void *)&pblk->addrf, > + &pblk->uaddrf); > + break; > + default: > + pr_err("pblk: OCSSD revision not supported (%d)\n", > + geo->version); > return -EINVAL; > } > > - pblk->addrf_len = pblk_set_addrf_12(geo, (void *)&pblk->addrf); > - > return 0; > } > > @@ -1117,7 +1160,9 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk, > struct pblk *pblk; > int ret; > > - if (geo->version != NVM_OCSSD_SPEC_12) { > + /* pblk supports 1.2 and 2.0 versions */ > + if (!(geo->version == NVM_OCSSD_SPEC_12 || > + geo->version == NVM_OCSSD_SPEC_20)) { > pr_err("pblk: OCSSD version not supported (%u)\n", > geo->version); > return ERR_PTR(-EINVAL); > diff --git a/drivers/lightnvm/pblk-sysfs.c b/drivers/lightnvm/pblk-sysfs.c > index a643dc623731..391f865b02d9 100644 > --- a/drivers/lightnvm/pblk-sysfs.c > +++ b/drivers/lightnvm/pblk-sysfs.c > @@ -113,15 +113,16 @@ static ssize_t pblk_sysfs_ppaf(struct pblk *pblk, char *page) > { > struct nvm_tgt_dev *dev = pblk->dev; > struct nvm_geo *geo = &dev->geo; > - struct nvm_addr_format_12 *ppaf; > - struct nvm_addr_format_12 *geo_ppaf; > ssize_t sz = 0; > > - ppaf = (struct 
nvm_addr_format_12 *)&pblk->addrf; > - geo_ppaf = (struct nvm_addr_format_12 *)&geo->addrf; > + if (geo->version == NVM_OCSSD_SPEC_12) { > + struct nvm_addr_format_12 *ppaf = > + (struct nvm_addr_format_12 *)&pblk->addrf; > + struct nvm_addr_format_12 *geo_ppaf = > + (struct nvm_addr_format_12 *)&geo->addrf; > > - sz = snprintf(page, PAGE_SIZE, > - "pblk:(s:%d)ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n", > + sz = snprintf(page, PAGE_SIZE, > + "pblk:(s:%d)ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n", > pblk->addrf_len, > ppaf->ch_offset, ppaf->ch_len, > ppaf->lun_offset, ppaf->lun_len, > @@ -130,14 +131,33 @@ static ssize_t pblk_sysfs_ppaf(struct pblk *pblk, char *page) > ppaf->pln_offset, ppaf->pln_len, > ppaf->sec_offset, ppaf->sec_len); > > - sz += snprintf(page + sz, PAGE_SIZE - sz, > - "device:ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n", > + sz += snprintf(page + sz, PAGE_SIZE - sz, > + "device:ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n", > geo_ppaf->ch_offset, geo_ppaf->ch_len, > geo_ppaf->lun_offset, geo_ppaf->lun_len, > geo_ppaf->blk_offset, geo_ppaf->blk_len, > geo_ppaf->pg_offset, geo_ppaf->pg_len, > geo_ppaf->pln_offset, geo_ppaf->pln_len, > geo_ppaf->sec_offset, geo_ppaf->sec_len); > + } else { > + struct nvm_addr_format *ppaf = &pblk->addrf; > + struct nvm_addr_format *geo_ppaf = &geo->addrf; > + > + sz = snprintf(page, PAGE_SIZE, > + "pblk:(s:%d)ch:%d/%d,lun:%d/%d,chk:%d/%d/sec:%d/%d\n", > + pblk->addrf_len, > + ppaf->ch_offset, ppaf->ch_len, > + ppaf->lun_offset, ppaf->lun_len, > + ppaf->chk_offset, ppaf->chk_len, > + ppaf->sec_offset, ppaf->sec_len); > + > + sz += snprintf(page + sz, PAGE_SIZE - sz, > + "device:ch:%d/%d,lun:%d/%d,chk:%d/%d,sec:%d/%d\n", > + geo_ppaf->ch_offset, geo_ppaf->ch_len, > + geo_ppaf->lun_offset, geo_ppaf->lun_len, > + geo_ppaf->chk_offset, geo_ppaf->chk_len, > + geo_ppaf->sec_offset, geo_ppaf->sec_len); > + } > > return sz; > } > diff --git 
a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk.h > index ee149766b7a0..1deddd38c0ac 100644 > --- a/drivers/lightnvm/pblk.h > +++ b/drivers/lightnvm/pblk.h > @@ -561,6 +561,18 @@ enum { > PBLK_STATE_STOPPED = 3, > }; > > +/* Internal format to support not power-of-2 device formats */ > +struct pblk_addr_format { > + /* gen to dev */ > + int sec_stripe; > + int ch_stripe; > + int lun_stripe; > + > + /* dev to gen */ > + int sec_lun_stripe; > + int sec_ws_stripe; > +}; > + > struct pblk { > struct nvm_tgt_dev *dev; > struct gendisk *disk; > @@ -573,7 +585,8 @@ struct pblk { > struct pblk_line_mgmt l_mg; /* Line management */ > struct pblk_line_meta lm; /* Line metadata */ > > - struct nvm_addr_format addrf; > + struct nvm_addr_format addrf; /* Aligned address format */ > + struct pblk_addr_format uaddrf; /* Unaligned address format */ > int addrf_len; > > struct pblk_rb rwb; > @@ -954,17 +967,43 @@ static inline int pblk_ppa_to_pos(struct nvm_geo *geo, struct ppa_addr p) > static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr, > u64 line_id) > { > - struct nvm_addr_format_12 *ppaf = > - (struct nvm_addr_format_12 *)&pblk->addrf; > + struct nvm_tgt_dev *dev = pblk->dev; > + struct nvm_geo *geo = &dev->geo; > struct ppa_addr ppa; > > - ppa.ppa = 0; > - ppa.g.blk = line_id; > - ppa.g.pg = (paddr & ppaf->pg_mask) >> ppaf->pg_offset; > - ppa.g.lun = (paddr & ppaf->lun_mask) >> ppaf->lun_offset; > - ppa.g.ch = (paddr & ppaf->ch_mask) >> ppaf->ch_offset; > - ppa.g.pl = (paddr & ppaf->pln_mask) >> ppaf->pln_offset; > - ppa.g.sec = (paddr & ppaf->sec_mask) >> ppaf->sec_offset; > + if (geo->version == NVM_OCSSD_SPEC_12) { > + struct nvm_addr_format_12 *ppaf = > + (struct nvm_addr_format_12 *)&pblk->addrf; > + > + ppa.ppa = 0; > + ppa.g.blk = line_id; > + ppa.g.pg = (paddr & ppaf->pg_mask) >> ppaf->pg_offset; > + ppa.g.lun = (paddr & ppaf->lun_mask) >> ppaf->lun_offset; > + ppa.g.ch = (paddr & ppaf->ch_mask) >> ppaf->ch_offset; > + ppa.g.pl = (paddr & 
ppaf->pln_mask) >> ppaf->pln_offset; > + ppa.g.sec = (paddr & ppaf->sec_mask) >> ppaf->sec_offset; > + } else { > + struct pblk_addr_format *uaddrf = &pblk->uaddrf; > + int secs, chnls, luns; > + > + ppa.ppa = 0; > + > + ppa.m.chk = line_id; > + > + div_u64_rem(paddr, uaddrf->sec_stripe, &secs); > + ppa.m.sec = secs; > + > + sector_div(paddr, uaddrf->sec_stripe); > + div_u64_rem(paddr, uaddrf->ch_stripe, &chnls); > + ppa.m.grp = chnls; > + > + sector_div(paddr, uaddrf->ch_stripe); > + div_u64_rem(paddr, uaddrf->lun_stripe, &luns); > + ppa.m.pu = luns; > + > + sector_div(paddr, uaddrf->lun_stripe); > + ppa.m.sec += uaddrf->sec_stripe * paddr; > + } > > return ppa; > } > @@ -972,15 +1011,32 @@ static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr, > static inline u64 pblk_dev_ppa_to_line_addr(struct pblk *pblk, > struct ppa_addr p) > { > - struct nvm_addr_format_12 *ppaf = > - (struct nvm_addr_format_12 *)&pblk->addrf; > + struct nvm_tgt_dev *dev = pblk->dev; > + struct nvm_geo *geo = &dev->geo; > u64 paddr; > > - paddr = (u64)p.g.ch << ppaf->ch_offset; > - paddr |= (u64)p.g.lun << ppaf->lun_offset; > - paddr |= (u64)p.g.pg << ppaf->pg_offset; > - paddr |= (u64)p.g.pl << ppaf->pln_offset; > - paddr |= (u64)p.g.sec << ppaf->sec_offset; > + if (geo->version == NVM_OCSSD_SPEC_12) { > + struct nvm_addr_format_12 *ppaf = > + (struct nvm_addr_format_12 *)&pblk->addrf; > + > + paddr = (u64)p.g.ch << ppaf->ch_offset; > + paddr |= (u64)p.g.lun << ppaf->lun_offset; > + paddr |= (u64)p.g.pg << ppaf->pg_offset; > + paddr |= (u64)p.g.pl << ppaf->pln_offset; > + paddr |= (u64)p.g.sec << ppaf->sec_offset; > + } else { > + struct pblk_addr_format *uaddrf = &pblk->uaddrf; > + u64 secs = (u64)p.m.sec; > + int sec_stripe; > + > + paddr = (u64)p.m.grp * uaddrf->sec_stripe; > + paddr += (u64)p.m.pu * uaddrf->sec_lun_stripe; > + > + div_u64_rem(secs, uaddrf->sec_stripe, &sec_stripe); > + sector_div(secs, uaddrf->sec_stripe); > + paddr += secs * uaddrf->sec_ws_stripe; 
> + paddr += sec_stripe; > + } > > return paddr; > } > @@ -997,15 +1053,37 @@ static inline struct ppa_addr pblk_ppa32_to_ppa64(struct pblk *pblk, u32 ppa32) > ppa64.c.line = ppa32 & ((~0U) >> 1); > ppa64.c.is_cached = 1; > } else { > - struct nvm_addr_format_12 *ppaf = > + struct nvm_tgt_dev *dev = pblk->dev; > + struct nvm_geo *geo = &dev->geo; > + > + if (geo->version == NVM_OCSSD_SPEC_12) { > + struct nvm_addr_format_12 *ppaf = > (struct nvm_addr_format_12 *)&pblk->addrf; > > - ppa64.g.ch = (ppa32 & ppaf->ch_mask) >> ppaf->ch_offset; > - ppa64.g.lun = (ppa32 & ppaf->lun_mask) >> ppaf->lun_offset; > - ppa64.g.blk = (ppa32 & ppaf->blk_mask) >> ppaf->blk_offset; > - ppa64.g.pg = (ppa32 & ppaf->pg_mask) >> ppaf->pg_offset; > - ppa64.g.pl = (ppa32 & ppaf->pln_mask) >> ppaf->pln_offset; > - ppa64.g.sec = (ppa32 & ppaf->sec_mask) >> ppaf->sec_offset; > + ppa64.g.ch = (ppa32 & ppaf->ch_mask) >> > + ppaf->ch_offset; > + ppa64.g.lun = (ppa32 & ppaf->lun_mask) >> > + ppaf->lun_offset; > + ppa64.g.blk = (ppa32 & ppaf->blk_mask) >> > + ppaf->blk_offset; > + ppa64.g.pg = (ppa32 & ppaf->pg_mask) >> > + ppaf->pg_offset; > + ppa64.g.pl = (ppa32 & ppaf->pln_mask) >> > + ppaf->pln_offset; > + ppa64.g.sec = (ppa32 & ppaf->sec_mask) >> > + ppaf->sec_offset; > + } else { > + struct nvm_addr_format *lbaf = &pblk->addrf; > + > + ppa64.m.grp = (ppa32 & lbaf->ch_mask) >> > + lbaf->ch_offset; > + ppa64.m.pu = (ppa32 & lbaf->lun_mask) >> > + lbaf->lun_offset; > + ppa64.m.chk = (ppa32 & lbaf->chk_mask) >> > + lbaf->chk_offset; > + ppa64.m.sec = (ppa32 & lbaf->sec_mask) >> > + lbaf->sec_offset; > + } > } > > return ppa64; > @@ -1021,15 +1099,27 @@ static inline u32 pblk_ppa64_to_ppa32(struct pblk *pblk, struct ppa_addr ppa64) > ppa32 |= ppa64.c.line; > ppa32 |= 1U << 31; > } else { > - struct nvm_addr_format_12 *ppaf = > + struct nvm_tgt_dev *dev = pblk->dev; > + struct nvm_geo *geo = &dev->geo; > + > + if (geo->version == NVM_OCSSD_SPEC_12) { > + struct nvm_addr_format_12 *ppaf = > (struct 
nvm_addr_format_12 *)&pblk->addrf; > > - ppa32 |= ppa64.g.ch << ppaf->ch_offset; > - ppa32 |= ppa64.g.lun << ppaf->lun_offset; > - ppa32 |= ppa64.g.blk << ppaf->blk_offset; > - ppa32 |= ppa64.g.pg << ppaf->pg_offset; > - ppa32 |= ppa64.g.pl << ppaf->pln_offset; > - ppa32 |= ppa64.g.sec << ppaf->sec_offset; > + ppa32 |= ppa64.g.ch << ppaf->ch_offset; > + ppa32 |= ppa64.g.lun << ppaf->lun_offset; > + ppa32 |= ppa64.g.blk << ppaf->blk_offset; > + ppa32 |= ppa64.g.pg << ppaf->pg_offset; > + ppa32 |= ppa64.g.pl << ppaf->pln_offset; > + ppa32 |= ppa64.g.sec << ppaf->sec_offset; > + } else { > + struct nvm_addr_format *lbaf = &pblk->addrf; > + > + ppa32 |= ppa64.m.grp << lbaf->ch_offset; > + ppa32 |= ppa64.m.pu << lbaf->lun_offset; > + ppa32 |= ppa64.m.chk << lbaf->chk_offset; > + ppa32 |= ppa64.m.sec << lbaf->sec_offset; > + } > } > > return ppa32; > @@ -1147,6 +1237,9 @@ static inline int pblk_set_progr_mode(struct pblk *pblk, int type) > struct nvm_geo *geo = &dev->geo; > int flags; > > + if (geo->version == NVM_OCSSD_SPEC_20) > + return 0; > + > flags = geo->pln_mode >> 1; > > if (type == PBLK_WRITE) > @@ -1166,6 +1259,9 @@ static inline int pblk_set_read_mode(struct pblk *pblk, int type) > struct nvm_geo *geo = &dev->geo; > int flags; > > + if (geo->version == NVM_OCSSD_SPEC_20) > + return 0; > + > flags = NVM_IO_SUSPEND | NVM_IO_SCRAMBLE_ENABLE; > if (type == PBLK_READ_SEQUENTIAL) > flags |= geo->pln_mode >> 1; > @@ -1179,16 +1275,21 @@ static inline int pblk_io_aligned(struct pblk *pblk, int nr_secs) > } > > #ifdef CONFIG_NVM_DEBUG > -static inline void print_ppa(struct ppa_addr *p, char *msg, int error) > +static inline void print_ppa(struct nvm_geo *geo, struct ppa_addr *p, > + char *msg, int error) > { > if (p->c.is_cached) { > pr_err("ppa: (%s: %x) cache line: %llu\n", > msg, error, (u64)p->c.line); > - } else { > + } else if (geo->version == NVM_OCSSD_SPEC_12) { > pr_err("ppa: (%s: %x):ch:%d,lun:%d,blk:%d,pg:%d,pl:%d,sec:%d\n", > msg, error, > p->g.ch, 
p->g.lun, p->g.blk, > p->g.pg, p->g.pl, p->g.sec); > + } else { > + pr_err("ppa: (%s: %x):ch:%d,lun:%d,chk:%d,sec:%d\n", > + msg, error, > + p->m.grp, p->m.pu, p->m.chk, p->m.sec); > } > } > > @@ -1198,13 +1299,13 @@ static inline void pblk_print_failed_rqd(struct pblk *pblk, struct nvm_rq *rqd, > int bit = -1; > > if (rqd->nr_ppas == 1) { > - print_ppa(&rqd->ppa_addr, "rqd", error); > + print_ppa(&pblk->dev->geo, &rqd->ppa_addr, "rqd", error); > return; > } > > while ((bit = find_next_bit((void *)&rqd->ppa_status, rqd->nr_ppas, > bit + 1)) < rqd->nr_ppas) { > - print_ppa(&rqd->ppa_list[bit], "rqd", error); > + print_ppa(&pblk->dev->geo, &rqd->ppa_list[bit], "rqd", error); > } > > pr_err("error:%d, ppa_status:%llx\n", error, rqd->ppa_status); > @@ -1220,16 +1321,25 @@ static inline int pblk_boundary_ppa_checks(struct nvm_tgt_dev *tgt_dev, > for (i = 0; i < nr_ppas; i++) { > ppa = &ppas[i]; > > - if (!ppa->c.is_cached && > - ppa->g.ch < geo->num_ch && > - ppa->g.lun < geo->num_lun && > - ppa->g.pl < geo->num_pln && > - ppa->g.blk < geo->num_chk && > - ppa->g.pg < geo->num_pg && > - ppa->g.sec < geo->ws_min) > - continue; > + if (geo->version == NVM_OCSSD_SPEC_12) { > + if (!ppa->c.is_cached && > + ppa->g.ch < geo->num_ch && > + ppa->g.lun < geo->num_lun && > + ppa->g.pl < geo->num_pln && > + ppa->g.blk < geo->num_chk && > + ppa->g.pg < geo->num_pg && > + ppa->g.sec < geo->ws_min) > + continue; > + } else { > + if (!ppa->c.is_cached && > + ppa->m.grp < geo->num_ch && > + ppa->m.pu < geo->num_lun && > + ppa->m.chk < geo->num_chk && > + ppa->m.sec < geo->clba) > + continue; > + } > > - print_ppa(ppa, "boundary", i); > + print_ppa(geo, ppa, "boundary", i); > > return 1; > } > Ok, I think this is close to being good enough that the patches can be picked up in v5.