From: "Matias Bjørling" <m@bjorling.me>
To: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	axboe@fb.com, keith.busch@intel.com,
	linux-nvme@lists.infradead.org, dm-devel@redhat.com
Cc: "Matias Bjørling" <m@bjorling.me>
Subject: [PATCH 1/6] nvme: refactor namespaces to support non-gendisk devices
Date: Wed, 29 Jun 2016 16:51:36 +0200	[thread overview]
Message-ID: <1467211901-26707-2-git-send-email-m@bjorling.me> (raw)
In-Reply-To: <1467211901-26707-1-git-send-email-m@bjorling.me>

With LightNVM-enabled namespaces, the gendisk structure is not exposed
to the user. This prevents LightNVM users from accessing the NVMe device
driver's sysfs entries and the LightNVM namespace geometry.

Refactor the revalidation process so that a namespace, rather than a
gendisk, is revalidated. This allows later patches to wire the sysfs
entries up to a non-gendisk namespace.

Signed-off-by: Matias Bjørling <m@bjorling.me>
---
 drivers/lightnvm/core.c      |   2 +
 drivers/nvme/host/core.c     | 134 ++++++++++++++++++++++++++-----------------
 drivers/nvme/host/lightnvm.c |   5 +-
 3 files changed, 87 insertions(+), 54 deletions(-)

diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index 9ebd2cf..25c5df9 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -581,6 +581,8 @@ static int nvm_core_init(struct nvm_dev *dev)
 	mutex_init(&dev->mlock);
 	spin_lock_init(&dev->lock);
 
+	blk_queue_logical_block_size(dev->q, dev->sec_size);
+
 	return 0;
 err_fmtype:
 	kfree(dev->lun_map);
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 684062a..f615b6b 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -785,42 +785,33 @@ static void nvme_config_discard(struct nvme_ns *ns)
 	queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, ns->queue);
 }
 
-static int nvme_revalidate_disk(struct gendisk *disk)
+static int nvme_revalidate_ns(struct nvme_ns *ns, struct nvme_id_ns **id)
 {
-	struct nvme_ns *ns = disk->private_data;
-	struct nvme_id_ns *id;
-	u8 lbaf, pi_type;
-	u16 old_ms;
-	unsigned short bs;
-
-	if (test_bit(NVME_NS_DEAD, &ns->flags)) {
-		set_capacity(disk, 0);
-		return -ENODEV;
-	}
-	if (nvme_identify_ns(ns->ctrl, ns->ns_id, &id)) {
+	if (nvme_identify_ns(ns->ctrl, ns->ns_id, id)) {
 		dev_warn(disk_to_dev(ns->disk), "%s: Identify failure\n",
 				__func__);
 		return -ENODEV;
 	}
-	if (id->ncap == 0) {
-		kfree(id);
-		return -ENODEV;
-	}
 
-	if (nvme_nvm_ns_supported(ns, id) && ns->type != NVME_NS_LIGHTNVM) {
-		if (nvme_nvm_register(ns->queue, disk->disk_name)) {
-			dev_warn(disk_to_dev(ns->disk),
-				"%s: LightNVM init failure\n", __func__);
-			kfree(id);
-			return -ENODEV;
-		}
-		ns->type = NVME_NS_LIGHTNVM;
+	if ((*id)->ncap == 0) {
+		kfree(*id);
+		return -ENODEV;
 	}
 
 	if (ns->ctrl->vs >= NVME_VS(1, 1))
-		memcpy(ns->eui, id->eui64, sizeof(ns->eui));
+		memcpy(ns->eui, (*id)->eui64, sizeof(ns->eui));
 	if (ns->ctrl->vs >= NVME_VS(1, 2))
-		memcpy(ns->uuid, id->nguid, sizeof(ns->uuid));
+		memcpy(ns->uuid, (*id)->nguid, sizeof(ns->uuid));
+
+	return 0;
+}
+
+static void __nvme_revalidate_disk(struct gendisk *disk, struct nvme_id_ns *id)
+{
+	struct nvme_ns *ns = disk->private_data;
+	u8 lbaf, pi_type;
+	u16 old_ms;
+	unsigned short bs;
 
 	old_ms = ns->ms;
 	lbaf = id->flbas & NVME_NS_FLBAS_LBA_MASK;
@@ -859,8 +850,26 @@ static int nvme_revalidate_disk(struct gendisk *disk)
 	if (ns->ctrl->oncs & NVME_CTRL_ONCS_DSM)
 		nvme_config_discard(ns);
 	blk_mq_unfreeze_queue(disk->queue);
+}
 
+static int nvme_revalidate_disk(struct gendisk *disk)
+{
+	struct nvme_ns *ns = disk->private_data;
+	struct nvme_id_ns *id = NULL;
+	int ret;
+
+	if (test_bit(NVME_NS_DEAD, &ns->flags)) {
+		set_capacity(disk, 0);
+		return -ENODEV;
+	}
+
+	ret = nvme_revalidate_ns(ns, &id);
+	if (ret)
+		return ret;
+
+	__nvme_revalidate_disk(disk, id);
 	kfree(id);
+
 	return 0;
 }
 
@@ -1430,6 +1439,8 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
 {
 	struct nvme_ns *ns;
 	struct gendisk *disk;
+	struct nvme_id_ns *id;
+	char disk_name[DISK_NAME_LEN];
 	int node = dev_to_node(ctrl->dev);
 
 	lockdep_assert_held(&ctrl->namespaces_mutex);
@@ -1449,44 +1460,63 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
 	ns->queue->queuedata = ns;
 	ns->ctrl = ctrl;
 
-	disk = alloc_disk_node(0, node);
-	if (!disk)
-		goto out_free_queue;
-
 	kref_init(&ns->kref);
 	ns->ns_id = nsid;
-	ns->disk = disk;
 	ns->lba_shift = 9; /* set to a default value for 512 until disk is validated */
 
-
 	blk_queue_logical_block_size(ns->queue, 1 << ns->lba_shift);
 	nvme_set_queue_limits(ctrl, ns->queue);
 
-	disk->major = nvme_major;
-	disk->first_minor = 0;
-	disk->fops = &nvme_fops;
-	disk->private_data = ns;
-	disk->queue = ns->queue;
-	disk->driverfs_dev = ctrl->device;
-	disk->flags = GENHD_FL_EXT_DEVT;
-	sprintf(disk->disk_name, "nvme%dn%d", ctrl->instance, ns->instance);
-
-	if (nvme_revalidate_disk(ns->disk))
-		goto out_free_disk;
+	if (nvme_revalidate_ns(ns, &id))
+		goto out_free_queue;
+
+	sprintf(disk_name, "nvme%dn%d", ctrl->instance, ns->instance);
+
+	if (nvme_nvm_ns_supported(ns, id)) {
+		if (nvme_nvm_register(ns->queue, disk_name)) {
+			dev_warn(ctrl->dev,
+				"%s: LightNVM init failure\n", __func__);
+			goto out_free_id;
+		}
+
+		disk = alloc_disk_node(0, node);
+		if (!disk)
+			goto out_free_id;
+		memcpy(disk->disk_name, disk_name, DISK_NAME_LEN);
+		ns->disk = disk;
+		ns->type = NVME_NS_LIGHTNVM;
+	} else {
+		disk = alloc_disk_node(0, node);
+		if (!disk)
+			goto out_free_id;
+
+		disk->major = nvme_major;
+		disk->first_minor = 0;
+		disk->fops = &nvme_fops;
+		disk->private_data = ns;
+		disk->queue = ns->queue;
+		disk->driverfs_dev = ctrl->device;
+		disk->flags = GENHD_FL_EXT_DEVT;
+		memcpy(disk->disk_name, disk_name, DISK_NAME_LEN);
+		ns->disk = disk;
+
+		__nvme_revalidate_disk(disk, id);
+
+		add_disk(ns->disk);
+
+		if (sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
+							&nvme_ns_attr_group))
+			pr_warn("%s: failed to create sysfs group for identification\n",
+				ns->disk->disk_name);
+	}
 
 	list_add_tail_rcu(&ns->list, &ctrl->namespaces);
 	kref_get(&ctrl->kref);
-	if (ns->type == NVME_NS_LIGHTNVM)
-		return;
 
-	add_disk(ns->disk);
-	if (sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
-					&nvme_ns_attr_group))
-		pr_warn("%s: failed to create sysfs group for identification\n",
-			ns->disk->disk_name);
+	kfree(id);
 	return;
- out_free_disk:
-	kfree(disk);
+ out_free_id:
+	kfree(id);
  out_free_queue:
 	blk_cleanup_queue(ns->queue);
  out_release_instance:
diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index 97fe610..ba51602 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -474,8 +474,9 @@ static inline void nvme_nvm_rqtocmd(struct request *rq, struct nvm_rq *rqd,
 	c->ph_rw.length = cpu_to_le16(rqd->nr_ppas - 1);
 
 	if (rqd->opcode == NVM_OP_HBWRITE || rqd->opcode == NVM_OP_HBREAD)
-		c->hb_rw.slba = cpu_to_le64(nvme_block_nr(ns,
-						rqd->bio->bi_iter.bi_sector));
+		/* Temporarily hard-code the shift configuration; lba_shift
+		 * from nvm_dev will be available in a follow-up patch. */
+		c->hb_rw.slba = cpu_to_le64(rqd->bio->bi_iter.bi_sector >> 3);
 }
 
 static void nvme_nvm_end_io(struct request *rq, int error)
-- 
2.1.4



Thread overview: 33+ messages
2016-06-29 14:51 [PATCH 0/6] enable sysfs for lightnvm Matias Bjørling
2016-06-29 14:51 ` Matias Bjørling
2016-06-29 14:51 ` Matias Bjørling
2016-06-29 14:51 ` Matias Bjørling [this message]
2016-06-29 14:51   ` [PATCH 1/6] nvme: refactor namespaces to support non-gendisk devices Matias Bjørling
2016-06-29 14:51   ` Matias Bjørling
2016-06-29 14:51   ` Matias Bjørling
2016-06-29 14:51 ` [PATCH 2/6] null_blk: refactor " Matias Bjørling
2016-06-29 14:51   ` Matias Bjørling
2016-06-29 14:51   ` Matias Bjørling
2016-06-29 14:51   ` Matias Bjørling
2016-06-29 14:51 ` [PATCH 3/6] blk-mq: register device instead of disk Matias Bjørling
2016-06-29 14:51   ` Matias Bjørling
2016-06-29 14:51   ` Matias Bjørling
2016-06-29 14:51   ` Matias Bjørling
2016-06-29 14:51 ` [PATCH 4/6] lightnvm: let drivers control the lifetime of nvm_dev Matias Bjørling
2016-06-29 14:51   ` Matias Bjørling
2016-06-29 14:51   ` Matias Bjørling
2016-06-29 14:51   ` Matias Bjørling
2016-06-29 14:51 ` [PATCH 5/6] lightnvm: expose device geometry through sysfs Matias Bjørling
2016-06-29 14:51   ` Matias Bjørling
2016-06-29 14:51   ` Matias Bjørling
2016-06-30 20:01   ` J Freyensee
2016-06-30 20:01     ` J Freyensee
2016-07-01  7:20     ` Matias Bjørling
2016-07-01  7:20       ` Matias Bjørling
2016-06-29 14:51 ` [PATCH 6/6] lightnvm: expose gennvm target type " Matias Bjørling
2016-06-29 14:51   ` Matias Bjørling
2016-06-29 14:51   ` Matias Bjørling
2016-06-29 14:51   ` Matias Bjørling
  -- strict thread matches above, loose matches on Subject: below --
2016-06-10 12:20 [PATCH 0/6] sysfs support for LightNVM Matias Bjørling
2016-06-10 12:20 ` [PATCH 1/6] nvme: refactor namespaces to support non-gendisk devices Matias Bjørling
2016-06-10 12:20   ` Matias Bjørling
2016-06-10 12:20   ` Matias Bjørling
