From: Qiaowei Ren <qiaowei.ren@intel.com>
To: linux-bcache@vger.kernel.org
Cc: qiaowei.ren@intel.com, jianpeng.ma@intel.com, colyli@suse.de,
	rdunlap@infradead.org, Randy Dunlap, Colin Ian King
Subject: [bch-nvm-pages v10 2/6] bcache: initialize the nvm pages allocator
Date: Fri, 21 May 2021 10:57:22 -0400
Message-Id: <20210521145726.154276-3-qiaowei.ren@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210521145726.154276-1-qiaowei.ren@intel.com>
References: <20210521145726.154276-1-qiaowei.ren@intel.com>

From: Jianpeng Ma <jianpeng.ma@intel.com>

This patch defines the prototype data structures in memory and initializes
the nvm pages allocator. The nvm address space managed by this allocator can
consist of many nvm namespaces, and several namespaces can be composed into
one nvm set, like a cache set. For this initial implementation, only one nvm
set is supported.

Users of this nvm pages allocator need to call bch_register_namespace() to
register an nvdimm device (like /dev/pmemX) with the allocator as an
instance of struct bch_nvm_namespace.
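As a minimal sketch of the expected calling convention (not part of this
patch: the wrapper function and the /dev/pmem0 path below are hypothetical,
only bch_register_namespace() comes from this series):

	#include <linux/err.h>
	#include <linux/printk.h>

	#include "nvm-pages.h"

	/* Hypothetical caller: attach one NVDIMM namespace to the nvm set. */
	static int example_attach_nvm_namespace(void)
	{
		struct bch_nvm_namespace *ns;

		/* "/dev/pmem0" is only an example device path. */
		ns = bch_register_namespace("/dev/pmem0");
		if (IS_ERR_OR_NULL(ns))
			return ns ? PTR_ERR(ns) : -EOPNOTSUPP;

		pr_info("registered nvm namespace: %llu pages of size %u\n",
			ns->pages_total, ns->page_size);
		return 0;
	}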
Reported-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Jianpeng Ma <jianpeng.ma@intel.com>
Co-developed-by: Qiaowei Ren <qiaowei.ren@intel.com>
Signed-off-by: Qiaowei Ren <qiaowei.ren@intel.com>
Signed-off-by: Colin Ian King
---
 drivers/md/bcache/Kconfig     |   9 ++
 drivers/md/bcache/Makefile    |   1 +
 drivers/md/bcache/nvm-pages.c | 291 ++++++++++++++++++++++++++++++++++
 drivers/md/bcache/nvm-pages.h |  74 +++++++++
 drivers/md/bcache/super.c     |   3 +
 5 files changed, 378 insertions(+)
 create mode 100644 drivers/md/bcache/nvm-pages.c
 create mode 100644 drivers/md/bcache/nvm-pages.h

diff --git a/drivers/md/bcache/Kconfig b/drivers/md/bcache/Kconfig
index d1ca4d059c20..b48282bcad52 100644
--- a/drivers/md/bcache/Kconfig
+++ b/drivers/md/bcache/Kconfig
@@ -35,3 +35,12 @@ config BCACHE_ASYNC_REGISTRATION
 	device path into this file will returns immediately and the real
 	registration work is handled in kernel work queue in asynchronous
 	way.
+
+config BCACHE_NVM_PAGES
+	bool "NVDIMM support for bcache (EXPERIMENTAL)"
+	depends on BCACHE
+	depends on LIBNVDIMM
+	depends on DAX
+	help
+	  Allocate/release NV-memory pages for bcache and provide allocated pages
+	  for each requestor after system reboot.
diff --git a/drivers/md/bcache/Makefile b/drivers/md/bcache/Makefile
index 5b87e59676b8..2397bb7c7ffd 100644
--- a/drivers/md/bcache/Makefile
+++ b/drivers/md/bcache/Makefile
@@ -5,3 +5,4 @@ obj-$(CONFIG_BCACHE)	+= bcache.o
 bcache-y		:= alloc.o bset.o btree.o closure.o debug.o extents.o\
 	io.o journal.o movinggc.o request.o stats.o super.o sysfs.o trace.o\
 	util.o writeback.o features.o
+bcache-$(CONFIG_BCACHE_NVM_PAGES)	+= nvm-pages.o
diff --git a/drivers/md/bcache/nvm-pages.c b/drivers/md/bcache/nvm-pages.c
new file mode 100644
index 000000000000..34e8a7c8a463
--- /dev/null
+++ b/drivers/md/bcache/nvm-pages.c
@@ -0,0 +1,291 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Nvdimm page-buddy allocator
+ *
+ * Copyright (c) 2021, Intel Corporation.
+ * Copyright (c) 2021, Qiaowei Ren <qiaowei.ren@intel.com>.
+ * Copyright (c) 2021, Jianpeng Ma <jianpeng.ma@intel.com>.
+ */
+
+#include "bcache.h"
+#include "nvm-pages.h"
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+struct bch_nvm_set *only_set;
+
+static void release_nvm_namespaces(struct bch_nvm_set *nvm_set)
+{
+	int i;
+	struct bch_nvm_namespace *ns;
+
+	for (i = 0; i < nvm_set->total_namespaces_nr; i++) {
+		ns = nvm_set->nss[i];
+		if (ns) {
+			blkdev_put(ns->bdev, FMODE_READ|FMODE_WRITE|FMODE_EXEC);
+			kfree(ns);
+		}
+	}
+
+	kfree(nvm_set->nss);
+}
+
+static void release_nvm_set(struct bch_nvm_set *nvm_set)
+{
+	release_nvm_namespaces(nvm_set);
+	kfree(nvm_set);
+}
+
+static int init_owner_info(struct bch_nvm_namespace *ns)
+{
+	struct bch_owner_list_head *owner_list_head = ns->sb->owner_list_head;
+
+	mutex_lock(&only_set->lock);
+	only_set->owner_list_head = owner_list_head;
+	only_set->owner_list_size = owner_list_head->size;
+	only_set->owner_list_used = owner_list_head->used;
+	mutex_unlock(&only_set->lock);
+
+	return 0;
+}
+
+static int attach_nvm_set(struct bch_nvm_namespace *ns)
+{
+	int rc = 0;
+
+	mutex_lock(&only_set->lock);
+	if (only_set->nss) {
+		if (memcmp(ns->sb->set_uuid, only_set->set_uuid, 16)) {
+			pr_info("namespace id doesn't match nvm set\n");
+			rc = -EINVAL;
+			goto unlock;
+		}
+
+		if (only_set->nss[ns->sb->this_namespace_nr]) {
+			pr_info("already has the same position(%d) nvm\n",
+				ns->sb->this_namespace_nr);
+			rc = -EEXIST;
+			goto unlock;
+		}
+	} else {
+		memcpy(only_set->set_uuid, ns->sb->set_uuid, 16);
+		only_set->total_namespaces_nr = ns->sb->total_namespaces_nr;
+		only_set->nss = kcalloc(only_set->total_namespaces_nr,
+				sizeof(struct bch_nvm_namespace *), GFP_KERNEL);
+		if (!only_set->nss) {
+			rc = -ENOMEM;
+			goto unlock;
+		}
+	}
+
+	only_set->nss[ns->sb->this_namespace_nr] = ns;
+
+	/* First attach */
+	if ((unsigned long)ns->sb->owner_list_head == BCH_NVM_PAGES_OWNER_LIST_HEAD_OFFSET) {
+		struct bch_nvm_pages_owner_head *sys_owner_head;
+		struct bch_nvm_pgalloc_recs *sys_pgalloc_recs;
+
+		ns->sb->owner_list_head = ns->kaddr + BCH_NVM_PAGES_OWNER_LIST_HEAD_OFFSET;
+		sys_pgalloc_recs = ns->kaddr + BCH_NVM_PAGES_SYS_RECS_HEAD_OFFSET;
+
+		sys_owner_head = &(ns->sb->owner_list_head->heads[0]);
+		sys_owner_head->recs[0] = sys_pgalloc_recs;
+		ns->sb->csum = csum_set(ns->sb);
+
+		sys_pgalloc_recs->owner = sys_owner_head;
+	} else
+		BUG_ON(ns->sb->owner_list_head !=
+			(ns->kaddr + BCH_NVM_PAGES_OWNER_LIST_HEAD_OFFSET));
+
+unlock:
+	mutex_unlock(&only_set->lock);
+	return rc;
+}
+
+static int read_nvdimm_meta_super(struct block_device *bdev,
+				  struct bch_nvm_namespace *ns)
+{
+	struct page *page;
+	struct bch_nvm_pages_sb *sb;
+	int r = 0;
+	uint64_t expected_csum = 0;
+
+	page = read_cache_page_gfp(bdev->bd_inode->i_mapping,
+			BCH_NVM_PAGES_SB_OFFSET >> PAGE_SHIFT, GFP_KERNEL);
+
+	if (IS_ERR(page))
+		return -EIO;
+
+	sb = (struct bch_nvm_pages_sb *)(page_address(page) +
+					 offset_in_page(BCH_NVM_PAGES_SB_OFFSET));
+	r = -EINVAL;
+	expected_csum = csum_set(sb);
+	if (expected_csum != sb->csum) {
+		pr_info("csum does not match the expected one\n");
+		goto put_page;
+	}
+
+	if (memcmp(sb->magic, bch_nvm_pages_magic, 16)) {
+		pr_info("invalid bch_nvm_pages_magic\n");
+		goto put_page;
+	}
+
+	if (sb->total_namespaces_nr != 1) {
+		pr_info("currently only one nvm device is supported\n");
+		goto put_page;
+	}
+
+	if (sb->sb_offset != BCH_NVM_PAGES_SB_OFFSET) {
+		pr_info("invalid superblock offset\n");
+		goto put_page;
+	}
+
+	r = 0;
+	/* temporary use for DAX API */
+	ns->page_size = sb->page_size;
+	ns->pages_total = sb->pages_total;
+
+put_page:
+	put_page(page);
+	return r;
+}
+
+struct bch_nvm_namespace *bch_register_namespace(const char *dev_path)
+{
+	struct bch_nvm_namespace *ns;
+	int err;
+	pgoff_t pgoff;
+	char buf[BDEVNAME_SIZE];
+	struct block_device *bdev;
+	int id;
+	char *path = NULL;
+
+	path = kstrndup(dev_path, 512, GFP_KERNEL);
+	if (!path) {
+		pr_err("kstrndup failed\n");
+		return ERR_PTR(-ENOMEM);
+	}
+
+	bdev = blkdev_get_by_path(strim(path),
+				  FMODE_READ|FMODE_WRITE|FMODE_EXEC,
+				  only_set);
+	if (IS_ERR(bdev)) {
+		pr_info("get %s error: %ld\n", dev_path, PTR_ERR(bdev));
+		kfree(path);
+		return ERR_PTR(PTR_ERR(bdev));
+	}
+
+	err = -ENOMEM;
+	ns = kzalloc(sizeof(struct bch_nvm_namespace), GFP_KERNEL);
+	if (!ns)
+		goto bdput;
+
+	err = -EIO;
+	if (read_nvdimm_meta_super(bdev, ns)) {
+		pr_info("%s read nvdimm meta super block failed.\n",
+			bdevname(bdev, buf));
+		goto free_ns;
+	}
+
+	err = -EOPNOTSUPP;
+	if (!bdev_dax_supported(bdev, ns->page_size)) {
+		pr_info("%s doesn't support DAX\n", bdevname(bdev, buf));
+		goto free_ns;
+	}
+
+	err = -EINVAL;
+	if (bdev_dax_pgoff(bdev, 0, ns->page_size, &pgoff)) {
+		pr_info("invalid offset of %s\n", bdevname(bdev, buf));
+		goto free_ns;
+	}
+
+	err = -ENOMEM;
+	ns->dax_dev = fs_dax_get_by_bdev(bdev);
+	if (!ns->dax_dev) {
+		pr_info("can't get dax device by %s\n", bdevname(bdev, buf));
+		goto free_ns;
+	}
+
+	err = -EINVAL;
+	id = dax_read_lock();
+	if (dax_direct_access(ns->dax_dev, pgoff, ns->pages_total,
+			      &ns->kaddr, &ns->start_pfn) <= 0) {
+		pr_info("dax_direct_access error\n");
+		dax_read_unlock(id);
+		goto free_ns;
+	}
+	dax_read_unlock(id);
+
+	ns->sb = ns->kaddr + BCH_NVM_PAGES_SB_OFFSET;
+
+	err = -EINVAL;
+	/* Check magic again to make sure the DAX mapping is correct */
+	if (memcmp(ns->sb->magic, bch_nvm_pages_magic, 16)) {
+		pr_info("invalid bch_nvm_pages_magic after DAX mapping\n");
+		goto free_ns;
+	}
+
+	err = attach_nvm_set(ns);
+	if (err < 0)
+		goto free_ns;
+
+	ns->page_size = ns->sb->page_size;
+	ns->pages_offset = ns->sb->pages_offset;
+	ns->pages_total = ns->sb->pages_total;
+	ns->free = 0;
+	ns->bdev = bdev;
+	ns->nvm_set = only_set;
+	mutex_init(&ns->lock);
+
+	if (ns->sb->this_namespace_nr == 0) {
+		pr_info("only the first namespace contains owner info\n");
+		err = init_owner_info(ns);
+		if (err < 0) {
+			pr_info("init_owner_info met error %d\n", err);
+			only_set->nss[ns->sb->this_namespace_nr] = NULL;
+			goto free_ns;
+		}
+	}
+
+	kfree(path);
+	return ns;
+free_ns:
+	kfree(ns);
+bdput:
+	blkdev_put(bdev, FMODE_READ|FMODE_WRITE|FMODE_EXEC);
+	kfree(path);
+	return ERR_PTR(err);
+}
+EXPORT_SYMBOL_GPL(bch_register_namespace);
+
+int __init bch_nvm_init(void)
+{
+	only_set = kzalloc(sizeof(*only_set), GFP_KERNEL);
+	if (!only_set)
+		return -ENOMEM;
+
+	only_set->total_namespaces_nr = 0;
+	only_set->owner_list_head = NULL;
+	only_set->nss = NULL;
+
+	mutex_init(&only_set->lock);
+
+	pr_info("bcache nvm init\n");
+	return 0;
+}
+
+void bch_nvm_exit(void)
+{
+	release_nvm_set(only_set);
+	pr_info("bcache nvm exit\n");
+}
diff --git a/drivers/md/bcache/nvm-pages.h b/drivers/md/bcache/nvm-pages.h
new file mode 100644
index 000000000000..87a0d2c46788
--- /dev/null
+++ b/drivers/md/bcache/nvm-pages.h
@@ -0,0 +1,74 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _BCACHE_NVM_PAGES_H
+#define _BCACHE_NVM_PAGES_H
+
+#ifdef CONFIG_BCACHE_NVM_PAGES
+#include
+#endif /* CONFIG_BCACHE_NVM_PAGES */
+
+/*
+ * Bcache NVDIMM in memory data structures
+ */
+
+/*
+ * The following three structures in memory record which page(s) are allocated
+ * to which owner. After reboot from power failure, they will be initialized
+ * based on the nvm pages superblock in the NVDIMM device.
+ */
+struct bch_nvm_namespace {
+	struct bch_nvm_pages_sb *sb;
+	void *kaddr;
+
+	u8 uuid[16];
+	u64 free;
+	u32 page_size;
+	u64 pages_offset;
+	u64 pages_total;
+	pfn_t start_pfn;
+
+	struct dax_device *dax_dev;
+	struct block_device *bdev;
+	struct bch_nvm_set *nvm_set;
+
+	struct mutex lock;
+};
+
+/*
+ * A set of namespaces. Currently only one set can be supported.
+ */
+struct bch_nvm_set {
+	u8 set_uuid[16];
+	u32 total_namespaces_nr;
+
+	u32 owner_list_size;
+	u32 owner_list_used;
+	struct bch_owner_list_head *owner_list_head;
+
+	struct bch_nvm_namespace **nss;
+
+	struct mutex lock;
+};
+extern struct bch_nvm_set *only_set;
+
+#ifdef CONFIG_BCACHE_NVM_PAGES
+
+struct bch_nvm_namespace *bch_register_namespace(const char *dev_path);
+int bch_nvm_init(void);
+void bch_nvm_exit(void);
+
+#else
+
+static inline struct bch_nvm_namespace *bch_register_namespace(const char *dev_path)
+{
+	return NULL;
+}
+static inline int bch_nvm_init(void)
+{
+	return 0;
+}
+static inline void bch_nvm_exit(void) { }
+
+#endif /* CONFIG_BCACHE_NVM_PAGES */
+
+#endif /* _BCACHE_NVM_PAGES_H */
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 03e1fe4de53d..0674a76d9454 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -14,6 +14,7 @@
 #include "request.h"
 #include "writeback.h"
 #include "features.h"
+#include "nvm-pages.h"
 
 #include
 #include
@@ -2816,6 +2817,7 @@ static void bcache_exit(void)
 {
 	bch_debug_exit();
 	bch_request_exit();
+	bch_nvm_exit();
 	if (bcache_kobj)
 		kobject_put(bcache_kobj);
 	if (bcache_wq)
@@ -2914,6 +2916,7 @@ static int __init bcache_init(void)
 
 	bch_debug_init();
 	closure_debug_init();
+	bch_nvm_init();
 
 	bcache_is_reboot = false;
 
-- 
2.25.1