* [PATCH 0/6] Add persistent memory driver
@ 2015-03-16 21:12 ` Ross Zwisler
  0 siblings, 0 replies; 20+ messages in thread
From: Ross Zwisler @ 2015-03-16 21:12 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ross Zwisler, linux-nvdimm, linux-fsdevel, axboe, hch, riel

PMEM is a modified version of the Block RAM Driver, BRD. The major difference
is that BRD allocates its backing store pages from the page cache, whereas
PMEM uses reserved memory that has been ioremapped.

One benefit of this approach is that there is a direct mapping between
filesystem block numbers and virtual addresses.  In PMEM, filesystem blocks N,
N+1, N+2, etc. will all be adjacent in the virtual memory space. This property
allows us to set up PMD mappings (2 MiB) for DAX.

This patch set builds upon the work that Matthew Wilcox has been doing for
DAX, which has been merged into the v4.0 kernel series.


For more information on PMEM and for some instructions on how to use it, please
check out PMEM's github tree:

https://github.com/01org/prd
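As a rough sketch of the workflow (the memmap reserved-region syntax and the sizes here are assumptions; the github tree above has the authoritative instructions), one might reserve 4 GiB of memory starting at the 12 GiB boundary and split it into two pmem disks:

```shell
# Kernel command line: reserve 4 GiB of RAM starting at 12 GiB so the OS
# leaves it alone (memmap=<size>$<start> marks the range reserved):
#   memmap=4G$12G

# After boot, point the driver at the reservation.  The module parameters
# mirror the CONFIG_BLK_DEV_PMEM_* Kconfig options in this series:
modprobe pmem pmem_start_gb=12 pmem_size_gb=4 pmem_count=2

# Each disk gets total_size / pmem_count, i.e. two 2 GiB devices here.
mkfs.ext4 /dev/pmem0
mount /dev/pmem0 /mnt/pmem
```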

Cc: linux-nvdimm@lists.01.org
Cc: linux-fsdevel@vger.kernel.org
Cc: axboe@kernel.dk
Cc: hch@infradead.org
Cc: riel@redhat.com

Boaz Harrosh (1):
  pmem: Let each device manage private memory region

Ross Zwisler (5):
  pmem: Initial version of persistent memory driver
  pmem: Add support for getgeo()
  pmem: Add support for rw_page()
  pmem: Add support for direct_access()
  pmem: Clean up includes

 MAINTAINERS            |   6 +
 drivers/block/Kconfig  |  41 +++++
 drivers/block/Makefile |   1 +
 drivers/block/pmem.c   | 401 +++++++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 449 insertions(+)
 create mode 100644 drivers/block/pmem.c

-- 
1.9.3


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH 1/6] pmem: Initial version of persistent memory driver
  2015-03-16 21:12 ` Ross Zwisler
@ 2015-03-16 21:12   ` Ross Zwisler
  -1 siblings, 0 replies; 20+ messages in thread
From: Ross Zwisler @ 2015-03-16 21:12 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ross Zwisler, linux-nvdimm, linux-fsdevel, axboe, hch, riel

PMEM is a new driver that presents a reserved range of memory as a
block device.  This is useful for developing with NV-DIMMs, and
can be used with volatile memory as a development platform.

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: linux-nvdimm@lists.01.org
Cc: linux-fsdevel@vger.kernel.org
Cc: axboe@kernel.dk
Cc: hch@infradead.org
Cc: riel@redhat.com
---
 MAINTAINERS            |   6 +
 drivers/block/Kconfig  |  41 ++++++
 drivers/block/Makefile |   1 +
 drivers/block/pmem.c   | 330 +++++++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 378 insertions(+)
 create mode 100644 drivers/block/pmem.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 6239a30..9414b42 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8052,6 +8052,12 @@ S:	Maintained
 F:	Documentation/blockdev/ramdisk.txt
 F:	drivers/block/brd.c
 
+PERSISTENT MEMORY DRIVER
+M:	Ross Zwisler <ross.zwisler@linux.intel.com>
+L:	linux-nvdimm@lists.01.org
+S:	Supported
+F:	drivers/block/pmem.c
+
 RANDOM NUMBER DRIVER
 M:	"Theodore Ts'o" <tytso@mit.edu>
 S:	Maintained
diff --git a/drivers/block/Kconfig b/drivers/block/Kconfig
index 1b8094d..ac52f5a 100644
--- a/drivers/block/Kconfig
+++ b/drivers/block/Kconfig
@@ -404,6 +404,47 @@ config BLK_DEV_RAM_DAX
 	  and will prevent RAM block device backing store memory from being
 	  allocated from highmem (only a problem for highmem systems).
 
+config BLK_DEV_PMEM
+	tristate "Persistent memory block device support"
+	help
+	  Saying Y here will allow you to use a contiguous range of reserved
+	  memory as one or more block devices.  Memory for PMEM should be
+	  reserved using the "memmap" kernel parameter.
+
+	  To compile this driver as a module, choose M here: the module will be
+	  called pmem.
+
+	  Most normal users won't need this functionality, and can thus say N
+	  here.
+
+config BLK_DEV_PMEM_START
+	int "Offset in GiB of where to start claiming space"
+	default "0"
+	depends on BLK_DEV_PMEM
+	help
+	  Starting offset in GiB that PMEM should use when claiming memory.  This
+	  memory needs to be reserved from the OS at boot time using the
+	  "memmap" kernel parameter.
+
+	  If you provide PMEM with volatile memory it will act as a volatile
+	  RAM disk and your data will not be persistent.
+
+config BLK_DEV_PMEM_COUNT
+	int "Default number of PMEM disks"
+	default "4"
+	depends on BLK_DEV_PMEM
+	help
+	  Number of equal sized block devices that PMEM should create.
+
+config BLK_DEV_PMEM_SIZE
+	int "Size in GiB of space to claim"
+	depends on BLK_DEV_PMEM
+	default "0"
+	help
+	  Amount of memory in GiB that PMEM should use when creating block
+	  devices.  This memory needs to be reserved from the OS at
+	  boot time using the "memmap" kernel parameter.
+
 config CDROM_PKTCDVD
 	tristate "Packet writing on CD/DVD media"
 	depends on !UML
diff --git a/drivers/block/Makefile b/drivers/block/Makefile
index 02b688d..9cc6c18 100644
--- a/drivers/block/Makefile
+++ b/drivers/block/Makefile
@@ -14,6 +14,7 @@ obj-$(CONFIG_PS3_VRAM)		+= ps3vram.o
 obj-$(CONFIG_ATARI_FLOPPY)	+= ataflop.o
 obj-$(CONFIG_AMIGA_Z2RAM)	+= z2ram.o
 obj-$(CONFIG_BLK_DEV_RAM)	+= brd.o
+obj-$(CONFIG_BLK_DEV_PMEM)	+= pmem.o
 obj-$(CONFIG_BLK_DEV_LOOP)	+= loop.o
 obj-$(CONFIG_BLK_CPQ_DA)	+= cpqarray.o
 obj-$(CONFIG_BLK_CPQ_CISS_DA)  += cciss.o
diff --git a/drivers/block/pmem.c b/drivers/block/pmem.c
new file mode 100644
index 0000000..d366b9b
--- /dev/null
+++ b/drivers/block/pmem.c
@@ -0,0 +1,330 @@
+/*
+ * Persistent Memory Driver
+ * Copyright (c) 2014, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * This driver is heavily based on drivers/block/brd.c.
+ * Copyright (C) 2007 Nick Piggin
+ * Copyright (C) 2007 Novell Inc.
+ */
+
+#include <linux/bio.h>
+#include <linux/blkdev.h>
+#include <linux/fs.h>
+#include <linux/hdreg.h>
+#include <linux/highmem.h>
+#include <linux/init.h>
+#include <linux/major.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+
+#define SECTOR_SHIFT		9
+#define PAGE_SECTORS_SHIFT	(PAGE_SHIFT - SECTOR_SHIFT)
+#define PAGE_SECTORS		(1 << PAGE_SECTORS_SHIFT)
+
+/*
+ * driver-wide physical address and total_size - one single, contiguous memory
+ * region that we divide up in to same-sized devices
+ */
+phys_addr_t	phys_addr;
+void		*virt_addr;
+size_t		total_size;
+
+struct pmem_device {
+	struct request_queue	*pmem_queue;
+	struct gendisk		*pmem_disk;
+	struct list_head	pmem_list;
+
+	phys_addr_t		phys_addr;
+	void			*virt_addr;
+	size_t			size;
+};
+
+/*
+ * direct translation from (pmem,sector) => void*
+ * We do not require that sector be page aligned.
+ * The return value will point to the beginning of the page containing the
+ * given sector, not to the sector itself.
+ */
+static void *pmem_lookup_pg_addr(struct pmem_device *pmem, sector_t sector)
+{
+	size_t page_offset = sector >> PAGE_SECTORS_SHIFT;
+	size_t offset = page_offset << PAGE_SHIFT;
+
+	BUG_ON(offset >= pmem->size);
+	return pmem->virt_addr + offset;
+}
+
+/*
+ * sector is not required to be page aligned.
+ * n is at most a single page, but could be less.
+ */
+static void copy_to_pmem(struct pmem_device *pmem, const void *src,
+			sector_t sector, size_t n)
+{
+	void *dst;
+	unsigned int offset = (sector & (PAGE_SECTORS - 1)) << SECTOR_SHIFT;
+	size_t copy;
+
+	BUG_ON(n > PAGE_SIZE);
+
+	copy = min_t(size_t, n, PAGE_SIZE - offset);
+	dst = pmem_lookup_pg_addr(pmem, sector);
+	memcpy(dst + offset, src, copy);
+
+	if (copy < n) {
+		src += copy;
+		sector += copy >> SECTOR_SHIFT;
+		copy = n - copy;
+		dst = pmem_lookup_pg_addr(pmem, sector);
+		memcpy(dst, src, copy);
+	}
+}
+
+/*
+ * sector is not required to be page aligned.
+ * n is at most a single page, but could be less.
+ */
+static void copy_from_pmem(void *dst, struct pmem_device *pmem,
+			  sector_t sector, size_t n)
+{
+	void *src;
+	unsigned int offset = (sector & (PAGE_SECTORS - 1)) << SECTOR_SHIFT;
+	size_t copy;
+
+	BUG_ON(n > PAGE_SIZE);
+
+	copy = min_t(size_t, n, PAGE_SIZE - offset);
+	src = pmem_lookup_pg_addr(pmem, sector);
+
+	memcpy(dst, src + offset, copy);
+
+	if (copy < n) {
+		dst += copy;
+		sector += copy >> SECTOR_SHIFT;
+		copy = n - copy;
+		src = pmem_lookup_pg_addr(pmem, sector);
+		memcpy(dst, src, copy);
+	}
+}
+
+static void pmem_do_bvec(struct pmem_device *pmem, struct page *page,
+			unsigned int len, unsigned int off, int rw,
+			sector_t sector)
+{
+	void *mem = kmap_atomic(page);
+
+	if (rw == READ) {
+		copy_from_pmem(mem + off, pmem, sector, len);
+		flush_dcache_page(page);
+	} else {
+		/*
+		 * FIXME: Need more involved flushing to ensure that writes to
+		 * NVDIMMs are actually durable before returning.
+		 */
+		flush_dcache_page(page);
+		copy_to_pmem(pmem, mem + off, sector, len);
+	}
+
+	kunmap_atomic(mem);
+}
+
+static void pmem_make_request(struct request_queue *q, struct bio *bio)
+{
+	struct block_device *bdev = bio->bi_bdev;
+	struct pmem_device *pmem = bdev->bd_disk->private_data;
+	int rw;
+	struct bio_vec bvec;
+	sector_t sector;
+	struct bvec_iter iter;
+	int err = 0;
+
+	sector = bio->bi_iter.bi_sector;
+	if (bio_end_sector(bio) > get_capacity(bdev->bd_disk)) {
+		err = -EIO;
+		goto out;
+	}
+
+	BUG_ON(bio->bi_rw & REQ_DISCARD);
+
+	rw = bio_rw(bio);
+	if (rw == READA)
+		rw = READ;
+
+	bio_for_each_segment(bvec, bio, iter) {
+		unsigned int len = bvec.bv_len;
+
+		BUG_ON(len > PAGE_SIZE);
+		pmem_do_bvec(pmem, bvec.bv_page, len,
+			    bvec.bv_offset, rw, sector);
+		sector += len >> SECTOR_SHIFT;
+	}
+
+out:
+	bio_endio(bio, err);
+}
+
+static const struct block_device_operations pmem_fops = {
+	.owner =		THIS_MODULE,
+};
+
+/* Kernel module stuff */
+static int pmem_start_gb = CONFIG_BLK_DEV_PMEM_START;
+module_param(pmem_start_gb, int, S_IRUGO);
+MODULE_PARM_DESC(pmem_start_gb, "Offset in GB of where to start claiming space");
+
+static int pmem_size_gb = CONFIG_BLK_DEV_PMEM_SIZE;
+module_param(pmem_size_gb,  int, S_IRUGO);
+MODULE_PARM_DESC(pmem_size_gb,  "Total size in GB of space to claim for all disks");
+
+static int pmem_count = CONFIG_BLK_DEV_PMEM_COUNT;
+module_param(pmem_count, int, S_IRUGO);
+MODULE_PARM_DESC(pmem_count, "Number of pmem devices to evenly split allocated space");
+
+static LIST_HEAD(pmem_devices);
+static int pmem_major;
+
+/* FIXME: move phys_addr, virt_addr, size calls up to caller */
+static struct pmem_device *pmem_alloc(int i)
+{
+	struct pmem_device *pmem;
+	struct gendisk *disk;
+	size_t disk_size = total_size / pmem_count;
+	size_t disk_sectors = disk_size / 512;
+
+	pmem = kzalloc(sizeof(*pmem), GFP_KERNEL);
+	if (!pmem)
+		goto out;
+
+	pmem->phys_addr = phys_addr + i * disk_size;
+	pmem->virt_addr = virt_addr + i * disk_size;
+	pmem->size = disk_size;
+
+	pmem->pmem_queue = blk_alloc_queue(GFP_KERNEL);
+	if (!pmem->pmem_queue)
+		goto out_free_dev;
+
+	blk_queue_make_request(pmem->pmem_queue, pmem_make_request);
+	blk_queue_max_hw_sectors(pmem->pmem_queue, 1024);
+	blk_queue_bounce_limit(pmem->pmem_queue, BLK_BOUNCE_ANY);
+
+	disk = pmem->pmem_disk = alloc_disk(0);
+	if (!disk)
+		goto out_free_queue;
+	disk->major		= pmem_major;
+	disk->first_minor	= 0;
+	disk->fops		= &pmem_fops;
+	disk->private_data	= pmem;
+	disk->queue		= pmem->pmem_queue;
+	disk->flags		= GENHD_FL_EXT_DEVT;
+	sprintf(disk->disk_name, "pmem%d", i);
+	set_capacity(disk, disk_sectors);
+
+	return pmem;
+
+out_free_queue:
+	blk_cleanup_queue(pmem->pmem_queue);
+out_free_dev:
+	kfree(pmem);
+out:
+	return NULL;
+}
+
+static void pmem_free(struct pmem_device *pmem)
+{
+	put_disk(pmem->pmem_disk);
+	blk_cleanup_queue(pmem->pmem_queue);
+	kfree(pmem);
+}
+
+static void pmem_del_one(struct pmem_device *pmem)
+{
+	list_del(&pmem->pmem_list);
+	del_gendisk(pmem->pmem_disk);
+	pmem_free(pmem);
+}
+
+static int __init pmem_init(void)
+{
+	int result, i;
+	struct resource *res_mem;
+	struct pmem_device *pmem, *next;
+
+	phys_addr  = (phys_addr_t) pmem_start_gb * 1024 * 1024 * 1024;
+	total_size = (size_t)	   pmem_size_gb  * 1024 * 1024 * 1024;
+
+	res_mem = request_mem_region_exclusive(phys_addr, total_size, "pmem");
+	if (!res_mem)
+		return -ENOMEM;
+
+	virt_addr = ioremap_cache(phys_addr, total_size);
+	if (!virt_addr) {
+		result = -ENOMEM;
+		goto out_release;
+	}
+
+	result = register_blkdev(0, "pmem");
+	if (result < 0) {
+		result = -EIO;
+		goto out_unmap;
+	} else
+		pmem_major = result;
+
+	for (i = 0; i < pmem_count; i++) {
+		pmem = pmem_alloc(i);
+		if (!pmem) {
+			result = -ENOMEM;
+			goto out_free;
+		}
+		list_add_tail(&pmem->pmem_list, &pmem_devices);
+	}
+
+	list_for_each_entry(pmem, &pmem_devices, pmem_list)
+		add_disk(pmem->pmem_disk);
+
+	pr_info("pmem: module loaded\n");
+	return 0;
+
+out_free:
+	list_for_each_entry_safe(pmem, next, &pmem_devices, pmem_list) {
+		list_del(&pmem->pmem_list);
+		pmem_free(pmem);
+	}
+	unregister_blkdev(pmem_major, "pmem");
+
+out_unmap:
+	iounmap(virt_addr);
+
+out_release:
+	release_mem_region(phys_addr, total_size);
+	return result;
+}
+
+static void __exit pmem_exit(void)
+{
+	struct pmem_device *pmem, *next;
+
+	list_for_each_entry_safe(pmem, next, &pmem_devices, pmem_list)
+		pmem_del_one(pmem);
+
+	unregister_blkdev(pmem_major, "pmem");
+	iounmap(virt_addr);
+	release_mem_region(phys_addr, total_size);
+
+	pr_info("pmem: module unloaded\n");
+}
+
+MODULE_AUTHOR("Ross Zwisler <ross.zwisler@linux.intel.com>");
+MODULE_LICENSE("GPL");
+module_init(pmem_init);
+module_exit(pmem_exit);
-- 
1.9.3


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 2/6] pmem: Add support for getgeo()
  2015-03-16 21:12 ` Ross Zwisler
@ 2015-03-16 21:12   ` Ross Zwisler
  -1 siblings, 0 replies; 20+ messages in thread
From: Ross Zwisler @ 2015-03-16 21:12 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ross Zwisler, linux-nvdimm, linux-fsdevel, axboe, hch, riel

Some programs require that the HDIO_GETGEO ioctl work, which requires that we
implement getgeo().  Based on the work done to the NVMe driver in this commit:

commit 4cc09e2dc4cb ("NVMe: Add getgeo to block ops")

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: linux-nvdimm@lists.01.org
Cc: linux-fsdevel@vger.kernel.org
Cc: axboe@kernel.dk
Cc: hch@infradead.org
Cc: riel@redhat.com
---
 drivers/block/pmem.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/block/pmem.c b/drivers/block/pmem.c
index d366b9b..60bbe0d 100644
--- a/drivers/block/pmem.c
+++ b/drivers/block/pmem.c
@@ -50,6 +50,15 @@ struct pmem_device {
 	size_t			size;
 };
 
+static int pmem_getgeo(struct block_device *bd, struct hd_geometry *geo)
+{
+	/* some standard values */
+	geo->heads = 1 << 6;
+	geo->sectors = 1 << 5;
+	geo->cylinders = get_capacity(bd->bd_disk) >> 11;
+	return 0;
+}
+
 /*
  * direct translation from (pmem,sector) => void*
  * We do not require that sector be page aligned.
@@ -176,6 +185,7 @@ out:
 
 static const struct block_device_operations pmem_fops = {
 	.owner =		THIS_MODULE,
+	.getgeo =		pmem_getgeo,
 };
 
 /* Kernel module stuff */
-- 
1.9.3


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 3/6] pmem: Add support for rw_page()
  2015-03-16 21:12 ` Ross Zwisler
@ 2015-03-16 21:12   ` Ross Zwisler
  -1 siblings, 0 replies; 20+ messages in thread
From: Ross Zwisler @ 2015-03-16 21:12 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ross Zwisler, linux-nvdimm, linux-fsdevel, axboe, hch, riel

Based on commit a72132c31d58 ("brd: add support for rw_page()")

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: linux-nvdimm@lists.01.org
Cc: linux-fsdevel@vger.kernel.org
Cc: axboe@kernel.dk
Cc: hch@infradead.org
Cc: riel@redhat.com
---
 drivers/block/pmem.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/drivers/block/pmem.c b/drivers/block/pmem.c
index 60bbe0d..0be3669 100644
--- a/drivers/block/pmem.c
+++ b/drivers/block/pmem.c
@@ -183,8 +183,19 @@ out:
 	bio_endio(bio, err);
 }
 
+static int pmem_rw_page(struct block_device *bdev, sector_t sector,
+		       struct page *page, int rw)
+{
+	struct pmem_device *pmem = bdev->bd_disk->private_data;
+
+	pmem_do_bvec(pmem, page, PAGE_CACHE_SIZE, 0, rw, sector);
+	page_endio(page, rw & WRITE, 0);
+	return 0;
+}
+
 static const struct block_device_operations pmem_fops = {
 	.owner =		THIS_MODULE,
+	.rw_page =		pmem_rw_page,
 	.getgeo =		pmem_getgeo,
 };
 
-- 
1.9.3


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 4/6] pmem: Add support for direct_access()
  2015-03-16 21:12 ` Ross Zwisler
@ 2015-03-16 21:12   ` Ross Zwisler
  -1 siblings, 0 replies; 20+ messages in thread
From: Ross Zwisler @ 2015-03-16 21:12 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ross Zwisler, linux-nvdimm, linux-fsdevel, axboe, hch, riel

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: linux-nvdimm@lists.01.org
Cc: linux-fsdevel@vger.kernel.org
Cc: axboe@kernel.dk
Cc: hch@infradead.org
Cc: riel@redhat.com
---
 drivers/block/pmem.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/drivers/block/pmem.c b/drivers/block/pmem.c
index 0be3669..d63bc96 100644
--- a/drivers/block/pmem.c
+++ b/drivers/block/pmem.c
@@ -74,6 +74,15 @@ static void *pmem_lookup_pg_addr(struct pmem_device *pmem, sector_t sector)
 	return pmem->virt_addr + offset;
 }
 
+/* sector must be page aligned */
+static unsigned long pmem_lookup_pfn(struct pmem_device *pmem, sector_t sector)
+{
+	size_t page_offset = sector >> PAGE_SECTORS_SHIFT;
+
+	BUG_ON(sector & (PAGE_SECTORS - 1));
+	return (pmem->phys_addr >> PAGE_SHIFT) + page_offset;
+}
+
 /*
  * sector is not required to be page aligned.
  * n is at most a single page, but could be less.
@@ -193,9 +202,24 @@ static int pmem_rw_page(struct block_device *bdev, sector_t sector,
 	return 0;
 }
 
+static long pmem_direct_access(struct block_device *bdev, sector_t sector,
+			      void **kaddr, unsigned long *pfn, long size)
+{
+	struct pmem_device *pmem = bdev->bd_disk->private_data;
+
+	if (!pmem)
+		return -ENODEV;
+
+	*kaddr = pmem_lookup_pg_addr(pmem, sector);
+	*pfn = pmem_lookup_pfn(pmem, sector);
+
+	return pmem->size - (sector * 512);
+}
+
 static const struct block_device_operations pmem_fops = {
 	.owner =		THIS_MODULE,
 	.rw_page =		pmem_rw_page,
+	.direct_access =	pmem_direct_access,
 	.getgeo =		pmem_getgeo,
 };
 
-- 
1.9.3


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 5/6] pmem: Clean up includes
  2015-03-16 21:12 ` Ross Zwisler
@ 2015-03-16 21:12   ` Ross Zwisler
  -1 siblings, 0 replies; 20+ messages in thread
From: Ross Zwisler @ 2015-03-16 21:12 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ross Zwisler, linux-nvdimm, linux-fsdevel, axboe, hch, riel

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: linux-nvdimm@lists.01.org
Cc: linux-fsdevel@vger.kernel.org
Cc: axboe@kernel.dk
Cc: hch@infradead.org
Cc: riel@redhat.com
---
 drivers/block/pmem.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/block/pmem.c b/drivers/block/pmem.c
index d63bc96..8f39ef4 100644
--- a/drivers/block/pmem.c
+++ b/drivers/block/pmem.c
@@ -16,17 +16,15 @@
  * Copyright (C) 2007 Novell Inc.
  */
 
+#include <asm/cacheflush.h>
 #include <linux/bio.h>
 #include <linux/blkdev.h>
 #include <linux/fs.h>
 #include <linux/hdreg.h>
-#include <linux/highmem.h>
 #include <linux/init.h>
-#include <linux/major.h>
 #include <linux/module.h>
 #include <linux/moduleparam.h>
 #include <linux/slab.h>
-#include <linux/uaccess.h>
 
 #define SECTOR_SHIFT		9
 #define PAGE_SECTORS_SHIFT	(PAGE_SHIFT - SECTOR_SHIFT)
-- 
1.9.3


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 6/6] pmem: Let each device manage private memory region
  2015-03-16 21:12 ` Ross Zwisler
@ 2015-03-16 21:13   ` Ross Zwisler
  -1 siblings, 0 replies; 20+ messages in thread
From: Ross Zwisler @ 2015-03-16 21:13 UTC (permalink / raw)
  To: linux-kernel
  Cc: Boaz Harrosh, Ross Zwisler, linux-nvdimm, linux-fsdevel, axboe,
	hch, riel

From: Boaz Harrosh <boaz@plexistor.com>

This patch removes the global memory information and lets
each pmem device manage its own memory region.

pmem_alloc() now receives phys_addr and disk_size and will
map that region; pmem_free() does the unmapping.

This is so we can support multiple discontinuous memory
regions in the next patch.

Signed-off-by: Boaz Harrosh <boaz@plexistor.com>
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: linux-nvdimm@lists.01.org
Cc: linux-fsdevel@vger.kernel.org
Cc: axboe@kernel.dk
Cc: hch@infradead.org
Cc: riel@redhat.com
---
 drivers/block/pmem.c | 122 +++++++++++++++++++++++++++++++--------------------
 1 file changed, 75 insertions(+), 47 deletions(-)

diff --git a/drivers/block/pmem.c b/drivers/block/pmem.c
index 8f39ef4..1bd9ab0 100644
--- a/drivers/block/pmem.c
+++ b/drivers/block/pmem.c
@@ -30,19 +30,12 @@
 #define PAGE_SECTORS_SHIFT	(PAGE_SHIFT - SECTOR_SHIFT)
 #define PAGE_SECTORS		(1 << PAGE_SECTORS_SHIFT)
 
-/*
- * driver-wide physical address and total_size - one single, contiguous memory
- * region that we divide up in to same-sized devices
- */
-phys_addr_t	phys_addr;
-void		*virt_addr;
-size_t		total_size;
-
 struct pmem_device {
 	struct request_queue	*pmem_queue;
 	struct gendisk		*pmem_disk;
 	struct list_head	pmem_list;
 
+	/* One contiguous memory region per device */
 	phys_addr_t		phys_addr;
 	void			*virt_addr;
 	size_t			size;
@@ -237,33 +230,80 @@ MODULE_PARM_DESC(pmem_count, "Number of pmem devices to evenly split allocated s
 static LIST_HEAD(pmem_devices);
 static int pmem_major;
 
-/* FIXME: move phys_addr, virt_addr, size calls up to caller */
-static struct pmem_device *pmem_alloc(int i)
+/* pmem->phys_addr and pmem->size need to be set.
+ * Will then set virt_addr if successful.
+ */
+int pmem_mapmem(struct pmem_device *pmem)
+{
+	struct resource *res_mem;
+	int err;
+
+	res_mem = request_mem_region_exclusive(pmem->phys_addr, pmem->size,
+					       "pmem");
+	if (!res_mem) {
+		pr_warn("pmem: request_mem_region_exclusive phys=0x%llx size=0x%zx failed\n",
+			   pmem->phys_addr, pmem->size);
+		return -EINVAL;
+	}
+
+	pmem->virt_addr = ioremap_cache(pmem->phys_addr, pmem->size);
+	if (unlikely(!pmem->virt_addr)) {
+		err = -ENXIO;
+		goto out_release;
+	}
+	return 0;
+
+out_release:
+	release_mem_region(pmem->phys_addr, pmem->size);
+	return err;
+}
+
+void pmem_unmapmem(struct pmem_device *pmem)
+{
+	if (unlikely(!pmem->virt_addr))
+		return;
+
+	iounmap(pmem->virt_addr);
+	release_mem_region(pmem->phys_addr, pmem->size);
+	pmem->virt_addr = NULL;
+}
+
+static struct pmem_device *pmem_alloc(phys_addr_t phys_addr, size_t disk_size,
+				      int i)
 {
 	struct pmem_device *pmem;
 	struct gendisk *disk;
-	size_t disk_size = total_size / pmem_count;
-	size_t disk_sectors = disk_size / 512;
+	int err;
 
 	pmem = kzalloc(sizeof(*pmem), GFP_KERNEL);
-	if (!pmem)
+	if (unlikely(!pmem)) {
+		err = -ENOMEM;
 		goto out;
+	}
 
-	pmem->phys_addr = phys_addr + i * disk_size;
-	pmem->virt_addr = virt_addr + i * disk_size;
+	pmem->phys_addr = phys_addr;
 	pmem->size = disk_size;
 
-	pmem->pmem_queue = blk_alloc_queue(GFP_KERNEL);
-	if (!pmem->pmem_queue)
+	err = pmem_mapmem(pmem);
+	if (unlikely(err))
 		goto out_free_dev;
 
+	pmem->pmem_queue = blk_alloc_queue(GFP_KERNEL);
+	if (unlikely(!pmem->pmem_queue)) {
+		err = -ENOMEM;
+		goto out_unmap;
+	}
+
 	blk_queue_make_request(pmem->pmem_queue, pmem_make_request);
 	blk_queue_max_hw_sectors(pmem->pmem_queue, 1024);
 	blk_queue_bounce_limit(pmem->pmem_queue, BLK_BOUNCE_ANY);
 
-	disk = pmem->pmem_disk = alloc_disk(0);
-	if (!disk)
+	disk = alloc_disk(0);
+	if (unlikely(!disk)) {
+		err = -ENOMEM;
 		goto out_free_queue;
+	}
+
 	disk->major		= pmem_major;
 	disk->first_minor	= 0;
 	disk->fops		= &pmem_fops;
@@ -271,22 +311,26 @@ static struct pmem_device *pmem_alloc(int i)
 	disk->queue		= pmem->pmem_queue;
 	disk->flags		= GENHD_FL_EXT_DEVT;
 	sprintf(disk->disk_name, "pmem%d", i);
-	set_capacity(disk, disk_sectors);
+	set_capacity(disk, disk_size >> SECTOR_SHIFT);
+	pmem->pmem_disk = disk;
 
 	return pmem;
 
 out_free_queue:
 	blk_cleanup_queue(pmem->pmem_queue);
+out_unmap:
+	pmem_unmapmem(pmem);
 out_free_dev:
 	kfree(pmem);
 out:
-	return NULL;
+	return ERR_PTR(err);
 }
 
 static void pmem_free(struct pmem_device *pmem)
 {
 	put_disk(pmem->pmem_disk);
 	blk_cleanup_queue(pmem->pmem_queue);
+	pmem_unmapmem(pmem);
 	kfree(pmem);
 }
 
@@ -300,36 +344,28 @@ static void pmem_del_one(struct pmem_device *pmem)
 static int __init pmem_init(void)
 {
 	int result, i;
-	struct resource *res_mem;
 	struct pmem_device *pmem, *next;
+	phys_addr_t phys_addr;
+	size_t total_size, disk_size;
 
 	phys_addr  = (phys_addr_t) pmem_start_gb * 1024 * 1024 * 1024;
 	total_size = (size_t)	   pmem_size_gb  * 1024 * 1024 * 1024;
-
-	res_mem = request_mem_region_exclusive(phys_addr, total_size, "pmem");
-	if (!res_mem)
-		return -ENOMEM;
-
-	virt_addr = ioremap_cache(phys_addr, total_size);
-	if (!virt_addr) {
-		result = -ENOMEM;
-		goto out_release;
-	}
+	disk_size = total_size / pmem_count;
 
 	result = register_blkdev(0, "pmem");
-	if (result < 0) {
-		result = -EIO;
-		goto out_unmap;
-	} else
+	if (result < 0)
+		return -EIO;
+	else
 		pmem_major = result;
 
 	for (i = 0; i < pmem_count; i++) {
-		pmem = pmem_alloc(i);
-		if (!pmem) {
-			result = -ENOMEM;
+		pmem = pmem_alloc(phys_addr, disk_size, i);
+		if (IS_ERR(pmem)) {
+			result = PTR_ERR(pmem);
 			goto out_free;
 		}
 		list_add_tail(&pmem->pmem_list, &pmem_devices);
+		phys_addr += disk_size;
 	}
 
 	list_for_each_entry(pmem, &pmem_devices, pmem_list)
@@ -345,11 +381,6 @@ out_free:
 	}
 	unregister_blkdev(pmem_major, "pmem");
 
-out_unmap:
-	iounmap(virt_addr);
-
-out_release:
-	release_mem_region(phys_addr, total_size);
 	return result;
 }
 
@@ -361,9 +392,6 @@ static void __exit pmem_exit(void)
 		pmem_del_one(pmem);
 
 	unregister_blkdev(pmem_major, "pmem");
-	iounmap(virt_addr);
-	release_mem_region(phys_addr, total_size);
-
 	pr_info("pmem: module unloaded\n");
 }
 
-- 
1.9.3


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/6] pmem: Initial version of persistent memory driver
  2015-03-16 21:12   ` Ross Zwisler
@ 2015-03-17 18:53     ` Paul Bolle
  -1 siblings, 0 replies; 20+ messages in thread
From: Paul Bolle @ 2015-03-17 18:53 UTC (permalink / raw)
  To: Ross Zwisler; +Cc: linux-kernel, linux-nvdimm, linux-fsdevel, axboe, hch, riel

Just a license nit.

On Mon, 2015-03-16 at 15:12 -0600, Ross Zwisler wrote:
> --- /dev/null
> +++ b/drivers/block/pmem.c

> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.

This states the license is GPL v2.

> +MODULE_LICENSE("GPL");

And using
    MODULE_LICENSE("GPL v2");

would match that statement.


Paul Bolle


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 6/6] pmem: Let each device manage private memory region
  2015-03-16 21:13   ` Ross Zwisler
@ 2015-03-18 10:57     ` Boaz Harrosh
  -1 siblings, 0 replies; 20+ messages in thread
From: Boaz Harrosh @ 2015-03-18 10:57 UTC (permalink / raw)
  To: Ross Zwisler, linux-kernel
  Cc: Boaz Harrosh, linux-nvdimm, linux-fsdevel, axboe, hch, riel

On 03/16/2015 11:13 PM, Ross Zwisler wrote:
> From: Boaz Harrosh <boaz@plexistor.com>
> 
> This patch removes any global memory information. And lets
> each pmem-device manage it's own memory region.
> 
> pmem_alloc() Now receives phys_addr and disk_size and will
> map that region, also pmem_free will do the unmaping.
> 
> This is so we can support multiple discontinuous memory regions
> in the next patch
> 
> Signed-off-by: Boaz Harrosh <boaz@plexistor.com>

Yes, I wrote this! No! Please remove my Signed-off-by.

This is completely the wrong API, and completely the
wrong convoluted, crap code from the beginning.

Even after my half-baked version, done as a review enhancer,
the better code still needs to come later.

Sad
Boaz

> Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
> Cc: linux-nvdimm@lists.01.org
> Cc: linux-fsdevel@vger.kernel.org
> Cc: axboe@kernel.dk
> Cc: hch@infradead.org
> Cc: riel@redhat.com
> ---
>  drivers/block/pmem.c | 122 +++++++++++++++++++++++++++++++--------------------
>  1 file changed, 75 insertions(+), 47 deletions(-)
> 
> diff --git a/drivers/block/pmem.c b/drivers/block/pmem.c
> index 8f39ef4..1bd9ab0 100644
> --- a/drivers/block/pmem.c
> +++ b/drivers/block/pmem.c
> @@ -30,19 +30,12 @@
>  #define PAGE_SECTORS_SHIFT	(PAGE_SHIFT - SECTOR_SHIFT)
>  #define PAGE_SECTORS		(1 << PAGE_SECTORS_SHIFT)
>  
> -/*
> - * driver-wide physical address and total_size - one single, contiguous memory
> - * region that we divide up in to same-sized devices
> - */
> -phys_addr_t	phys_addr;
> -void		*virt_addr;
> -size_t		total_size;
> -
>  struct pmem_device {
>  	struct request_queue	*pmem_queue;
>  	struct gendisk		*pmem_disk;
>  	struct list_head	pmem_list;
>  
> +	/* One contiguous memory region per device */
>  	phys_addr_t		phys_addr;
>  	void			*virt_addr;
>  	size_t			size;
> @@ -237,33 +230,80 @@ MODULE_PARM_DESC(pmem_count, "Number of pmem devices to evenly split allocated s
>  static LIST_HEAD(pmem_devices);
>  static int pmem_major;
>  
> -/* FIXME: move phys_addr, virt_addr, size calls up to caller */
> -static struct pmem_device *pmem_alloc(int i)
> +/* pmem->phys_addr and pmem->size need to be set.
> + * Will then set virt_addr if successful.
> + */
> +int pmem_mapmem(struct pmem_device *pmem)
> +{
> +	struct resource *res_mem;
> +	int err;
> +
> +	res_mem = request_mem_region_exclusive(pmem->phys_addr, pmem->size,
> +					       "pmem");
> +	if (!res_mem) {
> +		pr_warn("pmem: request_mem_region_exclusive phys=0x%llx size=0x%zx failed\n",
> +			   pmem->phys_addr, pmem->size);
> +		return -EINVAL;
> +	}
> +
> +	pmem->virt_addr = ioremap_cache(pmem->phys_addr, pmem->size);
> +	if (unlikely(!pmem->virt_addr)) {
> +		err = -ENXIO;
> +		goto out_release;
> +	}
> +	return 0;
> +
> +out_release:
> +	release_mem_region(pmem->phys_addr, pmem->size);
> +	return err;
> +}
> +
> +void pmem_unmapmem(struct pmem_device *pmem)
> +{
> +	if (unlikely(!pmem->virt_addr))
> +		return;
> +
> +	iounmap(pmem->virt_addr);
> +	release_mem_region(pmem->phys_addr, pmem->size);
> +	pmem->virt_addr = NULL;
> +}
> +
> +static struct pmem_device *pmem_alloc(phys_addr_t phys_addr, size_t disk_size,
> +				      int i)
>  {
>  	struct pmem_device *pmem;
>  	struct gendisk *disk;
> -	size_t disk_size = total_size / pmem_count;
> -	size_t disk_sectors = disk_size / 512;
> +	int err;
>  
>  	pmem = kzalloc(sizeof(*pmem), GFP_KERNEL);
> -	if (!pmem)
> +	if (unlikely(!pmem)) {
> +		err = -ENOMEM;
>  		goto out;
> +	}
>  
> -	pmem->phys_addr = phys_addr + i * disk_size;
> -	pmem->virt_addr = virt_addr + i * disk_size;
> +	pmem->phys_addr = phys_addr;
>  	pmem->size = disk_size;
>  
> -	pmem->pmem_queue = blk_alloc_queue(GFP_KERNEL);
> -	if (!pmem->pmem_queue)
> +	err = pmem_mapmem(pmem);
> +	if (unlikely(err))
>  		goto out_free_dev;
>  
> +	pmem->pmem_queue = blk_alloc_queue(GFP_KERNEL);
> +	if (unlikely(!pmem->pmem_queue)) {
> +		err = -ENOMEM;
> +		goto out_unmap;
> +	}
> +
>  	blk_queue_make_request(pmem->pmem_queue, pmem_make_request);
>  	blk_queue_max_hw_sectors(pmem->pmem_queue, 1024);
>  	blk_queue_bounce_limit(pmem->pmem_queue, BLK_BOUNCE_ANY);
>  
> -	disk = pmem->pmem_disk = alloc_disk(0);
> -	if (!disk)
> +	disk = alloc_disk(0);
> +	if (unlikely(!disk)) {
> +		err = -ENOMEM;
>  		goto out_free_queue;
> +	}
> +
>  	disk->major		= pmem_major;
>  	disk->first_minor	= 0;
>  	disk->fops		= &pmem_fops;
> @@ -271,22 +311,26 @@ static struct pmem_device *pmem_alloc(int i)
>  	disk->queue		= pmem->pmem_queue;
>  	disk->flags		= GENHD_FL_EXT_DEVT;
>  	sprintf(disk->disk_name, "pmem%d", i);
> -	set_capacity(disk, disk_sectors);
> +	set_capacity(disk, disk_size >> SECTOR_SHIFT);
> +	pmem->pmem_disk = disk;
>  
>  	return pmem;
>  
>  out_free_queue:
>  	blk_cleanup_queue(pmem->pmem_queue);
> +out_unmap:
> +	pmem_unmapmem(pmem);
>  out_free_dev:
>  	kfree(pmem);
>  out:
> -	return NULL;
> +	return ERR_PTR(err);
>  }
>  
>  static void pmem_free(struct pmem_device *pmem)
>  {
>  	put_disk(pmem->pmem_disk);
>  	blk_cleanup_queue(pmem->pmem_queue);
> +	pmem_unmapmem(pmem);
>  	kfree(pmem);
>  }
>  
> @@ -300,36 +344,28 @@ static void pmem_del_one(struct pmem_device *pmem)
>  static int __init pmem_init(void)
>  {
>  	int result, i;
> -	struct resource *res_mem;
>  	struct pmem_device *pmem, *next;
> +	phys_addr_t phys_addr;
> +	size_t total_size, disk_size;
>  
>  	phys_addr  = (phys_addr_t) pmem_start_gb * 1024 * 1024 * 1024;
>  	total_size = (size_t)	   pmem_size_gb  * 1024 * 1024 * 1024;
> -
> -	res_mem = request_mem_region_exclusive(phys_addr, total_size, "pmem");
> -	if (!res_mem)
> -		return -ENOMEM;
> -
> -	virt_addr = ioremap_cache(phys_addr, total_size);
> -	if (!virt_addr) {
> -		result = -ENOMEM;
> -		goto out_release;
> -	}
> +	disk_size = total_size / pmem_count;
>  
>  	result = register_blkdev(0, "pmem");
> -	if (result < 0) {
> -		result = -EIO;
> -		goto out_unmap;
> -	} else
> +	if (result < 0)
> +		return -EIO;
> +	else
>  		pmem_major = result;
>  
>  	for (i = 0; i < pmem_count; i++) {
> -		pmem = pmem_alloc(i);
> -		if (!pmem) {
> -			result = -ENOMEM;
> +		pmem = pmem_alloc(phys_addr, disk_size, i);
> +		if (IS_ERR(pmem)) {
> +			result = PTR_ERR(pmem);
>  			goto out_free;
>  		}
>  		list_add_tail(&pmem->pmem_list, &pmem_devices);
> +		phys_addr += disk_size;
>  	}
>  
>  	list_for_each_entry(pmem, &pmem_devices, pmem_list)
> @@ -345,11 +381,6 @@ out_free:
>  	}
>  	unregister_blkdev(pmem_major, "pmem");
>  
> -out_unmap:
> -	iounmap(virt_addr);
> -
> -out_release:
> -	release_mem_region(phys_addr, total_size);
>  	return result;
>  }
>  
> @@ -361,9 +392,6 @@ static void __exit pmem_exit(void)
>  		pmem_del_one(pmem);
>  
>  	unregister_blkdev(pmem_major, "pmem");
> -	iounmap(virt_addr);
> -	release_mem_region(phys_addr, total_size);
> -
>  	pr_info("pmem: module unloaded\n");
>  }
>  
> 


^ permalink raw reply	[flat|nested] 20+ messages in thread

> +				      int i)
>  {
>  	struct pmem_device *pmem;
>  	struct gendisk *disk;
> -	size_t disk_size = total_size / pmem_count;
> -	size_t disk_sectors = disk_size / 512;
> +	int err;
>  
>  	pmem = kzalloc(sizeof(*pmem), GFP_KERNEL);
> -	if (!pmem)
> +	if (unlikely(!pmem)) {
> +		err = -ENOMEM;
>  		goto out;
> +	}
>  
> -	pmem->phys_addr = phys_addr + i * disk_size;
> -	pmem->virt_addr = virt_addr + i * disk_size;
> +	pmem->phys_addr = phys_addr;
>  	pmem->size = disk_size;
>  
> -	pmem->pmem_queue = blk_alloc_queue(GFP_KERNEL);
> -	if (!pmem->pmem_queue)
> +	err = pmem_mapmem(pmem);
> +	if (unlikely(err))
>  		goto out_free_dev;
>  
> +	pmem->pmem_queue = blk_alloc_queue(GFP_KERNEL);
> +	if (unlikely(!pmem->pmem_queue)) {
> +		err = -ENOMEM;
> +		goto out_unmap;
> +	}
> +
>  	blk_queue_make_request(pmem->pmem_queue, pmem_make_request);
>  	blk_queue_max_hw_sectors(pmem->pmem_queue, 1024);
>  	blk_queue_bounce_limit(pmem->pmem_queue, BLK_BOUNCE_ANY);
>  
> -	disk = pmem->pmem_disk = alloc_disk(0);
> -	if (!disk)
> +	disk = alloc_disk(0);
> +	if (unlikely(!disk)) {
> +		err = -ENOMEM;
>  		goto out_free_queue;
> +	}
> +
>  	disk->major		= pmem_major;
>  	disk->first_minor	= 0;
>  	disk->fops		= &pmem_fops;
> @@ -271,22 +311,26 @@ static struct pmem_device *pmem_alloc(int i)
>  	disk->queue		= pmem->pmem_queue;
>  	disk->flags		= GENHD_FL_EXT_DEVT;
>  	sprintf(disk->disk_name, "pmem%d", i);
> -	set_capacity(disk, disk_sectors);
> +	set_capacity(disk, disk_size >> SECTOR_SHIFT);
> +	pmem->pmem_disk = disk;
>  
>  	return pmem;
>  
>  out_free_queue:
>  	blk_cleanup_queue(pmem->pmem_queue);
> +out_unmap:
> +	pmem_unmapmem(pmem);
>  out_free_dev:
>  	kfree(pmem);
>  out:
> -	return NULL;
> +	return ERR_PTR(err);
>  }
>  
>  static void pmem_free(struct pmem_device *pmem)
>  {
>  	put_disk(pmem->pmem_disk);
>  	blk_cleanup_queue(pmem->pmem_queue);
> +	pmem_unmapmem(pmem);
>  	kfree(pmem);
>  }
>  
> @@ -300,36 +344,28 @@ static void pmem_del_one(struct pmem_device *pmem)
>  static int __init pmem_init(void)
>  {
>  	int result, i;
> -	struct resource *res_mem;
>  	struct pmem_device *pmem, *next;
> +	phys_addr_t phys_addr;
> +	size_t total_size, disk_size;
>  
>  	phys_addr  = (phys_addr_t) pmem_start_gb * 1024 * 1024 * 1024;
>  	total_size = (size_t)	   pmem_size_gb  * 1024 * 1024 * 1024;
> -
> -	res_mem = request_mem_region_exclusive(phys_addr, total_size, "pmem");
> -	if (!res_mem)
> -		return -ENOMEM;
> -
> -	virt_addr = ioremap_cache(phys_addr, total_size);
> -	if (!virt_addr) {
> -		result = -ENOMEM;
> -		goto out_release;
> -	}
> +	disk_size = total_size / pmem_count;
>  
>  	result = register_blkdev(0, "pmem");
> -	if (result < 0) {
> -		result = -EIO;
> -		goto out_unmap;
> -	} else
> +	if (result < 0)
> +		return -EIO;
> +	else
>  		pmem_major = result;
>  
>  	for (i = 0; i < pmem_count; i++) {
> -		pmem = pmem_alloc(i);
> -		if (!pmem) {
> -			result = -ENOMEM;
> +		pmem = pmem_alloc(phys_addr, disk_size, i);
> +		if (IS_ERR(pmem)) {
> +			result = PTR_ERR(pmem);
>  			goto out_free;
>  		}
>  		list_add_tail(&pmem->pmem_list, &pmem_devices);
> +		phys_addr += disk_size;
>  	}
>  
>  	list_for_each_entry(pmem, &pmem_devices, pmem_list)
> @@ -345,11 +381,6 @@ out_free:
>  	}
>  	unregister_blkdev(pmem_major, "pmem");
>  
> -out_unmap:
> -	iounmap(virt_addr);
> -
> -out_release:
> -	release_mem_region(phys_addr, total_size);
>  	return result;
>  }
>  
> @@ -361,9 +392,6 @@ static void __exit pmem_exit(void)
>  		pmem_del_one(pmem);
>  
>  	unregister_blkdev(pmem_major, "pmem");
> -	iounmap(virt_addr);
> -	release_mem_region(phys_addr, total_size);
> -
>  	pr_info("pmem: module unloaded\n");
>  }
>  
> 
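The key property the per-device mapping above provides — sector N of a pmem disk sits at a fixed offset from the device's virt_addr, because the whole region is one contiguous ioremap — can be sketched in plain C. This is a simplified userspace model of the address arithmetic only (the struct and function names here are hypothetical; the real driver uses kernel types and ioremap_cache()):

```c
#include <stddef.h>
#include <stdint.h>

#define SECTOR_SHIFT 9 /* 512-byte sectors, as in the driver */

/* Simplified stand-in for struct pmem_device (kernel types omitted). */
struct pmem_device_sketch {
	uint64_t phys_addr; /* base of the reserved physical region */
	uint8_t *virt_addr; /* result of ioremap_cache() in the real driver */
	size_t size;        /* region length in bytes */
};

/*
 * Because the region is contiguous, sector N lives at a fixed offset
 * from virt_addr -- the property the cover letter relies on for DAX
 * PMD mappings. Returns NULL if the sector is out of range.
 */
static void *pmem_sector_to_addr(struct pmem_device_sketch *pmem,
				 uint64_t sector)
{
	size_t offset = (size_t)sector << SECTOR_SHIFT;

	return (offset < pmem->size) ? pmem->virt_addr + offset : NULL;
}
```

Adjacent filesystem blocks therefore map to adjacent virtual addresses, which is what allows 2 MiB PMD mappings when the region is suitably aligned.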


* Re: [Linux-nvdimm] [PATCH 0/6] Add persistent memory driver
  2015-03-16 21:12 ` Ross Zwisler
@ 2015-03-18 11:08   ` Boaz Harrosh
  -1 siblings, 0 replies; 20+ messages in thread
From: Boaz Harrosh @ 2015-03-18 11:08 UTC (permalink / raw)
  To: Ross Zwisler, linux-kernel; +Cc: axboe, riel, linux-nvdimm, hch, linux-fsdevel

On 03/16/2015 11:12 PM, Ross Zwisler wrote:
> PMEM is a modified version of the Block RAM Driver, BRD. The major difference
> is that BRD allocates its backing store pages from the page cache, whereas
> PMEM uses reserved memory that has been ioremapped.
> 
> One benefit of this approach is that there is a direct mapping between
> filesystem block numbers and virtual addresses.  In PMEM, filesystem blocks N,
> N+1, N+2, etc. will all be adjacent in the virtual memory space. This property
> allows us to set up PMD mappings (2 MiB) for DAX.
> 
> This patch set builds upon the work that Matthew Wilcox has been doing for
> DAX, which has been merged into the v4.0 kernel series.
> 
> For more information on PMEM and for some instructions on how to use it, please
> check out PMEM's github tree:
> 
> https://github.com/01org/prd
> 
> Cc: linux-nvdimm@lists.01.org
> Cc: linux-fsdevel@vger.kernel.org
> Cc: axboe@kernel.dk
> Cc: hch@infradead.org
> Cc: riel@redhat.com
> 
> Boaz Harrosh (1):
>   pmem: Let each device manage private memory region
> 
   Not signed-off-by me.

> Ross Zwisler (5):
>   pmem: Initial version of persistent memory driver

This is the wrong code

>   pmem: Add support for getgeo()

We do not need this patch

>   pmem: Add support for rw_page()
>   pmem: Add support for direct_access()
>   pmem: Clean up includes
> 

NACK!

This is the wrong pmem driver, the wrong API, and a bad copy/paste
from brd code.

(And thanks, Ross, for not CCing me. I have lots of mail to read every
day; seriously, this is rude. What am I supposed to feel?)

And very seriously, Ross: what is that joke of a Kconfig and
module-param API? How is it relevant to anything, and how does it get
us closer to what pmem really needs to be, with auto-probe? Is this
your "wait, wait, we have done lots of new work on this"? It did not
change one bit from the original brd copy/paste.

Boaz

>  MAINTAINERS            |   6 +
>  drivers/block/Kconfig  |  41 +++++
>  drivers/block/Makefile |   1 +
>  drivers/block/pmem.c   | 401 +++++++++++++++++++++++++++++++++++++++++++++++++
>  4 files changed, 449 insertions(+)
>  create mode 100644 drivers/block/pmem.c
> 




Thread overview: 20+ messages
2015-03-16 21:12 [PATCH 0/6] Add persistent memory driver Ross Zwisler
2015-03-16 21:12 ` Ross Zwisler
2015-03-16 21:12 ` [PATCH 1/6] pmem: Initial version of " Ross Zwisler
2015-03-16 21:12   ` Ross Zwisler
2015-03-17 18:53   ` Paul Bolle
2015-03-17 18:53     ` Paul Bolle
2015-03-16 21:12 ` [PATCH 2/6] pmem: Add support for getgeo() Ross Zwisler
2015-03-16 21:12   ` Ross Zwisler
2015-03-16 21:12 ` [PATCH 3/6] pmem: Add support for rw_page() Ross Zwisler
2015-03-16 21:12   ` Ross Zwisler
2015-03-16 21:12 ` [PATCH 4/6] pmem: Add support for direct_access() Ross Zwisler
2015-03-16 21:12   ` Ross Zwisler
2015-03-16 21:12 ` [PATCH 5/6] pmem: Clean up includes Ross Zwisler
2015-03-16 21:12   ` Ross Zwisler
2015-03-16 21:13 ` [PATCH 6/6] pmem: Let each device manage private memory region Ross Zwisler
2015-03-16 21:13   ` Ross Zwisler
2015-03-18 10:57   ` Boaz Harrosh
2015-03-18 10:57     ` Boaz Harrosh
2015-03-18 11:08 ` [Linux-nvdimm] [PATCH 0/6] Add persistent memory driver Boaz Harrosh
2015-03-18 11:08   ` Boaz Harrosh
