linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/3][linux-next] ramzswap: bug fixes and some cleanups
@ 2010-01-27 14:24 Nitin Gupta
  2010-01-27 14:24 ` [PATCH 1/3] use lock for 64-bit stats Nitin Gupta
                   ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Nitin Gupta @ 2010-01-27 14:24 UTC (permalink / raw)
  To: Greg KH; +Cc: Pekka Enberg, linux-kernel

The ramzswap driver creates virtual block devices that
can be used only as swap disks. Pages swapped to these
disks are compressed and stored in memory itself.
Project home: http://code.google.com/p/compcache/

These patches apply to linux-next.

Signed-off-by: Nitin Gupta <ngupta@vflare.org>

Nitin Gupta (3):
  use lock for 64-bit stats
  flush block device before reset
  set block size to PAGE_SIZE and some cleanups

 drivers/staging/ramzswap/ramzswap_drv.c   |  144 ++++++++++++++++------------
 drivers/staging/ramzswap/ramzswap_drv.h   |   61 ++++++++++--
 drivers/staging/ramzswap/ramzswap_ioctl.h |    3 +-
 drivers/staging/ramzswap/xvmalloc.c       |    2 +-
 drivers/staging/ramzswap/xvmalloc.h       |    2 +-
 drivers/staging/ramzswap/xvmalloc_int.h   |    2 +-
 6 files changed, 137 insertions(+), 77 deletions(-)


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH 1/3] use lock for 64-bit stats
  2010-01-27 14:24 [PATCH 0/3][linux-next] ramzswap: bug fixes and some cleanups Nitin Gupta
@ 2010-01-27 14:24 ` Nitin Gupta
  2010-01-27 15:10   ` Pekka Enberg
  2010-01-28  2:19   ` [PATCH 1/3][resend] " Nitin Gupta
  2010-01-27 14:24 ` [PATCH 2/3] flush block device before reset Nitin Gupta
  2010-01-27 14:24 ` [PATCH 3/3] set block size to PAGE_SIZE and some cleanups Nitin Gupta
  2 siblings, 2 replies; 11+ messages in thread
From: Nitin Gupta @ 2010-01-27 14:24 UTC (permalink / raw)
  To: Greg KH; +Cc: Pekka Enberg, linux-kernel

64-bit stats corruption was observed when ramzswap was
used on SMP systems. To prevent this, use a separate
spinlock to protect these stats.

Eventually, these will be converted to per-cpu counters
if this driver finds use on large-scale systems and this
locking is found to affect scalability.

Signed-off-by: Nitin Gupta <ngupta@vflare.org>
---
 drivers/staging/ramzswap/ramzswap_drv.c   |   58 +++++++++++++++--------------
 drivers/staging/ramzswap/ramzswap_drv.h   |   59 ++++++++++++++++++++++++-----
 drivers/staging/ramzswap/ramzswap_ioctl.h |    1 +
 3 files changed, 80 insertions(+), 38 deletions(-)

diff --git a/drivers/staging/ramzswap/ramzswap_drv.c b/drivers/staging/ramzswap/ramzswap_drv.c
index baf7572..0a376c2 100644
--- a/drivers/staging/ramzswap/ramzswap_drv.c
+++ b/drivers/staging/ramzswap/ramzswap_drv.c
@@ -24,7 +24,6 @@
 #include <linux/genhd.h>
 #include <linux/highmem.h>
 #include <linux/lzo.h>
-#include <linux/mutex.h>
 #include <linux/string.h>
 #include <linux/swap.h>
 #include <linux/swapops.h>
@@ -239,7 +238,8 @@ void ramzswap_ioctl_get_stats(struct ramzswap *rzs,
 
 	mem_used = xv_get_total_size_bytes(rzs->mem_pool)
 			+ (rs->pages_expand << PAGE_SHIFT);
-	succ_writes = rs->num_writes - rs->failed_writes;
+	succ_writes = stat64_read(rzs, &rs->num_writes) -
+			stat64_read(rzs, &rs->failed_writes);
 
 	if (succ_writes && rs->pages_stored) {
 		good_compress_perc = rs->good_compress * 100
@@ -248,11 +248,12 @@ void ramzswap_ioctl_get_stats(struct ramzswap *rzs,
 					/ rs->pages_stored;
 	}
 
-	s->num_reads = rs->num_reads;
-	s->num_writes = rs->num_writes;
-	s->failed_reads = rs->failed_reads;
-	s->failed_writes = rs->failed_writes;
-	s->invalid_io = rs->invalid_io;
+	s->num_reads = stat64_read(rzs, &rs->num_reads);
+	s->num_writes = stat64_read(rzs, &rs->num_writes);
+	s->failed_reads = stat64_read(rzs, &rs->failed_reads);
+	s->failed_writes = stat64_read(rzs, &rs->failed_writes);
+	s->invalid_io = stat64_read(rzs, &rs->invalid_io);
+	s->notify_free = stat64_read(rzs, &rs->notify_free);
 	s->pages_zero = rs->pages_zero;
 
 	s->good_compress_pct = good_compress_perc;
@@ -264,8 +265,8 @@ void ramzswap_ioctl_get_stats(struct ramzswap *rzs,
 	s->compr_data_size = rs->compr_size;
 	s->mem_used_total = mem_used;
 
-	s->bdev_num_reads = rs->bdev_num_reads;
-	s->bdev_num_writes = rs->bdev_num_writes;
+	s->bdev_num_reads = stat64_read(rzs, &rs->bdev_num_reads);
+	s->bdev_num_writes = stat64_read(rzs, &rs->bdev_num_writes);
 	}
 #endif /* CONFIG_RAMZSWAP_STATS */
 }
@@ -594,7 +595,7 @@ static void ramzswap_free_page(struct ramzswap *rzs, size_t index)
 	if (unlikely(!page)) {
 		if (rzs_test_flag(rzs, index, RZS_ZERO)) {
 			rzs_clear_flag(rzs, index, RZS_ZERO);
-			stat_dec(rzs->stats.pages_zero);
+			stat_dec(&rzs->stats.pages_zero);
 		}
 		return;
 	}
@@ -603,7 +604,7 @@ static void ramzswap_free_page(struct ramzswap *rzs, size_t index)
 		clen = PAGE_SIZE;
 		__free_page(page);
 		rzs_clear_flag(rzs, index, RZS_UNCOMPRESSED);
-		stat_dec(rzs->stats.pages_expand);
+		stat_dec(&rzs->stats.pages_expand);
 		goto out;
 	}
 
@@ -613,11 +614,11 @@ static void ramzswap_free_page(struct ramzswap *rzs, size_t index)
 
 	xv_free(rzs->mem_pool, page, offset);
 	if (clen <= PAGE_SIZE / 2)
-		stat_dec(rzs->stats.good_compress);
+		stat_dec(&rzs->stats.good_compress);
 
 out:
 	rzs->stats.compr_size -= clen;
-	stat_dec(rzs->stats.pages_stored);
+	stat_dec(&rzs->stats.pages_stored);
 
 	rzs->table[index].page = NULL;
 	rzs->table[index].offset = 0;
@@ -679,8 +680,8 @@ static int handle_ramzswap_fault(struct ramzswap *rzs, struct bio *bio)
 	 */
 	if (rzs->backing_swap) {
 		u32 pagenum;
-		stat_dec(rzs->stats.num_reads);
-		stat_inc(rzs->stats.bdev_num_reads);
+		stat64_dec(rzs, &rzs->stats.num_reads);
+		stat64_inc(rzs, &rzs->stats.bdev_num_reads);
 		bio->bi_bdev = rzs->backing_swap;
 
 		/*
@@ -718,7 +719,7 @@ static int ramzswap_read(struct ramzswap *rzs, struct bio *bio)
 	struct zobj_header *zheader;
 	unsigned char *user_mem, *cmem;
 
-	stat_inc(rzs->stats.num_reads);
+	stat64_inc(rzs, &rzs->stats.num_reads);
 
 	page = bio->bi_io_vec[0].bv_page;
 	index = bio->bi_sector >> SECTORS_PER_PAGE_SHIFT;
@@ -752,7 +753,7 @@ static int ramzswap_read(struct ramzswap *rzs, struct bio *bio)
 	if (unlikely(ret != LZO_E_OK)) {
 		pr_err("Decompression failed! err=%d, page=%u\n",
 			ret, index);
-		stat_inc(rzs->stats.failed_reads);
+		stat64_inc(rzs, &rzs->stats.failed_reads);
 		goto out;
 	}
 
@@ -776,7 +777,7 @@ static int ramzswap_write(struct ramzswap *rzs, struct bio *bio)
 	struct page *page, *page_store;
 	unsigned char *user_mem, *cmem, *src;
 
-	stat_inc(rzs->stats.num_writes);
+	stat64_inc(rzs, &rzs->stats.num_writes);
 
 	page = bio->bi_io_vec[0].bv_page;
 	index = bio->bi_sector >> SECTORS_PER_PAGE_SHIFT;
@@ -796,7 +797,7 @@ static int ramzswap_write(struct ramzswap *rzs, struct bio *bio)
 	 * Simply clear zero page flag.
 	 */
 	if (rzs_test_flag(rzs, index, RZS_ZERO)) {
-		stat_dec(rzs->stats.pages_zero);
+		stat_dec(&rzs->stats.pages_zero);
 		rzs_clear_flag(rzs, index, RZS_ZERO);
 	}
 
@@ -806,7 +807,7 @@ static int ramzswap_write(struct ramzswap *rzs, struct bio *bio)
 	if (page_zero_filled(user_mem)) {
 		kunmap_atomic(user_mem, KM_USER0);
 		mutex_unlock(&rzs->lock);
-		stat_inc(rzs->stats.pages_zero);
+		stat_inc(&rzs->stats.pages_zero);
 		rzs_set_flag(rzs, index, RZS_ZERO);
 
 		set_bit(BIO_UPTODATE, &bio->bi_flags);
@@ -830,7 +831,7 @@ static int ramzswap_write(struct ramzswap *rzs, struct bio *bio)
 	if (unlikely(ret != LZO_E_OK)) {
 		mutex_unlock(&rzs->lock);
 		pr_err("Compression failed! err=%d\n", ret);
-		stat_inc(rzs->stats.failed_writes);
+		stat64_inc(rzs, &rzs->stats.failed_writes);
 		goto out;
 	}
 
@@ -853,13 +854,13 @@ static int ramzswap_write(struct ramzswap *rzs, struct bio *bio)
 			mutex_unlock(&rzs->lock);
 			pr_info("Error allocating memory for incompressible "
 				"page: %u\n", index);
-			stat_inc(rzs->stats.failed_writes);
+			stat64_inc(rzs, &rzs->stats.failed_writes);
 			goto out;
 		}
 
 		offset = 0;
 		rzs_set_flag(rzs, index, RZS_UNCOMPRESSED);
-		stat_inc(rzs->stats.pages_expand);
+		stat_inc(&rzs->stats.pages_expand);
 		rzs->table[index].page = page_store;
 		src = kmap_atomic(page, KM_USER0);
 		goto memstore;
@@ -871,7 +872,7 @@ static int ramzswap_write(struct ramzswap *rzs, struct bio *bio)
 		mutex_unlock(&rzs->lock);
 		pr_info("Error allocating memory for compressed "
 			"page: %u, size=%zu\n", index, clen);
-		stat_inc(rzs->stats.failed_writes);
+		stat64_inc(rzs, &rzs->stats.failed_writes);
 		if (rzs->backing_swap)
 			fwd_write_request = 1;
 		goto out;
@@ -900,9 +901,9 @@ memstore:
 
 	/* Update stats */
 	rzs->stats.compr_size += clen;
-	stat_inc(rzs->stats.pages_stored);
+	stat_inc(&rzs->stats.pages_stored);
 	if (clen <= PAGE_SIZE / 2)
-		stat_inc(rzs->stats.good_compress);
+		stat_inc(&rzs->stats.good_compress);
 
 	mutex_unlock(&rzs->lock);
 
@@ -912,7 +913,7 @@ memstore:
 
 out:
 	if (fwd_write_request) {
-		stat_inc(rzs->stats.bdev_num_writes);
+		stat64_inc(rzs, &rzs->stats.bdev_num_writes);
 		bio->bi_bdev = rzs->backing_swap;
 #if 0
 		/*
@@ -974,7 +975,7 @@ static int ramzswap_make_request(struct request_queue *queue, struct bio *bio)
 	}
 
 	if (!valid_swap_request(rzs, bio)) {
-		stat_inc(rzs->stats.invalid_io);
+		stat64_inc(rzs, &rzs->stats.invalid_io);
 		bio_io_error(bio);
 		return 0;
 	}
@@ -1295,6 +1296,7 @@ static struct block_device_operations ramzswap_devops = {
 static int create_device(struct ramzswap *rzs, int device_id)
 {
 	mutex_init(&rzs->lock);
+	spin_lock_init(&rzs->stat64_lock);
 	INIT_LIST_HEAD(&rzs->backing_swap_extent_list);
 
 	rzs->queue = blk_alloc_queue(GFP_KERNEL);
diff --git a/drivers/staging/ramzswap/ramzswap_drv.h b/drivers/staging/ramzswap/ramzswap_drv.h
index 7ba98bf..3831d93 100644
--- a/drivers/staging/ramzswap/ramzswap_drv.h
+++ b/drivers/staging/ramzswap/ramzswap_drv.h
@@ -15,6 +15,9 @@
 #ifndef _RAMZSWAP_DRV_H_
 #define _RAMZSWAP_DRV_H_
 
+#include <linux/spinlock.h>
+#include <linux/mutex.h>
+
 #include "ramzswap_ioctl.h"
 #include "xvmalloc.h"
 
@@ -71,15 +74,6 @@ static const unsigned max_zpage_size_nobdev = PAGE_SIZE / 4 * 3;
 #define SECTORS_PER_PAGE_SHIFT	(PAGE_SHIFT - SECTOR_SHIFT)
 #define SECTORS_PER_PAGE	(1 << SECTORS_PER_PAGE_SHIFT)
 
-/* Debugging and Stats */
-#if defined(CONFIG_RAMZSWAP_STATS)
-#define stat_inc(stat)	((stat)++)
-#define stat_dec(stat)	((stat)--)
-#else
-#define stat_inc(x)
-#define stat_dec(x)
-#endif
-
 /* Flags for ramzswap pages (table[page_no].flags) */
 enum rzs_pageflags {
 	/* Page is stored uncompressed */
@@ -124,6 +118,7 @@ struct ramzswap_stats {
 	u64 failed_reads;	/* should NEVER! happen */
 	u64 failed_writes;	/* can happen when memory is too low */
 	u64 invalid_io;		/* non-swap I/O requests */
+	u64 notify_free;	/* no. of swap slot free notifications */
 	u32 pages_zero;		/* no. of zero filled pages */
 	u32 pages_stored;	/* no. of pages currently stored */
 	u32 good_compress;	/* % of pages with compression ratio<=50% */
@@ -138,6 +133,7 @@ struct ramzswap {
 	void *compress_workmem;
 	void *compress_buffer;
 	struct table *table;
+	spinlock_t stat64_lock;	/* protect 64-bit stats */
 	struct mutex lock;
 	struct request_queue *queue;
 	struct gendisk *disk;
@@ -167,5 +163,48 @@ struct ramzswap {
 
 /*-- */
 
-#endif
+/* Debugging and Stats */
+#if defined(CONFIG_RAMZSWAP_STATS)
+static void stat_inc(u32 *v)
+{
+	*v = *v + 1;
+}
+
+static void stat_dec(u32 *v)
+{
+	*v = *v - 1;
+}
+
+static void stat64_inc(struct ramzswap *rzs, u64 *v)
+{
+	spin_lock(&rzs->stat64_lock);
+	*v = *v + 1;
+	spin_unlock(&rzs->stat64_lock);
+}
+
+static void stat64_dec(struct ramzswap *rzs, u64 *v)
+{
+	spin_lock(&rzs->stat64_lock);
+	*v = *v - 1;
+	spin_unlock(&rzs->stat64_lock);
+}
+
+static u64 stat64_read(struct ramzswap *rzs, u64 *v)
+{
+	u64 val;
+
+	spin_lock(&rzs->stat64_lock);
+	val = *v;
+	spin_unlock(&rzs->stat64_lock);
+
+	return val;
+}
+#else
+#define stat_inc(v)
+#define stat_dec(v)
+#define stat64_inc(r, v)
+#define stat64_dec(r, v)
+#define stat64_read(r, v)
+#endif /* CONFIG_RAMZSWAP_STATS */
 
+#endif
diff --git a/drivers/staging/ramzswap/ramzswap_ioctl.h b/drivers/staging/ramzswap/ramzswap_ioctl.h
index 1bc54e2..1edaeba 100644
--- a/drivers/staging/ramzswap/ramzswap_ioctl.h
+++ b/drivers/staging/ramzswap/ramzswap_ioctl.h
@@ -27,6 +27,7 @@ struct ramzswap_ioctl_stats {
 	u64 failed_reads;	/* should NEVER! happen */
 	u64 failed_writes;	/* can happen when memory is too low */
 	u64 invalid_io;		/* non-swap I/O requests */
+	u64 notify_free;	/* no. of swap slot free notifications */
 	u32 pages_zero;		/* no. of zero filled pages */
 	u32 good_compress_pct;	/* no. of pages with compression ratio<=50% */
 	u32 pages_expand_pct;	/* no. of incompressible pages */
-- 
1.6.2.5


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 2/3] flush block device before reset
  2010-01-27 14:24 [PATCH 0/3][linux-next] ramzswap: bug fixes and some cleanups Nitin Gupta
  2010-01-27 14:24 ` [PATCH 1/3] use lock for 64-bit stats Nitin Gupta
@ 2010-01-27 14:24 ` Nitin Gupta
  2010-01-27 14:24 ` [PATCH 3/3] set block size to PAGE_SIZE and some cleanups Nitin Gupta
  2 siblings, 0 replies; 11+ messages in thread
From: Nitin Gupta @ 2010-01-27 14:24 UTC (permalink / raw)
  To: Greg KH; +Cc: Pekka Enberg, linux-kernel

Make sure we flush the block device before freeing all
metadata during the reset ioctl.

Signed-off-by: Nitin Gupta <ngupta@vflare.org>
---
 drivers/staging/ramzswap/ramzswap_drv.c |   11 ++++++++---
 1 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/staging/ramzswap/ramzswap_drv.c b/drivers/staging/ramzswap/ramzswap_drv.c
index 0a376c2..64f49c9 100644
--- a/drivers/staging/ramzswap/ramzswap_drv.c
+++ b/drivers/staging/ramzswap/ramzswap_drv.c
@@ -1000,6 +1000,9 @@ static void reset_device(struct ramzswap *rzs)
 	unsigned entries_per_page;
 	unsigned long num_table_pages, entry = 0;
 
+	/* Do not accept any new I/O request */
+	rzs->init_done = 0;
+
 	if (rzs->backing_swap && !rzs->num_extents)
 		is_backing_blkdev = 1;
 
@@ -1073,9 +1076,6 @@ static void reset_device(struct ramzswap *rzs)
 
 	rzs->disksize = 0;
 	rzs->memlimit = 0;
-
-	/* Back to uninitialized state */
-	rzs->init_done = 0;
 }
 
 static int ramzswap_ioctl_init_device(struct ramzswap *rzs)
@@ -1276,6 +1276,11 @@ static int ramzswap_ioctl(struct block_device *bdev, fmode_t mode,
 			ret = -EBUSY;
 			goto out;
 		}
+
+		/* Make sure all pending I/O is finished */
+		if (bdev)
+			fsync_bdev(bdev);
+
 		ret = ramzswap_ioctl_reset_device(rzs);
 		break;
 
-- 
1.6.2.5


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 3/3] set block size to PAGE_SIZE and some cleanups
  2010-01-27 14:24 [PATCH 0/3][linux-next] ramzswap: bug fixes and some cleanups Nitin Gupta
  2010-01-27 14:24 ` [PATCH 1/3] use lock for 64-bit stats Nitin Gupta
  2010-01-27 14:24 ` [PATCH 2/3] flush block device before reset Nitin Gupta
@ 2010-01-27 14:24 ` Nitin Gupta
  2010-01-28  3:59   ` [PATCH 3/3][resend] " Nitin Gupta
  2 siblings, 1 reply; 11+ messages in thread
From: Nitin Gupta @ 2010-01-27 14:24 UTC (permalink / raw)
  To: Greg KH; +Cc: Pekka Enberg, linux-kernel

The ramzswap block size needs to be set to PAGE_SIZE
to avoid receiving any unaligned block I/O (this happens
at swapon time). Such unaligned accesses produce
unnecessary I/O errors, scaring users.

Also included are some minor cleanups.

Signed-off-by: Nitin Gupta <ngupta@vflare.org>
---
 drivers/staging/ramzswap/ramzswap_drv.c   |   77 +++++++++++++++++------------
 drivers/staging/ramzswap/ramzswap_drv.h   |    2 +-
 drivers/staging/ramzswap/ramzswap_ioctl.h |    2 +-
 drivers/staging/ramzswap/xvmalloc.c       |    2 +-
 drivers/staging/ramzswap/xvmalloc.h       |    2 +-
 drivers/staging/ramzswap/xvmalloc_int.h   |    2 +-
 6 files changed, 50 insertions(+), 37 deletions(-)

diff --git a/drivers/staging/ramzswap/ramzswap_drv.c b/drivers/staging/ramzswap/ramzswap_drv.c
index 64f49c9..7f47373 100644
--- a/drivers/staging/ramzswap/ramzswap_drv.c
+++ b/drivers/staging/ramzswap/ramzswap_drv.c
@@ -1,7 +1,7 @@
 /*
  * Compressed RAM based swap device
  *
- * Copyright (C) 2008, 2009  Nitin Gupta
+ * Copyright (C) 2008, 2009, 2010  Nitin Gupta
  *
  * This code is released using a dual license strategy: BSD/GPL
  * You can choose the licence that better fits your requirements.
@@ -220,7 +220,7 @@ out:
 	return ret;
 }
 
-void ramzswap_ioctl_get_stats(struct ramzswap *rzs,
+static void ramzswap_ioctl_get_stats(struct ramzswap *rzs,
 			struct ramzswap_ioctl_stats *s)
 {
 	strncpy(s->backing_swap_name, rzs->backing_swap_name,
@@ -502,6 +502,10 @@ static int setup_backing_swap(struct ramzswap *rzs)
 			goto bad_param;
 		}
 		disksize = i_size_read(inode);
+		if (!disksize) {
+			pr_err("Error reading backing swap size.\n");
+			goto bad_param;
+		}
 	} else if (S_ISREG(inode->i_mode)) {
 		bdev = inode->i_sb->s_bdev;
 		if (IS_SWAPFILE(inode)) {
@@ -519,7 +523,6 @@ static int setup_backing_swap(struct ramzswap *rzs)
 	rzs->swap_file = swap_file;
 	rzs->backing_swap = bdev;
 	rzs->disksize = disksize;
-	BUG_ON(!rzs->disksize);
 
 	return 0;
 
@@ -537,7 +540,7 @@ out:
  * Map logical page number 'pagenum' to physical page number
  * on backing swap device. For block device, this is a nop.
  */
-u32 map_backing_swap_page(struct ramzswap *rzs, u32 pagenum)
+static u32 map_backing_swap_page(struct ramzswap *rzs, u32 pagenum)
 {
 	u32 skip_pages, entries_per_page;
 	size_t delta, se_offset, skipped;
@@ -593,6 +596,10 @@ static void ramzswap_free_page(struct ramzswap *rzs, size_t index)
 	u32 offset = rzs->table[index].offset;
 
 	if (unlikely(!page)) {
+		/*
+		 * No memory is allocated for zero filled pages.
+		 * Simply clear zero page flag.
+		 */
 		if (rzs_test_flag(rzs, index, RZS_ZERO)) {
 			rzs_clear_flag(rzs, index, RZS_ZERO);
 			stat_dec(&rzs->stats.pages_zero);
@@ -664,7 +671,6 @@ static int handle_uncompressed_page(struct ramzswap *rzs, struct bio *bio)
 	return 0;
 }
 
-
 /*
  * Called when request page is not present in ramzswap.
  * Its either in backing swap device (if present) or
@@ -782,24 +788,15 @@ static int ramzswap_write(struct ramzswap *rzs, struct bio *bio)
 	page = bio->bi_io_vec[0].bv_page;
 	index = bio->bi_sector >> SECTORS_PER_PAGE_SHIFT;
 
-	src = rzs->compress_buffer;
-
 	/*
 	 * System swaps to same sector again when the stored page
 	 * is no longer referenced by any process. So, its now safe
 	 * to free the memory that was allocated for this page.
 	 */
-	if (rzs->table[index].page)
+	if (rzs->table[index].page || rzs_test_flag(rzs, index, RZS_ZERO))
 		ramzswap_free_page(rzs, index);
 
-	/*
-	 * No memory is allocated for zero filled pages.
-	 * Simply clear zero page flag.
-	 */
-	if (rzs_test_flag(rzs, index, RZS_ZERO)) {
-		stat_dec(&rzs->stats.pages_zero);
-		rzs_clear_flag(rzs, index, RZS_ZERO);
-	}
+	src = rzs->compress_buffer;
 
 	mutex_lock(&rzs->lock);
 
@@ -941,7 +938,6 @@ out:
 	return 0;
 }
 
-
 /*
  * Check if request is within bounds and page aligned.
  */
@@ -1069,6 +1065,7 @@ static void reset_device(struct ramzswap *rzs)
 			bd_release(rzs->backing_swap);
 		filp_close(rzs->swap_file, NULL);
 		rzs->backing_swap = NULL;
+		memset(rzs->backing_swap_name, 0, MAX_SWAP_NAME_LEN);
 	}
 
 	/* Reset stats */
@@ -1300,6 +1297,7 @@ static struct block_device_operations ramzswap_devops = {
 
 static int create_device(struct ramzswap *rzs, int device_id)
 {
+	int ret = 0;
 	mutex_init(&rzs->lock);
 	spin_lock_init(&rzs->stat64_lock);
 	INIT_LIST_HEAD(&rzs->backing_swap_extent_list);
@@ -1308,7 +1306,8 @@ static int create_device(struct ramzswap *rzs, int device_id)
 	if (!rzs->queue) {
 		pr_err("Error allocating disk queue for device %d\n",
 			device_id);
-		return 0;
+		ret = -ENOMEM;
+		goto out;
 	}
 
 	blk_queue_make_request(rzs->queue, ramzswap_make_request);
@@ -1320,7 +1319,8 @@ static int create_device(struct ramzswap *rzs, int device_id)
 		blk_cleanup_queue(rzs->queue);
 		pr_warning("Error allocating disk structure for device %d\n",
 			device_id);
-		return 0;
+		ret = -ENOMEM;
+		goto out;
 	}
 
 	rzs->disk->major = ramzswap_major;
@@ -1335,10 +1335,16 @@ static int create_device(struct ramzswap *rzs, int device_id)
 	 * or set equal to backing swap device (if provided)
 	 */
 	set_capacity(rzs->disk, 0);
+
+	blk_queue_physical_block_size(rzs->disk->queue, PAGE_SIZE);
+	blk_queue_logical_block_size(rzs->disk->queue, PAGE_SIZE);
+
 	add_disk(rzs->disk);
 
 	rzs->init_done = 0;
-	return 1;
+
+out:
+	return ret;
 }
 
 static void destroy_device(struct ramzswap *rzs)
@@ -1354,18 +1360,20 @@ static void destroy_device(struct ramzswap *rzs)
 
 static int __init ramzswap_init(void)
 {
-	int i, ret;
+	int ret, dev_id;
 
 	if (num_devices > max_num_devices) {
 		pr_warning("Invalid value for num_devices: %u\n",
 				num_devices);
-		return -EINVAL;
+		ret = -EINVAL;
+		goto out;
 	}
 
 	ramzswap_major = register_blkdev(0, "ramzswap");
 	if (ramzswap_major <= 0) {
 		pr_warning("Unable to get major number\n");
-		return -EBUSY;
+		ret = -EBUSY;
+		goto out;
 	}
 
 	if (!num_devices) {
@@ -1376,21 +1384,26 @@ static int __init ramzswap_init(void)
 	/* Allocate the device array and initialize each one */
 	pr_info("Creating %u devices ...\n", num_devices);
 	devices = kzalloc(num_devices * sizeof(struct ramzswap), GFP_KERNEL);
-	if (!devices)
-		goto out;
+	if (!devices) {
+		ret = -ENOMEM;
+		goto unregister;
+	}
 
-	for (i = 0; i < num_devices; i++)
-		if (!create_device(&devices[i], i)) {
-			ret = i;
+	for (dev_id = 0; dev_id < num_devices; dev_id++) {
+		if (create_device(&devices[dev_id], dev_id)) {
+			ret = -ENOMEM;
 			goto free_devices;
 		}
+	}
+
 	return 0;
+
 free_devices:
-	for (i = 0; i < ret; i++)
-		destroy_device(&devices[i]);
-out:
-	ret = -ENOMEM;
+	while (dev_id)
+		destroy_device(&devices[--dev_id]);
+unregister:
 	unregister_blkdev(ramzswap_major, "ramzswap");
+out:
 	return ret;
 }
 
diff --git a/drivers/staging/ramzswap/ramzswap_drv.h b/drivers/staging/ramzswap/ramzswap_drv.h
index 3831d93..e5ffdce 100644
--- a/drivers/staging/ramzswap/ramzswap_drv.h
+++ b/drivers/staging/ramzswap/ramzswap_drv.h
@@ -1,7 +1,7 @@
 /*
  * Compressed RAM based swap device
  *
- * Copyright (C) 2008, 2009  Nitin Gupta
+ * Copyright (C) 2008, 2009, 2010  Nitin Gupta
  *
  * This code is released using a dual license strategy: BSD/GPL
  * You can choose the licence that better fits your requirements.
diff --git a/drivers/staging/ramzswap/ramzswap_ioctl.h b/drivers/staging/ramzswap/ramzswap_ioctl.h
index 1edaeba..d26076d 100644
--- a/drivers/staging/ramzswap/ramzswap_ioctl.h
+++ b/drivers/staging/ramzswap/ramzswap_ioctl.h
@@ -1,7 +1,7 @@
 /*
  * Compressed RAM based swap device
  *
- * Copyright (C) 2008, 2009  Nitin Gupta
+ * Copyright (C) 2008, 2009, 2010  Nitin Gupta
  *
  * This code is released using a dual license strategy: BSD/GPL
  * You can choose the licence that better fits your requirements.
diff --git a/drivers/staging/ramzswap/xvmalloc.c b/drivers/staging/ramzswap/xvmalloc.c
index dabd266..3fdbb8a 100644
--- a/drivers/staging/ramzswap/xvmalloc.c
+++ b/drivers/staging/ramzswap/xvmalloc.c
@@ -1,7 +1,7 @@
 /*
  * xvmalloc memory allocator
  *
- * Copyright (C) 2008, 2009  Nitin Gupta
+ * Copyright (C) 2008, 2009, 2010  Nitin Gupta
  *
  * This code is released using a dual license strategy: BSD/GPL
  * You can choose the licence that better fits your requirements.
diff --git a/drivers/staging/ramzswap/xvmalloc.h b/drivers/staging/ramzswap/xvmalloc.h
index 010c6fe..5b1a81a 100644
--- a/drivers/staging/ramzswap/xvmalloc.h
+++ b/drivers/staging/ramzswap/xvmalloc.h
@@ -1,7 +1,7 @@
 /*
  * xvmalloc memory allocator
  *
- * Copyright (C) 2008, 2009  Nitin Gupta
+ * Copyright (C) 2008, 2009, 2010  Nitin Gupta
  *
  * This code is released using a dual license strategy: BSD/GPL
  * You can choose the licence that better fits your requirements.
diff --git a/drivers/staging/ramzswap/xvmalloc_int.h b/drivers/staging/ramzswap/xvmalloc_int.h
index 263c72d..e23ed5c 100644
--- a/drivers/staging/ramzswap/xvmalloc_int.h
+++ b/drivers/staging/ramzswap/xvmalloc_int.h
@@ -1,7 +1,7 @@
 /*
  * xvmalloc memory allocator
  *
- * Copyright (C) 2008, 2009  Nitin Gupta
+ * Copyright (C) 2008, 2009, 2010  Nitin Gupta
  *
  * This code is released using a dual license strategy: BSD/GPL
  * You can choose the licence that better fits your requirements.
-- 
1.6.2.5


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH 1/3] use lock for 64-bit stats
  2010-01-27 14:24 ` [PATCH 1/3] use lock for 64-bit stats Nitin Gupta
@ 2010-01-27 15:10   ` Pekka Enberg
  2010-01-27 15:51     ` Nitin Gupta
  2010-01-28  2:19   ` [PATCH 1/3][resend] " Nitin Gupta
  1 sibling, 1 reply; 11+ messages in thread
From: Pekka Enberg @ 2010-01-27 15:10 UTC (permalink / raw)
  To: Nitin Gupta; +Cc: Greg KH, linux-kernel

Nitin Gupta wrote:
> +static void stat_dec(u32 *v)
> +{
> +	*v = *v - 1;
> +}
> +
> +static void stat64_inc(struct ramzswap *rzs, u64 *v)
> +{
> +	spin_lock(&rzs->stat64_lock);
> +	*v = *v + 1;
> +	spin_unlock(&rzs->stat64_lock);
> +}
> +
> +static void stat64_dec(struct ramzswap *rzs, u64 *v)
> +{
> +	spin_lock(&rzs->stat64_lock);
> +	*v = *v - 1;
> +	spin_unlock(&rzs->stat64_lock);
> +}
> +
> +static u64 stat64_read(struct ramzswap *rzs, u64 *v)
> +{
> +	u64 val;
> +
> +	spin_lock(&rzs->stat64_lock);
> +	val = *v;
> +	spin_unlock(&rzs->stat64_lock);
> +
> +	return val;
> +}
> +#else
> +#define stat_inc(v)
> +#define stat_dec(v)
> +#define stat64_inc(r, v)
> +#define stat64_dec(r, v)
> +#define stat64_read(r, v)
> +#endif /* CONFIG_RAMZSWAP_STATS */

I think I complained about this before: the names are too generic and 
could collide with core kernel code. I think they ought to be called 
ramzswap_stat*().

			Pekka

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 1/3] use lock for 64-bit stats
  2010-01-27 15:10   ` Pekka Enberg
@ 2010-01-27 15:51     ` Nitin Gupta
  0 siblings, 0 replies; 11+ messages in thread
From: Nitin Gupta @ 2010-01-27 15:51 UTC (permalink / raw)
  To: Pekka Enberg; +Cc: Greg KH, linux-kernel

On 01/27/2010 08:40 PM, Pekka Enberg wrote:
> Nitin Gupta wrote:
>> +static void stat_dec(u32 *v)
>> +{
>> +    *v = *v - 1;
>> +}
>> +
>> +static void stat64_inc(struct ramzswap *rzs, u64 *v)
>> +{
>> +    spin_lock(&rzs->stat64_lock);
>> +    *v = *v + 1;
>> +    spin_unlock(&rzs->stat64_lock);
>> +}
>> +
>> +static void stat64_dec(struct ramzswap *rzs, u64 *v)
>> +{
>> +    spin_lock(&rzs->stat64_lock);
>> +    *v = *v - 1;
>> +    spin_unlock(&rzs->stat64_lock);
>> +}
>> +
>> +static u64 stat64_read(struct ramzswap *rzs, u64 *v)
>> +{
>> +    u64 val;
>> +
>> +    spin_lock(&rzs->stat64_lock);
>> +    val = *v;
>> +    spin_unlock(&rzs->stat64_lock);
>> +
>> +    return val;
>> +}
>> +#else
>> +#define stat_inc(v)
>> +#define stat_dec(v)
>> +#define stat64_inc(r, v)
>> +#define stat64_dec(r, v)
>> +#define stat64_read(r, v)
>> +#endif /* CONFIG_RAMZSWAP_STATS */
> 
> I think I complained about this before: the names are too generic and
> could collide with core kernel code. I think they ought to be called
> ramzswap_stat*().
> 


I somehow missed this point. I will rename these to rzs_stat*()
(the 'rzs_' prefix is used for other one-liner functions too).

Thanks,
Nitin

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH 1/3][resend] use lock for 64-bit stats
  2010-01-27 14:24 ` [PATCH 1/3] use lock for 64-bit stats Nitin Gupta
  2010-01-27 15:10   ` Pekka Enberg
@ 2010-01-28  2:19   ` Nitin Gupta
  1 sibling, 0 replies; 11+ messages in thread
From: Nitin Gupta @ 2010-01-28  2:19 UTC (permalink / raw)
  To: Greg KH; +Cc: Pekka Enberg, linux-kernel

64-bit stats corruption was observed when ramzswap was
used on SMP systems. To prevent this, use a separate
spinlock to protect these stats.

Also, replace stat_*() with rzs_stat*() to avoid possible
conflict with core kernel code.

Eventually, these will be converted to per-cpu counters
if this driver finds use on large-scale systems and this
locking is found to affect scalability.

Signed-off-by: Nitin Gupta <ngupta@vflare.org>
---
 drivers/staging/ramzswap/ramzswap_drv.c   |   58 +++++++++++++++--------------
 drivers/staging/ramzswap/ramzswap_drv.h   |   59 ++++++++++++++++++++++++-----
 drivers/staging/ramzswap/ramzswap_ioctl.h |    1 +
 3 files changed, 80 insertions(+), 38 deletions(-)

diff --git a/drivers/staging/ramzswap/ramzswap_drv.c b/drivers/staging/ramzswap/ramzswap_drv.c
index baf7572..05273c0 100644
--- a/drivers/staging/ramzswap/ramzswap_drv.c
+++ b/drivers/staging/ramzswap/ramzswap_drv.c
@@ -24,7 +24,6 @@
 #include <linux/genhd.h>
 #include <linux/highmem.h>
 #include <linux/lzo.h>
-#include <linux/mutex.h>
 #include <linux/string.h>
 #include <linux/swap.h>
 #include <linux/swapops.h>
@@ -239,7 +238,8 @@ void ramzswap_ioctl_get_stats(struct ramzswap *rzs,

 	mem_used = xv_get_total_size_bytes(rzs->mem_pool)
 			+ (rs->pages_expand << PAGE_SHIFT);
-	succ_writes = rs->num_writes - rs->failed_writes;
+	succ_writes = rzs_stat64_read(rzs, &rs->num_writes) -
+			rzs_stat64_read(rzs, &rs->failed_writes);

 	if (succ_writes && rs->pages_stored) {
 		good_compress_perc = rs->good_compress * 100
@@ -248,11 +248,12 @@ void ramzswap_ioctl_get_stats(struct ramzswap *rzs,
 					/ rs->pages_stored;
 	}

-	s->num_reads = rs->num_reads;
-	s->num_writes = rs->num_writes;
-	s->failed_reads = rs->failed_reads;
-	s->failed_writes = rs->failed_writes;
-	s->invalid_io = rs->invalid_io;
+	s->num_reads = rzs_stat64_read(rzs, &rs->num_reads);
+	s->num_writes = rzs_stat64_read(rzs, &rs->num_writes);
+	s->failed_reads = rzs_stat64_read(rzs, &rs->failed_reads);
+	s->failed_writes = rzs_stat64_read(rzs, &rs->failed_writes);
+	s->invalid_io = rzs_stat64_read(rzs, &rs->invalid_io);
+	s->notify_free = rzs_stat64_read(rzs, &rs->notify_free);
 	s->pages_zero = rs->pages_zero;

 	s->good_compress_pct = good_compress_perc;
@@ -264,8 +265,8 @@ void ramzswap_ioctl_get_stats(struct ramzswap *rzs,
 	s->compr_data_size = rs->compr_size;
 	s->mem_used_total = mem_used;

-	s->bdev_num_reads = rs->bdev_num_reads;
-	s->bdev_num_writes = rs->bdev_num_writes;
+	s->bdev_num_reads = rzs_stat64_read(rzs, &rs->bdev_num_reads);
+	s->bdev_num_writes = rzs_stat64_read(rzs, &rs->bdev_num_writes);
 	}
 #endif /* CONFIG_RAMZSWAP_STATS */
 }
@@ -594,7 +595,7 @@ static void ramzswap_free_page(struct ramzswap *rzs, size_t index)
 	if (unlikely(!page)) {
 		if (rzs_test_flag(rzs, index, RZS_ZERO)) {
 			rzs_clear_flag(rzs, index, RZS_ZERO);
-			stat_dec(rzs->stats.pages_zero);
+			rzs_stat_dec(&rzs->stats.pages_zero);
 		}
 		return;
 	}
@@ -603,7 +604,7 @@ static void ramzswap_free_page(struct ramzswap *rzs, size_t index)
 		clen = PAGE_SIZE;
 		__free_page(page);
 		rzs_clear_flag(rzs, index, RZS_UNCOMPRESSED);
-		stat_dec(rzs->stats.pages_expand);
+		rzs_stat_dec(&rzs->stats.pages_expand);
 		goto out;
 	}

@@ -613,11 +614,11 @@ static void ramzswap_free_page(struct ramzswap *rzs, size_t index)

 	xv_free(rzs->mem_pool, page, offset);
 	if (clen <= PAGE_SIZE / 2)
-		stat_dec(rzs->stats.good_compress);
+		rzs_stat_dec(&rzs->stats.good_compress);

 out:
 	rzs->stats.compr_size -= clen;
-	stat_dec(rzs->stats.pages_stored);
+	rzs_stat_dec(&rzs->stats.pages_stored);

 	rzs->table[index].page = NULL;
 	rzs->table[index].offset = 0;
@@ -679,8 +680,8 @@ static int handle_ramzswap_fault(struct ramzswap *rzs, struct bio *bio)
 	 */
 	if (rzs->backing_swap) {
 		u32 pagenum;
-		stat_dec(rzs->stats.num_reads);
-		stat_inc(rzs->stats.bdev_num_reads);
+		rzs_stat64_dec(rzs, &rzs->stats.num_reads);
+		rzs_stat64_inc(rzs, &rzs->stats.bdev_num_reads);
 		bio->bi_bdev = rzs->backing_swap;

 		/*
@@ -718,7 +719,7 @@ static int ramzswap_read(struct ramzswap *rzs, struct bio *bio)
 	struct zobj_header *zheader;
 	unsigned char *user_mem, *cmem;

-	stat_inc(rzs->stats.num_reads);
+	rzs_stat64_inc(rzs, &rzs->stats.num_reads);

 	page = bio->bi_io_vec[0].bv_page;
 	index = bio->bi_sector >> SECTORS_PER_PAGE_SHIFT;
@@ -752,7 +753,7 @@ static int ramzswap_read(struct ramzswap *rzs, struct bio *bio)
 	if (unlikely(ret != LZO_E_OK)) {
 		pr_err("Decompression failed! err=%d, page=%u\n",
 			ret, index);
-		stat_inc(rzs->stats.failed_reads);
+		rzs_stat64_inc(rzs, &rzs->stats.failed_reads);
 		goto out;
 	}

@@ -776,7 +777,7 @@ static int ramzswap_write(struct ramzswap *rzs, struct bio *bio)
 	struct page *page, *page_store;
 	unsigned char *user_mem, *cmem, *src;

-	stat_inc(rzs->stats.num_writes);
+	rzs_stat64_inc(rzs, &rzs->stats.num_writes);

 	page = bio->bi_io_vec[0].bv_page;
 	index = bio->bi_sector >> SECTORS_PER_PAGE_SHIFT;
@@ -796,7 +797,7 @@ static int ramzswap_write(struct ramzswap *rzs, struct bio *bio)
 	 * Simply clear zero page flag.
 	 */
 	if (rzs_test_flag(rzs, index, RZS_ZERO)) {
-		stat_dec(rzs->stats.pages_zero);
+		rzs_stat_dec(&rzs->stats.pages_zero);
 		rzs_clear_flag(rzs, index, RZS_ZERO);
 	}

@@ -806,7 +807,7 @@ static int ramzswap_write(struct ramzswap *rzs, struct bio *bio)
 	if (page_zero_filled(user_mem)) {
 		kunmap_atomic(user_mem, KM_USER0);
 		mutex_unlock(&rzs->lock);
-		stat_inc(rzs->stats.pages_zero);
+		rzs_stat_inc(&rzs->stats.pages_zero);
 		rzs_set_flag(rzs, index, RZS_ZERO);

 		set_bit(BIO_UPTODATE, &bio->bi_flags);
@@ -830,7 +831,7 @@ static int ramzswap_write(struct ramzswap *rzs, struct bio *bio)
 	if (unlikely(ret != LZO_E_OK)) {
 		mutex_unlock(&rzs->lock);
 		pr_err("Compression failed! err=%d\n", ret);
-		stat_inc(rzs->stats.failed_writes);
+		rzs_stat64_inc(rzs, &rzs->stats.failed_writes);
 		goto out;
 	}

@@ -853,13 +854,13 @@ static int ramzswap_write(struct ramzswap *rzs, struct bio *bio)
 			mutex_unlock(&rzs->lock);
 			pr_info("Error allocating memory for incompressible "
 				"page: %u\n", index);
-			stat_inc(rzs->stats.failed_writes);
+			rzs_stat64_inc(rzs, &rzs->stats.failed_writes);
 			goto out;
 		}

 		offset = 0;
 		rzs_set_flag(rzs, index, RZS_UNCOMPRESSED);
-		stat_inc(rzs->stats.pages_expand);
+		rzs_stat_inc(&rzs->stats.pages_expand);
 		rzs->table[index].page = page_store;
 		src = kmap_atomic(page, KM_USER0);
 		goto memstore;
@@ -871,7 +872,7 @@ static int ramzswap_write(struct ramzswap *rzs, struct bio *bio)
 		mutex_unlock(&rzs->lock);
 		pr_info("Error allocating memory for compressed "
 			"page: %u, size=%zu\n", index, clen);
-		stat_inc(rzs->stats.failed_writes);
+		rzs_stat64_inc(rzs, &rzs->stats.failed_writes);
 		if (rzs->backing_swap)
 			fwd_write_request = 1;
 		goto out;
@@ -900,9 +901,9 @@ memstore:

 	/* Update stats */
 	rzs->stats.compr_size += clen;
-	stat_inc(rzs->stats.pages_stored);
+	rzs_stat_inc(&rzs->stats.pages_stored);
 	if (clen <= PAGE_SIZE / 2)
-		stat_inc(rzs->stats.good_compress);
+		rzs_stat_inc(&rzs->stats.good_compress);

 	mutex_unlock(&rzs->lock);

@@ -912,7 +913,7 @@ memstore:

 out:
 	if (fwd_write_request) {
-		stat_inc(rzs->stats.bdev_num_writes);
+		rzs_stat64_inc(rzs, &rzs->stats.bdev_num_writes);
 		bio->bi_bdev = rzs->backing_swap;
 #if 0
 		/*
@@ -974,7 +975,7 @@ static int ramzswap_make_request(struct request_queue *queue, struct bio *bio)
 	}

 	if (!valid_swap_request(rzs, bio)) {
-		stat_inc(rzs->stats.invalid_io);
+		rzs_stat64_inc(rzs, &rzs->stats.invalid_io);
 		bio_io_error(bio);
 		return 0;
 	}
@@ -1295,6 +1296,7 @@ static struct block_device_operations ramzswap_devops = {
 static int create_device(struct ramzswap *rzs, int device_id)
 {
 	mutex_init(&rzs->lock);
+	spin_lock_init(&rzs->stat64_lock);
 	INIT_LIST_HEAD(&rzs->backing_swap_extent_list);

 	rzs->queue = blk_alloc_queue(GFP_KERNEL);
diff --git a/drivers/staging/ramzswap/ramzswap_drv.h b/drivers/staging/ramzswap/ramzswap_drv.h
index 7ba98bf..5b67222 100644
--- a/drivers/staging/ramzswap/ramzswap_drv.h
+++ b/drivers/staging/ramzswap/ramzswap_drv.h
@@ -15,6 +15,9 @@
 #ifndef _RAMZSWAP_DRV_H_
 #define _RAMZSWAP_DRV_H_

+#include <linux/spinlock.h>
+#include <linux/mutex.h>
+
 #include "ramzswap_ioctl.h"
 #include "xvmalloc.h"

@@ -71,15 +74,6 @@ static const unsigned max_zpage_size_nobdev = PAGE_SIZE / 4 * 3;
 #define SECTORS_PER_PAGE_SHIFT	(PAGE_SHIFT - SECTOR_SHIFT)
 #define SECTORS_PER_PAGE	(1 << SECTORS_PER_PAGE_SHIFT)

-/* Debugging and Stats */
-#if defined(CONFIG_RAMZSWAP_STATS)
-#define stat_inc(stat)	((stat)++)
-#define stat_dec(stat)	((stat)--)
-#else
-#define stat_inc(x)
-#define stat_dec(x)
-#endif
-
 /* Flags for ramzswap pages (table[page_no].flags) */
 enum rzs_pageflags {
 	/* Page is stored uncompressed */
@@ -124,6 +118,7 @@ struct ramzswap_stats {
 	u64 failed_reads;	/* should NEVER! happen */
 	u64 failed_writes;	/* can happen when memory is too low */
 	u64 invalid_io;		/* non-swap I/O requests */
+	u64 notify_free;	/* no. of swap slot free notifications */
 	u32 pages_zero;		/* no. of zero filled pages */
 	u32 pages_stored;	/* no. of pages currently stored */
 	u32 good_compress;	/* % of pages with compression ratio<=50% */
@@ -138,6 +133,7 @@ struct ramzswap {
 	void *compress_workmem;
 	void *compress_buffer;
 	struct table *table;
+	spinlock_t stat64_lock;	/* protect 64-bit stats */
 	struct mutex lock;
 	struct request_queue *queue;
 	struct gendisk *disk;
@@ -167,5 +163,48 @@ struct ramzswap {

 /*-- */

-#endif
+/* Debugging and Stats */
+#if defined(CONFIG_RAMZSWAP_STATS)
+static void rzs_stat_inc(u32 *v)
+{
+	*v = *v + 1;
+}
+
+static void rzs_stat_dec(u32 *v)
+{
+	*v = *v - 1;
+}
+
+static void rzs_stat64_inc(struct ramzswap *rzs, u64 *v)
+{
+	spin_lock(&rzs->stat64_lock);
+	*v = *v + 1;
+	spin_unlock(&rzs->stat64_lock);
+}
+
+static void rzs_stat64_dec(struct ramzswap *rzs, u64 *v)
+{
+	spin_lock(&rzs->stat64_lock);
+	*v = *v - 1;
+	spin_unlock(&rzs->stat64_lock);
+}
+
+static u64 rzs_stat64_read(struct ramzswap *rzs, u64 *v)
+{
+	u64 val;
+
+	spin_lock(&rzs->stat64_lock);
+	val = *v;
+	spin_unlock(&rzs->stat64_lock);
+
+	return val;
+}
+#else
+#define rzs_stat_inc(v)
+#define rzs_stat_dec(v)
+#define rzs_stat64_inc(r, v)
+#define rzs_stat64_dec(r, v)
+#define rzs_stat64_read(r, v)
+#endif /* CONFIG_RAMZSWAP_STATS */

+#endif
diff --git a/drivers/staging/ramzswap/ramzswap_ioctl.h b/drivers/staging/ramzswap/ramzswap_ioctl.h
index 1bc54e2..1edaeba 100644
--- a/drivers/staging/ramzswap/ramzswap_ioctl.h
+++ b/drivers/staging/ramzswap/ramzswap_ioctl.h
@@ -27,6 +27,7 @@ struct ramzswap_ioctl_stats {
 	u64 failed_reads;	/* should NEVER! happen */
 	u64 failed_writes;	/* can happen when memory is too low */
 	u64 invalid_io;		/* non-swap I/O requests */
+	u64 notify_free;	/* no. of swap slot free notifications */
 	u32 pages_zero;		/* no. of zero filled pages */
 	u32 good_compress_pct;	/* no. of pages with compression ratio<=50% */
 	u32 pages_expand_pct;	/* no. of incompressible pages */
--
1.6.2.5


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 3/3][resend] set block size to PAGE_SIZE and some cleanups
  2010-01-27 14:24 ` [PATCH 3/3] set block size to PAGE_SIZE and some cleanups Nitin Gupta
@ 2010-01-28  3:59   ` Nitin Gupta
  2010-01-28  4:23     ` Greg KH
  0 siblings, 1 reply; 11+ messages in thread
From: Nitin Gupta @ 2010-01-28  3:59 UTC (permalink / raw)
  To: Greg KH; +Cc: Pekka Enberg, linux-kernel

[replace stat_*() with rzs_stat*()]
---

ramzswap block size needs to be set to PAGE_SIZE
to avoid receiving any unaligned block I/O (happens
during swapon time). These unaligned accesses produce
unnecessary I/O errors, scaring users.

Also included some minor cleanups.

Signed-off-by: Nitin Gupta <ngupta@vflare.org>
---
 drivers/staging/ramzswap/ramzswap_drv.c   |   77 +++++++++++++++++------------
 drivers/staging/ramzswap/ramzswap_drv.h   |    2 +-
 drivers/staging/ramzswap/ramzswap_ioctl.h |    2 +-
 drivers/staging/ramzswap/xvmalloc.c       |    2 +-
 drivers/staging/ramzswap/xvmalloc.h       |    2 +-
 drivers/staging/ramzswap/xvmalloc_int.h   |    2 +-
 6 files changed, 50 insertions(+), 37 deletions(-)

diff --git a/drivers/staging/ramzswap/ramzswap_drv.c b/drivers/staging/ramzswap/ramzswap_drv.c
index 3567ee3..5c93727 100644
--- a/drivers/staging/ramzswap/ramzswap_drv.c
+++ b/drivers/staging/ramzswap/ramzswap_drv.c
@@ -1,7 +1,7 @@
 /*
  * Compressed RAM based swap device
  *
- * Copyright (C) 2008, 2009  Nitin Gupta
+ * Copyright (C) 2008, 2009, 2010  Nitin Gupta
  *
  * This code is released using a dual license strategy: BSD/GPL
  * You can choose the licence that better fits your requirements.
@@ -220,7 +220,7 @@ out:
 	return ret;
 }

-void ramzswap_ioctl_get_stats(struct ramzswap *rzs,
+static void ramzswap_ioctl_get_stats(struct ramzswap *rzs,
 			struct ramzswap_ioctl_stats *s)
 {
 	strncpy(s->backing_swap_name, rzs->backing_swap_name,
@@ -502,6 +502,10 @@ static int setup_backing_swap(struct ramzswap *rzs)
 			goto bad_param;
 		}
 		disksize = i_size_read(inode);
+		if (!disksize) {
+			pr_err("Error reading backing swap size.\n");
+			goto bad_param;
+		}
 	} else if (S_ISREG(inode->i_mode)) {
 		bdev = inode->i_sb->s_bdev;
 		if (IS_SWAPFILE(inode)) {
@@ -519,7 +523,6 @@ static int setup_backing_swap(struct ramzswap *rzs)
 	rzs->swap_file = swap_file;
 	rzs->backing_swap = bdev;
 	rzs->disksize = disksize;
-	BUG_ON(!rzs->disksize);

 	return 0;

@@ -537,7 +540,7 @@ out:
  * Map logical page number 'pagenum' to physical page number
  * on backing swap device. For block device, this is a nop.
  */
-u32 map_backing_swap_page(struct ramzswap *rzs, u32 pagenum)
+static u32 map_backing_swap_page(struct ramzswap *rzs, u32 pagenum)
 {
 	u32 skip_pages, entries_per_page;
 	size_t delta, se_offset, skipped;
@@ -593,6 +596,10 @@ static void ramzswap_free_page(struct ramzswap *rzs, size_t index)
 	u32 offset = rzs->table[index].offset;

 	if (unlikely(!page)) {
+		/*
+		 * No memory is allocated for zero filled pages.
+		 * Simply clear zero page flag.
+		 */
 		if (rzs_test_flag(rzs, index, RZS_ZERO)) {
 			rzs_clear_flag(rzs, index, RZS_ZERO);
 			rzs_stat_dec(&rzs->stats.pages_zero);
@@ -664,7 +671,6 @@ static int handle_uncompressed_page(struct ramzswap *rzs, struct bio *bio)
 	return 0;
 }

-
 /*
  * Called when request page is not present in ramzswap.
  * Its either in backing swap device (if present) or
@@ -782,24 +788,15 @@ static int ramzswap_write(struct ramzswap *rzs, struct bio *bio)
 	page = bio->bi_io_vec[0].bv_page;
 	index = bio->bi_sector >> SECTORS_PER_PAGE_SHIFT;

-	src = rzs->compress_buffer;
-
 	/*
 	 * System swaps to same sector again when the stored page
 	 * is no longer referenced by any process. So, its now safe
 	 * to free the memory that was allocated for this page.
 	 */
-	if (rzs->table[index].page)
+	if (rzs->table[index].page || rzs_test_flag(rzs, index, RZS_ZERO))
 		ramzswap_free_page(rzs, index);

-	/*
-	 * No memory is allocated for zero filled pages.
-	 * Simply clear zero page flag.
-	 */
-	if (rzs_test_flag(rzs, index, RZS_ZERO)) {
-		rzs_stat_dec(&rzs->stats.pages_zero);
-		rzs_clear_flag(rzs, index, RZS_ZERO);
-	}
+	src = rzs->compress_buffer;

 	mutex_lock(&rzs->lock);

@@ -941,7 +938,6 @@ out:
 	return 0;
 }

-
 /*
  * Check if request is within bounds and page aligned.
  */
@@ -1069,6 +1065,7 @@ static void reset_device(struct ramzswap *rzs)
 			bd_release(rzs->backing_swap);
 		filp_close(rzs->swap_file, NULL);
 		rzs->backing_swap = NULL;
+		memset(rzs->backing_swap_name, 0, MAX_SWAP_NAME_LEN);
 	}

 	/* Reset stats */
@@ -1300,6 +1297,7 @@ static struct block_device_operations ramzswap_devops = {

 static int create_device(struct ramzswap *rzs, int device_id)
 {
+	int ret = 0;
 	mutex_init(&rzs->lock);
 	spin_lock_init(&rzs->stat64_lock);
 	INIT_LIST_HEAD(&rzs->backing_swap_extent_list);
@@ -1308,7 +1306,8 @@ static int create_device(struct ramzswap *rzs, int device_id)
 	if (!rzs->queue) {
 		pr_err("Error allocating disk queue for device %d\n",
 			device_id);
-		return 0;
+		ret = -ENOMEM;
+		goto out;
 	}

 	blk_queue_make_request(rzs->queue, ramzswap_make_request);
@@ -1320,7 +1319,8 @@ static int create_device(struct ramzswap *rzs, int device_id)
 		blk_cleanup_queue(rzs->queue);
 		pr_warning("Error allocating disk structure for device %d\n",
 			device_id);
-		return 0;
+		ret = -ENOMEM;
+		goto out;
 	}

 	rzs->disk->major = ramzswap_major;
@@ -1335,10 +1335,16 @@ static int create_device(struct ramzswap *rzs, int device_id)
 	 * or set equal to backing swap device (if provided)
 	 */
 	set_capacity(rzs->disk, 0);
+
+	blk_queue_physical_block_size(rzs->disk->queue, PAGE_SIZE);
+	blk_queue_logical_block_size(rzs->disk->queue, PAGE_SIZE);
+
 	add_disk(rzs->disk);

 	rzs->init_done = 0;
-	return 1;
+
+out:
+	return ret;
 }

 static void destroy_device(struct ramzswap *rzs)
@@ -1354,18 +1360,20 @@ static void destroy_device(struct ramzswap *rzs)

 static int __init ramzswap_init(void)
 {
-	int i, ret;
+	int ret, dev_id;

 	if (num_devices > max_num_devices) {
 		pr_warning("Invalid value for num_devices: %u\n",
 				num_devices);
-		return -EINVAL;
+		ret = -EINVAL;
+		goto out;
 	}

 	ramzswap_major = register_blkdev(0, "ramzswap");
 	if (ramzswap_major <= 0) {
 		pr_warning("Unable to get major number\n");
-		return -EBUSY;
+		ret = -EBUSY;
+		goto out;
 	}

 	if (!num_devices) {
@@ -1376,21 +1384,26 @@ static int __init ramzswap_init(void)
 	/* Allocate the device array and initialize each one */
 	pr_info("Creating %u devices ...\n", num_devices);
 	devices = kzalloc(num_devices * sizeof(struct ramzswap), GFP_KERNEL);
-	if (!devices)
-		goto out;
+	if (!devices) {
+		ret = -ENOMEM;
+		goto unregister;
+	}

-	for (i = 0; i < num_devices; i++)
-		if (!create_device(&devices[i], i)) {
-			ret = i;
+	for (dev_id = 0; dev_id < num_devices; dev_id++) {
+		if (create_device(&devices[dev_id], dev_id)) {
+			ret = -ENOMEM;
 			goto free_devices;
 		}
+	}
+
 	return 0;
+
 free_devices:
-	for (i = 0; i < ret; i++)
-		destroy_device(&devices[i]);
-out:
-	ret = -ENOMEM;
+	while (dev_id)
+		destroy_device(&devices[--dev_id]);
+unregister:
 	unregister_blkdev(ramzswap_major, "ramzswap");
+out:
 	return ret;
 }

diff --git a/drivers/staging/ramzswap/ramzswap_drv.h b/drivers/staging/ramzswap/ramzswap_drv.h
index 5b67222..c7e0e76 100644
--- a/drivers/staging/ramzswap/ramzswap_drv.h
+++ b/drivers/staging/ramzswap/ramzswap_drv.h
@@ -1,7 +1,7 @@
 /*
  * Compressed RAM based swap device
  *
- * Copyright (C) 2008, 2009  Nitin Gupta
+ * Copyright (C) 2008, 2009, 2010  Nitin Gupta
  *
  * This code is released using a dual license strategy: BSD/GPL
  * You can choose the licence that better fits your requirements.
diff --git a/drivers/staging/ramzswap/ramzswap_ioctl.h b/drivers/staging/ramzswap/ramzswap_ioctl.h
index 1edaeba..d26076d 100644
--- a/drivers/staging/ramzswap/ramzswap_ioctl.h
+++ b/drivers/staging/ramzswap/ramzswap_ioctl.h
@@ -1,7 +1,7 @@
 /*
  * Compressed RAM based swap device
  *
- * Copyright (C) 2008, 2009  Nitin Gupta
+ * Copyright (C) 2008, 2009, 2010  Nitin Gupta
  *
  * This code is released using a dual license strategy: BSD/GPL
  * You can choose the licence that better fits your requirements.
diff --git a/drivers/staging/ramzswap/xvmalloc.c b/drivers/staging/ramzswap/xvmalloc.c
index dabd266..3fdbb8a 100644
--- a/drivers/staging/ramzswap/xvmalloc.c
+++ b/drivers/staging/ramzswap/xvmalloc.c
@@ -1,7 +1,7 @@
 /*
  * xvmalloc memory allocator
  *
- * Copyright (C) 2008, 2009  Nitin Gupta
+ * Copyright (C) 2008, 2009, 2010  Nitin Gupta
  *
  * This code is released using a dual license strategy: BSD/GPL
  * You can choose the licence that better fits your requirements.
diff --git a/drivers/staging/ramzswap/xvmalloc.h b/drivers/staging/ramzswap/xvmalloc.h
index 010c6fe..5b1a81a 100644
--- a/drivers/staging/ramzswap/xvmalloc.h
+++ b/drivers/staging/ramzswap/xvmalloc.h
@@ -1,7 +1,7 @@
 /*
  * xvmalloc memory allocator
  *
- * Copyright (C) 2008, 2009  Nitin Gupta
+ * Copyright (C) 2008, 2009, 2010  Nitin Gupta
  *
  * This code is released using a dual license strategy: BSD/GPL
  * You can choose the licence that better fits your requirements.
diff --git a/drivers/staging/ramzswap/xvmalloc_int.h b/drivers/staging/ramzswap/xvmalloc_int.h
index 263c72d..e23ed5c 100644
--- a/drivers/staging/ramzswap/xvmalloc_int.h
+++ b/drivers/staging/ramzswap/xvmalloc_int.h
@@ -1,7 +1,7 @@
 /*
  * xvmalloc memory allocator
  *
- * Copyright (C) 2008, 2009  Nitin Gupta
+ * Copyright (C) 2008, 2009, 2010  Nitin Gupta
  *
  * This code is released using a dual license strategy: BSD/GPL
  * You can choose the licence that better fits your requirements.
--
1.6.2.5



* Re: [PATCH 3/3][resend] set block size to PAGE_SIZE and some cleanups
  2010-01-28  3:59   ` [PATCH 3/3][resend] " Nitin Gupta
@ 2010-01-28  4:23     ` Greg KH
  2010-01-28  5:37       ` Nitin Gupta
  0 siblings, 1 reply; 11+ messages in thread
From: Greg KH @ 2010-01-28  4:23 UTC (permalink / raw)
  To: Nitin Gupta; +Cc: Pekka Enberg, linux-kernel

On Thu, Jan 28, 2010 at 09:29:34AM +0530, Nitin Gupta wrote:
> [replace stat_*() with rzs_stat*()]
> ---
> 
> ramzswap block size needs to be set to PAGE_SIZE
> to avoid receiving any unaligned block I/O (happens
> during swapon time). These unaligned accesses produce
> unnecessary I/O errors, scaring users.
> 
> Also included some minor cleanups.

Such as?

Could you break this into 2 patches, one the block size stuff, and the
other the cleanups?  Remember, 1 patch does 1 thing.

thanks,

greg k-h


* Re: [PATCH 3/3][resend] set block size to PAGE_SIZE and some cleanups
  2010-01-28  4:23     ` Greg KH
@ 2010-01-28  5:37       ` Nitin Gupta
  2010-01-28  5:54         ` Greg KH
  0 siblings, 1 reply; 11+ messages in thread
From: Nitin Gupta @ 2010-01-28  5:37 UTC (permalink / raw)
  To: Greg KH; +Cc: Pekka Enberg, linux-kernel

On Thu, Jan 28, 2010 at 9:53 AM, Greg KH <greg@kroah.com> wrote:
> On Thu, Jan 28, 2010 at 09:29:34AM +0530, Nitin Gupta wrote:
>> [replace stat_*() with rzs_stat*()]
>> ---
>>
>> ramzswap block size needs to be set to PAGE_SIZE
>> to avoid receiving any unaligned block I/O (happens
>> during swapon time). These unaligned accesses produce
>> unnecessary I/O errors, scaring users.
>>
>> Also included some minor cleanups.
>
> Such as?
>
> Could you break this into 2 patches, one the block size stuff, and the
> other the cleanups?  Remember, 1 patch does 1 thing.
>

I thought a large number of patches was not desirable, so I merged a lot
of stuff into one. I will resend 'v2' patches with a proper breakup.

Thanks,
Nitin


* Re: [PATCH 3/3][resend] set block size to PAGE_SIZE and some cleanups
  2010-01-28  5:37       ` Nitin Gupta
@ 2010-01-28  5:54         ` Greg KH
  0 siblings, 0 replies; 11+ messages in thread
From: Greg KH @ 2010-01-28  5:54 UTC (permalink / raw)
  To: Nitin Gupta; +Cc: Pekka Enberg, linux-kernel

On Thu, Jan 28, 2010 at 11:07:20AM +0530, Nitin Gupta wrote:
> On Thu, Jan 28, 2010 at 9:53 AM, Greg KH <greg@kroah.com> wrote:
> > On Thu, Jan 28, 2010 at 09:29:34AM +0530, Nitin Gupta wrote:
> >> [replace stat_*() with rzs_stat*()]
> >> ---
> >>
> >> ramzswap block size needs to be set to PAGE_SIZE
> >> to avoid receiving any unaligned block I/O (happens
> >> during swapon time). These unaligned accesses produce
> >> unnecessary I/O errors, scaring users.
> >>
> >> Also included some minor cleanups.
> >
> > Such as?
> >
> > Could you break this into 2 patches, one the block size stuff, and the
> > other the cleanups?  Remember, 1 patch does 1 thing.
> >
> 
> I thought a large number of patches was not desirable, so I merged a lot
> of stuff into one. I will resend 'v2' patches with a proper breakup.

Large numbers of patches are not only desirable, they are encouraged!  Bring
them on; I get patch series all the time that start out [00/34], and
that's just fine.

And actually, doing things in small chunks is better if a problem is
found, as smaller patches are easier to review, and 'git bisect' makes
it trivial to narrow in on a specific patch.

thanks,

greg k-h


end of thread, other threads:[~2010-01-28  5:54 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-01-27 14:24 [PATCH 0/3][linux-next] ramzswap: bug fixes and some cleanups Nitin Gupta
2010-01-27 14:24 ` [PATCH 1/3] use lock for 64-bit stats Nitin Gupta
2010-01-27 15:10   ` Pekka Enberg
2010-01-27 15:51     ` Nitin Gupta
2010-01-28  2:19   ` [PATCH 1/3][resend] " Nitin Gupta
2010-01-27 14:24 ` [PATCH 2/3] flush block device before reset Nitin Gupta
2010-01-27 14:24 ` [PATCH 3/3] set block size to PAGE_SIZE and some cleanups Nitin Gupta
2010-01-28  3:59   ` [PATCH 3/3][resend] " Nitin Gupta
2010-01-28  4:23     ` Greg KH
2010-01-28  5:37       ` Nitin Gupta
2010-01-28  5:54         ` Greg KH
