* zram /proc/swaps accounting weirdness
@ 2012-12-07 23:57 Dan Magenheimer
  2012-12-10  3:15 ` Bob Liu
  2012-12-11  6:26 ` Minchan Kim
  0 siblings, 2 replies; 8+ messages in thread
From: Dan Magenheimer @ 2012-12-07 23:57 UTC (permalink / raw)
  To: Nitin Gupta, Minchan Kim; +Cc: Luigi Semenzato, linux-mm

While playing around with zcache+zram (see separate thread),
I was watching stats with "watch -d".

It appears from the code that /sys/block/num_writes only
increases, never decreases.  In my test, num_writes got up
to 1863.  /sys/block/disksize is 104857600.

I have two swap disks, one zram (pri=60), one real (pri=-1),
and as I watched /proc/swaps, the "Used" field grew rapidly
and reached the Size (102396k) of the zram swap, and then
the second swap disk (a physical disk partition) started being
used.  Then for a while, the Used field for both swap devices
was changing (up and down).

Can you explain how this could happen if num_writes never
exceeded 1863?  This may be harmless in the case where
the only swap on the system is zram, or it may indicate a bug
somewhere.

It looks like num_writes is counting bios, not pages...
which would imply the bios are potentially quite large
(and I'll guess they are of size SWAPFILE_CLUSTER which is
defined to be 256).  Do large clusters make sense with zram?
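
For reference, the write accounting in __zram_make_request() looks
roughly like this (paraphrased and trimmed from the staging driver, so
a sketch rather than the exact code); num_writes is bumped once per
bio, regardless of how many pages or bytes the bio carries:

static void __zram_make_request(struct zram *zram, struct bio *bio, int rw)
{
        switch (rw) {
        case READ:
                zram_stat64_inc(zram, &zram->stats.num_reads);
                break;
        case WRITE:
                zram_stat64_inc(zram, &zram->stats.num_writes); /* one per bio */
                break;
        }

        /* ... each segment (bio_vec) of the bio is then compressed
         * and stored individually ... */
}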

Late on a Friday so sorry if I am incomprehensible...

P.S. The corresponding stat for zcache indicates that
it failed 8852 stores, so I would have expected zram
to deal with no more than 8852 compressions.

* Re: zram /proc/swaps accounting weirdness
  2012-12-07 23:57 zram /proc/swaps accounting weirdness Dan Magenheimer
@ 2012-12-10  3:15 ` Bob Liu
  2012-12-12  0:22   ` Dan Magenheimer
  2012-12-11  6:26 ` Minchan Kim
  1 sibling, 1 reply; 8+ messages in thread
From: Bob Liu @ 2012-12-10  3:15 UTC (permalink / raw)
  To: Dan Magenheimer; +Cc: Nitin Gupta, Minchan Kim, Luigi Semenzato, linux-mm

Hi Dan,

On Sat, Dec 8, 2012 at 7:57 AM, Dan Magenheimer
<dan.magenheimer@oracle.com> wrote:
> While playing around with zcache+zram (see separate thread),
> I was watching stats with "watch -d".
>
> It appears from the code that /sys/block/num_writes only
> increases, never decreases.  In my test, num_writes got up
> to 1863.  /sys/block/disksize is 104857600.
>
> I have two swap disks, one zram (pri=60), one real (pri=-1),
> and as a I watched /proc/swaps, the "Used" field grew rapidly
> and reached the Size (102396k) of the zram swap, and then
> the second swap disk (a physical disk partition) started being
> used.  Then for awhile, the Used field for both swap devices
> was changing (up and down).
>
> Can you explain how this could happen if num_writes never
> exceeded 1863?  This may be harmless in the case where
> the only swap on the system is zram; or may indicate a bug
> somewhere?
>

Sorry, I didn't quite follow you here.
In my opinion, num_writes is the count of requests, not their size.
I think the total size should be the sum of bio->bi_size over those
requests, so even with num_writes at 1863 the actual amount written
may well exceed 102396k.
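
To make that concrete, here is a hypothetical instrumentation sketch;
the dbg_* names are made up and this is not code from the driver, just
the kind of counter pair that would settle the question:

#include <linux/bio.h>
#include <linux/types.h>

/* one counter for write bios, one for the bytes those bios carry */
static u64 dbg_num_writes;
static u64 dbg_tot_write_bytes;

static void dbg_account_write_bio(struct bio *bio)
{
        dbg_num_writes++;
        dbg_tot_write_bytes += bio->bi_size;    /* bytes in this bio */
}

If dbg_tot_write_bytes turns out to be much larger than
dbg_num_writes * PAGE_SIZE, then the bios really are big and 1863
writes could easily account for 102396k.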

> It looks like num_writes is counting bio's not pages...
> which would imply the bio's are potentially quite large
> (and I'll guess they are of size SWAPFILE_CLUSTER which is
> defined to be 256).  Do large clusters make sense with zram?
>
> Late on a Friday so sorry if I am incomprehensible...
>
> P.S. The corresponding stat for zcache indicates that
> it failed 8852 stores, so I would have expected zram
> to deal with no more than 8852 compressions.
>

-- 
Regards,
--Bob

* Re: zram /proc/swaps accounting weirdness
  2012-12-07 23:57 zram /proc/swaps accounting weirdness Dan Magenheimer
  2012-12-10  3:15 ` Bob Liu
@ 2012-12-11  6:26 ` Minchan Kim
  2012-12-12  0:34   ` Dan Magenheimer
  1 sibling, 1 reply; 8+ messages in thread
From: Minchan Kim @ 2012-12-11  6:26 UTC (permalink / raw)
  To: Dan Magenheimer; +Cc: Nitin Gupta, Luigi Semenzato, linux-mm

Hi Dan,

On Fri, Dec 07, 2012 at 03:57:08PM -0800, Dan Magenheimer wrote:
> While playing around with zcache+zram (see separate thread),
> I was watching stats with "watch -d".
> 
> It appears from the code that /sys/block/num_writes only
> increases, never decreases.  In my test, num_writes got up

That it never decreases is expected.

> to 1863.  /sys/block/disksize is 104857600.
> 
> I have two swap disks, one zram (pri=60), one real (pri=-1),
> and as a I watched /proc/swaps, the "Used" field grew rapidly
> and reached the Size (102396k) of the zram swap, and then
> the second swap disk (a physical disk partition) started being
> used.  Then for awhile, the Used field for both swap devices
> was changing (up and down).
> 
> Can you explain how this could happen if num_writes never
> exceeded 1863?  This may be harmless in the case where

Odd.
I tried to reproduce it with zram and a real swap device, without
zcache, but failed.  Does the problem happen only when zcache is
enabled as well?

> the only swap on the system is zram; or may indicate a bug
> somewhere?

> 
> It looks like num_writes is counting bio's not pages...
> which would imply the bio's are potentially quite large
> (and I'll guess they are of size SWAPFILE_CLUSTER which is
> defined to be 256).  Do large clusters make sense with zram?

swap_writepage() handles a single page, and zram_make_request() doesn't
use the plugging mechanism of block I/O, so every request for
swap-over-zram is one bio carrying one page.  So your problem might be
a BUG.
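
For reference, this is roughly how the swap path builds its bios
(paraphrased from mm/page_io.c of this time frame, error handling
trimmed, so treat it as a sketch rather than the exact code):

/* one bio, one bio_vec, one page */
static struct bio *get_swap_bio(gfp_t gfp_flags, struct page *page,
                                bio_end_io_t end_io)
{
        struct bio *bio;

        bio = bio_alloc(gfp_flags, 1);          /* room for one segment */
        if (bio) {
                bio->bi_sector = map_swap_page(page, &bio->bi_bdev);
                bio->bi_sector <<= PAGE_SHIFT - 9;
                bio->bi_io_vec[0].bv_page = page;
                bio->bi_io_vec[0].bv_len = PAGE_SIZE;
                bio->bi_io_vec[0].bv_offset = 0;
                bio->bi_vcnt = 1;
                bio->bi_size = PAGE_SIZE;       /* what zram sees per write */
                bio->bi_end_io = end_io;
        }
        return bio;
}

Every swap write that reaches zram should be one such PAGE_SIZE bio,
so num_writes and the number of pages written should match.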

> 
> Late on a Friday so sorry if I am incomprehensible...
> 
> P.S. The corresponding stat for zcache indicates that
> it failed 8852 stores, so I would have expected zram
> to deal with no more than 8852 compressions.
> 

-- 
Kind regards,
Minchan Kim

* RE: zram /proc/swaps accounting weirdness
  2012-12-10  3:15 ` Bob Liu
@ 2012-12-12  0:22   ` Dan Magenheimer
  0 siblings, 0 replies; 8+ messages in thread
From: Dan Magenheimer @ 2012-12-12  0:22 UTC (permalink / raw)
  To: Bob Liu; +Cc: Nitin Gupta, Minchan Kim, Luigi Semenzato, linux-mm

> From: Bob Liu [mailto:lliubbo@gmail.com]
> Subject: Re: zram /proc/swaps accounting weirdness
> 
> Hi Dan,
> 
> On Sat, Dec 8, 2012 at 7:57 AM, Dan Magenheimer
> <dan.magenheimer@oracle.com> wrote:
> > While playing around with zcache+zram (see separate thread),
> > I was watching stats with "watch -d".
> >
> > It appears from the code that /sys/block/num_writes only
> > increases, never decreases.  In my test, num_writes got up
> > to 1863.  /sys/block/disksize is 104857600.
> >
> > I have two swap disks, one zram (pri=60), one real (pri=-1),
> > and as a I watched /proc/swaps, the "Used" field grew rapidly
> > and reached the Size (102396k) of the zram swap, and then
> > the second swap disk (a physical disk partition) started being
> > used.  Then for awhile, the Used field for both swap devices
> > was changing (up and down).
> >
> > Can you explain how this could happen if num_writes never
> > exceeded 1863?  This may be harmless in the case where
> > the only swap on the system is zram; or may indicate a bug
> > somewhere?
> >
> 
> Sorry, I didn't get your idea here.
> In my opinion, num_writes is the count of request but not the size.
> I think the total size should be the sum of bio->bi_size,
> so if num_writes is 1863 the actual size may also exceed 102396k.

Hi Bob --

I added some debug code to record the total bio->bi_size (and some
other things) in sysfs.  No, bio->bi_size appears to always
(or nearly always) be PAGE_SIZE.

Debug patch attached below in case you are interested.
(Applies to 3.7 final.)

> > It looks like num_writes is counting bio's not pages...
> > which would imply the bio's are potentially quite large
> > (and I'll guess they are of size SWAPFILE_CLUSTER which is
> > defined to be 256).  Do large clusters make sense with zram?
> >
> > Late on a Friday so sorry if I am incomprehensible...
> >
> > P.S. The corresponding stat for zcache indicates that
> > it failed 8852 stores, so I would have expected zram
> > to deal with no more than 8852 compressions.

diff --git a/drivers/staging/zram/zram_drv.c b/drivers/staging/zram/zram_drv.c
index 6edefde..9679b02 100644
--- a/drivers/staging/zram/zram_drv.c
+++ b/drivers/staging/zram/zram_drv.c
@@ -160,7 +160,7 @@ static void zram_free_page(struct zram *zram, size_t index)
 
 	zram_stat64_sub(zram, &zram->stats.compr_size,
 			zram->table[index].size);
-	zram_stat_dec(&zram->stats.pages_stored);
+	zram_stat64_sub(zram, &zram->stats.pages_stored, 1);
 
 	zram->table[index].handle = 0;
 	zram->table[index].size = 0;
@@ -371,7 +371,8 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
 
 	/* Update stats */
 	zram_stat64_add(zram, &zram->stats.compr_size, clen);
-	zram_stat_inc(&zram->stats.pages_stored);
+	zram_stat64_inc(zram, &zram->stats.pages_stored);
+	zram_stat64_inc(zram, &zram->stats.cum_pages_stored);
 	if (clen <= PAGE_SIZE / 2)
 		zram_stat_inc(&zram->stats.good_compress);
 
@@ -419,6 +420,8 @@ static void __zram_make_request(struct zram *zram, struct bio *bio, int rw)
 		zram_stat64_inc(zram, &zram->stats.num_reads);
 		break;
 	case WRITE:
+		zram_stat64_add(zram, &zram->stats.tot_bio_bi_size,
+				bio->bi_size);
 		zram_stat64_inc(zram, &zram->stats.num_writes);
 		break;
 	}
@@ -428,6 +431,11 @@ static void __zram_make_request(struct zram *zram, struct bio *bio, int rw)
 
 	bio_for_each_segment(bvec, bio, i) {
 		int max_transfer_size = PAGE_SIZE - offset;
+		switch (rw) {
+		case WRITE:
+			zram_stat64_inc(zram, &zram->stats.num_segments);
+			break;
+		}
 
 		if (bvec->bv_len > max_transfer_size) {
 			/*
diff --git a/drivers/staging/zram/zram_drv.h b/drivers/staging/zram/zram_drv.h
index 572c0b1..c40fe50 100644
--- a/drivers/staging/zram/zram_drv.h
+++ b/drivers/staging/zram/zram_drv.h
@@ -76,12 +76,15 @@ struct zram_stats {
 	u64 compr_size;		/* compressed size of pages stored */
 	u64 num_reads;		/* failed + successful */
 	u64 num_writes;		/* --do-- */
+	u64 tot_bio_bi_size;	/* total bytes in write bios */
+	u64 num_segments;	/* write segments (bio_vecs) seen */
 	u64 failed_reads;	/* should NEVER! happen */
 	u64 failed_writes;	/* can happen when memory is too low */
 	u64 invalid_io;		/* non-page-aligned I/O requests */
 	u64 notify_free;	/* no. of swap slot free notifications */
 	u32 pages_zero;		/* no. of zero filled pages */
-	u32 pages_stored;	/* no. of pages currently stored */
+	u64 pages_stored;	/* no. of pages currently stored */
+	u64 cum_pages_stored;	/* pages cumulatively stored */
 	u32 good_compress;	/* % of pages with compression ratio<=50% */
 	u32 bad_compress;	/* % of pages with compression ratio>=75% */
 };
diff --git a/drivers/staging/zram/zram_sysfs.c b/drivers/staging/zram/zram_sysfs.c
index edb0ed4..2df62d4 100644
--- a/drivers/staging/zram/zram_sysfs.c
+++ b/drivers/staging/zram/zram_sysfs.c
@@ -136,6 +136,42 @@ static ssize_t num_writes_show(struct device *dev,
 		zram_stat64_read(zram, &zram->stats.num_writes));
 }
 
+static ssize_t tot_bio_bi_size_show(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	struct zram *zram = dev_to_zram(dev);
+
+	return sprintf(buf, "%llu\n",
+		zram_stat64_read(zram, &zram->stats.tot_bio_bi_size));
+}
+
+static ssize_t num_segments_show(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	struct zram *zram = dev_to_zram(dev);
+
+	return sprintf(buf, "%llu\n",
+		zram_stat64_read(zram, &zram->stats.num_segments));
+}
+
+static ssize_t pages_stored_show(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	struct zram *zram = dev_to_zram(dev);
+
+	return sprintf(buf, "%llu\n",
+		zram_stat64_read(zram, &zram->stats.pages_stored));
+}
+
+static ssize_t cum_pages_stored_show(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	struct zram *zram = dev_to_zram(dev);
+
+	return sprintf(buf, "%llu\n",
+		zram_stat64_read(zram, &zram->stats.cum_pages_stored));
+}
+
 static ssize_t invalid_io_show(struct device *dev,
 		struct device_attribute *attr, char *buf)
 {
@@ -198,6 +234,10 @@ static DEVICE_ATTR(initstate, S_IRUGO, initstate_show, NULL);
 static DEVICE_ATTR(reset, S_IWUSR, NULL, reset_store);
 static DEVICE_ATTR(num_reads, S_IRUGO, num_reads_show, NULL);
 static DEVICE_ATTR(num_writes, S_IRUGO, num_writes_show, NULL);
+static DEVICE_ATTR(tot_bio_bi_size, S_IRUGO, tot_bio_bi_size_show, NULL);
+static DEVICE_ATTR(num_segments, S_IRUGO, num_segments_show, NULL);
+static DEVICE_ATTR(pages_stored, S_IRUGO, pages_stored_show, NULL);
+static DEVICE_ATTR(cum_pages_stored, S_IRUGO, cum_pages_stored_show, NULL);
 static DEVICE_ATTR(invalid_io, S_IRUGO, invalid_io_show, NULL);
 static DEVICE_ATTR(notify_free, S_IRUGO, notify_free_show, NULL);
 static DEVICE_ATTR(zero_pages, S_IRUGO, zero_pages_show, NULL);
@@ -211,6 +251,10 @@ static struct attribute *zram_disk_attrs[] = {
 	&dev_attr_reset.attr,
 	&dev_attr_num_reads.attr,
 	&dev_attr_num_writes.attr,
+	&dev_attr_tot_bio_bi_size.attr,
+	&dev_attr_num_segments.attr,
+	&dev_attr_pages_stored.attr,
+	&dev_attr_cum_pages_stored.attr,
 	&dev_attr_invalid_io.attr,
 	&dev_attr_notify_free.attr,
 	&dev_attr_zero_pages.attr,


* RE: zram /proc/swaps accounting weirdness
  2012-12-11  6:26 ` Minchan Kim
@ 2012-12-12  0:34   ` Dan Magenheimer
  2012-12-12  1:12     ` Dan Magenheimer
  0 siblings, 1 reply; 8+ messages in thread
From: Dan Magenheimer @ 2012-12-12  0:34 UTC (permalink / raw)
  To: Minchan Kim; +Cc: Nitin Gupta, Luigi Semenzato, linux-mm

> From: Minchan Kim [mailto:minchan@kernel.org]
> Subject: Re: zram /proc/swaps accounting weirdness
> 
> Hi Dan,
> 
> On Fri, Dec 07, 2012 at 03:57:08PM -0800, Dan Magenheimer wrote:
> > While playing around with zcache+zram (see separate thread),
> > I was watching stats with "watch -d".
> >
> > It appears from the code that /sys/block/num_writes only
> > increases, never decreases.  In my test, num_writes got up
> 
> Never decreasement is natural.

Agreed.
 
> > to 1863.  /sys/block/disksize is 104857600.
> >
> > I have two swap disks, one zram (pri=60), one real (pri=-1),
> > and as a I watched /proc/swaps, the "Used" field grew rapidly
> > and reached the Size (102396k) of the zram swap, and then
> > the second swap disk (a physical disk partition) started being
> > used.  Then for awhile, the Used field for both swap devices
> > was changing (up and down).
> >
> > Can you explain how this could happen if num_writes never
> > exceeded 1863?  This may be harmless in the case where
> 
> Odd.
> I tried to reproduce it with zram and real swap device without
> zcache but failed. Does the problem happen only if enabling zcache
> together?

I also cannot reproduce it with only zram, without zcache;
I can only reproduce it with zcache+zram.  Since zcache will
only "fall through" to zram when the frontswap_store() call
in swap_writepage() fails, I wonder if in both cases swap_writepage()
is being called on large (e.g. SWAPFILE_CLUSTER-sized) blocks
of pages?  When zram-only, the entire block of pages always gets
sent to zram; with zcache, only a small randomly-positioned
fraction fails frontswap_store(), but the SWAPFILE_CLUSTER-sized
blocks have already been pre-reserved on the swap device and
so end up only partially filled?

Thanks,
Dan

* RE: zram /proc/swaps accounting weirdness
  2012-12-12  0:34   ` Dan Magenheimer
@ 2012-12-12  1:12     ` Dan Magenheimer
  2013-02-17  1:52       ` Simon Jeons
  0 siblings, 1 reply; 8+ messages in thread
From: Dan Magenheimer @ 2012-12-12  1:12 UTC (permalink / raw)
  To: Dan Magenheimer, Minchan Kim
  Cc: Nitin Gupta, Luigi Semenzato, linux-mm, Bob Liu

> From: Dan Magenheimer
> Subject: RE: zram /proc/swaps accounting weirdness
> 
> > > Can you explain how this could happen if num_writes never
> > > exceeded 1863?  This may be harmless in the case where
> >
> > Odd.
> > I tried to reproduce it with zram and real swap device without
> > zcache but failed. Does the problem happen only if enabling zcache
> > together?
> 
> I also cannot reproduce it with only zram, without zcache.
> I can only reproduce with zcache+zram.  Since zcache will
> only "fall through" to zram when the frontswap_store() call
> in swap_writepage() fails, I wonder if in both cases swap_writepage()
> is being called in large (e.g. SWAPFILE_CLUSTER-sized) blocks
> of pages?  When zram-only, the entire block of pages always gets
> sent to zram, but with zcache only a small randomly-positioned
> fraction fail frontswap_store(), but the SWAPFILE_CLUSTER-sized
> blocks have already been pre-reserved on the swap device and
> become only partially-filled?

Urk.  Never mind.  My bad.  When a swap page is compressed in
zcache, it still gets accounted for in the swap subsystem as an
"inuse" page of the backing swap device.  (Frontswap provides a
page-by-page "fronting store" for the swap device.)  That explains
why Used is so high for the "zram swap device" even though
zram has only compressed a fraction of the pages... the
remaining (much larger) number of pages have been compressed
by/in zcache.
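
For anyone following along, the swap_writepage() path of this time
frame looks roughly like the sketch below (paraphrased from
mm/page_io.c, error handling and accounting trimmed):

int swap_writepage(struct page *page, struct writeback_control *wbc)
{
        struct bio *bio;
        int ret = 0;

        if (try_to_free_swap(page)) {
                unlock_page(page);
                goto out;
        }
        if (frontswap_store(page) == 0) {
                /* zcache took the page: no bio is issued and zram never
                 * sees it, but the swap slot assigned to the page stays
                 * allocated, so /proc/swaps still reports it as Used. */
                set_page_writeback(page);
                unlock_page(page);
                end_page_writeback(page);
                goto out;
        }
        bio = get_swap_bio(GFP_NOIO, page, end_swap_bio_write);
        /* ... NULL check trimmed ... */
        submit_bio(WRITE, bio);
out:
        return ret;
}

So the Used column tracks allocated swap slots, not writes that
actually reach zram.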

Move along, there are no droids here. :-(

Dan

* Re: zram /proc/swaps accounting weirdness
  2012-12-12  1:12     ` Dan Magenheimer
@ 2013-02-17  1:52       ` Simon Jeons
  2013-02-18 17:54         ` Dan Magenheimer
  0 siblings, 1 reply; 8+ messages in thread
From: Simon Jeons @ 2013-02-17  1:52 UTC (permalink / raw)
  To: Dan Magenheimer
  Cc: Minchan Kim, Nitin Gupta, Luigi Semenzato, linux-mm, Bob Liu

On 12/12/2012 09:12 AM, Dan Magenheimer wrote:
>> From: Dan Magenheimer
>> Subject: RE: zram /proc/swaps accounting weirdness
>>
>>>> Can you explain how this could happen if num_writes never
>>>> exceeded 1863?  This may be harmless in the case where
>>> Odd.
>>> I tried to reproduce it with zram and real swap device without
>>> zcache but failed. Does the problem happen only if enabling zcache
>>> together?
>> I also cannot reproduce it with only zram, without zcache.
>> I can only reproduce with zcache+zram.  Since zcache will
>> only "fall through" to zram when the frontswap_store() call
>> in swap_writepage() fails, I wonder if in both cases swap_writepage()
>> is being called in large (e.g. SWAPFILE_CLUSTER-sized) blocks
>> of pages?  When zram-only, the entire block of pages always gets
>> sent to zram, but with zcache only a small randomly-positioned
>> fraction fail frontswap_store(), but the SWAPFILE_CLUSTER-sized
>> blocks have already been pre-reserved on the swap device and
>> become only partially-filled?
> Urk.  Never mind.  My bad.  When a swap page is compressed in
> zcache, it gets accounted in the swap subsystem as an "inuse"

Could you point out to me where this count is added in the swap subsystem?

> page for the backing swap device.  (Frontswap provides a
> page-by-page "fronting store" for the swap device.)  That explains
> why Used is so high for the "zram swap device" even though
> zram has only compressed a fraction of the pages... the
> remaining (much larger) number of pages have been compressed
> by/in zcache.
>
> Move along, there are no droids here. :-(
>
> Dan
>

* RE: zram /proc/swaps accounting weirdness
  2013-02-17  1:52       ` Simon Jeons
@ 2013-02-18 17:54         ` Dan Magenheimer
  0 siblings, 0 replies; 8+ messages in thread
From: Dan Magenheimer @ 2013-02-18 17:54 UTC (permalink / raw)
  To: Simon Jeons; +Cc: Minchan Kim, Nitin Gupta, Luigi Semenzato, linux-mm, Bob Liu

> From: Simon Jeons [mailto:simon.jeons@gmail.com]
> Subject: Re: zram /proc/swaps accounting weirdness
> 
> On 12/12/2012 09:12 AM, Dan Magenheimer wrote:
> >> From: Dan Magenheimer
> >> Subject: RE: zram /proc/swaps accounting weirdness
> >>
> >>>> Can you explain how this could happen if num_writes never
> >>>> exceeded 1863?  This may be harmless in the case where
> >>> Odd.
> >>> I tried to reproduce it with zram and real swap device without
> >>> zcache but failed. Does the problem happen only if enabling zcache
> >>> together?
> >> I also cannot reproduce it with only zram, without zcache.
> >> I can only reproduce with zcache+zram.  Since zcache will
> >> only "fall through" to zram when the frontswap_store() call
> >> in swap_writepage() fails, I wonder if in both cases swap_writepage()
> >> is being called in large (e.g. SWAPFILE_CLUSTER-sized) blocks
> >> of pages?  When zram-only, the entire block of pages always gets
> >> sent to zram, but with zcache only a small randomly-positioned
> >> fraction fail frontswap_store(), but the SWAPFILE_CLUSTER-sized
> >> blocks have already been pre-reserved on the swap device and
> >> become only partially-filled?
> > Urk.  Never mind.  My bad.  When a swap page is compressed in
> > zcache, it gets accounted in the swap subsystem as an "inuse"
> 
> Could you point out to me where add this count to swap subsystem?

The swap subsystem doesn't know whether the page is
held in zcache or has been written to the swap disk,
only that one of these happened.  So si->inuse_pages gets
incremented either way.
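
Concretely, paraphrasing the relevant fragments of mm/swapfile.c from
this time frame (excerpts only, not compilable on their own):

        /* in scan_swap_map(), when a free slot is handed out: */
        si->swap_map[offset] = usage;
        si->inuse_pages++;      /* what /proc/swaps reports as "Used" */

        /* in swap_entry_free(), when the last reference goes away: */
        p->swap_map[offset] = 0;
        p->inuse_pages--;

Whether the page ended up in zcache, in zram, or on the physical disk
makes no difference to that accounting.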

Thread overview: 8 messages
2012-12-07 23:57 zram /proc/swaps accounting weirdness Dan Magenheimer
2012-12-10  3:15 ` Bob Liu
2012-12-12  0:22   ` Dan Magenheimer
2012-12-11  6:26 ` Minchan Kim
2012-12-12  0:34   ` Dan Magenheimer
2012-12-12  1:12     ` Dan Magenheimer
2013-02-17  1:52       ` Simon Jeons
2013-02-18 17:54         ` Dan Magenheimer
