linux-kernel.vger.kernel.org archive mirror
* Re: [PATCHv5 0/8] zswap: compressed swap caching
       [not found] <1360780731-11708-1-git-send-email-sjenning@linux.vnet.ibm.com>
@ 2013-02-16  3:20 ` Ric Mason
  2013-02-18 19:37   ` Seth Jennings
       [not found] ` <1360780731-11708-5-git-send-email-sjenning@linux.vnet.ibm.com>
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 38+ messages in thread
From: Ric Mason @ 2013-02-16  3:20 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches,
	linux-mm, linux-kernel, devel

On 02/14/2013 02:38 AM, Seth Jennings wrote:
> Lots of changes this time around.  Hopefully I collected and acted on
> all the feedback.  I apologize ahead of time if I missed something.
> Please let me know if I did.
>
> Changelog:
>
> v5:
> * zsmalloc patch converted from promotion to "new code" (for review only,
>    see note in [1/8])
> * promote zsmalloc to mm/ instead of /lib
> * add more documentation everywhere
> * convert USE_PGTABLE_MAPPING to kconfig option, thanks to Minchan
> * s/flush/writeback/
> * #define pr_fmt() for formatting messages (Joe)
> * checkpatch fixups
> * lots of changes suggested by Minchan
>
> v4:
> * Added Acks (Minchan)
> * Separated flushing functionality into standalone patch
>    for easier review (Minchan)
> * fix comment on zswap enabled attribute (Minchan)
> * add TODO for dynamic mempool size (Minchan)
> * add check for NULL in zswap_free_page() (Minchan)
> * add missing zs_free() in error path (Minchan)
> * TODO: add comments for flushing/refcounting (Minchan)
>
> v3:
> * Dropped the zsmalloc patches from the set, except the promotion patch
>    which has been converted to a rename patch (vs full diff).  The dropped
>    patches have been Acked and are going into Greg's staging tree soon.
> * Separated [PATCHv2 7/9] into two patches since it makes changes for two
>    different reasons (Minchan)
> * Moved ZSWAP_MAX_OUTSTANDING_FLUSHES near the top in zswap.c (Rik)
> * Rebase to v3.8-rc5. linux-next is a little volatile with the
>    swapper_space per type changes which will affect this patchset.
> * TODO: Move some stats from debugfs to sysfs. Which ones? (Rik)
>
> v2:
> * Rename zswap_fs_* functions to zswap_frontswap_* to avoid
>    confusion with "filesystem"
> * Add comment about what the tree lock protects
> * Remove "#if 0" code (should have been done before)
> * Break out changes to existing swap code into separate patch
> * Fix blank line EOF warning on documentation file
> * Rebase to next-20130107
>
> Zswap Overview:
>
> Zswap is a lightweight compressed cache for swap pages. It takes
> pages that are in the process of being swapped out and attempts to
> compress them into a dynamically allocated RAM-based memory pool.
> If this process is successful, the writeback to the swap device is
> deferred and, in many cases, avoided completely.  This results in
> a significant I/O reduction and performance gains for systems that
> are swapping.
>
> The results of a kernel building benchmark indicate a
> runtime reduction of 53% and an I/O reduction of 76% with zswap vs normal
> swapping with a kernel build under heavy memory pressure (see
> Performance section for more).
>
> Some additional metrics regarding the performance improvements
> and I/O reductions that can be achieved using zswap, as measured
> by SPECjbb, are provided here:
>
> http://ibm.co/VCgHvM

I see this link.  You mentioned that "When a user enables zswap and the 
hardware accelerator, zswap simply passes the pages to be compressed or 
decompressed off to the accelerator instead of performing the work in 
software".  How does a user enable the hardware accelerator, then?  Is 
there an option in UEFI, or somewhere else?

> These results include runs on x86 and new results on Power7+ with
> hardware compression acceleration.
>
> Of particular note is that zswap is able to evict pages from the compressed
> cache, on an LRU basis, to the backing swap device when the compressed pool
> reaches its size limit or the pool is unable to obtain additional pages
> from the buddy allocator.  This eviction functionality had been identified
> as a requirement in prior community discussions.
>
> Patchset Structure:
> 1-2: add zsmalloc and documentation
> 3:   add atomic_t get/set to debugfs
> 4:   add basic zswap functionality
> 4,5: changes to existing swap code for zswap
> 6,7: add zswap writeback support
> 8:   add zswap documentation
>
> Rationale:
>
> Zswap provides compressed swap caching that basically trades CPU cycles
> for reduced swap I/O.  This trade-off can result in a significant
> performance improvement, as reads from/writes to the compressed
> cache are almost always faster than reading from a swap device,
> which incurs the latency of an asynchronous block I/O read.
>
> Some potential benefits:
> * Desktop/laptop users with limited RAM capacities can mitigate the
>      performance impact of swapping.
> * Overcommitted guests that share a common I/O resource can
>      dramatically reduce their swap I/O pressure, avoiding heavy-handed
>      I/O throttling by the hypervisor.  This allows more work
>      to get done with less impact on the guest workload and guests
>      sharing the I/O subsystem.
> * Users with SSDs as swap devices can extend the life of the device by
>      drastically reducing life-shortening writes.
>
> Compressed swap is also provided in zcache, along with page cache
> compression and RAM clustering through RAMster. Zswap seeks to deliver
> the benefit of swap compression to users in a discrete function.
> This design decision is akin to the Unix design philosophy of doing one
> thing well; it leaves file cache compression and other features
> for separate code.
>
> Design:
>
> Zswap receives pages for compression through the Frontswap API and
> is able to evict pages from its own compressed pool on an LRU basis
> and write them back to the backing swap device in the case that the
> compressed pool is full or unable to secure additional pages from
> the buddy allocator.
>
> Zswap makes use of zsmalloc for managing the compressed memory
> pool.  This is because zsmalloc is specifically designed to minimize
> fragmentation on large (> PAGE_SIZE/2) allocation sizes.  Each
> allocation in zsmalloc is not directly accessible by address.
> Rather, a handle is returned by the allocation routine and that handle
> must be mapped before being accessed.  The compressed memory pool grows
> on demand and shrinks as compressed pages are freed.  The pool is
> not preallocated.
>
> When a swap page is passed from frontswap to zswap, zswap maintains
> a mapping of the swap entry, a combination of the swap type and swap
> offset, to the zsmalloc handle that references that compressed swap
> page.  This mapping is achieved with a red-black tree per swap type.
> The swap offset is the search key for the tree nodes.
>
> Zswap seeks to be simple in its policies.  Sysfs attributes allow for
> two user controlled policies:
> * max_compression_ratio - Maximum compression ratio, as a percentage,
>      for an acceptable compressed page. Any page that does not compress
>      by at least this ratio will be rejected.
> * max_pool_percent - The maximum percentage of memory that the compressed
>      pool can occupy.
>
> To enable zswap, the "enabled" attribute must be set to 1 at boot time.
>
> Zswap allows the compressor to be selected at kernel boot time by
> setting the "compressor" attribute.  The default compressor is lzo.
>
> A debugfs interface is provided for various statistics about pool size,
> number of pages stored, and various counters for the reasons pages
> are rejected.
>
> Performance, Kernel Building:
>
> Setup
> ========
> Gentoo w/ kernel v3.7-rc7
> Quad-core i5-2500 @ 3.3GHz
> 512MB DDR3 1600MHz (limited with mem=512m on boot)
> Filesystem and swap on 80GB HDD (about 58MB/s with hdparm -t)
> majflt are major page faults reported by the time command
> pswpin/out is the delta of pswpin/out from /proc/vmstat before and after
> the make -jN
>
> Summary
> ========
> * Zswap reduces I/O and improves performance at all swap pressure levels.
>
> * Under heavy swapping at 24 threads, zswap reduced I/O by 76%, saving
>    over 1.5GB of I/O, and cut runtime in half.
>
> Details
> ========
> I/O (in pages)
> 	base				zswap				change	change
> N	pswpin	pswpout	majflt	I/O sum	pswpin	pswpout	majflt	I/O sum	%I/O	MB
> 8	1	335	291	627	0	0	249	249	-60%	1
> 12	3688	14315	5290	23293	123	860	5954	6937	-70%	64
> 16	12711	46179	16803	75693	2936	7390	46092	56418	-25%	75
> 20	42178	133781	49898	225857	9460	28382	92951	130793	-42%	371
> 24	96079	357280	105242	558601	7719	18484	109309	135512	-76%	1653
>
> Runtime (in seconds)
> N	base	zswap	%change
> 8	107	107	0%
> 12	128	110	-14%
> 16	191	179	-6%
> 20	371	240	-35%
> 24	570	267	-53%
>
> %CPU utilization (out of 400% on 4 cpus)
> N	base	zswap	%change
> 8	317	319	1%
> 12	267	311	16%
> 16	179	191	7%
> 20	94	143	52%
> 24	60	128	113%
>
> Seth Jennings (8):
>    zsmalloc: add to mm/
>    zsmalloc: add documentation
>    debugfs: add get/set for atomic types
>    zswap: add to mm/
>    mm: break up swap_writepage() for frontswap backends
>    mm: allow for outstanding swap writeback accounting
>    zswap: add swap page writeback support
>    zswap: add documentation
>
>   Documentation/vm/zsmalloc.txt |   68 +++
>   Documentation/vm/zswap.txt    |   82 +++
>   fs/debugfs/file.c             |   42 ++
>   include/linux/debugfs.h       |    2 +
>   include/linux/swap.h          |    4 +
>   include/linux/zsmalloc.h      |   49 ++
>   mm/Kconfig                    |   39 ++
>   mm/Makefile                   |    2 +
>   mm/page_io.c                  |   22 +-
>   mm/swap_state.c               |    2 +-
>   mm/zsmalloc.c                 | 1124 ++++++++++++++++++++++++++++++++++++++++
>   mm/zswap.c                    | 1148 +++++++++++++++++++++++++++++++++++++++++
>   12 files changed, 2578 insertions(+), 6 deletions(-)
>   create mode 100644 Documentation/vm/zsmalloc.txt
>   create mode 100644 Documentation/vm/zswap.txt
>   create mode 100644 include/linux/zsmalloc.h
>   create mode 100644 mm/zsmalloc.c
>   create mode 100644 mm/zswap.c
>



* Re: [PATCHv5 1/8] zsmalloc: add to mm/
       [not found] ` <1360780731-11708-2-git-send-email-sjenning@linux.vnet.ibm.com>
@ 2013-02-16  3:26   ` Ric Mason
  2013-02-18 19:04     ` Seth Jennings
  2013-02-19  9:18   ` Joonsoo Kim
  1 sibling, 1 reply; 38+ messages in thread
From: Ric Mason @ 2013-02-16  3:26 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches,
	linux-mm, linux-kernel, devel

On 02/14/2013 02:38 AM, Seth Jennings wrote:
> =========
> DO NOT MERGE, FOR REVIEW ONLY
> This patch introduces zsmalloc as new code, however, it already
> exists in drivers/staging.  In order to build successfully, you
> must select EITHER the drivers/staging version OR this version.
> Once zsmalloc is reviewed in this format (and hopefully accepted),
> I will create a new patchset that properly promotes zsmalloc from
> staging.
> =========
>
> This patchset introduces a new slab-based memory allocator,
> zsmalloc, for storing compressed pages.  It is designed for
> low fragmentation and high allocation success rate on
> large (but <= PAGE_SIZE) object allocations.
>
> zsmalloc differs from the kernel slab allocator in two primary
> ways to achieve these design goals.
>
> zsmalloc never requires high order page allocations to back
> slabs, or "size classes" in zsmalloc terms. Instead it allows
> multiple single-order pages to be stitched together into a
> "zspage" which backs the slab.  This allows for higher allocation
> success rate under memory pressure.
>
> Also, zsmalloc allows objects to span page boundaries within the
> zspage.  This allows for lower fragmentation than could be had
> with the kernel slab allocator for objects between PAGE_SIZE/2
> and PAGE_SIZE.  With the kernel slab allocator, if a page compresses
> to 60% of its original size, the memory savings gained through
> compression are lost in fragmentation because another object of
> the same size can't be stored in the leftover space.

Why do you say so?  The slab/slub allocators both have policies to set 
up a suitable page order for each slab cache in order to reduce 
fragmentation.  Which code shows that a slab object can't span page 
boundaries?  Could you point it out to me?

>
> This ability to span pages results in zsmalloc allocations not being
> directly addressable by the user.  The user is given a
> non-dereferenceable handle in response to an allocation request.
> That handle must be mapped, using zs_map_object(), which returns
> a pointer to the mapped region that can be used.  The mapping is
> necessary since the object data may reside in two different
> noncontiguous pages.
>
> zsmalloc fulfills the allocation needs for zram and zswap.
>
> Acked-by: Nitin Gupta <ngupta@vflare.org>
> Acked-by: Minchan Kim <minchan@kernel.org>
> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
> ---
>   include/linux/zsmalloc.h |   49 ++
>   mm/Kconfig               |   24 +
>   mm/Makefile              |    1 +
>   mm/zsmalloc.c            | 1124 ++++++++++++++++++++++++++++++++++++++++++++++
>   4 files changed, 1198 insertions(+)
>   create mode 100644 include/linux/zsmalloc.h
>   create mode 100644 mm/zsmalloc.c
>
> diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
> new file mode 100644
> index 0000000..eb6efb6
> --- /dev/null
> +++ b/include/linux/zsmalloc.h
> @@ -0,0 +1,49 @@
> +/*
> + * zsmalloc memory allocator
> + *
> + * Copyright (C) 2011  Nitin Gupta
> + *
> + * This code is released using a dual license strategy: BSD/GPL
> + * You can choose the license that better fits your requirements.
> + *
> + * Released under the terms of 3-clause BSD License
> + * Released under the terms of GNU General Public License Version 2.0
> + */
> +
> +#ifndef _ZS_MALLOC_H_
> +#define _ZS_MALLOC_H_
> +
> +#include <linux/types.h>
> +#include <linux/mm_types.h>
> +
> +/*
> + * zsmalloc mapping modes
> + *
> + * NOTE: These only make a difference when a mapped object spans pages
> +*/
> +enum zs_mapmode {
> +	ZS_MM_RW, /* normal read-write mapping */
> +	ZS_MM_RO, /* read-only (no copy-out at unmap time) */
> +	ZS_MM_WO /* write-only (no copy-in at map time) */
> +};
> +
> +struct zs_ops {
> +	struct page * (*alloc)(gfp_t);
> +	void (*free)(struct page *);
> +};
> +
> +struct zs_pool;
> +
> +struct zs_pool *zs_create_pool(gfp_t flags, struct zs_ops *ops);
> +void zs_destroy_pool(struct zs_pool *pool);
> +
> +unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t flags);
> +void zs_free(struct zs_pool *pool, unsigned long obj);
> +
> +void *zs_map_object(struct zs_pool *pool, unsigned long handle,
> +			enum zs_mapmode mm);
> +void zs_unmap_object(struct zs_pool *pool, unsigned long handle);
> +
> +u64 zs_get_total_size_bytes(struct zs_pool *pool);
> +
> +#endif
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 278e3ab..25b8f38 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -446,3 +446,27 @@ config FRONTSWAP
>   	  and swap data is stored as normal on the matching swap device.
>   
>   	  If unsure, say Y to enable frontswap.
> +
> +config ZSMALLOC
> +	tristate "Memory allocator for compressed pages"
> +	default n
> +	help
> +	  zsmalloc is a slab-based memory allocator designed to store
> +	  compressed RAM pages.  zsmalloc uses virtual memory mapping
> +	  in order to reduce fragmentation.  However, this results in a
> +	  non-standard allocator interface where a handle, not a pointer, is
> +	  returned by an alloc().  This handle must be mapped in order to
> +	  access the allocated space.
> +
> +config PGTABLE_MAPPING
> +	bool "Use page table mapping to access object in zsmalloc"
> +	depends on ZSMALLOC
> +	help
> +	  By default, zsmalloc uses a copy-based object mapping method to
> +	  access allocations that span two pages. However, if a particular
> +	  architecture (e.g., ARM) performs VM mapping faster than copying,
> +	  then you should select this. This causes zsmalloc to use page table
> +	  mapping rather than copying for object mapping.
> +
> +	  You can check speed with zsmalloc benchmark[1].
> +	  [1] https://github.com/spartacus06/zsmalloc
> diff --git a/mm/Makefile b/mm/Makefile
> index 3a46287..0f6ef0a 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -58,3 +58,4 @@ obj-$(CONFIG_DEBUG_KMEMLEAK) += kmemleak.o
>   obj-$(CONFIG_DEBUG_KMEMLEAK_TEST) += kmemleak-test.o
>   obj-$(CONFIG_CLEANCACHE) += cleancache.o
>   obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
> +obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> new file mode 100644
> index 0000000..34378ef
> --- /dev/null
> +++ b/mm/zsmalloc.c
> @@ -0,0 +1,1124 @@
> +/*
> + * zsmalloc memory allocator
> + *
> + * Copyright (C) 2011  Nitin Gupta
> + *
> + * This code is released using a dual license strategy: BSD/GPL
> + * You can choose the license that better fits your requirements.
> + *
> + * Released under the terms of 3-clause BSD License
> + * Released under the terms of GNU General Public License Version 2.0
> + */
> +
> +
> +/*
> + * This allocator is designed for use with zcache and zram. Thus, the
> + * allocator is supposed to work well under low memory conditions. In
> + * particular, it never attempts higher order page allocation which is
> + * very likely to fail under memory pressure. On the other hand, if we
> + * just use single (0-order) pages, it would suffer from very high
> + * fragmentation -- any object of size PAGE_SIZE/2 or larger would occupy
> + * an entire page. This was one of the major issues with its predecessor
> + * (xvmalloc).
> + *
> + * To overcome these issues, zsmalloc allocates a bunch of 0-order pages
> + * and links them together using various 'struct page' fields. These linked
> + * pages act as a single higher-order page i.e. an object can span 0-order
> + * page boundaries. The code refers to these linked pages as a single entity
> + * called zspage.
> + *
> + * For simplicity, zsmalloc can only allocate objects of size up to PAGE_SIZE
> + * since this satisfies the requirements of all its current users (in the
> + * worst case, page is incompressible and is thus stored "as-is" i.e. in
> + * uncompressed form). For allocation requests larger than this size, failure
> + * is returned (see zs_malloc).
> + *
> + * Additionally, zs_malloc() does not return a dereferenceable pointer.
> + * Instead, it returns an opaque handle (unsigned long) which encodes actual
> + * location of the allocated object. The reason for this indirection is that
> + * zsmalloc does not keep zspages permanently mapped since that would cause
> + * issues on 32-bit systems where the VA region for kernel space mappings
> + * is very small. So, before using the allocated memory, the object has to
> + * be mapped using zs_map_object() to get a usable pointer and subsequently
> + * unmapped using zs_unmap_object().
> + *
> + * Following is how we use various fields and flags of underlying
> + * struct page(s) to form a zspage.
> + *
> + * Usage of struct page fields:
> + *	page->first_page: points to the first component (0-order) page
> + *	page->index (union with page->freelist): offset of the first object
> + *		starting in this page. For the first page, this is
> + *		always 0, so we use this field (aka freelist) to point
> + *		to the first free object in zspage.
> + *	page->lru: links together all component pages (except the first page)
> + *		of a zspage
> + *
> + *	For _first_ page only:
> + *
> + *	page->private (union with page->first_page): refers to the
> + *		component page after the first page
> + *	page->freelist: points to the first free object in zspage.
> + *		Free objects are linked together using in-place
> + *		metadata.
> + *	page->objects: maximum number of objects we can store in this
> + *		zspage (class->zspage_order * PAGE_SIZE / class->size)
> + *	page->lru: links together first pages of various zspages.
> + *		Basically forming list of zspages in a fullness group.
> + *	page->mapping: class index and fullness group of the zspage
> + *
> + * Usage of struct page flags:
> + *	PG_private: identifies the first component page
> + *	PG_private2: identifies the last component page
> + *
> + */
> +
> +#ifdef CONFIG_ZSMALLOC_DEBUG
> +#define DEBUG
> +#endif
> +
> +#include <linux/module.h>
> +#include <linux/kernel.h>
> +#include <linux/bitops.h>
> +#include <linux/errno.h>
> +#include <linux/highmem.h>
> +#include <linux/init.h>
> +#include <linux/string.h>
> +#include <linux/slab.h>
> +#include <asm/tlbflush.h>
> +#include <asm/pgtable.h>
> +#include <linux/cpumask.h>
> +#include <linux/cpu.h>
> +#include <linux/vmalloc.h>
> +#include <linux/hardirq.h>
> +#include <linux/spinlock.h>
> +#include <linux/types.h>
> +
> +#include <linux/zsmalloc.h>
> +
> +/*
> + * This must be a power of 2 and greater than or equal to sizeof(link_free).
> + * These two conditions ensure that any 'struct link_free' itself doesn't
> + * span more than 1 page which avoids complex case of mapping 2 pages simply
> + * to restore link_free pointer values.
> + */
> +#define ZS_ALIGN		8
> +
> +/*
> + * A single 'zspage' is composed of up to 2^N discontiguous 0-order (single)
> + * pages. ZS_MAX_ZSPAGE_ORDER defines upper limit on N.
> + */
> +#define ZS_MAX_ZSPAGE_ORDER 2
> +#define ZS_MAX_PAGES_PER_ZSPAGE (_AC(1, UL) << ZS_MAX_ZSPAGE_ORDER)
> +
> +/*
> + * Object location (<PFN>, <obj_idx>) is encoded
> + * as a single (unsigned long) handle value.
> + *
> + * Note that object index <obj_idx> is relative to system
> + * page <PFN> it is stored in, so for each sub-page belonging
> + * to a zspage, obj_idx starts with 0.
> + *
> + * This is made more complicated by various memory models and PAE.
> + */
> +
> +#ifndef MAX_PHYSMEM_BITS
> +#ifdef CONFIG_HIGHMEM64G
> +#define MAX_PHYSMEM_BITS 36
> +#else /* !CONFIG_HIGHMEM64G */
> +/*
> + * If this definition of MAX_PHYSMEM_BITS is used, OBJ_INDEX_BITS will just
> + * be PAGE_SHIFT
> + */
> +#define MAX_PHYSMEM_BITS BITS_PER_LONG
> +#endif
> +#endif
> +#define _PFN_BITS		(MAX_PHYSMEM_BITS - PAGE_SHIFT)
> +#define OBJ_INDEX_BITS	(BITS_PER_LONG - _PFN_BITS)
> +#define OBJ_INDEX_MASK	((_AC(1, UL) << OBJ_INDEX_BITS) - 1)
> +
> +#define MAX(a, b) ((a) >= (b) ? (a) : (b))
> +/* ZS_MIN_ALLOC_SIZE must be multiple of ZS_ALIGN */
> +#define ZS_MIN_ALLOC_SIZE \
> +	MAX(32, (ZS_MAX_PAGES_PER_ZSPAGE << PAGE_SHIFT >> OBJ_INDEX_BITS))
> +#define ZS_MAX_ALLOC_SIZE	PAGE_SIZE
> +
> +/*
> + * On systems with 4K page size, this gives 254 size classes! There is a
> + * trade-off here:
> + *  - Large number of size classes is potentially wasteful as free pages are
> + *    spread across these classes
> + *  - Small number of size classes causes large internal fragmentation
> + *  - Probably it's better to use specific size classes (empirically
> + *    determined). NOTE: all those class sizes must be set as multiple of
> + *    ZS_ALIGN to make sure link_free itself never has to span 2 pages.
> + *
> + *  ZS_MIN_ALLOC_SIZE and ZS_SIZE_CLASS_DELTA must be multiple of ZS_ALIGN
> + *  (reason above)
> + */
> +#define ZS_SIZE_CLASS_DELTA	(PAGE_SIZE >> 8)
> +#define ZS_SIZE_CLASSES		((ZS_MAX_ALLOC_SIZE - ZS_MIN_ALLOC_SIZE) / \
> +					ZS_SIZE_CLASS_DELTA + 1)
> +
> +/*
> + * We do not maintain any list for completely empty or full pages
> + */
> +enum fullness_group {
> +	ZS_ALMOST_FULL,
> +	ZS_ALMOST_EMPTY,
> +	_ZS_NR_FULLNESS_GROUPS,
> +
> +	ZS_EMPTY,
> +	ZS_FULL
> +};
> +
> +/*
> + * We assign a page to ZS_ALMOST_EMPTY fullness group when:
> + *	n <= N / f, where
> + * n = number of allocated objects
> + * N = total number of objects zspage can store
> + * f = 1/fullness_threshold_frac
> + *
> + * Similarly, we assign zspage to:
> + *	ZS_ALMOST_FULL	when n > N / f
> + *	ZS_EMPTY	when n == 0
> + *	ZS_FULL		when n == N
> + *
> + * (see: fix_fullness_group())
> + */
> +static const int fullness_threshold_frac = 4;
> +
> +struct size_class {
> +	/*
> +	 * Size of objects stored in this class. Must be multiple
> +	 * of ZS_ALIGN.
> +	 */
> +	int size;
> +	unsigned int index;
> +
> +	/* Number of PAGE_SIZE sized pages to combine to form a 'zspage' */
> +	int pages_per_zspage;
> +
> +	spinlock_t lock;
> +
> +	/* stats */
> +	u64 pages_allocated;
> +
> +	struct page *fullness_list[_ZS_NR_FULLNESS_GROUPS];
> +};
> +
> +/*
> + * Placed within free objects to form a singly linked list.
> + * For every zspage, first_page->freelist gives head of this list.
> + *
> + * This must be power of 2 and less than or equal to ZS_ALIGN
> + */
> +struct link_free {
> +	/* Handle of next free chunk (encodes <PFN, obj_idx>) */
> +	void *next;
> +};
> +
> +struct zs_pool {
> +	struct size_class size_class[ZS_SIZE_CLASSES];
> +
> +	struct zs_ops *ops;
> +};
> +
> +/*
> + * A zspage's class index and fullness group
> + * are encoded in its (first)page->mapping
> + */
> +#define CLASS_IDX_BITS	28
> +#define FULLNESS_BITS	4
> +#define CLASS_IDX_MASK	((1 << CLASS_IDX_BITS) - 1)
> +#define FULLNESS_MASK	((1 << FULLNESS_BITS) - 1)
> +
> +struct mapping_area {
> +#ifdef CONFIG_PGTABLE_MAPPING
> +	struct vm_struct *vm; /* vm area for mapping objects that span pages */
> +#else
> +	char *vm_buf; /* copy buffer for objects that span pages */
> +#endif
> +	char *vm_addr; /* address of kmap_atomic()'ed pages */
> +	enum zs_mapmode vm_mm; /* mapping mode */
> +};
> +
> +/* default page alloc/free ops */
> +struct page *zs_alloc_page(gfp_t flags)
> +{
> +	return alloc_page(flags);
> +}
> +
> +void zs_free_page(struct page *page)
> +{
> +	__free_page(page);
> +}
> +
> +struct zs_ops zs_default_ops = {
> +	.alloc = zs_alloc_page,
> +	.free = zs_free_page
> +};
> +
> +/* per-cpu VM mapping areas for zspage accesses that cross page boundaries */
> +static DEFINE_PER_CPU(struct mapping_area, zs_map_area);
> +
> +static int is_first_page(struct page *page)
> +{
> +	return PagePrivate(page);
> +}
> +
> +static int is_last_page(struct page *page)
> +{
> +	return PagePrivate2(page);
> +}
> +
> +static void get_zspage_mapping(struct page *page, unsigned int *class_idx,
> +				enum fullness_group *fullness)
> +{
> +	unsigned long m;
> +	BUG_ON(!is_first_page(page));
> +
> +	m = (unsigned long)page->mapping;
> +	*fullness = m & FULLNESS_MASK;
> +	*class_idx = (m >> FULLNESS_BITS) & CLASS_IDX_MASK;
> +}
> +
> +static void set_zspage_mapping(struct page *page, unsigned int class_idx,
> +				enum fullness_group fullness)
> +{
> +	unsigned long m;
> +	BUG_ON(!is_first_page(page));
> +
> +	m = ((class_idx & CLASS_IDX_MASK) << FULLNESS_BITS) |
> +			(fullness & FULLNESS_MASK);
> +	page->mapping = (struct address_space *)m;
> +}
> +
> +/*
> + * zsmalloc divides the pool into various size classes where each
> + * class maintains a list of zspages where each zspage is divided
> + * into equal sized chunks. Each allocation falls into one of these
> + * classes depending on its size. This function returns index of the
> + * size class which has a chunk size big enough to hold the given size.
> + */
> +static int get_size_class_index(int size)
> +{
> +	int idx = 0;
> +
> +	if (likely(size > ZS_MIN_ALLOC_SIZE))
> +		idx = DIV_ROUND_UP(size - ZS_MIN_ALLOC_SIZE,
> +				ZS_SIZE_CLASS_DELTA);
> +
> +	return idx;
> +}
> +
> +/*
> + * For each size class, zspages are divided into different groups
> + * depending on how "full" they are. This was done so that we could
> + * easily find empty or nearly empty zspages when we try to shrink
> + * the pool (not yet implemented). This function returns fullness
> + * status of the given page.
> + */
> +static enum fullness_group get_fullness_group(struct page *page)
> +{
> +	int inuse, max_objects;
> +	enum fullness_group fg;
> +	BUG_ON(!is_first_page(page));
> +
> +	inuse = page->inuse;
> +	max_objects = page->objects;
> +
> +	if (inuse == 0)
> +		fg = ZS_EMPTY;
> +	else if (inuse == max_objects)
> +		fg = ZS_FULL;
> +	else if (inuse <= max_objects / fullness_threshold_frac)
> +		fg = ZS_ALMOST_EMPTY;
> +	else
> +		fg = ZS_ALMOST_FULL;
> +
> +	return fg;
> +}
> +
> +/*
> + * Each size class maintains various freelists and zspages are assigned
> + * to one of these freelists based on the number of live objects they
> + * have. This function inserts the given zspage into the freelist
> + * identified by <class, fullness_group>.
> + */
> +static void insert_zspage(struct page *page, struct size_class *class,
> +				enum fullness_group fullness)
> +{
> +	struct page **head;
> +
> +	BUG_ON(!is_first_page(page));
> +
> +	if (fullness >= _ZS_NR_FULLNESS_GROUPS)
> +		return;
> +
> +	head = &class->fullness_list[fullness];
> +	if (*head)
> +		list_add_tail(&page->lru, &(*head)->lru);
> +
> +	*head = page;
> +}
> +
> +/*
> + * This function removes the given zspage from the freelist identified
> + * by <class, fullness_group>.
> + */
> +static void remove_zspage(struct page *page, struct size_class *class,
> +				enum fullness_group fullness)
> +{
> +	struct page **head;
> +
> +	BUG_ON(!is_first_page(page));
> +
> +	if (fullness >= _ZS_NR_FULLNESS_GROUPS)
> +		return;
> +
> +	head = &class->fullness_list[fullness];
> +	BUG_ON(!*head);
> +	if (list_empty(&(*head)->lru))
> +		*head = NULL;
> +	else if (*head == page)
> +		*head = (struct page *)list_entry((*head)->lru.next,
> +					struct page, lru);
> +
> +	list_del_init(&page->lru);
> +}
> +
> +/*
> + * Each size class maintains zspages in different fullness groups depending
> + * on the number of live objects they contain. When allocating or freeing
> + * objects, the fullness status of the page can change, say, from ALMOST_FULL
> + * to ALMOST_EMPTY when freeing an object. This function checks if such
> + * a status change has occurred for the given page and accordingly moves the
> + * page from the freelist of the old fullness group to that of the new
> + * fullness group.
> + */
> +static enum fullness_group fix_fullness_group(struct zs_pool *pool,
> +						struct page *page)
> +{
> +	int class_idx;
> +	struct size_class *class;
> +	enum fullness_group currfg, newfg;
> +
> +	BUG_ON(!is_first_page(page));
> +
> +	get_zspage_mapping(page, &class_idx, &currfg);
> +	newfg = get_fullness_group(page);
> +	if (newfg == currfg)
> +		goto out;
> +
> +	class = &pool->size_class[class_idx];
> +	remove_zspage(page, class, currfg);
> +	insert_zspage(page, class, newfg);
> +	set_zspage_mapping(page, class_idx, newfg);
> +
> +out:
> +	return newfg;
> +}
> +
> +/*
> + * We have to decide on how many pages to link together
> + * to form a zspage for each size class. This is important
> + * to reduce wastage due to unusable space left at end of
> + * each zspage which is given as:
> + *	wastage = Zp % size_class
> + * where Zp = zspage size = k * PAGE_SIZE where k = 1, 2, ...
> + *
> + * For example, for size class of 3/8 * PAGE_SIZE, we should
> + * link together 3 PAGE_SIZE sized pages to form a zspage
> + * since then we can perfectly fit in 8 such objects.
> + */
> +static int get_pages_per_zspage(int class_size)
> +{
> +	int i, max_usedpc = 0;
> +	/* zspage order which gives maximum used size per KB */
> +	int max_usedpc_order = 1;
> +
> +	for (i = 1; i <= ZS_MAX_PAGES_PER_ZSPAGE; i++) {
> +		int zspage_size;
> +		int waste, usedpc;
> +
> +		zspage_size = i * PAGE_SIZE;
> +		waste = zspage_size % class_size;
> +		usedpc = (zspage_size - waste) * 100 / zspage_size;
> +
> +		if (usedpc > max_usedpc) {
> +			max_usedpc = usedpc;
> +			max_usedpc_order = i;
> +		}
> +	}
> +
> +	return max_usedpc_order;
> +}
> +
> +/*
> + * A single 'zspage' is composed of many system pages which are
> + * linked together using fields in struct page. This function finds
> + * the first/head page, given any component page of a zspage.
> + */
> +static struct page *get_first_page(struct page *page)
> +{
> +	if (is_first_page(page))
> +		return page;
> +	else
> +		return page->first_page;
> +}
> +
> +static struct page *get_next_page(struct page *page)
> +{
> +	struct page *next;
> +
> +	if (is_last_page(page))
> +		next = NULL;
> +	else if (is_first_page(page))
> +		next = (struct page *)page->private;
> +	else
> +		next = list_entry(page->lru.next, struct page, lru);
> +
> +	return next;
> +}
> +
> +/* Encode <page, obj_idx> as a single handle value */
> +static void *obj_location_to_handle(struct page *page, unsigned long obj_idx)
> +{
> +	unsigned long handle;
> +
> +	if (!page) {
> +		BUG_ON(obj_idx);
> +		return NULL;
> +	}
> +
> +	handle = page_to_pfn(page) << OBJ_INDEX_BITS;
> +	handle |= (obj_idx & OBJ_INDEX_MASK);
> +
> +	return (void *)handle;
> +}
> +
> +/* Decode <page, obj_idx> pair from the given object handle */
> +static void obj_handle_to_location(unsigned long handle, struct page **page,
> +				unsigned long *obj_idx)
> +{
> +	*page = pfn_to_page(handle >> OBJ_INDEX_BITS);
> +	*obj_idx = handle & OBJ_INDEX_MASK;
> +}
> +
> +static unsigned long obj_idx_to_offset(struct page *page,
> +				unsigned long obj_idx, int class_size)
> +{
> +	unsigned long off = 0;
> +
> +	if (!is_first_page(page))
> +		off = page->index;
> +
> +	return off + obj_idx * class_size;
> +}
> +
> +static void reset_page(struct page *page)
> +{
> +	clear_bit(PG_private, &page->flags);
> +	clear_bit(PG_private_2, &page->flags);
> +	set_page_private(page, 0);
> +	page->mapping = NULL;
> +	page->freelist = NULL;
> +	reset_page_mapcount(page);
> +}
> +
> +static void free_zspage(struct zs_ops *ops, struct page *first_page)
> +{
> +	struct page *nextp, *tmp, *head_extra;
> +
> +	BUG_ON(!is_first_page(first_page));
> +	BUG_ON(first_page->inuse);
> +
> +	head_extra = (struct page *)page_private(first_page);
> +
> +	reset_page(first_page);
> +	ops->free(first_page);
> +
> +	/* zspage with only 1 system page */
> +	if (!head_extra)
> +		return;
> +
> +	list_for_each_entry_safe(nextp, tmp, &head_extra->lru, lru) {
> +		list_del(&nextp->lru);
> +		reset_page(nextp);
> +		ops->free(nextp);
> +	}
> +	reset_page(head_extra);
> +	ops->free(head_extra);
> +}
> +
> +/* Initialize a newly allocated zspage */
> +static void init_zspage(struct page *first_page, struct size_class *class)
> +{
> +	unsigned long off = 0;
> +	struct page *page = first_page;
> +
> +	BUG_ON(!is_first_page(first_page));
> +	while (page) {
> +		struct page *next_page;
> +		struct link_free *link;
> +		unsigned int i, objs_on_page;
> +
> +		/*
> +		 * page->index stores offset of first object starting
> +		 * in the page. For the first page, this is always 0,
> +		 * so we use first_page->freelist (which shares storage
> +		 * with ->index) to store the head of the zspage's freelist.
> +		 */
> +		if (page != first_page)
> +			page->index = off;
> +
> +		link = (struct link_free *)kmap_atomic(page) +
> +						off / sizeof(*link);
> +		objs_on_page = (PAGE_SIZE - off) / class->size;
> +
> +		for (i = 1; i <= objs_on_page; i++) {
> +			off += class->size;
> +			if (off < PAGE_SIZE) {
> +				link->next = obj_location_to_handle(page, i);
> +				link += class->size / sizeof(*link);
> +			}
> +		}
> +
> +		/*
> +		 * We now come to the last (full or partial) object on this
> +		 * page, which must point to the first object on the next
> +		 * page (if present)
> +		 */
> +		next_page = get_next_page(page);
> +		link->next = obj_location_to_handle(next_page, 0);
> +		kunmap_atomic(link);
> +		page = next_page;
> +		off = (off + class->size) % PAGE_SIZE;
> +	}
> +}
> +
> +/*
> + * Allocate a zspage for the given size class
> + */
> +static struct page *alloc_zspage(struct zs_ops *ops, struct size_class *class,
> +				gfp_t flags)
> +{
> +	int i, error;
> +	struct page *first_page = NULL, *uninitialized_var(prev_page);
> +
> +	/*
> +	 * Allocate individual pages and link them together as:
> +	 * 1. first page->private = first sub-page
> +	 * 2. all sub-pages are linked together using page->lru
> +	 * 3. each sub-page is linked to the first page using page->first_page
> +	 *
> +	 * For each size class, First/Head pages are linked together using
> +	 * page->lru. Also, we set PG_private to identify the first page
> +	 * (i.e. no other sub-page has this flag set) and PG_private_2 to
> +	 * identify the last page.
> +	 */
> +	error = -ENOMEM;
> +	for (i = 0; i < class->pages_per_zspage; i++) {
> +		struct page *page;
> +
> +		page = ops->alloc(flags);
> +		if (!page)
> +			goto cleanup;
> +
> +		INIT_LIST_HEAD(&page->lru);
> +		if (i == 0) {	/* first page */
> +			SetPagePrivate(page);
> +			set_page_private(page, 0);
> +			first_page = page;
> +			first_page->inuse = 0;
> +		}
> +		if (i == 1)
> +			first_page->private = (unsigned long)page;
> +		if (i >= 1)
> +			page->first_page = first_page;
> +		if (i >= 2)
> +			list_add(&page->lru, &prev_page->lru);
> +		if (i == class->pages_per_zspage - 1)	/* last page */
> +			SetPagePrivate2(page);
> +		prev_page = page;
> +	}
> +
> +	init_zspage(first_page, class);
> +
> +	first_page->freelist = obj_location_to_handle(first_page, 0);
> +	/* Maximum number of objects we can store in this zspage */
> +	first_page->objects = class->pages_per_zspage * PAGE_SIZE / class->size;
> +
> +	error = 0; /* Success */
> +
> +cleanup:
> +	if (unlikely(error) && first_page) {
> +		free_zspage(ops, first_page);
> +		first_page = NULL;
> +	}
> +
> +	return first_page;
> +}
> +
> +static struct page *find_get_zspage(struct size_class *class)
> +{
> +	int i;
> +	struct page *page;
> +
> +	for (i = 0; i < _ZS_NR_FULLNESS_GROUPS; i++) {
> +		page = class->fullness_list[i];
> +		if (page)
> +			break;
> +	}
> +
> +	return page;
> +}
> +
> +#ifdef CONFIG_PGTABLE_MAPPING
> +static inline int __zs_cpu_up(struct mapping_area *area)
> +{
> +	/*
> +	 * Make sure we don't leak memory if a cpu UP notification
> +	 * and zs_init() race and both call zs_cpu_up() on the same cpu
> +	 */
> +	if (area->vm)
> +		return 0;
> +	area->vm = alloc_vm_area(PAGE_SIZE * 2, NULL);
> +	if (!area->vm)
> +		return -ENOMEM;
> +	return 0;
> +}
> +
> +static inline void __zs_cpu_down(struct mapping_area *area)
> +{
> +	if (area->vm)
> +		free_vm_area(area->vm);
> +	area->vm = NULL;
> +}
> +
> +static inline void *__zs_map_object(struct mapping_area *area,
> +				struct page *pages[2], int off, int size)
> +{
> +	BUG_ON(map_vm_area(area->vm, PAGE_KERNEL, &pages));
> +	area->vm_addr = area->vm->addr;
> +	return area->vm_addr + off;
> +}
> +
> +static inline void __zs_unmap_object(struct mapping_area *area,
> +				struct page *pages[2], int off, int size)
> +{
> +	unsigned long addr = (unsigned long)area->vm_addr;
> +	unsigned long end = addr + (PAGE_SIZE * 2);
> +
> +	flush_cache_vunmap(addr, end);
> +	unmap_kernel_range_noflush(addr, PAGE_SIZE * 2);
> +	flush_tlb_kernel_range(addr, end);
> +}
> +
> +#else /* CONFIG_PGTABLE_MAPPING*/
> +
> +static inline int __zs_cpu_up(struct mapping_area *area)
> +{
> +	/*
> +	 * Make sure we don't leak memory if a cpu UP notification
> +	 * and zs_init() race and both call zs_cpu_up() on the same cpu
> +	 */
> +	if (area->vm_buf)
> +		return 0;
> +	area->vm_buf = (char *)__get_free_page(GFP_KERNEL);
> +	if (!area->vm_buf)
> +		return -ENOMEM;
> +	return 0;
> +}
> +
> +static inline void __zs_cpu_down(struct mapping_area *area)
> +{
> +	if (area->vm_buf)
> +		free_page((unsigned long)area->vm_buf);
> +	area->vm_buf = NULL;
> +}
> +
> +static void *__zs_map_object(struct mapping_area *area,
> +			struct page *pages[2], int off, int size)
> +{
> +	int sizes[2];
> +	void *addr;
> +	char *buf = area->vm_buf;
> +
> +	/* disable page faults to match kmap_atomic() return conditions */
> +	pagefault_disable();
> +
> +	/* no read fastpath */
> +	if (area->vm_mm == ZS_MM_WO)
> +		goto out;
> +
> +	sizes[0] = PAGE_SIZE - off;
> +	sizes[1] = size - sizes[0];
> +
> +	/* copy object to per-cpu buffer */
> +	addr = kmap_atomic(pages[0]);
> +	memcpy(buf, addr + off, sizes[0]);
> +	kunmap_atomic(addr);
> +	addr = kmap_atomic(pages[1]);
> +	memcpy(buf + sizes[0], addr, sizes[1]);
> +	kunmap_atomic(addr);
> +out:
> +	return area->vm_buf;
> +}
> +
> +static void __zs_unmap_object(struct mapping_area *area,
> +			struct page *pages[2], int off, int size)
> +{
> +	int sizes[2];
> +	void *addr;
> +	char *buf = area->vm_buf;
> +
> +	/* no write fastpath */
> +	if (area->vm_mm == ZS_MM_RO)
> +		goto out;
> +
> +	sizes[0] = PAGE_SIZE - off;
> +	sizes[1] = size - sizes[0];
> +
> +	/* copy per-cpu buffer to object */
> +	addr = kmap_atomic(pages[0]);
> +	memcpy(addr + off, buf, sizes[0]);
> +	kunmap_atomic(addr);
> +	addr = kmap_atomic(pages[1]);
> +	memcpy(addr, buf + sizes[0], sizes[1]);
> +	kunmap_atomic(addr);
> +
> +out:
> +	/* enable page faults to match kunmap_atomic() return conditions */
> +	pagefault_enable();
> +}
> +
> +#endif /* CONFIG_PGTABLE_MAPPING */
> +
> +static int zs_cpu_notifier(struct notifier_block *nb, unsigned long action,
> +				void *pcpu)
> +{
> +	int ret, cpu = (long)pcpu;
> +	struct mapping_area *area;
> +
> +	switch (action) {
> +	case CPU_UP_PREPARE:
> +		area = &per_cpu(zs_map_area, cpu);
> +		ret = __zs_cpu_up(area);
> +		if (ret)
> +			return notifier_from_errno(ret);
> +		break;
> +	case CPU_DEAD:
> +	case CPU_UP_CANCELED:
> +		area = &per_cpu(zs_map_area, cpu);
> +		__zs_cpu_down(area);
> +		break;
> +	}
> +
> +	return NOTIFY_OK;
> +}
> +
> +static struct notifier_block zs_cpu_nb = {
> +	.notifier_call = zs_cpu_notifier
> +};
> +
> +static void zs_exit(void)
> +{
> +	int cpu;
> +
> +	for_each_online_cpu(cpu)
> +		zs_cpu_notifier(NULL, CPU_DEAD, (void *)(long)cpu);
> +	unregister_cpu_notifier(&zs_cpu_nb);
> +}
> +
> +static int zs_init(void)
> +{
> +	int cpu, ret;
> +
> +	register_cpu_notifier(&zs_cpu_nb);
> +	for_each_online_cpu(cpu) {
> +		ret = zs_cpu_notifier(NULL, CPU_UP_PREPARE, (void *)(long)cpu);
> +		if (notifier_to_errno(ret))
> +			goto fail;
> +	}
> +	return 0;
> +fail:
> +	zs_exit();
> +	return notifier_to_errno(ret);
> +}
> +
> +/**
> + * zs_create_pool - Creates an allocation pool to work from.
> + * @flags: allocation flags used to allocate pool metadata
> + * @ops: allocation/free callbacks for expanding the pool
> + *
> + * This function must be called before anything when using
> + * the zsmalloc allocator.
> + *
> + * On success, a pointer to the newly created pool is returned,
> + * otherwise NULL.
> + */
> +struct zs_pool *zs_create_pool(gfp_t flags, struct zs_ops *ops)
> +{
> +	int i, ovhd_size;
> +	struct zs_pool *pool;
> +
> +	ovhd_size = roundup(sizeof(*pool), PAGE_SIZE);
> +	pool = kzalloc(ovhd_size, flags);
> +	if (!pool)
> +		return NULL;
> +
> +	for (i = 0; i < ZS_SIZE_CLASSES; i++) {
> +		int size;
> +		struct size_class *class;
> +
> +		size = ZS_MIN_ALLOC_SIZE + i * ZS_SIZE_CLASS_DELTA;
> +		if (size > ZS_MAX_ALLOC_SIZE)
> +			size = ZS_MAX_ALLOC_SIZE;
> +
> +		class = &pool->size_class[i];
> +		class->size = size;
> +		class->index = i;
> +		spin_lock_init(&class->lock);
> +		class->pages_per_zspage = get_pages_per_zspage(size);
> +
> +	}
> +
> +	if (ops)
> +		pool->ops = ops;
> +	else
> +		pool->ops = &zs_default_ops;
> +
> +	return pool;
> +}
> +EXPORT_SYMBOL_GPL(zs_create_pool);
> +
> +void zs_destroy_pool(struct zs_pool *pool)
> +{
> +	int i;
> +
> +	for (i = 0; i < ZS_SIZE_CLASSES; i++) {
> +		int fg;
> +		struct size_class *class = &pool->size_class[i];
> +
> +		for (fg = 0; fg < _ZS_NR_FULLNESS_GROUPS; fg++) {
> +			if (class->fullness_list[fg]) {
> +				pr_info("Freeing non-empty class with size "
> +					"%db, fullness group %d\n",
> +					class->size, fg);
> +			}
> +		}
> +	}
> +	kfree(pool);
> +}
> +EXPORT_SYMBOL_GPL(zs_destroy_pool);
> +
> +/**
> + * zs_malloc - Allocate block of given size from pool.
> + * @pool: pool to allocate from
> + * @size: size of block to allocate
> + *
> + * On success, handle to the allocated object is returned,
> + * otherwise 0.
> + * Allocation requests with size > ZS_MAX_ALLOC_SIZE will fail.
> + */
> +unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t flags)
> +{
> +	unsigned long obj;
> +	struct link_free *link;
> +	int class_idx;
> +	struct size_class *class;
> +
> +	struct page *first_page, *m_page;
> +	unsigned long m_objidx, m_offset;
> +
> +	if (unlikely(!size || size > ZS_MAX_ALLOC_SIZE))
> +		return 0;
> +
> +	class_idx = get_size_class_index(size);
> +	class = &pool->size_class[class_idx];
> +	BUG_ON(class_idx != class->index);
> +
> +	spin_lock(&class->lock);
> +	first_page = find_get_zspage(class);
> +
> +	if (!first_page) {
> +		spin_unlock(&class->lock);
> +		first_page = alloc_zspage(pool->ops, class, flags);
> +		if (unlikely(!first_page))
> +			return 0;
> +
> +		set_zspage_mapping(first_page, class->index, ZS_EMPTY);
> +		spin_lock(&class->lock);
> +		class->pages_allocated += class->pages_per_zspage;
> +	}
> +
> +	obj = (unsigned long)first_page->freelist;
> +	obj_handle_to_location(obj, &m_page, &m_objidx);
> +	m_offset = obj_idx_to_offset(m_page, m_objidx, class->size);
> +
> +	link = (struct link_free *)kmap_atomic(m_page) +
> +					m_offset / sizeof(*link);
> +	first_page->freelist = link->next;
> +	memset(link, POISON_INUSE, sizeof(*link));
> +	kunmap_atomic(link);
> +
> +	first_page->inuse++;
> +	/* Now move the zspage to another fullness group, if required */
> +	fix_fullness_group(pool, first_page);
> +	spin_unlock(&class->lock);
> +
> +	return obj;
> +}
> +EXPORT_SYMBOL_GPL(zs_malloc);
> +
> +void zs_free(struct zs_pool *pool, unsigned long obj)
> +{
> +	struct link_free *link;
> +	struct page *first_page, *f_page;
> +	unsigned long f_objidx, f_offset;
> +
> +	int class_idx;
> +	struct size_class *class;
> +	enum fullness_group fullness;
> +
> +	if (unlikely(!obj))
> +		return;
> +
> +	obj_handle_to_location(obj, &f_page, &f_objidx);
> +	first_page = get_first_page(f_page);
> +
> +	get_zspage_mapping(first_page, &class_idx, &fullness);
> +	class = &pool->size_class[class_idx];
> +	f_offset = obj_idx_to_offset(f_page, f_objidx, class->size);
> +
> +	spin_lock(&class->lock);
> +
> +	/* Insert this object in containing zspage's freelist */
> +	link = (struct link_free *)((unsigned char *)kmap_atomic(f_page)
> +							+ f_offset);
> +	link->next = first_page->freelist;
> +	kunmap_atomic(link);
> +	first_page->freelist = (void *)obj;
> +
> +	first_page->inuse--;
> +	fullness = fix_fullness_group(pool, first_page);
> +
> +	if (fullness == ZS_EMPTY)
> +		class->pages_allocated -= class->pages_per_zspage;
> +
> +	spin_unlock(&class->lock);
> +
> +	if (fullness == ZS_EMPTY)
> +		free_zspage(pool->ops, first_page);
> +}
> +EXPORT_SYMBOL_GPL(zs_free);
> +
> +/**
> + * zs_map_object - get address of allocated object from handle.
> + * @pool: pool from which the object was allocated
> + * @handle: handle returned from zs_malloc
> + *
> + * Before using an object allocated from zs_malloc, it must be mapped using
> + * this function. When done with the object, it must be unmapped using
> + * zs_unmap_object.
> + *
> + * Only one object can be mapped per cpu at a time. There is no protection
> + * against nested mappings.
> + *
> + * This function returns with preemption and page faults disabled.
> +*/
> +void *zs_map_object(struct zs_pool *pool, unsigned long handle,
> +			enum zs_mapmode mm)
> +{
> +	struct page *page;
> +	unsigned long obj_idx, off;
> +
> +	unsigned int class_idx;
> +	enum fullness_group fg;
> +	struct size_class *class;
> +	struct mapping_area *area;
> +	struct page *pages[2];
> +
> +	BUG_ON(!handle);
> +
> +	/*
> +	 * Because we use per-cpu mapping areas shared among the
> +	 * pools/users, we can't allow mapping in interrupt context
> +	 * because it can corrupt another user's mappings.
> +	 */
> +	BUG_ON(in_interrupt());
> +
> +	obj_handle_to_location(handle, &page, &obj_idx);
> +	get_zspage_mapping(get_first_page(page), &class_idx, &fg);
> +	class = &pool->size_class[class_idx];
> +	off = obj_idx_to_offset(page, obj_idx, class->size);
> +
> +	area = &get_cpu_var(zs_map_area);
> +	area->vm_mm = mm;
> +	if (off + class->size <= PAGE_SIZE) {
> +		/* this object is contained entirely within a page */
> +		area->vm_addr = kmap_atomic(page);
> +		return area->vm_addr + off;
> +	}
> +
> +	/* this object spans two pages */
> +	pages[0] = page;
> +	pages[1] = get_next_page(page);
> +	BUG_ON(!pages[1]);
> +
> +	return __zs_map_object(area, pages, off, class->size);
> +}
> +EXPORT_SYMBOL_GPL(zs_map_object);
> +
> +void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
> +{
> +	struct page *page;
> +	unsigned long obj_idx, off;
> +
> +	unsigned int class_idx;
> +	enum fullness_group fg;
> +	struct size_class *class;
> +	struct mapping_area *area;
> +
> +	BUG_ON(!handle);
> +
> +	obj_handle_to_location(handle, &page, &obj_idx);
> +	get_zspage_mapping(get_first_page(page), &class_idx, &fg);
> +	class = &pool->size_class[class_idx];
> +	off = obj_idx_to_offset(page, obj_idx, class->size);
> +
> +	area = &__get_cpu_var(zs_map_area);
> +	if (off + class->size <= PAGE_SIZE)
> +		kunmap_atomic(area->vm_addr);
> +	else {
> +		struct page *pages[2];
> +
> +		pages[0] = page;
> +		pages[1] = get_next_page(page);
> +		BUG_ON(!pages[1]);
> +
> +		__zs_unmap_object(area, pages, off, class->size);
> +	}
> +	put_cpu_var(zs_map_area);
> +}
> +EXPORT_SYMBOL_GPL(zs_unmap_object);
> +
> +u64 zs_get_total_size_bytes(struct zs_pool *pool)
> +{
> +	int i;
> +	u64 npages = 0;
> +
> +	for (i = 0; i < ZS_SIZE_CLASSES; i++)
> +		npages += pool->size_class[i].pages_allocated;
> +
> +	return npages << PAGE_SHIFT;
> +}
> +EXPORT_SYMBOL_GPL(zs_get_total_size_bytes);
> +
> +module_init(zs_init);
> +module_exit(zs_exit);
> +
> +MODULE_LICENSE("Dual BSD/GPL");
> +MODULE_AUTHOR("Nitin Gupta <ngupta@vflare.org>");


^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCHv5 4/8] zswap: add to mm/
       [not found] ` <1360780731-11708-5-git-send-email-sjenning@linux.vnet.ibm.com>
@ 2013-02-16  4:04   ` Ric Mason
  2013-02-18 19:24     ` Seth Jennings
  0 siblings, 1 reply; 38+ messages in thread
From: Ric Mason @ 2013-02-16  4:04 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches,
	linux-mm, linux-kernel, devel

On 02/14/2013 02:38 AM, Seth Jennings wrote:
> zswap is a thin compression backend for frontswap. It receives
> pages from frontswap and attempts to store them in a compressed
> memory pool, resulting in an effective partial memory reclaim and
> dramatically reduced swap device I/O.
>
> Additionally, in most cases, pages can be retrieved from this
> + * compressed store much more quickly than reading from traditional
> swap devices resulting in faster performance for many workloads.
>
> This patch adds the zswap driver to mm/
>
> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
> ---
>   mm/Kconfig  |   15 ++
>   mm/Makefile |    1 +
>   mm/zswap.c  |  658 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>   3 files changed, 674 insertions(+)
>   create mode 100644 mm/zswap.c
>
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 25b8f38..f9f35b7 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -470,3 +470,18 @@ config PGTABLE_MAPPING
>   
>   	  You can check speed with zsmalloc benchmark[1].
>   	  [1] https://github.com/spartacus06/zsmalloc
> +
> +config ZSWAP
> +	bool "In-kernel swap page compression"
> +	depends on FRONTSWAP && CRYPTO
> +	select CRYPTO_LZO
> +	select ZSMALLOC
> +	default n
> +	help
> +	  Zswap is a backend for the frontswap mechanism in the VMM.
> +	  It receives pages from frontswap and attempts to store them
> +	  in a compressed memory pool, resulting in an effective
> +	  partial memory reclaim.  In addition, pages can be retrieved
> +	  from this compressed store much faster than most traditional
> +	  swap devices resulting in reduced I/O and faster performance
> +	  for many workloads.
> diff --git a/mm/Makefile b/mm/Makefile
> index 0f6ef0a..1e0198f 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -32,6 +32,7 @@ obj-$(CONFIG_HAVE_MEMBLOCK) += memblock.o
>   obj-$(CONFIG_BOUNCE)	+= bounce.o
>   obj-$(CONFIG_SWAP)	+= page_io.o swap_state.o swapfile.o
>   obj-$(CONFIG_FRONTSWAP)	+= frontswap.o
> +obj-$(CONFIG_ZSWAP)	+= zswap.o
>   obj-$(CONFIG_HAS_DMA)	+= dmapool.o
>   obj-$(CONFIG_HUGETLBFS)	+= hugetlb.o
>   obj-$(CONFIG_NUMA) 	+= mempolicy.o
> diff --git a/mm/zswap.c b/mm/zswap.c
> new file mode 100644
> index 0000000..e77ab2f
> --- /dev/null
> +++ b/mm/zswap.c
> @@ -0,0 +1,658 @@
> +/*
> + * zswap.c - zswap driver file
> + *
> + * zswap is a backend for frontswap that takes pages that are in the
> + * process of being swapped out and attempts to compress them and store
> + * them in a RAM-based memory pool.  This results in a significant I/O
> + * reduction on the real swap device and, in the case of a slow swap
> + * device, can also improve workload performance.
> + *
> + * Copyright (C) 2012  Seth Jennings <sjenning@linux.vnet.ibm.com>
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License
> + * as published by the Free Software Foundation; either version 2
> + * of the License, or (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> +*/
> +
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +
> +#include <linux/module.h>
> +#include <linux/cpu.h>
> +#include <linux/highmem.h>
> +#include <linux/slab.h>
> +#include <linux/spinlock.h>
> +#include <linux/types.h>
> +#include <linux/atomic.h>
> +#include <linux/frontswap.h>
> +#include <linux/rbtree.h>
> +#include <linux/swap.h>
> +#include <linux/crypto.h>
> +#include <linux/mempool.h>
> +#include <linux/zsmalloc.h>
> +
> +/*********************************
> +* statistics
> +**********************************/
> +/* Number of memory pages used by the compressed pool */
> +static atomic_t zswap_pool_pages = ATOMIC_INIT(0);
> +/* The number of compressed pages currently stored in zswap */
> +static atomic_t zswap_stored_pages = ATOMIC_INIT(0);
> +
> +/*
> + * The statistics below are not protected from concurrent access for
> + * performance reasons so they may not be a 100% accurate.  However,
> + * the do provide useful information on roughly how many times a

s/the/they

> + * certain event is occurring.
> +*/
> +static u64 zswap_pool_limit_hit;
> +static u64 zswap_reject_compress_poor;
> +static u64 zswap_reject_zsmalloc_fail;
> +static u64 zswap_reject_kmemcache_fail;
> +static u64 zswap_duplicate_entry;
> +
> +/*********************************
> +* tunables
> +**********************************/
> +/* Enable/disable zswap (disabled by default, fixed at boot for now) */
> +static bool zswap_enabled;
> +module_param_named(enabled, zswap_enabled, bool, 0);

please document in Documentation/kernel-parameters.txt.

> +
> +/* Compressor to be used by zswap (fixed at boot for now) */
> +#define ZSWAP_COMPRESSOR_DEFAULT "lzo"
> +static char *zswap_compressor = ZSWAP_COMPRESSOR_DEFAULT;
> +module_param_named(compressor, zswap_compressor, charp, 0);

ditto

> +
> +/* The maximum percentage of memory that the compressed pool can occupy */
> +static unsigned int zswap_max_pool_percent = 20;
> +module_param_named(max_pool_percent,
> +			zswap_max_pool_percent, uint, 0644);
> +
> +/*
> + * Maximum compression ratio, as a percentage, for an acceptable
> + * compressed page. Any pages that do not compress by at least
> + * this ratio will be rejected.
> +*/
> +static unsigned int zswap_max_compression_ratio = 80;
> +module_param_named(max_compression_ratio,
> +			zswap_max_compression_ratio, uint, 0644);
> +
> +/*********************************
> +* compression functions
> +**********************************/
> +/* per-cpu compression transforms */
> +static struct crypto_comp * __percpu *zswap_comp_pcpu_tfms;
> +
> +enum comp_op {
> +	ZSWAP_COMPOP_COMPRESS,
> +	ZSWAP_COMPOP_DECOMPRESS
> +};
> +
> +static int zswap_comp_op(enum comp_op op, const u8 *src, unsigned int slen,
> +				u8 *dst, unsigned int *dlen)
> +{
> +	struct crypto_comp *tfm;
> +	int ret;
> +
> +	tfm = *per_cpu_ptr(zswap_comp_pcpu_tfms, get_cpu());
> +	switch (op) {
> +	case ZSWAP_COMPOP_COMPRESS:
> +		ret = crypto_comp_compress(tfm, src, slen, dst, dlen);
> +		break;
> +	case ZSWAP_COMPOP_DECOMPRESS:
> +		ret = crypto_comp_decompress(tfm, src, slen, dst, dlen);
> +		break;
> +	default:
> +		ret = -EINVAL;
> +	}
> +
> +	put_cpu();
> +	return ret;
> +}
> +
> +static int __init zswap_comp_init(void)
> +{
> +	if (!crypto_has_comp(zswap_compressor, 0, 0)) {
> +		pr_info("%s compressor not available\n", zswap_compressor);
> +		/* fall back to default compressor */
> +		zswap_compressor = ZSWAP_COMPRESSOR_DEFAULT;
> +		if (!crypto_has_comp(zswap_compressor, 0, 0))
> +			/* can't even load the default compressor */
> +			return -ENODEV;
> +	}
> +	pr_info("using %s compressor\n", zswap_compressor);
> +
> +	/* alloc percpu transforms */
> +	zswap_comp_pcpu_tfms = alloc_percpu(struct crypto_comp *);
> +	if (!zswap_comp_pcpu_tfms)
> +		return -ENOMEM;
> +	return 0;
> +}
> +
> +static void zswap_comp_exit(void)
> +{
> +	/* free percpu transforms */
> +	if (zswap_comp_pcpu_tfms)
> +		free_percpu(zswap_comp_pcpu_tfms);
> +}
> +
> +/*********************************
> +* data structures
> +**********************************/
> +struct zswap_entry {
> +	struct rb_node rbnode;
> +	unsigned type;
> +	pgoff_t offset;
> +	unsigned long handle;
> +	unsigned int length;
> +};
> +
> +struct zswap_tree {
> +	struct rb_root rbroot;
> +	spinlock_t lock;
> +	struct zs_pool *pool;
> +};
> +
> +static struct zswap_tree *zswap_trees[MAX_SWAPFILES];
> +
> +/*********************************
> +* zswap entry functions
> +**********************************/
> +#define ZSWAP_KMEM_CACHE_NAME "zswap_entry_cache"
> +static struct kmem_cache *zswap_entry_cache;
> +
> +static inline int zswap_entry_cache_create(void)
> +{
> +	zswap_entry_cache =
> +		kmem_cache_create(ZSWAP_KMEM_CACHE_NAME,
> +			sizeof(struct zswap_entry), 0, 0, NULL);
> +	return (zswap_entry_cache == NULL);
> +}
> +
> +static inline void zswap_entry_cache_destory(void)
> +{
> +	kmem_cache_destroy(zswap_entry_cache);
> +}
> +
> +static inline struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
> +{
> +	struct zswap_entry *entry;
> +	entry = kmem_cache_alloc(zswap_entry_cache, gfp);
> +	if (!entry)
> +		return NULL;
> +	return entry;
> +}
> +
> +static inline void zswap_entry_cache_free(struct zswap_entry *entry)
> +{
> +	kmem_cache_free(zswap_entry_cache, entry);
> +}
> +
> +/*********************************
> +* rbtree functions
> +**********************************/
> +static struct zswap_entry *zswap_rb_search(struct rb_root *root, pgoff_t offset)
> +{
> +	struct rb_node *node = root->rb_node;
> +	struct zswap_entry *entry;
> +
> +	while (node) {
> +		entry = rb_entry(node, struct zswap_entry, rbnode);
> +		if (entry->offset > offset)
> +			node = node->rb_left;
> +		else if (entry->offset < offset)
> +			node = node->rb_right;
> +		else
> +			return entry;
> +	}
> +	return NULL;
> +}
> +
> +/*
> + * In the case that an entry with the same offset is found, a pointer to
> + * the existing entry is stored in dupentry and the function returns -EEXIST
> +*/
> +static int zswap_rb_insert(struct rb_root *root, struct zswap_entry *entry,
> +			struct zswap_entry **dupentry)
> +{
> +	struct rb_node **link = &root->rb_node, *parent = NULL;
> +	struct zswap_entry *myentry;
> +
> +	while (*link) {
> +		parent = *link;
> +		myentry = rb_entry(parent, struct zswap_entry, rbnode);
> +		if (myentry->offset > entry->offset)
> +			link = &(*link)->rb_left;
> +		else if (myentry->offset < entry->offset)
> +			link = &(*link)->rb_right;
> +		else {
> +			*dupentry = myentry;
> +			return -EEXIST;
> +		}
> +	}
> +	rb_link_node(&entry->rbnode, parent, link);
> +	rb_insert_color(&entry->rbnode, root);
> +	return 0;
> +}
> +
> +/*********************************
> +* per-cpu code
> +**********************************/
> +static DEFINE_PER_CPU(u8 *, zswap_dstmem);
> +
> +static int __zswap_cpu_notifier(unsigned long action, unsigned long cpu)
> +{
> +	struct crypto_comp *tfm;
> +	u8 *dst;
> +
> +	switch (action) {
> +	case CPU_UP_PREPARE:
> +		tfm = crypto_alloc_comp(zswap_compressor, 0, 0);
> +		if (IS_ERR(tfm)) {
> +			pr_err("can't allocate compressor transform\n");
> +			return NOTIFY_BAD;
> +		}
> +		*per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = tfm;
> +		dst = (u8 *)__get_free_pages(GFP_KERNEL, 1);
> +		if (!dst) {
> +			pr_err("can't allocate compressor buffer\n");
> +			crypto_free_comp(tfm);
> +			*per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = NULL;
> +			return NOTIFY_BAD;
> +		}
> +		per_cpu(zswap_dstmem, cpu) = dst;
> +		break;
> +	case CPU_DEAD:
> +	case CPU_UP_CANCELED:
> +		tfm = *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu);
> +		if (tfm) {
> +			crypto_free_comp(tfm);
> +			*per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = NULL;
> +		}
> +		dst = per_cpu(zswap_dstmem, cpu);
> +		if (dst) {
> +			free_pages((unsigned long)dst, 1);
> +			per_cpu(zswap_dstmem, cpu) = NULL;
> +		}
> +		break;
> +	default:
> +		break;
> +	}
> +	return NOTIFY_OK;
> +}
> +
> +static int zswap_cpu_notifier(struct notifier_block *nb,
> +				unsigned long action, void *pcpu)
> +{
> +	unsigned long cpu = (unsigned long)pcpu;
> +	return __zswap_cpu_notifier(action, cpu);
> +}
> +
> +static struct notifier_block zswap_cpu_notifier_block = {
> +	.notifier_call = zswap_cpu_notifier
> +};
> +
> +static int zswap_cpu_init(void)
> +{
> +	unsigned long cpu;
> +
> +	get_online_cpus();
> +	for_each_online_cpu(cpu)
> +		if (__zswap_cpu_notifier(CPU_UP_PREPARE, cpu) != NOTIFY_OK)
> +			goto cleanup;
> +	register_cpu_notifier(&zswap_cpu_notifier_block);
> +	put_online_cpus();
> +	return 0;
> +
> +cleanup:
> +	for_each_online_cpu(cpu)
> +		__zswap_cpu_notifier(CPU_UP_CANCELED, cpu);
> +	put_online_cpus();
> +	return -ENOMEM;
> +}
> +
> +/*********************************
> +* zsmalloc callbacks
> +**********************************/
> +static mempool_t *zswap_page_pool;
> +
> +static inline unsigned int zswap_max_pool_pages(void)
> +{
> +	return zswap_max_pool_percent * totalram_pages / 100;
> +}
> +
> +static inline int zswap_page_pool_create(void)
> +{
> +	/* TODO: dynamically size mempool */
> +	zswap_page_pool = mempool_create_page_pool(256, 0);
> +	if (!zswap_page_pool)
> +		return -ENOMEM;
> +	return 0;
> +}
> +
> +static inline void zswap_page_pool_destroy(void)
> +{
> +	mempool_destroy(zswap_page_pool);
> +}
> +
> +static struct page *zswap_alloc_page(gfp_t flags)
> +{
> +	struct page *page;
> +
> +	if (atomic_read(&zswap_pool_pages) >= zswap_max_pool_pages()) {
> +		zswap_pool_limit_hit++;
> +		return NULL;
> +	}
> +	page = mempool_alloc(zswap_page_pool, flags);
> +	if (page)
> +		atomic_inc(&zswap_pool_pages);
> +	return page;
> +}
> +
> +static void zswap_free_page(struct page *page)
> +{
> +	if (!page)
> +		return;
> +	mempool_free(page, zswap_page_pool);
> +	atomic_dec(&zswap_pool_pages);
> +}
> +
> +static struct zs_ops zswap_zs_ops = {
> +	.alloc = zswap_alloc_page,
> +	.free = zswap_free_page
> +};
> +
> +/*********************************
> +* frontswap hooks
> +**********************************/
> +/* attempts to compress and store a single page */
> +static int zswap_frontswap_store(unsigned type, pgoff_t offset,
> +				struct page *page)
> +{
> +	struct zswap_tree *tree = zswap_trees[type];
> +	struct zswap_entry *entry, *dupentry;
> +	int ret;
> +	unsigned int dlen = PAGE_SIZE;
> +	unsigned long handle;
> +	char *buf;
> +	u8 *src, *dst;
> +
> +	if (!tree) {
> +		ret = -ENODEV;
> +		goto reject;
> +	}
> +
> +	/* compress */
> +	dst = get_cpu_var(zswap_dstmem);
> +	src = kmap_atomic(page);
> +	ret = zswap_comp_op(ZSWAP_COMPOP_COMPRESS, src, PAGE_SIZE, dst, &dlen);
> +	kunmap_atomic(src);
> +	if (ret) {
> +		ret = -EINVAL;
> +		goto putcpu;
> +	}
> +	if ((dlen * 100 / PAGE_SIZE) > zswap_max_compression_ratio) {
> +		zswap_reject_compress_poor++;
> +		ret = -E2BIG;
> +		goto putcpu;
> +	}
> +
> +	/* store */
> +	handle = zs_malloc(tree->pool, dlen,
> +		__GFP_NORETRY | __GFP_HIGHMEM | __GFP_NOMEMALLOC |
> +			__GFP_NOWARN);
> +	if (!handle) {
> +		zswap_reject_zsmalloc_fail++;
> +		ret = -ENOMEM;
> +		goto putcpu;
> +	}
> +
> +	buf = zs_map_object(tree->pool, handle, ZS_MM_WO);
> +	memcpy(buf, dst, dlen);
> +	zs_unmap_object(tree->pool, handle);
> +	put_cpu_var(zswap_dstmem);
> +
> +	/* allocate entry */
> +	entry = zswap_entry_cache_alloc(GFP_KERNEL);
> +	if (!entry) {
> +		zs_free(tree->pool, handle);
> +		zswap_reject_kmemcache_fail++;
> +		ret = -ENOMEM;
> +		goto reject;
> +	}
> +
> +	/* populate entry */
> +	entry->type = type;
> +	entry->offset = offset;
> +	entry->handle = handle;
> +	entry->length = dlen;
> +
> +	/* map */
> +	spin_lock(&tree->lock);
> +	do {
> +		ret = zswap_rb_insert(&tree->rbroot, entry, &dupentry);
> +		if (ret == -EEXIST) {
> +			zswap_duplicate_entry++;
> +
> +			/* remove from rbtree */
> +			rb_erase(&dupentry->rbnode, &tree->rbroot);
> +			
> +			/* free */
> +			zs_free(tree->pool, dupentry->handle);
> +			zswap_entry_cache_free(dupentry);
> +			atomic_dec(&zswap_stored_pages);
> +		}
> +	} while (ret == -EEXIST);
> +	spin_unlock(&tree->lock);
> +
> +	/* update stats */
> +	atomic_inc(&zswap_stored_pages);
> +
> +	return 0;
> +
> +putcpu:
> +	put_cpu_var(zswap_dstmem);
> +reject:
> +	return ret;
> +}
> +
> +/*
> + * returns 0 if the page was successfully decompressed
> + * returns -1 if the entry was not found or on error
> + */
> +static int zswap_frontswap_load(unsigned type, pgoff_t offset,
> +				struct page *page)
> +{
> +	struct zswap_tree *tree = zswap_trees[type];
> +	struct zswap_entry *entry;
> +	u8 *src, *dst;
> +	unsigned int dlen;
> +
> +	/* find */
> +	spin_lock(&tree->lock);
> +	entry = zswap_rb_search(&tree->rbroot, offset);
> +	spin_unlock(&tree->lock);
> +
> +	/* decompress */
> +	dlen = PAGE_SIZE;
> +	src = zs_map_object(tree->pool, entry->handle, ZS_MM_RO);
> +	dst = kmap_atomic(page);
> +	zswap_comp_op(ZSWAP_COMPOP_DECOMPRESS, src, entry->length,
> +		dst, &dlen);
> +	kunmap_atomic(dst);
> +	zs_unmap_object(tree->pool, entry->handle);
> +
> +	return 0;
> +}
> +
> +/* invalidates a single page */
> +static void zswap_frontswap_invalidate_page(unsigned type, pgoff_t offset)
> +{
> +	struct zswap_tree *tree = zswap_trees[type];
> +	struct zswap_entry *entry;
> +
> +	/* find */
> +	spin_lock(&tree->lock);
> +	entry = zswap_rb_search(&tree->rbroot, offset);
> +
> +	/* remove from rbtree */
> +	rb_erase(&entry->rbnode, &tree->rbroot);
> +	spin_unlock(&tree->lock);
> +
> +	/* free */
> +	zs_free(tree->pool, entry->handle);
> +	zswap_entry_cache_free(entry);
> +	atomic_dec(&zswap_stored_pages);
> +}
> +
> +/* invalidates all pages for the given swap type */
> +static void zswap_frontswap_invalidate_area(unsigned type)
> +{
> +	struct zswap_tree *tree = zswap_trees[type];
> +	struct rb_node *node, *next;
> +	struct zswap_entry *entry;
> +
> +	if (!tree)
> +		return;
> +
> +	/* walk the tree and free everything */
> +	spin_lock(&tree->lock);
> +	node = rb_first(&tree->rbroot);
> +	while (node) {
> +		entry = rb_entry(node, struct zswap_entry, rbnode);
> +		zs_free(tree->pool, entry->handle);
> +		next = rb_next(node);
> +		zswap_entry_cache_free(entry);
> +		node = next;
> +	}
> +	tree->rbroot = RB_ROOT;

Why don't we need rb_erase for every node here?

> +	spin_unlock(&tree->lock);
> +}
> +
> +/* NOTE: this is called in atomic context from swapon and must not sleep */
> +static void zswap_frontswap_init(unsigned type)
> +{
> +	struct zswap_tree *tree;
> +
> +	tree = kzalloc(sizeof(struct zswap_tree), GFP_NOWAIT);
> +	if (!tree)
> +		goto err;
> +	tree->pool = zs_create_pool(GFP_NOWAIT, &zswap_zs_ops);
> +	if (!tree->pool)
> +		goto freetree;
> +	tree->rbroot = RB_ROOT;
> +	spin_lock_init(&tree->lock);
> +	zswap_trees[type] = tree;
> +	return;
> +
> +freetree:
> +	kfree(tree);
> +err:
> +	pr_err("alloc failed, zswap disabled for swap type %d\n", type);
> +}
> +
> +static struct frontswap_ops zswap_frontswap_ops = {
> +	.store = zswap_frontswap_store,
> +	.load = zswap_frontswap_load,
> +	.invalidate_page = zswap_frontswap_invalidate_page,
> +	.invalidate_area = zswap_frontswap_invalidate_area,
> +	.init = zswap_frontswap_init
> +};
> +
> +/*********************************
> +* debugfs functions
> +**********************************/
> +#ifdef CONFIG_DEBUG_FS
> +#include <linux/debugfs.h>
> +
> +static struct dentry *zswap_debugfs_root;
> +
> +static int __init zswap_debugfs_init(void)
> +{
> +	if (!debugfs_initialized())
> +		return -ENODEV;
> +
> +	zswap_debugfs_root = debugfs_create_dir("zswap", NULL);
> +	if (!zswap_debugfs_root)
> +		return -ENOMEM;
> +
> +	debugfs_create_u64("pool_limit_hit", S_IRUGO,
> +			zswap_debugfs_root, &zswap_pool_limit_hit);
> +	debugfs_create_u64("reject_zsmalloc_fail", S_IRUGO,
> +			zswap_debugfs_root, &zswap_reject_zsmalloc_fail);
> +	debugfs_create_u64("reject_kmemcache_fail", S_IRUGO,
> +			zswap_debugfs_root, &zswap_reject_kmemcache_fail);
> +	debugfs_create_u64("reject_compress_poor", S_IRUGO,
> +			zswap_debugfs_root, &zswap_reject_compress_poor);
> +	debugfs_create_u64("duplicate_entry", S_IRUGO,
> +			zswap_debugfs_root, &zswap_duplicate_entry);
> +	debugfs_create_atomic_t("pool_pages", S_IRUGO,
> +			zswap_debugfs_root, &zswap_pool_pages);
> +	debugfs_create_atomic_t("stored_pages", S_IRUGO,
> +			zswap_debugfs_root, &zswap_stored_pages);
> +
> +	return 0;
> +}
> +
> +static void __exit zswap_debugfs_exit(void)
> +{
> +	debugfs_remove_recursive(zswap_debugfs_root);
> +}
> +#else
> +static inline int __init zswap_debugfs_init(void)
> +{
> +	return 0;
> +}
> +
> +static inline void __exit zswap_debugfs_exit(void) { }
> +#endif
> +
> +/*********************************
> +* module init and exit
> +**********************************/
> +static int __init init_zswap(void)
> +{
> +	if (!zswap_enabled)
> +		return 0;
> +
> +	pr_info("loading zswap\n");
> +	if (zswap_entry_cache_create()) {
> +		pr_err("entry cache creation failed\n");
> +		goto error;
> +	}
> +	if (zswap_page_pool_create()) {
> +		pr_err("page pool initialization failed\n");
> +		goto pagepoolfail;
> +	}
> +	if (zswap_comp_init()) {
> +		pr_err("compressor initialization failed\n");
> +		goto compfail;
> +	}
> +	if (zswap_cpu_init()) {
> +		pr_err("per-cpu initialization failed\n");
> +		goto pcpufail;
> +	}
> +	frontswap_register_ops(&zswap_frontswap_ops);
> +	if (zswap_debugfs_init())
> +		pr_warn("debugfs initialization failed\n");
> +	return 0;
> +pcpufail:
> +	zswap_comp_exit();
> +compfail:
> +	zswap_page_pool_destroy();
> +pagepoolfail:
> +	zswap_entry_cache_destory();
> +error:
> +	return -ENOMEM;
> +}
> +/* must be late so crypto has time to come up */
> +late_initcall(init_zswap);
> +
> +MODULE_LICENSE("GPL");
> +MODULE_AUTHOR("Seth Jennings <sjenning@linux.vnet.ibm.com>");
> +MODULE_DESCRIPTION("Compressed cache for swap pages");


^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCHv5 7/8] zswap: add swap page writeback support
       [not found] ` <1360780731-11708-8-git-send-email-sjenning@linux.vnet.ibm.com>
@ 2013-02-16  6:11   ` Ric Mason
  2013-02-18 19:32     ` Seth Jennings
  2013-02-25  2:54   ` Minchan Kim
  1 sibling, 1 reply; 38+ messages in thread
From: Ric Mason @ 2013-02-16  6:11 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches,
	linux-mm, linux-kernel, devel

On 02/14/2013 02:38 AM, Seth Jennings wrote:
> This patch adds support for evicting swap pages that are currently
> compressed in zswap to the swap device.  This functionality is very
> important and makes zswap a true cache in that, once the cache is full
> or can't grow due to memory pressure, the oldest pages can be moved
> out of zswap to the swap device so newer pages can be compressed and
> stored in zswap.
>
> This introduces a good amount of new code to guarantee coherency.
> Most notably, an LRU list is added to the zswap_tree structure,
> and refcounts are added to each entry to ensure that one code path
> doesn't free the entry while another code path is operating on it.
>
> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
> ---
>   mm/zswap.c |  530 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++---
>   1 file changed, 510 insertions(+), 20 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index e77ab2f..6478262 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -36,6 +36,12 @@
>   #include <linux/mempool.h>
>   #include <linux/zsmalloc.h>
>   
> +#include <linux/mm_types.h>
> +#include <linux/page-flags.h>
> +#include <linux/swapops.h>
> +#include <linux/writeback.h>
> +#include <linux/pagemap.h>
> +
>   /*********************************
>   * statistics
>   **********************************/
> @@ -43,6 +49,8 @@
>   static atomic_t zswap_pool_pages = ATOMIC_INIT(0);
>   /* The number of compressed pages currently stored in zswap */
>   static atomic_t zswap_stored_pages = ATOMIC_INIT(0);
> +/* The number of outstanding pages awaiting writeback */
> +static atomic_t zswap_outstanding_writebacks = ATOMIC_INIT(0);
>   
>   /*
>    * The statistics below are not protected from concurrent access for
> @@ -51,9 +59,13 @@ static atomic_t zswap_stored_pages = ATOMIC_INIT(0);
>    * certain event is occurring.
>   */
>   static u64 zswap_pool_limit_hit;
> +static u64 zswap_written_back_pages;
>   static u64 zswap_reject_compress_poor;
> +static u64 zswap_writeback_attempted;
> +static u64 zswap_reject_tmppage_fail;
>   static u64 zswap_reject_zsmalloc_fail;
>   static u64 zswap_reject_kmemcache_fail;
> +static u64 zswap_saved_by_writeback;
>   static u64 zswap_duplicate_entry;
>   
>   /*********************************
> @@ -82,6 +94,14 @@ static unsigned int zswap_max_compression_ratio = 80;
>   module_param_named(max_compression_ratio,
>   			zswap_max_compression_ratio, uint, 0644);
>   
> +/*
> + * Maximum number of outstanding writebacks allowed at any given time.
> + * This is to prevent decompressing an unbounded number of compressed
> + * pages into the swap cache all at once, and to help with writeback
> + * congestion.
> +*/
> +#define ZSWAP_MAX_OUTSTANDING_FLUSHES 64
> +
>   /*********************************
>   * compression functions
>   **********************************/
> @@ -144,16 +164,47 @@ static void zswap_comp_exit(void)
>   /*********************************
>   * data structures
>   **********************************/
> +
> +/*
> + * struct zswap_entry
> + *
> + * This structure contains the metadata for tracking a single compressed
> + * page within zswap.
> + *
> + * rbnode - links the entry into red-black tree for the appropriate swap type
> + * lru - links the entry into the lru list for the appropriate swap type
> + * refcount - the number of outstanding references to the entry. This is
> + *            needed to protect against premature freeing of the entry by
> + *            concurrent calls to load, invalidate, and writeback.  The lock
> + *            for the zswap_tree structure that contains the entry must
> + *            be held while changing the refcount.  Since the lock must
> + *            be held, there is no reason to also make refcount atomic.
> + * type - the swap type for the entry.  Used to map back to the zswap_tree
> + *        structure that contains the entry.
> + * offset - the swap offset for the entry.  Index into the red-black tree.
> + * handle - zsmalloc allocation handle that stores the compressed page data
> + * length - the length in bytes of the compressed page data.  Needed during
> + *          decompression
> + */
>   struct zswap_entry {
>   	struct rb_node rbnode;
> +	struct list_head lru;
> +	int refcount;
>   	unsigned type;
>   	pgoff_t offset;
>   	unsigned long handle;
>   	unsigned int length;
>   };
>   
> +/*
> + * The tree lock in the zswap_tree struct protects a few things:
> + * - the rbtree
> + * - the lru list
> + * - the refcount field of each entry in the tree
> + */
>   struct zswap_tree {
>   	struct rb_root rbroot;
> +	struct list_head lru;
>   	spinlock_t lock;
>   	struct zs_pool *pool;
>   };
> @@ -185,6 +236,8 @@ static inline struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
>   	entry = kmem_cache_alloc(zswap_entry_cache, gfp);
>   	if (!entry)
>   		return NULL;
> +	INIT_LIST_HEAD(&entry->lru);
> +	entry->refcount = 1;
>   	return entry;
>   }
>   
> @@ -193,6 +246,17 @@ static inline void zswap_entry_cache_free(struct zswap_entry *entry)
>   	kmem_cache_free(zswap_entry_cache, entry);
>   }
>   
> +static inline void zswap_entry_get(struct zswap_entry *entry)
> +{
> +	entry->refcount++;
> +}
> +
> +static inline int zswap_entry_put(struct zswap_entry *entry)
> +{
> +	entry->refcount--;
> +	return entry->refcount;
> +}
> +
>   /*********************************
>   * rbtree functions
>   **********************************/
> @@ -367,6 +431,333 @@ static struct zs_ops zswap_zs_ops = {
>   	.free = zswap_free_page
>   };
>   
> +
> +/*********************************
> +* helpers
> +**********************************/
> +
> +/*
> + * Carries out the common pattern of freeing an entry's zsmalloc allocation,
> + * freeing the entry itself, and decrementing the number of stored pages.
> + */
> +static void zswap_free_entry(struct zswap_tree *tree, struct zswap_entry *entry)
> +{
> +	zs_free(tree->pool, entry->handle);
> +	zswap_entry_cache_free(entry);
> +	atomic_dec(&zswap_stored_pages);
> +}
> +
> +/*********************************
> +* writeback code
> +**********************************/
> +static void zswap_end_swap_write(struct bio *bio, int err)
> +{
> +	end_swap_bio_write(bio, err);
> +	atomic_dec(&zswap_outstanding_writebacks);
> +	zswap_written_back_pages++;
> +}
> +
> +/* return enum for zswap_get_swap_cache_page */
> +enum zswap_get_swap_ret {
> +	ZSWAP_SWAPCACHE_NEW,
> +	ZSWAP_SWAPCACHE_EXIST,
> +	ZSWAP_SWAPCACHE_NOMEM
> +};
> +
> +/*
> + * zswap_get_swap_cache_page
> + *
> + * This is an adaptation of read_swap_cache_async()
> + *
> + * This function tries to find a page with the given swap entry
> + * in the swapper_space address space (the swap cache).  If the page
> + * is found, it is returned in retpage.  Otherwise, a page is allocated,
> + * added to the swap cache, and returned in retpage.
> + *
> + * Returns ZSWAP_SWAPCACHE_EXIST if the page was already in the swap
> + * cache; the page is returned unlocked.
> + * Returns ZSWAP_SWAPCACHE_NEW if a new page was added to the swap cache
> + * and needs to be populated; the page is returned locked.
> + * Returns ZSWAP_SWAPCACHE_NOMEM on allocation failure; *retpage is NULL.
> + */
> +static int zswap_get_swap_cache_page(swp_entry_t entry,
> +				struct page **retpage)
> +{
> +	struct page *found_page, *new_page = NULL;
> +	int err;
> +
> +	*retpage = NULL;
> +	do {
> +		/*
> +		 * First check the swap cache.  Since this is normally
> +		 * called after lookup_swap_cache() failed, re-calling
> +		 * that would confuse statistics.
> +		 */
> +		found_page = find_get_page(&swapper_space, entry.val);
> +		if (found_page)
> +			break;
> +
> +		/*
> +		 * Get a new page to read into from swap.
> +		 */
> +		if (!new_page) {
> +			new_page = alloc_page(GFP_KERNEL);
> +			if (!new_page)
> +				break; /* Out of memory */
> +		}
> +
> +		/*
> +		 * call radix_tree_preload() while we can wait.
> +		 */
> +		err = radix_tree_preload(GFP_KERNEL);
> +		if (err)
> +			break;
> +
> +		/*
> +		 * Swap entry may have been freed since our caller observed it.
> +		 */
> +		err = swapcache_prepare(entry);
> +		if (err == -EEXIST) { /* seems racy */
> +			radix_tree_preload_end();
> +			continue;
> +		}
> +		if (err) { /* swp entry is obsolete ? */
> +			radix_tree_preload_end();
> +			break;
> +		}
> +
> +		/* May fail (-ENOMEM) if radix-tree node allocation failed. */
> +		__set_page_locked(new_page);
> +		SetPageSwapBacked(new_page);
> +		err = __add_to_swap_cache(new_page, entry);
> +		if (likely(!err)) {
> +			radix_tree_preload_end();
> +			lru_cache_add_anon(new_page);
> +			*retpage = new_page;
> +			return ZSWAP_SWAPCACHE_NEW;
> +		}
> +		radix_tree_preload_end();
> +		ClearPageSwapBacked(new_page);
> +		__clear_page_locked(new_page);
> +		/*
> +		 * add_to_swap_cache() doesn't return -EEXIST, so we can safely
> +		 * clear SWAP_HAS_CACHE flag.
> +		 */
> +		swapcache_free(entry, NULL);
> +	} while (err != -ENOMEM);
> +
> +	if (new_page)
> +		page_cache_release(new_page);
> +	if (!found_page)
> +		return ZSWAP_SWAPCACHE_NOMEM;
> +	*retpage = found_page;
> +	return ZSWAP_SWAPCACHE_EXIST;
> +}
> +
> +/*
> + * Attempts to free an entry by adding a page to the swap cache,
> + * decompressing the entry data into the page, and issuing a
> + * bio write to write the page back to the swap device.
> + *
> + * This can be thought of as a "resumed writeback" of the page
> + * to the swap device.  We are basically resuming the same swap
> + * writeback path that was intercepted with the frontswap_store()
> + * in the first place.  After the page has been decompressed into
> + * the swap cache, the compressed version stored by zswap can be
> + * freed.
> + */
> +static int zswap_writeback_entry(struct zswap_entry *entry)
> +{
> +	unsigned long type = entry->type;
> +	struct zswap_tree *tree = zswap_trees[type];
> +	struct page *page;
> +	swp_entry_t swpentry;
> +	u8 *src, *dst;
> +	unsigned int dlen;
> +	int ret;
> +	struct writeback_control wbc = {
> +		.sync_mode = WB_SYNC_NONE,
> +	};
> +
> +	/* get/allocate page in the swap cache */
> +	swpentry = swp_entry(type, entry->offset);
> +
> +	/* try to allocate swap cache page */
> +	switch (zswap_get_swap_cache_page(swpentry, &page)) {
> +
> +	case ZSWAP_SWAPCACHE_NOMEM: /* no memory */
> +		return -ENOMEM;
> +		break; /* not reached */
> +
> +	case ZSWAP_SWAPCACHE_EXIST: /* page is unlocked */
> +		/* page is already in the swap cache, ignore for now */
> +		return -EEXIST;
> +		break; /* not reached */
> +
> +	case ZSWAP_SWAPCACHE_NEW: /* page is locked */
> +		/* decompress */
> +		dlen = PAGE_SIZE;
> +		src = zs_map_object(tree->pool, entry->handle, ZS_MM_RO);
> +		dst = kmap_atomic(page);
> +		ret = zswap_comp_op(ZSWAP_COMPOP_DECOMPRESS, src, entry->length,
> +				dst, &dlen);
> +		kunmap_atomic(dst);
> +		zs_unmap_object(tree->pool, entry->handle);
> +		BUG_ON(ret);
> +		BUG_ON(dlen != PAGE_SIZE);
> +
> +		/* page is up to date */
> +		SetPageUptodate(page);
> +	}
> +
> +	/* start writeback */
> +	SetPageReclaim(page);
> +	/*
> +	 * Return value is ignored here because it doesn't change anything
> +	 * for us.  Page is returned unlocked.
> +	 */
> +	(void)__swap_writepage(page, &wbc, zswap_end_swap_write);
> +	page_cache_release(page);
> +	atomic_inc(&zswap_outstanding_writebacks);
> +
> +	return 0;
> +}
> +
> +/*
> + * Attempts to free nr of entries via writeback to the swap device.
> + * The number of entries that were actually freed is returned.
> + */
> +static int zswap_writeback_entries(unsigned type, int nr)
> +{
> +	struct zswap_tree *tree = zswap_trees[type];
> +	struct zswap_entry *entry;
> +	int i, ret, refcount, freed_nr = 0;
> +
> +	/*
> +	 * This limit is arbitrary for now, until a better
> +	 * policy can be implemented.  It keeps us from eating
> +	 * all of RAM decompressing pages for writeback.
> +	 */
> +	if (atomic_read(&zswap_outstanding_writebacks) >
> +		ZSWAP_MAX_OUTSTANDING_FLUSHES)
> +		return 0;
> +
> +	for (i = 0; i < nr; i++) {
> +		spin_lock(&tree->lock);
> +
> +		/* dequeue from lru */
> +		if (list_empty(&tree->lru)) {
> +			spin_unlock(&tree->lock);
> +			break;
> +		}
> +		entry = list_first_entry(&tree->lru,
> +				struct zswap_entry, lru);
> +		list_del_init(&entry->lru);
> +
> +		/* so invalidate doesn't free the entry from under us */
> +		zswap_entry_get(entry);
> +
> +		spin_unlock(&tree->lock);
> +
> +		/* attempt writeback */
> +		ret = zswap_writeback_entry(entry);
> +
> +		spin_lock(&tree->lock);
> +
> +		/* drop reference from above */
> +		refcount = zswap_entry_put(entry);
> +
> +		if (!ret)
> +			 /* drop the initial reference from entry creation */
> +			refcount = zswap_entry_put(entry);
> +
> +		/*
> +		 * There are three possible values for refcount here:
> +		 * (1) refcount is 1, load is in progress or writeback failed;
> +		 *     do not free entry, add back to LRU
> +		 * (2) refcount is 0, (usual case) not invalidated yet;
> +		 *     free entry
> +		 * (3) refcount is -1, invalidate happened during writeback;
> +		 *     free entry
> +		 */
> +		if (refcount > 0)
> +			list_add(&entry->lru, &tree->lru);
> +		spin_unlock(&tree->lock);
> +
> +		if (refcount <= 0) {
> +			/* free the entry */
> +			if (refcount == 0)
> +				/* no invalidate yet, remove from rbtree */
> +				rb_erase(&entry->rbnode, &tree->rbroot);
> +			zswap_free_entry(tree, entry);
> +			freed_nr++;
> +		}
> +			
> +		if (atomic_read(&zswap_outstanding_writebacks) >
> +			ZSWAP_MAX_OUTSTANDING_FLUSHES)
> +			break;
> +	}
> +	return freed_nr;
> +}
> +
> +/*******************************************
> +* page pool for temporary compression result
> +********************************************/
> +#define ZSWAP_TMPPAGE_POOL_PAGES 16

Why not use the number of online CPUs instead?

> +static LIST_HEAD(zswap_tmppage_list);
> +static DEFINE_SPINLOCK(zswap_tmppage_lock);
> +
> +static void zswap_tmppage_pool_destroy(void)
> +{
> +	struct page *page, *tmppage;
> +
> +	spin_lock(&zswap_tmppage_lock);
> +	list_for_each_entry_safe(page, tmppage, &zswap_tmppage_list, lru) {
> +		list_del(&page->lru);
> +		__free_pages(page, 1);
> +	}
> +	spin_unlock(&zswap_tmppage_lock);
> +}
> +
> +static int zswap_tmppage_pool_create(void)
> +{
> +	int i;
> +	struct page *page;
> +
> +	for (i = 0; i < ZSWAP_TMPPAGE_POOL_PAGES; i++) {
> +		page = alloc_pages(GFP_KERNEL, 1);
> +		if (!page) {
> +			zswap_tmppage_pool_destroy();
> +			return -ENOMEM;
> +		}
> +		spin_lock(&zswap_tmppage_lock);
> +		list_add(&page->lru, &zswap_tmppage_list);
> +		spin_unlock(&zswap_tmppage_lock);
> +	}
> +	return 0;
> +}
> +
> +static inline struct page *zswap_tmppage_alloc(void)
> +{
> +	struct page *page;
> +
> +	spin_lock(&zswap_tmppage_lock);
> +	if (list_empty(&zswap_tmppage_list)) {
> +		spin_unlock(&zswap_tmppage_lock);
> +		return NULL;
> +	}
> +	page = list_first_entry(&zswap_tmppage_list, struct page, lru);
> +	list_del(&page->lru);
> +	spin_unlock(&zswap_tmppage_lock);
> +	return page;
> +}
> +
> +static inline void zswap_tmppage_free(struct page *page)
> +{
> +	spin_lock(&zswap_tmppage_lock);
> +	list_add(&page->lru, &zswap_tmppage_list);
> +	spin_unlock(&zswap_tmppage_lock);
> +}
> +
>   /*********************************
>   * frontswap hooks
>   **********************************/
> @@ -380,7 +771,9 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
>   	unsigned int dlen = PAGE_SIZE;
>   	unsigned long handle;
>   	char *buf;
> -	u8 *src, *dst;
> +	u8 *src, *dst, *tmpdst;
> +	struct page *tmppage;
> +	bool writeback_attempted = 0;
>   
>   	if (!tree) {
>   		ret = -ENODEV;
> @@ -394,12 +787,12 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
>   	kunmap_atomic(src);
>   	if (ret) {
>   		ret = -EINVAL;
> -		goto putcpu;
> +		goto freepage;
>   	}
>   	if ((dlen * 100 / PAGE_SIZE) > zswap_max_compression_ratio) {
>   		zswap_reject_compress_poor++;
>   		ret = -E2BIG;
> -		goto putcpu;
> +		goto freepage;
>   	}
>   
>   	/* store */
> @@ -407,15 +800,46 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
>   		__GFP_NORETRY | __GFP_HIGHMEM | __GFP_NOMEMALLOC |
>   			__GFP_NOWARN);
>   	if (!handle) {
> -		zswap_reject_zsmalloc_fail++;
> -		ret = -ENOMEM;
> -		goto putcpu;
> +		zswap_writeback_attempted++;
> +		/*
> +		 * Copy compressed buffer out of per-cpu storage so
> +		 * we can re-enable preemption.
> +		 */

Why is re-enabling preemption here so important?

> +		tmppage = zswap_tmppage_alloc();
> +		if (!tmppage) {
> +			zswap_reject_tmppage_fail++;
> +			ret = -ENOMEM;
> +			goto freepage;
> +		}
> +		writeback_attempted = 1;
> +		tmpdst = page_address(tmppage);
> +		memcpy(tmpdst, dst, dlen);
> +		dst = tmpdst;
> +		put_cpu_var(zswap_dstmem);
> +
> +		/* try to free up some space */
> +		/* TODO: replace with more targeted policy */
> +		zswap_writeback_entries(type, 16);
> +		/* try again, allowing wait */
> +		handle = zs_malloc(tree->pool, dlen,
> +			__GFP_NORETRY | __GFP_HIGHMEM | __GFP_NOMEMALLOC |
> +				__GFP_NOWARN);
> +		if (!handle) {
> +			/* still no space, fail */
> +			zswap_reject_zsmalloc_fail++;
> +			ret = -ENOMEM;
> +			goto freepage;
> +		}
> +		zswap_saved_by_writeback++;
>   	}
>   
>   	buf = zs_map_object(tree->pool, handle, ZS_MM_WO);
>   	memcpy(buf, dst, dlen);
>   	zs_unmap_object(tree->pool, handle);
> -	put_cpu_var(zswap_dstmem);
> +	if (writeback_attempted)
> +		zswap_tmppage_free(tmppage);
> +	else
> +		put_cpu_var(zswap_dstmem);
>   
>   	/* allocate entry */
>   	entry = zswap_entry_cache_alloc(GFP_KERNEL);
> @@ -438,16 +862,17 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
>   		ret = zswap_rb_insert(&tree->rbroot, entry, &dupentry);
>   		if (ret == -EEXIST) {
>   			zswap_duplicate_entry++;
> -
> -			/* remove from rbtree */
> +			/* remove from rbtree and lru */
>   			rb_erase(&dupentry->rbnode, &tree->rbroot);
> -			
> -			/* free */
> -			zs_free(tree->pool, dupentry->handle);
> -			zswap_entry_cache_free(dupentry);
> -			atomic_dec(&zswap_stored_pages);
> +			if (!list_empty(&dupentry->lru))
> +				list_del_init(&dupentry->lru);
> +			if (!zswap_entry_put(dupentry)) {
> +				/* free */
> +				zswap_free_entry(tree, dupentry);
> +			}
>   		}
>   	} while (ret == -EEXIST);
> +	list_add_tail(&entry->lru, &tree->lru);
>   	spin_unlock(&tree->lock);
>   
>   	/* update stats */
> @@ -455,8 +880,11 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
>   
>   	return 0;
>   
> -putcpu:
> -	put_cpu_var(zswap_dstmem);
> +freepage:
> +	if (writeback_attempted)
> +		zswap_tmppage_free(tmppage);
> +	else
> +		put_cpu_var(zswap_dstmem);
>   reject:
>   	return ret;
>   }
> @@ -472,10 +900,21 @@ static int zswap_frontswap_load(unsigned type, pgoff_t offset,
>   	struct zswap_entry *entry;
>   	u8 *src, *dst;
>   	unsigned int dlen;
> +	int refcount;
>   
>   	/* find */
>   	spin_lock(&tree->lock);
>   	entry = zswap_rb_search(&tree->rbroot, offset);
> +	if (!entry) {
> +		/* entry was written_back */
> +		spin_unlock(&tree->lock);
> +		return -1;
> +	}
> +	zswap_entry_get(entry);
> +
> +	/* remove from lru */
> +	if (!list_empty(&entry->lru))
> +		list_del_init(&entry->lru);
>   	spin_unlock(&tree->lock);
>   
>   	/* decompress */
> @@ -487,6 +926,24 @@ static int zswap_frontswap_load(unsigned type, pgoff_t offset,
>   	kunmap_atomic(dst);
>   	zs_unmap_object(tree->pool, entry->handle);
>   
> +	spin_lock(&tree->lock);
> +	refcount = zswap_entry_put(entry);
> +	if (likely(refcount)) {
> +		list_add_tail(&entry->lru, &tree->lru);
> +		spin_unlock(&tree->lock);
> +		return 0;
> +	}
> +	spin_unlock(&tree->lock);
> +
> +	/*
> +	 * We don't have to unlink from the rbtree because
> +	 * zswap_writeback_entry() or zswap_frontswap_invalidate page()
> +	 * has already done this for us if we are the last reference.
> +	 */
> +	/* free */
> +
> +	zswap_free_entry(tree, entry);
> +
>   	return 0;
>   }
>   
> @@ -495,19 +952,34 @@ static void zswap_frontswap_invalidate_page(unsigned type, pgoff_t offset)
>   {
>   	struct zswap_tree *tree = zswap_trees[type];
>   	struct zswap_entry *entry;
> +	int refcount;
>   
>   	/* find */
>   	spin_lock(&tree->lock);
>   	entry = zswap_rb_search(&tree->rbroot, offset);
> +	if (!entry) {
> +		/* entry was written back */
> +		spin_unlock(&tree->lock);
> +		return;
> +	}
>   
> -	/* remove from rbtree */
> +	/* remove from rbtree and lru */
>   	rb_erase(&entry->rbnode, &tree->rbroot);
> +	if (!list_empty(&entry->lru))
> +		list_del_init(&entry->lru);
> +
> +	/* drop the initial reference from entry creation */
> +	refcount = zswap_entry_put(entry);
> +
>   	spin_unlock(&tree->lock);
>   
> +	if (refcount) {
> +		/* writeback in progress, writeback will free */
> +		return;
> +	}
> +
>   	/* free */
> -	zs_free(tree->pool, entry->handle);
> -	zswap_entry_cache_free(entry);
> -	atomic_dec(&zswap_stored_pages);
> +	zswap_free_entry(tree, entry);
>   }
>   
>   /* invalidates all pages for the given swap type */
> @@ -531,6 +1003,7 @@ static void zswap_frontswap_invalidate_area(unsigned type)
>   		node = next;
>   	}
>   	tree->rbroot = RB_ROOT;
> +	INIT_LIST_HEAD(&tree->lru);
>   	spin_unlock(&tree->lock);
>   }
>   
> @@ -546,6 +1019,7 @@ static void zswap_frontswap_init(unsigned type)
>   	if (!tree->pool)
>   		goto freetree;
>   	tree->rbroot = RB_ROOT;
> +	INIT_LIST_HEAD(&tree->lru);
>   	spin_lock_init(&tree->lock);
>   	zswap_trees[type] = tree;
>   	return;
> @@ -581,20 +1055,30 @@ static int __init zswap_debugfs_init(void)
>   	if (!zswap_debugfs_root)
>   		return -ENOMEM;
>   
> +	debugfs_create_u64("saved_by_writeback", S_IRUGO,
> +			zswap_debugfs_root, &zswap_saved_by_writeback);
>   	debugfs_create_u64("pool_limit_hit", S_IRUGO,
>   			zswap_debugfs_root, &zswap_pool_limit_hit);
> +	debugfs_create_u64("reject_writeback_attempted", S_IRUGO,
> +			zswap_debugfs_root, &zswap_writeback_attempted);
> +	debugfs_create_u64("reject_tmppage_fail", S_IRUGO,
> +			zswap_debugfs_root, &zswap_reject_tmppage_fail);
>   	debugfs_create_u64("reject_zsmalloc_fail", S_IRUGO,
>   			zswap_debugfs_root, &zswap_reject_zsmalloc_fail);
>   	debugfs_create_u64("reject_kmemcache_fail", S_IRUGO,
>   			zswap_debugfs_root, &zswap_reject_kmemcache_fail);
>   	debugfs_create_u64("reject_compress_poor", S_IRUGO,
>   			zswap_debugfs_root, &zswap_reject_compress_poor);
> +	debugfs_create_u64("written_back_pages", S_IRUGO,
> +			zswap_debugfs_root, &zswap_written_back_pages);
>   	debugfs_create_u64("duplicate_entry", S_IRUGO,
>   			zswap_debugfs_root, &zswap_duplicate_entry);
>   	debugfs_create_atomic_t("pool_pages", S_IRUGO,
>   			zswap_debugfs_root, &zswap_pool_pages);
>   	debugfs_create_atomic_t("stored_pages", S_IRUGO,
>   			zswap_debugfs_root, &zswap_stored_pages);
> +	debugfs_create_atomic_t("outstanding_writebacks", S_IRUGO,
> +			zswap_debugfs_root, &zswap_outstanding_writebacks);
>   
>   	return 0;
>   }
> @@ -629,6 +1113,10 @@ static int __init init_zswap(void)
>   		pr_err("page pool initialization failed\n");
>   		goto pagepoolfail;
>   	}
> +	if (zswap_tmppage_pool_create()) {
> +		pr_err("tmppage pool initialization failed\n");
> +		goto tmppoolfail;
> +	}
>   	if (zswap_comp_init()) {
>   		pr_err("compressor initialization failed\n");
>   		goto compfail;
> @@ -644,6 +1132,8 @@ static int __init init_zswap(void)
>   pcpufail:
>   	zswap_comp_exit();
>   compfail:
> +	zswap_tmppage_pool_destroy();
> +tmppoolfail:
>   	zswap_page_pool_destroy();
>   pagepoolfail:
>   	zswap_entry_cache_destory();


^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCHv5 2/8] zsmalloc: add documentation
       [not found] ` <1360780731-11708-3-git-send-email-sjenning@linux.vnet.ibm.com>
@ 2013-02-16  6:21   ` Ric Mason
  2013-02-18 19:16     ` Seth Jennings
  0 siblings, 1 reply; 38+ messages in thread
From: Ric Mason @ 2013-02-16  6:21 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches,
	linux-mm, linux-kernel, devel

On 02/14/2013 02:38 AM, Seth Jennings wrote:
> This patch adds a documentation file for zsmalloc at
> Documentation/vm/zsmalloc.txt
>
> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
> ---
>   Documentation/vm/zsmalloc.txt |   68 +++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 68 insertions(+)
>   create mode 100644 Documentation/vm/zsmalloc.txt
>
> diff --git a/Documentation/vm/zsmalloc.txt b/Documentation/vm/zsmalloc.txt
> new file mode 100644
> index 0000000..85aa617
> --- /dev/null
> +++ b/Documentation/vm/zsmalloc.txt
> @@ -0,0 +1,68 @@
> +zsmalloc Memory Allocator
> +
> +Overview
> +
> +zsmalloc is a new slab-based memory allocator
> +for storing compressed pages.  It is designed for
> +low fragmentation and a high allocation success rate for
> +large, but <= PAGE_SIZE, allocations.
> +
> +zsmalloc differs from the kernel slab allocator in two primary
> +ways to achieve these design goals.
> +
> +zsmalloc never requires high order page allocations to back
> +slabs, or "size classes" in zsmalloc terms. Instead it allows
> +multiple single-order pages to be stitched together into a
> +"zspage" which backs the slab.  This allows for higher allocation
> +success rate under memory pressure.
> +
> +Also, zsmalloc allows objects to span page boundaries within the
> +zspage.  This allows for lower fragmentation than could be had
> +with the kernel slab allocator for objects between PAGE_SIZE/2
> +and PAGE_SIZE.  With the kernel slab allocator, if a page compresses
> +to 60% of its original size, the memory savings gained through
> +compression are lost to fragmentation because another object of
> +the same size can't be stored in the leftover space.
> +
> +This ability to span pages results in zsmalloc allocations not being
> +directly addressable by the user.  The user is given a
> +non-dereferenceable handle in response to an allocation request.
> +That handle must be mapped, using zs_map_object(), which returns
> +a pointer to the mapped region that can be used.  The mapping is
> +necessary since the object data may reside in two different
> +noncontiguous pages.

Do you mean that the reason a zsmalloc object must be mapped after
allocation is that the object data may reside in two different
noncontiguous pages?

> +
> +For 32-bit systems, zsmalloc has the added benefit of being
> +able to back slabs with HIGHMEM pages, something not possible

What's the meaning of "back slabs with HIGHMEM pages"?

> +with the kernel slab allocators (SLAB or SLUB).
> +
> +Usage:
> +
> +#include <linux/zsmalloc.h>
> +
> +/* create a new pool */
> +struct zs_pool *pool = zs_create_pool("mypool", GFP_KERNEL);
> +
> +/* allocate a 256 byte object */
> +unsigned long handle = zs_malloc(pool, 256);
> +
> +/*
> + * Map the object to get a dereferenceable pointer in "read-write mode"
> + * (see zsmalloc.h for additional modes)
> + */
> +void *ptr = zs_map_object(pool, handle, ZS_MM_RW);
> +
> +/* do something with ptr */
> +
> +/*
> + * Unmap the object when done dealing with it. You should try to
> + * minimize the time for which the object is mapped since preemption
> + * is disabled during the mapped period.
> + */
> +zs_unmap_object(pool, handle);
> +
> +/* free the object */
> +zs_free(pool, handle);
> +
> +/* destroy the pool */
> +zs_destroy_pool(pool);



* Re: [PATCHv5 1/8] zsmalloc: add to mm/
  2013-02-16  3:26   ` [PATCHv5 1/8] zsmalloc: add to mm/ Ric Mason
@ 2013-02-18 19:04     ` Seth Jennings
  0 siblings, 0 replies; 38+ messages in thread
From: Seth Jennings @ 2013-02-18 19:04 UTC (permalink / raw)
  To: Ric Mason
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches,
	linux-mm, linux-kernel, devel

On 02/15/2013 09:26 PM, Ric Mason wrote:
> On 02/14/2013 02:38 AM, Seth Jennings wrote:
>> =========
>> DO NOT MERGE, FOR REVIEW ONLY
>> This patch introduces zsmalloc as new code, however, it already
>> exists in drivers/staging.  In order to build successfully, you
>> must select EITHER the drivers/staging version OR this version.
>> Once zsmalloc is reviewed in this format (and hopefully accepted),
>> I will create a new patchset that properly promotes zsmalloc from
>> staging.
>> =========
>>
>> This patchset introduces a new slab-based memory allocator,
>> zsmalloc, for storing compressed pages.  It is designed for
>> low fragmentation and a high allocation success rate for
>> large, but <= PAGE_SIZE, allocations.
>>
>> zsmalloc differs from the kernel slab allocator in two primary
>> ways to achieve these design goals.
>>
>> zsmalloc never requires high order page allocations to back
>> slabs, or "size classes" in zsmalloc terms. Instead it allows
>> multiple single-order pages to be stitched together into a
>> "zspage" which backs the slab.  This allows for higher allocation
>> success rate under memory pressure.
>>
>> Also, zsmalloc allows objects to span page boundaries within the
>> zspage.  This allows for lower fragmentation than could be had
>> with the kernel slab allocator for objects between PAGE_SIZE/2
>> and PAGE_SIZE.  With the kernel slab allocator, if a page compresses
>> to 60% of its original size, the memory savings gained through
>> compression are lost to fragmentation because another object of
>> the same size can't be stored in the leftover space.
> 
> Why you say so? slab/slub allocator both have policies to setup
> suitable order of pages in each slab cache in order to reduce
> fragmentation. Which codes show you slab object can't span page
> boundaries? Could you pointed out to me?

I might need to reword this. What I meant to say is "non-contiguous
page boundaries".

zsmalloc allows an object to span non-contiguous page boundaries
within a zspage.  This obviously can't be done by slab/slub since they
give addresses directly to users and the object data must be
contiguous.  This is one reason why zsmalloc allocations require
mapping to obtain a usable address.
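
To illustrate, here is a minimal userspace sketch of that idea -- the
page size, struct, and function names below are all invented, and this
is not the actual zsmalloc code.  An "object" lives in the tail of one
page and the head of another, non-adjacent page, and mapping builds one
contiguous, dereferenceable view for the caller:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define PG_SIZE 4096

/* Hypothetical stand-in for an object that straddles two
 * non-contiguous pages within a zspage. */
struct split_obj {
	unsigned char *page_a;	/* object starts at the end of this page */
	unsigned char *page_b;	/* and continues at the start of this one */
	size_t len_a, len_b;	/* bytes stored in each page */
};

/* Rough analogue of zs_map_object(): copy both pieces into one
 * contiguous buffer so the caller gets a normal pointer. */
static unsigned char *sketch_map(const struct split_obj *o)
{
	unsigned char *buf = malloc(o->len_a + o->len_b);

	memcpy(buf, o->page_a + PG_SIZE - o->len_a, o->len_a);
	memcpy(buf + o->len_a, o->page_b, o->len_b);
	return buf;
}

/* Demo: store a 300-byte pattern split 200/100 across two pages,
 * then verify the mapped view reassembles the original pattern. */
static int sketch_demo(void)
{
	struct split_obj o = {
		.page_a = malloc(PG_SIZE), .page_b = malloc(PG_SIZE),
		.len_a = 200, .len_b = 100,
	};
	unsigned char *view;
	int i, ok = 1;

	for (i = 0; i < 200; i++)
		o.page_a[PG_SIZE - 200 + i] = (unsigned char)i;
	for (i = 0; i < 100; i++)
		o.page_b[i] = (unsigned char)(200 + i);

	view = sketch_map(&o);
	for (i = 0; i < 300; i++)
		ok &= (view[i] == (unsigned char)i);

	free(view);
	free(o.page_a);
	free(o.page_b);
	return ok;
}
```

The real allocator maps the pages in place rather than copying, but the
effect for the caller is the same: a contiguous view of data that is
physically split.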

Thanks,
Seth



* Re: [PATCHv5 2/8] zsmalloc: add documentation
  2013-02-16  6:21   ` [PATCHv5 2/8] zsmalloc: add documentation Ric Mason
@ 2013-02-18 19:16     ` Seth Jennings
  2013-02-21  8:49       ` Ric Mason
  0 siblings, 1 reply; 38+ messages in thread
From: Seth Jennings @ 2013-02-18 19:16 UTC (permalink / raw)
  To: Ric Mason
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches,
	linux-mm, linux-kernel, devel

On 02/16/2013 12:21 AM, Ric Mason wrote:
> On 02/14/2013 02:38 AM, Seth Jennings wrote:
>> This patch adds a documentation file for zsmalloc at
>> Documentation/vm/zsmalloc.txt
>>
>> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
>> ---
>>   Documentation/vm/zsmalloc.txt |   68
>> +++++++++++++++++++++++++++++++++++++++++
>>   1 file changed, 68 insertions(+)
>>   create mode 100644 Documentation/vm/zsmalloc.txt
>>
>> diff --git a/Documentation/vm/zsmalloc.txt
>> b/Documentation/vm/zsmalloc.txt
>> new file mode 100644
>> index 0000000..85aa617
>> --- /dev/null
>> +++ b/Documentation/vm/zsmalloc.txt
>> @@ -0,0 +1,68 @@
>> +zsmalloc Memory Allocator
>> +
>> +Overview
>> +
>> +zsmalloc is a new slab-based memory allocator
>> +for storing compressed pages.  It is designed for
>> +low fragmentation and a high allocation success rate for
>> +large, but <= PAGE_SIZE, allocations.
>> +
>> +zsmalloc differs from the kernel slab allocator in two primary
>> +ways to achieve these design goals.
>> +
>> +zsmalloc never requires high order page allocations to back
>> +slabs, or "size classes" in zsmalloc terms. Instead it allows
>> +multiple single-order pages to be stitched together into a
>> +"zspage" which backs the slab.  This allows for higher allocation
>> +success rate under memory pressure.
>> +
>> +Also, zsmalloc allows objects to span page boundaries within the
>> +zspage.  This allows for lower fragmentation than could be had
>> +with the kernel slab allocator for objects between PAGE_SIZE/2
>> +and PAGE_SIZE.  With the kernel slab allocator, if a page compresses
>> +to 60% of its original size, the memory savings gained through
>> +compression are lost to fragmentation because another object of
>> +the same size can't be stored in the leftover space.
>> +
>> +This ability to span pages results in zsmalloc allocations not being
>> +directly addressable by the user.  The user is given a
>> +non-dereferenceable handle in response to an allocation request.
>> +That handle must be mapped, using zs_map_object(), which returns
>> +a pointer to the mapped region that can be used.  The mapping is
>> +necessary since the object data may reside in two different
>> +noncontiguous pages.
> 
> Do you mean that the reason a zsmalloc object must be mapped after
> allocation is that the object data may reside in two different
> noncontiguous pages?

Yes, that is one reason for the mapping.  The other reason (more of an
added bonus) is below.

> 
>> +
>> +For 32-bit systems, zsmalloc has the added benefit of being
>> +able to back slabs with HIGHMEM pages, something not possible
> 
> What's the meaning of "back slabs with HIGHMEM pages"?

By HIGHMEM, I'm referring to the HIGHMEM memory zone on 32-bit systems
with more than 1GB (actually a little less) of RAM.  The upper 3GB
of the 4GB physical address space, depending on kernel build options, is not
directly addressable by the kernel, but can be mapped into the kernel
address space with functions like kmap() or kmap_atomic().

These pages can't be used by slab/slub because they are not
permanently mapped into the kernel address space.  However, since
zsmalloc requires a mapping anyway to handle objects that span
non-contiguous page boundaries, we do the kernel mapping as part of
the process.

So zspages, the conceptual slabs in zsmalloc backed by single-order
pages, can include pages from the HIGHMEM zone as well.

Seth



* Re: [PATCHv5 4/8] zswap: add to mm/
  2013-02-16  4:04   ` [PATCHv5 4/8] zswap: add to mm/ Ric Mason
@ 2013-02-18 19:24     ` Seth Jennings
  2013-02-18 19:49       ` Cody P Schafer
  2013-02-18 19:55       ` Dan Magenheimer
  0 siblings, 2 replies; 38+ messages in thread
From: Seth Jennings @ 2013-02-18 19:24 UTC (permalink / raw)
  To: Ric Mason
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches,
	linux-mm, linux-kernel, devel

On 02/15/2013 10:04 PM, Ric Mason wrote:
> On 02/14/2013 02:38 AM, Seth Jennings wrote:
<snip>
>> + * The statistics below are not protected from concurrent access for
>> + * performance reasons so they may not be a 100% accurate.  However,
>> + * the do provide useful information on roughly how many times a
> 
> s/the/they

Ah yes, thanks :)

> 
>> + * certain event is occurring.
>> +*/
>> +static u64 zswap_pool_limit_hit;
>> +static u64 zswap_reject_compress_poor;
>> +static u64 zswap_reject_zsmalloc_fail;
>> +static u64 zswap_reject_kmemcache_fail;
>> +static u64 zswap_duplicate_entry;
>> +
>> +/*********************************
>> +* tunables
>> +**********************************/
>> +/* Enable/disable zswap (disabled by default, fixed at boot for
>> now) */
>> +static bool zswap_enabled;
>> +module_param_named(enabled, zswap_enabled, bool, 0);
> 
> please document in Documentation/kernel-parameters.txt.

Will do.

> 
>> +
>> +/* Compressor to be used by zswap (fixed at boot for now) */
>> +#define ZSWAP_COMPRESSOR_DEFAULT "lzo"
>> +static char *zswap_compressor = ZSWAP_COMPRESSOR_DEFAULT;
>> +module_param_named(compressor, zswap_compressor, charp, 0);
> 
> ditto

ditto

> 
>> +
<snip>
>> +/* invalidates all pages for the given swap type */
>> +static void zswap_frontswap_invalidate_area(unsigned type)
>> +{
>> +    struct zswap_tree *tree = zswap_trees[type];
>> +    struct rb_node *node, *next;
>> +    struct zswap_entry *entry;
>> +
>> +    if (!tree)
>> +        return;
>> +
>> +    /* walk the tree and free everything */
>> +    spin_lock(&tree->lock);
>> +    node = rb_first(&tree->rbroot);
>> +    while (node) {
>> +        entry = rb_entry(node, struct zswap_entry, rbnode);
>> +        zs_free(tree->pool, entry->handle);
>> +        next = rb_next(node);
>> +        zswap_entry_cache_free(entry);
>> +        node = next;
>> +    }
>> +    tree->rbroot = RB_ROOT;
> 
> Why don't you need rb_erase() for every node?

We are freeing the entire tree here.  try_to_unuse() in the swapoff
syscall should have already emptied the tree, but this is here for
completeness.

rb_erase() will do things like rebalancing the tree; something that
just wastes time since we are in the process of freeing the whole
tree.  We are holding the tree lock here so we are sure that no one
else is accessing the tree while it is in this transient broken state.

Thanks,
Seth



* Re: [PATCHv5 7/8] zswap: add swap page writeback support
  2013-02-16  6:11   ` [PATCHv5 7/8] zswap: add swap page writeback support Ric Mason
@ 2013-02-18 19:32     ` Seth Jennings
  0 siblings, 0 replies; 38+ messages in thread
From: Seth Jennings @ 2013-02-18 19:32 UTC (permalink / raw)
  To: Ric Mason
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches,
	linux-mm, linux-kernel, devel

On 02/16/2013 12:11 AM, Ric Mason wrote:
> On 02/14/2013 02:38 AM, Seth Jennings wrote:
<snip>
>> +/*******************************************
>> +* page pool for temporary compression result
>> +********************************************/
>> +#define ZSWAP_TMPPAGE_POOL_PAGES 16
> 
> Why not use the number of online CPUs?
> 

While the number of online CPUs can change, it would still be better
than this fixed value.

You're the second person to mention this, so I'll fix it up.

Thanks,
Seth



* Re: [PATCHv5 0/8] zswap: compressed swap caching
  2013-02-16  3:20 ` [PATCHv5 0/8] zswap: compressed swap caching Ric Mason
@ 2013-02-18 19:37   ` Seth Jennings
  0 siblings, 0 replies; 38+ messages in thread
From: Seth Jennings @ 2013-02-18 19:37 UTC (permalink / raw)
  To: Ric Mason
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches,
	linux-mm, linux-kernel, devel

On 02/15/2013 09:20 PM, Ric Mason wrote:
> On 02/14/2013 02:38 AM, Seth Jennings wrote:
<snip>
>>
>> Some additional performance metrics regarding the performance
>> improvements and I/O reductions that can be achieved using zswap as
>> measured by SPECjbb are provided here:
>>
>> http://ibm.co/VCgHvM
> 
> I see this link.  You mentioned that "When a user enables zswap and
> the hardware accelerator, zswap simply passes the pages to be
> compressed or decompressed off to the accelerator instead of
> performing the work in software".  So how can the user enable the
> hardware accelerator?  Is there an option in UEFI, or ... ?

zswap uses the cryptographic API for accessing compressor modules.  In
the case of Power7+, we have a crypto API driver (crypto/842.c) which
wraps calls to the real driver (drivers/crypto/nx/nx-842.c) which
makes the hardware calls.

To use a compressor module, set the zswap.compressor attribute on the
kernel command line.  For P7+, for example:

zswap.compressor=842

> 
>> These results include runs on x86 and new results on Power7+ with
>> hardware compression acceleration.

Thanks,
Seth



* Re: [PATCHv5 4/8] zswap: add to mm/
  2013-02-18 19:24     ` Seth Jennings
@ 2013-02-18 19:49       ` Cody P Schafer
  2013-02-18 20:07         ` Seth Jennings
  2013-02-18 19:55       ` Dan Magenheimer
  1 sibling, 1 reply; 38+ messages in thread
From: Cody P Schafer @ 2013-02-18 19:49 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Ric Mason, Andrew Morton, Greg Kroah-Hartman, Nitin Gupta,
	Minchan Kim, Konrad Rzeszutek Wilk, Dan Magenheimer,
	Robert Jennings, Jenifer Hopper, Mel Gorman, Johannes Weiner,
	Rik van Riel, Larry Woodman, Benjamin Herrenschmidt, Dave Hansen,
	Joe Perches, linux-mm, linux-kernel, devel

On 02/18/2013 11:24 AM, Seth Jennings wrote:
> On 02/15/2013 10:04 PM, Ric Mason wrote:
>> On 02/14/2013 02:38 AM, Seth Jennings wrote:
> <snip>
>>> +/* invalidates all pages for the given swap type */
>>> +static void zswap_frontswap_invalidate_area(unsigned type)
>>> +{
>>> +    struct zswap_tree *tree = zswap_trees[type];
>>> +    struct rb_node *node, *next;
>>> +    struct zswap_entry *entry;
>>> +
>>> +    if (!tree)
>>> +        return;
>>> +
>>> +    /* walk the tree and free everything */
>>> +    spin_lock(&tree->lock);
>>> +    node = rb_first(&tree->rbroot);
>>> +    while (node) {
>>> +        entry = rb_entry(node, struct zswap_entry, rbnode);
>>> +        zs_free(tree->pool, entry->handle);
>>> +        next = rb_next(node);
>>> +        zswap_entry_cache_free(entry);
>>> +        node = next;
>>> +    }
>>> +    tree->rbroot = RB_ROOT;
>>
>> Why don't you need rb_erase() for every node?
>
> We are freeing the entire tree here.  try_to_unuse() in the swapoff
> syscall should have already emptied the tree, but this is here for
> completeness.
>
> rb_erase() will do things like rebalancing the tree; something that
> just wastes time since we are in the process of freeing the whole
> tree.  We are holding the tree lock here so we are sure that no one
> else is accessing the tree while it is in this transient broken state.

If we have a sub-tree like:
     ...
    /
   A
  / \
B   C

B == rb_first(tree)
A == rb_next(B)
C == rb_next(A)

The current code frees A (via zswap_entry_cache_free()) prior to
examining C, and thus rb_next(C) results in a use-after-free of A.

You can solve this by doing a post-order traversal of the tree, either

a) in the destructive manner used in a number of filesystems, see 
fs/ubifs/orphan.c ubifs_add_orphan(), for example.

b) or by doing something similar to this commit: 
https://github.com/jmesmon/linux/commit/d9e43aaf9e8a447d6802531d95a1767532339fad 
, which I've been using for some yet-to-be-merged code.
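
As a self-contained userspace sketch of option (a) -- the names here are
invented, and this is neither the ubifs nor the zswap code -- a
destructive post-order walk visits both children before freeing the
parent, so no freed node is ever dereferenced and no rebalancing work
is done (a recursive sketch; kernel variants are typically iterative to
bound stack depth):

```c
#include <assert.h>
#include <stdlib.h>

/* Invented stand-in for the rb_node-bearing entry being freed. */
struct node {
	struct node *left, *right;
};

static struct node *mknode(struct node *l, struct node *r)
{
	struct node *n = malloc(sizeof(*n));

	n->left = l;
	n->right = r;
	return n;
}

/* Destructive post-order free: children first, then the node itself,
 * so we never follow a pointer into memory we have already freed.
 * Returns the number of nodes freed. */
static int postorder_free(struct node *n)
{
	int count = 0;

	if (!n)
		return 0;
	count += postorder_free(n->left);
	count += postorder_free(n->right);
	free(n);
	return count + 1;
}

/* Demo: build the A/B/C shape from the example above plus a root,
 * then free the whole tree. */
static int postorder_demo(void)
{
	struct node *a = mknode(mknode(NULL, NULL),   /* B */
				mknode(NULL, NULL));  /* C */
	struct node *root = mknode(a, NULL);

	return postorder_free(root);	/* frees B, C, A, root */
}
```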



* RE: [PATCHv5 4/8] zswap: add to mm/
  2013-02-18 19:24     ` Seth Jennings
  2013-02-18 19:49       ` Cody P Schafer
@ 2013-02-18 19:55       ` Dan Magenheimer
  2013-02-18 20:39         ` Seth Jennings
  2013-02-20 20:37         ` Seth Jennings
  1 sibling, 2 replies; 38+ messages in thread
From: Dan Magenheimer @ 2013-02-18 19:55 UTC (permalink / raw)
  To: Seth Jennings, Ric Mason
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Wilk, Robert Jennings, Jenifer Hopper, Mel Gorman,
	Johannes Weiner, Rik van Riel, Larry Woodman,
	Benjamin Herrenschmidt, Dave Hansen, Joe Perches, linux-mm,
	linux-kernel, devel

> From: Seth Jennings [mailto:sjenning@linux.vnet.ibm.com]
> Subject: Re: [PATCHv5 4/8] zswap: add to mm/
> 
> On 02/15/2013 10:04 PM, Ric Mason wrote:
> > On 02/14/2013 02:38 AM, Seth Jennings wrote:
> <snip>
> >> + * The statistics below are not protected from concurrent access for
> >> + * performance reasons so they may not be a 100% accurate.  However,
> >> + * the do provide useful information on roughly how many times a
> >
> > s/the/they
> 
> Ah yes, thanks :)
> 
> >
> >> + * certain event is occurring.
> >> +*/
> >> +static u64 zswap_pool_limit_hit;
> >> +static u64 zswap_reject_compress_poor;
> >> +static u64 zswap_reject_zsmalloc_fail;
> >> +static u64 zswap_reject_kmemcache_fail;
> >> +static u64 zswap_duplicate_entry;
> >> +
> >> +/*********************************
> >> +* tunables
> >> +**********************************/
> >> +/* Enable/disable zswap (disabled by default, fixed at boot for
> >> now) */
> >> +static bool zswap_enabled;
> >> +module_param_named(enabled, zswap_enabled, bool, 0);
> >
> > please document in Documentation/kernel-parameters.txt.
> 
> Will do.

Is that a good idea?  Konrad's frontswap/cleancache patches
to fix frontswap/cleancache initialization so that backends
can be built/loaded as modules may be merged for 3.9.
AFAIK, module parameters are not included in kernel-parameters.txt.

Dan


* Re: [PATCHv5 4/8] zswap: add to mm/
  2013-02-18 19:49       ` Cody P Schafer
@ 2013-02-18 20:07         ` Seth Jennings
  0 siblings, 0 replies; 38+ messages in thread
From: Seth Jennings @ 2013-02-18 20:07 UTC (permalink / raw)
  To: Cody P Schafer
  Cc: Ric Mason, Andrew Morton, Greg Kroah-Hartman, Nitin Gupta,
	Minchan Kim, Konrad Rzeszutek Wilk, Dan Magenheimer,
	Robert Jennings, Jenifer Hopper, Mel Gorman, Johannes Weiner,
	Rik van Riel, Larry Woodman, Benjamin Herrenschmidt, Dave Hansen,
	Joe Perches, linux-mm, linux-kernel, devel

On 02/18/2013 01:49 PM, Cody P Schafer wrote:
> On 02/18/2013 11:24 AM, Seth Jennings wrote:
>> On 02/15/2013 10:04 PM, Ric Mason wrote:
>>> On 02/14/2013 02:38 AM, Seth Jennings wrote:
>> <snip>
>>>> +/* invalidates all pages for the given swap type */
>>>> +static void zswap_frontswap_invalidate_area(unsigned type)
>>>> +{
>>>> +    struct zswap_tree *tree = zswap_trees[type];
>>>> +    struct rb_node *node, *next;
>>>> +    struct zswap_entry *entry;
>>>> +
>>>> +    if (!tree)
>>>> +        return;
>>>> +
>>>> +    /* walk the tree and free everything */
>>>> +    spin_lock(&tree->lock);
>>>> +    node = rb_first(&tree->rbroot);
>>>> +    while (node) {
>>>> +        entry = rb_entry(node, struct zswap_entry, rbnode);
>>>> +        zs_free(tree->pool, entry->handle);
>>>> +        next = rb_next(node);
>>>> +        zswap_entry_cache_free(entry);
>>>> +        node = next;
>>>> +    }
>>>> +    tree->rbroot = RB_ROOT;
>>>
>>> Why don't you need rb_erase() for every node?
>>
>> We are freeing the entire tree here.  try_to_unuse() in the swapoff
>> syscall should have already emptied the tree, but this is here for
>> completeness.
>>
>> rb_erase() will do things like rebalancing the tree; something that
>> just wastes time since we are in the process of freeing the whole
>> tree.  We are holding the tree lock here so we are sure that no one
>> else is accessing the tree while it is in this transient broken state.
> 
> If we have a sub-tree like:
>     ...
>    /
>   A
>  / \
> B   C
> 
> B == rb_first(tree)
> A == rb_next(B)
> C == rb_next(A)
> 
> The current code frees A (via zswap_entry_cache_free()) prior to
> examining C, and thus rb_next(C) results in a use-after-free of A.
> 
> You can solve this by doing a post-order traversal of the tree, either
> 
> a) in the destructive manner used in a number of filesystems, see
> fs/ubifs/orphan.c ubifs_add_orphan(), for example.
> 
> b) or by doing something similar to this commit:
> https://github.com/jmesmon/linux/commit/d9e43aaf9e8a447d6802531d95a1767532339fad
> , which I've been using for some yet-to-be-merged code.

Great catch! I'll fix this up.

Thanks,
Seth



* Re: [PATCHv5 4/8] zswap: add to mm/
  2013-02-18 19:55       ` Dan Magenheimer
@ 2013-02-18 20:39         ` Seth Jennings
  2013-02-18 21:59           ` Dan Magenheimer
  2013-02-20 20:37         ` Seth Jennings
  1 sibling, 1 reply; 38+ messages in thread
From: Seth Jennings @ 2013-02-18 20:39 UTC (permalink / raw)
  To: Dan Magenheimer
  Cc: Ric Mason, Andrew Morton, Greg Kroah-Hartman, Nitin Gupta,
	Minchan Kim, Konrad Wilk, Robert Jennings, Jenifer Hopper,
	Mel Gorman, Johannes Weiner, Rik van Riel, Larry Woodman,
	Benjamin Herrenschmidt, Dave Hansen, Joe Perches, linux-mm,
	linux-kernel, devel

On 02/18/2013 01:55 PM, Dan Magenheimer wrote:
>> From: Seth Jennings [mailto:sjenning@linux.vnet.ibm.com]
>> Subject: Re: [PATCHv5 4/8] zswap: add to mm/
>>
>> On 02/15/2013 10:04 PM, Ric Mason wrote:
>>> On 02/14/2013 02:38 AM, Seth Jennings wrote:
>> <snip>
>>>> + * The statistics below are not protected from concurrent access for
>>>> + * performance reasons so they may not be a 100% accurate.  However,
>>>> + * the do provide useful information on roughly how many times a
>>>
>>> s/the/they
>>
>> Ah yes, thanks :)
>>
>>>
>>>> + * certain event is occurring.
>>>> +*/
>>>> +static u64 zswap_pool_limit_hit;
>>>> +static u64 zswap_reject_compress_poor;
>>>> +static u64 zswap_reject_zsmalloc_fail;
>>>> +static u64 zswap_reject_kmemcache_fail;
>>>> +static u64 zswap_duplicate_entry;
>>>> +
>>>> +/*********************************
>>>> +* tunables
>>>> +**********************************/
>>>> +/* Enable/disable zswap (disabled by default, fixed at boot for
>>>> now) */
>>>> +static bool zswap_enabled;
>>>> +module_param_named(enabled, zswap_enabled, bool, 0);
>>>
>>> please document in Documentation/kernel-parameters.txt.
>>
>> Will do.
> 
> Is that a good idea?  Konrad's frontswap/cleancache patches
> to fix frontswap/cleancache initialization so that backends
> can be built/loaded as modules may be merged for 3.9.
> AFAIK, module parameters are not included in kernel-parameters.txt.

This is true.  However, the frontswap/cleancache init stuff isn't the
only reason zswap is built-in only.  The writeback code depends on
non-exported kernel symbols:

swapcache_free
__swap_writepage
__add_to_swap_cache
swapcache_prepare
swapper_space
end_swap_bio_write

I know a fix is as trivial as exporting them, but I didn't want to
take on that debate right now.

Seth



* RE: [PATCHv5 4/8] zswap: add to mm/
  2013-02-18 20:39         ` Seth Jennings
@ 2013-02-18 21:59           ` Dan Magenheimer
  2013-02-18 22:52             ` Seth Jennings
  0 siblings, 1 reply; 38+ messages in thread
From: Dan Magenheimer @ 2013-02-18 21:59 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Ric Mason, Andrew Morton, Greg Kroah-Hartman, Nitin Gupta,
	Minchan Kim, Konrad Wilk, Robert Jennings, Jenifer Hopper,
	Mel Gorman, Johannes Weiner, Rik van Riel, Larry Woodman,
	Benjamin Herrenschmidt, Dave Hansen, Joe Perches, linux-mm,
	linux-kernel, devel

> From: Seth Jennings [mailto:sjenning@linux.vnet.ibm.com]
> Subject: Re: [PATCHv5 4/8] zswap: add to mm/
> 
> On 02/18/2013 01:55 PM, Dan Magenheimer wrote:
> >> From: Seth Jennings [mailto:sjenning@linux.vnet.ibm.com]
> >> Subject: Re: [PATCHv5 4/8] zswap: add to mm/
> >>
> >> On 02/15/2013 10:04 PM, Ric Mason wrote:
> >>>> + * certain event is occurring.
> >>>> +*/
> >>>> +static u64 zswap_pool_limit_hit;
> >>>> +static u64 zswap_reject_compress_poor;
> >>>> +static u64 zswap_reject_zsmalloc_fail;
> >>>> +static u64 zswap_reject_kmemcache_fail;
> >>>> +static u64 zswap_duplicate_entry;
> >>>> +
> >>>> +/*********************************
> >>>> +* tunables
> >>>> +**********************************/
> >>>> +/* Enable/disable zswap (disabled by default, fixed at boot for
> >>>> now) */
> >>>> +static bool zswap_enabled;
> >>>> +module_param_named(enabled, zswap_enabled, bool, 0);
> >>>
> >>> please document in Documentation/kernel-parameters.txt.
> >>
> >> Will do.
> >
> > Is that a good idea?  Konrad's frontswap/cleancache patches
> > to fix frontswap/cleancache initialization so that backends
> > can be built/loaded as modules may be merged for 3.9.
> > AFAIK, module parameters are not included in kernel-parameters.txt.
> 
> This is true.  However, the frontswap/cleancache init stuff isn't the
> only reason zswap is built-in only.  The writeback code depends on
> non-exported kernel symbols:
> 
> swapcache_free
> __swap_writepage
> __add_to_swap_cache
> swapcache_prepare
> swapper_space
> end_swap_bio_write
> 
> I know a fix is as trivial as exporting them, but I didn't want to
> take on that debate right now.

Hmmm... I wonder if exporting these might not be the best solution,
as it (unnecessarily?) exposes some swap subsystem internals.
I wonder if a small change to read_swap_cache_async might
be more acceptable.


* Re: [PATCHv5 4/8] zswap: add to mm/
  2013-02-18 21:59           ` Dan Magenheimer
@ 2013-02-18 22:52             ` Seth Jennings
  2013-02-18 23:17               ` Dan Magenheimer
  0 siblings, 1 reply; 38+ messages in thread
From: Seth Jennings @ 2013-02-18 22:52 UTC (permalink / raw)
  To: Dan Magenheimer
  Cc: Ric Mason, Andrew Morton, Greg Kroah-Hartman, Nitin Gupta,
	Minchan Kim, Konrad Wilk, Robert Jennings, Jenifer Hopper,
	Mel Gorman, Johannes Weiner, Rik van Riel, Larry Woodman,
	Benjamin Herrenschmidt, Dave Hansen, Joe Perches, linux-mm,
	linux-kernel, devel

On 02/18/2013 03:59 PM, Dan Magenheimer wrote:
>> From: Seth Jennings [mailto:sjenning@linux.vnet.ibm.com]
>> Subject: Re: [PATCHv5 4/8] zswap: add to mm/
>>
>> On 02/18/2013 01:55 PM, Dan Magenheimer wrote:
>>>> From: Seth Jennings [mailto:sjenning@linux.vnet.ibm.com]
>>>> Subject: Re: [PATCHv5 4/8] zswap: add to mm/
>>>>
>>>> On 02/15/2013 10:04 PM, Ric Mason wrote:
>>>>>> + * certain event is occurring.
>>>>>> +*/
>>>>>> +static u64 zswap_pool_limit_hit;
>>>>>> +static u64 zswap_reject_compress_poor;
>>>>>> +static u64 zswap_reject_zsmalloc_fail;
>>>>>> +static u64 zswap_reject_kmemcache_fail;
>>>>>> +static u64 zswap_duplicate_entry;
>>>>>> +
>>>>>> +/*********************************
>>>>>> +* tunables
>>>>>> +**********************************/
>>>>>> +/* Enable/disable zswap (disabled by default, fixed at boot for
>>>>>> now) */
>>>>>> +static bool zswap_enabled;
>>>>>> +module_param_named(enabled, zswap_enabled, bool, 0);
>>>>>
>>>>> please document in Documentation/kernel-parameters.txt.
>>>>
>>>> Will do.
>>>
>>> Is that a good idea?  Konrad's frontswap/cleancache patches
>>> to fix frontswap/cleancache initialization so that backends
>>> can be built/loaded as modules may be merged for 3.9.
>>> AFAIK, module parameters are not included in kernel-parameters.txt.
>>
>> This is true.  However, the frontswap/cleancache init stuff isn't the
>> only reason zswap is built-in only.  The writeback code depends on
>> non-exported kernel symbols:
>>
>> swapcache_free
>> __swap_writepage
>> __add_to_swap_cache
>> swapcache_prepare
>> swapper_space
>> end_swap_bio_write
>>
>> I know a fix is as trivial as exporting them, but I didn't want to
>> take on that debate right now.
> 
> Hmmm... I wonder if exporting these might be the best solution
> as it (unnecessarily?) exposes some swap subsystem internals.
> I wonder if a small change to read_swap_cache_async might
> be more acceptable.

Yes, I'm not saying that I'm for exporting them; just that that would
be an easy and probably improper fix.

As I recall, the only thing I really needed to change in my adaptation
of read_swap_cache_async(), zswap_get_swap_cache_page() in zswap, was
the built-in assumption that it is swapping in a page on behalf of a
userspace program, via the vma argument and alloc_page_vma().  Maybe
if we change it to just use alloc_page() when vma is NULL, that could
work.  In a non-NUMA kernel alloc_page_vma() is equivalent to
alloc_page(), so I wouldn't expect weird things from doing that.

Seth


^ permalink raw reply	[flat|nested] 38+ messages in thread

* RE: [PATCHv5 4/8] zswap: add to mm/
  2013-02-18 22:52             ` Seth Jennings
@ 2013-02-18 23:17               ` Dan Magenheimer
  0 siblings, 0 replies; 38+ messages in thread
From: Dan Magenheimer @ 2013-02-18 23:17 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Ric Mason, Andrew Morton, Greg Kroah-Hartman, Nitin Gupta,
	Minchan Kim, Konrad Wilk, Robert Jennings, Jenifer Hopper,
	Mel Gorman, Johannes Weiner, Rik van Riel, Larry Woodman,
	Benjamin Herrenschmidt, Dave Hansen, Joe Perches, linux-mm,
	linux-kernel, devel

> From: Seth Jennings [mailto:sjenning@linux.vnet.ibm.com]
> Subject: Re: [PATCHv5 4/8] zswap: add to mm/
> 
> On 02/18/2013 03:59 PM, Dan Magenheimer wrote:
> >>>>> please document in Documentation/kernel-parameters.txt.
> >>>>
> >>>> Will do.
> >>>
> >>> Is that a good idea?  Konrad's frontswap/cleancache patches
> >>> to fix frontswap/cleancache initialization so that backends
> >>> can be built/loaded as modules may be merged for 3.9.
> >>> AFAIK, module parameters are not included in kernel-parameters.txt.
> >>
> >> This is true.  However, the frontswap/cleancache init stuff isn't the
> >> only reason zswap is built-in only.  The writeback code depends on
> >> non-exported kernel symbols:
> >>
> >> swapcache_free
> >> __swap_writepage
> >> __add_to_swap_cache
> >> swapcache_prepare
> >> swapper_space
> >> end_swap_bio_write
> >>
> >> I know a fix is as trivial as exporting them, but I didn't want to
> >> take on that debate right now.
> >
> > Hmmm... I wonder if exporting these might not be the best solution,
> > as it (unnecessarily?) exposes some swap subsystem internals.
> > I wonder if a small change to read_swap_cache_async might
> > be more acceptable.
> 
> Yes, I'm not saying that I'm for exporting them; just that that would
> be an easy and probably improper fix.
> 
> As I recall, the only thing I really needed to change in my adaptation
> of read_swap_cache_async(), zswap_get_swap_cache_page() in zswap, was
> its built-in assumption that it is swapping in a page on behalf of a
> userspace program, via the vma argument and alloc_page_vma().  Maybe
> if we change it to just use alloc_page() when vma is NULL, that could
> work.  In a non-NUMA kernel alloc_page_vma() is equivalent to
> alloc_page(), so I wouldn't expect weird things from doing that.

The zcache version (zcache_get_swap_cache_page, in linux-next) expects
the new_page to be pre-allocated and passed in.  This could be
done easily with something like the patch below.  But both the
zswap and zcache versions require three distinct return values
and slightly different actions before returning "success", so
some minor surgery will be needed there as well.

With a more generic read_swap_cache_async, I think the only
remaining swap subsystem change might be the modified
__swap_writepage (and possibly the end_swap_bio_write change,
though that seems to be mostly just to modify a counter...
it may not really be needed.)

Oh, and then of course read_swap_cache_async() would need to be
exported.

Dan

diff --git a/mm/swap_state.c b/mm/swap_state.c
index 0cb36fb..c0e2509 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -279,9 +279,10 @@ struct page * lookup_swap_cache(swp_entry_t entry)
  * the swap entry is no longer in use.
  */
 struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
-			struct vm_area_struct *vma, unsigned long addr)
+			struct vm_area_struct *vma, unsigned long addr,
+			struct page *new_page)
 {
-	struct page *found_page, *new_page = NULL;
+	struct page *found_page;
 	int err;
 
 	do {
@@ -389,7 +390,7 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	for (offset = start_offset; offset <= end_offset ; offset++) {
 		/* Ok, do the async read-ahead now */
 		page = read_swap_cache_async(swp_entry(swp_type(entry), offset),
-						gfp_mask, vma, addr);
+						gfp_mask, vma, addr, NULL);
 		if (!page)
 			continue;
 		page_cache_release(page);
@@ -397,5 +398,5 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	blk_finish_plug(&plug);
 
 	lru_add_drain();	/* Push any new pages onto the LRU now */
-	return read_swap_cache_async(entry, gfp_mask, vma, addr);
+	return read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
 }

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* Re: [PATCHv5 1/8] zsmalloc: add to mm/
       [not found] ` <1360780731-11708-2-git-send-email-sjenning@linux.vnet.ibm.com>
  2013-02-16  3:26   ` [PATCHv5 1/8] zsmalloc: add to mm/ Ric Mason
@ 2013-02-19  9:18   ` Joonsoo Kim
  2013-02-19 17:54     ` Seth Jennings
  1 sibling, 1 reply; 38+ messages in thread
From: Joonsoo Kim @ 2013-02-19  9:18 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches,
	linux-mm, linux-kernel, devel

Hello, Seth.
I'm not sure this is the right time to review, since I have already
seen much effort from various people to promote the zxxx series, and I
don't want to be a blocker for that. :)

But I have read the code now, so some comments below.

On Wed, Feb 13, 2013 at 12:38:44PM -0600, Seth Jennings wrote:
> =========
> DO NOT MERGE, FOR REVIEW ONLY
> This patch introduces zsmalloc as new code, however, it already
> exists in drivers/staging.  In order to build successfully, you
> must select EITHER the drivers/staging version OR this version.
> Once zsmalloc is reviewed in this format (and hopefully accepted),
> I will create a new patchset that properly promotes zsmalloc from
> staging.
> =========
> 
> This patchset introduces a new slab-based memory allocator,
> zsmalloc, for storing compressed pages.  It is designed for
> low fragmentation and high allocation success rate on
> large (but <= PAGE_SIZE) allocations.
> 
> zsmalloc differs from the kernel slab allocator in two primary
> ways to achieve these design goals.
> 
> zsmalloc never requires high order page allocations to back
> slabs, or "size classes" in zsmalloc terms. Instead it allows
> multiple single-order pages to be stitched together into a
> "zspage" which backs the slab.  This allows for higher allocation
> success rate under memory pressure.
> 
> Also, zsmalloc allows objects to span page boundaries within the
> zspage.  This allows for lower fragmentation than could be had
> with the kernel slab allocator for objects between PAGE_SIZE/2
> and PAGE_SIZE.  With the kernel slab allocator, if a page compresses
> to 60% of its original size, the memory savings gained through
> compression is lost in fragmentation because another object of
> the same size can't be stored in the leftover space.
> 
> This ability to span pages results in zsmalloc allocations not being
> directly addressable by the user.  The user is given a
> non-dereferenceable handle in response to an allocation request.
> That handle must be mapped, using zs_map_object(), which returns
> a pointer to the mapped region that can be used.  The mapping is
> necessary since the object data may reside in two different
> noncontiguous pages.
> 
> zsmalloc fulfills the allocation needs for zram and zswap.
> 
> Acked-by: Nitin Gupta <ngupta@vflare.org>
> Acked-by: Minchan Kim <minchan@kernel.org>
> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
> ---
>  include/linux/zsmalloc.h |   49 ++
>  mm/Kconfig               |   24 +
>  mm/Makefile              |    1 +
>  mm/zsmalloc.c            | 1124 ++++++++++++++++++++++++++++++++++++++++++++++
>  4 files changed, 1198 insertions(+)
>  create mode 100644 include/linux/zsmalloc.h
>  create mode 100644 mm/zsmalloc.c
> 
> diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
> new file mode 100644
> index 0000000..eb6efb6
> --- /dev/null
> +++ b/include/linux/zsmalloc.h
> @@ -0,0 +1,49 @@
> +/*
> + * zsmalloc memory allocator
> + *
> + * Copyright (C) 2011  Nitin Gupta
> + *
> + * This code is released using a dual license strategy: BSD/GPL
> + * You can choose the license that better fits your requirements.
> + *
> + * Released under the terms of 3-clause BSD License
> + * Released under the terms of GNU General Public License Version 2.0
> + */
> +
> +#ifndef _ZS_MALLOC_H_
> +#define _ZS_MALLOC_H_
> +
> +#include <linux/types.h>
> +#include <linux/mm_types.h>
> +
> +/*
> + * zsmalloc mapping modes
> + *
> + * NOTE: These only make a difference when a mapped object spans pages
> +*/
> +enum zs_mapmode {
> +	ZS_MM_RW, /* normal read-write mapping */
> +	ZS_MM_RO, /* read-only (no copy-out at unmap time) */
> +	ZS_MM_WO /* write-only (no copy-in at map time) */
> +};


These make no difference with PGTABLE_MAPPING.
Please add a comment about this.

> +struct zs_ops {
> +	struct page * (*alloc)(gfp_t);
> +	void (*free)(struct page *);
> +};
> +
> +struct zs_pool;
> +
> +struct zs_pool *zs_create_pool(gfp_t flags, struct zs_ops *ops);
> +void zs_destroy_pool(struct zs_pool *pool);
> +
> +unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t flags);
> +void zs_free(struct zs_pool *pool, unsigned long obj);
> +
> +void *zs_map_object(struct zs_pool *pool, unsigned long handle,
> +			enum zs_mapmode mm);
> +void zs_unmap_object(struct zs_pool *pool, unsigned long handle);
> +
> +u64 zs_get_total_size_bytes(struct zs_pool *pool);
> +
> +#endif
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 278e3ab..25b8f38 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -446,3 +446,27 @@ config FRONTSWAP
>  	  and swap data is stored as normal on the matching swap device.
>  
>  	  If unsure, say Y to enable frontswap.
> +
> +config ZSMALLOC
> +	tristate "Memory allocator for compressed pages"
> +	default n
> +	help
> +	  zsmalloc is a slab-based memory allocator designed to store
> +	  compressed RAM pages.  zsmalloc uses virtual memory mapping
> +	  in order to reduce fragmentation.  However, this results in a
> +	  non-standard allocator interface where a handle, not a pointer, is
> +	  returned by an alloc().  This handle must be mapped in order to
> +	  access the allocated space.
> +
> +config PGTABLE_MAPPING
> +	bool "Use page table mapping to access object in zsmalloc"
> +	depends on ZSMALLOC
> +	help
> +	  By default, zsmalloc uses a copy-based object mapping method to
> +	  access allocations that span two pages. However, if a particular
> +	  architecture (ex, ARM) performs VM mapping faster than copying,
> +	  then you should select this. This causes zsmalloc to use page table
> +	  mapping rather than copying for object mapping.
> +
> +	  You can check speed with zsmalloc benchmark[1].
> +	  [1] https://github.com/spartacus06/zsmalloc
> diff --git a/mm/Makefile b/mm/Makefile
> index 3a46287..0f6ef0a 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -58,3 +58,4 @@ obj-$(CONFIG_DEBUG_KMEMLEAK) += kmemleak.o
>  obj-$(CONFIG_DEBUG_KMEMLEAK_TEST) += kmemleak-test.o
>  obj-$(CONFIG_CLEANCACHE) += cleancache.o
>  obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
> +obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> new file mode 100644
> index 0000000..34378ef
> --- /dev/null
> +++ b/mm/zsmalloc.c
> @@ -0,0 +1,1124 @@
> +/*
> + * zsmalloc memory allocator
> + *
> + * Copyright (C) 2011  Nitin Gupta
> + *
> + * This code is released using a dual license strategy: BSD/GPL
> + * You can choose the license that better fits your requirements.
> + *
> + * Released under the terms of 3-clause BSD License
> + * Released under the terms of GNU General Public License Version 2.0
> + */
> +
> +
> +/*
> + * This allocator is designed for use with zcache and zram. Thus, the
> + * allocator is supposed to work well under low memory conditions. In
> + * particular, it never attempts higher order page allocation which is
> + * very likely to fail under memory pressure. On the other hand, if we
> + * just use single (0-order) pages, it would suffer from very high
> + * fragmentation -- any object of size PAGE_SIZE/2 or larger would occupy
> + * an entire page. This was one of the major issues with its predecessor
> + * (xvmalloc).
> + *
> + * To overcome these issues, zsmalloc allocates a bunch of 0-order pages
> + * and links them together using various 'struct page' fields. These linked
> + * pages act as a single higher-order page i.e. an object can span 0-order
> + * page boundaries. The code refers to these linked pages as a single entity
> + * called zspage.
> + *
> + * For simplicity, zsmalloc can only allocate objects of size up to PAGE_SIZE
> + * since this satisfies the requirements of all its current users (in the
> + * worst case, page is incompressible and is thus stored "as-is" i.e. in
> + * uncompressed form). For allocation requests larger than this size, failure
> + * is returned (see zs_malloc).
> + *
> + * Additionally, zs_malloc() does not return a dereferenceable pointer.
> + * Instead, it returns an opaque handle (unsigned long) which encodes actual
> + * location of the allocated object. The reason for this indirection is that
> + * zsmalloc does not keep zspages permanently mapped since that would cause
> + * issues on 32-bit systems where the VA region for kernel space mappings
> + * is very small. So, before using the allocated memory, the object has to
> + * be mapped using zs_map_object() to get a usable pointer and subsequently
> + * unmapped using zs_unmap_object().
> + *
> + * Following is how we use various fields and flags of underlying
> + * struct page(s) to form a zspage.
> + *
> + * Usage of struct page fields:
> + *	page->first_page: points to the first component (0-order) page
> + *	page->index (union with page->freelist): offset of the first object
> + *		starting in this page. For the first page, this is
> + *		always 0, so we use this field (aka freelist) to point
> + *		to the first free object in zspage.
> + *	page->lru: links together all component pages (except the first page)
> + *		of a zspage
> + *
> + *	For _first_ page only:
> + *
> + *	page->private (union with page->first_page): refers to the
> + *		component page after the first page
> + *	page->freelist: points to the first free object in zspage.
> + *		Free objects are linked together using in-place
> + *		metadata.
> + *	page->objects: maximum number of objects we can store in this
> + *		zspage (class->zspage_order * PAGE_SIZE / class->size)

How about just embedding the maximum number of objects in size_class?
For SLUB, each slab can have a different number of objects, but for
zsmalloc that is not possible, so there is no reason to maintain it
within the zspage metadata. Embedding it in size_class is sufficient.


> + *	page->lru: links together first pages of various zspages.
> + *		Basically forming list of zspages in a fullness group.
> + *	page->mapping: class index and fullness group of the zspage
> + *
> + * Usage of struct page flags:
> + *	PG_private: identifies the first component page
> + *	PG_private2: identifies the last component page
> + *
> + */
> +
> +#ifdef CONFIG_ZSMALLOC_DEBUG
> +#define DEBUG
> +#endif

Is this obsolete?

> +#include <linux/module.h>
> +#include <linux/kernel.h>
> +#include <linux/bitops.h>
> +#include <linux/errno.h>
> +#include <linux/highmem.h>
> +#include <linux/init.h>
> +#include <linux/string.h>
> +#include <linux/slab.h>
> +#include <asm/tlbflush.h>
> +#include <asm/pgtable.h>
> +#include <linux/cpumask.h>
> +#include <linux/cpu.h>
> +#include <linux/vmalloc.h>
> +#include <linux/hardirq.h>
> +#include <linux/spinlock.h>
> +#include <linux/types.h>
> +
> +#include <linux/zsmalloc.h>
> +
> +/*
> + * This must be a power of 2 and greater than or equal to sizeof(link_free).
> + * These two conditions ensure that any 'struct link_free' itself doesn't
> + * span more than 1 page which avoids complex case of mapping 2 pages simply
> + * to restore link_free pointer values.
> + */
> +#define ZS_ALIGN		8
> +
> +/*
> + * A single 'zspage' is composed of up to 2^N discontiguous 0-order (single)
> + * pages. ZS_MAX_ZSPAGE_ORDER defines upper limit on N.
> + */
> +#define ZS_MAX_ZSPAGE_ORDER 2
> +#define ZS_MAX_PAGES_PER_ZSPAGE (_AC(1, UL) << ZS_MAX_ZSPAGE_ORDER)
> +
> +/*
> + * Object location (<PFN>, <obj_idx>) is encoded as
> + * a single (unsigned long) handle value.
> + *
> + * Note that object index <obj_idx> is relative to system
> + * page <PFN> it is stored in, so for each sub-page belonging
> + * to a zspage, obj_idx starts with 0.
> + *
> + * This is made more complicated by various memory models and PAE.
> + */
> +
> +#ifndef MAX_PHYSMEM_BITS
> +#ifdef CONFIG_HIGHMEM64G
> +#define MAX_PHYSMEM_BITS 36
> +#else /* !CONFIG_HIGHMEM64G */
> +/*
> + * If this definition of MAX_PHYSMEM_BITS is used, OBJ_INDEX_BITS will just
> + * be PAGE_SHIFT
> + */
> +#define MAX_PHYSMEM_BITS BITS_PER_LONG
> +#endif
> +#endif
> +#define _PFN_BITS		(MAX_PHYSMEM_BITS - PAGE_SHIFT)
> +#define OBJ_INDEX_BITS	(BITS_PER_LONG - _PFN_BITS)
> +#define OBJ_INDEX_MASK	((_AC(1, UL) << OBJ_INDEX_BITS) - 1)
> +
> +#define MAX(a, b) ((a) >= (b) ? (a) : (b))
> +/* ZS_MIN_ALLOC_SIZE must be multiple of ZS_ALIGN */
> +#define ZS_MIN_ALLOC_SIZE \
> +	MAX(32, (ZS_MAX_PAGES_PER_ZSPAGE << PAGE_SHIFT >> OBJ_INDEX_BITS))
> +#define ZS_MAX_ALLOC_SIZE	PAGE_SIZE
> +
> +/*
> + * On systems with 4K page size, this gives 254 size classes! There is a
> + * trade-off here:
> + *  - Large number of size classes is potentially wasteful as free pages are
> + *    spread across these classes
> + *  - Small number of size classes causes large internal fragmentation
> + *  - Probably it's better to use specific size classes (empirically
> + *    determined). NOTE: all those class sizes must be set as multiple of
> + *    ZS_ALIGN to make sure link_free itself never has to span 2 pages.
> + *
> + *  ZS_MIN_ALLOC_SIZE and ZS_SIZE_CLASS_DELTA must be multiple of ZS_ALIGN
> + *  (reason above)
> + */
> +#define ZS_SIZE_CLASS_DELTA	(PAGE_SIZE >> 8)
> +#define ZS_SIZE_CLASSES		((ZS_MAX_ALLOC_SIZE - ZS_MIN_ALLOC_SIZE) / \
> +					ZS_SIZE_CLASS_DELTA + 1)
> +
> +/*
> + * We do not maintain any list for completely empty or full pages
> + */
> +enum fullness_group {
> +	ZS_ALMOST_FULL,
> +	ZS_ALMOST_EMPTY,
> +	_ZS_NR_FULLNESS_GROUPS,
> +
> +	ZS_EMPTY,
> +	ZS_FULL
> +};
> +
> +/*
> + * We assign a page to ZS_ALMOST_EMPTY fullness group when:
> + *	n <= N / f, where
> + * n = number of allocated objects
> + * N = total number of objects zspage can store
> + * f = 1/fullness_threshold_frac
> + *
> + * Similarly, we assign zspage to:
> + *	ZS_ALMOST_FULL	when n > N / f
> + *	ZS_EMPTY	when n == 0
> + *	ZS_FULL		when n == N
> + *
> + * (see: fix_fullness_group())
> + */
> +static const int fullness_threshold_frac = 4;
> +
> +struct size_class {
> +	/*
> +	 * Size of objects stored in this class. Must be multiple
> +	 * of ZS_ALIGN.
> +	 */
> +	int size;
> +	unsigned int index;
> +
> +	/* Number of PAGE_SIZE sized pages to combine to form a 'zspage' */
> +	int pages_per_zspage;
> +
> +	spinlock_t lock;
> +
> +	/* stats */
> +	u64 pages_allocated;
> +
> +	struct page *fullness_list[_ZS_NR_FULLNESS_GROUPS];
> +};

Instead of a simple pointer, how about using a list_head?
With this, fullness_list management is easily consolidated into
set_zspage_mapping(), and we can remove remove_zspage() and
insert_zspage().

And how about also maintaining FULL and EMPTY lists?
There is not much memory waste, and they could be used for debugging
and for implementing other functionality.

> +
> +/*
> + * Placed within free objects to form a singly linked list.
> + * For every zspage, first_page->freelist gives head of this list.
> + *
> + * This must be a power of 2 and less than or equal to ZS_ALIGN
> + */
> +struct link_free {
> +	/* Handle of next free chunk (encodes <PFN, obj_idx>) */
> +	void *next;
> +};
> +
> +struct zs_pool {
> +	struct size_class size_class[ZS_SIZE_CLASSES];
> +
> +	struct zs_ops *ops;
> +};
> +
> +/*
> + * A zspage's class index and fullness group
> + * are encoded in its (first)page->mapping
> + */
> +#define CLASS_IDX_BITS	28
> +#define FULLNESS_BITS	4
> +#define CLASS_IDX_MASK	((1 << CLASS_IDX_BITS) - 1)
> +#define FULLNESS_MASK	((1 << FULLNESS_BITS) - 1)
> +
> +struct mapping_area {
> +#ifdef CONFIG_PGTABLE_MAPPING
> +	struct vm_struct *vm; /* vm area for mapping object that span pages */
> +#else
> +	char *vm_buf; /* copy buffer for objects that span pages */
> +#endif
> +	char *vm_addr; /* address of kmap_atomic()'ed pages */
> +	enum zs_mapmode vm_mm; /* mapping mode */
> +};
> +
> +/* default page alloc/free ops */
> +struct page *zs_alloc_page(gfp_t flags)
> +{
> +	return alloc_page(flags);
> +}
> +
> +void zs_free_page(struct page *page)
> +{
> +	__free_page(page);
> +}
> +
> +struct zs_ops zs_default_ops = {
> +	.alloc = zs_alloc_page,
> +	.free = zs_free_page
> +};
> +
> +/* per-cpu VM mapping areas for zspage accesses that cross page boundaries */
> +static DEFINE_PER_CPU(struct mapping_area, zs_map_area);
> +
> +static int is_first_page(struct page *page)
> +{
> +	return PagePrivate(page);
> +}
> +
> +static int is_last_page(struct page *page)
> +{
> +	return PagePrivate2(page);
> +}
> +
> +static void get_zspage_mapping(struct page *page, unsigned int *class_idx,
> +				enum fullness_group *fullness)
> +{
> +	unsigned long m;
> +	BUG_ON(!is_first_page(page));
> +
> +	m = (unsigned long)page->mapping;
> +	*fullness = m & FULLNESS_MASK;
> +	*class_idx = (m >> FULLNESS_BITS) & CLASS_IDX_MASK;
> +}
> +
> +static void set_zspage_mapping(struct page *page, unsigned int class_idx,
> +				enum fullness_group fullness)
> +{
> +	unsigned long m;
> +	BUG_ON(!is_first_page(page));
> +
> +	m = ((class_idx & CLASS_IDX_MASK) << FULLNESS_BITS) |
> +			(fullness & FULLNESS_MASK);
> +	page->mapping = (struct address_space *)m;
> +}
> +
> +/*
> + * zsmalloc divides the pool into various size classes where each
> + * class maintains a list of zspages where each zspage is divided
> + * into equal sized chunks. Each allocation falls into one of these
> + * classes depending on its size. This function returns the index of the
> + * size class whose chunk size is big enough to hold the given size.
> + */
> +static int get_size_class_index(int size)
> +{
> +	int idx = 0;
> +
> +	if (likely(size > ZS_MIN_ALLOC_SIZE))
> +		idx = DIV_ROUND_UP(size - ZS_MIN_ALLOC_SIZE,
> +				ZS_SIZE_CLASS_DELTA);
> +
> +	return idx;
> +}
> +
> +/*
> + * For each size class, zspages are divided into different groups
> + * depending on how "full" they are. This was done so that we could
> + * easily find empty or nearly empty zspages when we try to shrink
> + * the pool (not yet implemented). This function returns fullness
> + * status of the given page.
> + */
> +static enum fullness_group get_fullness_group(struct page *page)
> +{
> +	int inuse, max_objects;
> +	enum fullness_group fg;
> +	BUG_ON(!is_first_page(page));
> +
> +	inuse = page->inuse;
> +	max_objects = page->objects;
> +
> +	if (inuse == 0)
> +		fg = ZS_EMPTY;
> +	else if (inuse == max_objects)
> +		fg = ZS_FULL;
> +	else if (inuse <= max_objects / fullness_threshold_frac)
> +		fg = ZS_ALMOST_EMPTY;
> +	else
> +		fg = ZS_ALMOST_FULL;
> +
> +	return fg;
> +}
> +
> +/*
> + * Each size class maintains various freelists and zspages are assigned
> + * to one of these freelists based on the number of live objects they
> + * have. This function inserts the given zspage into the freelist
> + * identified by <class, fullness_group>.
> + */
> +static void insert_zspage(struct page *page, struct size_class *class,
> +				enum fullness_group fullness)
> +{
> +	struct page **head;
> +
> +	BUG_ON(!is_first_page(page));
> +
> +	if (fullness >= _ZS_NR_FULLNESS_GROUPS)
> +		return;
> +
> +	head = &class->fullness_list[fullness];
> +	if (*head)
> +		list_add_tail(&page->lru, &(*head)->lru);
> +
> +	*head = page;
> +}
> +
> +/*
> + * This function removes the given zspage from the freelist identified
> + * by <class, fullness_group>.
> + */
> +static void remove_zspage(struct page *page, struct size_class *class,
> +				enum fullness_group fullness)
> +{
> +	struct page **head;
> +
> +	BUG_ON(!is_first_page(page));
> +
> +	if (fullness >= _ZS_NR_FULLNESS_GROUPS)
> +		return;
> +
> +	head = &class->fullness_list[fullness];
> +	BUG_ON(!*head);
> +	if (list_empty(&(*head)->lru))
> +		*head = NULL;
> +	else if (*head == page)
> +		*head = (struct page *)list_entry((*head)->lru.next,
> +					struct page, lru);
> +
> +	list_del_init(&page->lru);
> +}
> +
> +/*
> + * Each size class maintains zspages in different fullness groups depending
> + * on the number of live objects they contain. When allocating or freeing
> + * objects, the fullness status of the page can change, say, from ALMOST_FULL
> + * to ALMOST_EMPTY when freeing an object. This function checks if such
> + * a status change has occurred for the given page and accordingly moves the
> + * page from the freelist of the old fullness group to that of the new
> + * fullness group.
> + */
> +static enum fullness_group fix_fullness_group(struct zs_pool *pool,
> +						struct page *page)
> +{
> +	int class_idx;
> +	struct size_class *class;
> +	enum fullness_group currfg, newfg;
> +
> +	BUG_ON(!is_first_page(page));
> +
> +	get_zspage_mapping(page, &class_idx, &currfg);
> +	newfg = get_fullness_group(page);
> +	if (newfg == currfg)
> +		goto out;
> +
> +	class = &pool->size_class[class_idx];
> +	remove_zspage(page, class, currfg);
> +	insert_zspage(page, class, newfg);
> +	set_zspage_mapping(page, class_idx, newfg);
> +
> +out:
> +	return newfg;
> +}
> +
> +/*
> + * We have to decide on how many pages to link together
> + * to form a zspage for each size class. This is important
> + * to reduce wastage due to unusable space left at the end of
> + * each zspage which is given as:
> + *	wastage = Zp - Zp % size_class
> + * where Zp = zspage size = k * PAGE_SIZE where k = 1, 2, ...
> + *
> + * For example, for size class of 3/8 * PAGE_SIZE, we should
> + * link together 3 PAGE_SIZE sized pages to form a zspage
> + * since then we can perfectly fit in 8 such objects.
> + */
> +static int get_pages_per_zspage(int class_size)
> +{
> +	int i, max_usedpc = 0;
> +	/* zspage order which gives maximum used size per KB */
> +	int max_usedpc_order = 1;
> +
> +	for (i = 1; i <= ZS_MAX_PAGES_PER_ZSPAGE; i++) {
> +		int zspage_size;
> +		int waste, usedpc;
> +
> +		zspage_size = i * PAGE_SIZE;
> +		waste = zspage_size % class_size;
> +		usedpc = (zspage_size - waste) * 100 / zspage_size;
> +
> +		if (usedpc > max_usedpc) {
> +			max_usedpc = usedpc;
> +			max_usedpc_order = i;
> +		}
> +	}
> +
> +	return max_usedpc_order;
> +}
> +
> +/*
> + * A single 'zspage' is composed of many system pages which are
> + * linked together using fields in struct page. This function finds
> + * the first/head page, given any component page of a zspage.
> + */
> +static struct page *get_first_page(struct page *page)
> +{
> +	if (is_first_page(page))
> +		return page;
> +	else
> +		return page->first_page;
> +}
> +
> +static struct page *get_next_page(struct page *page)
> +{
> +	struct page *next;
> +
> +	if (is_last_page(page))
> +		next = NULL;
> +	else if (is_first_page(page))
> +		next = (struct page *)page->private;
> +	else
> +		next = list_entry(page->lru.next, struct page, lru);
> +
> +	return next;
> +}
> +
> +/* Encode <page, obj_idx> as a single handle value */
> +static void *obj_location_to_handle(struct page *page, unsigned long obj_idx)
> +{
> +	unsigned long handle;
> +
> +	if (!page) {
> +		BUG_ON(obj_idx);
> +		return NULL;
> +	}
> +
> +	handle = page_to_pfn(page) << OBJ_INDEX_BITS;
> +	handle |= (obj_idx & OBJ_INDEX_MASK);
> +
> +	return (void *)handle;
> +}
> +
> +/* Decode <page, obj_idx> pair from the given object handle */
> +static void obj_handle_to_location(unsigned long handle, struct page **page,
> +				unsigned long *obj_idx)
> +{
> +	*page = pfn_to_page(handle >> OBJ_INDEX_BITS);
> +	*obj_idx = handle & OBJ_INDEX_MASK;
> +}
> +
> +static unsigned long obj_idx_to_offset(struct page *page,
> +				unsigned long obj_idx, int class_size)
> +{
> +	unsigned long off = 0;
> +
> +	if (!is_first_page(page))
> +		off = page->index;
> +
> +	return off + obj_idx * class_size;
> +}
> +
> +static void reset_page(struct page *page)
> +{
> +	clear_bit(PG_private, &page->flags);
> +	clear_bit(PG_private_2, &page->flags);
> +	set_page_private(page, 0);
> +	page->mapping = NULL;
> +	page->freelist = NULL;
> +	reset_page_mapcount(page);
> +}
> +
> +static void free_zspage(struct zs_ops *ops, struct page *first_page)
> +{
> +	struct page *nextp, *tmp, *head_extra;
> +
> +	BUG_ON(!is_first_page(first_page));
> +	BUG_ON(first_page->inuse);
> +
> +	head_extra = (struct page *)page_private(first_page);
> +
> +	reset_page(first_page);
> +	ops->free(first_page);
> +
> +	/* zspage with only 1 system page */
> +	if (!head_extra)
> +		return;
> +
> +	list_for_each_entry_safe(nextp, tmp, &head_extra->lru, lru) {
> +		list_del(&nextp->lru);
> +		reset_page(nextp);
> +		ops->free(nextp);
> +	}
> +	reset_page(head_extra);
> +	ops->free(head_extra);
> +}
> +
> +/* Initialize a newly allocated zspage */
> +static void init_zspage(struct page *first_page, struct size_class *class)
> +{
> +	unsigned long off = 0;
> +	struct page *page = first_page;
> +
> +	BUG_ON(!is_first_page(first_page));
> +	while (page) {
> +		struct page *next_page;
> +		struct link_free *link;
> +		unsigned int i, objs_on_page;
> +
> +		/*
> +		 * page->index stores offset of first object starting
> +		 * in the page. For the first page, this is always 0,
> +		 * so we use first_page->index (aka ->freelist) to store
> +		 * head of corresponding zspage's freelist.
> +		 */
> +		if (page != first_page)
> +			page->index = off;
> +
> +		link = (struct link_free *)kmap_atomic(page) +
> +						off / sizeof(*link);
> +		objs_on_page = (PAGE_SIZE - off) / class->size;
> +
> +		for (i = 1; i <= objs_on_page; i++) {
> +			off += class->size;
> +			if (off < PAGE_SIZE) {
> +				link->next = obj_location_to_handle(page, i);
> +				link += class->size / sizeof(*link);
> +			}
> +		}
> +
> +		/*
> +		 * We now come to the last (full or partial) object on this
> +		 * page, which must point to the first object on the next
> +		 * page (if present)
> +		 */
> +		next_page = get_next_page(page);
> +		link->next = obj_location_to_handle(next_page, 0);
> +		kunmap_atomic(link);
> +		page = next_page;
> +		off = (off + class->size) % PAGE_SIZE;
> +	}
> +}
> +
> +/*
> + * Allocate a zspage for the given size class
> + */
> +static struct page *alloc_zspage(struct zs_ops *ops, struct size_class *class,
> +				gfp_t flags)
> +{
> +	int i, error;
> +	struct page *first_page = NULL, *uninitialized_var(prev_page);
> +
> +	/*
> +	 * Allocate individual pages and link them together as:
> +	 * 1. first page->private = first sub-page
> +	 * 2. all sub-pages are linked together using page->lru
> +	 * 3. each sub-page is linked to the first page using page->first_page
> +	 *
> +	 * For each size class, First/Head pages are linked together using
> +	 * page->lru. Also, we set PG_private to identify the first page
> +	 * (i.e. no other sub-page has this flag set) and PG_private_2 to
> +	 * identify the last page.
> +	 */
> +	error = -ENOMEM;
> +	for (i = 0; i < class->pages_per_zspage; i++) {
> +		struct page *page;
> +
> +		page = ops->alloc(flags);
> +		if (!page)
> +			goto cleanup;
> +
> +		INIT_LIST_HEAD(&page->lru);
> +		if (i == 0) {	/* first page */
> +			SetPagePrivate(page);
> +			set_page_private(page, 0);
> +			first_page = page;
> +			first_page->inuse = 0;
> +		}
> +		if (i == 1)
> +			first_page->private = (unsigned long)page;
> +		if (i >= 1)
> +			page->first_page = first_page;
> +		if (i >= 2)
> +			list_add(&page->lru, &prev_page->lru);
> +		if (i == class->pages_per_zspage - 1)	/* last page */
> +			SetPagePrivate2(page);
> +		prev_page = page;
> +	}
> +
> +	init_zspage(first_page, class);
> +
> +	first_page->freelist = obj_location_to_handle(first_page, 0);
> +	/* Maximum number of objects we can store in this zspage */
> +	first_page->objects = class->pages_per_zspage * PAGE_SIZE / class->size;
> +
> +	error = 0; /* Success */
> +
> +cleanup:
> +	if (unlikely(error) && first_page) {
> +		free_zspage(ops, first_page);
> +		first_page = NULL;
> +	}
> +
> +	return first_page;
> +}
> +
> +static struct page *find_get_zspage(struct size_class *class)
> +{
> +	int i;
> +	struct page *page;
> +
> +	for (i = 0; i < _ZS_NR_FULLNESS_GROUPS; i++) {
> +		page = class->fullness_list[i];
> +		if (page)
> +			break;
> +	}
> +
> +	return page;
> +}
> +
> +#ifdef CONFIG_PGTABLE_MAPPING
> +static inline int __zs_cpu_up(struct mapping_area *area)
> +{
> +	/*
> +	 * Make sure we don't leak memory if a cpu UP notification
> +	 * and zs_init() race and both call zs_cpu_up() on the same cpu
> +	 */
> +	if (area->vm)
> +		return 0;
> +	area->vm = alloc_vm_area(PAGE_SIZE * 2, NULL);
> +	if (!area->vm)
> +		return -ENOMEM;
> +	return 0;
> +}
> +
> +static inline void __zs_cpu_down(struct mapping_area *area)
> +{
> +	if (area->vm)
> +		free_vm_area(area->vm);
> +	area->vm = NULL;
> +}
> +
> +static inline void *__zs_map_object(struct mapping_area *area,
> +				struct page *pages[2], int off, int size)
> +{
> +	BUG_ON(map_vm_area(area->vm, PAGE_KERNEL, &pages));
> +	area->vm_addr = area->vm->addr;
> +	return area->vm_addr + off;
> +}
> +
> +static inline void __zs_unmap_object(struct mapping_area *area,
> +				struct page *pages[2], int off, int size)
> +{
> +	unsigned long addr = (unsigned long)area->vm_addr;
> +	unsigned long end = addr + (PAGE_SIZE * 2);
> +
> +	flush_cache_vunmap(addr, end);
> +	unmap_kernel_range_noflush(addr, PAGE_SIZE * 2);
> +	flush_tlb_kernel_range(addr, end);
> +}
> +
> +#else /* CONFIG_PGTABLE_MAPPING*/
> +
> +static inline int __zs_cpu_up(struct mapping_area *area)
> +{
> +	/*
> +	 * Make sure we don't leak memory if a cpu UP notification
> +	 * and zs_init() race and both call zs_cpu_up() on the same cpu
> +	 */
> +	if (area->vm_buf)
> +		return 0;
> +	area->vm_buf = (char *)__get_free_page(GFP_KERNEL);
> +	if (!area->vm_buf)
> +		return -ENOMEM;
> +	return 0;
> +}
> +
> +static inline void __zs_cpu_down(struct mapping_area *area)
> +{
> +	if (area->vm_buf)
> +		free_page((unsigned long)area->vm_buf);
> +	area->vm_buf = NULL;
> +}
> +
> +static void *__zs_map_object(struct mapping_area *area,
> +			struct page *pages[2], int off, int size)
> +{
> +	int sizes[2];
> +	void *addr;
> +	char *buf = area->vm_buf;
> +
> +	/* disable page faults to match kmap_atomic() return conditions */
> +	pagefault_disable();
> +
> +	/* no read fastpath */
> +	if (area->vm_mm == ZS_MM_WO)
> +		goto out;

The current implementation of 'ZS_MM_WO' is not safe.
For example, consider mapping a 512-byte object and writing only 8
bytes. When we unmap the object, the remaining area is overwritten
with whatever stale bytes were in the per-cpu buffer, not the
original values.

If the above comments are not important for now, feel free to ignore them. :)

Thanks.

> +
> +	sizes[0] = PAGE_SIZE - off;
> +	sizes[1] = size - sizes[0];
> +
> +	/* copy object to per-cpu buffer */
> +	addr = kmap_atomic(pages[0]);
> +	memcpy(buf, addr + off, sizes[0]);
> +	kunmap_atomic(addr);
> +	addr = kmap_atomic(pages[1]);
> +	memcpy(buf + sizes[0], addr, sizes[1]);
> +	kunmap_atomic(addr);
> +out:
> +	return area->vm_buf;
> +}
> +
> +static void __zs_unmap_object(struct mapping_area *area,
> +			struct page *pages[2], int off, int size)
> +{
> +	int sizes[2];
> +	void *addr;
> +	char *buf = area->vm_buf;
> +
> +	/* no write fastpath */
> +	if (area->vm_mm == ZS_MM_RO)
> +		goto out;
> +
> +	sizes[0] = PAGE_SIZE - off;
> +	sizes[1] = size - sizes[0];
> +
> +	/* copy per-cpu buffer to object */
> +	addr = kmap_atomic(pages[0]);
> +	memcpy(addr + off, buf, sizes[0]);
> +	kunmap_atomic(addr);
> +	addr = kmap_atomic(pages[1]);
> +	memcpy(addr, buf + sizes[0], sizes[1]);
> +	kunmap_atomic(addr);
> +
> +out:
> +	/* enable page faults to match kunmap_atomic() return conditions */
> +	pagefault_enable();
> +}
> +
> +#endif /* CONFIG_PGTABLE_MAPPING */
> +
> +static int zs_cpu_notifier(struct notifier_block *nb, unsigned long action,
> +				void *pcpu)
> +{
> +	int ret, cpu = (long)pcpu;
> +	struct mapping_area *area;
> +
> +	switch (action) {
> +	case CPU_UP_PREPARE:
> +		area = &per_cpu(zs_map_area, cpu);
> +		ret = __zs_cpu_up(area);
> +		if (ret)
> +			return notifier_from_errno(ret);
> +		break;
> +	case CPU_DEAD:
> +	case CPU_UP_CANCELED:
> +		area = &per_cpu(zs_map_area, cpu);
> +		__zs_cpu_down(area);
> +		break;
> +	}
> +
> +	return NOTIFY_OK;
> +}
> +
> +static struct notifier_block zs_cpu_nb = {
> +	.notifier_call = zs_cpu_notifier
> +};
> +
> +static void zs_exit(void)
> +{
> +	int cpu;
> +
> +	for_each_online_cpu(cpu)
> +		zs_cpu_notifier(NULL, CPU_DEAD, (void *)(long)cpu);
> +	unregister_cpu_notifier(&zs_cpu_nb);
> +}
> +
> +static int zs_init(void)
> +{
> +	int cpu, ret;
> +
> +	register_cpu_notifier(&zs_cpu_nb);
> +	for_each_online_cpu(cpu) {
> +		ret = zs_cpu_notifier(NULL, CPU_UP_PREPARE, (void *)(long)cpu);
> +		if (notifier_to_errno(ret))
> +			goto fail;
> +	}
> +	return 0;
> +fail:
> +	zs_exit();
> +	return notifier_to_errno(ret);
> +}
> +
> +/**
> + * zs_create_pool - Creates an allocation pool to work from.
> + * @flags: allocation flags used to allocate pool metadata
> + * @ops: allocation/free callbacks for expanding the pool
> + *
> + * This function must be called before anything when using
> + * the zsmalloc allocator.
> + *
> + * On success, a pointer to the newly created pool is returned,
> + * otherwise NULL.
> + */
> +struct zs_pool *zs_create_pool(gfp_t flags, struct zs_ops *ops)
> +{
> +	int i, ovhd_size;
> +	struct zs_pool *pool;
> +
> +	ovhd_size = roundup(sizeof(*pool), PAGE_SIZE);
> +	pool = kzalloc(ovhd_size, flags);
> +	if (!pool)
> +		return NULL;
> +
> +	for (i = 0; i < ZS_SIZE_CLASSES; i++) {
> +		int size;
> +		struct size_class *class;
> +
> +		size = ZS_MIN_ALLOC_SIZE + i * ZS_SIZE_CLASS_DELTA;
> +		if (size > ZS_MAX_ALLOC_SIZE)
> +			size = ZS_MAX_ALLOC_SIZE;
> +
> +		class = &pool->size_class[i];
> +		class->size = size;
> +		class->index = i;
> +		spin_lock_init(&class->lock);
> +		class->pages_per_zspage = get_pages_per_zspage(size);
> +
> +	}
> +
> +	if (ops)
> +		pool->ops = ops;
> +	else
> +		pool->ops = &zs_default_ops;
> +
> +	return pool;
> +}
> +EXPORT_SYMBOL_GPL(zs_create_pool);
> +
> +void zs_destroy_pool(struct zs_pool *pool)
> +{
> +	int i;
> +
> +	for (i = 0; i < ZS_SIZE_CLASSES; i++) {
> +		int fg;
> +		struct size_class *class = &pool->size_class[i];
> +
> +		for (fg = 0; fg < _ZS_NR_FULLNESS_GROUPS; fg++) {
> +			if (class->fullness_list[fg]) {
> +				pr_info("Freeing non-empty class with size "
> +					"%db, fullness group %d\n",
> +					class->size, fg);
> +			}
> +		}
> +	}
> +	kfree(pool);
> +}
> +EXPORT_SYMBOL_GPL(zs_destroy_pool);
> +
> +/**
> + * zs_malloc - Allocate block of given size from pool.
> + * @pool: pool to allocate from
> + * @size: size of block to allocate
> + *
> + * On success, handle to the allocated object is returned,
> + * otherwise 0.
> + * Allocation requests with size > ZS_MAX_ALLOC_SIZE will fail.
> + */
> +unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t flags)
> +{
> +	unsigned long obj;
> +	struct link_free *link;
> +	int class_idx;
> +	struct size_class *class;
> +
> +	struct page *first_page, *m_page;
> +	unsigned long m_objidx, m_offset;
> +
> +	if (unlikely(!size || size > ZS_MAX_ALLOC_SIZE))
> +		return 0;
> +
> +	class_idx = get_size_class_index(size);
> +	class = &pool->size_class[class_idx];
> +	BUG_ON(class_idx != class->index);
> +
> +	spin_lock(&class->lock);
> +	first_page = find_get_zspage(class);
> +
> +	if (!first_page) {
> +		spin_unlock(&class->lock);
> +		first_page = alloc_zspage(pool->ops, class, flags);
> +		if (unlikely(!first_page))
> +			return 0;
> +
> +		set_zspage_mapping(first_page, class->index, ZS_EMPTY);
> +		spin_lock(&class->lock);
> +		class->pages_allocated += class->pages_per_zspage;
> +	}
> +
> +	obj = (unsigned long)first_page->freelist;
> +	obj_handle_to_location(obj, &m_page, &m_objidx);
> +	m_offset = obj_idx_to_offset(m_page, m_objidx, class->size);
> +
> +	link = (struct link_free *)kmap_atomic(m_page) +
> +					m_offset / sizeof(*link);
> +	first_page->freelist = link->next;
> +	memset(link, POISON_INUSE, sizeof(*link));
> +	kunmap_atomic(link);
> +
> +	first_page->inuse++;
> +	/* Now move the zspage to another fullness group, if required */
> +	fix_fullness_group(pool, first_page);
> +	spin_unlock(&class->lock);
> +
> +	return obj;
> +}
> +EXPORT_SYMBOL_GPL(zs_malloc);
> +
> +void zs_free(struct zs_pool *pool, unsigned long obj)
> +{
> +	struct link_free *link;
> +	struct page *first_page, *f_page;
> +	unsigned long f_objidx, f_offset;
> +
> +	int class_idx;
> +	struct size_class *class;
> +	enum fullness_group fullness;
> +
> +	if (unlikely(!obj))
> +		return;
> +
> +	obj_handle_to_location(obj, &f_page, &f_objidx);
> +	first_page = get_first_page(f_page);
> +
> +	get_zspage_mapping(first_page, &class_idx, &fullness);
> +	class = &pool->size_class[class_idx];
> +	f_offset = obj_idx_to_offset(f_page, f_objidx, class->size);
> +
> +	spin_lock(&class->lock);
> +
> +	/* Insert this object in containing zspage's freelist */
> +	link = (struct link_free *)((unsigned char *)kmap_atomic(f_page)
> +							+ f_offset);
> +	link->next = first_page->freelist;
> +	kunmap_atomic(link);
> +	first_page->freelist = (void *)obj;
> +
> +	first_page->inuse--;
> +	fullness = fix_fullness_group(pool, first_page);
> +
> +	if (fullness == ZS_EMPTY)
> +		class->pages_allocated -= class->pages_per_zspage;
> +
> +	spin_unlock(&class->lock);
> +
> +	if (fullness == ZS_EMPTY)
> +		free_zspage(pool->ops, first_page);
> +}
> +EXPORT_SYMBOL_GPL(zs_free);
> +
> +/**
> + * zs_map_object - get address of allocated object from handle.
> + * @pool: pool from which the object was allocated
> + * @handle: handle returned from zs_malloc
> + *
> + * Before using an object allocated from zs_malloc, it must be mapped using
> + * this function. When done with the object, it must be unmapped using
> + * zs_unmap_object.
> + *
> + * Only one object can be mapped per cpu at a time. There is no protection
> + * against nested mappings.
> + *
> + * This function returns with preemption and page faults disabled.
> +*/
> +void *zs_map_object(struct zs_pool *pool, unsigned long handle,
> +			enum zs_mapmode mm)
> +{
> +	struct page *page;
> +	unsigned long obj_idx, off;
> +
> +	unsigned int class_idx;
> +	enum fullness_group fg;
> +	struct size_class *class;
> +	struct mapping_area *area;
> +	struct page *pages[2];
> +
> +	BUG_ON(!handle);
> +
> +	/*
> +	 * Because we use per-cpu mapping areas shared among the
> +	 * pools/users, we can't allow mapping in interrupt context
> +	 * because it can corrupt another user's mappings.
> +	 */
> +	BUG_ON(in_interrupt());
> +
> +	obj_handle_to_location(handle, &page, &obj_idx);
> +	get_zspage_mapping(get_first_page(page), &class_idx, &fg);
> +	class = &pool->size_class[class_idx];
> +	off = obj_idx_to_offset(page, obj_idx, class->size);
> +
> +	area = &get_cpu_var(zs_map_area);
> +	area->vm_mm = mm;
> +	if (off + class->size <= PAGE_SIZE) {
> +		/* this object is contained entirely within a page */
> +		area->vm_addr = kmap_atomic(page);
> +		return area->vm_addr + off;
> +	}
> +
> +	/* this object spans two pages */
> +	pages[0] = page;
> +	pages[1] = get_next_page(page);
> +	BUG_ON(!pages[1]);
> +
> +	return __zs_map_object(area, pages, off, class->size);
> +}
> +EXPORT_SYMBOL_GPL(zs_map_object);
> +
> +void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
> +{
> +	struct page *page;
> +	unsigned long obj_idx, off;
> +
> +	unsigned int class_idx;
> +	enum fullness_group fg;
> +	struct size_class *class;
> +	struct mapping_area *area;
> +
> +	BUG_ON(!handle);
> +
> +	obj_handle_to_location(handle, &page, &obj_idx);
> +	get_zspage_mapping(get_first_page(page), &class_idx, &fg);
> +	class = &pool->size_class[class_idx];
> +	off = obj_idx_to_offset(page, obj_idx, class->size);
> +
> +	area = &__get_cpu_var(zs_map_area);
> +	if (off + class->size <= PAGE_SIZE)
> +		kunmap_atomic(area->vm_addr);
> +	else {
> +		struct page *pages[2];
> +
> +		pages[0] = page;
> +		pages[1] = get_next_page(page);
> +		BUG_ON(!pages[1]);
> +
> +		__zs_unmap_object(area, pages, off, class->size);
> +	}
> +	put_cpu_var(zs_map_area);
> +}
> +EXPORT_SYMBOL_GPL(zs_unmap_object);
> +
> +u64 zs_get_total_size_bytes(struct zs_pool *pool)
> +{
> +	int i;
> +	u64 npages = 0;
> +
> +	for (i = 0; i < ZS_SIZE_CLASSES; i++)
> +		npages += pool->size_class[i].pages_allocated;
> +
> +	return npages << PAGE_SHIFT;
> +}
> +EXPORT_SYMBOL_GPL(zs_get_total_size_bytes);
> +
> +module_init(zs_init);
> +module_exit(zs_exit);
> +
> +MODULE_LICENSE("Dual BSD/GPL");
> +MODULE_AUTHOR("Nitin Gupta <ngupta@vflare.org>");
> -- 
> 1.7.9.5
> 

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCHv5 1/8] zsmalloc: add to mm/
  2013-02-19  9:18   ` Joonsoo Kim
@ 2013-02-19 17:54     ` Seth Jennings
  2013-02-19 23:37       ` Minchan Kim
  2013-02-20  2:42       ` Nitin Gupta
  0 siblings, 2 replies; 38+ messages in thread
From: Seth Jennings @ 2013-02-19 17:54 UTC (permalink / raw)
  To: Joonsoo Kim, Nitin Gupta
  Cc: Andrew Morton, Greg Kroah-Hartman, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches,
	linux-mm, linux-kernel, devel

On 02/19/2013 03:18 AM, Joonsoo Kim wrote:
> Hello, Seth.
> I'm not sure this is the right time to review, because I have already
> seen much effort by various people to promote the zxxx series. I don't want
> to be a blocker for promoting these. :)

Any time is good review time :)  Thanks for your review!

> 
> But, I read the code, now, and then some comments below.
> 
> On Wed, Feb 13, 2013 at 12:38:44PM -0600, Seth Jennings wrote:
>> =========
>> DO NOT MERGE, FOR REVIEW ONLY
>> This patch introduces zsmalloc as new code, however, it already
>> exists in drivers/staging.  In order to build successfully, you
>> must select EITHER to driver/staging version OR this version.
>> Once zsmalloc is reviewed in this format (and hopefully accepted),
>> I will create a new patchset that properly promotes zsmalloc from
>> staging.
>> =========
>>
>> This patchset introduces a new slab-based memory allocator,
>> zsmalloc, for storing compressed pages.  It is designed for
>> low fragmentation and a high allocation success rate for
>> large, but <= PAGE_SIZE, allocations.
>>
>> zsmalloc differs from the kernel slab allocator in two primary
>> ways to achieve these design goals.
>>
>> zsmalloc never requires high order page allocations to back
>> slabs, or "size classes" in zsmalloc terms. Instead it allows
>> multiple single-order pages to be stitched together into a
>> "zspage" which backs the slab.  This allows for higher allocation
>> success rate under memory pressure.
>>
>> Also, zsmalloc allows objects to span page boundaries within the
>> zspage.  This allows for lower fragmentation than could be had
>> with the kernel slab allocator for objects between PAGE_SIZE/2
>> and PAGE_SIZE.  With the kernel slab allocator, if a page compresses
>> to 60% of its original size, the memory savings gained through
>> compression is lost in fragmentation because another object of
>> the same size can't be stored in the leftover space.
>>
>> This ability to span pages results in zsmalloc allocations not being
>> directly addressable by the user.  The user is given a
>> non-dereferenceable handle in response to an allocation request.
>> That handle must be mapped, using zs_map_object(), which returns
>> a pointer to the mapped region that can be used.  The mapping is
>> necessary since the object data may reside in two different
>> noncontiguous pages.
>>
>> zsmalloc fulfills the allocation needs for zram and zswap.
>>
>> Acked-by: Nitin Gupta <ngupta@vflare.org>
>> Acked-by: Minchan Kim <minchan@kernel.org>
>> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
>> ---
>>  include/linux/zsmalloc.h |   49 ++
>>  mm/Kconfig               |   24 +
>>  mm/Makefile              |    1 +
>>  mm/zsmalloc.c            | 1124 ++++++++++++++++++++++++++++++++++++++++++++++
>>  4 files changed, 1198 insertions(+)
>>  create mode 100644 include/linux/zsmalloc.h
>>  create mode 100644 mm/zsmalloc.c
>>
>> diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
>> new file mode 100644
>> index 0000000..eb6efb6
>> --- /dev/null
>> +++ b/include/linux/zsmalloc.h
>> @@ -0,0 +1,49 @@
>> +/*
>> + * zsmalloc memory allocator
>> + *
>> + * Copyright (C) 2011  Nitin Gupta
>> + *
>> + * This code is released using a dual license strategy: BSD/GPL
>> + * You can choose the license that better fits your requirements.
>> + *
>> + * Released under the terms of 3-clause BSD License
>> + * Released under the terms of GNU General Public License Version 2.0
>> + */
>> +
>> +#ifndef _ZS_MALLOC_H_
>> +#define _ZS_MALLOC_H_
>> +
>> +#include <linux/types.h>
>> +#include <linux/mm_types.h>
>> +
>> +/*
>> + * zsmalloc mapping modes
>> + *
>> + * NOTE: These only make a difference when a mapped object spans pages
>> +*/
>> +enum zs_mapmode {
>> +	ZS_MM_RW, /* normal read-write mapping */
>> +	ZS_MM_RO, /* read-only (no copy-out at unmap time) */
>> +	ZS_MM_WO /* write-only (no copy-in at map time) */
>> +};
> 
> 
> These make no difference for PGTABLE_MAPPING.
> Please add a comment about this.

Yes. Will do.

> 
>> +struct zs_ops {
>> +	struct page * (*alloc)(gfp_t);
>> +	void (*free)(struct page *);
>> +};
>> +
>> +struct zs_pool;
>> +
>> +struct zs_pool *zs_create_pool(gfp_t flags, struct zs_ops *ops);
>> +void zs_destroy_pool(struct zs_pool *pool);
>> +
>> +unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t flags);
>> +void zs_free(struct zs_pool *pool, unsigned long obj);
>> +
>> +void *zs_map_object(struct zs_pool *pool, unsigned long handle,
>> +			enum zs_mapmode mm);
>> +void zs_unmap_object(struct zs_pool *pool, unsigned long handle);
>> +
>> +u64 zs_get_total_size_bytes(struct zs_pool *pool);
>> +
>> +#endif
>> diff --git a/mm/Kconfig b/mm/Kconfig
>> index 278e3ab..25b8f38 100644
>> --- a/mm/Kconfig
>> +++ b/mm/Kconfig
>> @@ -446,3 +446,27 @@ config FRONTSWAP
>>  	  and swap data is stored as normal on the matching swap device.
>>  
>>  	  If unsure, say Y to enable frontswap.
>> +
>> +config ZSMALLOC
>> +	tristate "Memory allocator for compressed pages"
>> +	default n
>> +	help
>> +	  zsmalloc is a slab-based memory allocator designed to store
>> +	  compressed RAM pages.  zsmalloc uses virtual memory mapping
>> +	  in order to reduce fragmentation.  However, this results in a
>> +	  non-standard allocator interface where a handle, not a pointer, is
>> +	  returned by an alloc().  This handle must be mapped in order to
>> +	  access the allocated space.
>> +
>> +config PGTABLE_MAPPING
>> +	bool "Use page table mapping to access object in zsmalloc"
>> +	depends on ZSMALLOC
>> +	help
>> +	  By default, zsmalloc uses a copy-based object mapping method to
>> +	  access allocations that span two pages. However, if a particular
>> +	  architecture (ex, ARM) performs VM mapping faster than copying,
>> +	  then you should select this. This causes zsmalloc to use page table
>> +	  mapping rather than copying for object mapping.
>> +
>> +	  You can check speed with zsmalloc benchmark[1].
>> +	  [1] https://github.com/spartacus06/zsmalloc
>> diff --git a/mm/Makefile b/mm/Makefile
>> index 3a46287..0f6ef0a 100644
>> --- a/mm/Makefile
>> +++ b/mm/Makefile
>> @@ -58,3 +58,4 @@ obj-$(CONFIG_DEBUG_KMEMLEAK) += kmemleak.o
>>  obj-$(CONFIG_DEBUG_KMEMLEAK_TEST) += kmemleak-test.o
>>  obj-$(CONFIG_CLEANCACHE) += cleancache.o
>>  obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
>> +obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
>> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
>> new file mode 100644
>> index 0000000..34378ef
>> --- /dev/null
>> +++ b/mm/zsmalloc.c
>> @@ -0,0 +1,1124 @@
>> +/*
>> + * zsmalloc memory allocator
>> + *
>> + * Copyright (C) 2011  Nitin Gupta
>> + *
>> + * This code is released using a dual license strategy: BSD/GPL
>> + * You can choose the license that better fits your requirements.
>> + *
>> + * Released under the terms of 3-clause BSD License
>> + * Released under the terms of GNU General Public License Version 2.0
>> + */
>> +
>> +
>> +/*
>> + * This allocator is designed for use with zcache and zram. Thus, the
>> + * allocator is supposed to work well under low memory conditions. In
>> + * particular, it never attempts higher order page allocation which is
>> + * very likely to fail under memory pressure. On the other hand, if we
>> + * just use single (0-order) pages, it would suffer from very high
>> + * fragmentation -- any object of size PAGE_SIZE/2 or larger would occupy
>> + * an entire page. This was one of the major issues with its predecessor
>> + * (xvmalloc).
>> + *
>> + * To overcome these issues, zsmalloc allocates a bunch of 0-order pages
>> + * and links them together using various 'struct page' fields. These linked
>> + * pages act as a single higher-order page i.e. an object can span 0-order
>> + * page boundaries. The code refers to these linked pages as a single entity
>> + * called zspage.
>> + *
>> + * For simplicity, zsmalloc can only allocate objects of size up to PAGE_SIZE
>> + * since this satisfies the requirements of all its current users (in the
>> + * worst case, page is incompressible and is thus stored "as-is" i.e. in
>> + * uncompressed form). For allocation requests larger than this size, failure
>> + * is returned (see zs_malloc).
>> + *
>> + * Additionally, zs_malloc() does not return a dereferenceable pointer.
>> + * Instead, it returns an opaque handle (unsigned long) which encodes actual
>> + * location of the allocated object. The reason for this indirection is that
>> + * zsmalloc does not keep zspages permanently mapped since that would cause
>> + * issues on 32-bit systems where the VA region for kernel space mappings
>> + * is very small. So, before using the allocated memory, the object has to
>> + * be mapped using zs_map_object() to get a usable pointer and subsequently
>> + * unmapped using zs_unmap_object().
>> + *
>> + * Following is how we use various fields and flags of underlying
>> + * struct page(s) to form a zspage.
>> + *
>> + * Usage of struct page fields:
>> + *	page->first_page: points to the first component (0-order) page
>> + *	page->index (union with page->freelist): offset of the first object
>> + *		starting in this page. For the first page, this is
>> + *		always 0, so we use this field (aka freelist) to point
>> + *		to the first free object in zspage.
>> + *	page->lru: links together all component pages (except the first page)
>> + *		of a zspage
>> + *
>> + *	For _first_ page only:
>> + *
>> + *	page->private (union with page->first_page): refers to the
>> + *		component page after the first page
>> + *	page->freelist: points to the first free object in zspage.
>> + *		Free objects are linked together using in-place
>> + *		metadata.
>> + *	page->objects: maximum number of objects we can store in this
>> + *		zspage (class->zspage_order * PAGE_SIZE / class->size)
> 
> How about just embedding the maximum number of objects in size_class?
> For SLUB, each slab can have a different number of objects.
> But for zsmalloc that is not possible, so there is no reason
> to maintain it within the metadata of the zspage. Embedding it in
> size_class is sufficient.

Yes, a little code massaging and this can go away.

However, there might be some value in having variable sized zspages in
the same size_class.  It could improve allocation success rate at the
expense of efficiency by not failing in alloc_zspage() if we can't
allocate the optimal number of pages.  As long as we can allocate the
first page, then we can proceed.

Nitin care to weigh in?

> 
> 
>> + *	page->lru: links together first pages of various zspages.
>> + *		Basically forming list of zspages in a fullness group.
>> + *	page->mapping: class index and fullness group of the zspage
>> + *
>> + * Usage of struct page flags:
>> + *	PG_private: identifies the first component page
>> + *	PG_private2: identifies the last component page
>> + *
>> + */
>> +
>> +#ifdef CONFIG_ZSMALLOC_DEBUG
>> +#define DEBUG
>> +#endif
> 
> Is this obsolete?

Yes, I'll remove it.

> 
>> +#include <linux/module.h>
>> +#include <linux/kernel.h>
>> +#include <linux/bitops.h>
>> +#include <linux/errno.h>
>> +#include <linux/highmem.h>
>> +#include <linux/init.h>
>> +#include <linux/string.h>
>> +#include <linux/slab.h>
>> +#include <asm/tlbflush.h>
>> +#include <asm/pgtable.h>
>> +#include <linux/cpumask.h>
>> +#include <linux/cpu.h>
>> +#include <linux/vmalloc.h>
>> +#include <linux/hardirq.h>
>> +#include <linux/spinlock.h>
>> +#include <linux/types.h>
>> +
>> +#include <linux/zsmalloc.h>
>> +
>> +/*
>> + * This must be a power of 2 and greater than or equal to sizeof(link_free).
>> + * These two conditions ensure that any 'struct link_free' itself doesn't
>> + * span more than 1 page which avoids complex case of mapping 2 pages simply
>> + * to restore link_free pointer values.
>> + */
>> +#define ZS_ALIGN		8
>> +
>> +/*
>> + * A single 'zspage' is composed of up to 2^N discontiguous 0-order (single)
>> + * pages. ZS_MAX_ZSPAGE_ORDER defines upper limit on N.
>> + */
>> +#define ZS_MAX_ZSPAGE_ORDER 2
>> +#define ZS_MAX_PAGES_PER_ZSPAGE (_AC(1, UL) << ZS_MAX_ZSPAGE_ORDER)
>> +
>> +/*
>> + * Object location (<PFN>, <obj_idx>) is encoded
>> + * as a single (unsigned long) handle value.
>> + *
>> + * Note that object index <obj_idx> is relative to system
>> + * page <PFN> it is stored in, so for each sub-page belonging
>> + * to a zspage, obj_idx starts with 0.
>> + *
>> + * This is made more complicated by various memory models and PAE.
>> + */
>> +
>> +#ifndef MAX_PHYSMEM_BITS
>> +#ifdef CONFIG_HIGHMEM64G
>> +#define MAX_PHYSMEM_BITS 36
>> +#else /* !CONFIG_HIGHMEM64G */
>> +/*
>> + * If this definition of MAX_PHYSMEM_BITS is used, OBJ_INDEX_BITS will just
>> + * be PAGE_SHIFT
>> + */
>> +#define MAX_PHYSMEM_BITS BITS_PER_LONG
>> +#endif
>> +#endif
>> +#define _PFN_BITS		(MAX_PHYSMEM_BITS - PAGE_SHIFT)
>> +#define OBJ_INDEX_BITS	(BITS_PER_LONG - _PFN_BITS)
>> +#define OBJ_INDEX_MASK	((_AC(1, UL) << OBJ_INDEX_BITS) - 1)
>> +
>> +#define MAX(a, b) ((a) >= (b) ? (a) : (b))
>> +/* ZS_MIN_ALLOC_SIZE must be multiple of ZS_ALIGN */
>> +#define ZS_MIN_ALLOC_SIZE \
>> +	MAX(32, (ZS_MAX_PAGES_PER_ZSPAGE << PAGE_SHIFT >> OBJ_INDEX_BITS))
>> +#define ZS_MAX_ALLOC_SIZE	PAGE_SIZE
>> +
>> +/*
>> + * On systems with 4K page size, this gives 254 size classes! There is a
>> + * trade-off here:
>> + *  - A large number of size classes is potentially wasteful as free pages are
>> + *    spread across these classes
>> + *  - Small number of size classes causes large internal fragmentation
>> + *  - Probably it's better to use specific size classes (empirically
>> + *    determined). NOTE: all those class sizes must be set as multiple of
>> + *    ZS_ALIGN to make sure link_free itself never has to span 2 pages.
>> + *
>> + *  ZS_MIN_ALLOC_SIZE and ZS_SIZE_CLASS_DELTA must be multiple of ZS_ALIGN
>> + *  (reason above)
>> + */
>> +#define ZS_SIZE_CLASS_DELTA	(PAGE_SIZE >> 8)
>> +#define ZS_SIZE_CLASSES		((ZS_MAX_ALLOC_SIZE - ZS_MIN_ALLOC_SIZE) / \
>> +					ZS_SIZE_CLASS_DELTA + 1)
>> +
>> +/*
>> + * We do not maintain any list for completely empty or full pages
>> + */
>> +enum fullness_group {
>> +	ZS_ALMOST_FULL,
>> +	ZS_ALMOST_EMPTY,
>> +	_ZS_NR_FULLNESS_GROUPS,
>> +
>> +	ZS_EMPTY,
>> +	ZS_FULL
>> +};
>> +
>> +/*
>> + * We assign a page to ZS_ALMOST_EMPTY fullness group when:
>> + *	n <= N / f, where
>> + * n = number of allocated objects
>> + * N = total number of objects zspage can store
>> + * f = 1/fullness_threshold_frac
>> + *
>> + * Similarly, we assign zspage to:
>> + *	ZS_ALMOST_FULL	when n > N / f
>> + *	ZS_EMPTY	when n == 0
>> + *	ZS_FULL		when n == N
>> + *
>> + * (see: fix_fullness_group())
>> + */
>> +static const int fullness_threshold_frac = 4;
>> +
>> +struct size_class {
>> +	/*
>> +	 * Size of objects stored in this class. Must be multiple
>> +	 * of ZS_ALIGN.
>> +	 */
>> +	int size;
>> +	unsigned int index;
>> +
>> +	/* Number of PAGE_SIZE sized pages to combine to form a 'zspage' */
>> +	int pages_per_zspage;
>> +
>> +	spinlock_t lock;
>> +
>> +	/* stats */
>> +	u64 pages_allocated;
>> +
>> +	struct page *fullness_list[_ZS_NR_FULLNESS_GROUPS];
>> +};
> 
> Instead of a simple pointer, how about using a list_head?
> With this, fullness_list management is easily consolidated into
> set_zspage_mapping(), and we can remove remove_zspage() and insert_zspage().

Makes sense to me.  Nitin what do you think?

> And how about maintaining FULL and EMPTY lists?
> There is not much memory waste, and they could be used for debugging and
> for implementing other functionality.

The EMPTY list would always be empty.  There might be some merit to
maintaining a FULL list though just so zsmalloc always has a handle on
every zspage.

> 
>> +
>> +/*
>> + * Placed within free objects to form a singly linked list.
>> + * For every zspage, first_page->freelist gives head of this list.
>> + *
>> + * This must be power of 2 and less than or equal to ZS_ALIGN
>> + */
>> +struct link_free {
>> +	/* Handle of next free chunk (encodes <PFN, obj_idx>) */
>> +	void *next;
>> +};
>> +
>> +struct zs_pool {
>> +	struct size_class size_class[ZS_SIZE_CLASSES];
>> +
>> +	struct zs_ops *ops;
>> +};
>> +
>> +/*
>> + * A zspage's class index and fullness group
>> + * are encoded in its (first)page->mapping
>> + */
>> +#define CLASS_IDX_BITS	28
>> +#define FULLNESS_BITS	4
>> +#define CLASS_IDX_MASK	((1 << CLASS_IDX_BITS) - 1)
>> +#define FULLNESS_MASK	((1 << FULLNESS_BITS) - 1)
>> +
>> +struct mapping_area {
>> +#ifdef CONFIG_PGTABLE_MAPPING
>> +	struct vm_struct *vm; /* vm area for mapping object that span pages */
>> +#else
>> +	char *vm_buf; /* copy buffer for objects that span pages */
>> +#endif
>> +	char *vm_addr; /* address of kmap_atomic()'ed pages */
>> +	enum zs_mapmode vm_mm; /* mapping mode */
>> +};
>> +
>> +/* default page alloc/free ops */
>> +struct page *zs_alloc_page(gfp_t flags)
>> +{
>> +	return alloc_page(flags);
>> +}
>> +
>> +void zs_free_page(struct page *page)
>> +{
>> +	__free_page(page);
>> +}
>> +
>> +struct zs_ops zs_default_ops = {
>> +	.alloc = zs_alloc_page,
>> +	.free = zs_free_page
>> +};
>> +
>> +/* per-cpu VM mapping areas for zspage accesses that cross page boundaries */
>> +static DEFINE_PER_CPU(struct mapping_area, zs_map_area);
>> +
>> +static int is_first_page(struct page *page)
>> +{
>> +	return PagePrivate(page);
>> +}
>> +
>> +static int is_last_page(struct page *page)
>> +{
>> +	return PagePrivate2(page);
>> +}
>> +
>> +static void get_zspage_mapping(struct page *page, unsigned int *class_idx,
>> +				enum fullness_group *fullness)
>> +{
>> +	unsigned long m;
>> +	BUG_ON(!is_first_page(page));
>> +
>> +	m = (unsigned long)page->mapping;
>> +	*fullness = m & FULLNESS_MASK;
>> +	*class_idx = (m >> FULLNESS_BITS) & CLASS_IDX_MASK;
>> +}
>> +
>> +static void set_zspage_mapping(struct page *page, unsigned int class_idx,
>> +				enum fullness_group fullness)
>> +{
>> +	unsigned long m;
>> +	BUG_ON(!is_first_page(page));
>> +
>> +	m = ((class_idx & CLASS_IDX_MASK) << FULLNESS_BITS) |
>> +			(fullness & FULLNESS_MASK);
>> +	page->mapping = (struct address_space *)m;
>> +}
>> +
>> +/*
>> + * zsmalloc divides the pool into various size classes where each
>> + * class maintains a list of zspages where each zspage is divided
>> + * into equal sized chunks. Each allocation falls into one of these
>> + * classes depending on its size. This function returns the index of
>> + * the size class with a chunk size big enough to hold the given size.
>> + */
>> +static int get_size_class_index(int size)
>> +{
>> +	int idx = 0;
>> +
>> +	if (likely(size > ZS_MIN_ALLOC_SIZE))
>> +		idx = DIV_ROUND_UP(size - ZS_MIN_ALLOC_SIZE,
>> +				ZS_SIZE_CLASS_DELTA);
>> +
>> +	return idx;
>> +}
>> +
>> +/*
>> + * For each size class, zspages are divided into different groups
>> + * depending on how "full" they are. This was done so that we could
>> + * easily find empty or nearly empty zspages when we try to shrink
>> + * the pool (not yet implemented). This function returns the
>> + * fullness status of the given page.
>> + */
>> +static enum fullness_group get_fullness_group(struct page *page)
>> +{
>> +	int inuse, max_objects;
>> +	enum fullness_group fg;
>> +	BUG_ON(!is_first_page(page));
>> +
>> +	inuse = page->inuse;
>> +	max_objects = page->objects;
>> +
>> +	if (inuse == 0)
>> +		fg = ZS_EMPTY;
>> +	else if (inuse == max_objects)
>> +		fg = ZS_FULL;
>> +	else if (inuse <= max_objects / fullness_threshold_frac)
>> +		fg = ZS_ALMOST_EMPTY;
>> +	else
>> +		fg = ZS_ALMOST_FULL;
>> +
>> +	return fg;
>> +}
>> +
>> +/*
>> + * Each size class maintains various freelists and zspages are assigned
>> + * to one of these freelists based on the number of live objects they
>> + * have. This function inserts the given zspage into the freelist
>> + * identified by <class, fullness_group>.
>> + */
>> +static void insert_zspage(struct page *page, struct size_class *class,
>> +				enum fullness_group fullness)
>> +{
>> +	struct page **head;
>> +
>> +	BUG_ON(!is_first_page(page));
>> +
>> +	if (fullness >= _ZS_NR_FULLNESS_GROUPS)
>> +		return;
>> +
>> +	head = &class->fullness_list[fullness];
>> +	if (*head)
>> +		list_add_tail(&page->lru, &(*head)->lru);
>> +
>> +	*head = page;
>> +}
>> +
>> +/*
>> + * This function removes the given zspage from the freelist identified
>> + * by <class, fullness_group>.
>> + */
>> +static void remove_zspage(struct page *page, struct size_class *class,
>> +				enum fullness_group fullness)
>> +{
>> +	struct page **head;
>> +
>> +	BUG_ON(!is_first_page(page));
>> +
>> +	if (fullness >= _ZS_NR_FULLNESS_GROUPS)
>> +		return;
>> +
>> +	head = &class->fullness_list[fullness];
>> +	BUG_ON(!*head);
>> +	if (list_empty(&(*head)->lru))
>> +		*head = NULL;
>> +	else if (*head == page)
>> +		*head = (struct page *)list_entry((*head)->lru.next,
>> +					struct page, lru);
>> +
>> +	list_del_init(&page->lru);
>> +}
>> +
>> +/*
>> + * Each size class maintains zspages in different fullness groups depending
>> + * on the number of live objects they contain. When allocating or freeing
>> + * objects, the fullness status of the page can change, say, from ALMOST_FULL
>> + * to ALMOST_EMPTY when freeing an object. This function checks if such
>> + * a status change has occurred for the given page and accordingly moves the
>> + * page from the freelist of the old fullness group to that of the new
>> + * fullness group.
>> + */
>> +static enum fullness_group fix_fullness_group(struct zs_pool *pool,
>> +						struct page *page)
>> +{
>> +	int class_idx;
>> +	struct size_class *class;
>> +	enum fullness_group currfg, newfg;
>> +
>> +	BUG_ON(!is_first_page(page));
>> +
>> +	get_zspage_mapping(page, &class_idx, &currfg);
>> +	newfg = get_fullness_group(page);
>> +	if (newfg == currfg)
>> +		goto out;
>> +
>> +	class = &pool->size_class[class_idx];
>> +	remove_zspage(page, class, currfg);
>> +	insert_zspage(page, class, newfg);
>> +	set_zspage_mapping(page, class_idx, newfg);
>> +
>> +out:
>> +	return newfg;
>> +}
>> +
>> +/*
>> + * We have to decide on how many pages to link together
>> + * to form a zspage for each size class. This is important
>> + * to reduce wastage due to unusable space left at the end of
>> + * each zspage, which is given as:
>> + *	wastage = Zp % class_size
>> + * where Zp = zspage size = k * PAGE_SIZE for some k = 1, 2, ...
>> + *
>> + * For example, for size class of 3/8 * PAGE_SIZE, we should
>> + * link together 3 PAGE_SIZE sized pages to form a zspage
>> + * since then we can perfectly fit in 8 such objects.
>> + */
>> +static int get_pages_per_zspage(int class_size)
>> +{
>> +	int i, max_usedpc = 0;
>> +	/* zspage order which gives maximum used size per KB */
>> +	int max_usedpc_order = 1;
>> +
>> +	for (i = 1; i <= ZS_MAX_PAGES_PER_ZSPAGE; i++) {
>> +		int zspage_size;
>> +		int waste, usedpc;
>> +
>> +		zspage_size = i * PAGE_SIZE;
>> +		waste = zspage_size % class_size;
>> +		usedpc = (zspage_size - waste) * 100 / zspage_size;
>> +
>> +		if (usedpc > max_usedpc) {
>> +			max_usedpc = usedpc;
>> +			max_usedpc_order = i;
>> +		}
>> +	}
>> +
>> +	return max_usedpc_order;
>> +}
>> +
>> +/*
>> + * A single 'zspage' is composed of many system pages which are
>> + * linked together using fields in struct page. This function finds
>> + * the first/head page, given any component page of a zspage.
>> + */
>> +static struct page *get_first_page(struct page *page)
>> +{
>> +	if (is_first_page(page))
>> +		return page;
>> +	else
>> +		return page->first_page;
>> +}
>> +
>> +static struct page *get_next_page(struct page *page)
>> +{
>> +	struct page *next;
>> +
>> +	if (is_last_page(page))
>> +		next = NULL;
>> +	else if (is_first_page(page))
>> +		next = (struct page *)page->private;
>> +	else
>> +		next = list_entry(page->lru.next, struct page, lru);
>> +
>> +	return next;
>> +}
>> +
>> +/* Encode <page, obj_idx> as a single handle value */
>> +static void *obj_location_to_handle(struct page *page, unsigned long obj_idx)
>> +{
>> +	unsigned long handle;
>> +
>> +	if (!page) {
>> +		BUG_ON(obj_idx);
>> +		return NULL;
>> +	}
>> +
>> +	handle = page_to_pfn(page) << OBJ_INDEX_BITS;
>> +	handle |= (obj_idx & OBJ_INDEX_MASK);
>> +
>> +	return (void *)handle;
>> +}
>> +
>> +/* Decode <page, obj_idx> pair from the given object handle */
>> +static void obj_handle_to_location(unsigned long handle, struct page **page,
>> +				unsigned long *obj_idx)
>> +{
>> +	*page = pfn_to_page(handle >> OBJ_INDEX_BITS);
>> +	*obj_idx = handle & OBJ_INDEX_MASK;
>> +}
>> +
>> +static unsigned long obj_idx_to_offset(struct page *page,
>> +				unsigned long obj_idx, int class_size)
>> +{
>> +	unsigned long off = 0;
>> +
>> +	if (!is_first_page(page))
>> +		off = page->index;
>> +
>> +	return off + obj_idx * class_size;
>> +}
>> +
>> +static void reset_page(struct page *page)
>> +{
>> +	clear_bit(PG_private, &page->flags);
>> +	clear_bit(PG_private_2, &page->flags);
>> +	set_page_private(page, 0);
>> +	page->mapping = NULL;
>> +	page->freelist = NULL;
>> +	reset_page_mapcount(page);
>> +}
>> +
>> +static void free_zspage(struct zs_ops *ops, struct page *first_page)
>> +{
>> +	struct page *nextp, *tmp, *head_extra;
>> +
>> +	BUG_ON(!is_first_page(first_page));
>> +	BUG_ON(first_page->inuse);
>> +
>> +	head_extra = (struct page *)page_private(first_page);
>> +
>> +	reset_page(first_page);
>> +	ops->free(first_page);
>> +
>> +	/* zspage with only 1 system page */
>> +	if (!head_extra)
>> +		return;
>> +
>> +	list_for_each_entry_safe(nextp, tmp, &head_extra->lru, lru) {
>> +		list_del(&nextp->lru);
>> +		reset_page(nextp);
>> +		ops->free(nextp);
>> +	}
>> +	reset_page(head_extra);
>> +	ops->free(head_extra);
>> +}
>> +
>> +/* Initialize a newly allocated zspage */
>> +static void init_zspage(struct page *first_page, struct size_class *class)
>> +{
>> +	unsigned long off = 0;
>> +	struct page *page = first_page;
>> +
>> +	BUG_ON(!is_first_page(first_page));
>> +	while (page) {
>> +		struct page *next_page;
>> +		struct link_free *link;
>> +		unsigned int i, objs_on_page;
>> +
>> +		/*
>> +		 * page->index stores offset of first object starting
>> +		 * in the page. For the first page, this is always 0,
>> +		 * so we use first_page->index (aka ->freelist) to store
>> +		 * head of corresponding zspage's freelist.
>> +		 */
>> +		if (page != first_page)
>> +			page->index = off;
>> +
>> +		link = (struct link_free *)kmap_atomic(page) +
>> +						off / sizeof(*link);
>> +		objs_on_page = (PAGE_SIZE - off) / class->size;
>> +
>> +		for (i = 1; i <= objs_on_page; i++) {
>> +			off += class->size;
>> +			if (off < PAGE_SIZE) {
>> +				link->next = obj_location_to_handle(page, i);
>> +				link += class->size / sizeof(*link);
>> +			}
>> +		}
>> +
>> +		/*
>> +		 * We now come to the last (full or partial) object on this
>> +		 * page, which must point to the first object on the next
>> +		 * page (if present)
>> +		 */
>> +		next_page = get_next_page(page);
>> +		link->next = obj_location_to_handle(next_page, 0);
>> +		kunmap_atomic(link);
>> +		page = next_page;
>> +		off = (off + class->size) % PAGE_SIZE;
>> +	}
>> +}
>> +
>> +/*
>> + * Allocate a zspage for the given size class
>> + */
>> +static struct page *alloc_zspage(struct zs_ops *ops, struct size_class *class,
>> +				gfp_t flags)
>> +{
>> +	int i, error;
>> +	struct page *first_page = NULL, *uninitialized_var(prev_page);
>> +
>> +	/*
>> +	 * Allocate individual pages and link them together as:
>> +	 * 1. first page->private = first sub-page
>> +	 * 2. all sub-pages are linked together using page->lru
>> +	 * 3. each sub-page is linked to the first page using page->first_page
>> +	 *
>> +	 * For each size class, First/Head pages are linked together using
>> +	 * page->lru. Also, we set PG_private to identify the first page
>> +	 * (i.e. no other sub-page has this flag set) and PG_private_2 to
>> +	 * identify the last page.
>> +	 */
>> +	error = -ENOMEM;
>> +	for (i = 0; i < class->pages_per_zspage; i++) {
>> +		struct page *page;
>> +
>> +		page = ops->alloc(flags);
>> +		if (!page)
>> +			goto cleanup;
>> +
>> +		INIT_LIST_HEAD(&page->lru);
>> +		if (i == 0) {	/* first page */
>> +			SetPagePrivate(page);
>> +			set_page_private(page, 0);
>> +			first_page = page;
>> +			first_page->inuse = 0;
>> +		}
>> +		if (i == 1)
>> +			first_page->private = (unsigned long)page;
>> +		if (i >= 1)
>> +			page->first_page = first_page;
>> +		if (i >= 2)
>> +			list_add(&page->lru, &prev_page->lru);
>> +		if (i == class->pages_per_zspage - 1)	/* last page */
>> +			SetPagePrivate2(page);
>> +		prev_page = page;
>> +	}
>> +
>> +	init_zspage(first_page, class);
>> +
>> +	first_page->freelist = obj_location_to_handle(first_page, 0);
>> +	/* Maximum number of objects we can store in this zspage */
>> +	first_page->objects = class->pages_per_zspage * PAGE_SIZE / class->size;
>> +
>> +	error = 0; /* Success */
>> +
>> +cleanup:
>> +	if (unlikely(error) && first_page) {
>> +		free_zspage(ops, first_page);
>> +		first_page = NULL;
>> +	}
>> +
>> +	return first_page;
>> +}
>> +
>> +static struct page *find_get_zspage(struct size_class *class)
>> +{
>> +	int i;
>> +	struct page *page;
>> +
>> +	for (i = 0; i < _ZS_NR_FULLNESS_GROUPS; i++) {
>> +		page = class->fullness_list[i];
>> +		if (page)
>> +			break;
>> +	}
>> +
>> +	return page;
>> +}
>> +
>> +#ifdef CONFIG_PGTABLE_MAPPING
>> +static inline int __zs_cpu_up(struct mapping_area *area)
>> +{
>> +	/*
>> +	 * Make sure we don't leak memory if a cpu UP notification
>> +	 * and zs_init() race and both call zs_cpu_up() on the same cpu
>> +	 */
>> +	if (area->vm)
>> +		return 0;
>> +	area->vm = alloc_vm_area(PAGE_SIZE * 2, NULL);
>> +	if (!area->vm)
>> +		return -ENOMEM;
>> +	return 0;
>> +}
>> +
>> +static inline void __zs_cpu_down(struct mapping_area *area)
>> +{
>> +	if (area->vm)
>> +		free_vm_area(area->vm);
>> +	area->vm = NULL;
>> +}
>> +
>> +static inline void *__zs_map_object(struct mapping_area *area,
>> +				struct page *pages[2], int off, int size)
>> +{
>> +	BUG_ON(map_vm_area(area->vm, PAGE_KERNEL, &pages));
>> +	area->vm_addr = area->vm->addr;
>> +	return area->vm_addr + off;
>> +}
>> +
>> +static inline void __zs_unmap_object(struct mapping_area *area,
>> +				struct page *pages[2], int off, int size)
>> +{
>> +	unsigned long addr = (unsigned long)area->vm_addr;
>> +	unsigned long end = addr + (PAGE_SIZE * 2);
>> +
>> +	flush_cache_vunmap(addr, end);
>> +	unmap_kernel_range_noflush(addr, PAGE_SIZE * 2);
>> +	flush_tlb_kernel_range(addr, end);
>> +}
>> +
>> +#else /* CONFIG_PGTABLE_MAPPING*/
>> +
>> +static inline int __zs_cpu_up(struct mapping_area *area)
>> +{
>> +	/*
>> +	 * Make sure we don't leak memory if a cpu UP notification
>> +	 * and zs_init() race and both call zs_cpu_up() on the same cpu
>> +	 */
>> +	if (area->vm_buf)
>> +		return 0;
>> +	area->vm_buf = (char *)__get_free_page(GFP_KERNEL);
>> +	if (!area->vm_buf)
>> +		return -ENOMEM;
>> +	return 0;
>> +}
>> +
>> +static inline void __zs_cpu_down(struct mapping_area *area)
>> +{
>> +	if (area->vm_buf)
>> +		free_page((unsigned long)area->vm_buf);
>> +	area->vm_buf = NULL;
>> +}
>> +
>> +static void *__zs_map_object(struct mapping_area *area,
>> +			struct page *pages[2], int off, int size)
>> +{
>> +	int sizes[2];
>> +	void *addr;
>> +	char *buf = area->vm_buf;
>> +
>> +	/* disable page faults to match kmap_atomic() return conditions */
>> +	pagefault_disable();
>> +
>> +	/* no read fastpath */
>> +	if (area->vm_mm == ZS_MM_WO)
>> +		goto out;
> 
> Current implementation of 'ZS_MM_WO' is not safe.
> For example, consider mapping 512 bytes and writing only 8 bytes.
> When we unmap the object, the remaining area will be filled with
> dummy bytes, not the original values.

I guess I should comment on the caveat with ZS_MM_WO.  The idea is that
the user is planning to initialize the entire allocation region.  So you
wouldn't use ZS_MM_WO to do a partial write to the region.  You'd have
to use ZS_MM_RW in that case so that the existing contents are preserved.

Worthy of a comment. I'd add one.

> 
> If above comments are not important for now, feel free to ignore them. :)

Great comments!  The insight they contain demonstrates that you have a
great understanding of the code, which I find encouraging (i.e. the
code is not too complex to be understood by reviewers).

Thanks,
Seth


^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCHv5 1/8] zsmalloc: add to mm/
  2013-02-19 17:54     ` Seth Jennings
@ 2013-02-19 23:37       ` Minchan Kim
  2013-02-22  9:24         ` Joonsoo Kim
  2013-02-20  2:42       ` Nitin Gupta
  1 sibling, 1 reply; 38+ messages in thread
From: Minchan Kim @ 2013-02-19 23:37 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Joonsoo Kim, Nitin Gupta, Andrew Morton, Greg Kroah-Hartman,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches,
	linux-mm, linux-kernel, devel

On Tue, Feb 19, 2013 at 11:54:21AM -0600, Seth Jennings wrote:
> On 02/19/2013 03:18 AM, Joonsoo Kim wrote:
> > Hello, Seth.
> > I'm not sure that this is the right time to review, because I have already
> > seen much effort by various people to promote the zxxx series. I don't want
> > to hold up that promotion. :)
> 
> Any time is good review time :)  Thanks for your review!
> 
> > 
> > But, I read the code, now, and then some comments below.
> > 
> > On Wed, Feb 13, 2013 at 12:38:44PM -0600, Seth Jennings wrote:
> >> =========
> >> DO NOT MERGE, FOR REVIEW ONLY
> >> This patch introduces zsmalloc as new code, however, it already
> >> exists in drivers/staging.  In order to build successfully, you
> >> must select EITHER the drivers/staging version OR this version.
> >> Once zsmalloc is reviewed in this format (and hopefully accepted),
> >> I will create a new patchset that properly promotes zsmalloc from
> >> staging.
> >> =========
> >>
> >> This patchset introduces a new slab-based memory allocator,
> >> zsmalloc, for storing compressed pages.  It is designed for
> >> low fragmentation and high allocation success rate on
> >> large (but <= PAGE_SIZE) allocations.
> >>
> >> zsmalloc differs from the kernel slab allocator in two primary
> >> ways to achieve these design goals.
> >>
> >> zsmalloc never requires high order page allocations to back
> >> slabs, or "size classes" in zsmalloc terms. Instead it allows
> >> multiple single-order pages to be stitched together into a
> >> "zspage" which backs the slab.  This allows for higher allocation
> >> success rate under memory pressure.
> >>
> >> Also, zsmalloc allows objects to span page boundaries within the
> >> zspage.  This allows for lower fragmentation than could be had
> >> with the kernel slab allocator for objects between PAGE_SIZE/2
> >> and PAGE_SIZE.  With the kernel slab allocator, if a page compresses
> >> to 60% of it original size, the memory savings gained through
> >> compression is lost in fragmentation because another object of
> >> the same size can't be stored in the leftover space.
> >>
> >> This ability to span pages results in zsmalloc allocations not being
> >> directly addressable by the user.  The user is given an
> >> non-dereferencable handle in response to an allocation request.
> >> That handle must be mapped, using zs_map_object(), which returns
> >> a pointer to the mapped region that can be used.  The mapping is
> >> necessary since the object data may reside in two different
> >> noncontiguous pages.
> >>
> >> zsmalloc fulfills the allocation needs for zram and zswap.
> >>
> >> Acked-by: Nitin Gupta <ngupta@vflare.org>
> >> Acked-by: Minchan Kim <minchan@kernel.org>
> >> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
> >> ---
> >>  include/linux/zsmalloc.h |   49 ++
> >>  mm/Kconfig               |   24 +
> >>  mm/Makefile              |    1 +
> >>  mm/zsmalloc.c            | 1124 ++++++++++++++++++++++++++++++++++++++++++++++
> >>  4 files changed, 1198 insertions(+)
> >>  create mode 100644 include/linux/zsmalloc.h
> >>  create mode 100644 mm/zsmalloc.c
> >>
> >> diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
> >> new file mode 100644
> >> index 0000000..eb6efb6
> >> --- /dev/null
> >> +++ b/include/linux/zsmalloc.h
> >> @@ -0,0 +1,49 @@
> >> +/*
> >> + * zsmalloc memory allocator
> >> + *
> >> + * Copyright (C) 2011  Nitin Gupta
> >> + *
> >> + * This code is released using a dual license strategy: BSD/GPL
> >> + * You can choose the license that better fits your requirements.
> >> + *
> >> + * Released under the terms of 3-clause BSD License
> >> + * Released under the terms of GNU General Public License Version 2.0
> >> + */
> >> +
> >> +#ifndef _ZS_MALLOC_H_
> >> +#define _ZS_MALLOC_H_
> >> +
> >> +#include <linux/types.h>
> >> +#include <linux/mm_types.h>
> >> +
> >> +/*
> >> + * zsmalloc mapping modes
> >> + *
> >> + * NOTE: These only make a difference when a mapped object spans pages
> >> + */
> >> +enum zs_mapmode {
> >> +	ZS_MM_RW, /* normal read-write mapping */
> >> +	ZS_MM_RO, /* read-only (no copy-out at unmap time) */
> >> +	ZS_MM_WO /* write-only (no copy-in at map time) */
> >> +};
> > 
> > 
> > These makes no difference for PGTABLE_MAPPING.
> > Please add some comment for this.
> 
> Yes. Will do.
> 
> > 
> >> +struct zs_ops {
> >> +	struct page * (*alloc)(gfp_t);
> >> +	void (*free)(struct page *);
> >> +};
> >> +
> >> +struct zs_pool;
> >> +
> >> +struct zs_pool *zs_create_pool(gfp_t flags, struct zs_ops *ops);
> >> +void zs_destroy_pool(struct zs_pool *pool);
> >> +
> >> +unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t flags);
> >> +void zs_free(struct zs_pool *pool, unsigned long obj);
> >> +
> >> +void *zs_map_object(struct zs_pool *pool, unsigned long handle,
> >> +			enum zs_mapmode mm);
> >> +void zs_unmap_object(struct zs_pool *pool, unsigned long handle);
> >> +
> >> +u64 zs_get_total_size_bytes(struct zs_pool *pool);
> >> +
> >> +#endif
> >> diff --git a/mm/Kconfig b/mm/Kconfig
> >> index 278e3ab..25b8f38 100644
> >> --- a/mm/Kconfig
> >> +++ b/mm/Kconfig
> >> @@ -446,3 +446,27 @@ config FRONTSWAP
> >>  	  and swap data is stored as normal on the matching swap device.
> >>  
> >>  	  If unsure, say Y to enable frontswap.
> >> +
> >> +config ZSMALLOC
> >> +	tristate "Memory allocator for compressed pages"
> >> +	default n
> >> +	help
> >> +	  zsmalloc is a slab-based memory allocator designed to store
> >> +	  compressed RAM pages.  zsmalloc uses virtual memory mapping
> >> +	  in order to reduce fragmentation.  However, this results in a
> >> +	  non-standard allocator interface where a handle, not a pointer, is
> >> +	  returned by an alloc().  This handle must be mapped in order to
> >> +	  access the allocated space.
> >> +
> >> +config PGTABLE_MAPPING
> >> +	bool "Use page table mapping to access object in zsmalloc"
> >> +	depends on ZSMALLOC
> >> +	help
> >> +	  By default, zsmalloc uses a copy-based object mapping method to
> >> +	  access allocations that span two pages. However, if a particular
> >> +	  architecture (ex, ARM) performs VM mapping faster than copying,
> >> +	  then you should select this. This causes zsmalloc to use page table
> >> +	  mapping rather than copying for object mapping.
> >> +
> >> +	  You can check speed with zsmalloc benchmark[1].
> >> +	  [1] https://github.com/spartacus06/zsmalloc
> >> diff --git a/mm/Makefile b/mm/Makefile
> >> index 3a46287..0f6ef0a 100644
> >> --- a/mm/Makefile
> >> +++ b/mm/Makefile
> >> @@ -58,3 +58,4 @@ obj-$(CONFIG_DEBUG_KMEMLEAK) += kmemleak.o
> >>  obj-$(CONFIG_DEBUG_KMEMLEAK_TEST) += kmemleak-test.o
> >>  obj-$(CONFIG_CLEANCACHE) += cleancache.o
> >>  obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
> >> +obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
> >> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> >> new file mode 100644
> >> index 0000000..34378ef
> >> --- /dev/null
> >> +++ b/mm/zsmalloc.c
> >> @@ -0,0 +1,1124 @@
> >> +/*
> >> + * zsmalloc memory allocator
> >> + *
> >> + * Copyright (C) 2011  Nitin Gupta
> >> + *
> >> + * This code is released using a dual license strategy: BSD/GPL
> >> + * You can choose the license that better fits your requirements.
> >> + *
> >> + * Released under the terms of 3-clause BSD License
> >> + * Released under the terms of GNU General Public License Version 2.0
> >> + */
> >> +
> >> +
> >> +/*
> >> + * This allocator is designed for use with zcache and zram. Thus, the
> >> + * allocator is supposed to work well under low memory conditions. In
> >> + * particular, it never attempts higher order page allocation which is
> >> + * very likely to fail under memory pressure. On the other hand, if we
> >> + * just use single (0-order) pages, it would suffer from very high
> >> + * fragmentation -- any object of size PAGE_SIZE/2 or larger would occupy
> >> + * an entire page. This was one of the major issues with its predecessor
> >> + * (xvmalloc).
> >> + *
> >> + * To overcome these issues, zsmalloc allocates a bunch of 0-order pages
> >> + * and links them together using various 'struct page' fields. These linked
> >> + * pages act as a single higher-order page i.e. an object can span 0-order
> >> + * page boundaries. The code refers to these linked pages as a single entity
> >> + * called zspage.
> >> + *
> >> + * For simplicity, zsmalloc can only allocate objects of size up to PAGE_SIZE
> >> + * since this satisfies the requirements of all its current users (in the
> >> + * worst case, page is incompressible and is thus stored "as-is" i.e. in
> >> + * uncompressed form). For allocation requests larger than this size, failure
> >> + * is returned (see zs_malloc).
> >> + *
> >> + * Additionally, zs_malloc() does not return a dereferenceable pointer.
> >> + * Instead, it returns an opaque handle (unsigned long) which encodes actual
> >> + * location of the allocated object. The reason for this indirection is that
> >> + * zsmalloc does not keep zspages permanently mapped since that would cause
> >> + * issues on 32-bit systems where the VA region for kernel space mappings
> >> is very small. So, before using the allocated memory, the object has to
> >> + * be mapped using zs_map_object() to get a usable pointer and subsequently
> >> + * unmapped using zs_unmap_object().
> >> + *
> >> + * Following is how we use various fields and flags of underlying
> >> + * struct page(s) to form a zspage.
> >> + *
> >> + * Usage of struct page fields:
> >> + *	page->first_page: points to the first component (0-order) page
> >> + *	page->index (union with page->freelist): offset of the first object
> >> + *		starting in this page. For the first page, this is
> >> + *		always 0, so we use this field (aka freelist) to point
> >> + *		to the first free object in zspage.
> >> + *	page->lru: links together all component pages (except the first page)
> >> + *		of a zspage
> >> + *
> >> + *	For _first_ page only:
> >> + *
> >> + *	page->private (union with page->first_page): refers to the
> >> + *		component page after the first page
> >> + *	page->freelist: points to the first free object in zspage.
> >> + *		Free objects are linked together using in-place
> >> + *		metadata.
> >> + *	page->objects: maximum number of objects we can store in this
> >> + *		zspage (class->zspage_order * PAGE_SIZE / class->size)
> > 
> > How about just embedding maximum number of objects to size_class?
> > For the SLUB, each slab can have difference number of objects.
> > But, for the zsmalloc, it is not possible, so there is no reason
> > to maintain it within metadata of zspage. Just to embed it to size_class
> > is sufficient.
> 
> Yes, a little code massaging and this can go away.
> 
> However, there might be some value in having variable sized zspages in
> the same size_class.  It could improve allocation success rate at the
> expense of efficiency by not failing in alloc_zspage() if we can't
> allocate the optimal number of pages.  As long as we can allocate the
> first page, then we can proceed.
> 
> Nitin care to weigh in?

Sorry, I'm not Nitin.
IMHO, Seth's idea is good, but at the moment it's just an idea.
We can easily add it in the future with some experimental results.
So I vote for Joonsoo's suggestion.

> 
> > 
> > 
> >> + *	page->lru: links together first pages of various zspages.
> >> + *		Basically forming list of zspages in a fullness group.
> >> + *	page->mapping: class index and fullness group of the zspage
> >> + *
> >> + * Usage of struct page flags:
> >> + *	PG_private: identifies the first component page
> >> + *	PG_private2: identifies the last component page
> >> + *
> >> + */
> >> +
> >> +#ifdef CONFIG_ZSMALLOC_DEBUG
> >> +#define DEBUG
> >> +#endif
> > 
> > Is this obsolete?
> 
> Yes, I'll remove it.
> 
> > 
> >> +#include <linux/module.h>
> >> +#include <linux/kernel.h>
> >> +#include <linux/bitops.h>
> >> +#include <linux/errno.h>
> >> +#include <linux/highmem.h>
> >> +#include <linux/init.h>
> >> +#include <linux/string.h>
> >> +#include <linux/slab.h>
> >> +#include <asm/tlbflush.h>
> >> +#include <asm/pgtable.h>
> >> +#include <linux/cpumask.h>
> >> +#include <linux/cpu.h>
> >> +#include <linux/vmalloc.h>
> >> +#include <linux/hardirq.h>
> >> +#include <linux/spinlock.h>
> >> +#include <linux/types.h>
> >> +
> >> +#include <linux/zsmalloc.h>
> >> +
> >> +/*
> >> + * This must be a power of 2 and greater than or equal to sizeof(link_free).
> >> + * These two conditions ensure that any 'struct link_free' itself doesn't
> >> + * span more than 1 page which avoids complex case of mapping 2 pages simply
> >> + * to restore link_free pointer values.
> >> + */
> >> +#define ZS_ALIGN		8
> >> +
> >> +/*
> >> + * A single 'zspage' is composed of up to 2^N discontiguous 0-order (single)
> >> + * pages. ZS_MAX_ZSPAGE_ORDER defines upper limit on N.
> >> + */
> >> +#define ZS_MAX_ZSPAGE_ORDER 2
> >> +#define ZS_MAX_PAGES_PER_ZSPAGE (_AC(1, UL) << ZS_MAX_ZSPAGE_ORDER)
> >> +
> >> +/*
> >> + * Object location (<PFN>, <obj_idx>) is encoded
> >> + * as a single (unsigned long) handle value.
> >> + *
> >> + * Note that object index <obj_idx> is relative to system
> >> + * page <PFN> it is stored in, so for each sub-page belonging
> >> + * to a zspage, obj_idx starts with 0.
> >> + *
> >> + * This is made more complicated by various memory models and PAE.
> >> + */
> >> +
> >> +#ifndef MAX_PHYSMEM_BITS
> >> +#ifdef CONFIG_HIGHMEM64G
> >> +#define MAX_PHYSMEM_BITS 36
> >> +#else /* !CONFIG_HIGHMEM64G */
> >> +/*
> >> + * If this definition of MAX_PHYSMEM_BITS is used, OBJ_INDEX_BITS will just
> >> + * be PAGE_SHIFT
> >> + */
> >> +#define MAX_PHYSMEM_BITS BITS_PER_LONG
> >> +#endif
> >> +#endif
> >> +#define _PFN_BITS		(MAX_PHYSMEM_BITS - PAGE_SHIFT)
> >> +#define OBJ_INDEX_BITS	(BITS_PER_LONG - _PFN_BITS)
> >> +#define OBJ_INDEX_MASK	((_AC(1, UL) << OBJ_INDEX_BITS) - 1)
> >> +
> >> +#define MAX(a, b) ((a) >= (b) ? (a) : (b))
> >> +/* ZS_MIN_ALLOC_SIZE must be multiple of ZS_ALIGN */
> >> +#define ZS_MIN_ALLOC_SIZE \
> >> +	MAX(32, (ZS_MAX_PAGES_PER_ZSPAGE << PAGE_SHIFT >> OBJ_INDEX_BITS))
> >> +#define ZS_MAX_ALLOC_SIZE	PAGE_SIZE
> >> +
> >> +/*
> >> + * On systems with 4K page size, this gives 254 size classes! There is a
> >> + * trade-off here:
> >> + *  - A large number of size classes is potentially wasteful as free pages
> >> + *    are spread across these classes
> >> + *  - A small number of size classes causes large internal fragmentation
> >> + *  - It is probably better to use specific size classes (empirically
> >> + *    determined). NOTE: all those class sizes must be set as multiple of
> >> + *    ZS_ALIGN to make sure link_free itself never has to span 2 pages.
> >> + *
> >> + *  ZS_MIN_ALLOC_SIZE and ZS_SIZE_CLASS_DELTA must be multiple of ZS_ALIGN
> >> + *  (reason above)
> >> + */
> >> +#define ZS_SIZE_CLASS_DELTA	(PAGE_SIZE >> 8)
> >> +#define ZS_SIZE_CLASSES		((ZS_MAX_ALLOC_SIZE - ZS_MIN_ALLOC_SIZE) / \
> >> +					ZS_SIZE_CLASS_DELTA + 1)
> >> +
> >> +/*
> >> + * We do not maintain any list for completely empty or full pages
> >> + */
> >> +enum fullness_group {
> >> +	ZS_ALMOST_FULL,
> >> +	ZS_ALMOST_EMPTY,
> >> +	_ZS_NR_FULLNESS_GROUPS,
> >> +
> >> +	ZS_EMPTY,
> >> +	ZS_FULL
> >> +};
> >> +
> >> +/*
> >> + * We assign a page to ZS_ALMOST_EMPTY fullness group when:
> >> + *	n <= N / f, where
> >> + * n = number of allocated objects
> >> + * N = total number of objects zspage can store
> >> + * f = fullness_threshold_frac
> >> + *
> >> + * Similarly, we assign zspage to:
> >> + *	ZS_ALMOST_FULL	when n > N / f
> >> + *	ZS_EMPTY	when n == 0
> >> + *	ZS_FULL		when n == N
> >> + *
> >> + * (see: fix_fullness_group())
> >> + */
> >> +static const int fullness_threshold_frac = 4;
> >> +
> >> +struct size_class {
> >> +	/*
> >> +	 * Size of objects stored in this class. Must be multiple
> >> +	 * of ZS_ALIGN.
> >> +	 */
> >> +	int size;
> >> +	unsigned int index;
> >> +
> >> +	/* Number of PAGE_SIZE sized pages to combine to form a 'zspage' */
> >> +	int pages_per_zspage;
> >> +
> >> +	spinlock_t lock;
> >> +
> >> +	/* stats */
> >> +	u64 pages_allocated;
> >> +
> >> +	struct page *fullness_list[_ZS_NR_FULLNESS_GROUPS];
> >> +};
> > 
> > Instead of simple pointer, how about using list_head?
> > With this, fullness_list management is easily consolidated to
> > set_zspage_mapping() and we can remove remove_zspage(), insert_zspage().
> 
> Makes sense to me.  Nitin what do you think?

I like it although I'm not Nitin.

> 
> > And how about maintaining FULL, EMPTY list?
> > There is not much memory waste and it can be used for debugging and
> > implementing other functionality.

Joonsoo, could you elaborate on the debugging and other functionality
you have in mind?
We need justification for a change rather than "it might be useful in
the future". Then we can judge whether we should do it right now or
can add it later when we really need it.

-- 
Kind regards,
Minchan Kim

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCHv5 1/8] zsmalloc: add to mm/
  2013-02-19 17:54     ` Seth Jennings
  2013-02-19 23:37       ` Minchan Kim
@ 2013-02-20  2:42       ` Nitin Gupta
  1 sibling, 0 replies; 38+ messages in thread
From: Nitin Gupta @ 2013-02-20  2:42 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Joonsoo Kim, Andrew Morton, Greg Kroah-Hartman, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches,
	linux-mm, linux-kernel, devel

On 02/19/2013 09:54 AM, Seth Jennings wrote:
> On 02/19/2013 03:18 AM, Joonsoo Kim wrote:
>> Hello, Seth.
>> I'm not sure that this is right time to review, because I already have
>> seen many effort of various people to promote zxxx series. I don't want to
>> be a stopper to promote these. :)
> 
> Any time is good review time :)  Thanks for your review!
> 
>>
>> But, I read the code, now, and then some comments below.
>>
>> On Wed, Feb 13, 2013 at 12:38:44PM -0600, Seth Jennings wrote:
>>> =========
>>> DO NOT MERGE, FOR REVIEW ONLY
>>> This patch introduces zsmalloc as new code, however, it already
>>> exists in drivers/staging.  In order to build successfully, you
>>> must select EITHER to driver/staging version OR this version.
>>> Once zsmalloc is reviewed in this format (and hopefully accepted),
>>> I will create a new patchset that properly promotes zsmalloc from
>>> staging.
>>> =========
>>>
>>> This patchset introduces a new slab-based memory allocator,
>>> zsmalloc, for storing compressed pages.  It is designed for
>>> low fragmentation and high allocation success rate on
>>> large object, but <= PAGE_SIZE allocations.
>>>
>>> zsmalloc differs from the kernel slab allocator in two primary
>>> ways to achieve these design goals.
>>>
>>> zsmalloc never requires high order page allocations to back
>>> slabs, or "size classes" in zsmalloc terms. Instead it allows
>>> multiple single-order pages to be stitched together into a
>>> "zspage" which backs the slab.  This allows for higher allocation
>>> success rate under memory pressure.
>>>
>>> Also, zsmalloc allows objects to span page boundaries within the
>>> zspage.  This allows for lower fragmentation than could be had
>>> with the kernel slab allocator for objects between PAGE_SIZE/2
>>> and PAGE_SIZE.  With the kernel slab allocator, if a page compresses
>>> to 60% of it original size, the memory savings gained through
>>> compression is lost in fragmentation because another object of
>>> the same size can't be stored in the leftover space.
>>>
>>> This ability to span pages results in zsmalloc allocations not being
>>> directly addressable by the user.  The user is given an
>>> non-dereferencable handle in response to an allocation request.
>>> That handle must be mapped, using zs_map_object(), which returns
>>> a pointer to the mapped region that can be used.  The mapping is
>>> necessary since the object data may reside in two different
>>> noncontigious pages.
>>>
>>> zsmalloc fulfills the allocation needs for zram and zswap.
>>>
>>> Acked-by: Nitin Gupta <ngupta@vflare.org>
>>> Acked-by: Minchan Kim <minchan@kernel.org>
>>> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
>>> ---
>>>  include/linux/zsmalloc.h |   49 ++
>>>  mm/Kconfig               |   24 +
>>>  mm/Makefile              |    1 +
>>>  mm/zsmalloc.c            | 1124 ++++++++++++++++++++++++++++++++++++++++++++++
>>>  4 files changed, 1198 insertions(+)
>>>  create mode 100644 include/linux/zsmalloc.h
>>>  create mode 100644 mm/zsmalloc.c
>>>
>>> diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
>>> new file mode 100644
>>> index 0000000..eb6efb6
>>> --- /dev/null
>>> +++ b/include/linux/zsmalloc.h
>>> @@ -0,0 +1,49 @@
>>> +/*
>>> + * zsmalloc memory allocator
>>> + *
>>> + * Copyright (C) 2011  Nitin Gupta
>>> + *
>>> + * This code is released using a dual license strategy: BSD/GPL
>>> + * You can choose the license that better fits your requirements.
>>> + *
>>> + * Released under the terms of 3-clause BSD License
>>> + * Released under the terms of GNU General Public License Version 2.0
>>> + */
>>> +
>>> +#ifndef _ZS_MALLOC_H_
>>> +#define _ZS_MALLOC_H_
>>> +
>>> +#include <linux/types.h>
>>> +#include <linux/mm_types.h>
>>> +
>>> +/*
>>> + * zsmalloc mapping modes
>>> + *
>>> + * NOTE: These only make a difference when a mapped object spans pages
>>> +*/
>>> +enum zs_mapmode {
>>> +	ZS_MM_RW, /* normal read-write mapping */
>>> +	ZS_MM_RO, /* read-only (no copy-out at unmap time) */
>>> +	ZS_MM_WO /* write-only (no copy-in at map time) */
>>> +};
>>
>>
>> These makes no difference for PGTABLE_MAPPING.
>> Please add some comment for this.
> 
> Yes. Will do.
> 
>>
>>> +struct zs_ops {
>>> +	struct page * (*alloc)(gfp_t);
>>> +	void (*free)(struct page *);
>>> +};
>>> +
>>> +struct zs_pool;
>>> +
>>> +struct zs_pool *zs_create_pool(gfp_t flags, struct zs_ops *ops);
>>> +void zs_destroy_pool(struct zs_pool *pool);
>>> +
>>> +unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t flags);
>>> +void zs_free(struct zs_pool *pool, unsigned long obj);
>>> +
>>> +void *zs_map_object(struct zs_pool *pool, unsigned long handle,
>>> +			enum zs_mapmode mm);
>>> +void zs_unmap_object(struct zs_pool *pool, unsigned long handle);
>>> +
>>> +u64 zs_get_total_size_bytes(struct zs_pool *pool);
>>> +
>>> +#endif
>>> diff --git a/mm/Kconfig b/mm/Kconfig
>>> index 278e3ab..25b8f38 100644
>>> --- a/mm/Kconfig
>>> +++ b/mm/Kconfig
>>> @@ -446,3 +446,27 @@ config FRONTSWAP
>>>  	  and swap data is stored as normal on the matching swap device.
>>>  
>>>  	  If unsure, say Y to enable frontswap.
>>> +
>>> +config ZSMALLOC
>>> +	tristate "Memory allocator for compressed pages"
>>> +	default n
>>> +	help
>>> +	  zsmalloc is a slab-based memory allocator designed to store
>>> +	  compressed RAM pages.  zsmalloc uses virtual memory mapping
>>> +	  in order to reduce fragmentation.  However, this results in a
>>> +	  non-standard allocator interface where a handle, not a pointer, is
>>> +	  returned by an alloc().  This handle must be mapped in order to
>>> +	  access the allocated space.
>>> +
>>> +config PGTABLE_MAPPING
>>> +	bool "Use page table mapping to access object in zsmalloc"
>>> +	depends on ZSMALLOC
>>> +	help
>>> +	  By default, zsmalloc uses a copy-based object mapping method to
>>> +	  access allocations that span two pages. However, if a particular
>>> +	  architecture (ex, ARM) performs VM mapping faster than copying,
>>> +	  then you should select this. This causes zsmalloc to use page table
>>> +	  mapping rather than copying for object mapping.
>>> +
>>> +	  You can check speed with zsmalloc benchmark[1].
>>> +	  [1] https://github.com/spartacus06/zsmalloc
>>> diff --git a/mm/Makefile b/mm/Makefile
>>> index 3a46287..0f6ef0a 100644
>>> --- a/mm/Makefile
>>> +++ b/mm/Makefile
>>> @@ -58,3 +58,4 @@ obj-$(CONFIG_DEBUG_KMEMLEAK) += kmemleak.o
>>>  obj-$(CONFIG_DEBUG_KMEMLEAK_TEST) += kmemleak-test.o
>>>  obj-$(CONFIG_CLEANCACHE) += cleancache.o
>>>  obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
>>> +obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
>>> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
>>> new file mode 100644
>>> index 0000000..34378ef
>>> --- /dev/null
>>> +++ b/mm/zsmalloc.c
>>> @@ -0,0 +1,1124 @@
>>> +/*
>>> + * zsmalloc memory allocator
>>> + *
>>> + * Copyright (C) 2011  Nitin Gupta
>>> + *
>>> + * This code is released using a dual license strategy: BSD/GPL
>>> + * You can choose the license that better fits your requirements.
>>> + *
>>> + * Released under the terms of 3-clause BSD License
>>> + * Released under the terms of GNU General Public License Version 2.0
>>> + */
>>> +
>>> +
>>> +/*
>>> + * This allocator is designed for use with zcache and zram. Thus, the
>>> + * allocator is supposed to work well under low memory conditions. In
>>> + * particular, it never attempts higher order page allocation which is
>>> + * very likely to fail under memory pressure. On the other hand, if we
>>> + * just use single (0-order) pages, it would suffer from very high
>>> + * fragmentation -- any object of size PAGE_SIZE/2 or larger would occupy
>>> + * an entire page. This was one of the major issues with its predecessor
>>> + * (xvmalloc).
>>> + *
>>> + * To overcome these issues, zsmalloc allocates a bunch of 0-order pages
>>> + * and links them together using various 'struct page' fields. These linked
>>> + * pages act as a single higher-order page i.e. an object can span 0-order
>>> + * page boundaries. The code refers to these linked pages as a single entity
>>> + * called zspage.
>>> + *
>>> + * For simplicity, zsmalloc can only allocate objects of size up to PAGE_SIZE
>>> + * since this satisfies the requirements of all its current users (in the
>>> + * worst case, page is incompressible and is thus stored "as-is" i.e. in
>>> + * uncompressed form). For allocation requests larger than this size, failure
>>> + * is returned (see zs_malloc).
>>> + *
>>> + * Additionally, zs_malloc() does not return a dereferenceable pointer.
>>> + * Instead, it returns an opaque handle (unsigned long) which encodes actual
>>> + * location of the allocated object. The reason for this indirection is that
>>> + * zsmalloc does not keep zspages permanently mapped since that would cause
>>> + * issues on 32-bit systems where the VA region for kernel space mappings
>>> + * is very small. So, before using the allocating memory, the object has to
>>> + * be mapped using zs_map_object() to get a usable pointer and subsequently
>>> + * unmapped using zs_unmap_object().
>>> + *
>>> + * Following is how we use various fields and flags of underlying
>>> + * struct page(s) to form a zspage.
>>> + *
>>> + * Usage of struct page fields:
>>> + *	page->first_page: points to the first component (0-order) page
>>> + *	page->index (union with page->freelist): offset of the first object
>>> + *		starting in this page. For the first page, this is
>>> + *		always 0, so we use this field (aka freelist) to point
>>> + *		to the first free object in zspage.
>>> + *	page->lru: links together all component pages (except the first page)
>>> + *		of a zspage
>>> + *
>>> + *	For _first_ page only:
>>> + *
>>> + *	page->private (union with page->first_page): refers to the
>>> + *		component page after the first page
>>> + *	page->freelist: points to the first free object in zspage.
>>> + *		Free objects are linked together using in-place
>>> + *		metadata.
>>> + *	page->objects: maximum number of objects we can store in this
>>> + *		zspage (class->zspage_order * PAGE_SIZE / class->size)
>>
>> How about just embedding maximum number of objects to size_class?
>> For the SLUB, each slab can have difference number of objects.
>> But, for the zsmalloc, it is not possible, so there is no reason
>> to maintain it within metadata of zspage. Just to embed it to size_class
>> is sufficient.
> 
> Yes, a little code massaging and this can go away.
> 
> However, there might be some value in having variable sized zspages in
> the same size_class.  It could improve allocation success rate at the
> expense of efficiency by not failing in alloc_zspage() if we can't
> allocate the optimal number of pages.  As long as we can allocate the
> first page, then we can proceed.
> 

Yes, I remember trying to allow partial failures, and thus variable-sized
zspages within the same size class, but I skipped that since the value
wasn't clear: if the system cannot give us 4 (non-contiguous) pages, then
it's probably going to fail allocation requests for even single pages in
the not-so-distant future. Also, for objects > PAGE_SIZE/2, falling back
to a zspage of just PAGE_SIZE is not going to help either.

>>
>>
>>> + *	page->lru: links together first pages of various zspages.
>>> + *		Basically forming list of zspages in a fullness group.
>>> + *	page->mapping: class index and fullness group of the zspage
>>> + *
>>> + * Usage of struct page flags:
>>> + *	PG_private: identifies the first component page
>>> + *	PG_private2: identifies the last component page
>>> + *
>>> + */
>>> +
>>> +#ifdef CONFIG_ZSMALLOC_DEBUG
>>> +#define DEBUG
>>> +#endif
>>
>> Is this obsolete?
> 
> Yes, I'll remove it.
> 
>>
>>> +#include <linux/module.h>
>>> +#include <linux/kernel.h>
>>> +#include <linux/bitops.h>
>>> +#include <linux/errno.h>
>>> +#include <linux/highmem.h>
>>> +#include <linux/init.h>
>>> +#include <linux/string.h>
>>> +#include <linux/slab.h>
>>> +#include <asm/tlbflush.h>
>>> +#include <asm/pgtable.h>
>>> +#include <linux/cpumask.h>
>>> +#include <linux/cpu.h>
>>> +#include <linux/vmalloc.h>
>>> +#include <linux/hardirq.h>
>>> +#include <linux/spinlock.h>
>>> +#include <linux/types.h>
>>> +
>>> +#include <linux/zsmalloc.h>
>>> +
>>> +/*
>>> + * This must be power of 2 and greater than of equal to sizeof(link_free).
>>> + * These two conditions ensure that any 'struct link_free' itself doesn't
>>> + * span more than 1 page which avoids complex case of mapping 2 pages simply
>>> + * to restore link_free pointer values.
>>> + */
>>> +#define ZS_ALIGN		8
>>> +
>>> +/*
>>> + * A single 'zspage' is composed of up to 2^N discontiguous 0-order (single)
>>> + * pages. ZS_MAX_ZSPAGE_ORDER defines upper limit on N.
>>> + */
>>> +#define ZS_MAX_ZSPAGE_ORDER 2
>>> +#define ZS_MAX_PAGES_PER_ZSPAGE (_AC(1, UL) << ZS_MAX_ZSPAGE_ORDER)
>>> +
>>> +/*
>>> + * Object location (<PFN>, <obj_idx>) is encoded as
>>> + * as single (unsigned long) handle value.
>>> + *
>>> + * Note that object index <obj_idx> is relative to system
>>> + * page <PFN> it is stored in, so for each sub-page belonging
>>> + * to a zspage, obj_idx starts with 0.
>>> + *
>>> + * This is made more complicated by various memory models and PAE.
>>> + */
>>> +
>>> +#ifndef MAX_PHYSMEM_BITS
>>> +#ifdef CONFIG_HIGHMEM64G
>>> +#define MAX_PHYSMEM_BITS 36
>>> +#else /* !CONFIG_HIGHMEM64G */
>>> +/*
>>> + * If this definition of MAX_PHYSMEM_BITS is used, OBJ_INDEX_BITS will just
>>> + * be PAGE_SHIFT
>>> + */
>>> +#define MAX_PHYSMEM_BITS BITS_PER_LONG
>>> +#endif
>>> +#endif
>>> +#define _PFN_BITS		(MAX_PHYSMEM_BITS - PAGE_SHIFT)
>>> +#define OBJ_INDEX_BITS	(BITS_PER_LONG - _PFN_BITS)
>>> +#define OBJ_INDEX_MASK	((_AC(1, UL) << OBJ_INDEX_BITS) - 1)
>>> +
>>> +#define MAX(a, b) ((a) >= (b) ? (a) : (b))
>>> +/* ZS_MIN_ALLOC_SIZE must be multiple of ZS_ALIGN */
>>> +#define ZS_MIN_ALLOC_SIZE \
>>> +	MAX(32, (ZS_MAX_PAGES_PER_ZSPAGE << PAGE_SHIFT >> OBJ_INDEX_BITS))
>>> +#define ZS_MAX_ALLOC_SIZE	PAGE_SIZE
>>> +
>>> +/*
>>> + * On systems with 4K page size, this gives 254 size classes! There is a
>>> + * trader-off here:
>>> + *  - Large number of size classes is potentially wasteful as free page are
>>> + *    spread across these classes
>>> + *  - Small number of size classes causes large internal fragmentation
>>> + *  - Probably its better to use specific size classes (empirically
>>> + *    determined). NOTE: all those class sizes must be set as multiple of
>>> + *    ZS_ALIGN to make sure link_free itself never has to span 2 pages.
>>> + *
>>> + *  ZS_MIN_ALLOC_SIZE and ZS_SIZE_CLASS_DELTA must be multiple of ZS_ALIGN
>>> + *  (reason above)
>>> + */
>>> +#define ZS_SIZE_CLASS_DELTA	(PAGE_SIZE >> 8)
>>> +#define ZS_SIZE_CLASSES		((ZS_MAX_ALLOC_SIZE - ZS_MIN_ALLOC_SIZE) / \
>>> +					ZS_SIZE_CLASS_DELTA + 1)
>>> +
>>> +/*
>>> + * We do not maintain any list for completely empty or full pages
>>> + */
>>> +enum fullness_group {
>>> +	ZS_ALMOST_FULL,
>>> +	ZS_ALMOST_EMPTY,
>>> +	_ZS_NR_FULLNESS_GROUPS,
>>> +
>>> +	ZS_EMPTY,
>>> +	ZS_FULL
>>> +};
>>> +
>>> +/*
>>> + * We assign a page to ZS_ALMOST_EMPTY fullness group when:
>>> + *	n <= N / f, where
>>> + * n = number of allocated objects
>>> + * N = total number of objects zspage can store
>>> + * f = 1/fullness_threshold_frac
>>> + *
>>> + * Similarly, we assign zspage to:
>>> + *	ZS_ALMOST_FULL	when n > N / f
>>> + *	ZS_EMPTY	when n == 0
>>> + *	ZS_FULL		when n == N
>>> + *
>>> + * (see: fix_fullness_group())
>>> + */
>>> +static const int fullness_threshold_frac = 4;
>>> +
>>> +struct size_class {
>>> +	/*
>>> +	 * Size of objects stored in this class. Must be multiple
>>> +	 * of ZS_ALIGN.
>>> +	 */
>>> +	int size;
>>> +	unsigned int index;
>>> +
>>> +	/* Number of PAGE_SIZE sized pages to combine to form a 'zspage' */
>>> +	int pages_per_zspage;
>>> +
>>> +	spinlock_t lock;
>>> +
>>> +	/* stats */
>>> +	u64 pages_allocated;
>>> +
>>> +	struct page *fullness_list[_ZS_NR_FULLNESS_GROUPS];
>>> +};
>>
>> Instead of simple pointer, how about using list_head?
>> With this, fullness_list management is easily consolidated to
>> set_zspage_mapping() and we can remove remove_zspage(), insert_zspage().
> 

Yes seems like a nice cleanup. However, I think we should defer it till
it's promoted out of staging.

Thanks a lot for the reviews and comments.
Nitin



* Re: [PATCHv5 4/8] zswap: add to mm/
  2013-02-18 19:55       ` Dan Magenheimer
  2013-02-18 20:39         ` Seth Jennings
@ 2013-02-20 20:37         ` Seth Jennings
  1 sibling, 0 replies; 38+ messages in thread
From: Seth Jennings @ 2013-02-20 20:37 UTC (permalink / raw)
  To: Ric Mason
  Cc: Dan Magenheimer, Andrew Morton, Greg Kroah-Hartman, Nitin Gupta,
	Minchan Kim, Konrad Wilk, Robert Jennings, Jenifer Hopper,
	Mel Gorman, Johannes Weiner, Rik van Riel, Larry Woodman,
	Benjamin Herrenschmidt, Dave Hansen, Joe Perches, linux-mm,
	linux-kernel, devel

On 02/18/2013 01:55 PM, Dan Magenheimer wrote:
>> From: Seth Jennings [mailto:sjenning@linux.vnet.ibm.com]
>> Subject: Re: [PATCHv5 4/8] zswap: add to mm/
>>
>> On 02/15/2013 10:04 PM, Ric Mason wrote:
>>> On 02/14/2013 02:38 AM, Seth Jennings wrote:
>> <snip>
>>>> + * The statistics below are not protected from concurrent access for
>>>> + * performance reasons so they may not be a 100% accurate.  However,
>>>> + * the do provide useful information on roughly how many times a
>>>
>>> s/the/they
>>
>> Ah yes, thanks :)
>>
>>>
>>>> + * certain event is occurring.
>>>> +*/
>>>> +static u64 zswap_pool_limit_hit;
>>>> +static u64 zswap_reject_compress_poor;
>>>> +static u64 zswap_reject_zsmalloc_fail;
>>>> +static u64 zswap_reject_kmemcache_fail;
>>>> +static u64 zswap_duplicate_entry;
>>>> +
>>>> +/*********************************
>>>> +* tunables
>>>> +**********************************/
>>>> +/* Enable/disable zswap (disabled by default, fixed at boot for
>>>> now) */
>>>> +static bool zswap_enabled;
>>>> +module_param_named(enabled, zswap_enabled, bool, 0);
>>>
>>> please document in Documentation/kernel-parameters.txt.
>>
>> Will do.
> 
> Is that a good idea?  Konrad's frontswap/cleancache patches
> to fix frontswap/cleancache initialization so that backends
> can be built/loaded as modules may be merged for 3.9.
> AFAIK, module parameters are not included in kernel-parameters.txt.

Good point.  I'm looking to make zswap modular in the not too distant
future.  I'll wait on this for now.

Seth



* Re: [PATCHv5 2/8] zsmalloc: add documentation
  2013-02-18 19:16     ` Seth Jennings
@ 2013-02-21  8:49       ` Ric Mason
  2013-02-21 15:50         ` Seth Jennings
  0 siblings, 1 reply; 38+ messages in thread
From: Ric Mason @ 2013-02-21  8:49 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches,
	linux-mm, linux-kernel, devel

On 02/19/2013 03:16 AM, Seth Jennings wrote:
> On 02/16/2013 12:21 AM, Ric Mason wrote:
>> On 02/14/2013 02:38 AM, Seth Jennings wrote:
>>> This patch adds a documentation file for zsmalloc at
>>> Documentation/vm/zsmalloc.txt
>>>
>>> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
>>> ---
>>>    Documentation/vm/zsmalloc.txt |   68
>>> +++++++++++++++++++++++++++++++++++++++++
>>>    1 file changed, 68 insertions(+)
>>>    create mode 100644 Documentation/vm/zsmalloc.txt
>>>
>>> diff --git a/Documentation/vm/zsmalloc.txt
>>> b/Documentation/vm/zsmalloc.txt
>>> new file mode 100644
>>> index 0000000..85aa617
>>> --- /dev/null
>>> +++ b/Documentation/vm/zsmalloc.txt
>>> @@ -0,0 +1,68 @@
>>> +zsmalloc Memory Allocator
>>> +
>>> +Overview
>>> +
>>> +zmalloc a new slab-based memory allocator,
>>> +zsmalloc, for storing compressed pages.  It is designed for
>>> +low fragmentation and high allocation success rate on
>>> +large object, but <= PAGE_SIZE allocations.
>>> +
>>> +zsmalloc differs from the kernel slab allocator in two primary
>>> +ways to achieve these design goals.
>>> +
>>> +zsmalloc never requires high order page allocations to back
>>> +slabs, or "size classes" in zsmalloc terms. Instead it allows
>>> +multiple single-order pages to be stitched together into a
>>> +"zspage" which backs the slab.  This allows for higher allocation
>>> +success rate under memory pressure.
>>> +
>>> +Also, zsmalloc allows objects to span page boundaries within the
>>> +zspage.  This allows for lower fragmentation than could be had
>>> +with the kernel slab allocator for objects between PAGE_SIZE/2
>>> +and PAGE_SIZE.  With the kernel slab allocator, if a page compresses
>>> +to 60% of it original size, the memory savings gained through
>>> +compression is lost in fragmentation because another object of
>>> +the same size can't be stored in the leftover space.
>>> +
>>> +This ability to span pages results in zsmalloc allocations not being
>>> +directly addressable by the user.  The user is given an
>>> +non-dereferencable handle in response to an allocation request.
>>> +That handle must be mapped, using zs_map_object(), which returns
>>> +a pointer to the mapped region that can be used.  The mapping is
>>> +necessary since the object data may reside in two different
>>> +noncontigious pages.
>> Do you mean the reason of  to use a zsmalloc object must map after
>> malloc is object data maybe reside in two different nocontiguous pages?
> Yes, that is one reason for the mapping.  The other reason (more of an
> added bonus) is below.
>
>>> +
>>> +For 32-bit systems, zsmalloc has the added benefit of being
>>> +able to back slabs with HIGHMEM pages, something not possible
>> What's the meaning of "back slabs with HIGHMEM pages"?
> By HIGHMEM, I'm referring to the HIGHMEM memory zone on 32-bit systems
> with larger that 1GB (actually a little less) of RAM.  The upper 3GB
> of the 4GB address space, depending on kernel build options, is not
> directly addressable by the kernel, but can be mapped into the kernel
> address space with functions like kmap() or kmap_atomic().
>
> These pages can't be used by slab/slub because they are not
> continuously mapped into the kernel address space.  However, since
> zsmalloc requires a mapping anyway to handle objects that span
> non-contiguous page boundaries, we do the kernel mapping as part of
> the process.
>
> So zspages, the conceptual slab in zsmalloc backed by single-order
> pages can include pages from the HIGHMEM zone as well.

Thanks for the clarification.
In http://lwn.net/Articles/537422/, your article about zswap on LWN, you
write: "Additionally, the kernel slab allocator does not allow objects
that are less than a page in size to span a page boundary. This means
that if an object is PAGE_SIZE/2 + 1 bytes in size, it effectively uses
an entire page, resulting in ~50% waste. Hence there are *no kmalloc()
cache sizes* between PAGE_SIZE/2 and PAGE_SIZE."
Are you sure? It seems that the kmalloc caches support big sizes; you
can check include/linux/kmalloc_sizes.h.

>
> Seth
>



* Re: [PATCHv5 2/8] zsmalloc: add documentation
  2013-02-21  8:49       ` Ric Mason
@ 2013-02-21 15:50         ` Seth Jennings
  2013-02-21 16:20           ` Dan Magenheimer
                             ` (2 more replies)
  0 siblings, 3 replies; 38+ messages in thread
From: Seth Jennings @ 2013-02-21 15:50 UTC (permalink / raw)
  To: Ric Mason
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches,
	linux-mm, linux-kernel, devel

On 02/21/2013 02:49 AM, Ric Mason wrote:
> On 02/19/2013 03:16 AM, Seth Jennings wrote:
>> On 02/16/2013 12:21 AM, Ric Mason wrote:
>>> On 02/14/2013 02:38 AM, Seth Jennings wrote:
>>>> This patch adds a documentation file for zsmalloc at
>>>> Documentation/vm/zsmalloc.txt
>>>>
>>>> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
>>>> ---
>>>>    Documentation/vm/zsmalloc.txt |   68
>>>> +++++++++++++++++++++++++++++++++++++++++
>>>>    1 file changed, 68 insertions(+)
>>>>    create mode 100644 Documentation/vm/zsmalloc.txt
>>>>
>>>> diff --git a/Documentation/vm/zsmalloc.txt
>>>> b/Documentation/vm/zsmalloc.txt
>>>> new file mode 100644
>>>> index 0000000..85aa617
>>>> --- /dev/null
>>>> +++ b/Documentation/vm/zsmalloc.txt
>>>> @@ -0,0 +1,68 @@
>>>> +zsmalloc Memory Allocator
>>>> +
>>>> +Overview
>>>> +
>>>> +zmalloc a new slab-based memory allocator,
>>>> +zsmalloc, for storing compressed pages.  It is designed for
>>>> +low fragmentation and high allocation success rate on
>>>> +large object, but <= PAGE_SIZE allocations.
>>>> +
>>>> +zsmalloc differs from the kernel slab allocator in two primary
>>>> +ways to achieve these design goals.
>>>> +
>>>> +zsmalloc never requires high order page allocations to back
>>>> +slabs, or "size classes" in zsmalloc terms. Instead it allows
>>>> +multiple single-order pages to be stitched together into a
>>>> +"zspage" which backs the slab.  This allows for higher allocation
>>>> +success rate under memory pressure.
>>>> +
>>>> +Also, zsmalloc allows objects to span page boundaries within the
>>>> +zspage.  This allows for lower fragmentation than could be had
>>>> +with the kernel slab allocator for objects between PAGE_SIZE/2
>>>> +and PAGE_SIZE.  With the kernel slab allocator, if a page compresses
>>>> +to 60% of it original size, the memory savings gained through
>>>> +compression is lost in fragmentation because another object of
>>>> +the same size can't be stored in the leftover space.
>>>> +
>>>> +This ability to span pages results in zsmalloc allocations not being
>>>> +directly addressable by the user.  The user is given an
>>>> +non-dereferencable handle in response to an allocation request.
>>>> +That handle must be mapped, using zs_map_object(), which returns
>>>> +a pointer to the mapped region that can be used.  The mapping is
>>>> +necessary since the object data may reside in two different
>>>> +noncontigious pages.
>>> Do you mean the reason of  to use a zsmalloc object must map after
>>> malloc is object data maybe reside in two different nocontiguous pages?
>> Yes, that is one reason for the mapping.  The other reason (more of an
>> added bonus) is below.
>>
>>>> +
>>>> +For 32-bit systems, zsmalloc has the added benefit of being
>>>> +able to back slabs with HIGHMEM pages, something not possible
>>> What's the meaning of "back slabs with HIGHMEM pages"?
>> By HIGHMEM, I'm referring to the HIGHMEM memory zone on 32-bit systems
>> with larger that 1GB (actually a little less) of RAM.  The upper 3GB
>> of the 4GB address space, depending on kernel build options, is not
>> directly addressable by the kernel, but can be mapped into the kernel
>> address space with functions like kmap() or kmap_atomic().
>>
>> These pages can't be used by slab/slub because they are not
>> continuously mapped into the kernel address space.  However, since
>> zsmalloc requires a mapping anyway to handle objects that span
>> non-contiguous page boundaries, we do the kernel mapping as part of
>> the process.
>>
>> So zspages, the conceptual slab in zsmalloc backed by single-order
>> pages can include pages from the HIGHMEM zone as well.
> 
> Thanks for your clarify,
>  http://lwn.net/Articles/537422/, your article about zswap in lwn.
>  "Additionally, the kernel slab allocator does not allow objects that
> are less
> than a page in size to span a page boundary. This means that if an
> object is
> PAGE_SIZE/2 + 1 bytes in size, it effectively use an entire page,
> resulting in
> ~50% waste. Hense there are *no kmalloc() cache size* between
> PAGE_SIZE/2 and
> PAGE_SIZE."
> Are your sure? It seems that kmalloc cache support big size, your can
> check in
> include/linux/kmalloc_sizes.h

Yes, kmalloc can allocate large objects > PAGE_SIZE, but there are no
cache sizes _between_ PAGE_SIZE/2 and PAGE_SIZE.  For example, on a
system with 4k pages, there are no caches between kmalloc-2048 and
kmalloc-4096.

Seth



* RE: [PATCHv5 2/8] zsmalloc: add documentation
  2013-02-21 15:50         ` Seth Jennings
@ 2013-02-21 16:20           ` Dan Magenheimer
  2013-02-22  2:56           ` Ric Mason
  2013-02-22  2:59           ` Ric Mason
  2 siblings, 0 replies; 38+ messages in thread
From: Dan Magenheimer @ 2013-02-21 16:20 UTC (permalink / raw)
  To: Seth Jennings, Ric Mason
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Wilk, Robert Jennings, Jenifer Hopper, Mel Gorman,
	Johannes Weiner, Rik van Riel, Larry Woodman,
	Benjamin Herrenschmidt, Dave Hansen, Joe Perches, linux-mm,
	linux-kernel, devel

> From: Seth Jennings [mailto:sjenning@linux.vnet.ibm.com]
> Subject: Re: [PATCHv5 2/8] zsmalloc: add documentation
> 
> On 02/21/2013 02:49 AM, Ric Mason wrote:
> > On 02/19/2013 03:16 AM, Seth Jennings wrote:
> >> On 02/16/2013 12:21 AM, Ric Mason wrote:
> >>> On 02/14/2013 02:38 AM, Seth Jennings wrote:
> >>>> This patch adds a documentation file for zsmalloc at
> >>>> Documentation/vm/zsmalloc.txt
> >>>>
> >>>> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
> >>>> ---
> >>>>    Documentation/vm/zsmalloc.txt |   68
> >>>> +++++++++++++++++++++++++++++++++++++++++
> >>>>    1 file changed, 68 insertions(+)
> >>>>    create mode 100644 Documentation/vm/zsmalloc.txt
> >>>>
> >>>> diff --git a/Documentation/vm/zsmalloc.txt
> >>>> b/Documentation/vm/zsmalloc.txt
> >>>> new file mode 100644
> >>>> index 0000000..85aa617
> >>>> --- /dev/null
> >>>> +++ b/Documentation/vm/zsmalloc.txt
> >>>> @@ -0,0 +1,68 @@
> >>>> +zsmalloc Memory Allocator
> >>>> +
> >>>> +Overview
> >>>> +
> >>>> +zmalloc a new slab-based memory allocator,
> >>>> +zsmalloc, for storing compressed pages.  It is designed for
> >>>> +low fragmentation and high allocation success rate on
> >>>> +large object, but <= PAGE_SIZE allocations.
> >>>> +
> >>>> +zsmalloc differs from the kernel slab allocator in two primary
> >>>> +ways to achieve these design goals.
> >>>> +
> >>>> +zsmalloc never requires high order page allocations to back
> >>>> +slabs, or "size classes" in zsmalloc terms. Instead it allows
> >>>> +multiple single-order pages to be stitched together into a
> >>>> +"zspage" which backs the slab.  This allows for higher allocation
> >>>> +success rate under memory pressure.
> >>>> +
> >>>> +Also, zsmalloc allows objects to span page boundaries within the
> >>>> +zspage.  This allows for lower fragmentation than could be had
> >>>> +with the kernel slab allocator for objects between PAGE_SIZE/2
> >>>> +and PAGE_SIZE.  With the kernel slab allocator, if a page compresses
> >>>> +to 60% of it original size, the memory savings gained through
> >>>> +compression is lost in fragmentation because another object of
> >>>> +the same size can't be stored in the leftover space.
> >>>> +
> >>>> +This ability to span pages results in zsmalloc allocations not being
> >>>> +directly addressable by the user.  The user is given an
> >>>> +non-dereferencable handle in response to an allocation request.
> >>>> +That handle must be mapped, using zs_map_object(), which returns
> >>>> +a pointer to the mapped region that can be used.  The mapping is
> >>>> +necessary since the object data may reside in two different
> >>>> +noncontigious pages.
> >>> Do you mean that the reason a zsmalloc object must be mapped after
> >>> allocation is that the object data may reside in two noncontiguous pages?
> >> Yes, that is one reason for the mapping.  The other reason (more of an
> >> added bonus) is below.
> >>
> >>>> +
> >>>> +For 32-bit systems, zsmalloc has the added benefit of being
> >>>> +able to back slabs with HIGHMEM pages, something not possible
> >>> What's the meaning of "back slabs with HIGHMEM pages"?
> >> By HIGHMEM, I'm referring to the HIGHMEM memory zone on 32-bit systems
> >> with larger than 1GB (actually a little less) of RAM.  The upper 3GB
> >> of the 4GB address space, depending on kernel build options, is not
> >> directly addressable by the kernel, but can be mapped into the kernel
> >> address space with functions like kmap() or kmap_atomic().
> >>
> >> These pages can't be used by slab/slub because they are not
> >> continuously mapped into the kernel address space.  However, since
> >> zsmalloc requires a mapping anyway to handle objects that span
> >> non-contiguous page boundaries, we do the kernel mapping as part of
> >> the process.
> >>
> >> So zspages, the conceptual slab in zsmalloc backed by single-order
> >> pages can include pages from the HIGHMEM zone as well.
> >
> > Thanks for the clarification.
> >  http://lwn.net/Articles/537422/, your article about zswap in lwn.
> >  "Additionally, the kernel slab allocator does not allow objects that
> > are less
> > than a page in size to span a page boundary. This means that if an
> > object is
> > PAGE_SIZE/2 + 1 bytes in size, it effectively uses an entire page,
> > resulting in
> > ~50% waste. Hence there are *no kmalloc() cache sizes* between
> > PAGE_SIZE/2 and
> > PAGE_SIZE."
> > Are you sure? It seems that kmalloc caches support big sizes; you can
> > check include/linux/kmalloc_sizes.h
> 
> Yes, kmalloc can allocate large objects > PAGE_SIZE, but there are no
> cache sizes _between_ PAGE_SIZE/2 and PAGE_SIZE.  For example, on a
> system with 4k pages, there are no caches between kmalloc-2048 and
> kmalloc-4096.

Important and left unsaid here is that, in many workloads, the
distribution of compressed pages ("zpages") will have as many
as half or more with compressed size ("zsize") between PAGE_SIZE/2
and PAGE_SIZE.  And, in many workloads, the majority of values for
zsize will be much closer to PAGE_SIZE/2 than PAGE_SIZE, which will
result in a great deal of wasted space if slab were used.

And, also very important, kmalloc requires page allocations with
"order > 0" (2**n contiguous pages) to deal with "big size objects".
In-kernel compression would need many of these and they are difficult
(often impossible) to allocate when the system is under memory
pressure.

As a result, various other allocators have been written: first
xvmalloc, then zbud, then zsmalloc.  Each of these depends only
on order==0 page allocations, and each has ways of dealing with
high quantities of zpages with PAGE_SIZE/2 < zsize < PAGE_SIZE.

Hope that helps!

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCHv5 2/8] zsmalloc: add documentation
  2013-02-21 15:50         ` Seth Jennings
  2013-02-21 16:20           ` Dan Magenheimer
@ 2013-02-22  2:56           ` Ric Mason
  2013-02-22 21:02             ` Seth Jennings
  2013-02-22  2:59           ` Ric Mason
  2 siblings, 1 reply; 38+ messages in thread
From: Ric Mason @ 2013-02-22  2:56 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches,
	linux-mm, linux-kernel, devel

On 02/21/2013 11:50 PM, Seth Jennings wrote:
> On 02/21/2013 02:49 AM, Ric Mason wrote:
>> On 02/19/2013 03:16 AM, Seth Jennings wrote:
>>> On 02/16/2013 12:21 AM, Ric Mason wrote:
>>>> On 02/14/2013 02:38 AM, Seth Jennings wrote:
>>>>> This patch adds a documentation file for zsmalloc at
>>>>> Documentation/vm/zsmalloc.txt
>>>>>
>>>>> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
>>>>> ---
>>>>>     Documentation/vm/zsmalloc.txt |   68
>>>>> +++++++++++++++++++++++++++++++++++++++++
>>>>>     1 file changed, 68 insertions(+)
>>>>>     create mode 100644 Documentation/vm/zsmalloc.txt
>>>>>
>>>>> diff --git a/Documentation/vm/zsmalloc.txt
>>>>> b/Documentation/vm/zsmalloc.txt
>>>>> new file mode 100644
>>>>> index 0000000..85aa617
>>>>> --- /dev/null
>>>>> +++ b/Documentation/vm/zsmalloc.txt
>>>>> @@ -0,0 +1,68 @@
>>>>> +zsmalloc Memory Allocator
>>>>> +
>>>>> +Overview
>>>>> +
>>>>> +zmalloc a new slab-based memory allocator,
>>>>> +zsmalloc, for storing compressed pages.  It is designed for
>>>>> +low fragmentation and high allocation success rate on
>>>>> +large object, but <= PAGE_SIZE allocations.
>>>>> +
>>>>> +zsmalloc differs from the kernel slab allocator in two primary
>>>>> +ways to achieve these design goals.
>>>>> +
>>>>> +zsmalloc never requires high order page allocations to back
>>>>> +slabs, or "size classes" in zsmalloc terms. Instead it allows
>>>>> +multiple single-order pages to be stitched together into a
>>>>> +"zspage" which backs the slab.  This allows for higher allocation
>>>>> +success rate under memory pressure.
>>>>> +
>>>>> +Also, zsmalloc allows objects to span page boundaries within the
>>>>> +zspage.  This allows for lower fragmentation than could be had
>>>>> +with the kernel slab allocator for objects between PAGE_SIZE/2
>>>>> +and PAGE_SIZE.  With the kernel slab allocator, if a page compresses
>>>>> +to 60% of it original size, the memory savings gained through
>>>>> +compression is lost in fragmentation because another object of
>>>>> +the same size can't be stored in the leftover space.
>>>>> +
>>>>> +This ability to span pages results in zsmalloc allocations not being
>>>>> +directly addressable by the user.  The user is given an
>>>>> +non-dereferencable handle in response to an allocation request.
>>>>> +That handle must be mapped, using zs_map_object(), which returns
>>>>> +a pointer to the mapped region that can be used.  The mapping is
>>>>> +necessary since the object data may reside in two different
>>>>> +noncontigious pages.
>>>> Do you mean that the reason a zsmalloc object must be mapped after
>>>> allocation is that the object data may reside in two noncontiguous pages?
>>> Yes, that is one reason for the mapping.  The other reason (more of an
>>> added bonus) is below.
>>>
>>>>> +
>>>>> +For 32-bit systems, zsmalloc has the added benefit of being
>>>>> +able to back slabs with HIGHMEM pages, something not possible
>>>> What's the meaning of "back slabs with HIGHMEM pages"?
>>> By HIGHMEM, I'm referring to the HIGHMEM memory zone on 32-bit systems
>>> with larger than 1GB (actually a little less) of RAM.  The upper 3GB
>>> of the 4GB address space, depending on kernel build options, is not
>>> directly addressable by the kernel, but can be mapped into the kernel
>>> address space with functions like kmap() or kmap_atomic().
>>>
>>> These pages can't be used by slab/slub because they are not
>>> continuously mapped into the kernel address space.  However, since
>>> zsmalloc requires a mapping anyway to handle objects that span
>>> non-contiguous page boundaries, we do the kernel mapping as part of
>>> the process.
>>>
>>> So zspages, the conceptual slab in zsmalloc backed by single-order
>>> pages can include pages from the HIGHMEM zone as well.
>> Thanks for the clarification.
>>   http://lwn.net/Articles/537422/, your article about zswap in lwn.
>>   "Additionally, the kernel slab allocator does not allow objects that
>> are less
>> than a page in size to span a page boundary. This means that if an
>> object is
>> PAGE_SIZE/2 + 1 bytes in size, it effectively uses an entire page,
>> resulting in
>> ~50% waste. Hence there are *no kmalloc() cache sizes* between
>> PAGE_SIZE/2 and
>> PAGE_SIZE."
>> Are you sure? It seems that kmalloc caches support big sizes; you can
>> check include/linux/kmalloc_sizes.h
> Yes, kmalloc can allocate large objects > PAGE_SIZE, but there are no
> cache sizes _between_ PAGE_SIZE/2 and PAGE_SIZE.  For example, on a
> system with 4k pages, there are no caches between kmalloc-2048 and
> kmalloc-4096.

kmalloc objects > PAGE_SIZE/2 or > PAGE_SIZE are also allocated from a 
slab cache, correct? Then how can an object be allocated when no slab 
cache holds objects of that size?

>
> Seth
>
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>



* Re: [PATCHv5 2/8] zsmalloc: add documentation
  2013-02-21 15:50         ` Seth Jennings
  2013-02-21 16:20           ` Dan Magenheimer
  2013-02-22  2:56           ` Ric Mason
@ 2013-02-22  2:59           ` Ric Mason
  2 siblings, 0 replies; 38+ messages in thread
From: Ric Mason @ 2013-02-22  2:59 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches,
	linux-mm, linux-kernel, devel

On 02/21/2013 11:50 PM, Seth Jennings wrote:
> On 02/21/2013 02:49 AM, Ric Mason wrote:
>> On 02/19/2013 03:16 AM, Seth Jennings wrote:
>>> On 02/16/2013 12:21 AM, Ric Mason wrote:
>>>> On 02/14/2013 02:38 AM, Seth Jennings wrote:
>>>>> This patch adds a documentation file for zsmalloc at
>>>>> Documentation/vm/zsmalloc.txt
>>>>>
>>>>> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
>>>>> ---
>>>>>     Documentation/vm/zsmalloc.txt |   68
>>>>> +++++++++++++++++++++++++++++++++++++++++
>>>>>     1 file changed, 68 insertions(+)
>>>>>     create mode 100644 Documentation/vm/zsmalloc.txt
>>>>>
>>>>> diff --git a/Documentation/vm/zsmalloc.txt
>>>>> b/Documentation/vm/zsmalloc.txt
>>>>> new file mode 100644
>>>>> index 0000000..85aa617
>>>>> --- /dev/null
>>>>> +++ b/Documentation/vm/zsmalloc.txt
>>>>> @@ -0,0 +1,68 @@
>>>>> +zsmalloc Memory Allocator
>>>>> +
>>>>> +Overview
>>>>> +
>>>>> +zmalloc a new slab-based memory allocator,
>>>>> +zsmalloc, for storing compressed pages.  It is designed for
>>>>> +low fragmentation and high allocation success rate on
>>>>> +large object, but <= PAGE_SIZE allocations.
>>>>> +
>>>>> +zsmalloc differs from the kernel slab allocator in two primary
>>>>> +ways to achieve these design goals.
>>>>> +
>>>>> +zsmalloc never requires high order page allocations to back
>>>>> +slabs, or "size classes" in zsmalloc terms. Instead it allows
>>>>> +multiple single-order pages to be stitched together into a
>>>>> +"zspage" which backs the slab.  This allows for higher allocation
>>>>> +success rate under memory pressure.
>>>>> +
>>>>> +Also, zsmalloc allows objects to span page boundaries within the
>>>>> +zspage.  This allows for lower fragmentation than could be had
>>>>> +with the kernel slab allocator for objects between PAGE_SIZE/2
>>>>> +and PAGE_SIZE.  With the kernel slab allocator, if a page compresses
>>>>> +to 60% of it original size, the memory savings gained through
>>>>> +compression is lost in fragmentation because another object of
>>>>> +the same size can't be stored in the leftover space.
>>>>> +
>>>>> +This ability to span pages results in zsmalloc allocations not being
>>>>> +directly addressable by the user.  The user is given an
>>>>> +non-dereferencable handle in response to an allocation request.
>>>>> +That handle must be mapped, using zs_map_object(), which returns
>>>>> +a pointer to the mapped region that can be used.  The mapping is
>>>>> +necessary since the object data may reside in two different
>>>>> +noncontigious pages.
>>>> Do you mean that the reason a zsmalloc object must be mapped after
>>>> allocation is that the object data may reside in two noncontiguous pages?
>>> Yes, that is one reason for the mapping.  The other reason (more of an
>>> added bonus) is below.
>>>
>>>>> +
>>>>> +For 32-bit systems, zsmalloc has the added benefit of being
>>>>> +able to back slabs with HIGHMEM pages, something not possible
>>>> What's the meaning of "back slabs with HIGHMEM pages"?
>>> By HIGHMEM, I'm referring to the HIGHMEM memory zone on 32-bit systems
>>> with larger than 1GB (actually a little less) of RAM.  The upper 3GB
>>> of the 4GB address space, depending on kernel build options, is not
>>> directly addressable by the kernel, but can be mapped into the kernel
>>> address space with functions like kmap() or kmap_atomic().
>>>
>>> These pages can't be used by slab/slub because they are not
>>> continuously mapped into the kernel address space.  However, since
>>> zsmalloc requires a mapping anyway to handle objects that span
>>> non-contiguous page boundaries, we do the kernel mapping as part of
>>> the process.
>>>
>>> So zspages, the conceptual slab in zsmalloc backed by single-order
>>> pages can include pages from the HIGHMEM zone as well.
>> Thanks for the clarification.
>>   http://lwn.net/Articles/537422/, your article about zswap in lwn.
>>   "Additionally, the kernel slab allocator does not allow objects that
>> are less
>> than a page in size to span a page boundary. This means that if an
>> object is
>> PAGE_SIZE/2 + 1 bytes in size, it effectively uses an entire page,
>> resulting in
>> ~50% waste. Hence there are *no kmalloc() cache sizes* between
>> PAGE_SIZE/2 and
>> PAGE_SIZE."
>> Are you sure? It seems that kmalloc caches support big sizes; you can
>> check include/linux/kmalloc_sizes.h
> Yes, kmalloc can allocate large objects > PAGE_SIZE, but there are no
> cache sizes _between_ PAGE_SIZE/2 and PAGE_SIZE.  For example, on a
> system with 4k pages, there are no caches between kmalloc-2048 and
> kmalloc-4096.

Since slub caches can be merged, is that the root reason?

>
> Seth
>



* Re: [PATCHv5 1/8] zsmalloc: add to mm/
  2013-02-19 23:37       ` Minchan Kim
@ 2013-02-22  9:24         ` Joonsoo Kim
  2013-02-22 20:04           ` Seth Jennings
  0 siblings, 1 reply; 38+ messages in thread
From: Joonsoo Kim @ 2013-02-22  9:24 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Seth Jennings, Nitin Gupta, Andrew Morton, Greg Kroah-Hartman,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches,
	linux-mm, linux-kernel, devel

On Wed, Feb 20, 2013 at 08:37:33AM +0900, Minchan Kim wrote:
> On Tue, Feb 19, 2013 at 11:54:21AM -0600, Seth Jennings wrote:
> > On 02/19/2013 03:18 AM, Joonsoo Kim wrote:
> > > Hello, Seth.
> > > I'm not sure this is the right time to review, because I have already
> > > seen much effort by various people to promote the zxxx series. I don't
> > > want to be a blocker for that. :)
> > 
> > Any time is good review time :)  Thanks for your review!
> > 
> > > 
> > > But, I read the code, now, and then some comments below.
> > > 
> > > On Wed, Feb 13, 2013 at 12:38:44PM -0600, Seth Jennings wrote:
> > >> =========
> > >> DO NOT MERGE, FOR REVIEW ONLY
> > >> This patch introduces zsmalloc as new code, however, it already
> > >> exists in drivers/staging.  In order to build successfully, you
> > >> must select EITHER to driver/staging version OR this version.
> > >> Once zsmalloc is reviewed in this format (and hopefully accepted),
> > >> I will create a new patchset that properly promotes zsmalloc from
> > >> staging.
> > >> =========
> > >>
> > >> This patchset introduces a new slab-based memory allocator,
> > >> zsmalloc, for storing compressed pages.  It is designed for
> > >> low fragmentation and high allocation success rate on
> > >> large object, but <= PAGE_SIZE allocations.
> > >>
> > >> zsmalloc differs from the kernel slab allocator in two primary
> > >> ways to achieve these design goals.
> > >>
> > >> zsmalloc never requires high order page allocations to back
> > >> slabs, or "size classes" in zsmalloc terms. Instead it allows
> > >> multiple single-order pages to be stitched together into a
> > >> "zspage" which backs the slab.  This allows for higher allocation
> > >> success rate under memory pressure.
> > >>
> > >> Also, zsmalloc allows objects to span page boundaries within the
> > >> zspage.  This allows for lower fragmentation than could be had
> > >> with the kernel slab allocator for objects between PAGE_SIZE/2
> > >> and PAGE_SIZE.  With the kernel slab allocator, if a page compresses
> > >> to 60% of it original size, the memory savings gained through
> > >> compression is lost in fragmentation because another object of
> > >> the same size can't be stored in the leftover space.
> > >>
> > >> This ability to span pages results in zsmalloc allocations not being
> > >> directly addressable by the user.  The user is given an
> > >> non-dereferencable handle in response to an allocation request.
> > >> That handle must be mapped, using zs_map_object(), which returns
> > >> a pointer to the mapped region that can be used.  The mapping is
> > >> necessary since the object data may reside in two different
> > >> noncontigious pages.
> > >>
> > >> zsmalloc fulfills the allocation needs for zram and zswap.
> > >>
> > >> Acked-by: Nitin Gupta <ngupta@vflare.org>
> > >> Acked-by: Minchan Kim <minchan@kernel.org>
> > >> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
> > >> ---
> > >>  include/linux/zsmalloc.h |   49 ++
> > >>  mm/Kconfig               |   24 +
> > >>  mm/Makefile              |    1 +
> > >>  mm/zsmalloc.c            | 1124 ++++++++++++++++++++++++++++++++++++++++++++++
> > >>  4 files changed, 1198 insertions(+)
> > >>  create mode 100644 include/linux/zsmalloc.h
> > >>  create mode 100644 mm/zsmalloc.c
> > >>
> > >> diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
> > >> new file mode 100644
> > >> index 0000000..eb6efb6
> > >> --- /dev/null
> > >> +++ b/include/linux/zsmalloc.h
> > >> @@ -0,0 +1,49 @@
> > >> +/*
> > >> + * zsmalloc memory allocator
> > >> + *
> > >> + * Copyright (C) 2011  Nitin Gupta
> > >> + *
> > >> + * This code is released using a dual license strategy: BSD/GPL
> > >> + * You can choose the license that better fits your requirements.
> > >> + *
> > >> + * Released under the terms of 3-clause BSD License
> > >> + * Released under the terms of GNU General Public License Version 2.0
> > >> + */
> > >> +
> > >> +#ifndef _ZS_MALLOC_H_
> > >> +#define _ZS_MALLOC_H_
> > >> +
> > >> +#include <linux/types.h>
> > >> +#include <linux/mm_types.h>
> > >> +
> > >> +/*
> > >> + * zsmalloc mapping modes
> > >> + *
> > >> + * NOTE: These only make a difference when a mapped object spans pages
> > >> +*/
> > >> +enum zs_mapmode {
> > >> +	ZS_MM_RW, /* normal read-write mapping */
> > >> +	ZS_MM_RO, /* read-only (no copy-out at unmap time) */
> > >> +	ZS_MM_WO /* write-only (no copy-in at map time) */
> > >> +};
> > > 
> > > 
> > > These makes no difference for PGTABLE_MAPPING.
> > > Please add some comment for this.
> > 
> > Yes. Will do.
> > 
> > > 
> > >> +struct zs_ops {
> > >> +	struct page * (*alloc)(gfp_t);
> > >> +	void (*free)(struct page *);
> > >> +};
> > >> +
> > >> +struct zs_pool;
> > >> +
> > >> +struct zs_pool *zs_create_pool(gfp_t flags, struct zs_ops *ops);
> > >> +void zs_destroy_pool(struct zs_pool *pool);
> > >> +
> > >> +unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t flags);
> > >> +void zs_free(struct zs_pool *pool, unsigned long obj);
> > >> +
> > >> +void *zs_map_object(struct zs_pool *pool, unsigned long handle,
> > >> +			enum zs_mapmode mm);
> > >> +void zs_unmap_object(struct zs_pool *pool, unsigned long handle);
> > >> +
> > >> +u64 zs_get_total_size_bytes(struct zs_pool *pool);
> > >> +
> > >> +#endif
> > >> diff --git a/mm/Kconfig b/mm/Kconfig
> > >> index 278e3ab..25b8f38 100644
> > >> --- a/mm/Kconfig
> > >> +++ b/mm/Kconfig
> > >> @@ -446,3 +446,27 @@ config FRONTSWAP
> > >>  	  and swap data is stored as normal on the matching swap device.
> > >>  
> > >>  	  If unsure, say Y to enable frontswap.
> > >> +
> > >> +config ZSMALLOC
> > >> +	tristate "Memory allocator for compressed pages"
> > >> +	default n
> > >> +	help
> > >> +	  zsmalloc is a slab-based memory allocator designed to store
> > >> +	  compressed RAM pages.  zsmalloc uses virtual memory mapping
> > >> +	  in order to reduce fragmentation.  However, this results in a
> > >> +	  non-standard allocator interface where a handle, not a pointer, is
> > >> +	  returned by an alloc().  This handle must be mapped in order to
> > >> +	  access the allocated space.
> > >> +
> > >> +config PGTABLE_MAPPING
> > >> +	bool "Use page table mapping to access object in zsmalloc"
> > >> +	depends on ZSMALLOC
> > >> +	help
> > >> +	  By default, zsmalloc uses a copy-based object mapping method to
> > >> +	  access allocations that span two pages. However, if a particular
> > >> +	  architecture (ex, ARM) performs VM mapping faster than copying,
> > >> +	  then you should select this. This causes zsmalloc to use page table
> > >> +	  mapping rather than copying for object mapping.
> > >> +
> > >> +	  You can check speed with zsmalloc benchmark[1].
> > >> +	  [1] https://github.com/spartacus06/zsmalloc
> > >> diff --git a/mm/Makefile b/mm/Makefile
> > >> index 3a46287..0f6ef0a 100644
> > >> --- a/mm/Makefile
> > >> +++ b/mm/Makefile
> > >> @@ -58,3 +58,4 @@ obj-$(CONFIG_DEBUG_KMEMLEAK) += kmemleak.o
> > >>  obj-$(CONFIG_DEBUG_KMEMLEAK_TEST) += kmemleak-test.o
> > >>  obj-$(CONFIG_CLEANCACHE) += cleancache.o
> > >>  obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
> > >> +obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
> > >> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > >> new file mode 100644
> > >> index 0000000..34378ef
> > >> --- /dev/null
> > >> +++ b/mm/zsmalloc.c
> > >> @@ -0,0 +1,1124 @@
> > >> +/*
> > >> + * zsmalloc memory allocator
> > >> + *
> > >> + * Copyright (C) 2011  Nitin Gupta
> > >> + *
> > >> + * This code is released using a dual license strategy: BSD/GPL
> > >> + * You can choose the license that better fits your requirements.
> > >> + *
> > >> + * Released under the terms of 3-clause BSD License
> > >> + * Released under the terms of GNU General Public License Version 2.0
> > >> + */
> > >> +
> > >> +
> > >> +/*
> > >> + * This allocator is designed for use with zcache and zram. Thus, the
> > >> + * allocator is supposed to work well under low memory conditions. In
> > >> + * particular, it never attempts higher order page allocation which is
> > >> + * very likely to fail under memory pressure. On the other hand, if we
> > >> + * just use single (0-order) pages, it would suffer from very high
> > >> + * fragmentation -- any object of size PAGE_SIZE/2 or larger would occupy
> > >> + * an entire page. This was one of the major issues with its predecessor
> > >> + * (xvmalloc).
> > >> + *
> > >> + * To overcome these issues, zsmalloc allocates a bunch of 0-order pages
> > >> + * and links them together using various 'struct page' fields. These linked
> > >> + * pages act as a single higher-order page i.e. an object can span 0-order
> > >> + * page boundaries. The code refers to these linked pages as a single entity
> > >> + * called zspage.
> > >> + *
> > >> + * For simplicity, zsmalloc can only allocate objects of size up to PAGE_SIZE
> > >> + * since this satisfies the requirements of all its current users (in the
> > >> + * worst case, page is incompressible and is thus stored "as-is" i.e. in
> > >> + * uncompressed form). For allocation requests larger than this size, failure
> > >> + * is returned (see zs_malloc).
> > >> + *
> > >> + * Additionally, zs_malloc() does not return a dereferenceable pointer.
> > >> + * Instead, it returns an opaque handle (unsigned long) which encodes actual
> > >> + * location of the allocated object. The reason for this indirection is that
> > >> + * zsmalloc does not keep zspages permanently mapped since that would cause
> > >> + * issues on 32-bit systems where the VA region for kernel space mappings
> > >> + * is very small. So, before using the allocating memory, the object has to
> > >> + * be mapped using zs_map_object() to get a usable pointer and subsequently
> > >> + * unmapped using zs_unmap_object().
> > >> + *
> > >> + * Following is how we use various fields and flags of underlying
> > >> + * struct page(s) to form a zspage.
> > >> + *
> > >> + * Usage of struct page fields:
> > >> + *	page->first_page: points to the first component (0-order) page
> > >> + *	page->index (union with page->freelist): offset of the first object
> > >> + *		starting in this page. For the first page, this is
> > >> + *		always 0, so we use this field (aka freelist) to point
> > >> + *		to the first free object in zspage.
> > >> + *	page->lru: links together all component pages (except the first page)
> > >> + *		of a zspage
> > >> + *
> > >> + *	For _first_ page only:
> > >> + *
> > >> + *	page->private (union with page->first_page): refers to the
> > >> + *		component page after the first page
> > >> + *	page->freelist: points to the first free object in zspage.
> > >> + *		Free objects are linked together using in-place
> > >> + *		metadata.
> > >> + *	page->objects: maximum number of objects we can store in this
> > >> + *		zspage (class->zspage_order * PAGE_SIZE / class->size)
> > > 
> > > How about just embedding maximum number of objects to size_class?
> > > For SLUB, each slab can have a different number of objects.
> > > But, for the zsmalloc, it is not possible, so there is no reason
> > > to maintain it within metadata of zspage. Just to embed it to size_class
> > > is sufficient.
> > 
> > Yes, a little code massaging and this can go away.
> > 
> > However, there might be some value in having variable sized zspages in
> > the same size_class.  It could improve allocation success rate at the
> > expense of efficiency by not failing in alloc_zspage() if we can't
> > allocate the optimal number of pages.  As long as we can allocate the
> > first page, then we can proceed.
> > 
> > Nitin care to weigh in?
> 
> Sorry, I'm not Nitin.
> IMHO, Seth's idea is good but, at the moment, it's just an idea.
> We can add it easily in the future with some experimental results.
> So I vote for Joonsoo's suggestion.
> 
> > 
> > > 
> > > 
> > >> + *	page->lru: links together first pages of various zspages.
> > >> + *		Basically forming list of zspages in a fullness group.
> > >> + *	page->mapping: class index and fullness group of the zspage
> > >> + *
> > >> + * Usage of struct page flags:
> > >> + *	PG_private: identifies the first component page
> > >> + *	PG_private2: identifies the last component page
> > >> + *
> > >> + */
> > >> +
> > >> +#ifdef CONFIG_ZSMALLOC_DEBUG
> > >> +#define DEBUG
> > >> +#endif
> > > 
> > > Is this obsolete?
> > 
> > Yes, I'll remove it.
> > 
> > > 
> > >> +#include <linux/module.h>
> > >> +#include <linux/kernel.h>
> > >> +#include <linux/bitops.h>
> > >> +#include <linux/errno.h>
> > >> +#include <linux/highmem.h>
> > >> +#include <linux/init.h>
> > >> +#include <linux/string.h>
> > >> +#include <linux/slab.h>
> > >> +#include <asm/tlbflush.h>
> > >> +#include <asm/pgtable.h>
> > >> +#include <linux/cpumask.h>
> > >> +#include <linux/cpu.h>
> > >> +#include <linux/vmalloc.h>
> > >> +#include <linux/hardirq.h>
> > >> +#include <linux/spinlock.h>
> > >> +#include <linux/types.h>
> > >> +
> > >> +#include <linux/zsmalloc.h>
> > >> +
> > >> +/*
> > >> + * This must be power of 2 and greater than of equal to sizeof(link_free).
> > >> + * These two conditions ensure that any 'struct link_free' itself doesn't
> > >> + * span more than 1 page which avoids complex case of mapping 2 pages simply
> > >> + * to restore link_free pointer values.
> > >> + */
> > >> +#define ZS_ALIGN		8
> > >> +
> > >> +/*
> > >> + * A single 'zspage' is composed of up to 2^N discontiguous 0-order (single)
> > >> + * pages. ZS_MAX_ZSPAGE_ORDER defines upper limit on N.
> > >> + */
> > >> +#define ZS_MAX_ZSPAGE_ORDER 2
> > >> +#define ZS_MAX_PAGES_PER_ZSPAGE (_AC(1, UL) << ZS_MAX_ZSPAGE_ORDER)
> > >> +
> > >> +/*
> > >> + * Object location (<PFN>, <obj_idx>) is encoded as
> > >> + * as single (unsigned long) handle value.
> > >> + *
> > >> + * Note that object index <obj_idx> is relative to system
> > >> + * page <PFN> it is stored in, so for each sub-page belonging
> > >> + * to a zspage, obj_idx starts with 0.
> > >> + *
> > >> + * This is made more complicated by various memory models and PAE.
> > >> + */
> > >> +
> > >> +#ifndef MAX_PHYSMEM_BITS
> > >> +#ifdef CONFIG_HIGHMEM64G
> > >> +#define MAX_PHYSMEM_BITS 36
> > >> +#else /* !CONFIG_HIGHMEM64G */
> > >> +/*
> > >> + * If this definition of MAX_PHYSMEM_BITS is used, OBJ_INDEX_BITS will just
> > >> + * be PAGE_SHIFT
> > >> + */
> > >> +#define MAX_PHYSMEM_BITS BITS_PER_LONG
> > >> +#endif
> > >> +#endif
> > >> +#define _PFN_BITS		(MAX_PHYSMEM_BITS - PAGE_SHIFT)
> > >> +#define OBJ_INDEX_BITS	(BITS_PER_LONG - _PFN_BITS)
> > >> +#define OBJ_INDEX_MASK	((_AC(1, UL) << OBJ_INDEX_BITS) - 1)
> > >> +
> > >> +#define MAX(a, b) ((a) >= (b) ? (a) : (b))
> > >> +/* ZS_MIN_ALLOC_SIZE must be multiple of ZS_ALIGN */
> > >> +#define ZS_MIN_ALLOC_SIZE \
> > >> +	MAX(32, (ZS_MAX_PAGES_PER_ZSPAGE << PAGE_SHIFT >> OBJ_INDEX_BITS))
> > >> +#define ZS_MAX_ALLOC_SIZE	PAGE_SIZE
> > >> +
> > >> +/*
> > >> + * On systems with 4K page size, this gives 254 size classes! There is a
> > >> + * trader-off here:
> > >> + *  - Large number of size classes is potentially wasteful as free page are
> > >> + *    spread across these classes
> > >> + *  - Small number of size classes causes large internal fragmentation
> > >> + *  - Probably its better to use specific size classes (empirically
> > >> + *    determined). NOTE: all those class sizes must be set as multiple of
> > >> + *    ZS_ALIGN to make sure link_free itself never has to span 2 pages.
> > >> + *
> > >> + *  ZS_MIN_ALLOC_SIZE and ZS_SIZE_CLASS_DELTA must be multiple of ZS_ALIGN
> > >> + *  (reason above)
> > >> + */
> > >> +#define ZS_SIZE_CLASS_DELTA	(PAGE_SIZE >> 8)
> > >> +#define ZS_SIZE_CLASSES		((ZS_MAX_ALLOC_SIZE - ZS_MIN_ALLOC_SIZE) / \
> > >> +					ZS_SIZE_CLASS_DELTA + 1)
> > >> +
> > >> +/*
> > >> + * We do not maintain any list for completely empty or full pages
> > >> + */
> > >> +enum fullness_group {
> > >> +	ZS_ALMOST_FULL,
> > >> +	ZS_ALMOST_EMPTY,
> > >> +	_ZS_NR_FULLNESS_GROUPS,
> > >> +
> > >> +	ZS_EMPTY,
> > >> +	ZS_FULL
> > >> +};
> > >> +
> > >> +/*
> > >> + * We assign a page to ZS_ALMOST_EMPTY fullness group when:
> > >> + *	n <= N / f, where
> > >> + * n = number of allocated objects
> > >> + * N = total number of objects zspage can store
> > >> + * f = 1/fullness_threshold_frac
> > >> + *
> > >> + * Similarly, we assign zspage to:
> > >> + *	ZS_ALMOST_FULL	when n > N / f
> > >> + *	ZS_EMPTY	when n == 0
> > >> + *	ZS_FULL		when n == N
> > >> + *
> > >> + * (see: fix_fullness_group())
> > >> + */
> > >> +static const int fullness_threshold_frac = 4;
> > >> +
> > >> +struct size_class {
> > >> +	/*
> > >> +	 * Size of objects stored in this class. Must be multiple
> > >> +	 * of ZS_ALIGN.
> > >> +	 */
> > >> +	int size;
> > >> +	unsigned int index;
> > >> +
> > >> +	/* Number of PAGE_SIZE sized pages to combine to form a 'zspage' */
> > >> +	int pages_per_zspage;
> > >> +
> > >> +	spinlock_t lock;
> > >> +
> > >> +	/* stats */
> > >> +	u64 pages_allocated;
> > >> +
> > >> +	struct page *fullness_list[_ZS_NR_FULLNESS_GROUPS];
> > >> +};
> > > 
> > > Instead of simple pointer, how about using list_head?
> > > With this, fullness_list management is easily consolidated to
> > > set_zspage_mapping() and we can remove remove_zspage(), insert_zspage().
> > 
> > Makes sense to me.  Nitin what do you think?
> 
> I like it although I'm not Nitin.
> 
> > 
> > > And how about maintaining FULL, EMPTY list?
> > > There is not much memory waste and it can be used for debugging and
> > > implementing other functionality.
> 
> Joonsoo, could you elaborate on ideas you have about debugging and
> other functions you mentioned?
> We need justification for the change rather than saying it
> "might be useful in the future". Then we can judge whether we should do
> it right now or can add it in the future when we really need it.

It was just a quick thought, so I don't have a concrete idea yet.
As Seth said, with a FULL list, zsmalloc can always reach every zspage.
So, if we want to know which pages belong to zsmalloc, we can find out.
The EMPTY list could be used as a pool for zsmalloc itself. With it, we
wouldn't need to free a zspage right away; we could keep zspages around
and reduce alloc/free overhead. But I'm not sure whether that is useful.

> 
> -- 
> Kind regards,
> Minchan Kim
> 
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: dont@kvack.org

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCHv5 1/8] zsmalloc: add to mm/
  2013-02-22  9:24         ` Joonsoo Kim
@ 2013-02-22 20:04           ` Seth Jennings
  2013-02-25 17:05             ` Dan Magenheimer
  0 siblings, 1 reply; 38+ messages in thread
From: Seth Jennings @ 2013-02-22 20:04 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: Minchan Kim, Nitin Gupta, Andrew Morton, Greg Kroah-Hartman,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches,
	linux-mm, linux-kernel, devel

On 02/22/2013 03:24 AM, Joonsoo Kim wrote:
> On Wed, Feb 20, 2013 at 08:37:33AM +0900, Minchan Kim wrote:
>> On Tue, Feb 19, 2013 at 11:54:21AM -0600, Seth Jennings wrote:
>>> On 02/19/2013 03:18 AM, Joonsoo Kim wrote:
>>>> Hello, Seth.
>>>> I'm not sure this is the right time to review, because I have already
>>>> seen much effort by various people to promote the zxxx series. I don't
>>>> want to hold these up. :)
>>>
>>> Any time is good review time :)  Thanks for your review!
>>>
>>>>
>>>> But, I read the code, now, and then some comments below.
>>>>
>>>> On Wed, Feb 13, 2013 at 12:38:44PM -0600, Seth Jennings wrote:
>>>>> =========
>>>>> DO NOT MERGE, FOR REVIEW ONLY
>>>>> This patch introduces zsmalloc as new code, however, it already
>>>>> exists in drivers/staging.  In order to build successfully, you
>>>>> must select EITHER to driver/staging version OR this version.
>>>>> Once zsmalloc is reviewed in this format (and hopefully accepted),
>>>>> I will create a new patchset that properly promotes zsmalloc from
>>>>> staging.
>>>>> =========
>>>>>
>>>>> This patchset introduces a new slab-based memory allocator,
>>>>> zsmalloc, for storing compressed pages.  It is designed for
>>>>> low fragmentation and high allocation success rate on
>>>>> large-object (but <= PAGE_SIZE) allocations.
>>>>>
>>>>> zsmalloc differs from the kernel slab allocator in two primary
>>>>> ways to achieve these design goals.
>>>>>
>>>>> zsmalloc never requires high order page allocations to back
>>>>> slabs, or "size classes" in zsmalloc terms. Instead it allows
>>>>> multiple single-order pages to be stitched together into a
>>>>> "zspage" which backs the slab.  This allows for higher allocation
>>>>> success rate under memory pressure.
>>>>>
>>>>> Also, zsmalloc allows objects to span page boundaries within the
>>>>> zspage.  This allows for lower fragmentation than could be had
>>>>> with the kernel slab allocator for objects between PAGE_SIZE/2
>>>>> and PAGE_SIZE.  With the kernel slab allocator, if a page compresses
>>>>> to 60% of its original size, the memory savings gained through
>>>>> compression are lost to fragmentation because another object of
>>>>> the same size can't be stored in the leftover space.
>>>>>
>>>>> This ability to span pages results in zsmalloc allocations not being
>>>>> directly addressable by the user.  The user is given a
>>>>> non-dereferenceable handle in response to an allocation request.
>>>>> That handle must be mapped, using zs_map_object(), which returns
>>>>> a pointer to the mapped region that can be used.  The mapping is
>>>>> necessary since the object data may reside in two different
>>>>> noncontiguous pages.
>>>>>
>>>>> zsmalloc fulfills the allocation needs for zram and zswap.
>>>>>
>>>>> Acked-by: Nitin Gupta <ngupta@vflare.org>
>>>>> Acked-by: Minchan Kim <minchan@kernel.org>
>>>>> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
>>>>> ---
>>>>>  include/linux/zsmalloc.h |   49 ++
>>>>>  mm/Kconfig               |   24 +
>>>>>  mm/Makefile              |    1 +
>>>>>  mm/zsmalloc.c            | 1124 ++++++++++++++++++++++++++++++++++++++++++++++
>>>>>  4 files changed, 1198 insertions(+)
>>>>>  create mode 100644 include/linux/zsmalloc.h
>>>>>  create mode 100644 mm/zsmalloc.c
>>>>>
>>>>> diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
>>>>> new file mode 100644
>>>>> index 0000000..eb6efb6
>>>>> --- /dev/null
>>>>> +++ b/include/linux/zsmalloc.h
>>>>> @@ -0,0 +1,49 @@
>>>>> +/*
>>>>> + * zsmalloc memory allocator
>>>>> + *
>>>>> + * Copyright (C) 2011  Nitin Gupta
>>>>> + *
>>>>> + * This code is released using a dual license strategy: BSD/GPL
>>>>> + * You can choose the license that better fits your requirements.
>>>>> + *
>>>>> + * Released under the terms of 3-clause BSD License
>>>>> + * Released under the terms of GNU General Public License Version 2.0
>>>>> + */
>>>>> +
>>>>> +#ifndef _ZS_MALLOC_H_
>>>>> +#define _ZS_MALLOC_H_
>>>>> +
>>>>> +#include <linux/types.h>
>>>>> +#include <linux/mm_types.h>
>>>>> +
>>>>> +/*
>>>>> + * zsmalloc mapping modes
>>>>> + *
>>>>> + * NOTE: These only make a difference when a mapped object spans pages
>>>>> +*/
>>>>> +enum zs_mapmode {
>>>>> +	ZS_MM_RW, /* normal read-write mapping */
>>>>> +	ZS_MM_RO, /* read-only (no copy-out at unmap time) */
>>>>> +	ZS_MM_WO /* write-only (no copy-in at map time) */
>>>>> +};
>>>>
>>>>
>>>> These make no difference for PGTABLE_MAPPING.
>>>> Please add some comment for this.
>>>
>>> Yes. Will do.
>>>
>>>>
>>>>> +struct zs_ops {
>>>>> +	struct page * (*alloc)(gfp_t);
>>>>> +	void (*free)(struct page *);
>>>>> +};
>>>>> +
>>>>> +struct zs_pool;
>>>>> +
>>>>> +struct zs_pool *zs_create_pool(gfp_t flags, struct zs_ops *ops);
>>>>> +void zs_destroy_pool(struct zs_pool *pool);
>>>>> +
>>>>> +unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t flags);
>>>>> +void zs_free(struct zs_pool *pool, unsigned long obj);
>>>>> +
>>>>> +void *zs_map_object(struct zs_pool *pool, unsigned long handle,
>>>>> +			enum zs_mapmode mm);
>>>>> +void zs_unmap_object(struct zs_pool *pool, unsigned long handle);
>>>>> +
>>>>> +u64 zs_get_total_size_bytes(struct zs_pool *pool);
>>>>> +
>>>>> +#endif
>>>>> diff --git a/mm/Kconfig b/mm/Kconfig
>>>>> index 278e3ab..25b8f38 100644
>>>>> --- a/mm/Kconfig
>>>>> +++ b/mm/Kconfig
>>>>> @@ -446,3 +446,27 @@ config FRONTSWAP
>>>>>  	  and swap data is stored as normal on the matching swap device.
>>>>>  
>>>>>  	  If unsure, say Y to enable frontswap.
>>>>> +
>>>>> +config ZSMALLOC
>>>>> +	tristate "Memory allocator for compressed pages"
>>>>> +	default n
>>>>> +	help
>>>>> +	  zsmalloc is a slab-based memory allocator designed to store
>>>>> +	  compressed RAM pages.  zsmalloc uses virtual memory mapping
>>>>> +	  in order to reduce fragmentation.  However, this results in a
>>>>> +	  non-standard allocator interface where a handle, not a pointer, is
>>>>> +	  returned by an alloc().  This handle must be mapped in order to
>>>>> +	  access the allocated space.
>>>>> +
>>>>> +config PGTABLE_MAPPING
>>>>> +	bool "Use page table mapping to access object in zsmalloc"
>>>>> +	depends on ZSMALLOC
>>>>> +	help
>>>>> +	  By default, zsmalloc uses a copy-based object mapping method to
>>>>> +	  access allocations that span two pages. However, if a particular
>>>>> +	  architecture (ex, ARM) performs VM mapping faster than copying,
>>>>> +	  then you should select this. This causes zsmalloc to use page table
>>>>> +	  mapping rather than copying for object mapping.
>>>>> +
>>>>> +	  You can check speed with zsmalloc benchmark[1].
>>>>> +	  [1] https://github.com/spartacus06/zsmalloc
>>>>> diff --git a/mm/Makefile b/mm/Makefile
>>>>> index 3a46287..0f6ef0a 100644
>>>>> --- a/mm/Makefile
>>>>> +++ b/mm/Makefile
>>>>> @@ -58,3 +58,4 @@ obj-$(CONFIG_DEBUG_KMEMLEAK) += kmemleak.o
>>>>>  obj-$(CONFIG_DEBUG_KMEMLEAK_TEST) += kmemleak-test.o
>>>>>  obj-$(CONFIG_CLEANCACHE) += cleancache.o
>>>>>  obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
>>>>> +obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
>>>>> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
>>>>> new file mode 100644
>>>>> index 0000000..34378ef
>>>>> --- /dev/null
>>>>> +++ b/mm/zsmalloc.c
>>>>> @@ -0,0 +1,1124 @@
>>>>> +/*
>>>>> + * zsmalloc memory allocator
>>>>> + *
>>>>> + * Copyright (C) 2011  Nitin Gupta
>>>>> + *
>>>>> + * This code is released using a dual license strategy: BSD/GPL
>>>>> + * You can choose the license that better fits your requirements.
>>>>> + *
>>>>> + * Released under the terms of 3-clause BSD License
>>>>> + * Released under the terms of GNU General Public License Version 2.0
>>>>> + */
>>>>> +
>>>>> +
>>>>> +/*
>>>>> + * This allocator is designed for use with zcache and zram. Thus, the
>>>>> + * allocator is supposed to work well under low memory conditions. In
>>>>> + * particular, it never attempts higher order page allocation which is
>>>>> + * very likely to fail under memory pressure. On the other hand, if we
>>>>> + * just use single (0-order) pages, it would suffer from very high
>>>>> + * fragmentation -- any object of size PAGE_SIZE/2 or larger would occupy
>>>>> + * an entire page. This was one of the major issues with its predecessor
>>>>> + * (xvmalloc).
>>>>> + *
>>>>> + * To overcome these issues, zsmalloc allocates a bunch of 0-order pages
>>>>> + * and links them together using various 'struct page' fields. These linked
>>>>> + * pages act as a single higher-order page i.e. an object can span 0-order
>>>>> + * page boundaries. The code refers to these linked pages as a single entity
>>>>> + * called zspage.
>>>>> + *
>>>>> + * For simplicity, zsmalloc can only allocate objects of size up to PAGE_SIZE
>>>>> + * since this satisfies the requirements of all its current users (in the
>>>>> + * worst case, page is incompressible and is thus stored "as-is" i.e. in
>>>>> + * uncompressed form). For allocation requests larger than this size, failure
>>>>> + * is returned (see zs_malloc).
>>>>> + *
>>>>> + * Additionally, zs_malloc() does not return a dereferenceable pointer.
>>>>> + * Instead, it returns an opaque handle (unsigned long) which encodes actual
>>>>> + * location of the allocated object. The reason for this indirection is that
>>>>> + * zsmalloc does not keep zspages permanently mapped since that would cause
>>>>> + * issues on 32-bit systems where the VA region for kernel space mappings
>>>>> + * is very small. So, before using the allocated memory, the object has to
>>>>> + * be mapped using zs_map_object() to get a usable pointer and subsequently
>>>>> + * unmapped using zs_unmap_object().
>>>>> + *
>>>>> + * Following is how we use various fields and flags of underlying
>>>>> + * struct page(s) to form a zspage.
>>>>> + *
>>>>> + * Usage of struct page fields:
>>>>> + *	page->first_page: points to the first component (0-order) page
>>>>> + *	page->index (union with page->freelist): offset of the first object
>>>>> + *		starting in this page. For the first page, this is
>>>>> + *		always 0, so we use this field (aka freelist) to point
>>>>> + *		to the first free object in zspage.
>>>>> + *	page->lru: links together all component pages (except the first page)
>>>>> + *		of a zspage
>>>>> + *
>>>>> + *	For _first_ page only:
>>>>> + *
>>>>> + *	page->private (union with page->first_page): refers to the
>>>>> + *		component page after the first page
>>>>> + *	page->freelist: points to the first free object in zspage.
>>>>> + *		Free objects are linked together using in-place
>>>>> + *		metadata.
>>>>> + *	page->objects: maximum number of objects we can store in this
>>>>> + *		zspage (class->zspage_order * PAGE_SIZE / class->size)
>>>>
>>>> How about just embedding the maximum number of objects in size_class?
>>>> For SLUB, each slab can have a different number of objects.
>>>> But for zsmalloc that is not possible, so there is no reason
>>>> to maintain it in the zspage metadata. Embedding it in size_class
>>>> is sufficient.
>>>
>>> Yes, a little code massaging and this can go away.
>>>
>>> However, there might be some value in having variable sized zspages in
>>> the same size_class.  It could improve allocation success rate at the
>>> expense of efficiency by not failing in alloc_zspage() if we can't
>>> allocate the optimal number of pages.  As long as we can allocate the
>>> first page, then we can proceed.
>>>
>>> Nitin care to weigh in?
>>
>> Sorry, I'm not Nitin.
>> IMHO, Seth's idea is good, but at the moment it's just an idea.
>> We can add it easily in the future with some experimental results.
>> So I vote for Joonsoo's suggestion.
>>
>>>
>>>>
>>>>
>>>>> + *	page->lru: links together first pages of various zspages.
>>>>> + *		Basically forming list of zspages in a fullness group.
>>>>> + *	page->mapping: class index and fullness group of the zspage
>>>>> + *
>>>>> + * Usage of struct page flags:
>>>>> + *	PG_private: identifies the first component page
>>>>> + *	PG_private2: identifies the last component page
>>>>> + *
>>>>> + */
>>>>> +
>>>>> +#ifdef CONFIG_ZSMALLOC_DEBUG
>>>>> +#define DEBUG
>>>>> +#endif
>>>>
>>>> Is this obsolete?
>>>
>>> Yes, I'll remove it.
>>>
>>>>
>>>>> +#include <linux/module.h>
>>>>> +#include <linux/kernel.h>
>>>>> +#include <linux/bitops.h>
>>>>> +#include <linux/errno.h>
>>>>> +#include <linux/highmem.h>
>>>>> +#include <linux/init.h>
>>>>> +#include <linux/string.h>
>>>>> +#include <linux/slab.h>
>>>>> +#include <asm/tlbflush.h>
>>>>> +#include <asm/pgtable.h>
>>>>> +#include <linux/cpumask.h>
>>>>> +#include <linux/cpu.h>
>>>>> +#include <linux/vmalloc.h>
>>>>> +#include <linux/hardirq.h>
>>>>> +#include <linux/spinlock.h>
>>>>> +#include <linux/types.h>
>>>>> +
>>>>> +#include <linux/zsmalloc.h>
>>>>> +
>>>>> +/*
>>>>> + * This must be a power of 2 and greater than or equal to sizeof(link_free).
>>>>> + * These two conditions ensure that any 'struct link_free' itself doesn't
>>>>> + * span more than 1 page which avoids complex case of mapping 2 pages simply
>>>>> + * to restore link_free pointer values.
>>>>> + */
>>>>> +#define ZS_ALIGN		8
>>>>> +
>>>>> +/*
>>>>> + * A single 'zspage' is composed of up to 2^N discontiguous 0-order (single)
>>>>> + * pages. ZS_MAX_ZSPAGE_ORDER defines upper limit on N.
>>>>> + */
>>>>> +#define ZS_MAX_ZSPAGE_ORDER 2
>>>>> +#define ZS_MAX_PAGES_PER_ZSPAGE (_AC(1, UL) << ZS_MAX_ZSPAGE_ORDER)
>>>>> +
>>>>> +/*
>>>>> + * Object location (<PFN>, <obj_idx>) is encoded as
>>>>> + * a single (unsigned long) handle value.
>>>>> + *
>>>>> + * Note that object index <obj_idx> is relative to system
>>>>> + * page <PFN> it is stored in, so for each sub-page belonging
>>>>> + * to a zspage, obj_idx starts with 0.
>>>>> + *
>>>>> + * This is made more complicated by various memory models and PAE.
>>>>> + */
>>>>> +
>>>>> +#ifndef MAX_PHYSMEM_BITS
>>>>> +#ifdef CONFIG_HIGHMEM64G
>>>>> +#define MAX_PHYSMEM_BITS 36
>>>>> +#else /* !CONFIG_HIGHMEM64G */
>>>>> +/*
>>>>> + * If this definition of MAX_PHYSMEM_BITS is used, OBJ_INDEX_BITS will just
>>>>> + * be PAGE_SHIFT
>>>>> + */
>>>>> +#define MAX_PHYSMEM_BITS BITS_PER_LONG
>>>>> +#endif
>>>>> +#endif
>>>>> +#define _PFN_BITS		(MAX_PHYSMEM_BITS - PAGE_SHIFT)
>>>>> +#define OBJ_INDEX_BITS	(BITS_PER_LONG - _PFN_BITS)
>>>>> +#define OBJ_INDEX_MASK	((_AC(1, UL) << OBJ_INDEX_BITS) - 1)
>>>>> +
>>>>> +#define MAX(a, b) ((a) >= (b) ? (a) : (b))
>>>>> +/* ZS_MIN_ALLOC_SIZE must be multiple of ZS_ALIGN */
>>>>> +#define ZS_MIN_ALLOC_SIZE \
>>>>> +	MAX(32, (ZS_MAX_PAGES_PER_ZSPAGE << PAGE_SHIFT >> OBJ_INDEX_BITS))
>>>>> +#define ZS_MAX_ALLOC_SIZE	PAGE_SIZE
>>>>> +
>>>>> +/*
>>>>> + * On systems with 4K page size, this gives 254 size classes! There is a
>>>>> + * trade-off here:
>>>>> + *  - Large number of size classes is potentially wasteful as free pages are
>>>>> + *    spread across these classes
>>>>> + *  - Small number of size classes causes large internal fragmentation
>>>>> + *  - Probably it's better to use specific size classes (empirically
>>>>> + *    determined). NOTE: all those class sizes must be set as multiple of
>>>>> + *    ZS_ALIGN to make sure link_free itself never has to span 2 pages.
>>>>> + *
>>>>> + *  ZS_MIN_ALLOC_SIZE and ZS_SIZE_CLASS_DELTA must be multiple of ZS_ALIGN
>>>>> + *  (reason above)
>>>>> + */
>>>>> +#define ZS_SIZE_CLASS_DELTA	(PAGE_SIZE >> 8)
>>>>> +#define ZS_SIZE_CLASSES		((ZS_MAX_ALLOC_SIZE - ZS_MIN_ALLOC_SIZE) / \
>>>>> +					ZS_SIZE_CLASS_DELTA + 1)
>>>>> +
>>>>> +/*
>>>>> + * We do not maintain any list for completely empty or full pages
>>>>> + */
>>>>> +enum fullness_group {
>>>>> +	ZS_ALMOST_FULL,
>>>>> +	ZS_ALMOST_EMPTY,
>>>>> +	_ZS_NR_FULLNESS_GROUPS,
>>>>> +
>>>>> +	ZS_EMPTY,
>>>>> +	ZS_FULL
>>>>> +};
>>>>> +
>>>>> +/*
>>>>> + * We assign a page to ZS_ALMOST_EMPTY fullness group when:
>>>>> + *	n <= N / f, where
>>>>> + * n = number of allocated objects
>>>>> + * N = total number of objects zspage can store
>>>>> + * f = 1/fullness_threshold_frac
>>>>> + *
>>>>> + * Similarly, we assign zspage to:
>>>>> + *	ZS_ALMOST_FULL	when n > N / f
>>>>> + *	ZS_EMPTY	when n == 0
>>>>> + *	ZS_FULL		when n == N
>>>>> + *
>>>>> + * (see: fix_fullness_group())
>>>>> + */
>>>>> +static const int fullness_threshold_frac = 4;
>>>>> +
>>>>> +struct size_class {
>>>>> +	/*
>>>>> +	 * Size of objects stored in this class. Must be multiple
>>>>> +	 * of ZS_ALIGN.
>>>>> +	 */
>>>>> +	int size;
>>>>> +	unsigned int index;
>>>>> +
>>>>> +	/* Number of PAGE_SIZE sized pages to combine to form a 'zspage' */
>>>>> +	int pages_per_zspage;
>>>>> +
>>>>> +	spinlock_t lock;
>>>>> +
>>>>> +	/* stats */
>>>>> +	u64 pages_allocated;
>>>>> +
>>>>> +	struct page *fullness_list[_ZS_NR_FULLNESS_GROUPS];
>>>>> +};
>>>>
>>>> Instead of simple pointer, how about using list_head?
>>>> With this, fullness_list management is easily consolidated to
>>>> set_zspage_mapping() and we can remove remove_zspage(), insert_zspage().
>>>
>>> Makes sense to me.  Nitin what do you think?
>>
>> I like it although I'm not Nitin.
>>
>>>
>>>> And how about maintaining FULL, EMPTY list?
>>>> There is not much memory waste and it can be used for debugging and
>>>> implementing other functionality.
>>
>> Joonsoo, could you elaborate on ideas you have about debugging and
>> other functions you mentioned?
>> We need justification for the change rather than saying it
>> "might be useful in the future". Then we can judge whether we should do
>> it right now or can add it in the future when we really need it.
> 
> It was just a quick thought, so I don't have a concrete idea yet.
> As Seth said, with a FULL list, zsmalloc can always reach every zspage.
> So, if we want to know which pages belong to zsmalloc, we can find out.
> The EMPTY list could be used as a pool for zsmalloc itself. With it, we
> wouldn't need to free a zspage right away; we could keep zspages around
> and reduce alloc/free overhead. But I'm not sure whether that is useful.

I think it's a good idea.  zswap actually does this "keeping some free
pages around for later allocations" outside zsmalloc in a mempool that
zswap manages.  Minchan once mentioned bringing that inside zsmalloc
and this would be a way we could do it.

Just want to be clear that I'd be in favor of looking at this after
the merge.

Thanks,
Seth


^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCHv5 2/8] zsmalloc: add documentation
  2013-02-22  2:56           ` Ric Mason
@ 2013-02-22 21:02             ` Seth Jennings
  2013-02-24  0:37               ` Ric Mason
  0 siblings, 1 reply; 38+ messages in thread
From: Seth Jennings @ 2013-02-22 21:02 UTC (permalink / raw)
  To: Ric Mason
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches,
	linux-mm, linux-kernel, devel

On 02/21/2013 08:56 PM, Ric Mason wrote:
> On 02/21/2013 11:50 PM, Seth Jennings wrote:
>> On 02/21/2013 02:49 AM, Ric Mason wrote:
>>> On 02/19/2013 03:16 AM, Seth Jennings wrote:
>>>> On 02/16/2013 12:21 AM, Ric Mason wrote:
>>>>> On 02/14/2013 02:38 AM, Seth Jennings wrote:
>>>>>> This patch adds a documentation file for zsmalloc at
>>>>>> Documentation/vm/zsmalloc.txt
>>>>>>
>>>>>> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
>>>>>> ---
>>>>>>     Documentation/vm/zsmalloc.txt |   68
>>>>>> +++++++++++++++++++++++++++++++++++++++++
>>>>>>     1 file changed, 68 insertions(+)
>>>>>>     create mode 100644 Documentation/vm/zsmalloc.txt
>>>>>>
>>>>>> diff --git a/Documentation/vm/zsmalloc.txt
>>>>>> b/Documentation/vm/zsmalloc.txt
>>>>>> new file mode 100644
>>>>>> index 0000000..85aa617
>>>>>> --- /dev/null
>>>>>> +++ b/Documentation/vm/zsmalloc.txt
>>>>>> @@ -0,0 +1,68 @@
>>>>>> +zsmalloc Memory Allocator
>>>>>> +
>>>>>> +Overview
>>>>>> +
>>>>>> +zsmalloc is a new slab-based memory allocator
>>>>>> +for storing compressed pages.  It is designed for
>>>>>> +low fragmentation and high allocation success rate on
>>>>>> +large-object (but <= PAGE_SIZE) allocations.
>>>>>> +
>>>>>> +zsmalloc differs from the kernel slab allocator in two primary
>>>>>> +ways to achieve these design goals.
>>>>>> +
>>>>>> +zsmalloc never requires high order page allocations to back
>>>>>> +slabs, or "size classes" in zsmalloc terms. Instead it allows
>>>>>> +multiple single-order pages to be stitched together into a
>>>>>> +"zspage" which backs the slab.  This allows for higher allocation
>>>>>> +success rate under memory pressure.
>>>>>> +
>>>>>> +Also, zsmalloc allows objects to span page boundaries within the
>>>>>> +zspage.  This allows for lower fragmentation than could be had
>>>>>> +with the kernel slab allocator for objects between PAGE_SIZE/2
>>>>>> +and PAGE_SIZE.  With the kernel slab allocator, if a page
>>>>>> compresses
>>>>>> +to 60% of its original size, the memory savings gained through
>>>>>> +compression are lost to fragmentation because another object of
>>>>>> +the same size can't be stored in the leftover space.
>>>>>> +
>>>>>> +This ability to span pages results in zsmalloc allocations not
>>>>>> being
>>>>>> +directly addressable by the user.  The user is given a
>>>>>> +non-dereferenceable handle in response to an allocation request.
>>>>>> +That handle must be mapped, using zs_map_object(), which returns
>>>>>> +a pointer to the mapped region that can be used.  The mapping is
>>>>>> +necessary since the object data may reside in two different
>>>>>> +noncontiguous pages.
>>>>> Do you mean the reason a zsmalloc object must be mapped after
>>>>> allocation is that the object data may reside in two different
>>>>> noncontiguous pages?
>>>> Yes, that is one reason for the mapping.  The other reason (more
>>>> of an
>>>> added bonus) is below.
>>>>
>>>>>> +
>>>>>> +For 32-bit systems, zsmalloc has the added benefit of being
>>>>>> +able to back slabs with HIGHMEM pages, something not possible
>>>>> What's the meaning of "back slabs with HIGHMEM pages"?
>>>> By HIGHMEM, I'm referring to the HIGHMEM memory zone on 32-bit
>>>> systems
>>>> with more than 1GB (actually a little less) of RAM.  The upper 3GB
>>>> of the 4GB address space, depending on kernel build options, is not
>>>> directly addressable by the kernel, but can be mapped into the kernel
>>>> address space with functions like kmap() or kmap_atomic().
>>>>
>>>> These pages can't be used by slab/slub because they are not
>>>> continuously mapped into the kernel address space.  However, since
>>>> zsmalloc requires a mapping anyway to handle objects that span
>>>> non-contiguous page boundaries, we do the kernel mapping as part of
>>>> the process.
>>>>
>>>> So zspages, the conceptual slab in zsmalloc backed by single-order
>>>> pages can include pages from the HIGHMEM zone as well.
>>> Thanks for your clarify,
>>>   http://lwn.net/Articles/537422/, your article about zswap in lwn.
>>>   "Additionally, the kernel slab allocator does not allow objects that
>>> are less
>>> than a page in size to span a page boundary. This means that if an
>>> object is
>>> PAGE_SIZE/2 + 1 bytes in size, it effectively uses an entire page,
>>> resulting in
>>> ~50% waste. Hence there are *no kmalloc() cache sizes* between
>>> PAGE_SIZE/2 and
>>> PAGE_SIZE."
>>> Are you sure? It seems that the kmalloc caches support big sizes; you
>>> can check in
>>> include/linux/kmalloc_sizes.h
>> Yes, kmalloc can allocate large objects > PAGE_SIZE, but there are no
>> cache sizes _between_ PAGE_SIZE/2 and PAGE_SIZE.  For example, on a
>> system with 4k pages, there are no caches between kmalloc-2048 and
>> kmalloc-4096.
> 
> A kmalloc object > PAGE_SIZE/2 or > PAGE_SIZE should also be allocated
> from a slab cache, correct? Then how can an object be allocated without
> a slab cache that holds objects of its size?

I have to admit, I didn't understand the question.

Thanks,
Seth


^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCHv5 2/8] zsmalloc: add documentation
  2013-02-22 21:02             ` Seth Jennings
@ 2013-02-24  0:37               ` Ric Mason
  2013-02-25 15:18                 ` Seth Jennings
  0 siblings, 1 reply; 38+ messages in thread
From: Ric Mason @ 2013-02-24  0:37 UTC (permalink / raw)
  To: Seth Jennings, Joonsoo Kim
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches,
	linux-mm, linux-kernel, devel

On 02/23/2013 05:02 AM, Seth Jennings wrote:
> On 02/21/2013 08:56 PM, Ric Mason wrote:
>> On 02/21/2013 11:50 PM, Seth Jennings wrote:
>>> On 02/21/2013 02:49 AM, Ric Mason wrote:
>>>> On 02/19/2013 03:16 AM, Seth Jennings wrote:
>>>>> On 02/16/2013 12:21 AM, Ric Mason wrote:
>>>>>> On 02/14/2013 02:38 AM, Seth Jennings wrote:
>>>>>>> This patch adds a documentation file for zsmalloc at
>>>>>>> Documentation/vm/zsmalloc.txt
>>>>>>>
>>>>>>> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
>>>>>>> ---
>>>>>>>      Documentation/vm/zsmalloc.txt |   68
>>>>>>> +++++++++++++++++++++++++++++++++++++++++
>>>>>>>      1 file changed, 68 insertions(+)
>>>>>>>      create mode 100644 Documentation/vm/zsmalloc.txt
>>>>>>>
>>>>>>> diff --git a/Documentation/vm/zsmalloc.txt
>>>>>>> b/Documentation/vm/zsmalloc.txt
>>>>>>> new file mode 100644
>>>>>>> index 0000000..85aa617
>>>>>>> --- /dev/null
>>>>>>> +++ b/Documentation/vm/zsmalloc.txt
>>>>>>> @@ -0,0 +1,68 @@
>>>>>>> +zsmalloc Memory Allocator
>>>>>>> +
>>>>>>> +Overview
>>>>>>> +
>>>>>>> +zsmalloc is a new slab-based memory allocator
>>>>>>> +for storing compressed pages.  It is designed for
>>>>>>> +low fragmentation and high allocation success rate on
>>>>>>> +large-object (but <= PAGE_SIZE) allocations.
>>>>>>> +
>>>>>>> +zsmalloc differs from the kernel slab allocator in two primary
>>>>>>> +ways to achieve these design goals.
>>>>>>> +
>>>>>>> +zsmalloc never requires high order page allocations to back
>>>>>>> +slabs, or "size classes" in zsmalloc terms. Instead it allows
>>>>>>> +multiple single-order pages to be stitched together into a
>>>>>>> +"zspage" which backs the slab.  This allows for higher allocation
>>>>>>> +success rate under memory pressure.
>>>>>>> +
>>>>>>> +Also, zsmalloc allows objects to span page boundaries within the
>>>>>>> +zspage.  This allows for lower fragmentation than could be had
>>>>>>> +with the kernel slab allocator for objects between PAGE_SIZE/2
>>>>>>> +and PAGE_SIZE.  With the kernel slab allocator, if a page
>>>>>>> compresses
>>>>>>> +to 60% of its original size, the memory savings gained through
>>>>>>> +compression are lost to fragmentation because another object of
>>>>>>> +the same size can't be stored in the leftover space.
>>>>>>> +
>>>>>>> +This ability to span pages results in zsmalloc allocations not
>>>>>>> being
>>>>>>> +directly addressable by the user.  The user is given a
>>>>>>> +non-dereferenceable handle in response to an allocation request.
>>>>>>> +That handle must be mapped, using zs_map_object(), which returns
>>>>>>> +a pointer to the mapped region that can be used.  The mapping is
>>>>>>> +necessary since the object data may reside in two different
>>>>>>> +noncontiguous pages.
>>>>>> Do you mean the reason a zsmalloc object must be mapped after
>>>>>> allocation is that the object data may reside in two different
>>>>>> noncontiguous pages?
>>>>> Yes, that is one reason for the mapping.  The other reason (more
>>>>> of an
>>>>> added bonus) is below.
>>>>>
>>>>>>> +
>>>>>>> +For 32-bit systems, zsmalloc has the added benefit of being
>>>>>>> +able to back slabs with HIGHMEM pages, something not possible
>>>>>> What's the meaning of "back slabs with HIGHMEM pages"?
>>>>> By HIGHMEM, I'm referring to the HIGHMEM memory zone on 32-bit
>>>>> systems
>>>>> with more than 1GB (actually a little less) of RAM.  The upper 3GB
>>>>> of the 4GB address space, depending on kernel build options, is not
>>>>> directly addressable by the kernel, but can be mapped into the kernel
>>>>> address space with functions like kmap() or kmap_atomic().
>>>>>
>>>>> These pages can't be used by slab/slub because they are not
>>>>> continuously mapped into the kernel address space.  However, since
>>>>> zsmalloc requires a mapping anyway to handle objects that span
>>>>> non-contiguous page boundaries, we do the kernel mapping as part of
>>>>> the process.
>>>>>
>>>>> So zspages, the conceptual slab in zsmalloc backed by single-order
>>>>> pages can include pages from the HIGHMEM zone as well.
>>>> Thanks for your clarify,
>>>>    http://lwn.net/Articles/537422/, your article about zswap in lwn.
>>>>    "Additionally, the kernel slab allocator does not allow objects that
>>>> are less
>>>> than a page in size to span a page boundary. This means that if an
>>>> object is
>>>> PAGE_SIZE/2 + 1 bytes in size, it effectively uses an entire page,
>>>> resulting in
>>>> ~50% waste. Hence there are *no kmalloc() cache sizes* between
>>>> PAGE_SIZE/2 and
>>>> PAGE_SIZE."
>>>> Are you sure? It seems that the kmalloc caches support big sizes; you
>>>> can check in
>>>> include/linux/kmalloc_sizes.h
>>> Yes, kmalloc can allocate large objects > PAGE_SIZE, but there are no
>>> cache sizes _between_ PAGE_SIZE/2 and PAGE_SIZE.  For example, on a
>>> system with 4k pages, there are no caches between kmalloc-2048 and
>>> kmalloc-4096.
>> A kmalloc object > PAGE_SIZE/2 or > PAGE_SIZE should also be allocated
>> from a slab cache, correct? Then how can an object be allocated without
>> a slab cache that holds objects of its size?
> I have to admit, I didn't understand the question.

Objects are allocated from slab caches, correct? There are two kinds of
slab cache: general-purpose ones, e.g. the kmalloc caches, and
special-purpose ones, e.g. for mm_struct or task_struct. A kmalloc
object > PAGE_SIZE/2 or > PAGE_SIZE should also be allocated from a
slab cache, correct? Then why do you say that there are no caches
between kmalloc-2048 and kmalloc-4096?

>
> Thanks,
> Seth
>


^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCHv5 7/8] zswap: add swap page writeback support
       [not found] ` <1360780731-11708-8-git-send-email-sjenning@linux.vnet.ibm.com>
  2013-02-16  6:11   ` [PATCHv5 7/8] zswap: add swap page writeback support Ric Mason
@ 2013-02-25  2:54   ` Minchan Kim
  2013-02-25 17:37     ` Seth Jennings
  1 sibling, 1 reply; 38+ messages in thread
From: Minchan Kim @ 2013-02-25  2:54 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches,
	linux-mm, linux-kernel, devel

Hi Seth,

On Wed, Feb 13, 2013 at 12:38:50PM -0600, Seth Jennings wrote:
> This patch adds support for evicting swap pages that are currently
> compressed in zswap to the swap device.  This functionality is very
> important and makes zswap a true cache in that, once the cache is full
> or can't grow due to memory pressure, the oldest pages can be moved
> out of zswap to the swap device so newer pages can be compressed and
> stored in zswap.
> 
> This introduces a good amount of new code to guarantee coherency.
> Most notably, an LRU list is added to the zswap_tree structure,
> and refcounts are added to each entry to ensure that one code path
> doesn't free the entry while another code path is operating on it.
> 
> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>

This time I haven't reviewed the code in detail yet, but it seems to
resolve all the review points from the previous iteration. Thanks!
Unfortunately, I couldn't find anything related to tmppage handling,
so I'd like to ask.

Is the reason for tmppage that it is a temporary buffer to keep the
compressed data during writeback, to avoid compressing again
unnecessarily when we retry? Is it really critical for performance?
What would be wrong if we remove the tmppage handling?

zswap_frontswap_store
retry:
        get_cpu_var(zswap_dstmem);
        zswap_com_op(COMPRESS)
        zs_malloc()
        if (!handle) {
                put_cpu_var(zswap_dstmem);
                if (retry > MAX_RETRY)
                        goto error_nomem;
                zswap_flush_entries()
                goto retry;
        }


-- 
Kind regards,
Minchan Kim


* Re: [PATCHv5 2/8] zsmalloc: add documentation
  2013-02-24  0:37               ` Ric Mason
@ 2013-02-25 15:18                 ` Seth Jennings
  2013-03-01  6:47                   ` Ric Mason
  0 siblings, 1 reply; 38+ messages in thread
From: Seth Jennings @ 2013-02-25 15:18 UTC (permalink / raw)
  To: Ric Mason
  Cc: Joonsoo Kim, Andrew Morton, Greg Kroah-Hartman, Nitin Gupta,
	Minchan Kim, Konrad Rzeszutek Wilk, Dan Magenheimer,
	Robert Jennings, Jenifer Hopper, Mel Gorman, Johannes Weiner,
	Rik van Riel, Larry Woodman, Benjamin Herrenschmidt, Dave Hansen,
	Joe Perches, linux-mm, linux-kernel, devel

On 02/23/2013 06:37 PM, Ric Mason wrote:
> On 02/23/2013 05:02 AM, Seth Jennings wrote:
>> On 02/21/2013 08:56 PM, Ric Mason wrote:
>>> On 02/21/2013 11:50 PM, Seth Jennings wrote:
>>>> On 02/21/2013 02:49 AM, Ric Mason wrote:
>>>>> On 02/19/2013 03:16 AM, Seth Jennings wrote:
>>>>>> On 02/16/2013 12:21 AM, Ric Mason wrote:
>>>>>>> On 02/14/2013 02:38 AM, Seth Jennings wrote:
>>>>>>>> This patch adds a documentation file for zsmalloc at
>>>>>>>> Documentation/vm/zsmalloc.txt
>>>>>>>>
>>>>>>>> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
>>>>>>>> ---
>>>>>>>>      Documentation/vm/zsmalloc.txt |   68
>>>>>>>> +++++++++++++++++++++++++++++++++++++++++
>>>>>>>>      1 file changed, 68 insertions(+)
>>>>>>>>      create mode 100644 Documentation/vm/zsmalloc.txt
>>>>>>>>
>>>>>>>> diff --git a/Documentation/vm/zsmalloc.txt
>>>>>>>> b/Documentation/vm/zsmalloc.txt
>>>>>>>> new file mode 100644
>>>>>>>> index 0000000..85aa617
>>>>>>>> --- /dev/null
>>>>>>>> +++ b/Documentation/vm/zsmalloc.txt
>>>>>>>> @@ -0,0 +1,68 @@
>>>>>>>> +zsmalloc Memory Allocator
>>>>>>>> +
>>>>>>>> +Overview
>>>>>>>> +
>>>>>>>> +zsmalloc is a new slab-based memory allocator
>>>>>>>> +for storing compressed pages.  It is designed for
>>>>>>>> +low fragmentation and high allocation success rate for
>>>>>>>> +large, but <= PAGE_SIZE, allocations.
>>>>>>>> +
>>>>>>>> +zsmalloc differs from the kernel slab allocator in two primary
>>>>>>>> +ways to achieve these design goals.
>>>>>>>> +
>>>>>>>> +zsmalloc never requires high order page allocations to back
>>>>>>>> +slabs, or "size classes" in zsmalloc terms. Instead it allows
>>>>>>>> +multiple single-order pages to be stitched together into a
>>>>>>>> +"zspage" which backs the slab.  This allows for higher
>>>>>>>> allocation
>>>>>>>> +success rate under memory pressure.
>>>>>>>> +
>>>>>>>> +Also, zsmalloc allows objects to span page boundaries within the
>>>>>>>> +zspage.  This allows for lower fragmentation than could be had
>>>>>>>> +with the kernel slab allocator for objects between PAGE_SIZE/2
>>>>>>>> +and PAGE_SIZE.  With the kernel slab allocator, if a page
>>>>>>>> compresses
>>>>>>>> +to 60% of its original size, the memory savings gained through
>>>>>>>> +compression is lost in fragmentation because another object of
>>>>>>>> +the same size can't be stored in the leftover space.
>>>>>>>> +
>>>>>>>> +This ability to span pages results in zsmalloc allocations not
>>>>>>>> being
>>>>>>>> +directly addressable by the user.  The user is given a
>>>>>>>> +non-dereferenceable handle in response to an allocation request.
>>>>>>>> +That handle must be mapped, using zs_map_object(), which returns
>>>>>>>> +a pointer to the mapped region that can be used.  The mapping is
>>>>>>>> +necessary since the object data may reside in two different
>>>>>>>> +noncontiguous pages.
>>>>>>> Do you mean that the reason a zsmalloc object must be mapped after
>>>>>>> allocation is that the object data may reside in two different
>>>>>>> noncontiguous pages?
>>>>>> Yes, that is one reason for the mapping.  The other reason (more
>>>>>> of an
>>>>>> added bonus) is below.
>>>>>>
>>>>>>>> +
>>>>>>>> +For 32-bit systems, zsmalloc has the added benefit of being
>>>>>>>> +able to back slabs with HIGHMEM pages, something not possible
>>>>>>> What's the meaning of "back slabs with HIGHMEM pages"?
>>>>>> By HIGHMEM, I'm referring to the HIGHMEM memory zone on 32-bit
>>>>>> systems
>>>>>> with larger than 1GB (actually a little less) of RAM.  The upper
>>>>>> 3GB
>>>>>> of the 4GB address space, depending on kernel build options, is not
>>>>>> directly addressable by the kernel, but can be mapped into the
>>>>>> kernel
>>>>>> address space with functions like kmap() or kmap_atomic().
>>>>>>
>>>>>> These pages can't be used by slab/slub because they are not
>>>>>> continuously mapped into the kernel address space.  However, since
>>>>>> zsmalloc requires a mapping anyway to handle objects that span
>>>>>> non-contiguous page boundaries, we do the kernel mapping as part of
>>>>>> the process.
>>>>>>
>>>>>> So zspages, the conceptual slab in zsmalloc backed by single-order
>>>>>> pages can include pages from the HIGHMEM zone as well.
>>>>> Thanks for your clarification,
>>>>>    http://lwn.net/Articles/537422/, your article about zswap in lwn.
>>>>>    "Additionally, the kernel slab allocator does not allow
>>>>> objects that
>>>>> are less
>>>>> than a page in size to span a page boundary. This means that if an
>>>>> object is
>>>>> PAGE_SIZE/2 + 1 bytes in size, it effectively uses an entire page,
>>>>> resulting in
>>>>> ~50% waste. Hence there are *no kmalloc() cache sizes* between
>>>>> PAGE_SIZE/2 and
>>>>> PAGE_SIZE."
>>>>> Are you sure? It seems that the kmalloc caches support big sizes;
>>>>> you can check in
>>>>> include/linux/kmalloc_sizes.h
>>>> Yes, kmalloc can allocate large objects > PAGE_SIZE, but there are no
>>>> cache sizes _between_ PAGE_SIZE/2 and PAGE_SIZE.  For example, on a
>>>> system with 4k pages, there are no caches between kmalloc-2048 and
>>>> kmalloc-4096.
>>> A kmalloc object > PAGE_SIZE/2 or > PAGE_SIZE should also be allocated
>>> from a slab cache, correct? Then how can an object be allocated without
>>> a slab cache that holds objects of that size?
>> I have to admit, I didn't understand the question.
> 
> Objects are allocated from slab caches, correct? There are two kinds
> of slab cache: general-purpose ones, e.g. the kmalloc caches, and
> special-purpose ones, e.g. for mm_struct or task_struct. A kmalloc
> object > PAGE_SIZE/2 or > PAGE_SIZE should also be allocated from a
> slab cache, correct? Then why do you say that there are no caches
> between kmalloc-2048 and kmalloc-4096?

Ok, now I get it.  Yes, I guess I should have qualified here that there are
no _kmalloc_ caches between PAGE_SIZE/2 and PAGE_SIZE.

Yes, one can create caches of a particular size.  However that doesn't
work well for zswap because the compressed pages vary widely in size
and, imo, it doesn't make sense to create a bunch of caches very
granular in size.

Plus having granular caches doesn't solve the fragmentation issue
caused by the storage of large objects.

Thanks,
Seth



* RE: [PATCHv5 1/8] zsmalloc: add to mm/
  2013-02-22 20:04           ` Seth Jennings
@ 2013-02-25 17:05             ` Dan Magenheimer
  2013-02-25 19:14               ` Seth Jennings
  0 siblings, 1 reply; 38+ messages in thread
From: Dan Magenheimer @ 2013-02-25 17:05 UTC (permalink / raw)
  To: Seth Jennings, Joonsoo Kim
  Cc: Minchan Kim, Nitin Gupta, Andrew Morton, Greg Kroah-Hartman,
	Konrad Wilk, Robert Jennings, Jenifer Hopper, Mel Gorman,
	Johannes Weiner, Rik van Riel, Larry Woodman,
	Benjamin Herrenschmidt, Dave Hansen, Joe Perches, linux-mm,
	linux-kernel, devel

> From: Seth Jennings [mailto:sjenning@linux.vnet.ibm.com]
> Sent: Friday, February 22, 2013 1:04 PM
> To: Joonsoo Kim
> Subject: Re: [PATCHv5 1/8] zsmalloc: add to mm/
> 
> On 02/22/2013 03:24 AM, Joonsoo Kim wrote:
> >
> > It's my quick thought. So there is no concrete idea.
> > As Seth said, with a FULL list, zsmalloc can always access all zspages.
> > So, if we want to know what pages are for zsmalloc, we can know it.
> > The EMPTY list can be used for pool of zsmalloc itself. With it, we don't
> > need to free zspage directly, we can keep zspages, so can reduce
> > alloc/free overhead. But, I'm not sure whether it is useful.
> 
> I think it's a good idea.  zswap actually does this "keeping some free
> pages around for later allocations" outside zsmalloc in a mempool that
> zswap manages.  Minchan once mentioned bringing that inside zsmalloc
> and this would be a way we could do it.

I think it's a very bad idea.  If I understand, the suggestion will
hide away some quantity (possibly a very large quantity) of pages
for the sole purpose of zswap, in case zswap gets around to using them
sometime in the future.  In the meantime, those pages are not available
for use by any other kernel subsystems or by userland processes.
An idle page is a wasted page.

While you might defend the mempool use for a handful of pages,
frontswap writes/reads thousands of pages in a bursty way,
and then can go idle for a very long time.  This may not be
readily apparent with artificially-created memory pressure
from kernbench with -jN (high N).  Leaving thousands
of pages in zswap's personal free list may cause memory pressure
that would otherwise never have existed.

> Just want to be clear that I'd be in favor of looking at this after
> the merge.

I disagree... I think this is exactly the kind of fundamental
MM interaction that should be well understood and resolved
BEFORE anything gets merged.


* Re: [PATCHv5 7/8] zswap: add swap page writeback support
  2013-02-25  2:54   ` Minchan Kim
@ 2013-02-25 17:37     ` Seth Jennings
  0 siblings, 0 replies; 38+ messages in thread
From: Seth Jennings @ 2013-02-25 17:37 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches,
	linux-mm, linux-kernel, devel

On 02/24/2013 08:54 PM, Minchan Kim wrote:
> Hi Seth,
> 
> On Wed, Feb 13, 2013 at 12:38:50PM -0600, Seth Jennings wrote:
>> This patch adds support for evicting swap pages that are currently
>> compressed in zswap to the swap device.  This functionality is very
>> important and makes zswap a true cache in that, once the cache is full
>> or can't grow due to memory pressure, the oldest pages can be moved
>> out of zswap to the swap device so newer pages can be compressed and
>> stored in zswap.
>>
>> This introduces a good amount of new code to guarantee coherency.
>> Most notably, an LRU list is added to the zswap_tree structure,
>> and refcounts are added to each entry to ensure that one code path
>> doesn't free the entry while another code path is operating on it.
>>
>> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
> 
> This time I haven't reviewed the code in detail yet, but it seems to
> resolve all the review points from the previous iteration. Thanks!
> Unfortunately, I couldn't find anything related to tmppage handling,
> so I'd like to ask.
> 
> Is the reason for tmppage that it is a temporary buffer to keep the
> compressed data during writeback, to avoid compressing again
> unnecessarily when we retry?

Yes.

> Is it really critical for performance?

It's hard to measure.  There is no guarantee that
zswap_flush_entries() has made room for the allocation, so if we fail
again, we've compressed the page twice and still fail.

So my motivation was to prevent the second compression.  It does add
significant complexity though without a completely clear (i.e.
measurable) benefit.


> What would be wrong if we remove the
> tmppage handling?
> 
> zswap_frontswap_store
> retry:
>         get_cpu_var(zswap_dstmem);
>         zswap_com_op(COMPRESS)
>         zs_malloc()
>         if (!handle) {
>                 put_cpu_var(zswap_dstmem);
>                 if (retry > MAX_RETRY)
>                         goto error_nomem;
>                 zswap_flush_entries()
>                 goto retry;
>         }

I dislike "jump up" labels, but yes, something like this could be done.

Thanks,
Seth



* Re: [PATCHv5 1/8] zsmalloc: add to mm/
  2013-02-25 17:05             ` Dan Magenheimer
@ 2013-02-25 19:14               ` Seth Jennings
  2013-02-26  0:20                 ` Dan Magenheimer
  0 siblings, 1 reply; 38+ messages in thread
From: Seth Jennings @ 2013-02-25 19:14 UTC (permalink / raw)
  To: Dan Magenheimer
  Cc: Joonsoo Kim, Minchan Kim, Nitin Gupta, Andrew Morton,
	Greg Kroah-Hartman, Konrad Wilk, Robert Jennings, Jenifer Hopper,
	Mel Gorman, Johannes Weiner, Rik van Riel, Larry Woodman,
	Benjamin Herrenschmidt, Dave Hansen, Joe Perches, linux-mm,
	linux-kernel, devel

On 02/25/2013 11:05 AM, Dan Magenheimer wrote:
>> From: Seth Jennings [mailto:sjenning@linux.vnet.ibm.com]
>> Sent: Friday, February 22, 2013 1:04 PM
>> To: Joonsoo Kim
>> Subject: Re: [PATCHv5 1/8] zsmalloc: add to mm/
>>
>> On 02/22/2013 03:24 AM, Joonsoo Kim wrote:
>>>
>>> It's my quick thought. So there is no concrete idea.
>>> As Seth said, with a FULL list, zsmalloc can always access all zspages.
>>> So, if we want to know what pages are for zsmalloc, we can know it.
>>> The EMPTY list can be used for pool of zsmalloc itself. With it, we don't
>>> need to free zspage directly, we can keep zspages, so can reduce
>>> alloc/free overhead. But, I'm not sure whether it is useful.
>>
>> I think it's a good idea.  zswap actually does this "keeping some free
>> pages around for later allocations" outside zsmalloc in a mempool that
>> zswap manages.  Minchan once mentioned bringing that inside zsmalloc
>> and this would be a way we could do it.
> 
> I think it's a very bad idea.  If I understand, the suggestion will
> hide away some quantity (possibly a very large quantity) of pages
> for the sole purpose of zswap, in case zswap gets around to using them
> sometime in the future.  In the meantime, those pages are not available
> for use by any other kernel subsystems or by userland processes.
> An idle page is a wasted page.
> 
> While you might defend the mempool use for a handful of pages,
> frontswap writes/reads thousands of pages in a bursty way,
> and then can go idle for a very long time.  This may not be
> readily apparent with artificially-created memory pressure
> from kernbench with -jN (high N).  Leaving thousands
> of pages in zswap's personal free list may cause memory pressure
> that would otherwise never have existed.

I experimentally determined that this pool increased allocation
success rate and, therefore, reduced the number of pages going to the
swap device.

The zswap mempool has a target size of 256 pages.  This places an
upper bound on the number of pages held in reserve for zswap.  So we
aren't talking about "thousands of pages".

And yes, the pool does remove up to 1MB of memory (on a 4k PAGE_SIZE)
from general use, which causes the reclaim to start very slightly earlier.

> 
>> Just want to be clear that I'd be in favor of looking at this after
>> the merge.
> 
> I disagree... I think this is exactly the kind of fundamental
> MM interaction that should be well understood and resolved
> BEFORE anything gets merged.

While there is discussion to be had here, I don't agree that it's
"fundamental" and should not block merging.

The mempool does serve a purpose and adds measurable benefit. However,
if it is determined at some future time that having a reserved pool of
any size in zswap is bad practice, it can be removed trivially.

Thanks,
Seth



* RE: [PATCHv5 1/8] zsmalloc: add to mm/
  2013-02-25 19:14               ` Seth Jennings
@ 2013-02-26  0:20                 ` Dan Magenheimer
  0 siblings, 0 replies; 38+ messages in thread
From: Dan Magenheimer @ 2013-02-26  0:20 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Joonsoo Kim, Minchan Kim, Nitin Gupta, Andrew Morton,
	Greg Kroah-Hartman, Konrad Wilk, Robert Jennings, Jenifer Hopper,
	Mel Gorman, Johannes Weiner, Rik van Riel, Larry Woodman,
	Benjamin Herrenschmidt, Dave Hansen, Joe Perches, linux-mm,
	linux-kernel, devel

> From: Seth Jennings [mailto:sjenning@linux.vnet.ibm.com]
> Subject: Re: [PATCHv5 1/8] zsmalloc: add to mm/
> 
> On 02/25/2013 11:05 AM, Dan Magenheimer wrote:
> >> From: Seth Jennings [mailto:sjenning@linux.vnet.ibm.com]
> >> Sent: Friday, February 22, 2013 1:04 PM
> >> To: Joonsoo Kim
> >> Subject: Re: [PATCHv5 1/8] zsmalloc: add to mm/
> >>
> >> On 02/22/2013 03:24 AM, Joonsoo Kim wrote:
> >>>
> >>> It's my quick thought. So there is no concrete idea.
> >>> As Seth said, with a FULL list, zsmalloc can always access all zspages.
> >>> So, if we want to know what pages are for zsmalloc, we can know it.
> >>> The EMPTY list can be used for pool of zsmalloc itself. With it, we don't
> >>> need to free zspage directly, we can keep zspages, so can reduce
> >>> alloc/free overhead. But, I'm not sure whether it is useful.
> >>
> >> I think it's a good idea.  zswap actually does this "keeping some free
> >> pages around for later allocations" outside zsmalloc in a mempool that
> >> zswap manages.  Minchan once mentioned bringing that inside zsmalloc
> >> and this would be a way we could do it.
> >
> > I think it's a very bad idea.  If I understand, the suggestion will
> > hide away some quantity (possibly a very large quantity) of pages
> > for the sole purpose of zswap, in case zswap gets around to using them
> > sometime in the future.  In the meantime, those pages are not available
> > for use by any other kernel subsystems or by userland processes.
> > An idle page is a wasted page.
> >
> > While you might defend the mempool use for a handful of pages,
> > frontswap writes/reads thousands of pages in a bursty way,
> > and then can go idle for a very long time.  This may not be
> > readily apparent with artificially-created memory pressure
> > from kernbench with -jN (high N).  Leaving thousands
> > of pages in zswap's personal free list may cause memory pressure
> > that would otherwise never have existed.
> 
> I experimentally determined that this pool increased allocation
> success rate and, therefore, reduced the number of pages going to the
> swap device.

Of course it does.  But you can't experimentally determine how
pre-allocating pages will impact overall performance, especially
over a wide range of workloads that may or may not even swap.

> The zswap mempool has a target size of 256 pages.  This places an
> upper bound on the number of pages held in reserve for zswap.  So we
> aren't talking about "thousands of pages".

I used "thousands of pages" in reference to Joonsoo's idea about
releasing pages to the EMPTY list, not about mempool.  I wasn't
aware that mempool has a target size of 256 pages, but even
that smaller amount seems wrong to me, especially if the
mempool user (zswap) will blithely use many thousands of pages
in a burst.
 
> And yes, the pool does remove up to 1MB of memory (on a 4k PAGE_SIZE)
> from general use, which causes the reclaim to start very slightly earlier.
> 
> >
> >> Just want to be clear that I'd be in favor of looking at this after
> >> the merge.
> >
> > I disagree... I think this is exactly the kind of fundamental
> > MM interaction that should be well understood and resolved
> > BEFORE anything gets merged.
> 
> While there is discussion to be had here, I don't agree that it's
> "fundamental" and should not block merging.
> 
> The mempool does serve a purpose and adds measurable benefit. However,
> if it is determined at some future time that having a reserved pool of
> any size in zswap is bad practice, it can be removed trivially.

We had this argument last summer... Sure, _any_ cache can be shown
to add some measurable benefit under the right circumstances.
Any caching solution also has some often subtle side effects
on some workloads, that can have more costs than benefits.
Our cache (zcache and/or zswap and/or zram) is not just a cache
but it is also stealing capacity from its backing store (total
kernel RAM), so is even more likely to have side effects.

While I firmly believe -- intuitively -- that compression should
eventually be a feature of the MM subsystem, I don't think we
have enough understanding yet of the interactions or the
policies needed to control those interactions or the impact
of those policies on the underlying zram/zcache/zswap design choices.
For example, zswap completely ignores the pagecache... is that
a good thing or a bad thing, and why?

Seth, I'm not trying to piss you off, but last summer you seemed
hell-bent on promoting zcache out of staging, and it didn't
happen so you hacked off the part of zcache you were most interested
in, forked it and rewrote it, and are now trying to merge that
fork into MM ASAP.  Why?  Why can't we work together on solving
the hard problems instead of infighting and reimplementing the wheel?

Dan "shakes head"


* Re: [PATCHv5 2/8] zsmalloc: add documentation
  2013-02-25 15:18                 ` Seth Jennings
@ 2013-03-01  6:47                   ` Ric Mason
  0 siblings, 0 replies; 38+ messages in thread
From: Ric Mason @ 2013-03-01  6:47 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Joonsoo Kim, Andrew Morton, Greg Kroah-Hartman, Nitin Gupta,
	Minchan Kim, Konrad Rzeszutek Wilk, Dan Magenheimer,
	Robert Jennings, Jenifer Hopper, Mel Gorman, Johannes Weiner,
	Rik van Riel, Larry Woodman, Benjamin Herrenschmidt, Dave Hansen,
	Joe Perches, linux-mm, linux-kernel, devel

On 02/25/2013 11:18 PM, Seth Jennings wrote:
> On 02/23/2013 06:37 PM, Ric Mason wrote:
>> On 02/23/2013 05:02 AM, Seth Jennings wrote:
>>> On 02/21/2013 08:56 PM, Ric Mason wrote:
>>>> On 02/21/2013 11:50 PM, Seth Jennings wrote:
>>>>> On 02/21/2013 02:49 AM, Ric Mason wrote:
>>>>>> On 02/19/2013 03:16 AM, Seth Jennings wrote:
>>>>>>> On 02/16/2013 12:21 AM, Ric Mason wrote:
>>>>>>>> On 02/14/2013 02:38 AM, Seth Jennings wrote:
>>>>>>>>> This patch adds a documentation file for zsmalloc at
>>>>>>>>> Documentation/vm/zsmalloc.txt
>>>>>>>>>
>>>>>>>>> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
>>>>>>>>> ---
>>>>>>>>>       Documentation/vm/zsmalloc.txt |   68
>>>>>>>>> +++++++++++++++++++++++++++++++++++++++++
>>>>>>>>>       1 file changed, 68 insertions(+)
>>>>>>>>>       create mode 100644 Documentation/vm/zsmalloc.txt
>>>>>>>>>
>>>>>>>>> diff --git a/Documentation/vm/zsmalloc.txt
>>>>>>>>> b/Documentation/vm/zsmalloc.txt
>>>>>>>>> new file mode 100644
>>>>>>>>> index 0000000..85aa617
>>>>>>>>> --- /dev/null
>>>>>>>>> +++ b/Documentation/vm/zsmalloc.txt
>>>>>>>>> @@ -0,0 +1,68 @@
>>>>>>>>> +zsmalloc Memory Allocator
>>>>>>>>> +
>>>>>>>>> +Overview
>>>>>>>>> +
>>>>>>>>> +zsmalloc is a new slab-based memory allocator
>>>>>>>>> +for storing compressed pages.  It is designed for
>>>>>>>>> +low fragmentation and high allocation success rate for
>>>>>>>>> +large, but <= PAGE_SIZE, allocations.
>>>>>>>>> +
>>>>>>>>> +zsmalloc differs from the kernel slab allocator in two primary
>>>>>>>>> +ways to achieve these design goals.
>>>>>>>>> +
>>>>>>>>> +zsmalloc never requires high order page allocations to back
>>>>>>>>> +slabs, or "size classes" in zsmalloc terms. Instead it allows
>>>>>>>>> +multiple single-order pages to be stitched together into a
>>>>>>>>> +"zspage" which backs the slab.  This allows for higher
>>>>>>>>> allocation
>>>>>>>>> +success rate under memory pressure.
>>>>>>>>> +
>>>>>>>>> +Also, zsmalloc allows objects to span page boundaries within the
>>>>>>>>> +zspage.  This allows for lower fragmentation than could be had
>>>>>>>>> +with the kernel slab allocator for objects between PAGE_SIZE/2
>>>>>>>>> +and PAGE_SIZE.  With the kernel slab allocator, if a page
>>>>>>>>> compresses
>>>>>>>>> +to 60% of its original size, the memory savings gained through
>>>>>>>>> +compression is lost in fragmentation because another object of
>>>>>>>>> +the same size can't be stored in the leftover space.
>>>>>>>>> +
>>>>>>>>> +This ability to span pages results in zsmalloc allocations not
>>>>>>>>> being
>>>>>>>>> +directly addressable by the user.  The user is given a
>>>>>>>>> +non-dereferenceable handle in response to an allocation request.
>>>>>>>>> +That handle must be mapped, using zs_map_object(), which returns
>>>>>>>>> +a pointer to the mapped region that can be used.  The mapping is
>>>>>>>>> +necessary since the object data may reside in two different
>>>>>>>>> +noncontiguous pages.
>>>>>>>> Do you mean that the reason a zsmalloc object must be mapped after
>>>>>>>> allocation is that the object data may reside in two different
>>>>>>>> noncontiguous pages?
>>>>>>> Yes, that is one reason for the mapping.  The other reason (more
>>>>>>> of an
>>>>>>> added bonus) is below.
>>>>>>>
>>>>>>>>> +
>>>>>>>>> +For 32-bit systems, zsmalloc has the added benefit of being
>>>>>>>>> +able to back slabs with HIGHMEM pages, something not possible
>>>>>>>> What's the meaning of "back slabs with HIGHMEM pages"?
>>>>>>> By HIGHMEM, I'm referring to the HIGHMEM memory zone on 32-bit
>>>>>>> systems
>>>>>>> with larger than 1GB (actually a little less) of RAM.  The upper
>>>>>>> 3GB
>>>>>>> of the 4GB address space, depending on kernel build options, is not
>>>>>>> directly addressable by the kernel, but can be mapped into the
>>>>>>> kernel
>>>>>>> address space with functions like kmap() or kmap_atomic().
>>>>>>>
>>>>>>> These pages can't be used by slab/slub because they are not
>>>>>>> continuously mapped into the kernel address space.  However, since
>>>>>>> zsmalloc requires a mapping anyway to handle objects that span
>>>>>>> non-contiguous page boundaries, we do the kernel mapping as part of
>>>>>>> the process.
>>>>>>>
>>>>>>> So zspages, the conceptual slab in zsmalloc backed by single-order
>>>>>>> pages can include pages from the HIGHMEM zone as well.
>>>>>> Thanks for your clarification,
>>>>>>     http://lwn.net/Articles/537422/, your article about zswap in lwn.
>>>>>>     "Additionally, the kernel slab allocator does not allow
>>>>>> objects that
>>>>>> are less
>>>>>> than a page in size to span a page boundary. This means that if an
>>>>>> object is
>>>>>> PAGE_SIZE/2 + 1 bytes in size, it effectively uses an entire page,
>>>>>> resulting in
>>>>>> ~50% waste. Hence there are *no kmalloc() cache sizes* between
>>>>>> PAGE_SIZE/2 and
>>>>>> PAGE_SIZE."
>>>>>> Are you sure? It seems that the kmalloc caches support big sizes;
>>>>>> you can check in
>>>>>> include/linux/kmalloc_sizes.h
>>>>> Yes, kmalloc can allocate large objects > PAGE_SIZE, but there are no
>>>>> cache sizes _between_ PAGE_SIZE/2 and PAGE_SIZE.  For example, on a
>>>>> system with 4k pages, there are no caches between kmalloc-2048 and
>>>>> kmalloc-4096.
>>>> A kmalloc object > PAGE_SIZE/2 or > PAGE_SIZE should also be allocated
>>>> from a slab cache, correct? Then how can an object be allocated without
>>>> a slab cache that holds objects of that size?
>>> I have to admit, I didn't understand the question.
>> Objects are allocated from slab caches, correct? There are two kinds
>> of slab cache: general-purpose ones, e.g. the kmalloc caches, and
>> special-purpose ones, e.g. for mm_struct or task_struct. A kmalloc
>> object > PAGE_SIZE/2 or > PAGE_SIZE should also be allocated from a
>> slab cache, correct? Then why do you say that there are no caches
>> between kmalloc-2048 and kmalloc-4096?
> Ok, now I get it.  Yes, I guess I should have qualified here that there are
> no _kmalloc_ caches between PAGE_SIZE/2 and PAGE_SIZE.

Then why do I have these caches?

dma-kmalloc-8192       0      0   8192    4    8 : tunables    0    0    0 : slabdata      0      0      0
dma-kmalloc-4096       0      0   4096    8    8 : tunables    0    0    0 : slabdata      0      0      0
dma-kmalloc-2048       0      0   2048   16    8 : tunables    0    0    0 : slabdata      0      0      0
dma-kmalloc-1024       0      0   1024   32    8 : tunables    0    0    0 : slabdata      0      0      0
dma-kmalloc-512       32     32    512   32    4 : tunables    0    0    0 : slabdata      1      1      0
dma-kmalloc-256        0      0    256   32    2 : tunables    0    0    0 : slabdata      0      0      0
dma-kmalloc-128        0      0    128   32    1 : tunables    0    0    0 : slabdata      0      0      0
dma-kmalloc-64         0      0     64   64    1 : tunables    0    0    0 : slabdata      0      0      0
dma-kmalloc-32         0      0     32  128    1 : tunables    0    0    0 : slabdata      0      0      0
dma-kmalloc-16         0      0     16  256    1 : tunables    0    0    0 : slabdata      0      0      0
dma-kmalloc-8          0      0      8  512    1 : tunables    0    0    0 : slabdata      0      0      0
dma-kmalloc-192        0      0    192   21    1 : tunables    0    0    0 : slabdata      0      0      0
dma-kmalloc-96         0      0     96   42    1 : tunables    0    0    0 : slabdata      0      0      0
kmalloc-8192         100    100   8192    4    8 : tunables    0    0    0 : slabdata     25     25      0
kmalloc-4096         178    216   4096    8    8 : tunables    0    0    0 : slabdata     27     27      0
kmalloc-2048         229    304   2048   16    8 : tunables    0    0    0 : slabdata     19     19      0
kmalloc-1024         832    832   1024   32    8 : tunables    0    0    0 : slabdata     26     26      0
kmalloc-512         2016   2016    512   32    4 : tunables    0    0    0 : slabdata     63     63      0
kmalloc-256         2203   2368    256   32    2 : tunables    0    0    0 : slabdata     74     74      0
kmalloc-128         2026   2464    128   32    1 : tunables    0    0    0 : slabdata     77     77      0
kmalloc-64         27584  27584     64   64    1 : tunables    0    0    0 : slabdata    431    431      0
kmalloc-32         19334  20864     32  128    1 : tunables    0    0    0 : slabdata    163    163      0
kmalloc-16          6912   6912     16  256    1 : tunables    0    0    0 : slabdata     27     27      0
kmalloc-8          17408  17408      8  512    1 : tunables    0    0    0 : slabdata     34     34      0
kmalloc-192         8006   8946    192   21    1 : tunables    0    0    0 : slabdata    426    426      0
kmalloc-96         19828  19992     96   42    1 : tunables    0    0    0 : slabdata    476    476      0
kmem_cache_node      384    384     32  128    1 : tunables    0    0    0 : slabdata      3      3      0
kmem_cache           160    160    128   32    1 : tunables    0    0    0 : slabdata      5      5      0


>
> Yes, one can create caches of a particular size.  However that doesn't
> work well for zswap because the compressed pages vary widely in size
> and, imo, it doesn't make sense to create a bunch of caches at very
> granular sizes.
>
> Plus having granular caches doesn't solve the fragmentation issue
> caused by the storage of large objects.
>
> Thanks,
> Seth
>



end of thread, other threads:[~2013-03-01  6:47 UTC | newest]

Thread overview: 38+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <1360780731-11708-1-git-send-email-sjenning@linux.vnet.ibm.com>
2013-02-16  3:20 ` [PATCHv5 0/8] zswap: compressed swap caching Ric Mason
2013-02-18 19:37   ` Seth Jennings
     [not found] ` <1360780731-11708-5-git-send-email-sjenning@linux.vnet.ibm.com>
2013-02-16  4:04   ` [PATCHv5 4/8] zswap: add to mm/ Ric Mason
2013-02-18 19:24     ` Seth Jennings
2013-02-18 19:49       ` Cody P Schafer
2013-02-18 20:07         ` Seth Jennings
2013-02-18 19:55       ` Dan Magenheimer
2013-02-18 20:39         ` Seth Jennings
2013-02-18 21:59           ` Dan Magenheimer
2013-02-18 22:52             ` Seth Jennings
2013-02-18 23:17               ` Dan Magenheimer
2013-02-20 20:37         ` Seth Jennings
     [not found] ` <1360780731-11708-3-git-send-email-sjenning@linux.vnet.ibm.com>
2013-02-16  6:21   ` [PATCHv5 2/8] zsmalloc: add documentation Ric Mason
2013-02-18 19:16     ` Seth Jennings
2013-02-21  8:49       ` Ric Mason
2013-02-21 15:50         ` Seth Jennings
2013-02-21 16:20           ` Dan Magenheimer
2013-02-22  2:56           ` Ric Mason
2013-02-22 21:02             ` Seth Jennings
2013-02-24  0:37               ` Ric Mason
2013-02-25 15:18                 ` Seth Jennings
2013-03-01  6:47                   ` Ric Mason
2013-02-22  2:59           ` Ric Mason
     [not found] ` <1360780731-11708-2-git-send-email-sjenning@linux.vnet.ibm.com>
2013-02-16  3:26   ` [PATCHv5 1/8] zsmalloc: add to mm/ Ric Mason
2013-02-18 19:04     ` Seth Jennings
2013-02-19  9:18   ` Joonsoo Kim
2013-02-19 17:54     ` Seth Jennings
2013-02-19 23:37       ` Minchan Kim
2013-02-22  9:24         ` Joonsoo Kim
2013-02-22 20:04           ` Seth Jennings
2013-02-25 17:05             ` Dan Magenheimer
2013-02-25 19:14               ` Seth Jennings
2013-02-26  0:20                 ` Dan Magenheimer
2013-02-20  2:42       ` Nitin Gupta
     [not found] ` <1360780731-11708-8-git-send-email-sjenning@linux.vnet.ibm.com>
2013-02-16  6:11   ` [PATCHv5 7/8] zswap: add swap page writeback support Ric Mason
2013-02-18 19:32     ` Seth Jennings
2013-02-25  2:54   ` Minchan Kim
2013-02-25 17:37     ` Seth Jennings
