* [RFC PATCH v13 0/6] mm: security: ro protection for dynamic data
@ 2018-02-03 19:42 Igor Stoppa

From: Igor Stoppa @ 2018-02-03 19:42 UTC
To: jglisse, keescook, mhocko, labbott, hch, willy
Cc: cl, linux-security-module, linux-mm, linux-kernel, kernel-hardening,
    Igor Stoppa

This patch set introduces the possibility of protecting memory that has
been allocated dynamically.

The memory is managed in pools: when a pool is turned R/O, all the
memory that is part of it becomes R/O.

An R/O pool can be destroyed, to recover its memory, but it cannot be
turned back to R/W mode. This is intentional.

This feature is meant for data that doesn't need further modification
after initialization. However, the data might still need to be
released, for example as part of module unloading. To do this, the
memory must first be freed; then the pool can be destroyed.

An example is provided in the form of a self-test.

Changes since v12 [https://lkml.org/lkml/2018/1/30/397]:

- fixed Kconfig dependency for pmalloc-test
- fixed warning for size_t treated as %ul on i386
- moved to SPDX license reference
- rewrote pmalloc docs

Igor Stoppa (6):
  genalloc: track beginning of allocations
  genalloc: selftest
  struct page: add field for vm_struct
  Protectable Memory
  Pmalloc: self-test
  Documentation for Pmalloc

 Documentation/core-api/pmalloc.rst |  114 ++++++++
 include/linux/genalloc-selftest.h  |   30 +++
 include/linux/genalloc.h           |    7 +-
 include/linux/mm_types.h           |    1 +
 include/linux/pmalloc.h            |  211 +++++++++++++++
 include/linux/vmalloc.h            |    1 +
 init/main.c                        |    2 +
 lib/Kconfig                        |   15 ++
 lib/Makefile                       |    1 +
 lib/genalloc-selftest.c            |  402 +++++++++++++++++++++++++++++
 lib/genalloc.c                     |  444 ++++++++++++++++++++++----------
 mm/Kconfig                         |    9 +
 mm/Makefile                        |    2 +
 mm/pmalloc-selftest.c              |   61 +++++
 mm/pmalloc-selftest.h              |   26 ++
 mm/pmalloc.c                       |  514 +++++++++++++++++++++++++++++++++++++
 mm/usercopy.c                      |   25 +-
 mm/vmalloc.c                       |   18 +-
 18 files changed, 1742 insertions(+), 141 deletions(-)
 create mode 100644 Documentation/core-api/pmalloc.rst
 create mode 100644 include/linux/genalloc-selftest.h
 create mode 100644 include/linux/pmalloc.h
 create mode 100644 lib/genalloc-selftest.c
 create mode 100644 mm/pmalloc-selftest.c
 create mode 100644 mm/pmalloc-selftest.h
 create mode 100644 mm/pmalloc.c

--
2.9.3
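For orientation, here is a minimal sketch of the intended pool
lifecycle described above. The function names follow
include/linux/pmalloc.h from this series, but the exact signatures are
an assumption paraphrased from the patches, not a verbatim copy:

#include <linux/module.h>
#include <linux/pmalloc.h>

/*
 * Hypothetical usage sketch of the pmalloc API from this series.
 * Function names follow include/linux/pmalloc.h; the exact
 * signatures are assumed and may differ from the posted patches.
 */
static struct gen_pool *conf_pool;
static int *conf_data;

static int __init conf_init(void)
{
	/* Create a pool; allocations are still writable at this stage. */
	conf_pool = pmalloc_create_pool("conf", 0);
	if (!conf_pool)
		return -ENOMEM;

	conf_data = pmalloc(conf_pool, sizeof(*conf_data), GFP_KERNEL);
	if (!conf_data)
		return -ENOMEM;
	*conf_data = 42;	/* Initialize while still R/W. */

	/* Seal the pool: every allocation in it becomes R/O. */
	pmalloc_protect_pool(conf_pool);
	return 0;
}

static void __exit conf_exit(void)
{
	/*
	 * A protected pool cannot become R/W again: to release the
	 * memory, free the allocations, then destroy the whole pool.
	 */
	pfree(conf_pool, conf_data);
	pmalloc_destroy_pool(conf_pool);
}

module_init(conf_init);
module_exit(conf_exit);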
* [PATCH 1/6] genalloc: track beginning of allocations
@ 2018-02-03 19:42 Igor Stoppa

From: Igor Stoppa @ 2018-02-03 19:42 UTC
To: jglisse, keescook, mhocko, labbott, hch, willy
Cc: cl, linux-security-module, linux-mm, linux-kernel, kernel-hardening,
    Igor Stoppa

The genalloc library is only capable of tracking whether a certain
unit of allocation is in use or not. It is not capable of discerning
where the memory associated with an allocation request begins and
where it ends.

The reason is that allocation units are tracked by using a bitmap,
where each bit represents whether the unit is allocated (1) or
available (0).

The user of the API must keep track of how much space was requested,
if it ever needs to be freed. This can leave errors undetected, e.g.:

* Only a subset of the memory provided to an allocation request is
  freed.
* The memory from a subsequent allocation is freed.
* The memory being freed doesn't start at the beginning of an
  allocation.

The bitmap is used because it allows lockless read/write access, where
this is supported by the hardware through cmpxchg. Similarly, it is
possible to scan the bitmap for a sufficiently long sequence of zeros,
to identify zones available for allocation.

This patch doubles the space reserved in the bitmap for each
allocation. By using 2 bits per allocation unit, it is possible to also
encode where the allocation starts (MSb to the left, LSb to the right,
in the following "dictionary"):

11: first allocation unit in the allocation
10: any subsequent allocation unit (if any) in the allocation
00: available allocation unit
01: invalid

Ex, with the same notation as above - MSb.......LSb:

 ...000010111100000010101011 <-- Read in this direction.
    \__|\__|\|\____|\______|
      |   | |     |       \___ 4 used allocation units
      |   | |     \___________ 3 empty allocation units
      |   | \_________________ 1 used allocation unit
      |   \___________________ 2 used allocation units
      \_______________________ 2 empty allocation units

Because of the encoding, the previous lockless operations are still
possible. The only caveat is that the parameter of the zero-finding
function which establishes the alignment at which to test for the
first zero must change. The original value of the parameter is 0,
meaning that an allocation can start at any point in the bitmap, while
the new value is 1, meaning that allocations can start only at even
bit positions (bit 0, bit 2, etc.). The number of zeroes to look for
must therefore be doubled as well.

When it's time to free the memory associated with an allocation
request, it's a matter of checking whether the corresponding
allocation unit really is the beginning of an allocation (both bits
set to 1). Looking for the end can also be performed locklessly: it's
sufficient to identify the first subsequent allocation unit that is
represented either as free (00) or as the head of another allocation
(11). Even if the allocation status changes in the meanwhile, it
doesn't matter, since an entry can only transition between free (00)
and first-allocated (11).

The parameter indicating to the *_free() functions the size of the
space to be freed is not removed yet, to facilitate the transition,
but it is verified whenever it is not zero. If it is set to zero, the
free function autonomously determines the size to free by scanning
the bitmap.
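To make the dictionary concrete, here is a small stand-alone C program
(illustrative only, not part of the patch) that decodes the example
pattern above; the helper name get_entry() is made up for the
illustration:

#include <stdio.h>

/* 2 bits per allocation unit, as in the dictionary above. */
#define ENTRY_HEAD   0x3UL	/* 11: first unit of an allocation */
#define ENTRY_BODY   0x2UL	/* 10: subsequent unit */
#define ENTRY_UNUSED 0x0UL	/* 00: available unit */

static unsigned long get_entry(unsigned long map, unsigned int i)
{
	/* Entry i occupies bits 2*i and 2*i + 1, LSb first. */
	return (map >> (2 * i)) & 0x3UL;
}

int main(void)
{
	/* The pattern from the example: 000010111100000010101011 */
	unsigned long map = 0x0BC0ABUL;
	unsigned int i;

	for (i = 0; i < 12; i++) {	/* 24 bits = 12 entries */
		switch (get_entry(map, i)) {
		case ENTRY_HEAD:
			printf("entry %2u: allocation starts here\n", i);
			break;
		case ENTRY_BODY:
			printf("entry %2u: continuation\n", i);
			break;
		case ENTRY_UNUSED:
			printf("entry %2u: free\n", i);
			break;
		default:
			printf("entry %2u: invalid (01)\n", i);
		}
	}
	return 0;
}

Running it reproduces the diagram: a 4-unit allocation at entries 0-3,
3 free entries, a 1-unit allocation, a 2-unit allocation, 2 free
entries.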
About the implementation: the patch introduces the concept of a
"bitmap entry", which has a 1:1 mapping with allocation units, while
the code being patched has a 1:1 mapping between allocation units and
bits.

This means that the bitmap can now be extended (by following powers of
2), to track other properties of the allocations as well, if ever
needed.

Signed-off-by: Igor Stoppa <igor.stoppa@huawei.com>
---
 include/linux/genalloc.h |   4 +-
 lib/genalloc.c           | 417 ++++++++++++++++++++++++++++++----------------
 2 files changed, 289 insertions(+), 132 deletions(-)

diff --git a/include/linux/genalloc.h b/include/linux/genalloc.h
index 872f930..dcaa33e 100644
--- a/include/linux/genalloc.h
+++ b/include/linux/genalloc.h
@@ -32,7 +32,7 @@
 
 #include <linux/types.h>
 #include <linux/spinlock_types.h>
-#include <linux/atomic.h>
+#include <linux/slab.h>
 
 struct device;
 struct device_node;
@@ -76,7 +76,7 @@ struct gen_pool_chunk {
 	phys_addr_t phys_addr;		/* physical starting address of memory chunk */
 	unsigned long start_addr;	/* start address of memory chunk */
 	unsigned long end_addr;		/* end address of memory chunk (inclusive) */
-	unsigned long bits[0];		/* bitmap for allocating memory chunk */
+	unsigned long entries[0];	/* bitmap for allocating memory chunk */
 };
 
 /*
diff --git a/lib/genalloc.c b/lib/genalloc.c
index ca06adc..dde7830 100644
--- a/lib/genalloc.c
+++ b/lib/genalloc.c
@@ -36,114 +36,221 @@
 #include <linux/genalloc.h>
 #include <linux/of_device.h>
 
+#define ENTRY_ORDER 1UL
+#define ENTRY_MASK ((1UL << ((ENTRY_ORDER) + 1UL)) - 1UL)
+#define ENTRY_HEAD ENTRY_MASK
+#define ENTRY_UNUSED 0UL
+#define BITS_PER_ENTRY (1U << ENTRY_ORDER)
+#define BITS_DIV_ENTRIES(x) ((x) >> ENTRY_ORDER)
+#define ENTRIES_TO_BITS(x) ((x) << ENTRY_ORDER)
+#define BITS_DIV_LONGS(x) ((x) / BITS_PER_LONG)
+#define ENTRIES_DIV_LONGS(x) (BITS_DIV_LONGS(ENTRIES_TO_BITS(x)))
+
+#define ENTRIES_PER_LONG BITS_DIV_ENTRIES(BITS_PER_LONG)
+
+/* Binary pattern of 1010...1010 that spans one unsigned long. */
+#define MASK (~0UL / 3 * 2)
+
+/**
+ * get_bitmap_entry - extracts the specified entry from the bitmap
+ * @map: pointer to a bitmap
+ * @entry_index: the index of the desired entry in the bitmap
+ *
+ * Returns the requested bitmap entry.
+ */
+static inline unsigned long get_bitmap_entry(unsigned long *map,
+					     int entry_index)
+{
+	return (map[ENTRIES_DIV_LONGS(entry_index)] >>
+		ENTRIES_TO_BITS(entry_index % ENTRIES_PER_LONG)) &
+		ENTRY_MASK;
+}
+
+
+/**
+ * mem_to_units - convert references to memory into orders of allocation
+ * @size: amount in bytes
+ * @order: power of 2 represented by each entry in the bitmap
+ *
+ * Returns the number of units representing the size.
+ */
+static inline unsigned long mem_to_units(unsigned long size,
+					 unsigned long order)
+{
+	return (size + (1UL << order) - 1) >> order;
+}
+
+/**
+ * chunk_size - dimension of a chunk of memory
+ * @chunk: pointer to the struct describing the chunk
+ *
+ * Returns the size of the chunk.
+ */
 static inline size_t chunk_size(const struct gen_pool_chunk *chunk)
 {
 	return chunk->end_addr - chunk->start_addr + 1;
 }
 
-static int set_bits_ll(unsigned long *addr, unsigned long mask_to_set)
+
+/**
+ * set_bits_ll - according to the mask, sets the bits specified by
+ * value, at the address specified.
+ * @addr: where to write
+ * @mask: filter to apply for the bits to alter
+ * @value: actual configuration of bits to store
+ *
+ * Returns 0 upon success, -EBUSY otherwise
+ */
+static int set_bits_ll(unsigned long *addr,
+		       unsigned long mask, unsigned long value)
 {
-	unsigned long val, nval;
+	unsigned long nval;
+	unsigned long present;
+	unsigned long target;
 
 	nval = *addr;
 	do {
-		val = nval;
-		if (val & mask_to_set)
+		present = nval;
+		if (present & mask)
 			return -EBUSY;
+		target = present | value;
 		cpu_relax();
-	} while ((nval = cmpxchg(addr, val, val | mask_to_set)) != val);
-
+	} while ((nval = cmpxchg(addr, present, target)) != present);
 	return 0;
 }
 
-static int clear_bits_ll(unsigned long *addr, unsigned long mask_to_clear)
+
+/**
+ * clear_bits_ll - according to the mask, clears the bits specified by
+ * value, at the address specified.
+ * @addr: where to write
+ * @mask: filter to apply for the bits to alter
+ * @value: actual configuration of bits to clear
+ *
+ * Returns 0 upon success, -EBUSY otherwise
+ */
+static int clear_bits_ll(unsigned long *addr,
+			 unsigned long mask, unsigned long value)
 {
-	unsigned long val, nval;
+	unsigned long nval;
+	unsigned long present;
+	unsigned long target;
 
 	nval = *addr;
+	present = nval;
+	if (unlikely((present & mask) ^ value))
+		return -EBUSY;
 	do {
-		val = nval;
-		if ((val & mask_to_clear) != mask_to_clear)
+		present = nval;
+		if (unlikely((present & mask) ^ value))
 			return -EBUSY;
+		target = present & ~mask;
 		cpu_relax();
-	} while ((nval = cmpxchg(addr, val, val & ~mask_to_clear)) != val);
-
+	} while ((nval = cmpxchg(addr, present, target)) != present);
 	return 0;
 }
 
-/*
- * bitmap_set_ll - set the specified number of bits at the specified position
+
+/**
+ * get_boundary - verify that an allocation effectively
+ * starts at the given address, then measure its length.
  * @map: pointer to a bitmap
- * @start: a bit position in @map
- * @nr: number of bits to set
+ * @start_entry: the index of the first entry in the bitmap
+ * @nentries: number of entries to scan
  *
- * Set @nr bits start from @start in @map lock-lessly. Several users
- * can set/clear the same bitmap simultaneously without lock. If two
- * users set the same bit, one user will return remain bits, otherwise
- * return 0.
+ * Returns the length of the allocation, or -EINVAL if the
+ * parameters do not refer to a valid allocation.
  */
-static int bitmap_set_ll(unsigned long *map, int start, int nr)
+static int get_boundary(unsigned long *map, int start_entry, int nentries)
 {
-	unsigned long *p = map + BIT_WORD(start);
-	const int size = start + nr;
-	int bits_to_set = BITS_PER_LONG - (start % BITS_PER_LONG);
-	unsigned long mask_to_set = BITMAP_FIRST_WORD_MASK(start);
-
-	while (nr - bits_to_set >= 0) {
-		if (set_bits_ll(p, mask_to_set))
-			return nr;
-		nr -= bits_to_set;
-		bits_to_set = BITS_PER_LONG;
-		mask_to_set = ~0UL;
-		p++;
-	}
-	if (nr) {
-		mask_to_set &= BITMAP_LAST_WORD_MASK(size);
-		if (set_bits_ll(p, mask_to_set))
-			return nr;
-	}
+	int i;
+	unsigned long bitmap_entry;
 
-	return 0;
+
+	if (unlikely(get_bitmap_entry(map, start_entry) != ENTRY_HEAD))
+		return -EINVAL;
+	for (i = start_entry + 1; i < nentries; i++) {
+		bitmap_entry = get_bitmap_entry(map, i);
+		if (bitmap_entry == ENTRY_HEAD ||
+		    bitmap_entry == ENTRY_UNUSED)
+			return i;
+	}
+	return nentries - start_entry;
 }
 
+
+#define SET_BITS 1
+#define CLEAR_BITS 0
+
 /*
- * bitmap_clear_ll - clear the specified number of bits at the specified position
+ * alter_bitmap_ll - set or clear the entries associated to an allocation
+ * @alteration: selects whether the bits should be set or cleared
  * @map: pointer to a bitmap
- * @start: a bit position in @map
- * @nr: number of bits to set
+ * @start: the index of the first entry in the bitmap
+ * @nentries: number of entries to alter
  *
- * Clear @nr bits start from @start in @map lock-lessly. Several users
- * can set/clear the same bitmap simultaneously without lock. If two
- * users clear the same bit, one user will return remain bits,
- * otherwise return 0.
+ * The modification happens lock-lessly.
+ * Several users can write to the same map simultaneously, without lock.
+ * If two users alter the same bits, one of them will return the number
+ * of entries still to be altered, otherwise 0 is returned.
  */
-static int bitmap_clear_ll(unsigned long *map, int start, int nr)
+static int alter_bitmap_ll(bool alteration, unsigned long *map,
+			   int start_entry, int nentries)
 {
-	unsigned long *p = map + BIT_WORD(start);
-	const int size = start + nr;
-	int bits_to_clear = BITS_PER_LONG - (start % BITS_PER_LONG);
-	unsigned long mask_to_clear = BITMAP_FIRST_WORD_MASK(start);
-
-	while (nr - bits_to_clear >= 0) {
-		if (clear_bits_ll(p, mask_to_clear))
-			return nr;
-		nr -= bits_to_clear;
-		bits_to_clear = BITS_PER_LONG;
-		mask_to_clear = ~0UL;
-		p++;
-	}
-	if (nr) {
-		mask_to_clear &= BITMAP_LAST_WORD_MASK(size);
-		if (clear_bits_ll(p, mask_to_clear))
-			return nr;
+	unsigned long start_bit;
+	unsigned long end_bit;
+	unsigned long mask;
+	unsigned long value;
+	int nbits;
+	int bits_to_write;
+	int index;
+	int (*action)(unsigned long *addr,
+		      unsigned long mask, unsigned long value);
+
+	action = (alteration == SET_BITS) ? set_bits_ll : clear_bits_ll;
+
+	/* Prepare for writing the initial part of the allocation, from
+	 * the starting entry to the end of the UL bitmap element which
+	 * contains it. It might be larger than the actual allocation.
+	 */
+	start_bit = ENTRIES_TO_BITS(start_entry);
+	end_bit = ENTRIES_TO_BITS(start_entry + nentries);
+	nbits = ENTRIES_TO_BITS(nentries);
+	bits_to_write = BITS_PER_LONG - start_bit % BITS_PER_LONG;
+	mask = BITMAP_FIRST_WORD_MASK(start_bit);
+	/* Mark the beginning of the allocation. */
+	value = MASK | (1UL << (start_bit % BITS_PER_LONG));
+	index = BITS_DIV_LONGS(start_bit);
+
+	/* Writes entries to the bitmap, as long as the remainder is
+	 * positive or zero.
+	 * Might be skipped if the entries to write do not reach the end
+	 * of a bitmap UL unit.
+	 */
+	while (nbits >= bits_to_write) {
+		if (action(map + index, mask, value & mask))
+			return BITS_DIV_ENTRIES(nbits);
+		nbits -= bits_to_write;
+		bits_to_write = BITS_PER_LONG;
+		mask = ~0UL;
+		value = MASK;
+		index++;
 	}
 
+	/* Takes care of the ending part of the entries to mark. */
+	if (nbits > 0) {
+		mask ^= BITMAP_FIRST_WORD_MASK((end_bit) % BITS_PER_LONG);
+		bits_to_write = nbits;
+		if (action(map + index, mask, value & mask))
+			return BITS_DIV_ENTRIES(nbits);
+	}
 	return 0;
 }
 
+
 /**
  * gen_pool_create - create a new special memory pool
- * @min_alloc_order: log base 2 of number of bytes each bitmap bit represents
+ * @min_alloc_order: log base 2 of number of bytes each bitmap entry represents
  * @nid: node id of the node the pool structure should be allocated on, or -1
  *
  * Create a new special memory pool that can be used to manage special purpose
@@ -183,10 +290,12 @@ int gen_pool_add_virt(struct gen_pool *pool, unsigned long virt, phys_addr_t phy
 		 size_t size, int nid)
 {
 	struct gen_pool_chunk *chunk;
-	int nbits = size >> pool->min_alloc_order;
-	int nbytes = sizeof(struct gen_pool_chunk) +
-				BITS_TO_LONGS(nbits) * sizeof(long);
+	int nentries;
+	int nbytes;
 
+	nentries = size >> pool->min_alloc_order;
+	nbytes = sizeof(struct gen_pool_chunk) +
+		 ENTRIES_DIV_LONGS(nentries) * sizeof(long);
 	chunk = kzalloc_node(nbytes, GFP_KERNEL, nid);
 	if (unlikely(chunk == NULL))
 		return -ENOMEM;
@@ -248,7 +357,7 @@ void gen_pool_destroy(struct gen_pool *pool)
 		list_del(&chunk->next_chunk);
 
 		end_bit = chunk_size(chunk) >> order;
-		bit = find_next_bit(chunk->bits, end_bit, 0);
+		bit = find_next_bit(chunk->entries, end_bit, 0);
 		BUG_ON(bit < end_bit);
 
 		kfree(chunk);
@@ -292,7 +401,7 @@ unsigned long gen_pool_alloc_algo(struct gen_pool *pool, size_t size,
 	struct gen_pool_chunk *chunk;
 	unsigned long addr = 0;
 	int order = pool->min_alloc_order;
-	int nbits, start_bit, end_bit, remain;
+	int nentries, start_entry, end_entry, remain;
 
 #ifndef CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG
 	BUG_ON(in_nmi());
@@ -301,29 +410,32 @@ unsigned long gen_pool_alloc_algo(struct gen_pool *pool, size_t size,
 	if (size == 0)
 		return 0;
 
-	nbits = (size + (1UL << order) - 1) >> order;
+	nentries = mem_to_units(size, order);
 	rcu_read_lock();
 	list_for_each_entry_rcu(chunk, &pool->chunks, next_chunk) {
 		if (size > atomic_long_read(&chunk->avail))
 			continue;
 
-		start_bit = 0;
-		end_bit = chunk_size(chunk) >> order;
+		start_entry = 0;
+		end_entry = chunk_size(chunk) >> order;
 retry:
-		start_bit = algo(chunk->bits, end_bit, start_bit,
-				 nbits, data, pool);
-		if (start_bit >= end_bit)
+		start_entry = algo(chunk->entries, end_entry, start_entry,
+				   nentries, data, pool);
+		if (start_entry >= end_entry)
 			continue;
-		remain = bitmap_set_ll(chunk->bits, start_bit, nbits);
+		remain = alter_bitmap_ll(SET_BITS, chunk->entries,
					 start_entry, nentries);
 		if (remain) {
-			remain = bitmap_clear_ll(chunk->bits, start_bit,
-						 nbits - remain);
-			BUG_ON(remain);
+			remain = alter_bitmap_ll(CLEAR_BITS,
						 chunk->entries,
						 start_entry,
						 nentries - remain);
 			goto retry;
 		}
 
-		addr = chunk->start_addr + ((unsigned long)start_bit << order);
-		size = nbits << order;
+		addr = chunk->start_addr +
+			((unsigned long)start_entry << order);
+		size = nentries << order;
 		atomic_long_sub(size, &chunk->avail);
 		break;
 	}
@@ -365,7 +477,7 @@ EXPORT_SYMBOL(gen_pool_dma_alloc);
  * gen_pool_free - free allocated special memory back to the pool
  * @pool: pool to free to
  * @addr: starting address of memory to free back to pool
- * @size: size in bytes of memory to free
+ * @size: size in bytes of memory to free or 0, for auto-detection
  *
  * Free previously allocated special memory back to the specified
 * pool. Can not be used in NMI handler on architectures without
@@ -375,22 +487,29 @@ void gen_pool_free(struct gen_pool *pool, unsigned long addr, size_t size)
 {
 	struct gen_pool_chunk *chunk;
 	int order = pool->min_alloc_order;
-	int start_bit, nbits, remain;
+	int start_entry, remaining_entries, nentries, remain;
+	int boundary;
 
 #ifndef CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG
 	BUG_ON(in_nmi());
 #endif
 
-	nbits = (size + (1UL << order) - 1) >> order;
 	rcu_read_lock();
 	list_for_each_entry_rcu(chunk, &pool->chunks, next_chunk) {
 		if (addr >= chunk->start_addr && addr <= chunk->end_addr) {
 			BUG_ON(addr + size - 1 > chunk->end_addr);
-			start_bit = (addr - chunk->start_addr) >> order;
-			remain = bitmap_clear_ll(chunk->bits, start_bit, nbits);
+			start_entry = (addr - chunk->start_addr) >> order;
+			remaining_entries = (chunk->end_addr - addr) >> order;
+			boundary = get_boundary(chunk->entries, start_entry,
						remaining_entries);
+			BUG_ON(boundary < 0);
+			nentries = boundary - start_entry;
+			BUG_ON(size &&
			       (nentries != mem_to_units(size, order)));
+			remain = alter_bitmap_ll(CLEAR_BITS, chunk->entries,
						 start_entry, nentries);
 			BUG_ON(remain);
-			size = nbits << order;
-			atomic_long_add(size, &chunk->avail);
+			atomic_long_add(nentries << order, &chunk->avail);
 			rcu_read_unlock();
 			return;
 		}
@@ -517,9 +636,9 @@ EXPORT_SYMBOL(gen_pool_set_algo);
  * gen_pool_first_fit - find the first available region
  * of memory matching the size requirement (no alignment constraint)
  * @map: The address to base the search on
- * @size: The bitmap size in bits
- * @start: The bitnumber to start searching at
- * @nr: The number of zeroed bits we're looking for
+ * @size: The number of allocation units in the bitmap
+ * @start: The allocation unit to start searching at
+ * @nr: The number of allocation units we're looking for
  * @data: additional data - unused
  * @pool: pool to find the fit region memory from
 */
@@ -527,7 +646,15 @@ unsigned long gen_pool_first_fit(unsigned long *map, unsigned long size,
 		unsigned long start, unsigned int nr, void *data,
 		struct gen_pool *pool)
 {
-	return bitmap_find_next_zero_area(map, size, start, nr, 0);
+	unsigned long align_mask;
+	unsigned long bit_index;
+
+	align_mask = roundup_pow_of_two(BITS_PER_ENTRY) - 1;
+	bit_index = bitmap_find_next_zero_area(map, ENTRIES_TO_BITS(size),
					       ENTRIES_TO_BITS(start),
					       ENTRIES_TO_BITS(nr),
					       align_mask);
+	return BITS_DIV_ENTRIES(bit_index);
 }
 EXPORT_SYMBOL(gen_pool_first_fit);
 
@@ -535,9 +662,9 @@ EXPORT_SYMBOL(gen_pool_first_fit);
  * gen_pool_first_fit_align - find the first available region
  * of memory matching the size requirement (alignment constraint)
  * @map: The address to base the search on
- * @size: The bitmap size in bits
- * @start: The bitnumber to start searching at
- * @nr: The number of zeroed bits we're looking for
+ * @size: The number of allocation units in the bitmap
+ * @start: The allocation unit to start searching at
+ * @nr: The number of allocation units we're looking for
  * @data: data for alignment
 * @pool: pool to get order from
 */
@@ -547,21 +674,28 @@ unsigned long gen_pool_first_fit_align(unsigned long *map, unsigned long size,
 {
 	struct genpool_data_align *alignment;
 	unsigned long align_mask;
+	unsigned long bit_index;
 	int order;
 
 	alignment = data;
 	order = pool->min_alloc_order;
-	align_mask = ((alignment->align + (1UL << order) - 1) >> order) - 1;
-	return bitmap_find_next_zero_area(map, size, start, nr, align_mask);
+	align_mask = roundup_pow_of_two(
+			ENTRIES_TO_BITS(mem_to_units(alignment->align,
						     order))) - 1;
+	bit_index = bitmap_find_next_zero_area(map, ENTRIES_TO_BITS(size),
					       ENTRIES_TO_BITS(start),
					       ENTRIES_TO_BITS(nr),
					       align_mask);
+	return BITS_DIV_ENTRIES(bit_index);
 }
 EXPORT_SYMBOL(gen_pool_first_fit_align);
 
 /**
  * gen_pool_fixed_alloc - reserve a specific region
  * @map: The address to base the search on
- * @size: The bitmap size in bits
- * @start: The bitnumber to start searching at
- * @nr: The number of zeroed bits we're looking for
+ * @size: The number of allocation units in the bitmap
+ * @start: The allocation unit to start searching at
+ * @nr: The number of allocation units we're looking for
  * @data: data for alignment
 * @pool: pool to get order from
 */
@@ -571,20 +705,23 @@ unsigned long gen_pool_fixed_alloc(unsigned long *map, unsigned long size,
 {
 	struct genpool_data_fixed *fixed_data;
 	int order;
-	unsigned long offset_bit;
-	unsigned long start_bit;
+	unsigned long offset;
+	unsigned long align_mask;
+	unsigned long bit_index;
 
 	fixed_data = data;
 	order = pool->min_alloc_order;
-	offset_bit = fixed_data->offset >> order;
 	if (WARN_ON(fixed_data->offset & ((1UL << order) - 1)))
 		return size;
+	offset = fixed_data->offset >> order;
+	align_mask = roundup_pow_of_two(BITS_PER_ENTRY) - 1;
+	bit_index = bitmap_find_next_zero_area(map, ENTRIES_TO_BITS(size),
					       ENTRIES_TO_BITS(start + offset),
					       ENTRIES_TO_BITS(nr), align_mask);
+	if (bit_index != ENTRIES_TO_BITS(offset))
+		return size;
 
-	start_bit = bitmap_find_next_zero_area(map, size,
					       start + offset_bit, nr, 0);
-	if (start_bit != offset_bit)
-		start_bit = size;
-	return start_bit;
+	return BITS_DIV_ENTRIES(bit_index);
 }
 EXPORT_SYMBOL(gen_pool_fixed_alloc);
 
@@ -593,9 +730,9 @@ EXPORT_SYMBOL(gen_pool_fixed_alloc);
  * of memory matching the size requirement. The region will be aligned
 * to the order of the size specified.
  * @map: The address to base the search on
- * @size: The bitmap size in bits
- * @start: The bitnumber to start searching at
- * @nr: The number of zeroed bits we're looking for
+ * @size: The number of allocation units in the bitmap
+ * @start: The allocation unit to start searching at
+ * @nr: The number of allocation units we're looking for
  * @data: additional data - unused
 * @pool: pool to find the fit region memory from
 */
@@ -603,9 +740,15 @@ unsigned long gen_pool_first_fit_order_align(unsigned long *map,
 		unsigned long size, unsigned long start,
 		unsigned int nr, void *data, struct gen_pool *pool)
 {
-	unsigned long align_mask = roundup_pow_of_two(nr) - 1;
-
-	return bitmap_find_next_zero_area(map, size, start, nr, align_mask);
+	unsigned long align_mask;
+	unsigned long bit_index;
+
+	align_mask = roundup_pow_of_two(ENTRIES_TO_BITS(nr)) - 1;
+	bit_index = bitmap_find_next_zero_area(map, ENTRIES_TO_BITS(size),
					       ENTRIES_TO_BITS(start),
					       ENTRIES_TO_BITS(nr),
					       align_mask);
+	return BITS_DIV_ENTRIES(bit_index);
 }
 EXPORT_SYMBOL(gen_pool_first_fit_order_align);
 
@@ -613,9 +756,9 @@ EXPORT_SYMBOL(gen_pool_first_fit_order_align);
  * gen_pool_best_fit - find the best fitting region of memory
  * macthing the size requirement (no alignment constraint)
  * @map: The address to base the search on
- * @size: The bitmap size in bits
- * @start: The bitnumber to start searching at
- * @nr: The number of zeroed bits we're looking for
+ * @size: The number of allocation units in the bitmap
+ * @start: The allocation unit to start searching at
+ * @nr: The number of allocation units we're looking for
  * @data: additional data - unused
 * @pool: pool to find the fit region memory from
 *
@@ -626,27 +769,41 @@ unsigned long gen_pool_best_fit(unsigned long *map,
 		unsigned long size, unsigned long start,
 		unsigned int nr, void *data, struct gen_pool *pool)
 {
-	unsigned long start_bit = size;
+	unsigned long start_bit = ENTRIES_TO_BITS(size);
 	unsigned long len = size + 1;
 	unsigned long index;
+	unsigned long align_mask;
+	unsigned long bit_index;
 
-	index = bitmap_find_next_zero_area(map, size, start, nr, 0);
+	align_mask = roundup_pow_of_two(BITS_PER_ENTRY) - 1;
+	bit_index = bitmap_find_next_zero_area(map, ENTRIES_TO_BITS(size),
					       ENTRIES_TO_BITS(start),
					       ENTRIES_TO_BITS(nr),
					       align_mask);
+	index = BITS_DIV_ENTRIES(bit_index);
 	while (index < size) {
-		int next_bit = find_next_bit(map, size, index + nr);
-		if ((next_bit - index) < len) {
-			len = next_bit - index;
-			start_bit = index;
+		int next_bit;
+
+		next_bit = find_next_bit(map, ENTRIES_TO_BITS(size),
					 ENTRIES_TO_BITS(index + nr));
+		if ((BITS_DIV_ENTRIES(next_bit) - index) < len) {
+			len = BITS_DIV_ENTRIES(next_bit) - index;
+			start_bit = ENTRIES_TO_BITS(index);
 			if (len == nr)
-				return start_bit;
+				return BITS_DIV_ENTRIES(start_bit);
 		}
-		index = bitmap_find_next_zero_area(map, size,
						   next_bit + 1, nr, 0);
+		bit_index =
+			bitmap_find_next_zero_area(map,
						   ENTRIES_TO_BITS(size),
						   next_bit + 1,
						   ENTRIES_TO_BITS(nr),
						   align_mask);
+		index = BITS_DIV_ENTRIES(bit_index);
 	}
 
-	return start_bit;
+	return BITS_DIV_ENTRIES(start_bit);
 }
 EXPORT_SYMBOL(gen_pool_best_fit);
 
 static void devm_gen_pool_release(struct device *dev, void *res)
 {
--
2.9.3
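For reference, here is a sketch of how the new auto-detection path in
gen_pool_free() could be exercised. It is illustrative only, not part
of the patch; the pool parameters are arbitrary:

#include <linux/genalloc.h>
#include <linux/slab.h>

static int pool_free_demo(void)
{
	struct gen_pool *pool;
	unsigned long vaddr, addr;

	pool = gen_pool_create(5, -1);	/* 32-byte allocation units */
	if (!pool)
		return -ENOMEM;

	vaddr = (unsigned long)kmalloc(4096, GFP_KERNEL);
	if (!vaddr || gen_pool_add(pool, vaddr, 4096, -1))
		goto out;

	addr = gen_pool_alloc(pool, 100);	/* rounded up to 4 units */

	/*
	 * With this patch, passing size == 0 asks gen_pool_free() to
	 * detect the allocation size itself, by scanning the bitmap
	 * from the ENTRY_HEAD mark to the next head or free entry.
	 */
	if (addr)
		gen_pool_free(pool, addr, 0);
out:
	gen_pool_destroy(pool);
	kfree((void *)vaddr);
	return 0;
}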
* [PATCH 1/6] genalloc: track beginning of allocations @ 2018-02-03 19:42 ` Igor Stoppa
0 siblings, 0 replies; 84+ messages in thread
From: Igor Stoppa @ 2018-02-03 19:42 UTC (permalink / raw)
To: jglisse, keescook, mhocko, labbott, hch, willy
Cc: cl, linux-security-module, linux-mm, linux-kernel, kernel-hardening, Igor Stoppa

The genalloc library is only capable of tracking whether a certain unit of allocation is in use or not. It is not capable of discerning where the memory associated with an allocation request begins and where it ends. The reason is that units of allocation are tracked with a bitmap, where each bit represents whether the unit is allocated (1) or available (0).

The user of the API must keep track of how much space was requested, if it ever needs to be freed. This can cause errors to go undetected. Ex:
* Only a subset of the memory provided to an allocation request is freed
* The memory from a subsequent allocation is freed
* The memory being freed doesn't start at the beginning of an allocation.

The bitmap is used because it allows lockless read/write access, where the hardware supports it through cmpxchg. Similarly, it is possible to scan the bitmap for a sufficiently long sequence of zeros, to identify zones available for allocation.

--

This patch doubles the space reserved in the bitmap for each allocation. By using 2 bits per allocation unit, it is possible to encode also the information of where the allocation starts:

(msb to the left, lsb to the right, in the following "dictionary")

11: first allocation unit in the allocation
10: any subsequent allocation unit (if any) in the allocation
00: available allocation unit
01: invalid

Ex, with the same notation as above - MSb.......LSb:

 ...000010111100000010101011 <-- Read in this direction.
    \__|\__|\|\____|\______|
       |   | |    |       \___ 4 used allocation units
       |   | |    \___________ 3 empty allocation units
       |   | \_________________ 1 used allocation unit
       |   \___________________ 2 used allocation units
       \_______________________ 2 empty allocation units

Because of the encoding, the previous lockless operations are still possible. The only caveat is to change the parameter of the zero-finding function which establishes the alignment at which to perform the test for the first zero. The original value of the parameter is 0, meaning that an allocation can start at any point in the bitmap, while the new value is 1, meaning that allocations can start only at even places (bit 0, bit 2, etc.). The number of zeroes to look for must therefore be doubled.

When it's time to free the memory associated with an allocation request, it's a matter of checking whether the corresponding allocation unit really is the beginning of an allocation (both bits set to 1). Looking for the ending can also be performed locklessly. It's sufficient to identify the first mapped allocation unit that is represented either as free (00) or busy (11). Even if the allocation status changes in the meantime, it doesn't matter, since it can only transition between free (00) and first-allocated (11).

The parameter indicating to the *_free() function the size of the space to be freed is not removed yet, to ease the transition, but it is verified whenever it is not zero. If it is zero, the free function determines the size autonomously, by scanning the bitmap.
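[Editorial aside: the following stand-alone, user-space sketch illustrates the 2-bit encoding described above. It mirrors the semantics of the constants introduced by the patch (head = 11, continuation = 10, free = 00), but get_entry() and the main() driver are illustrative additions, not code from the series.]

#include <stdio.h>

/* 2 bits per allocation unit: 11 = head of an allocation,
 * 10 = continuation, 00 = free (01 is invalid).
 * User-space model of the encoding, for illustration only. */
#define ENTRY_HEAD	3UL	/* 0b11 */
#define ENTRY_MASK	3UL

static unsigned long get_entry(const unsigned long *map, unsigned int i)
{
	unsigned int epl = 8 * sizeof(unsigned long) / 2; /* entries per long */

	return (map[i / epl] >> (2 * (i % epl))) & ENTRY_MASK;
}

int main(void)
{
	/* 0x2b = 00101011b: one 3-unit allocation starting at entry 0,
	 * with entry 3 free. */
	unsigned long map[] = { 0x2b };
	unsigned int i;

	for (i = 0; i < 4; i++)
		printf("entry %u: %lu%s\n", i, get_entry(map, i),
		       get_entry(map, i) == ENTRY_HEAD ?
		       " (start of an allocation)" : "");
	return 0;
}

Running it prints 3, 2, 2, 0 for the four entries; 0x2b is also the pattern that the selftest of patch 2/6 expects after a first 3-unit allocation.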
About the implementation: the patch introduces the concept of "bitmap entry", which has a 1:1 mapping with allocation units, while the code being patched has a 1:1 mapping between allocation units and bits. This means that the bitmap can now be extended (in powers of 2) to track other properties of the allocations, if ever needed.

Signed-off-by: Igor Stoppa <igor.stoppa@huawei.com>
---
 include/linux/genalloc.h |   4 +-
 lib/genalloc.c           | 417 ++++++++++++++++++++++++++++++++---------------
 2 files changed, 289 insertions(+), 132 deletions(-)

diff --git a/include/linux/genalloc.h b/include/linux/genalloc.h
index 872f930..dcaa33e 100644
--- a/include/linux/genalloc.h
+++ b/include/linux/genalloc.h
@@ -32,7 +32,7 @@
 #include <linux/types.h>
 #include <linux/spinlock_types.h>
-#include <linux/atomic.h>
+#include <linux/slab.h>
 
 struct device;
 struct device_node;
@@ -76,7 +76,7 @@ struct gen_pool_chunk {
 	phys_addr_t phys_addr;		/* physical starting address of memory chunk */
 	unsigned long start_addr;	/* start address of memory chunk */
 	unsigned long end_addr;		/* end address of memory chunk (inclusive) */
-	unsigned long bits[0];		/* bitmap for allocating memory chunk */
+	unsigned long entries[0];	/* bitmap for allocating memory chunk */
 };
 
 /*
diff --git a/lib/genalloc.c b/lib/genalloc.c
index ca06adc..dde7830 100644
--- a/lib/genalloc.c
+++ b/lib/genalloc.c
@@ -36,114 +36,221 @@
 #include <linux/genalloc.h>
 #include <linux/of_device.h>
 
+#define ENTRY_ORDER 1UL
+#define ENTRY_MASK ((1UL << ((ENTRY_ORDER) + 1UL)) - 1UL)
+#define ENTRY_HEAD ENTRY_MASK
+#define ENTRY_UNUSED 0UL
+#define BITS_PER_ENTRY (1U << ENTRY_ORDER)
+#define BITS_DIV_ENTRIES(x) ((x) >> ENTRY_ORDER)
+#define ENTRIES_TO_BITS(x) ((x) << ENTRY_ORDER)
+#define BITS_DIV_LONGS(x) ((x) / BITS_PER_LONG)
+#define ENTRIES_DIV_LONGS(x) (BITS_DIV_LONGS(ENTRIES_TO_BITS(x)))
+
+#define ENTRIES_PER_LONG BITS_DIV_ENTRIES(BITS_PER_LONG)
+
+/* Binary pattern of 1010...1010 that spans one unsigned long. */
+#define MASK (~0UL / 3 * 2)
+
+/**
+ * get_bitmap_entry - extracts the specified entry from the bitmap
+ * @map: pointer to a bitmap
+ * @entry_index: the index of the desired entry in the bitmap
+ *
+ * Returns the requested bitmap entry.
+ */
+static inline unsigned long get_bitmap_entry(unsigned long *map,
+					     int entry_index)
+{
+	return (map[ENTRIES_DIV_LONGS(entry_index)] >>
+		ENTRIES_TO_BITS(entry_index % ENTRIES_PER_LONG)) &
+		ENTRY_MASK;
+}
+
+
+/**
+ * mem_to_units - convert references to memory into orders of allocation
+ * @size: amount in bytes
+ * @order: power of 2 represented by each entry in the bitmap
+ *
+ * Returns the number of units representing the size.
+ */
+static inline unsigned long mem_to_units(unsigned long size,
+					 unsigned long order)
+{
+	return (size + (1UL << order) - 1) >> order;
+}
+
+/**
+ * chunk_size - dimension of a chunk of memory
+ * @chunk: pointer to the struct describing the chunk
+ *
+ * Returns the size of the chunk.
+ */
 static inline size_t chunk_size(const struct gen_pool_chunk *chunk)
 {
 	return chunk->end_addr - chunk->start_addr + 1;
 }
 
-static int set_bits_ll(unsigned long *addr, unsigned long mask_to_set)
+
+/**
+ * set_bits_ll - according to the mask, sets the bits specified by
+ * value, at the address specified.
+ * @addr: where to write
+ * @mask: filter to apply for the bits to alter
+ * @value: actual configuration of bits to store
+ *
+ * Returns 0 upon success, -EBUSY otherwise
+ */
+static int set_bits_ll(unsigned long *addr,
+		       unsigned long mask, unsigned long value)
 {
-	unsigned long val, nval;
+	unsigned long nval;
+	unsigned long present;
+	unsigned long target;
 
 	nval = *addr;
 	do {
-		val = nval;
-		if (val & mask_to_set)
+		present = nval;
+		if (present & mask)
 			return -EBUSY;
+		target = present | value;
 		cpu_relax();
-	} while ((nval = cmpxchg(addr, val, val | mask_to_set)) != val);
-
+	} while ((nval = cmpxchg(addr, present, target)) != target);
 	return 0;
 }
 
-static int clear_bits_ll(unsigned long *addr, unsigned long mask_to_clear)
+
+/**
+ * clear_bits_ll - according to the mask, clears the bits specified by
+ * value, at the address specified.
+ * @addr: where to write
+ * @mask: filter to apply for the bits to alter
+ * @value: actual configuration of bits to clear
+ *
+ * Returns 0 upon success, -EBUSY otherwise
+ */
+static int clear_bits_ll(unsigned long *addr,
+			 unsigned long mask, unsigned long value)
 {
-	unsigned long val, nval;
+	unsigned long nval;
+	unsigned long present;
+	unsigned long target;
 
 	nval = *addr;
+	present = nval;
+	if (unlikely((present & mask) ^ value))
+		return -EBUSY;
 	do {
-		val = nval;
-		if ((val & mask_to_clear) != mask_to_clear)
+		present = nval;
+		if (unlikely((present & mask) ^ value))
 			return -EBUSY;
+		target = present & ~mask;
 		cpu_relax();
-	} while ((nval = cmpxchg(addr, val, val & ~mask_to_clear)) != val);
-
+	} while ((nval = cmpxchg(addr, present, target)) != target);
 	return 0;
 }
 
-/*
- * bitmap_set_ll - set the specified number of bits at the specified position
+
+/**
+ * get_boundary - verify that an allocation effectively
+ * starts at the given address, then measure its length.
 * @map: pointer to a bitmap
- * @start: a bit position in @map
- * @nr: number of bits to set
+ * @start_entry: the index of the first entry in the bitmap
+ * @nentries: number of entries to examine
 *
- * Set @nr bits start from @start in @map lock-lessly. Several users
- * can set/clear the same bitmap simultaneously without lock. If two
- * users set the same bit, one user will return remain bits, otherwise
- * return 0.
+ * Returns the length of an allocation, otherwise -EINVAL if the
+ * parameters do not refer to a correct allocation.
 */
-static int bitmap_set_ll(unsigned long *map, int start, int nr)
+static int get_boundary(unsigned long *map, int start_entry, int nentries)
 {
-	unsigned long *p = map + BIT_WORD(start);
-	const int size = start + nr;
-	int bits_to_set = BITS_PER_LONG - (start % BITS_PER_LONG);
-	unsigned long mask_to_set = BITMAP_FIRST_WORD_MASK(start);
-
-	while (nr - bits_to_set >= 0) {
-		if (set_bits_ll(p, mask_to_set))
-			return nr;
-		nr -= bits_to_set;
-		bits_to_set = BITS_PER_LONG;
-		mask_to_set = ~0UL;
-		p++;
-	}
-	if (nr) {
-		mask_to_set &= BITMAP_LAST_WORD_MASK(size);
-		if (set_bits_ll(p, mask_to_set))
-			return nr;
-	}
+	int i;
+	unsigned long bitmap_entry;
 
-	return 0;
+
+	if (unlikely(get_bitmap_entry(map, start_entry) != ENTRY_HEAD))
+		return -EINVAL;
+	for (i = start_entry + 1; i < nentries; i++) {
+		bitmap_entry = get_bitmap_entry(map, i);
+		if (bitmap_entry == ENTRY_HEAD ||
+		    bitmap_entry == ENTRY_UNUSED)
+			return i;
+	}
+	return nentries - start_entry;
 }
 
+
+#define SET_BITS 1
+#define CLEAR_BITS 0
+
 /*
- * bitmap_clear_ll - clear the specified number of bits at the specified position
+ * alter_bitmap_ll - set or clear the entries associated with an allocation
+ * @alteration: whether the selected bits should be set or cleared
 * @map: pointer to a bitmap
- * @start: a bit position in @map
- * @nr: number of bits to set
+ * @start_entry: the index of the first entry in the bitmap
+ * @nentries: number of entries to alter
 *
- * Clear @nr bits start from @start in @map lock-lessly. Several users
- * can set/clear the same bitmap simultaneously without lock. If two
- * users clear the same bit, one user will return remain bits,
- * otherwise return 0.
+ * The modification happens lock-lessly.
+ * Several users can write to the same map simultaneously, without lock.
+ * If two users alter the same bit, one user will return the remaining
+ * entries, otherwise return 0.
 */
-static int bitmap_clear_ll(unsigned long *map, int start, int nr)
+static int alter_bitmap_ll(bool alteration, unsigned long *map,
+			   int start_entry, int nentries)
 {
-	unsigned long *p = map + BIT_WORD(start);
-	const int size = start + nr;
-	int bits_to_clear = BITS_PER_LONG - (start % BITS_PER_LONG);
-	unsigned long mask_to_clear = BITMAP_FIRST_WORD_MASK(start);
-
-	while (nr - bits_to_clear >= 0) {
-		if (clear_bits_ll(p, mask_to_clear))
-			return nr;
-		nr -= bits_to_clear;
-		bits_to_clear = BITS_PER_LONG;
-		mask_to_clear = ~0UL;
-		p++;
-	}
-	if (nr) {
-		mask_to_clear &= BITMAP_LAST_WORD_MASK(size);
-		if (clear_bits_ll(p, mask_to_clear))
-			return nr;
+	unsigned long start_bit;
+	unsigned long end_bit;
+	unsigned long mask;
+	unsigned long value;
+	int nbits;
+	int bits_to_write;
+	int index;
+	int (*action)(unsigned long *addr,
+		      unsigned long mask, unsigned long value);
+
+	action = (alteration == SET_BITS) ? set_bits_ll : clear_bits_ll;
+
+	/* Prepare for writing the initial part of the allocation, from
+	 * the starting entry to the end of the UL bitmap element which
+	 * contains it. It might be larger than the actual allocation.
+	 */
+	start_bit = ENTRIES_TO_BITS(start_entry);
+	end_bit = ENTRIES_TO_BITS(start_entry + nentries);
+	nbits = ENTRIES_TO_BITS(nentries);
+	bits_to_write = BITS_PER_LONG - start_bit % BITS_PER_LONG;
+	mask = BITMAP_FIRST_WORD_MASK(start_bit);
+	/* Mark the beginning of the allocation. */
+	value = MASK | (1UL << (start_bit % BITS_PER_LONG));
+	index = BITS_DIV_LONGS(start_bit);
+
+	/* Writes entries to the bitmap, as long as the remainder is
+	 * positive or zero.
+ * Might be skipped if the entries to write do not reach the end + * of a bitmap UL unit. + */ + while (nbits >= bits_to_write) { + if (action(map + index, mask, value & mask)) + return BITS_DIV_ENTRIES(nbits); + nbits -= bits_to_write; + bits_to_write = BITS_PER_LONG; + mask = ~0UL; + value = MASK; + index++; } + /* Takes care of the ending part of the entries to mark. */ + if (nbits > 0) { + mask ^= BITMAP_FIRST_WORD_MASK((end_bit) % BITS_PER_LONG); + bits_to_write = nbits; + if (action(map + index, mask, value & mask)) + return BITS_DIV_ENTRIES(nbits); + } return 0; } + /** * gen_pool_create - create a new special memory pool - * @min_alloc_order: log base 2 of number of bytes each bitmap bit represents + * @min_alloc_order: log base 2 of number of bytes each bitmap entry represents * @nid: node id of the node the pool structure should be allocated on, or -1 * * Create a new special memory pool that can be used to manage special purpose @@ -183,10 +290,12 @@ int gen_pool_add_virt(struct gen_pool *pool, unsigned long virt, phys_addr_t phy size_t size, int nid) { struct gen_pool_chunk *chunk; - int nbits = size >> pool->min_alloc_order; - int nbytes = sizeof(struct gen_pool_chunk) + - BITS_TO_LONGS(nbits) * sizeof(long); + int nentries; + int nbytes; + nentries = size >> pool->min_alloc_order; + nbytes = sizeof(struct gen_pool_chunk) + + ENTRIES_DIV_LONGS(nentries) * sizeof(long); chunk = kzalloc_node(nbytes, GFP_KERNEL, nid); if (unlikely(chunk == NULL)) return -ENOMEM; @@ -248,7 +357,7 @@ void gen_pool_destroy(struct gen_pool *pool) list_del(&chunk->next_chunk); end_bit = chunk_size(chunk) >> order; - bit = find_next_bit(chunk->bits, end_bit, 0); + bit = find_next_bit(chunk->entries, end_bit, 0); BUG_ON(bit < end_bit); kfree(chunk); @@ -292,7 +401,7 @@ unsigned long gen_pool_alloc_algo(struct gen_pool *pool, size_t size, struct gen_pool_chunk *chunk; unsigned long addr = 0; int order = pool->min_alloc_order; - int nbits, start_bit, end_bit, remain; + int nentries, start_entry, end_entry, remain; #ifndef CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG BUG_ON(in_nmi()); @@ -301,29 +410,32 @@ unsigned long gen_pool_alloc_algo(struct gen_pool *pool, size_t size, if (size == 0) return 0; - nbits = (size + (1UL << order) - 1) >> order; + nentries = mem_to_units(size, order); rcu_read_lock(); list_for_each_entry_rcu(chunk, &pool->chunks, next_chunk) { if (size > atomic_long_read(&chunk->avail)) continue; - start_bit = 0; - end_bit = chunk_size(chunk) >> order; + start_entry = 0; + end_entry = chunk_size(chunk) >> order; retry: - start_bit = algo(chunk->bits, end_bit, start_bit, - nbits, data, pool); - if (start_bit >= end_bit) + start_entry = algo(chunk->entries, end_entry, start_entry, + nentries, data, pool); + if (start_entry >= end_entry) continue; - remain = bitmap_set_ll(chunk->bits, start_bit, nbits); + remain = alter_bitmap_ll(SET_BITS, chunk->entries, + start_entry, nentries); if (remain) { - remain = bitmap_clear_ll(chunk->bits, start_bit, - nbits - remain); - BUG_ON(remain); + remain = alter_bitmap_ll(CLEAR_BITS, + chunk->entries, + start_entry, + nentries - remain); goto retry; } - addr = chunk->start_addr + ((unsigned long)start_bit << order); - size = nbits << order; + addr = chunk->start_addr + + ((unsigned long)start_entry << order); + size = nentries << order; atomic_long_sub(size, &chunk->avail); break; } @@ -365,7 +477,7 @@ EXPORT_SYMBOL(gen_pool_dma_alloc); * gen_pool_free - free allocated special memory back to the pool * @pool: pool to free to * @addr: starting address of memory to 
free back to pool - * @size: size in bytes of memory to free + * @size: size in bytes of memory to free or 0, for auto-detection * * Free previously allocated special memory back to the specified * pool. Can not be used in NMI handler on architectures without @@ -375,22 +487,29 @@ void gen_pool_free(struct gen_pool *pool, unsigned long addr, size_t size) { struct gen_pool_chunk *chunk; int order = pool->min_alloc_order; - int start_bit, nbits, remain; + int start_entry, remaining_entries, nentries, remain; + int boundary; #ifndef CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG BUG_ON(in_nmi()); #endif - nbits = (size + (1UL << order) - 1) >> order; rcu_read_lock(); list_for_each_entry_rcu(chunk, &pool->chunks, next_chunk) { if (addr >= chunk->start_addr && addr <= chunk->end_addr) { BUG_ON(addr + size - 1 > chunk->end_addr); - start_bit = (addr - chunk->start_addr) >> order; - remain = bitmap_clear_ll(chunk->bits, start_bit, nbits); + start_entry = (addr - chunk->start_addr) >> order; + remaining_entries = (chunk->end_addr - addr) >> order; + boundary = get_boundary(chunk->entries, start_entry, + remaining_entries); + BUG_ON(boundary < 0); + nentries = boundary - start_entry; + BUG_ON(size && + (nentries != mem_to_units(size, order))); + remain = alter_bitmap_ll(CLEAR_BITS, chunk->entries, + start_entry, nentries); BUG_ON(remain); - size = nbits << order; - atomic_long_add(size, &chunk->avail); + atomic_long_add(nentries << order, &chunk->avail); rcu_read_unlock(); return; } @@ -517,9 +636,9 @@ EXPORT_SYMBOL(gen_pool_set_algo); * gen_pool_first_fit - find the first available region * of memory matching the size requirement (no alignment constraint) * @map: The address to base the search on - * @size: The bitmap size in bits - * @start: The bitnumber to start searching at - * @nr: The number of zeroed bits we're looking for + * @size: The number of allocation units in the bitmap + * @start: The allocation unit to start searching at + * @nr: The number of allocation units we're looking for * @data: additional data - unused * @pool: pool to find the fit region memory from */ @@ -527,7 +646,15 @@ unsigned long gen_pool_first_fit(unsigned long *map, unsigned long size, unsigned long start, unsigned int nr, void *data, struct gen_pool *pool) { - return bitmap_find_next_zero_area(map, size, start, nr, 0); + unsigned long align_mask; + unsigned long bit_index; + + align_mask = roundup_pow_of_two(BITS_PER_ENTRY) - 1; + bit_index = bitmap_find_next_zero_area(map, ENTRIES_TO_BITS(size), + ENTRIES_TO_BITS(start), + ENTRIES_TO_BITS(nr), + align_mask); + return BITS_DIV_ENTRIES(bit_index); } EXPORT_SYMBOL(gen_pool_first_fit); @@ -535,9 +662,9 @@ EXPORT_SYMBOL(gen_pool_first_fit); * gen_pool_first_fit_align - find the first available region * of memory matching the size requirement (alignment constraint) * @map: The address to base the search on - * @size: The bitmap size in bits - * @start: The bitnumber to start searching at - * @nr: The number of zeroed bits we're looking for + * @size: The number of allocation units in the bitmap + * @start: The allocation unit to start searching at + * @nr: The number of allocation units we're looking for * @data: data for alignment * @pool: pool to get order from */ @@ -547,21 +674,28 @@ unsigned long gen_pool_first_fit_align(unsigned long *map, unsigned long size, { struct genpool_data_align *alignment; unsigned long align_mask; + unsigned long bit_index; int order; alignment = data; order = pool->min_alloc_order; - align_mask = ((alignment->align + (1UL << order) - 1) >> 
order) - 1; - return bitmap_find_next_zero_area(map, size, start, nr, align_mask); + align_mask = roundup_pow_of_two( + ENTRIES_TO_BITS(mem_to_units(alignment->align, + order))) - 1; + bit_index = bitmap_find_next_zero_area(map, ENTRIES_TO_BITS(size), + ENTRIES_TO_BITS(start), + ENTRIES_TO_BITS(nr), + align_mask); + return BITS_DIV_ENTRIES(bit_index); } EXPORT_SYMBOL(gen_pool_first_fit_align); /** * gen_pool_fixed_alloc - reserve a specific region * @map: The address to base the search on - * @size: The bitmap size in bits - * @start: The bitnumber to start searching at - * @nr: The number of zeroed bits we're looking for + * @size: The number of allocation units in the bitmap + * @start: The allocation unit to start searching at + * @nr: The number of allocation units we're looking for * @data: data for alignment * @pool: pool to get order from */ @@ -571,20 +705,23 @@ unsigned long gen_pool_fixed_alloc(unsigned long *map, unsigned long size, { struct genpool_data_fixed *fixed_data; int order; - unsigned long offset_bit; - unsigned long start_bit; + unsigned long offset; + unsigned long align_mask; + unsigned long bit_index; fixed_data = data; order = pool->min_alloc_order; - offset_bit = fixed_data->offset >> order; if (WARN_ON(fixed_data->offset & ((1UL << order) - 1))) return size; + offset = fixed_data->offset >> order; + align_mask = roundup_pow_of_two(BITS_PER_ENTRY) - 1; + bit_index = bitmap_find_next_zero_area(map, ENTRIES_TO_BITS(size), + ENTRIES_TO_BITS(start + offset), + ENTRIES_TO_BITS(nr), align_mask); + if (bit_index != ENTRIES_TO_BITS(offset)) + return size; - start_bit = bitmap_find_next_zero_area(map, size, - start + offset_bit, nr, 0); - if (start_bit != offset_bit) - start_bit = size; - return start_bit; + return BITS_DIV_ENTRIES(bit_index); } EXPORT_SYMBOL(gen_pool_fixed_alloc); @@ -593,9 +730,9 @@ EXPORT_SYMBOL(gen_pool_fixed_alloc); * of memory matching the size requirement. The region will be aligned * to the order of the size specified. 
 * @map: The address to base the search on
- * @size: The bitmap size in bits
- * @start: The bitnumber to start searching at
- * @nr: The number of zeroed bits we're looking for
+ * @size: The number of allocation units in the bitmap
+ * @start: The allocation unit to start searching at
+ * @nr: The number of allocation units we're looking for
 * @data: additional data - unused
 * @pool: pool to find the fit region memory from
 */
@@ -603,9 +740,15 @@ unsigned long gen_pool_first_fit_order_align(unsigned long *map,
 		unsigned long size, unsigned long start,
 		unsigned int nr, void *data, struct gen_pool *pool)
 {
-	unsigned long align_mask = roundup_pow_of_two(nr) - 1;
-
-	return bitmap_find_next_zero_area(map, size, start, nr, align_mask);
+	unsigned long align_mask;
+	unsigned long bit_index;
+
+	align_mask = roundup_pow_of_two(ENTRIES_TO_BITS(nr)) - 1;
+	bit_index = bitmap_find_next_zero_area(map, ENTRIES_TO_BITS(size),
+					       ENTRIES_TO_BITS(start),
+					       ENTRIES_TO_BITS(nr),
+					       align_mask);
+	return BITS_DIV_ENTRIES(bit_index);
 }
 EXPORT_SYMBOL(gen_pool_first_fit_order_align);
 
@@ -613,9 +756,9 @@ EXPORT_SYMBOL(gen_pool_first_fit_order_align);
 * gen_pool_best_fit - find the best fitting region of memory
 * matching the size requirement (no alignment constraint)
 * @map: The address to base the search on
- * @size: The bitmap size in bits
- * @start: The bitnumber to start searching at
- * @nr: The number of zeroed bits we're looking for
+ * @size: The number of allocation units in the bitmap
+ * @start: The allocation unit to start searching at
+ * @nr: The number of allocation units we're looking for
 * @data: additional data - unused
 * @pool: pool to find the fit region memory from
 *
@@ -626,27 +769,41 @@ unsigned long gen_pool_best_fit(unsigned long *map,
 		unsigned long size, unsigned long start,
 		unsigned int nr, void *data, struct gen_pool *pool)
 {
-	unsigned long start_bit = size;
+	unsigned long start_bit = ENTRIES_TO_BITS(size);
 	unsigned long len = size + 1;
 	unsigned long index;
+	unsigned long align_mask;
+	unsigned long bit_index;
 
-	index = bitmap_find_next_zero_area(map, size, start, nr, 0);
+	align_mask = roundup_pow_of_two(BITS_PER_ENTRY) - 1;
+	bit_index = bitmap_find_next_zero_area(map, ENTRIES_TO_BITS(size),
+					       ENTRIES_TO_BITS(start),
+					       ENTRIES_TO_BITS(nr),
+					       align_mask);
+	index = BITS_DIV_ENTRIES(bit_index);
 
 	while (index < size) {
-		int next_bit = find_next_bit(map, size, index + nr);
-		if ((next_bit - index) < len) {
-			len = next_bit - index;
-			start_bit = index;
+		int next_bit;
+
+		next_bit = find_next_bit(map, ENTRIES_TO_BITS(size),
+					 ENTRIES_TO_BITS(index + nr));
+		if ((BITS_DIV_ENTRIES(next_bit) - index) < len) {
+			len = BITS_DIV_ENTRIES(next_bit) - index;
+			start_bit = ENTRIES_TO_BITS(index);
 			if (len == nr)
-				return start_bit;
+				return BITS_DIV_ENTRIES(start_bit);
 		}
-		index = bitmap_find_next_zero_area(map, size,
-						   next_bit + 1, nr, 0);
+		bit_index =
+			bitmap_find_next_zero_area(map,
+						   ENTRIES_TO_BITS(size),
+						   next_bit + 1,
+						   ENTRIES_TO_BITS(nr),
+						   align_mask);
+		index = BITS_DIV_ENTRIES(bit_index);
 	}
 
-	return start_bit;
+	return BITS_DIV_ENTRIES(start_bit);
 }
-EXPORT_SYMBOL(gen_pool_best_fit);
 
 static void devm_gen_pool_release(struct device *dev, void *res)
 {
--
2.9.3

^ permalink raw reply related [flat|nested] 84+ messages in thread
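[Editorial aside: to show how a client of the API would use the new free semantics introduced by this patch, here is a sketch written against the existing genalloc API; the function name and the chunk sizes are illustrative, not code from the series.]

#include <linux/errno.h>
#include <linux/genalloc.h>

static int genalloc_zero_size_free_example(void)
{
	struct gen_pool *pool;
	unsigned long addr;
	static char chunk[4096];	/* backing memory for one chunk */
	int ret;

	pool = gen_pool_create(2, -1);	/* 4-byte allocation units */
	if (!pool)
		return -ENOMEM;

	ret = gen_pool_add_virt(pool, (unsigned long)chunk, 0,
				sizeof(chunk), -1);
	if (ret) {
		gen_pool_destroy(pool);
		return ret;
	}

	addr = gen_pool_alloc(pool, 100);	/* rounded up to 25 units */
	if (addr)
		/* With this patch, passing size 0 asks genalloc to detect
		 * the allocation boundary from the 2-bit entries. */
		gen_pool_free(pool, addr, 0);

	gen_pool_destroy(pool);
	return 0;
}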
* [PATCH 2/6] genalloc: selftest @ 2018-02-03 19:42 ` Igor Stoppa
0 siblings, 0 replies; 84+ messages in thread
From: Igor Stoppa @ 2018-02-03 19:42 UTC (permalink / raw)
To: jglisse, keescook, mhocko, labbott, hch, willy
Cc: cl, linux-security-module, linux-mm, linux-kernel, kernel-hardening, Igor Stoppa

Introduce a set of macros for writing concise test cases for genalloc. The test cases are meant to provide regression testing when working on new functionality for genalloc. Primarily they are meant to confirm that the various allocation strategies will continue to work as expected. The execution of the self-testing is controlled through a Kconfig option.

Signed-off-by: Igor Stoppa <igor.stoppa@huawei.com>
---
 include/linux/genalloc-selftest.h |  30 +++
 init/main.c                       |   2 +
 lib/Kconfig                       |  15 ++
 lib/Makefile                      |   1 +
 lib/genalloc-selftest.c           | 402 ++++++++++++++++++++++++++++++++++++++
 5 files changed, 450 insertions(+)
 create mode 100644 include/linux/genalloc-selftest.h
 create mode 100644 lib/genalloc-selftest.c

diff --git a/include/linux/genalloc-selftest.h b/include/linux/genalloc-selftest.h
new file mode 100644
index 0000000..7af1901
--- /dev/null
+++ b/include/linux/genalloc-selftest.h
@@ -0,0 +1,30 @@
+/*
+ * genalloc-selftest.h
+ *
+ * (C) Copyright 2017 Huawei Technologies Co. Ltd.
+ * Author: Igor Stoppa <igor.stoppa@huawei.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; version 2
+ * of the License.
+ */
+
+
+#ifndef __GENALLOC_SELFTEST_H__
+#define __GENALLOC_SELFTEST_H__
+
+
+#ifdef CONFIG_GENERIC_ALLOCATOR_SELFTEST
+
+#include <linux/genalloc.h>
+
+void genalloc_selftest(void);
+
+#else
+
+static inline void genalloc_selftest(void) { }
+
+#endif
+
+#endif
diff --git a/init/main.c b/init/main.c
index a8100b9..fb844aa 100644
--- a/init/main.c
+++ b/init/main.c
@@ -89,6 +89,7 @@
 #include <linux/io.h>
 #include <linux/cache.h>
 #include <linux/rodata_test.h>
+#include <linux/genalloc-selftest.h>
 
 #include <asm/io.h>
 #include <asm/bugs.h>
@@ -660,6 +661,7 @@ asmlinkage __visible void __init start_kernel(void)
 	 */
 	mem_encrypt_init();
 
+	genalloc_selftest();
 #ifdef CONFIG_BLK_DEV_INITRD
 	if (initrd_start && !initrd_below_start_ok &&
 	    page_to_pfn(virt_to_page((void *)initrd_start)) < min_low_pfn) {
diff --git a/lib/Kconfig b/lib/Kconfig
index c5e84fb..430026d0 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -287,6 +287,21 @@ config DECOMPRESS_LZ4
 config GENERIC_ALLOCATOR
 	bool
 
+config GENERIC_ALLOCATOR_SELFTEST
+	bool "genalloc tester"
+	default n
+	select GENERIC_ALLOCATOR
+	help
+	  Enable automated testing of the generic allocator.
+	  The testing is primarily for the tracking of allocated space.
+
+config GENERIC_ALLOCATOR_SELFTEST_VERBOSE
+	bool "make the genalloc tester more verbose"
+	default n
+	select GENERIC_ALLOCATOR_SELFTEST
+	help
+	  More information will be displayed during the self-testing.
+ # # reed solomon support is select'ed if needed # diff --git a/lib/Makefile b/lib/Makefile index d11c48e..ba06e83 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -108,6 +108,7 @@ obj-$(CONFIG_LIBCRC32C) += libcrc32c.o obj-$(CONFIG_CRC8) += crc8.o obj-$(CONFIG_XXHASH) += xxhash.o obj-$(CONFIG_GENERIC_ALLOCATOR) += genalloc.o +obj-$(CONFIG_GENERIC_ALLOCATOR_SELFTEST) += genalloc-selftest.o obj-$(CONFIG_842_COMPRESS) += 842/ obj-$(CONFIG_842_DECOMPRESS) += 842/ diff --git a/lib/genalloc-selftest.c b/lib/genalloc-selftest.c new file mode 100644 index 0000000..007a0cf --- /dev/null +++ b/lib/genalloc-selftest.c @@ -0,0 +1,402 @@ +/* + * genalloc-selftest.c + * + * (C) Copyright 2017 Huawei Technologies Co. Ltd. + * Author: Igor Stoppa <igor.stoppa@huawei.com> + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; version 2 + * of the License. + */ + +#include <linux/module.h> +#include <linux/printk.h> +#include <linux/init.h> +#include <linux/vmalloc.h> +#include <asm/set_memory.h> +#include <linux/string.h> +#include <linux/debugfs.h> +#include <linux/atomic.h> +#include <linux/genalloc.h> + + + +/* Keep the bitmap small, while including case of cross-ulong mapping. + * For simplicity, the test cases use only 1 chunk of memory. + */ +#define BITMAP_SIZE_C 16 +#define ALLOC_ORDER 0 + +#define ULONG_SIZE (sizeof(unsigned long)) +#define BITMAP_SIZE_UL (BITMAP_SIZE_C / ULONG_SIZE) +#define MIN_ALLOC_SIZE (1 << ALLOC_ORDER) +#define ENTRIES (BITMAP_SIZE_C * 8) +#define CHUNK_SIZE (MIN_ALLOC_SIZE * ENTRIES) + +#ifndef CONFIG_GENERIC_ALLOCATOR_SELFTEST_VERBOSE + +static inline void print_first_chunk_bitmap(struct gen_pool *pool) {} + +#else + +static void print_first_chunk_bitmap(struct gen_pool *pool) +{ + struct gen_pool_chunk *chunk; + char bitmap[BITMAP_SIZE_C * 2 + 1]; + unsigned long i; + char *bm = bitmap; + char *entry; + + if (unlikely(pool == NULL || pool->chunks.next == NULL)) + return; + + chunk = container_of(pool->chunks.next, struct gen_pool_chunk, + next_chunk); + entry = (void *)chunk->entries; + for (i = 1; i <= BITMAP_SIZE_C; i++) + bm += snprintf(bm, 3, "%02hhx", entry[BITMAP_SIZE_C - i]); + *bm = '\0'; + pr_notice("chunk: %p bitmap: 0x%s\n", chunk, bitmap); + +} + +#endif + +enum test_commands { + CMD_ALLOCATOR, + CMD_ALLOCATE, + CMD_FLUSH, + CMD_FREE, + CMD_NUMBER, + CMD_END = CMD_NUMBER, +}; + +struct null_struct { + void *null; +}; + +struct test_allocator { + genpool_algo_t algo; + union { + struct genpool_data_align align; + struct genpool_data_fixed offset; + struct null_struct null; + } data; +}; + +struct test_action { + unsigned int location; + char pattern[BITMAP_SIZE_C]; + unsigned int size; +}; + + +struct test_command { + enum test_commands command; + union { + struct test_allocator allocator; + struct test_action action; + }; +}; + + +/* To pass an array literal as parameter to a macro, it must go through + * this one, first. + */ +#define ARR(...) 
__VA_ARGS__
+
+#define SET_DATA(parameter, value) \
+	.parameter = { \
+		.parameter = value, \
+	} \
+
+#define SET_ALLOCATOR(alloc, parameter, value) \
+{ \
+	.command = CMD_ALLOCATOR, \
+	.allocator = { \
+		.algo = (alloc), \
+		.data = { \
+			SET_DATA(parameter, value), \
+		}, \
+	} \
+}
+
+#define ACTION_MEM(act, mem_size, mem_loc, match) \
+{ \
+	.command = act, \
+	.action = { \
+		.size = (mem_size), \
+		.location = (mem_loc), \
+		.pattern = match, \
+	}, \
+}
+
+#define ALLOCATE_MEM(mem_size, mem_loc, match) \
+	ACTION_MEM(CMD_ALLOCATE, mem_size, mem_loc, ARR(match))
+
+#define FREE_MEM(mem_size, mem_loc, match) \
+	ACTION_MEM(CMD_FREE, mem_size, mem_loc, ARR(match))
+
+#define FLUSH_MEM() \
+{ \
+	.command = CMD_FLUSH, \
+}
+
+#define END() \
+{ \
+	.command = CMD_END, \
+}
+
+static inline int compare_bitmaps(const struct gen_pool *pool,
+				  const char *reference)
+{
+	struct gen_pool_chunk *chunk;
+	char *bitmap;
+	unsigned int i;
+
+	chunk = container_of(pool->chunks.next, struct gen_pool_chunk,
+			     next_chunk);
+	bitmap = (char *)chunk->entries;
+
+	for (i = 0; i < BITMAP_SIZE_C; i++)
+		if (bitmap[i] != reference[i])
+			return -1;
+	return 0;
+}
+
+static void callback_set_allocator(struct gen_pool *pool,
+				   const struct test_command *cmd,
+				   unsigned long *locations)
+{
+	gen_pool_set_algo(pool, cmd->allocator.algo,
+			  (void *)&cmd->allocator.data);
+}
+
+static void callback_allocate(struct gen_pool *pool,
+			      const struct test_command *cmd,
+			      unsigned long *locations)
+{
+	const struct test_action *action = &cmd->action;
+
+	locations[action->location] = gen_pool_alloc(pool, action->size);
+	BUG_ON(!locations[action->location]);
+	print_first_chunk_bitmap(pool);
+	BUG_ON(compare_bitmaps(pool, action->pattern));
+}
+
+static void callback_flush(struct gen_pool *pool,
+			   const struct test_command *cmd,
+			   unsigned long *locations)
+{
+	unsigned int i;
+
+	for (i = 0; i < ENTRIES; i++)
+		if (locations[i]) {
+			gen_pool_free(pool, locations[i], 0);
+			locations[i] = 0;
+		}
+}
+
+static void callback_free(struct gen_pool *pool,
+			  const struct test_command *cmd,
+			  unsigned long *locations)
+{
+	const struct test_action *action = &cmd->action;
+
+	gen_pool_free(pool, locations[action->location], 0);
+	locations[action->location] = 0;
+	print_first_chunk_bitmap(pool);
+	BUG_ON(compare_bitmaps(pool, action->pattern));
+}
+
+static void (* const callbacks[CMD_NUMBER])(struct gen_pool *,
+					    const struct test_command *,
+					    unsigned long *) = {
+	[CMD_ALLOCATOR] = callback_set_allocator,
+	[CMD_ALLOCATE] = callback_allocate,
+	[CMD_FREE] = callback_free,
+	[CMD_FLUSH] = callback_flush,
+};
+
+const struct test_command test_first_fit[] = {
+	SET_ALLOCATOR(gen_pool_first_fit, null, NULL),
+	ALLOCATE_MEM(3, 0, ARR({0x2b})),
+	ALLOCATE_MEM(2, 1, ARR({0xeb, 0x02})),
+	ALLOCATE_MEM(5, 2, ARR({0xeb, 0xae, 0x0a})),
+	FREE_MEM(2, 1, ARR({0x2b, 0xac, 0x0a})),
+	ALLOCATE_MEM(1, 1, ARR({0xeb, 0xac, 0x0a})),
+	FREE_MEM(0, 2, ARR({0xeb})),
+	FREE_MEM(0, 0, ARR({0xc0})),
+	FREE_MEM(0, 1, ARR({0x00})),
+	END(),
+};
+
+/* To make the test work for both 32bit and 64bit ulong sizes,
+ * allocate (8 / 2 * 4 - 1) = 15 bytes, then 16, then 2.
+ * The first allocation prepares for the crossing of the 32bit ulong
+ * threshold. The following crosses the 32bit threshold and prepares
+ * for crossing the 64bit threshold. The last is large enough (2 bytes)
+ * to cross the 64bit threshold.
+ * Then free the allocations in the order: 2nd, 1st, 3rd.
+ */ +const struct test_command test_ulong_span[] = { + SET_ALLOCATOR(gen_pool_first_fit, null, NULL), + ALLOCATE_MEM(15, 0, ARR({0xab, 0xaa, 0xaa, 0x2a})), + ALLOCATE_MEM(16, 1, ARR({0xab, 0xaa, 0xaa, 0xea, + 0xaa, 0xaa, 0xaa, 0x2a})), + ALLOCATE_MEM(2, 2, ARR({0xab, 0xaa, 0xaa, 0xea, + 0xaa, 0xaa, 0xaa, 0xea, + 0x02})), + FREE_MEM(0, 1, ARR({0xab, 0xaa, 0xaa, 0x2a, + 0x00, 0x00, 0x00, 0xc0, + 0x02})), + FREE_MEM(0, 0, ARR({0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0xc0, + 0x02})), + FREE_MEM(0, 2, ARR({0x00})), + END(), +}; + +/* Create progressively smaller allocations A B C D E, + * then free B and D. + * Then create a new allocation that would fit in either of the gaps left + * by B and D. Verify that it uses the gap from B. + */ +const struct test_command test_first_fit_gaps[] = { + SET_ALLOCATOR(gen_pool_first_fit, null, NULL), + ALLOCATE_MEM(10, 0, ARR({0xab, 0xaa, 0x0a})), + ALLOCATE_MEM(8, 1, ARR({0xab, 0xaa, 0xba, 0xaa, + 0x0a})), + ALLOCATE_MEM(6, 2, ARR({0xab, 0xaa, 0xba, 0xaa, + 0xba, 0xaa})), + ALLOCATE_MEM(4, 3, ARR({0xab, 0xaa, 0xba, 0xaa, + 0xba, 0xaa, 0xab})), + ALLOCATE_MEM(2, 4, ARR({0xab, 0xaa, 0xba, 0xaa, + 0xba, 0xaa, 0xab, 0x0b})), + FREE_MEM(0, 1, ARR({0xab, 0xaa, 0x0a, 0x00, + 0xb0, 0xaa, 0xab, 0x0b})), + FREE_MEM(0, 3, ARR({0xab, 0xaa, 0x0a, 0x00, + 0xb0, 0xaa, 0x00, 0x0b})), + ALLOCATE_MEM(3, 3, ARR({0xab, 0xaa, 0xba, 0x02, + 0xb0, 0xaa, 0x00, 0x0b})), + FLUSH_MEM(), + END(), +}; + +/* Test first fit align */ +const struct test_command test_first_fit_align[] = { + SET_ALLOCATOR(gen_pool_first_fit_align, align, 4), + ALLOCATE_MEM(5, 0, ARR({0xab, 0x02})), + ALLOCATE_MEM(3, 1, ARR({0xab, 0x02, 0x2b})), + ALLOCATE_MEM(2, 2, ARR({0xab, 0x02, 0x2b, 0x0b})), + ALLOCATE_MEM(1, 3, ARR({0xab, 0x02, 0x2b, 0x0b, 0x03})), + FREE_MEM(0, 0, ARR({0x00, 0x00, 0x2b, 0x0b, 0x03})), + FREE_MEM(0, 2, ARR({0x00, 0x00, 0x2b, 0x00, 0x03})), + ALLOCATE_MEM(2, 0, ARR({0x0b, 0x00, 0x2b, 0x00, 0x03})), + FLUSH_MEM(), + END(), +}; + + +/* Test fixed alloc */ +const struct test_command test_fixed_data[] = { + SET_ALLOCATOR(gen_pool_fixed_alloc, offset, 1), + ALLOCATE_MEM(5, 0, ARR({0xac, 0x0a})), + SET_ALLOCATOR(gen_pool_fixed_alloc, offset, 8), + ALLOCATE_MEM(3, 1, ARR({0xac, 0x0a, 0x2b})), + SET_ALLOCATOR(gen_pool_fixed_alloc, offset, 6), + ALLOCATE_MEM(2, 2, ARR({0xac, 0xba, 0x2b})), + SET_ALLOCATOR(gen_pool_fixed_alloc, offset, 30), + ALLOCATE_MEM(40, 3, ARR({0xac, 0xba, 0x2b, 0x00, + 0x00, 0x00, 0x00, 0xb0, + 0xaa, 0xaa, 0xaa, 0xaa, + 0xaa, 0xaa, 0xaa, 0xaa})), + FLUSH_MEM(), + END(), +}; + + +/* Test first fit order align */ +const struct test_command test_first_fit_order_align[] = { + SET_ALLOCATOR(gen_pool_first_fit_order_align, null, NULL), + ALLOCATE_MEM(5, 0, ARR({0xab, 0x02})), + ALLOCATE_MEM(3, 1, ARR({0xab, 0x02, 0x2b})), + ALLOCATE_MEM(2, 2, ARR({0xab, 0xb2, 0x2b})), + ALLOCATE_MEM(1, 3, ARR({0xab, 0xbe, 0x2b})), + ALLOCATE_MEM(1, 4, ARR({0xab, 0xbe, 0xeb})), + ALLOCATE_MEM(2, 5, ARR({0xab, 0xbe, 0xeb, 0x0b})), + FLUSH_MEM(), + END(), +}; + + +/* Test best fit */ +const struct test_command test_best_fit[] = { + SET_ALLOCATOR(gen_pool_best_fit, null, NULL), + ALLOCATE_MEM(5, 0, ARR({0xab, 0x02})), + ALLOCATE_MEM(3, 1, ARR({0xab, 0xae})), + ALLOCATE_MEM(3, 2, ARR({0xab, 0xae, 0x2b})), + ALLOCATE_MEM(1, 3, ARR({0xab, 0xae, 0xeb})), + FREE_MEM(0, 0, ARR({0x00, 0xac, 0xeb})), + FREE_MEM(0, 2, ARR({0x00, 0xac, 0xc0})), + ALLOCATE_MEM(2, 0, ARR({0x00, 0xac, 0xcb})), + FLUSH_MEM(), + END(), +}; + + +enum test_cases_indexes { + TEST_CASE_FIRST_FIT, + TEST_CASE_ULONG_SPAN, + 
TEST_CASE_FIRST_FIT_GAPS, + TEST_CASE_FIRST_FIT_ALIGN, + TEST_CASE_FIXED_DATA, + TEST_CASE_FIRST_FIT_ORDER_ALIGN, + TEST_CASE_BEST_FIT, + TEST_CASES_NUM, +}; + +const struct test_command *test_cases[TEST_CASES_NUM] = { + [TEST_CASE_FIRST_FIT] = test_first_fit, + [TEST_CASE_ULONG_SPAN] = test_ulong_span, + [TEST_CASE_FIRST_FIT_GAPS] = test_first_fit_gaps, + [TEST_CASE_FIRST_FIT_ALIGN] = test_first_fit_align, + [TEST_CASE_FIXED_DATA] = test_fixed_data, + [TEST_CASE_FIRST_FIT_ORDER_ALIGN] = test_first_fit_order_align, + [TEST_CASE_BEST_FIT] = test_best_fit, +}; + + +void genalloc_selftest(void) +{ + static struct gen_pool *pool; + unsigned long locations[ENTRIES]; + char chunk[CHUNK_SIZE]; + int retval; + unsigned int i; + const struct test_command *cmd; + + pool = gen_pool_create(ALLOC_ORDER, -1); + if (unlikely(!pool)) { + pr_err("genalloc-selftest: no memory for pool.\n"); + return; + } + + retval = gen_pool_add_virt(pool, (unsigned long)chunk, 0, + CHUNK_SIZE, -1); + if (unlikely(retval)) { + pr_err("genalloc-selftest: could not register chunk.\n"); + goto destroy_pool; + } + + memset(locations, 0, ENTRIES * sizeof(unsigned long)); + for (i = 0; i < TEST_CASES_NUM; i++) + for (cmd = test_cases[i]; cmd->command < CMD_END; cmd++) + callbacks[cmd->command](pool, cmd, locations); + pr_notice("genalloc-selftest: successfully executed %d tests\n", + TEST_CASES_NUM); + +destroy_pool: + gen_pool_destroy(pool); +} -- 2.9.3 ^ permalink raw reply related [flat|nested] 84+ messages in thread
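The command tables above amount to a small declarative DSL: each entry either selects an allocation algorithm or performs an allocation/free and asserts the resulting occupancy bitmap, where (reading bit pairs LSB first) 0b11 marks the first unit of an allocation, 0b10 a continuation unit, and 0b00 a free unit. As a sketch of how an additional regression case could be written against these macros — test_example is hypothetical, not part of the patch, and its expected patterns are derived from the encoding visible in the existing cases rather than verified against a running kernel:

const struct test_command test_example[] = {
	/* Plain first-fit, no algorithm-specific data. */
	SET_ALLOCATOR(gen_pool_first_fit, null, NULL),
	/* 4 units at offset 0: bit pairs 11 10 10 10 -> byte 0xab. */
	ALLOCATE_MEM(4, 0, ARR({0xab})),
	/* Freeing the only allocation must empty the whole bitmap. */
	FREE_MEM(0, 0, ARR({0x00})),
	END(),
};

To be executed, such a table would also need an index in enum test_cases_indexes and a matching slot in the test_cases array.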
* [PATCH 3/6] struct page: add field for vm_struct @ 2018-02-03 19:42 ` Igor Stoppa 0 siblings, 0 replies; 84+ messages in thread From: Igor Stoppa @ 2018-02-03 19:42 UTC (permalink / raw) To: jglisse, keescook, mhocko, labbott, hch, willy Cc: cl, linux-security-module, linux-mm, linux-kernel, kernel-hardening, Igor Stoppa When a page is used for virtual memory, it is often necessary to obtain a handle to the corresponding vm_struct, which refers to the virtually contiguous area generated when invoking vmalloc. The struct page has a "mapping" field, which can be re-used to store a pointer to the parent area. This will avoid more expensive searches. As an example, the function find_vm_area is reimplemented to take advantage of the newly introduced field. Signed-off-by: Igor Stoppa <igor.stoppa@huawei.com> --- include/linux/mm_types.h | 1 + mm/vmalloc.c | 18 +++++++++++++----- 2 files changed, 14 insertions(+), 5 deletions(-) diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index cfd0ac4..2abd540 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -56,6 +56,7 @@ struct page { void *s_mem; /* slab first object */ atomic_t compound_mapcount; /* first tail page */ /* page_deferred_list().next -- second tail page */ + struct vm_struct *area; }; /* Second double word */ diff --git a/mm/vmalloc.c b/mm/vmalloc.c index 6739420..44c5dfc 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -1466,13 +1466,16 @@ struct vm_struct *get_vm_area_caller(unsigned long size, unsigned long flags, */ struct vm_struct *find_vm_area(const void *addr) { - struct vmap_area *va; + struct page *page; - va = find_vmap_area((unsigned long)addr); - if (va && va->flags & VM_VM_AREA) - return va->vm; + if (unlikely(!is_vmalloc_addr(addr))) + return NULL; - return NULL; + page = vmalloc_to_page(addr); + if (unlikely(!page)) + return NULL; + + return page->area; } /** @@ -1536,6 +1539,7 @@ static void __vunmap(const void *addr, int deallocate_pages) struct page *page = area->pages[i]; BUG_ON(!page); + page->area = NULL; __free_pages(page, 0); } @@ -1744,6 +1748,7 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align, const void *caller) { struct vm_struct *area; + unsigned int page_counter; void *addr; unsigned long real_size = size; @@ -1769,6 +1774,9 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align, kmemleak_vmalloc(area, size, gfp_mask); + for (page_counter = 0; page_counter < area->nr_pages; page_counter++) + area->pages[page_counter]->area = area; + return addr; fail: -- 2.9.3 ^ permalink raw reply related [flat|nested] 84+ messages in thread
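What the back-pointer buys is that resolving a vmalloc address to its vm_struct no longer requires searching the vmap_area red-black tree: it reduces to the page-table walk in vmalloc_to_page() plus one field read. A minimal sketch of a consumer of the reworked find_vm_area() — vmalloc_area_size is a hypothetical helper for illustration, not part of the patch:

#include <linux/vmalloc.h>

/* Return the size of the vmalloc area backing addr, or 0 when addr
 * does not belong to a tracked vmalloc area.  With the page->area
 * link in place, this lookup is cheap enough for hotter paths such
 * as the usercopy checks later in the series.
 */
static unsigned long vmalloc_area_size(const void *addr)
{
	struct vm_struct *vm = find_vm_area(addr);

	return vm ? vm->size : 0;
}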
* [PATCH 4/6] Protectable Memory @ 2018-02-03 19:42 ` Igor Stoppa 0 siblings, 0 replies; 84+ messages in thread From: Igor Stoppa @ 2018-02-03 19:42 UTC (permalink / raw) To: jglisse, keescook, mhocko, labbott, hch, willy Cc: cl, linux-security-module, linux-mm, linux-kernel, kernel-hardening, Igor Stoppa The MMU available in many systems running Linux can often provide R/O protection to the memory pages it handles. However, the MMU-based protection works efficiently only when said pages contain exclusively data that will not need further modifications. Statically allocated variables can be segregated into a dedicated section, but this does not sit very well with dynamically allocated ones. Dynamic allocation currently provides no means of grouping variables in memory pages that would contain exclusively data suitable for conversion to read-only access mode. The allocator provided here (pmalloc - protectable memory allocator) introduces the concept of pools of protectable memory. A module can request a pool and then direct any allocation request to the pool handle it has received. Once all the chunks of memory associated with a specific pool are initialized, the pool can be protected. After this point, the pool can only be destroyed (it is up to the module to avoid any further references to the memory from the pool, after the destruction is invoked). The latter case is mainly meant for releasing memory when a module is unloaded. A module can have as many pools as needed, for example to support the protection of data that is initialized in sufficiently distinct phases. Signed-off-by: Igor Stoppa <igor.stoppa@huawei.com> --- include/linux/genalloc.h | 3 + include/linux/pmalloc.h | 211 +++++++++++++++++++ include/linux/vmalloc.h | 1 + lib/genalloc.c | 27 +++ mm/Makefile | 1 + mm/pmalloc.c | 514 +++++++++++++++++++++++++++++++++++++++++++++++ mm/usercopy.c | 25 ++- 7 files changed, 778 insertions(+), 4 deletions(-) create mode 100644 include/linux/pmalloc.h create mode 100644 mm/pmalloc.c diff --git a/include/linux/genalloc.h b/include/linux/genalloc.h index dcaa33e..b6c4cea 100644 --- a/include/linux/genalloc.h +++ b/include/linux/genalloc.h @@ -121,6 +121,9 @@ extern unsigned long gen_pool_alloc_algo(struct gen_pool *, size_t, extern void *gen_pool_dma_alloc(struct gen_pool *pool, size_t size, dma_addr_t *dma); extern void gen_pool_free(struct gen_pool *, unsigned long, size_t); + +extern void gen_pool_flush_chunk(struct gen_pool *pool, + struct gen_pool_chunk *chunk); extern void gen_pool_for_each_chunk(struct gen_pool *, void (*)(struct gen_pool *, struct gen_pool_chunk *, void *), void *); extern size_t gen_pool_avail(struct gen_pool *); diff --git a/include/linux/pmalloc.h b/include/linux/pmalloc.h new file mode 100644 index 0000000..6d4a24e --- /dev/null +++ b/include/linux/pmalloc.h @@ -0,0 +1,211 @@ +/* SPDX-License-Identifier: GPL-2.0 + * + * pmalloc.h: Header for Protectable Memory Allocator + * + * (C) Copyright 2017 Huawei Technologies Co. Ltd. + * Author: Igor Stoppa <igor.stoppa@huawei.com> + */ + +#ifndef _PMALLOC_H +#define _PMALLOC_H + + +#include <linux/genalloc.h> +#include <linux/kernel.h> +#include <linux/string.h> + +#define PMALLOC_DEFAULT_ALLOC_ORDER (-1) + +/* + * Library for dynamic allocation of pools of memory that can be, + * after initialization, marked as read-only.
+ * + * This is intended to complement __ro_after_init, for those cases + * where either it is not possible to know the initialization value before + * init is completed, or the amount of data is variable and can be + * determined only at run-time. + * + * ***WARNING*** + * The user of the API is expected to synchronize: + * 1) allocation, + * 2) writes to the allocated memory, + * 3) write protection of the pool, + * 4) freeing of the allocated memory, and + * 5) destruction of the pool. + * + * For a non-threaded scenario, this type of locking is not even required. + * + * Even if the library were to provide support for locking, point 2) + * would still depend on the user taking the lock. + */ + + +/** + * pmalloc_create_pool - create a new protectable memory pool + * @name: the name of the pool, must be unique + * @min_alloc_order: log2 of the minimum allocation size obtainable + * from the pool + * + * Creates a new (empty) memory pool for allocation of protectable + * memory. Memory will be allocated upon request (through pmalloc). + * + * Returns a pointer to the new pool upon success, otherwise NULL. + */ +struct gen_pool *pmalloc_create_pool(const char *name, + int min_alloc_order); + + +int is_pmalloc_object(const void *ptr, const unsigned long n); + +/** + * pmalloc_prealloc - tries to allocate a memory chunk of the requested size + * @pool: handle to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * + * Prepares a chunk of the requested size. + * This is intended to both minimize latency in later memory requests and + * avoid sleeping during allocation. + * Memory allocated with prealloc is stored in a single chunk, as + * opposed to memory allocated on demand when pmalloc runs out of free + * space in the pool and has to invoke vmalloc. + * + * Returns true if the vmalloc call was successful, false otherwise. + */ +bool pmalloc_prealloc(struct gen_pool *pool, size_t size); + +/** + * pmalloc - allocate protectable memory from a pool + * @pool: handle to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * @gfp: flags for page allocation + * + * Allocates memory from an unprotected pool. If the pool doesn't have + * enough memory, and the request did not include GFP_ATOMIC, an attempt + * is made to add a new chunk of memory to the pool + * (a multiple of PAGE_SIZE), in order to fit the new request. + * Otherwise, NULL is returned. + * + * Returns the pointer to the memory requested upon success, + * NULL otherwise (either no memory available or pool already read-only). + */ +void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp); + + +/** + * pzalloc - zero-initialized version of pmalloc + * @pool: handle to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * @gfp: flags for page allocation + * + * Executes pmalloc, initializing the memory requested to 0, + * before returning the pointer to it. + * + * Returns the pointer to the zeroed memory requested, upon success, + * NULL otherwise (either no memory available or pool already read-only).
+ */ +static inline void *pzalloc(struct gen_pool *pool, size_t size, gfp_t gfp) +{ + return pmalloc(pool, size, gfp | __GFP_ZERO); +} + +/** + * pmalloc_array - allocate an array from a pool + * @pool: handle to the pool to be used for memory allocation + * @n: number of elements in the array + * @size: size (in bytes) of each element + * @flags: flags for page allocation + * + * Invokes pmalloc for n * size bytes, after validating the parameters + * and rejecting requests whose total size would overflow. + * + * Returns either NULL or the pmalloc result. + */ +static inline void *pmalloc_array(struct gen_pool *pool, size_t n, + size_t size, gfp_t flags) +{ + if (unlikely(!(pool && n && size))) + return NULL; + /* Guard against multiplication overflow of the total size. */ + if (unlikely(n > SIZE_MAX / size)) + return NULL; + return pmalloc(pool, n * size, flags); +} + +/** + * pcalloc - allocate a 0-initialized array from a pool + * @pool: handle to the pool to be used for memory allocation + * @n: number of elements in the array + * @size: size (in bytes) of each element + * @flags: flags for page allocation + * + * Invokes pmalloc_array, adding __GFP_ZERO to the allocation flags. + * + * Returns either NULL or the pmalloc result. + */ +static inline void *pcalloc(struct gen_pool *pool, size_t n, + size_t size, gfp_t flags) +{ + return pmalloc_array(pool, n, size, flags | __GFP_ZERO); +} + +/** + * pstrdup - duplicate a string, using pmalloc as allocator + * @pool: handle to the pool to be used for memory allocation + * @s: string to duplicate + * @gfp: flags for page allocation + * + * Generates a copy of the given string, allocating sufficient memory + * from the given pmalloc pool. + * + * Returns a pointer to the replica, NULL in case of recoverable error. + */ +static inline char *pstrdup(struct gen_pool *pool, const char *s, gfp_t gfp) +{ + size_t len; + char *buf; + + if (unlikely(pool == NULL || s == NULL)) + return NULL; + + len = strlen(s) + 1; + buf = pmalloc(pool, len, gfp); + if (likely(buf)) + strncpy(buf, s, len); + return buf; +} + +/** + * pmalloc_protect_pool - turn a read/write pool read-only + * @pool: the pool to protect + * + * Write-protects all the memory chunks assigned to the pool. + * This prevents any further allocation. + * + * Returns 0 upon success, -EINVAL in abnormal cases. + */ +int pmalloc_protect_pool(struct gen_pool *pool); + +/** + * pfree - mark as unused memory that was previously in use + * @pool: handle to the pool the memory was allocated from + * @addr: the beginning of the memory area to be freed + * + * The behavior of pfree depends on the protection state of the pool. + * If the pool is not yet protected, the memory is marked as unused and + * will be available for further allocations. + * If the pool is already protected, the memory is marked as unused, but + * no further allocation can take place, because of the existing + * protection. + * The freed memory, in this case, will be truly released only when the + * pool is destroyed. + */ +static inline void pfree(struct gen_pool *pool, const void *addr) +{ + gen_pool_free(pool, (unsigned long)addr, 0); +} + +/** + * pmalloc_destroy_pool - destroys a pool and all the associated memory + * @pool: the pool to destroy + * + * All the memory that was allocated through pmalloc in the pool will be freed. + * + * Returns 0 upon success, -EINVAL in abnormal cases.
+ */ +int pmalloc_destroy_pool(struct gen_pool *pool); + +#endif diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h index 1e5d8c3..116d280 100644 --- a/include/linux/vmalloc.h +++ b/include/linux/vmalloc.h @@ -20,6 +20,7 @@ struct notifier_block; /* in notifier.h */ #define VM_UNINITIALIZED 0x00000020 /* vm_struct is not fully initialized */ #define VM_NO_GUARD 0x00000040 /* don't add guard page */ #define VM_KASAN 0x00000080 /* has allocated kasan shadow memory */ +#define VM_PMALLOC 0x00000100 /* pmalloc area - see docs */ /* bits [20..32] reserved for arch specific ioremap internals */ /* diff --git a/lib/genalloc.c b/lib/genalloc.c index dde7830..7ba2ec9 100644 --- a/lib/genalloc.c +++ b/lib/genalloc.c @@ -519,6 +519,33 @@ void gen_pool_free(struct gen_pool *pool, unsigned long addr, size_t size) } EXPORT_SYMBOL(gen_pool_free); + +/** + * gen_pool_flush_chunk - drops all the allocations from a specific chunk + * @pool: the generic memory pool + * @chunk: the chunk to wipe clear + * + * This is meant to be called only while destroying a pool. It's up to the + * caller to avoid races, but really, at this point the pool should have + * already been retired and have become unavailable for any other sort of + * operation. + */ +void gen_pool_flush_chunk(struct gen_pool *pool, + struct gen_pool_chunk *chunk) +{ + size_t size; + + if (unlikely(!(pool && chunk))) + return; + + size = chunk->end_addr + 1 - chunk->start_addr; + memset(chunk->entries, 0, + DIV_ROUND_UP((size >> pool->min_alloc_order) * BITS_PER_ENTRY, + BITS_PER_BYTE)); + atomic_long_set(&chunk->avail, size); +} + + /** * gen_pool_for_each_chunk - call func for every chunk of generic memory pool * @pool: the generic memory pool diff --git a/mm/Makefile b/mm/Makefile index e669f02..a6a47e1 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -65,6 +65,7 @@ obj-$(CONFIG_SPARSEMEM) += sparse.o obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o obj-$(CONFIG_SLOB) += slob.o obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o +obj-$(CONFIG_ARCH_HAS_SET_MEMORY) += pmalloc.o obj-$(CONFIG_KSM) += ksm.o obj-$(CONFIG_PAGE_POISONING) += page_poison.o obj-$(CONFIG_SLAB) += slab.o diff --git a/mm/pmalloc.c b/mm/pmalloc.c new file mode 100644 index 0000000..11daca2 --- /dev/null +++ b/mm/pmalloc.c @@ -0,0 +1,514 @@ +/* SPDX-License-Identifier: GPL-2.0 + * + * pmalloc.c: Protectable Memory Allocator + * + * (C) Copyright 2017 Huawei Technologies Co. Ltd. + * Author: Igor Stoppa <igor.stoppa@huawei.com> + */ + +#include <linux/printk.h> +#include <linux/init.h> +#include <linux/mm.h> +#include <linux/vmalloc.h> +#include <linux/genalloc.h> +#include <linux/kernel.h> +#include <linux/log2.h> +#include <linux/slab.h> +#include <linux/device.h> +#include <linux/atomic.h> +#include <linux/rculist.h> +#include <linux/set_memory.h> +#include <asm/cacheflush.h> +#include <asm/page.h> + +#include "pmalloc-selftest.h" + +/** + * pmalloc_data contains the data specific to a pmalloc pool, + * in a format compatible with the design of genalloc. + * Some of the fields are used for exposing the corresponding parameter + * to userspace, through sysfs. + */ +struct pmalloc_data { + struct gen_pool *pool; /* Link back to the associated pool. */ + bool protected; /* Status of the pool: RO or RW. */ + struct kobj_attribute attr_protected; /* Sysfs attribute. */ + struct kobj_attribute attr_avail; /* Sysfs attribute. */ + struct kobj_attribute attr_size; /* Sysfs attribute. */ + struct kobj_attribute attr_chunks; /* Sysfs attribute.
*/ + struct kobject *pool_kobject; + struct list_head node; /* list of pools */ +}; + +static LIST_HEAD(pmalloc_final_list); +static LIST_HEAD(pmalloc_tmp_list); +static struct list_head *pmalloc_list = &pmalloc_tmp_list; +static DEFINE_MUTEX(pmalloc_mutex); +static struct kobject *pmalloc_kobject; + +static ssize_t pmalloc_pool_show_protected(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_protected); + if (data->protected) + return sprintf(buf, "protected\n"); + else + return sprintf(buf, "unprotected\n"); +} + +static ssize_t pmalloc_pool_show_avail(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_avail); + return sprintf(buf, "%lu\n", + (unsigned long)gen_pool_avail(data->pool)); +} + +static ssize_t pmalloc_pool_show_size(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_size); + return sprintf(buf, "%lu\n", + (unsigned long)gen_pool_size(data->pool)); +} + +static void pool_chunk_number(struct gen_pool *pool, + struct gen_pool_chunk *chunk, void *data) +{ + unsigned long *counter = data; + + (*counter)++; +} + +static ssize_t pmalloc_pool_show_chunks(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + unsigned long chunks_num = 0; + + data = container_of(attr, struct pmalloc_data, attr_chunks); + gen_pool_for_each_chunk(data->pool, pool_chunk_number, &chunks_num); + return sprintf(buf, "%lu\n", chunks_num); +} + +/** + * Exposes the pool and its attributes through sysfs. + */ +static struct kobject *pmalloc_connect(struct pmalloc_data *data) +{ + const struct attribute *attrs[] = { + &data->attr_protected.attr, + &data->attr_avail.attr, + &data->attr_size.attr, + &data->attr_chunks.attr, + NULL + }; + struct kobject *kobj; + + kobj = kobject_create_and_add(data->pool->name, pmalloc_kobject); + if (unlikely(!kobj)) + return NULL; + + if (unlikely(sysfs_create_files(kobj, attrs) < 0)) { + kobject_put(kobj); + kobj = NULL; + } + return kobj; +} + +/** + * Removes the pool and its attributes from sysfs. + */ +static void pmalloc_disconnect(struct pmalloc_data *data, + struct kobject *kobj) +{ + const struct attribute *attrs[] = { + &data->attr_protected.attr, + &data->attr_avail.attr, + &data->attr_size.attr, + &data->attr_chunks.attr, + NULL + }; + + sysfs_remove_files(kobj, attrs); + kobject_put(kobj); +} + +/** + * Declares an attribute of the pool. 
+ */ + +#define pmalloc_attr_init(data, attr_name) \ +do { \ + sysfs_attr_init(&data->attr_##attr_name.attr); \ + data->attr_##attr_name.attr.name = #attr_name; \ + data->attr_##attr_name.attr.mode = VERIFY_OCTAL_PERMISSIONS(0400); \ + data->attr_##attr_name.show = pmalloc_pool_show_##attr_name; \ +} while (0) + +struct gen_pool *pmalloc_create_pool(const char *name, int min_alloc_order) +{ + struct gen_pool *pool; + const char *pool_name; + struct pmalloc_data *data; + + if (!name) { + WARN_ON(1); + return NULL; + } + + if (min_alloc_order < 0) + min_alloc_order = ilog2(sizeof(unsigned long)); + + pool = gen_pool_create(min_alloc_order, NUMA_NO_NODE); + if (unlikely(!pool)) + return NULL; + + mutex_lock(&pmalloc_mutex); + list_for_each_entry(data, pmalloc_list, node) + if (!strcmp(name, data->pool->name)) + goto same_name_err; + + pool_name = kstrdup(name, GFP_KERNEL); + if (unlikely(!pool_name)) + goto name_alloc_err; + + data = kzalloc(sizeof(struct pmalloc_data), GFP_KERNEL); + if (unlikely(!data)) + goto data_alloc_err; + + data->protected = false; + data->pool = pool; + pmalloc_attr_init(data, protected); + pmalloc_attr_init(data, avail); + pmalloc_attr_init(data, size); + pmalloc_attr_init(data, chunks); + pool->data = data; + pool->name = pool_name; + + list_add(&data->node, pmalloc_list); + if (pmalloc_list == &pmalloc_final_list) + data->pool_kobject = pmalloc_connect(data); + mutex_unlock(&pmalloc_mutex); + return pool; + +data_alloc_err: + kfree(pool_name); +name_alloc_err: +same_name_err: + mutex_unlock(&pmalloc_mutex); + gen_pool_destroy(pool); + return NULL; +} + +static inline int check_alloc_params(struct gen_pool *pool, size_t req_size) +{ + struct pmalloc_data *data; + + if (unlikely(!req_size || !pool)) + return -1; + + data = pool->data; + + if (data == NULL) + return -1; + + if (unlikely(data->protected)) { + WARN_ON(1); + return -1; + } + return 0; +} + + +static inline bool chunk_tagging(void *chunk, bool tag) +{ + struct vm_struct *area; + struct page *page; + + if (!is_vmalloc_addr(chunk)) + return false; + + page = vmalloc_to_page(chunk); + if (unlikely(!page)) + return false; + + area = page->area; + if (tag) + area->flags |= VM_PMALLOC; + else + area->flags &= ~VM_PMALLOC; + return true; +} + + +static inline bool tag_chunk(void *chunk) +{ + return chunk_tagging(chunk, true); +} + + +static inline bool untag_chunk(void *chunk) +{ + return chunk_tagging(chunk, false); +} + +enum { + INVALID_PMALLOC_OBJECT = -1, + NOT_PMALLOC_OBJECT = 0, + VALID_PMALLOC_OBJECT = 1, +}; + +int is_pmalloc_object(const void *ptr, const unsigned long n) +{ + struct vm_struct *area; + struct page *page; + unsigned long area_start; + unsigned long area_end; + unsigned long object_start; + unsigned long object_end; + + /* is_pmalloc_object gets called pretty late, so chances are high + * that the object is indeed of vmalloc type + */ + if (unlikely(!is_vmalloc_addr(ptr))) + return NOT_PMALLOC_OBJECT; + + page = vmalloc_to_page(ptr); + if (unlikely(!page)) + return NOT_PMALLOC_OBJECT; + + area = page->area; + + if (likely(!(area->flags & VM_PMALLOC))) + return NOT_PMALLOC_OBJECT; + + area_start = (unsigned long)area->addr; + area_end = area_start + area->nr_pages * PAGE_SIZE - 1; + object_start = (unsigned long)ptr; + object_end = object_start + n - 1; + + if (likely((area_start <= object_start) && + (object_end <= area_end))) + return VALID_PMALLOC_OBJECT; + else + return INVALID_PMALLOC_OBJECT; +} + + +bool
pmalloc_prealloc(struct gen_pool *pool, size_t size) +{ + void *chunk; + size_t chunk_size; + int add_error; + + if (check_alloc_params(pool, size)) + return false; + + /* Expand pool */ + chunk_size = roundup(size, PAGE_SIZE); + chunk = vmalloc(chunk_size); + if (unlikely(chunk == NULL)) + return false; + + /* Locking is already done inside gen_pool_add */ + add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size, + NUMA_NO_NODE); + if (unlikely(add_error != 0)) + goto abort; + + return true; +abort: + vfree_atomic(chunk); + return false; +} + +void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp) +{ + void *chunk; + size_t chunk_size; + int add_error; + unsigned long retval; + + if (check_alloc_params(pool, size)) + return NULL; + +retry_alloc_from_pool: + retval = gen_pool_alloc(pool, size); + if (retval) + goto return_allocation; + + if (unlikely((gfp & __GFP_ATOMIC))) { + if (unlikely((gfp & __GFP_NOFAIL))) + goto retry_alloc_from_pool; + else + return NULL; + } + + /* Expand pool */ + chunk_size = roundup(size, PAGE_SIZE); + chunk = vmalloc(chunk_size); + if (unlikely(!chunk)) { + if (unlikely((gfp & __GFP_NOFAIL))) + goto retry_alloc_from_pool; + else + return NULL; + } + if (unlikely(!tag_chunk(chunk))) + goto free; + + /* Locking is already done inside gen_pool_add */ + add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size, + NUMA_NO_NODE); + if (unlikely(add_error)) + goto abort; + + retval = gen_pool_alloc(pool, size); + if (retval) { +return_allocation: + *(size_t *)retval = size; + if (gfp & __GFP_ZERO) + memset((void *)retval, 0, size); + return (void *)retval; + } + /* Here there is no test for __GFP_NOFAIL because, in case of + * concurrent allocation, one thread might add a chunk to the + * pool and this memory could be allocated by another thread, + * before the first thread gets a chance to use it. + * As long as vmalloc succeeds, it's ok to retry.
+ */ + goto retry_alloc_from_pool; +abort: + untag_chunk(chunk); +free: + vfree_atomic(chunk); + return NULL; +} + +static void pmalloc_chunk_set_protection(struct gen_pool *pool, + struct gen_pool_chunk *chunk, + void *data) +{ + const bool *flag = data; + size_t chunk_size = chunk->end_addr + 1 - chunk->start_addr; + unsigned long pages = chunk_size / PAGE_SIZE; + + BUG_ON(chunk_size & (PAGE_SIZE - 1)); + + if (*flag) + set_memory_ro(chunk->start_addr, pages); + else + set_memory_rw(chunk->start_addr, pages); +} + +static int pmalloc_pool_set_protection(struct gen_pool *pool, bool protection) +{ + struct pmalloc_data *data; + struct gen_pool_chunk *chunk; + + if (unlikely(!pool)) + return -EINVAL; + + data = pool->data; + + if (unlikely(!data)) + return -EINVAL; + + if (unlikely(data->protected == protection)) { + WARN_ON(1); + return 0; + } + + data->protected = protection; + list_for_each_entry(chunk, &pool->chunks, next_chunk) + pmalloc_chunk_set_protection(pool, chunk, &protection); + return 0; +} + +int pmalloc_protect_pool(struct gen_pool *pool) +{ + return pmalloc_pool_set_protection(pool, true); +} + + +static void pmalloc_chunk_free(struct gen_pool *pool, + struct gen_pool_chunk *chunk, void *data) +{ + untag_chunk(chunk); + gen_pool_flush_chunk(pool, chunk); + vfree_atomic((void *)chunk->start_addr); +} + + +int pmalloc_destroy_pool(struct gen_pool *pool) +{ + struct pmalloc_data *data; + + if (unlikely(pool == NULL)) + return -EINVAL; + + data = pool->data; + + if (unlikely(data == NULL)) + return -EINVAL; + + mutex_lock(&pmalloc_mutex); + list_del(&data->node); + mutex_unlock(&pmalloc_mutex); + + if (likely(data->pool_kobject)) + pmalloc_disconnect(data, data->pool_kobject); + + pmalloc_pool_set_protection(pool, false); + gen_pool_for_each_chunk(pool, pmalloc_chunk_free, NULL); + gen_pool_destroy(pool); + kfree(data); + return 0; +} + +/** + * When sysfs is ready to receive registrations, connect all the + * pools previously created. Also enable further pools to be connected + * right away. + */ +static int __init pmalloc_late_init(void) +{ + struct pmalloc_data *data, *n; + + pmalloc_kobject = kobject_create_and_add("pmalloc", kernel_kobj); + + mutex_lock(&pmalloc_mutex); + pmalloc_list = &pmalloc_final_list; + + if (likely(pmalloc_kobject != NULL)) { + list_for_each_entry_safe(data, n, &pmalloc_tmp_list, node) { + list_move(&data->node, &pmalloc_final_list); + pmalloc_connect(data); + } + } + mutex_unlock(&pmalloc_mutex); + pmalloc_selftest(); + return 0; +} +late_initcall(pmalloc_late_init); diff --git a/mm/usercopy.c b/mm/usercopy.c index a9852b2..c3b1029 100644 --- a/mm/usercopy.c +++ b/mm/usercopy.c @@ -15,6 +15,7 @@ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt #include <linux/mm.h> +#include <linux/pmalloc.h> #include <linux/slab.h> #include <linux/sched.h> #include <linux/sched/task.h> @@ -222,6 +223,7 @@ static inline const char *check_heap_object(const void *ptr, unsigned long n, void __check_object_size(const void *ptr, unsigned long n, bool to_user) { const char *err; + int retv; /* Skip all tests if size is zero. */ if (!n) return; @@ -229,12 +231,12 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user) /* Check for invalid addresses. */ err = check_bogus_address(ptr, n); - if (err) + if (unlikely(err)) goto report; /* Check for bad heap object. */ err = check_heap_object(ptr, n, to_user); - if (err) + if (unlikely(err)) goto report; /* Check for bad stack object.
*/ @@ -257,8 +259,23 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user) /* Check for object in kernel to avoid text exposure. */ err = check_kernel_text_object(ptr, n); - if (!err) - return; + if (unlikely(err)) + goto report; + + /* Check if object is from a pmalloc chunk. + */ + retv = is_pmalloc_object(ptr, n); + if (unlikely(retv)) { + if (unlikely(!to_user)) { + err = "<trying to write to pmalloc object>"; + goto report; + } + if (retv < 0) { + err = "<invalid pmalloc object>"; + goto report; + } + } + return; report: report_usercopy(ptr, n, to_user, err); -- 2.9.3 ^ permalink raw reply related [flat|nested] 84+ messages in thread
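To show how the pieces of the API are meant to be combined, here is a usage sketch. It is illustrative only, not code from the patch set: the pool name "example", the variables my_pool and my_cfg, and struct my_config are invented for the example; the calls are the ones declared in pmalloc.h above.

#include <linux/module.h>
#include <linux/pmalloc.h>

/* Hypothetical data that must become read-only once initialized. */
struct my_config {
	unsigned long threshold;
	bool verbose;
};

static struct gen_pool *my_pool;
static struct my_config *my_cfg;

static int __init my_init(void)
{
	my_pool = pmalloc_create_pool("example", PMALLOC_DEFAULT_ALLOC_ORDER);
	if (!my_pool)
		return -ENOMEM;

	my_cfg = pzalloc(my_pool, sizeof(*my_cfg), GFP_KERNEL);
	if (!my_cfg) {
		pmalloc_destroy_pool(my_pool);
		return -ENOMEM;
	}

	/* Write the data while the pool is still R/W... */
	my_cfg->threshold = 128;
	my_cfg->verbose = true;

	/* ...then seal it: every chunk of the pool becomes R/O. */
	pmalloc_protect_pool(my_pool);
	return 0;
}

static void __exit my_exit(void)
{
	/*
	 * A protected pool cannot be turned back to R/W: the memory is
	 * first released with pfree, then recovered by destroying the
	 * whole pool.
	 */
	pfree(my_pool, my_cfg);
	pmalloc_destroy_pool(my_pool);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");

As the warning in pmalloc.h notes, in a threaded context the caller is responsible for serializing the allocation, write, protection, freeing and destruction steps.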
* [PATCH 4/6] Protectable Memory @ 2018-02-03 19:42 ` Igor Stoppa 0 siblings, 0 replies; 84+ messages in thread From: Igor Stoppa @ 2018-02-03 19:42 UTC (permalink / raw) To: jglisse, keescook, mhocko, labbott, hch, willy Cc: cl, linux-security-module, linux-mm, linux-kernel, kernel-hardening, Igor Stoppa The MMU available in many systems running Linux can often provide R/O protection to the memory pages it handles. However, the MMU-based protection works efficiently only when said pages contain exclusively data that will not need further modifications. Statically allocated variables can be segregated into a dedicated section, but this does not sit very well with dynamically allocated ones. Dynamic allocation does not provide, currently, any means for grouping variables in memory pages that would contain exclusively data suitable for conversion to read only access mode. The allocator here provided (pmalloc - protectable memory allocator) introduces the concept of pools of protectable memory. A module can request a pool and then refer any allocation request to the pool handler it has received. Once all the chunks of memory associated to a specific pool are initialized, the pool can be protected. After this point, the pool can only be destroyed (it is up to the module to avoid any further references to the memory from the pool, after the destruction is invoked). The latter case is mainly meant for releasing memory, when a module is unloaded. A module can have as many pools as needed, for example to support the protection of data that is initialized in sufficiently distinct phases. Signed-off-by: Igor Stoppa <igor.stoppa@huawei.com> --- include/linux/genalloc.h | 3 + include/linux/pmalloc.h | 211 +++++++++++++++++++ include/linux/vmalloc.h | 1 + lib/genalloc.c | 27 +++ mm/Makefile | 1 + mm/pmalloc.c | 514 +++++++++++++++++++++++++++++++++++++++++++++++ mm/usercopy.c | 25 ++- 7 files changed, 778 insertions(+), 4 deletions(-) create mode 100644 include/linux/pmalloc.h create mode 100644 mm/pmalloc.c diff --git a/include/linux/genalloc.h b/include/linux/genalloc.h index dcaa33e..b6c4cea 100644 --- a/include/linux/genalloc.h +++ b/include/linux/genalloc.h @@ -121,6 +121,9 @@ extern unsigned long gen_pool_alloc_algo(struct gen_pool *, size_t, extern void *gen_pool_dma_alloc(struct gen_pool *pool, size_t size, dma_addr_t *dma); extern void gen_pool_free(struct gen_pool *, unsigned long, size_t); + +extern void gen_pool_flush_chunk(struct gen_pool *pool, + struct gen_pool_chunk *chunk); extern void gen_pool_for_each_chunk(struct gen_pool *, void (*)(struct gen_pool *, struct gen_pool_chunk *, void *), void *); extern size_t gen_pool_avail(struct gen_pool *); diff --git a/include/linux/pmalloc.h b/include/linux/pmalloc.h new file mode 100644 index 0000000..6d4a24e --- /dev/null +++ b/include/linux/pmalloc.h @@ -0,0 +1,211 @@ +/* SPDX-License-Identifier: GPL-2.0 + * + * pmalloc.h: Header for Protectable Memory Allocator + * + * (C) Copyright 2017 Huawei Technologies Co. Ltd. + * Author: Igor Stoppa <igor.stoppa@huawei.com> + */ + +#ifndef _PMALLOC_H +#define _PMALLOC_H + + +#include <linux/genalloc.h> +#include <linux/string.h> + +#define PMALLOC_DEFAULT_ALLOC_ORDER (-1) + +/* + * Library for dynamic allocation of pools of memory that can be, + * after initialization, marked as read-only. 
+ * + * This is intended to complement __read_only_after_init, for those cases + * where either it is not possible to know the initialization value before + * init is completed, or the amount of data is variable and can be + * determined only at run-time. + * + * ***WARNING*** + * The user of the API is expected to synchronize: + * 1) allocation, + * 2) writes to the allocated memory, + * 3) write protection of the pool, + * 4) freeing of the allocated memory, and + * 5) destruction of the pool. + * + * For a non-threaded scenario, this type of locking is not even required. + * + * Even if the library were to provide support for locking, point 2) + * would still depend on the user taking the lock. + */ + + +/** + * pmalloc_create_pool - create a new protectable memory pool + * @name: the name of the pool, must be unique + * @min_alloc_order: log2 of the minimum allocation size obtainable + * from the pool + * + * Creates a new (empty) memory pool for allocation of protectable + * memory. Memory will be allocated upon request (through pmalloc). + * + * Returns a pointer to the new pool upon success, otherwise NULL. + */ +struct gen_pool *pmalloc_create_pool(const char *name, + int min_alloc_order); + + +int is_pmalloc_object(const void *ptr, const unsigned long n); + +/** + * pmalloc_prealloc - tries to allocate a memory chunk of the requested size + * @pool: handle to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * + * Prepares a chunk of the requested size. + * This is intended to both minimize latency in later memory requests and + * avoid sleeping during allocation. + * Memory allocated with prealloc is stored in one single chunk, as + * opposed to what is allocated on-demand when pmalloc runs out of free + * space already existing in the pool and has to invoke vmalloc. + * + * Returns true if the vmalloc call was successful, false otherwise. + */ +bool pmalloc_prealloc(struct gen_pool *pool, size_t size); + +/** + * pmalloc - allocate protectable memory from a pool + * @pool: handle to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * @gfp: flags for page allocation + * + * Allocates memory from an unprotected pool. If the pool doesn't have + * enough memory, and the request did not include GFP_ATOMIC, an attempt + * is made to add a new chunk of memory to the pool + * (a multiple of PAGE_SIZE), in order to fit the new request. + * Otherwise, NULL is returned. + * + * Returns the pointer to the memory requested upon success, + * NULL otherwise (either no memory available or pool already read-only). + */ +void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp); + + +/** + * pzalloc - zero-initialized version of pmalloc + * @pool: handle to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * @gfp: flags for page allocation + * + * Executes pmalloc, initializing the memory requested to 0, + * before returning the pointer to it. + * + * Returns the pointer to the zeroed memory requested, upon success, + * NULL otherwise (either no memory available or pool already read-only).
+ */ +static inline void *pzalloc(struct gen_pool *pool, size_t size, gfp_t gfp) +{ + return pmalloc(pool, size, gfp | __GFP_ZERO); +} + +/** + * pmalloc_array - allocates an array according to the parameters + * @pool: handle to the pool to be used for memory allocation + * @n: number of elements in the array + * @size: amount of memory (in bytes) requested for each element + * @flags: flags for page allocation + * + * Executes pmalloc, if it has a chance to succeed. + * + * Returns either NULL or the pmalloc result. + */ +static inline void *pmalloc_array(struct gen_pool *pool, size_t n, + size_t size, gfp_t flags) +{ + if (unlikely(!(pool && n && size))) + return NULL; + return pmalloc(pool, n * size, flags); +} + +/** + * pcalloc - allocates a 0-initialized array according to the parameters + * @pool: handle to the pool to be used for memory allocation + * @n: number of elements in the array + * @size: amount of memory (in bytes) requested for each element + * @flags: flags for page allocation + * + * Executes pmalloc_array, if it has a chance to succeed. + * + * Returns either NULL or the pmalloc result. + */ +static inline void *pcalloc(struct gen_pool *pool, size_t n, + size_t size, gfp_t flags) +{ + return pmalloc_array(pool, n, size, flags | __GFP_ZERO); +} + +/** + * pstrdup - duplicate a string, using pmalloc as allocator + * @pool: handle to the pool to be used for memory allocation + * @s: string to duplicate + * @gfp: flags for page allocation + * + * Generates a copy of the given string, allocating sufficient memory + * from the given pmalloc pool. + * + * Returns a pointer to the replica, NULL in case of recoverable error. + */ +static inline char *pstrdup(struct gen_pool *pool, const char *s, gfp_t gfp) +{ + size_t len; + char *buf; + + if (unlikely(pool == NULL || s == NULL)) + return NULL; + + len = strlen(s) + 1; + buf = pmalloc(pool, len, gfp); + if (likely(buf)) + strncpy(buf, s, len); + return buf; +} + +/** + * pmalloc_protect_pool - turn a read/write pool read-only + * @pool: the pool to protect + * + * Write-protects all the memory chunks assigned to the pool. + * This prevents any further allocation. + * + * Returns 0 upon success, -EINVAL in abnormal cases. + */ +int pmalloc_protect_pool(struct gen_pool *pool); + +/** + * pfree - mark as unused memory that was previously in use + * @pool: handle to the pool used for the memory allocation + * @addr: the beginning of the memory area to be freed + * + * The behavior of pfree is different, depending on the state of the + * protection. + * If the pool is not yet protected, the memory is marked as unused and + * will be available for further allocations. + * If the pool is already protected, the memory is marked as unused, but + * it will still be impossible to perform further allocation, because of + * the existing protection. + * The freed memory, in this case, will be truly released only when the + * pool is destroyed. + */ +static inline void pfree(struct gen_pool *pool, const void *addr) +{ + gen_pool_free(pool, (unsigned long)addr, 0); +} + +/** + * pmalloc_destroy_pool - destroys a pool and all the associated memory + * @pool: the pool to destroy + * + * All the memory that was allocated through pmalloc in the pool will be freed. + * + * Returns 0 upon success, -EINVAL in abnormal cases.
+ */ +int pmalloc_destroy_pool(struct gen_pool *pool); + +#endif diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h index 1e5d8c3..116d280 100644 --- a/include/linux/vmalloc.h +++ b/include/linux/vmalloc.h @@ -20,6 +20,7 @@ struct notifier_block; /* in notifier.h */ #define VM_UNINITIALIZED 0x00000020 /* vm_struct is not fully initialized */ #define VM_NO_GUARD 0x00000040 /* don't add guard page */ #define VM_KASAN 0x00000080 /* has allocated kasan shadow memory */ +#define VM_PMALLOC 0x00000100 /* pmalloc area - see docs */ /* bits [20..32] reserved for arch specific ioremap internals */ /* diff --git a/lib/genalloc.c b/lib/genalloc.c index dde7830..7ba2ec9 100644 --- a/lib/genalloc.c +++ b/lib/genalloc.c @@ -519,6 +519,33 @@ void gen_pool_free(struct gen_pool *pool, unsigned long addr, size_t size) } EXPORT_SYMBOL(gen_pool_free); + +/** + * gen_pool_flush_chunk - drops all the allocations from a specific chunk + * @pool: the generic memory pool + * @chunk: The chunk to wipe clear. + * + * This is meant to be called only while destroying a pool. It's up to the + * caller to avoid races, but really, at this point the pool should have + * already been retired and have become unavailable for any other sort of + * operation. + */ +void gen_pool_flush_chunk(struct gen_pool *pool, + struct gen_pool_chunk *chunk) +{ + size_t size; + + if (unlikely(!(pool && chunk))) + return; + + size = chunk->end_addr + 1 - chunk->start_addr; + memset(chunk->entries, 0, + DIV_ROUND_UP((size >> pool->min_alloc_order) * BITS_PER_ENTRY, + BITS_PER_BYTE)); + atomic_long_set(&chunk->avail, size); +} + + /** * gen_pool_for_each_chunk - call func for every chunk of generic memory pool * @pool: the generic memory pool diff --git a/mm/Makefile b/mm/Makefile index e669f02..a6a47e1 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -65,6 +65,7 @@ obj-$(CONFIG_SPARSEMEM) += sparse.o obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o obj-$(CONFIG_SLOB) += slob.o obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o +obj-$(CONFIG_ARCH_HAS_SET_MEMORY) += pmalloc.o obj-$(CONFIG_KSM) += ksm.o obj-$(CONFIG_PAGE_POISONING) += page_poison.o obj-$(CONFIG_SLAB) += slab.o diff --git a/mm/pmalloc.c b/mm/pmalloc.c new file mode 100644 index 0000000..11daca2 --- /dev/null +++ b/mm/pmalloc.c @@ -0,0 +1,514 @@ +/* SPDX-License-Identifier: GPL-2.0 + * + * pmalloc.c: Protectable Memory Allocator + * + * (C) Copyright 2017 Huawei Technologies Co. Ltd. + * Author: Igor Stoppa <igor.stoppa@huawei.com> + */ + +#include <linux/printk.h> +#include <linux/init.h> +#include <linux/mm.h> +#include <linux/vmalloc.h> +#include <linux/genalloc.h> +#include <linux/kernel.h> +#include <linux/log2.h> +#include <linux/slab.h> +#include <linux/device.h> +#include <linux/atomic.h> +#include <linux/rculist.h> +#include <linux/set_memory.h> +#include <asm/cacheflush.h> +#include <asm/page.h> + +#include "pmalloc-selftest.h" + +/** + * pmalloc_data contains the data specific to a pmalloc pool, + * in a format compatible with the design of gen_alloc. + * Some of the fields are used for exposing the corresponding parameter + * to userspace, through sysfs. + */ +struct pmalloc_data { + struct gen_pool *pool; /* Link back to the associated pool. */ + bool protected; /* Status of the pool: RO or RW. */ + struct kobj_attribute attr_protected; /* Sysfs attribute. */ + struct kobj_attribute attr_avail; /* Sysfs attribute. */ + struct kobj_attribute attr_size; /* Sysfs attribute. */ + struct kobj_attribute attr_chunks; /* Sysfs attribute.
*/ + struct kobject *pool_kobject; + struct list_head node; /* list of pools */ +}; + +static LIST_HEAD(pmalloc_final_list); +static LIST_HEAD(pmalloc_tmp_list); +static struct list_head *pmalloc_list = &pmalloc_tmp_list; +static DEFINE_MUTEX(pmalloc_mutex); +static struct kobject *pmalloc_kobject; + +static ssize_t pmalloc_pool_show_protected(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_protected); + if (data->protected) + return sprintf(buf, "protected\n"); + else + return sprintf(buf, "unprotected\n"); +} + +static ssize_t pmalloc_pool_show_avail(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_avail); + return sprintf(buf, "%lu\n", + (unsigned long)gen_pool_avail(data->pool)); +} + +static ssize_t pmalloc_pool_show_size(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_size); + return sprintf(buf, "%lu\n", + (unsigned long)gen_pool_size(data->pool)); +} + +static void pool_chunk_number(struct gen_pool *pool, + struct gen_pool_chunk *chunk, void *data) +{ + unsigned long *counter = data; + + (*counter)++; +} + +static ssize_t pmalloc_pool_show_chunks(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + unsigned long chunks_num = 0; + + data = container_of(attr, struct pmalloc_data, attr_chunks); + gen_pool_for_each_chunk(data->pool, pool_chunk_number, &chunks_num); + return sprintf(buf, "%lu\n", chunks_num); +} + +/** + * Exposes the pool and its attributes through sysfs. + */ +static struct kobject *pmalloc_connect(struct pmalloc_data *data) +{ + const struct attribute *attrs[] = { + &data->attr_protected.attr, + &data->attr_avail.attr, + &data->attr_size.attr, + &data->attr_chunks.attr, + NULL + }; + struct kobject *kobj; + + kobj = kobject_create_and_add(data->pool->name, pmalloc_kobject); + if (unlikely(!kobj)) + return NULL; + + if (unlikely(sysfs_create_files(kobj, attrs) < 0)) { + kobject_put(kobj); + kobj = NULL; + } + return kobj; +} + +/** + * Removes the pool and its attributes from sysfs. + */ +static void pmalloc_disconnect(struct pmalloc_data *data, + struct kobject *kobj) +{ + const struct attribute *attrs[] = { + &data->attr_protected.attr, + &data->attr_avail.attr, + &data->attr_size.attr, + &data->attr_chunks.attr, + NULL + }; + + sysfs_remove_files(kobj, attrs); + kobject_put(kobj); +} + +/** + * Declares an attribute of the pool. 
+ */ + +#define pmalloc_attr_init(data, attr_name) \ +do { \ + sysfs_attr_init(&data->attr_##attr_name.attr); \ + data->attr_##attr_name.attr.name = #attr_name; \ + data->attr_##attr_name.attr.mode = VERIFY_OCTAL_PERMISSIONS(0400); \ + data->attr_##attr_name.show = pmalloc_pool_show_##attr_name; \ +} while (0) + +struct gen_pool *pmalloc_create_pool(const char *name, int min_alloc_order) +{ + struct gen_pool *pool; + const char *pool_name; + struct pmalloc_data *data; + + if (!name) { + WARN_ON(1); + return NULL; + } + + if (min_alloc_order < 0) + min_alloc_order = ilog2(sizeof(unsigned long)); + + pool = gen_pool_create(min_alloc_order, NUMA_NO_NODE); + if (unlikely(!pool)) + return NULL; + + mutex_lock(&pmalloc_mutex); + list_for_each_entry(data, pmalloc_list, node) + if (!strcmp(name, data->pool->name)) + goto same_name_err; + + pool_name = kstrdup(name, GFP_KERNEL); + if (unlikely(!pool_name)) + goto name_alloc_err; + + data = kzalloc(sizeof(struct pmalloc_data), GFP_KERNEL); + if (unlikely(!data)) + goto data_alloc_err; + + data->protected = false; + data->pool = pool; + pmalloc_attr_init(data, protected); + pmalloc_attr_init(data, avail); + pmalloc_attr_init(data, size); + pmalloc_attr_init(data, chunks); + pool->data = data; + pool->name = pool_name; + + list_add(&data->node, pmalloc_list); + if (pmalloc_list == &pmalloc_final_list) + data->pool_kobject = pmalloc_connect(data); + mutex_unlock(&pmalloc_mutex); + return pool; + +data_alloc_err: + kfree(pool_name); +name_alloc_err: +same_name_err: + mutex_unlock(&pmalloc_mutex); + gen_pool_destroy(pool); + return NULL; +} + +static inline int check_alloc_params(struct gen_pool *pool, size_t req_size) +{ + struct pmalloc_data *data; + unsigned int order; + + if (unlikely(!req_size || !pool)) + return -1; + + order = (unsigned int)pool->min_alloc_order; + data = pool->data; + + if (data == NULL) + return -1; + + if (unlikely(data->protected)) { + WARN_ON(1); + return -1; + } + return 0; +} + + +static inline bool chunk_tagging(void *chunk, bool tag) +{ + struct vm_struct *area; + struct page *page; + + if (!is_vmalloc_addr(chunk)) + return false; + + page = vmalloc_to_page(chunk); + if (unlikely(!page)) + return false; + + area = page->area; + if (tag) + area->flags |= VM_PMALLOC; + else + area->flags &= ~VM_PMALLOC; + return true; +} + + +static inline bool tag_chunk(void *chunk) +{ + return chunk_tagging(chunk, true); +} + + +static inline bool untag_chunk(void *chunk) +{ + return chunk_tagging(chunk, false); +} + +enum { + INVALID_PMALLOC_OBJECT = -1, + NOT_PMALLOC_OBJECT = 0, + VALID_PMALLOC_OBJECT = 1, +}; + +int is_pmalloc_object(const void *ptr, const unsigned long n) +{ + struct vm_struct *area; + struct page *page; + unsigned long area_start; + unsigned long area_end; + unsigned long object_start; + unsigned long object_end; + + + /* is_pmalloc_object gets called pretty late, so chances are high + * that the object is indeed of vmalloc type + */ + if (unlikely(!is_vmalloc_addr(ptr))) + return NOT_PMALLOC_OBJECT; + + page = vmalloc_to_page(ptr); + if (unlikely(!page)) + return NOT_PMALLOC_OBJECT; + + area = page->area; + + if (likely(!(area->flags & VM_PMALLOC))) + return NOT_PMALLOC_OBJECT; + + area_start = (unsigned long)area->addr; + area_end = area_start + area->nr_pages * PAGE_SIZE - 1; + object_start = (unsigned long)ptr; + object_end = object_start + n - 1; + + if (likely((area_start <= object_start) && + (object_end <= area_end))) + return VALID_PMALLOC_OBJECT; + else + return INVALID_PMALLOC_OBJECT; +} + + +bool 
pmalloc_prealloc(struct gen_pool *pool, size_t size) +{ + void *chunk; + size_t chunk_size; + bool add_error; + unsigned int order; + + if (check_alloc_params(pool, size)) + return false; + + order = (unsigned int)pool->min_alloc_order; + + /* Expand pool */ + chunk_size = roundup(size, PAGE_SIZE); + chunk = vmalloc(chunk_size); + if (unlikely(chunk == NULL)) + return false; + + /* Locking is already done inside gen_pool_add */ + add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size, + NUMA_NO_NODE); + if (unlikely(add_error != 0)) + goto abort; + + return true; +abort: + vfree_atomic(chunk); + return false; + +} + +void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp) +{ + void *chunk; + size_t chunk_size; + bool add_error; + unsigned long retval; + unsigned int order; + + if (check_alloc_params(pool, size)) + return NULL; + + order = (unsigned int)pool->min_alloc_order; + +retry_alloc_from_pool: + retval = gen_pool_alloc(pool, size); + if (retval) + goto return_allocation; + + if (unlikely((gfp & __GFP_ATOMIC))) { + if (unlikely((gfp & __GFP_NOFAIL))) + goto retry_alloc_from_pool; + else + return NULL; + } + + /* Expand pool */ + chunk_size = roundup(size, PAGE_SIZE); + chunk = vmalloc(chunk_size); + if (unlikely(!chunk)) { + if (unlikely((gfp & __GFP_NOFAIL))) + goto retry_alloc_from_pool; + else + return NULL; + } + if (unlikely(!tag_chunk(chunk))) + goto free; + + /* Locking is already done inside gen_pool_add */ + add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size, + NUMA_NO_NODE); + if (unlikely(add_error)) + goto abort; + + retval = gen_pool_alloc(pool, size); + if (retval) { +return_allocation: + *(size_t *)retval = size; + if (gfp & __GFP_ZERO) + memset((void *)retval, 0, size); + return (void *)retval; + } + /* Here there is no test for __GFP_NOFAIL because, in case of + * concurrent allocation, one thread might add a chunk to the + * pool and this memory could be allocated by another thread, + * before the first thread gets a chance to use it. + * As long as vmalloc succeeds, it's ok to retry.
+ */ + goto retry_alloc_from_pool; +abort: + untag_chunk(chunk); +free: + vfree_atomic(chunk); + return NULL; +} + +static void pmalloc_chunk_set_protection(struct gen_pool *pool, + struct gen_pool_chunk *chunk, + void *data) +{ + const bool *flag = data; + size_t chunk_size = chunk->end_addr + 1 - chunk->start_addr; + unsigned long pages = chunk_size / PAGE_SIZE; + + BUG_ON(chunk_size & (PAGE_SIZE - 1)); + + if (*flag) + set_memory_ro(chunk->start_addr, pages); + else + set_memory_rw(chunk->start_addr, pages); +} + +static int pmalloc_pool_set_protection(struct gen_pool *pool, bool protection) +{ + struct pmalloc_data *data; + struct gen_pool_chunk *chunk; + + if (unlikely(!pool)) + return -EINVAL; + + data = pool->data; + + if (unlikely(!data)) + return -EINVAL; + + if (unlikely(data->protected == protection)) { + WARN_ON(1); + return 0; + } + + data->protected = protection; + list_for_each_entry(chunk, &(pool)->chunks, next_chunk) + pmalloc_chunk_set_protection(pool, chunk, &protection); + return 0; +} + +int pmalloc_protect_pool(struct gen_pool *pool) +{ + return pmalloc_pool_set_protection(pool, true); +} + + +static void pmalloc_chunk_free(struct gen_pool *pool, + struct gen_pool_chunk *chunk, void *data) +{ + untag_chunk(chunk); + gen_pool_flush_chunk(pool, chunk); + vfree_atomic((void *)chunk->start_addr); +} + + +int pmalloc_destroy_pool(struct gen_pool *pool) +{ + struct pmalloc_data *data; + + if (unlikely(pool == NULL)) + return -EINVAL; + + data = pool->data; + + if (unlikely(data == NULL)) + return -EINVAL; + + mutex_lock(&pmalloc_mutex); + list_del(&data->node); + mutex_unlock(&pmalloc_mutex); + + if (likely(data->pool_kobject)) + pmalloc_disconnect(data, data->pool_kobject); + + pmalloc_pool_set_protection(pool, false); + gen_pool_for_each_chunk(pool, pmalloc_chunk_free, NULL); + gen_pool_destroy(pool); + kfree(data); + return 0; +} + +/** + * When the sysfs is ready to receive registrations, connect all the + * pools previously created. Also enable further pools to be connected + * right away. + */ +static int __init pmalloc_late_init(void) +{ + struct pmalloc_data *data, *n; + + pmalloc_kobject = kobject_create_and_add("pmalloc", kernel_kobj); + + mutex_lock(&pmalloc_mutex); + pmalloc_list = &pmalloc_final_list; + + if (likely(pmalloc_kobject != NULL)) { + list_for_each_entry_safe(data, n, &pmalloc_tmp_list, node) { + list_move(&data->node, &pmalloc_final_list); + pmalloc_connect(data); + } + } + mutex_unlock(&pmalloc_mutex); + pmalloc_selftest(); + return 0; +} +late_initcall(pmalloc_late_init); diff --git a/mm/usercopy.c b/mm/usercopy.c index a9852b2..c3b1029 100644 --- a/mm/usercopy.c +++ b/mm/usercopy.c @@ -15,6 +15,7 @@ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt #include <linux/mm.h> +#include <linux/pmalloc.h> #include <linux/slab.h> #include <linux/sched.h> #include <linux/sched/task.h> @@ -222,6 +223,7 @@ static inline const char *check_heap_object(const void *ptr, unsigned long n, void __check_object_size(const void *ptr, unsigned long n, bool to_user) { const char *err; + int retv; /* Skip all tests if size is zero. */ if (!n) @@ -229,12 +231,12 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user) /* Check for invalid addresses. */ err = check_bogus_address(ptr, n); - if (err) + if (unlikely(err)) goto report; /* Check for bad heap object. */ err = check_heap_object(ptr, n, to_user); - if (err) + if (unlikely(err)) goto report; /* Check for bad stack object.
*/ @@ -257,8 +259,23 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user) /* Check for object in kernel to avoid text exposure. */ err = check_kernel_text_object(ptr, n); - if (!err) - return; + if (unlikely(err)) + goto report; + + /* Check if object is from a pmalloc chunk. + */ + retv = is_pmalloc_object(ptr, n); + if (unlikely(retv)) { + if (unlikely(!to_user)) { + err = "<trying to write to pmalloc object>"; + goto report; + } + if (retv < 0) { + err = "<invalid pmalloc object>"; + goto report; + } + } + return; report: report_usercopy(ptr, n, to_user, err); -- 2.9.3 ^ permalink raw reply related [flat|nested] 84+ messages in thread
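Taken together, the interfaces declared in pmalloc.h suggest the following lifecycle: create a pool, allocate and initialize the data, protect the pool, and eventually free and destroy. A minimal usage sketch follows, assuming a hypothetical module; the names demo_pool, cfg and struct demo_cfg are illustrative and do not appear in the patch:

#include <linux/module.h>
#include <linux/pmalloc.h>

struct demo_cfg {			/* hypothetical read-only-after-init data */
	unsigned long magic;
};

static struct gen_pool *demo_pool;
static struct demo_cfg *cfg;

static int __init demo_init(void)
{
	demo_pool = pmalloc_create_pool("demo", PMALLOC_DEFAULT_ALLOC_ORDER);
	if (!demo_pool)
		return -ENOMEM;

	cfg = pzalloc(demo_pool, sizeof(*cfg), GFP_KERNEL);
	if (!cfg) {
		pmalloc_destroy_pool(demo_pool);
		return -ENOMEM;
	}

	cfg->magic = 0x5afe;			/* write while the pool is still R/W */
	pmalloc_protect_pool(demo_pool);	/* from here on, *cfg is read-only */
	return 0;
}

static void __exit demo_exit(void)
{
	pfree(demo_pool, cfg);			/* mark unused; pool stays protected */
	pmalloc_destroy_pool(demo_pool);	/* drop protection and release memory */
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

Note how protection is a one-way operation at pool granularity: after pmalloc_protect_pool(), the only way to reclaim the memory is pfree() followed by pmalloc_destroy_pool(), matching the commit message above.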
* [RFC PATCH v16 0/6] mm: security: ro protection for dynamic data @ 2018-02-12 16:52 Igor Stoppa 2018-02-12 16:52 ` Igor Stoppa 0 siblings, 1 reply; 84+ messages in thread From: Igor Stoppa @ 2018-02-12 16:52 UTC (permalink / raw) To: willy, rdunlap, corbet, keescook, mhocko, labbott, jglisse, hch Cc: cl, linux-security-module, linux-mm, linux-kernel, kernel-hardening, Igor Stoppa This patch-set introduces the possibility of protecting memory that has been allocated dynamically. The memory is managed in pools: when a memory pool is turned into R/O, all the memory that is part of it, will become R/O. A R/O pool can be destroyed, to recover its memory, but it cannot be turned back into R/W mode. This is intentional. This feature is meant for data that doesn't need further modifications after initialization. However the data might need to be released, for example as part of module unloading. To do this, the memory must first be freed, then the pool can be destroyed. An example is provided, in the form of self-testing. Changes since v15: [http://www.openwall.com/lists/kernel-hardening/2018/02/11/4] - Fixed remaining broken comments - Fixed remaining broken "Returns" instead of "Return:" in kernel-doc - Converted "Return:" values to lists - Fixed SPDX license statements Igor Stoppa (6): genalloc: track beginning of allocations genalloc: selftest struct page: add field for vm_struct Protectable Memory Pmalloc: self-test Documentation for Pmalloc Documentation/core-api/index.rst | 1 + Documentation/core-api/pmalloc.rst | 114 +++++++ include/linux/genalloc-selftest.h | 26 ++ include/linux/genalloc.h | 7 +- include/linux/mm_types.h | 1 + include/linux/pmalloc.h | 242 ++++++++++++++ include/linux/vmalloc.h | 1 + init/main.c | 2 + lib/Kconfig | 15 + lib/Makefile | 1 + lib/genalloc-selftest.c | 400 ++++++++++++++++++++++ lib/genalloc.c | 658 +++++++++++++++++++++++++++---------- mm/Kconfig | 15 + mm/Makefile | 2 + mm/pmalloc-selftest.c | 64 ++++ mm/pmalloc-selftest.h | 24 ++ mm/pmalloc.c | 501 ++++++++++++++++++++++++++++ mm/usercopy.c | 33 ++ mm/vmalloc.c | 18 +- 19 files changed, 1950 insertions(+), 175 deletions(-) create mode 100644 Documentation/core-api/pmalloc.rst create mode 100644 include/linux/genalloc-selftest.h create mode 100644 include/linux/pmalloc.h create mode 100644 lib/genalloc-selftest.c create mode 100644 mm/pmalloc-selftest.c create mode 100644 mm/pmalloc-selftest.h create mode 100644 mm/pmalloc.c -- 2.14.1 ^ permalink raw reply [flat|nested] 84+ messages in thread
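Before the patch itself, a worked example may help with the bitmap arithmetic in the gen_pool_flush_chunk() hunk below. The sketch assumes BITS_PER_ENTRY == 2, i.e. that the genalloc bitmap stores two bits per allocation unit (usage plus start-of-allocation), as the "genalloc: track beginning of allocations" patch of this series appears to introduce; chunk_bitmap_bytes() is an illustrative helper, not part of the patch:

#include <linux/kernel.h>	/* DIV_ROUND_UP() */
#include <linux/bitops.h>	/* BITS_PER_BYTE */

/* Bytes of bitmap covering one chunk, with explicit parentheses:
 *   entries = chunk_size >> min_alloc_order;	(allocation units)
 *   bits    = entries * bits_per_entry;
 *   bytes   = DIV_ROUND_UP(bits, BITS_PER_BYTE);
 * E.g. a 4096-byte chunk with min_alloc_order == 3 (8-byte units)
 * and 2 bits per entry: entries = 512, bits = 1024, bytes = 128.
 */
static size_t chunk_bitmap_bytes(size_t chunk_size,
				 unsigned int min_alloc_order,
				 unsigned int bits_per_entry)
{
	size_t entries = chunk_size >> min_alloc_order;

	return DIV_ROUND_UP(entries * bits_per_entry, BITS_PER_BYTE);
}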
* [PATCH 4/6] Protectable Memory 2018-02-12 16:52 [RFC PATCH v16 0/6] mm: security: ro protection for dynamic data Igor Stoppa 2018-02-12 16:52 ` Igor Stoppa (?) @ 2018-02-12 16:52 ` Igor Stoppa 0 siblings, 0 replies; 84+ messages in thread From: Igor Stoppa @ 2018-02-12 16:52 UTC (permalink / raw) To: willy, rdunlap, corbet, keescook, mhocko, labbott, jglisse, hch Cc: cl, linux-security-module, linux-mm, linux-kernel, kernel-hardening, Igor Stoppa The MMU available in many systems running Linux can often provide R/O protection to the memory pages it handles. However, the MMU-based protection works efficiently only when said pages contain exclusively data that will not need further modifications. Statically allocated variables can be segregated into a dedicated section, but this does not sit very well with dynamically allocated ones. Dynamic allocation does not provide, currently, any means for grouping variables in memory pages that would contain exclusively data suitable for conversion to read only access mode. The allocator here provided (pmalloc - protectable memory allocator) introduces the concept of pools of protectable memory. A module can request a pool and then refer any allocation request to the pool handler it has received. Once all the chunks of memory associated to a specific pool are initialized, the pool can be protected. After this point, the pool can only be destroyed (it is up to the module to avoid any further references to the memory from the pool, after the destruction is invoked). The latter case is mainly meant for releasing memory, when a module is unloaded. A module can have as many pools as needed, for example to support the protection of data that is initialized in sufficiently distinct phases. Signed-off-by: Igor Stoppa <igor.stoppa@huawei.com> --- include/linux/genalloc.h | 3 + include/linux/pmalloc.h | 242 +++++++++++++++++++++++ include/linux/vmalloc.h | 1 + lib/genalloc.c | 27 +++ mm/Kconfig | 6 + mm/Makefile | 1 + mm/pmalloc.c | 499 +++++++++++++++++++++++++++++++++++++++++++++++ mm/usercopy.c | 33 ++++ 8 files changed, 812 insertions(+) create mode 100644 include/linux/pmalloc.h create mode 100644 mm/pmalloc.c diff --git a/include/linux/genalloc.h b/include/linux/genalloc.h index dcaa33e74b1c..b6c4cea9fbd8 100644 --- a/include/linux/genalloc.h +++ b/include/linux/genalloc.h @@ -121,6 +121,9 @@ extern unsigned long gen_pool_alloc_algo(struct gen_pool *, size_t, extern void *gen_pool_dma_alloc(struct gen_pool *pool, size_t size, dma_addr_t *dma); extern void gen_pool_free(struct gen_pool *, unsigned long, size_t); + +extern void gen_pool_flush_chunk(struct gen_pool *pool, + struct gen_pool_chunk *chunk); extern void gen_pool_for_each_chunk(struct gen_pool *, void (*)(struct gen_pool *, struct gen_pool_chunk *, void *), void *); extern size_t gen_pool_avail(struct gen_pool *); diff --git a/include/linux/pmalloc.h b/include/linux/pmalloc.h new file mode 100644 index 000000000000..afc2068d5545 --- /dev/null +++ b/include/linux/pmalloc.h @@ -0,0 +1,242 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * pmalloc.h: Header for Protectable Memory Allocator + * + * (C) Copyright 2017 Huawei Technologies Co. Ltd. + * Author: Igor Stoppa <igor.stoppa@huawei.com> + */ + +#ifndef _LINUX_PMALLOC_H +#define _LINUX_PMALLOC_H + + +#include <linux/genalloc.h> +#include <linux/string.h> + +#define PMALLOC_DEFAULT_ALLOC_ORDER (-1) + +/* + * Library for dynamic allocation of pools of memory that can be, + * after initialization, marked as read-only. 
+ * + * This is intended to complement __read_only_after_init, for those cases + * where either it is not possible to know the initialization value before + * init is completed, or the amount of data is variable and can be + * determined only at run-time. + * + * ***WARNING*** + * The user of the API is expected to synchronize: + * 1) allocation, + * 2) writes to the allocated memory, + * 3) write protection of the pool, + * 4) freeing of the allocated memory, and + * 5) destruction of the pool. + * + * For a non-threaded scenario, this type of locking is not even required. + * + * Even if the library were to provide support for locking, point 2) + * would still depend on the user taking the lock. + */ + + +/** + * pmalloc_create_pool() - create a new protectable memory pool + * @name: the name of the pool, enforced to be unique + * @min_alloc_order: log2 of the minimum allocation size obtainable + * from the pool + * + * Creates a new (empty) memory pool for allocation of protectable + * memory. Memory will be allocated upon request (through pmalloc). + * + * Return: + * * pointer to the new pool - success + * * NULL - error + */ +struct gen_pool *pmalloc_create_pool(const char *name, + int min_alloc_order); + +/** + * is_pmalloc_object() - validates the existence of an alleged object + * @ptr: address of the object + * @n: size of the object, in bytes + * + * Return: + * * 0 - the object does not belong to pmalloc + * * 1 - the object belongs to pmalloc + * * \-1 - the object overlaps pmalloc memory incorrectly + */ +int is_pmalloc_object(const void *ptr, const unsigned long n); + +/** + * pmalloc_prealloc() - tries to allocate a memory chunk of the requested size + * @pool: handle to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * + * Prepares a chunk of the requested size. + * This is intended to both minimize latency in later memory requests and + * avoid sleeping during allocation. + * Memory allocated with prealloc is stored in one single chunk, as + * opposed to what is allocated on-demand when pmalloc runs out of free + * space already existing in the pool and has to invoke vmalloc. + * One additional advantage of pre-allocating larger chunks of memory is + * that the total slack tends to be smaller. + * + * Return: + * * true - the vmalloc call was successful + * * false - error + */ +bool pmalloc_prealloc(struct gen_pool *pool, size_t size); + +/** + * pmalloc() - allocate protectable memory from a pool + * @pool: handle to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * @gfp: flags for page allocation + * + * Allocates memory from an unprotected pool. If the pool doesn't have + * enough memory, and the request did not include GFP_ATOMIC, an attempt + * is made to add a new chunk of memory to the pool + * (a multiple of PAGE_SIZE), in order to fit the new request. + * Otherwise, NULL is returned. + * + * Return: + * * pointer to the memory requested - success + * * NULL - either no memory available or + * pool already read-only + */ +void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp); + + +/** + * pzalloc() - zero-initialized version of pmalloc + * @pool: handle to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * @gfp: flags for page allocation + * + * Executes pmalloc, initializing the memory requested to 0, + * before returning the pointer to it. 
+ *
+ * Return:
+ * * pointer to the memory requested	- success
+ * * NULL				- either no memory available or
+ *					  pool already read-only
+ */
+static inline void *pzalloc(struct gen_pool *pool, size_t size, gfp_t gfp)
+{
+	return pmalloc(pool, size, gfp | __GFP_ZERO);
+}
+
+/**
+ * pmalloc_array() - allocates an array according to the parameters
+ * @pool: handle to the pool to be used for memory allocation
+ * @n: number of elements in the array
+ * @size: amount of memory (in bytes) requested for each element
+ * @flags: flags for page allocation
+ *
+ * Executes pmalloc, provided that the parameters are valid and the
+ * total size does not overflow.
+ *
+ * Return:
+ * * the pmalloc result	- success
+ * * NULL		- error
+ */
+static inline void *pmalloc_array(struct gen_pool *pool, size_t n,
+				  size_t size, gfp_t flags)
+{
+	if (unlikely(!(pool && n && size)))
+		return NULL;
+	/* Reject multiplications that would overflow, like kcalloc does. */
+	if (unlikely(n > SIZE_MAX / size))
+		return NULL;
+	return pmalloc(pool, n * size, flags);
+}
+
+/**
+ * pcalloc() - allocates a 0-initialized array according to the parameters
+ * @pool: handle to the pool to be used for memory allocation
+ * @n: number of elements in the array
+ * @size: amount of memory (in bytes) requested
+ * @flags: flags for page allocation
+ *
+ * Executes pmalloc_array, requesting zeroed memory.
+ *
+ * Return:
+ * * the pmalloc result	- success
+ * * NULL		- error
+ */
+static inline void *pcalloc(struct gen_pool *pool, size_t n,
+			    size_t size, gfp_t flags)
+{
+	return pmalloc_array(pool, n, size, flags | __GFP_ZERO);
+}
+
+/**
+ * pstrdup() - duplicate a string, using pmalloc as allocator
+ * @pool: handle to the pool to be used for memory allocation
+ * @s: string to duplicate
+ * @gfp: flags for page allocation
+ *
+ * Generates a copy of the given string, allocating sufficient memory
+ * from the given pmalloc pool.
+ *
+ * Return:
+ * * pointer to the replica	- success
+ * * NULL			- error
+ */
+static inline char *pstrdup(struct gen_pool *pool, const char *s, gfp_t gfp)
+{
+	size_t len;
+	char *buf;
+
+	if (unlikely(pool == NULL || s == NULL))
+		return NULL;
+
+	len = strlen(s) + 1;
+	buf = pmalloc(pool, len, gfp);
+	if (likely(buf))
+		strncpy(buf, s, len);
+	return buf;
+}
+
+/**
+ * pmalloc_protect_pool() - turn a read/write pool read-only
+ * @pool: the pool to protect
+ *
+ * Write-protects all the memory chunks assigned to the pool.
+ * This prevents any further allocation.
+ *
+ * Return:
+ * * 0		- success
+ * * -EINVAL	- error
+ */
+int pmalloc_protect_pool(struct gen_pool *pool);
+
+/**
+ * pfree() - mark as unused memory that was previously in use
+ * @pool: handle to the pool used for the original allocation
+ * @addr: the beginning of the memory area to be freed
+ *
+ * The behavior of pfree depends on the protection state of the pool.
+ * If the pool is not yet protected, the memory is marked as unused and
+ * becomes available for further allocations.
+ * If the pool is already protected, the memory is marked as unused,
+ * but further allocations remain impossible, because of the existing
+ * protection. In this case, the freed memory is truly released only
+ * when the pool is destroyed.
+ */
+static inline void pfree(struct gen_pool *pool, const void *addr)
+{
+	gen_pool_free(pool, (unsigned long)addr, 0);
+}
+
+/**
+ * pmalloc_destroy_pool() - destroys a pool and all the associated memory
+ * @pool: the pool to destroy
+ *
+ * All the memory that was allocated through pmalloc in the pool will
+ * be freed.
+ *
+ * Return:
+ * * 0		- success
+ * * -EINVAL	- error
+ */
+int pmalloc_destroy_pool(struct gen_pool *pool);
+
+#endif
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 1e5d8c392f15..116d280cca53 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -20,6 +20,7 @@ struct notifier_block;	/* in notifier.h */
 #define VM_UNINITIALIZED	0x00000020	/* vm_struct is not fully initialized */
 #define VM_NO_GUARD		0x00000040	/* don't add guard page */
 #define VM_KASAN		0x00000080	/* has allocated kasan shadow memory */
+#define VM_PMALLOC		0x00000100	/* pmalloc area - see docs */
 /* bits [20..32] reserved for arch specific ioremap internals */
 
 /*
diff --git a/lib/genalloc.c b/lib/genalloc.c
index 87f62f31b52f..24ed35035095 100644
--- a/lib/genalloc.c
+++ b/lib/genalloc.c
@@ -625,6 +625,33 @@ void gen_pool_free(struct gen_pool *pool, unsigned long addr, size_t size)
 }
 EXPORT_SYMBOL(gen_pool_free);
 
+
+/**
+ * gen_pool_flush_chunk() - drops all the allocations from a specific chunk
+ * @pool: the generic memory pool
+ * @chunk: the chunk to wipe clear
+ *
+ * This is meant to be called only while destroying a pool. It's up to the
+ * caller to avoid races, but really, at this point the pool should have
+ * already been retired and it should have become unavailable for any other
+ * sort of operation.
+ */
+void gen_pool_flush_chunk(struct gen_pool *pool,
+			  struct gen_pool_chunk *chunk)
+{
+	size_t size;
+
+	if (unlikely(!(pool && chunk)))
+		return;
+
+	size = chunk->end_addr + 1 - chunk->start_addr;
+	/*
+	 * Wipe the whole allocation bitmap: each allocation unit takes
+	 * BITS_PER_ENTRY bits, rounded up to whole bytes. Note the
+	 * parentheses around the shift: "*" binds tighter than ">>".
+	 */
+	memset(chunk->entries, 0,
+	       DIV_ROUND_UP((size >> pool->min_alloc_order) * BITS_PER_ENTRY,
+			    BITS_PER_BYTE));
+	atomic_long_set(&chunk->avail, size);
+}
+
+
 /**
  * gen_pool_for_each_chunk() - call func for every chunk of generic memory pool
  * @pool: the generic memory pool
diff --git a/mm/Kconfig b/mm/Kconfig
index c782e8fb7235..be578fbdce6d 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -760,3 +760,9 @@ config GUP_BENCHMARK
 	  performance of get_user_pages_fast().
 
 	  See tools/testing/selftests/vm/gup_benchmark.c
+
+config PROTECTABLE_MEMORY
+	bool
+	depends on ARCH_HAS_SET_MEMORY
+	select GENERIC_ALLOCATOR
+	default y
diff --git a/mm/Makefile b/mm/Makefile
index e669f02c5a54..959fdbdac118 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -65,6 +65,7 @@ obj-$(CONFIG_SPARSEMEM)	+= sparse.o
 obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
 obj-$(CONFIG_SLOB) += slob.o
 obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o
+obj-$(CONFIG_PROTECTABLE_MEMORY) += pmalloc.o
 obj-$(CONFIG_KSM) += ksm.o
 obj-$(CONFIG_PAGE_POISONING) += page_poison.o
 obj-$(CONFIG_SLAB) += slab.o
diff --git a/mm/pmalloc.c b/mm/pmalloc.c
new file mode 100644
index 000000000000..abddba90a9f6
--- /dev/null
+++ b/mm/pmalloc.c
@@ -0,0 +1,499 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * pmalloc.c: Protectable Memory Allocator
+ *
+ * (C) Copyright 2017 Huawei Technologies Co. Ltd.
+ * Author: Igor Stoppa <igor.stoppa@huawei.com>
+ */
+
+#include <linux/printk.h>
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/vmalloc.h>
+#include <linux/genalloc.h>
+#include <linux/kernel.h>
+#include <linux/log2.h>
+#include <linux/slab.h>
+#include <linux/device.h>
+#include <linux/atomic.h>
+#include <linux/rculist.h>
+#include <linux/set_memory.h>
+#include <asm/cacheflush.h>
+#include <asm/page.h>
+
+#include <linux/pmalloc.h>
+/*
+ * pmalloc_data contains the data specific to a pmalloc pool,
+ * in a format compatible with the design of genalloc.
+ * Some of the fields are used for exposing the corresponding parameter
+ * to userspace, through sysfs.
+ */
+struct pmalloc_data {
+	struct gen_pool *pool;			/* Link back to the associated pool. */
+	bool protected;				/* Status of the pool: RO or RW. */
+	struct kobj_attribute attr_protected;	/* Sysfs attribute. */
+	struct kobj_attribute attr_avail;	/* Sysfs attribute. */
+	struct kobj_attribute attr_size;	/* Sysfs attribute. */
+	struct kobj_attribute attr_chunks;	/* Sysfs attribute. */
+	struct kobject *pool_kobject;
+	struct list_head node;			/* list of pools */
+};
+
+static LIST_HEAD(pmalloc_final_list);
+static LIST_HEAD(pmalloc_tmp_list);
+static struct list_head *pmalloc_list = &pmalloc_tmp_list;
+static DEFINE_MUTEX(pmalloc_mutex);
+static struct kobject *pmalloc_kobject;
+
+static ssize_t pmalloc_pool_show_protected(struct kobject *dev,
+					   struct kobj_attribute *attr,
+					   char *buf)
+{
+	struct pmalloc_data *data;
+
+	data = container_of(attr, struct pmalloc_data, attr_protected);
+	if (data->protected)
+		return sprintf(buf, "protected\n");
+	else
+		return sprintf(buf, "unprotected\n");
+}
+
+static ssize_t pmalloc_pool_show_avail(struct kobject *dev,
+				       struct kobj_attribute *attr,
+				       char *buf)
+{
+	struct pmalloc_data *data;
+
+	data = container_of(attr, struct pmalloc_data, attr_avail);
+	return sprintf(buf, "%lu\n",
+		       (unsigned long)gen_pool_avail(data->pool));
+}
+
+static ssize_t pmalloc_pool_show_size(struct kobject *dev,
+				      struct kobj_attribute *attr,
+				      char *buf)
+{
+	struct pmalloc_data *data;
+
+	data = container_of(attr, struct pmalloc_data, attr_size);
+	return sprintf(buf, "%lu\n",
+		       (unsigned long)gen_pool_size(data->pool));
+}
+
+static void pool_chunk_number(struct gen_pool *pool,
+			      struct gen_pool_chunk *chunk, void *data)
+{
+	unsigned long *counter = data;
+
+	(*counter)++;
+}
+
+static ssize_t pmalloc_pool_show_chunks(struct kobject *dev,
+					struct kobj_attribute *attr,
+					char *buf)
+{
+	struct pmalloc_data *data;
+	unsigned long chunks_num = 0;
+
+	data = container_of(attr, struct pmalloc_data, attr_chunks);
+	gen_pool_for_each_chunk(data->pool, pool_chunk_number, &chunks_num);
+	return sprintf(buf, "%lu\n", chunks_num);
+}
+
+/* Exposes the pool and its attributes through sysfs. */
+static struct kobject *pmalloc_connect(struct pmalloc_data *data)
+{
+	const struct attribute *attrs[] = {
+		&data->attr_protected.attr,
+		&data->attr_avail.attr,
+		&data->attr_size.attr,
+		&data->attr_chunks.attr,
+		NULL
+	};
+	struct kobject *kobj;
+
+	kobj = kobject_create_and_add(data->pool->name, pmalloc_kobject);
+	if (unlikely(!kobj))
+		return NULL;
+
+	if (unlikely(sysfs_create_files(kobj, attrs) < 0)) {
+		kobject_put(kobj);
+		kobj = NULL;
+	}
+	return kobj;
+}
+
+/* Removes the pool and its attributes from sysfs. */
+static void pmalloc_disconnect(struct pmalloc_data *data,
+			       struct kobject *kobj)
+{
+	const struct attribute *attrs[] = {
+		&data->attr_protected.attr,
+		&data->attr_avail.attr,
+		&data->attr_size.attr,
+		&data->attr_chunks.attr,
+		NULL
+	};
+
+	sysfs_remove_files(kobj, attrs);
+	kobject_put(kobj);
+}
+
+/* Declares an attribute of the pool. */
+#define pmalloc_attr_init(data, attr_name) \
+do { \
+	sysfs_attr_init(&data->attr_##attr_name.attr); \
+	data->attr_##attr_name.attr.name = #attr_name; \
+	data->attr_##attr_name.attr.mode = VERIFY_OCTAL_PERMISSIONS(0400); \
+	data->attr_##attr_name.show = pmalloc_pool_show_##attr_name; \
+} while (0)
+
+struct gen_pool *pmalloc_create_pool(const char *name, int min_alloc_order)
+{
+	struct gen_pool *pool;
+	const char *pool_name;
+	struct pmalloc_data *data;
+
+	if (!name) {
+		WARN_ON(1);
+		return NULL;
+	}
+
+	if (min_alloc_order < 0)
+		min_alloc_order = ilog2(sizeof(unsigned long));
+
+	pool = gen_pool_create(min_alloc_order, NUMA_NO_NODE);
+	if (unlikely(!pool))
+		return NULL;
+
+	mutex_lock(&pmalloc_mutex);
+	list_for_each_entry(data, pmalloc_list, node)
+		if (!strcmp(name, data->pool->name))
+			goto same_name_err;
+
+	pool_name = kstrdup(name, GFP_KERNEL);
+	if (unlikely(!pool_name))
+		goto name_alloc_err;
+
+	data = kzalloc(sizeof(struct pmalloc_data), GFP_KERNEL);
+	if (unlikely(!data))
+		goto data_alloc_err;
+
+	data->protected = false;
+	data->pool = pool;
+	pmalloc_attr_init(data, protected);
+	pmalloc_attr_init(data, avail);
+	pmalloc_attr_init(data, size);
+	pmalloc_attr_init(data, chunks);
+	pool->data = data;
+	pool->name = pool_name;
+
+	list_add(&data->node, pmalloc_list);
+	if (pmalloc_list == &pmalloc_final_list)
+		data->pool_kobject = pmalloc_connect(data);
+	mutex_unlock(&pmalloc_mutex);
+	return pool;
+
+data_alloc_err:
+	kfree(pool_name);
+name_alloc_err:
+same_name_err:
+	mutex_unlock(&pmalloc_mutex);
+	gen_pool_destroy(pool);
+	return NULL;
+}
+
+static inline int check_alloc_params(struct gen_pool *pool, size_t req_size)
+{
+	struct pmalloc_data *data;
+
+	if (unlikely(!req_size || !pool))
+		return -1;
+
+	data = pool->data;
+
+	if (data == NULL)
+		return -1;
+
+	if (unlikely(data->protected)) {
+		WARN_ON(1);
+		return -1;
+	}
+	return 0;
+}
+
+
+static inline bool chunk_tagging(void *chunk, bool tag)
+{
+	struct vm_struct *area;
+	struct page *page;
+
+	if (!is_vmalloc_addr(chunk))
+		return false;
+
+	page = vmalloc_to_page(chunk);
+	if (unlikely(!page))
+		return false;
+
+	area = page->area;
+	if (tag)
+		area->flags |= VM_PMALLOC;
+	else
+		area->flags &= ~VM_PMALLOC;
+	return true;
+}
+
+
+static inline bool tag_chunk(void *chunk)
+{
+	return chunk_tagging(chunk, true);
+}
+
+
+static inline bool untag_chunk(void *chunk)
+{
+	return chunk_tagging(chunk, false);
+}
+
+enum {
+	INVALID_PMALLOC_OBJECT = -1,
+	NOT_PMALLOC_OBJECT = 0,
+	VALID_PMALLOC_OBJECT = 1,
+};
+
+int is_pmalloc_object(const void *ptr, const unsigned long n)
+{
+	struct vm_struct *area;
+	struct page *page;
+	unsigned long area_start;
+	unsigned long area_end;
+	unsigned long object_start;
+	unsigned long object_end;
+
+
+	/*
+	 * is_pmalloc_object gets called pretty late, so chances are high
+	 * that the object is indeed of vmalloc type.
+	 */
+	if (unlikely(!is_vmalloc_addr(ptr)))
+		return NOT_PMALLOC_OBJECT;
+
+	page = vmalloc_to_page(ptr);
+	if (unlikely(!page))
+		return NOT_PMALLOC_OBJECT;
+
+	area = page->area;
+
+	if (likely(!(area->flags & VM_PMALLOC)))
+		return NOT_PMALLOC_OBJECT;
+
+	area_start = (unsigned long)area->addr;
+	area_end = area_start + area->nr_pages * PAGE_SIZE - 1;
+	object_start = (unsigned long)ptr;
+	object_end = object_start + n - 1;
+
+	if (likely((area_start <= object_start) &&
+		   (object_end <= area_end)))
+		return VALID_PMALLOC_OBJECT;
+	else
+		return INVALID_PMALLOC_OBJECT;
+}
+
+
+bool pmalloc_prealloc(struct gen_pool *pool, size_t size)
+{
+	void *chunk;
+	size_t chunk_size;
+	bool add_error;
+
+	if (check_alloc_params(pool, size))
+		return false;
+
+	/* Expand pool */
+	chunk_size = roundup(size, PAGE_SIZE);
+	chunk = vmalloc(chunk_size);
+	if (unlikely(chunk == NULL))
+		return false;
+
+	/* Locking is already done inside gen_pool_add */
+	add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size,
+				 NUMA_NO_NODE);
+	if (unlikely(add_error != 0))
+		goto abort;
+
+	return true;
+abort:
+	vfree_atomic(chunk);
+	return false;
+}
+
+void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp)
+{
+	void *chunk;
+	size_t chunk_size;
+	bool add_error;
+	unsigned long retval;
+
+	if (check_alloc_params(pool, size))
+		return NULL;
+
+retry_alloc_from_pool:
+	retval = gen_pool_alloc(pool, size);
+	if (retval)
+		goto return_allocation;
+
+	if (unlikely(gfp & __GFP_ATOMIC)) {
+		if (unlikely(gfp & __GFP_NOFAIL))
+			goto retry_alloc_from_pool;
+		else
+			return NULL;
+	}
+
+	/* Expand pool */
+	chunk_size = roundup(size, PAGE_SIZE);
+	chunk = vmalloc(chunk_size);
+	if (unlikely(!chunk)) {
+		if (unlikely(gfp & __GFP_NOFAIL))
+			goto retry_alloc_from_pool;
+		else
+			return NULL;
+	}
+	if (unlikely(!tag_chunk(chunk)))
+		goto free;
+
+	/* Locking is already done inside gen_pool_add */
+	add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size,
+				 NUMA_NO_NODE);
+	if (unlikely(add_error))
+		goto abort;
+
+	retval = gen_pool_alloc(pool, size);
+	if (retval) {
+return_allocation:
+		*(size_t *)retval = size;
+		if (gfp & __GFP_ZERO)
+			memset((void *)retval, 0, size);
+		return (void *)retval;
+	}
+	/*
+	 * Here there is no test for __GFP_NOFAIL because, in case of
+	 * concurrent allocation, one thread might add a chunk to the
+	 * pool and this memory could be allocated by another thread,
+	 * before the first thread gets a chance to use it.
+	 * As long as vmalloc succeeds, it's ok to retry.
+	 */
+	goto retry_alloc_from_pool;
+abort:
+	untag_chunk(chunk);
+free:
+	vfree_atomic(chunk);
+	return NULL;
+}
+
+static void pmalloc_chunk_set_protection(struct gen_pool *pool,
+					 struct gen_pool_chunk *chunk,
+					 void *data)
+{
+	const bool *flag = data;
+	size_t chunk_size = chunk->end_addr + 1 - chunk->start_addr;
+	unsigned long pages = chunk_size / PAGE_SIZE;
+
+	BUG_ON(chunk_size & (PAGE_SIZE - 1));
+
+	if (*flag)
+		set_memory_ro(chunk->start_addr, pages);
+	else
+		set_memory_rw(chunk->start_addr, pages);
+}
+
+static int pmalloc_pool_set_protection(struct gen_pool *pool, bool protection)
+{
+	struct pmalloc_data *data;
+	struct gen_pool_chunk *chunk;
+
+	if (unlikely(!pool))
+		return -EINVAL;
+
+	data = pool->data;
+
+	if (unlikely(!data))
+		return -EINVAL;
+
+	if (unlikely(data->protected == protection)) {
+		WARN_ON(1);
+		return 0;
+	}
+
+	data->protected = protection;
+	list_for_each_entry(chunk, &pool->chunks, next_chunk)
+		pmalloc_chunk_set_protection(pool, chunk, &protection);
+	return 0;
+}
+
+int pmalloc_protect_pool(struct gen_pool *pool)
+{
+	return pmalloc_pool_set_protection(pool, true);
+}
+
+
+static void pmalloc_chunk_free(struct gen_pool *pool,
+			       struct gen_pool_chunk *chunk, void *data)
+{
+	untag_chunk(chunk);
+	gen_pool_flush_chunk(pool, chunk);
+	vfree_atomic((void *)chunk->start_addr);
+}
+
+
+int pmalloc_destroy_pool(struct gen_pool *pool)
+{
+	struct pmalloc_data *data;
+
+	if (unlikely(pool == NULL))
+		return -EINVAL;
+
+	data = pool->data;
+
+	if (unlikely(data == NULL))
+		return -EINVAL;
+
+	mutex_lock(&pmalloc_mutex);
+	list_del(&data->node);
+	mutex_unlock(&pmalloc_mutex);
+
+	if (likely(data->pool_kobject))
+		pmalloc_disconnect(data, data->pool_kobject);
+
+	/* Unprotect only if needed, to avoid a spurious WARN_ON. */
+	if (data->protected)
+		pmalloc_pool_set_protection(pool, false);
+	gen_pool_for_each_chunk(pool, pmalloc_chunk_free, NULL);
+	gen_pool_destroy(pool);
+	kfree(data);
+	return 0;
+}
+
+/*
+ * When the sysfs is ready to receive registrations, connect all the
+ * pools previously created. Also enable further pools to be connected
+ * right away.
+ */
+static int __init pmalloc_late_init(void)
+{
+	struct pmalloc_data *data, *n;
+
+	pmalloc_kobject = kobject_create_and_add("pmalloc", kernel_kobj);
+
+	mutex_lock(&pmalloc_mutex);
+	pmalloc_list = &pmalloc_final_list;
+
+	if (likely(pmalloc_kobject != NULL)) {
+		list_for_each_entry_safe(data, n, &pmalloc_tmp_list, node) {
+			list_move(&data->node, &pmalloc_final_list);
+			pmalloc_connect(data);
+		}
+	}
+	mutex_unlock(&pmalloc_mutex);
+	return 0;
+}
+late_initcall(pmalloc_late_init);
diff --git a/mm/usercopy.c b/mm/usercopy.c
index e9e9325f7638..946ce051e296 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -240,6 +240,36 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	}
 }
 
+#ifdef CONFIG_PROTECTABLE_MEMORY
+
+int is_pmalloc_object(const void *ptr, const unsigned long n);
+
+static void check_pmalloc_object(const void *ptr, unsigned long n,
+				 bool to_user)
+{
+	int retv;
+
+	retv = is_pmalloc_object(ptr, n);
+	if (unlikely(retv)) {
+		if (unlikely(!to_user))
+			usercopy_abort("pmalloc",
+				       "trying to write to pmalloc object",
+				       to_user, (const unsigned long)ptr, n);
+		if (retv < 0)
+			usercopy_abort("pmalloc",
+				       "invalid pmalloc object",
+				       to_user, (const unsigned long)ptr, n);
+	}
+}
+
+#else
+
+static void check_pmalloc_object(const void *ptr, unsigned long n,
+				 bool to_user)
+{
+}
+#endif
+
 /*
  * Validates that the given object is:
  * - not bogus address
@@ -277,5 +307,8 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user)
 
 	/* Check for object in kernel to avoid text exposure. */
 	check_kernel_text_object((const unsigned long)ptr, n, to_user);
+
+	/* Check if object is from a pmalloc chunk. */
+	check_pmalloc_object(ptr, n, to_user);
 }
 EXPORT_SYMBOL(__check_object_size);
-- 
2.14.1

^ permalink raw reply related	[flat|nested] 84+ messages in thread
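To make the lifecycle described in the commit message concrete, here is the minimal usage sketch referenced above, written against the declarations in pmalloc.h from this patch. It is a sketch only: every example_* identifier (the pool name, the struct, the mutex, the init/exit functions) is invented for illustration. The mutex models the caller-side synchronization that the header's warning requires, and the pmalloc_prealloc() call is the optional warm-up described in its kernel-doc.

/*
 * Minimal usage sketch (illustrative only): create a pool, allocate
 * and initialize data, protect the pool, then tear everything down.
 * All example_* identifiers are hypothetical.
 */
#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/mutex.h>
#include <linux/pmalloc.h>

struct example_cfg {			/* hypothetical module data */
	int threshold;
	const char *label;
};

static struct gen_pool *example_pool;
static struct example_cfg *cfg;
static DEFINE_MUTEX(example_lock);	/* caller-side locking, per the header */

static int __init example_init(void)
{
	int ret = -ENOMEM;

	mutex_lock(&example_lock);
	example_pool = pmalloc_create_pool("example",
					   PMALLOC_DEFAULT_ALLOC_ORDER);
	if (!example_pool)
		goto out;

	/* Optional, best-effort: grow the pool up front. */
	pmalloc_prealloc(example_pool, PAGE_SIZE);

	cfg = pzalloc(example_pool, sizeof(*cfg), GFP_KERNEL);
	if (!cfg)
		goto out_pool;

	/* All writes must happen before the pool is protected. */
	cfg->threshold = 42;
	cfg->label = pstrdup(example_pool, "demo", GFP_KERNEL);
	if (!cfg->label)
		goto out_pool;

	/* From here on the data is R/O until the pool is destroyed. */
	ret = pmalloc_protect_pool(example_pool);
	if (ret)
		goto out_pool;
	mutex_unlock(&example_lock);
	return 0;

out_pool:
	pmalloc_destroy_pool(example_pool);
out:
	mutex_unlock(&example_lock);
	return ret;
}

static void __exit example_exit(void)
{
	mutex_lock(&example_lock);
	/*
	 * On a protected pool, pfree() only marks the memory unused;
	 * it is actually released by pmalloc_destroy_pool().
	 */
	pfree(example_pool, cfg->label);
	pfree(example_pool, cfg);
	pmalloc_destroy_pool(example_pool);
	mutex_unlock(&example_lock);
}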
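The memset in gen_pool_flush_chunk() sizes the wipe to the chunk's allocation bitmap, and the parenthesization of the shift matters because "*" binds tighter than ">>" in C. A worked example follows; BITS_PER_ENTRY comes from the allocation-tracking genalloc patch earlier in this series, and the value 2 (one bit marking an allocated unit, one marking the head of an allocation) is assumed here for illustration:

/*
 * Worked example for the bitmap size, assuming BITS_PER_ENTRY == 2:
 * a 16 KiB chunk with min_alloc_order == 3 (8-byte allocation units).
 *
 *   units = size >> min_alloc_order           = 16384 >> 3 = 2048
 *   bits  = units * BITS_PER_ENTRY            = 2048 * 2   = 4096
 *   bytes = DIV_ROUND_UP(bits, BITS_PER_BYTE) = 4096 / 8   = 512
 *
 * Without the parentheses the expression evaluates as
 * 16384 >> (3 * 2) = 256 bits, i.e. only 32 bytes of the bitmap
 * would be wiped, leaving most of the chunk marked as allocated.
 */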
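Finally, the hardened usercopy hook above gives pmalloc objects the following policy: copies to userspace are allowed as long as the object lies entirely within a single pmalloc area, while copies from userspace into pmalloc memory abort, as do objects that spill past their area. A sketch of what this means for a caller; pool, uptr, buf and the sizes are made up, only the pmalloc and usercopy behavior comes from the patch:

/*
 * Illustrative sketch of the usercopy policy; all identifiers here
 * are hypothetical except the pmalloc API itself.
 */
#include <linux/errno.h>
#include <linux/pmalloc.h>
#include <linux/uaccess.h>

static int example_usercopy(struct gen_pool *pool, void __user *uptr)
{
	char *buf = pmalloc(pool, 128, GFP_KERNEL);

	if (!buf)
		return -ENOMEM;

	/* to_user, object fully inside a pmalloc area: check passes. */
	if (copy_to_user(uptr, buf, 128))
		return -EFAULT;

	/*
	 * copy_from_user(buf, uptr, 128) would trip
	 * check_pmalloc_object(): writing into pmalloc memory aborts,
	 * whether or not the pool is already protected.
	 *
	 * copy_to_user(uptr, buf, 1 << 20) would abort too, since the
	 * object would extend past the end of its area (retv == -1).
	 */
	return 0;
}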
size_t chunk_size; + bool add_error; + + if (check_alloc_params(pool, size)) + return false; + + /* Expand pool */ + chunk_size = roundup(size, PAGE_SIZE); + chunk = vmalloc(chunk_size); + if (unlikely(chunk == NULL)) + return false; + + /* Locking is already done inside gen_pool_add */ + add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size, + NUMA_NO_NODE); + if (unlikely(add_error != 0)) + goto abort; + + return true; +abort: + vfree_atomic(chunk); + return false; + +} + +void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp) +{ + void *chunk; + size_t chunk_size; + bool add_error; + unsigned long retval; + + if (check_alloc_params(pool, size)) + return NULL; + +retry_alloc_from_pool: + retval = gen_pool_alloc(pool, size); + if (retval) + goto return_allocation; + + if (unlikely((gfp & __GFP_ATOMIC))) { + if (unlikely((gfp & __GFP_NOFAIL))) + goto retry_alloc_from_pool; + else + return NULL; + } + + /* Expand pool */ + chunk_size = roundup(size, PAGE_SIZE); + chunk = vmalloc(chunk_size); + if (unlikely(!chunk)) { + if (unlikely((gfp & __GFP_NOFAIL))) + goto retry_alloc_from_pool; + else + return NULL; + } + if (unlikely(!tag_chunk(chunk))) + goto free; + + /* Locking is already done inside gen_pool_add */ + add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size, + NUMA_NO_NODE); + if (unlikely(add_error)) + goto abort; + + retval = gen_pool_alloc(pool, size); + if (retval) { +return_allocation: + *(size_t *)retval = size; + if (gfp & __GFP_ZERO) + memset((void *)retval, 0, size); + return (void *)retval; + } + /* + * Here there is no test for __GFP_NO_FAIL because, in case of + * concurrent allocation, one thread might add a chunk to the + * pool and this memory could be allocated by another thread, + * before the first thread gets a chance to use it. + * As long as vmalloc succeeds, it's ok to retry. 
+ */ + goto retry_alloc_from_pool; +abort: + untag_chunk(chunk); +free: + vfree_atomic(chunk); + return NULL; +} + +static void pmalloc_chunk_set_protection(struct gen_pool *pool, + + struct gen_pool_chunk *chunk, + void *data) +{ + const bool *flag = data; + size_t chunk_size = chunk->end_addr + 1 - chunk->start_addr; + unsigned long pages = chunk_size / PAGE_SIZE; + + BUG_ON(chunk_size & (PAGE_SIZE - 1)); + + if (*flag) + set_memory_ro(chunk->start_addr, pages); + else + set_memory_rw(chunk->start_addr, pages); +} + +static int pmalloc_pool_set_protection(struct gen_pool *pool, bool protection) +{ + struct pmalloc_data *data; + struct gen_pool_chunk *chunk; + + if (unlikely(!pool)) + return -EINVAL; + + data = pool->data; + + if (unlikely(!data)) + return -EINVAL; + + if (unlikely(data->protected == protection)) { + WARN_ON(1); + return 0; + } + + data->protected = protection; + list_for_each_entry(chunk, &(pool)->chunks, next_chunk) + pmalloc_chunk_set_protection(pool, chunk, &protection); + return 0; +} + +int pmalloc_protect_pool(struct gen_pool *pool) +{ + return pmalloc_pool_set_protection(pool, true); +} + + +static void pmalloc_chunk_free(struct gen_pool *pool, + struct gen_pool_chunk *chunk, void *data) +{ + untag_chunk(chunk); + gen_pool_flush_chunk(pool, chunk); + vfree_atomic((void *)chunk->start_addr); +} + + +int pmalloc_destroy_pool(struct gen_pool *pool) +{ + struct pmalloc_data *data; + + if (unlikely(pool == NULL)) + return -EINVAL; + + data = pool->data; + + if (unlikely(data == NULL)) + return -EINVAL; + + mutex_lock(&pmalloc_mutex); + list_del(&data->node); + mutex_unlock(&pmalloc_mutex); + + if (likely(data->pool_kobject)) + pmalloc_disconnect(data, data->pool_kobject); + + pmalloc_pool_set_protection(pool, false); + gen_pool_for_each_chunk(pool, pmalloc_chunk_free, NULL); + gen_pool_destroy(pool); + kfree(data); + return 0; +} + +/* + * When the sysfs is ready to receive registrations, connect all the + * pools previously created. Also enable further pools to be connected + * right away. 
+ */ +static int __init pmalloc_late_init(void) +{ + struct pmalloc_data *data, *n; + + pmalloc_kobject = kobject_create_and_add("pmalloc", kernel_kobj); + + mutex_lock(&pmalloc_mutex); + pmalloc_list = &pmalloc_final_list; + + if (likely(pmalloc_kobject != NULL)) { + list_for_each_entry_safe(data, n, &pmalloc_tmp_list, node) { + list_move(&data->node, &pmalloc_final_list); + pmalloc_connect(data); + } + } + mutex_unlock(&pmalloc_mutex); + return 0; +} +late_initcall(pmalloc_late_init); diff --git a/mm/usercopy.c b/mm/usercopy.c index e9e9325f7638..946ce051e296 100644 --- a/mm/usercopy.c +++ b/mm/usercopy.c @@ -240,6 +240,36 @@ static inline void check_heap_object(const void *ptr, unsigned long n, } } +#ifdef CONFIG_PROTECTABLE_MEMORY + +int is_pmalloc_object(const void *ptr, const unsigned long n); + +static void check_pmalloc_object(const void *ptr, unsigned long n, + bool to_user) +{ + int retv; + + retv = is_pmalloc_object(ptr, n); + if (unlikely(retv)) { + if (unlikely(!to_user)) + usercopy_abort("pmalloc", + "trying to write to pmalloc object", + to_user, (const unsigned long)ptr, n); + if (retv < 0) + usercopy_abort("pmalloc", + "invalid pmalloc object", + to_user, (const unsigned long)ptr, n); + } +} + +#else + +static void check_pmalloc_object(const void *ptr, unsigned long n, + bool to_user) +{ +} +#endif + /* * Validates that the given object is: * - not bogus address @@ -277,5 +307,8 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user) /* Check for object in kernel to avoid text exposure. */ check_kernel_text_object((const unsigned long)ptr, n, to_user); + + /* Check if object is from a pmalloc chunk. */ + check_pmalloc_object(ptr, n, to_user); } EXPORT_SYMBOL(__check_object_size); -- 2.14.1 ^ permalink raw reply related [flat|nested] 84+ messages in thread
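As an orientation aid, here is a minimal usage sketch of the lifecycle implemented by the patch above: create a pool, allocate and initialize, protect, and at unload free then destroy. The module, the pool name and the configuration struct are invented for illustration; only the pmalloc_* calls and PMALLOC_DEFAULT_ALLOC_ORDER come from the patch itself.

#include <linux/module.h>
#include <linux/pmalloc.h>

/* Hypothetical configuration data, writable only until init completes. */
struct example_conf {
	unsigned long threshold;
	char tag[16];
};

static struct gen_pool *example_pool;
static struct example_conf *conf;

static int __init example_init(void)
{
	example_pool = pmalloc_create_pool("example",
					   PMALLOC_DEFAULT_ALLOC_ORDER);
	if (!example_pool)
		return -ENOMEM;

	conf = pzalloc(example_pool, sizeof(*conf), GFP_KERNEL);
	if (!conf)
		goto err;

	/* The pool is still R/W: initialize the data... */
	conf->threshold = 100;
	strscpy(conf->tag, "defaults", sizeof(conf->tag));

	/* ...then seal it: from here on, writes to *conf fault. */
	if (pmalloc_protect_pool(example_pool))
		goto err;
	return 0;
err:
	pmalloc_destroy_pool(example_pool);
	return -ENOMEM;
}

static void __exit example_exit(void)
{
	/* A protected pool cannot be made R/W again: free, then destroy. */
	pfree(example_pool, conf);
	pmalloc_destroy_pool(example_pool);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");

Note that once the pool is protected, the free-then-destroy sequence in the exit handler is the only way to reclaim the memory, matching the one-way design of the series.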
* [PATCH 4/6] Protectable Memory @ 2018-02-12 16:52 ` Igor Stoppa 0 siblings, 0 replies; 84+ messages in thread From: Igor Stoppa @ 2018-02-12 16:52 UTC (permalink / raw) To: linux-security-module The MMU available in many systems running Linux can often provide R/O protection to the memory pages it handles. However, the MMU-based protection works efficiently only when said pages contain exclusively data that will not need further modifications. Statically allocated variables can be segregated into a dedicated section, but this does not sit very well with dynamically allocated ones. Dynamic allocation does not currently provide any means for grouping variables in memory pages that would contain exclusively data suitable for conversion to read-only access mode. The allocator provided here (pmalloc - protectable memory allocator) introduces the concept of pools of protectable memory. A module can request a pool and then refer any allocation request to the pool handle it has received. Once all the chunks of memory associated with a specific pool are initialized, the pool can be protected. After this point, the pool can only be destroyed (it is up to the module to avoid any further references to the memory from the pool, after the destruction is invoked). The latter case is mainly meant for releasing memory when a module is unloaded. A module can have as many pools as needed, for example to support the protection of data that is initialized in sufficiently distinct phases. Signed-off-by: Igor Stoppa <igor.stoppa@huawei.com> --- include/linux/genalloc.h | 3 + include/linux/pmalloc.h | 242 +++++++++++++++++++++ include/linux/vmalloc.h | 1 + lib/genalloc.c | 27 +++ mm/Kconfig | 6 + mm/Makefile | 1 + mm/pmalloc.c | 499 +++++++++++++++++++++++++++++++++++++++++++++++ mm/usercopy.c | 33 ++++ 8 files changed, 812 insertions(+) create mode 100644 include/linux/pmalloc.h create mode 100644 mm/pmalloc.c diff --git a/include/linux/genalloc.h b/include/linux/genalloc.h index dcaa33e74b1c..b6c4cea9fbd8 100644 --- a/include/linux/genalloc.h +++ b/include/linux/genalloc.h @@ -121,6 +121,9 @@ extern unsigned long gen_pool_alloc_algo(struct gen_pool *, size_t, extern void *gen_pool_dma_alloc(struct gen_pool *pool, size_t size, dma_addr_t *dma); extern void gen_pool_free(struct gen_pool *, unsigned long, size_t); + +extern void gen_pool_flush_chunk(struct gen_pool *pool, + struct gen_pool_chunk *chunk); extern void gen_pool_for_each_chunk(struct gen_pool *, void (*)(struct gen_pool *, struct gen_pool_chunk *, void *), void *); extern size_t gen_pool_avail(struct gen_pool *); diff --git a/include/linux/pmalloc.h b/include/linux/pmalloc.h new file mode 100644 index 000000000000..afc2068d5545 --- /dev/null +++ b/include/linux/pmalloc.h @@ -0,0 +1,242 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * pmalloc.h: Header for Protectable Memory Allocator + * + * (C) Copyright 2017 Huawei Technologies Co. Ltd. + * Author: Igor Stoppa <igor.stoppa@huawei.com> + */ + +#ifndef _LINUX_PMALLOC_H +#define _LINUX_PMALLOC_H + + +#include <linux/genalloc.h> +#include <linux/string.h> + +#define PMALLOC_DEFAULT_ALLOC_ORDER (-1) + +/* + * Library for dynamic allocation of pools of memory that can be, + * after initialization, marked as read-only. + * + * This is intended to complement __ro_after_init, for those cases + * where either it is not possible to know the initialization value before + * init is completed, or the amount of data is variable and can be + * determined only at run-time. 
+ * + * ***WARNING*** + * The user of the API is expected to synchronize: + * 1) allocation, + * 2) writes to the allocated memory, + * 3) write protection of the pool, + * 4) freeing of the allocated memory, and + * 5) destruction of the pool. + * + * For a non-threaded scenario, this type of locking is not even required. + * + * Even if the library were to provide support for locking, point 2) + * would still depend on the user taking the lock. + */ + + +/** + * pmalloc_create_pool() - create a new protectable memory pool + * @name: the name of the pool, enforced to be unique + * @min_alloc_order: log2 of the minimum allocation size obtainable + * from the pool + * + * Creates a new (empty) memory pool for allocation of protectable + * memory. Memory will be allocated upon request (through pmalloc). + * + * Return: + * * pointer to the new pool - success + * * NULL - error + */ +struct gen_pool *pmalloc_create_pool(const char *name, + int min_alloc_order); + +/** + * is_pmalloc_object() - validates the existence of an alleged object + * @ptr: address of the object + * @n: size of the object, in bytes + * + * Return: + * * 0 - the object does not belong to pmalloc + * * 1 - the object belongs to pmalloc + * * \-1 - the object overlaps pmalloc memory incorrectly + */ +int is_pmalloc_object(const void *ptr, const unsigned long n); + +/** + * pmalloc_prealloc() - tries to allocate a memory chunk of the requested size + * @pool: handle to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * + * Prepares a chunk of the requested size. + * This is intended to both minimize latency in later memory requests and + * avoid sleeping during allocation. + * Memory allocated with prealloc is stored in one single chunk, as + * opposed to what is allocated on-demand when pmalloc runs out of free + * space already existing in the pool and has to invoke vmalloc. + * One additional advantage of pre-allocating larger chunks of memory is + * that the total slack tends to be smaller. + * + * Return: + * * true - the vmalloc call was successful + * * false - error + */ +bool pmalloc_prealloc(struct gen_pool *pool, size_t size); + +/** + * pmalloc() - allocate protectable memory from a pool + * @pool: handle to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * @gfp: flags for page allocation + * + * Allocates memory from an unprotected pool. If the pool doesn't have + * enough memory, and the request did not include GFP_ATOMIC, an attempt + * is made to add a new chunk of memory to the pool + * (a multiple of PAGE_SIZE), in order to fit the new request. + * Otherwise, NULL is returned. + * + * Return: + * * pointer to the memory requested - success + * * NULL - either no memory available or + * pool already read-only + */ +void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp); + + +/** + * pzalloc() - zero-initialized version of pmalloc + * @pool: handle to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * @gfp: flags for page allocation + * + * Executes pmalloc, initializing the memory requested to 0, + * before returning the pointer to it. 
+ * + * Return: + * * pointer to the memory requested - success + * * NULL - either no memory available or + * pool already read-only + */ +static inline void *pzalloc(struct gen_pool *pool, size_t size, gfp_t gfp) +{ + return pmalloc(pool, size, gfp | __GFP_ZERO); +} + +/** + * pmalloc_array() - allocates an array according to the parameters + * @pool: handle to the pool to be used for memory allocation + * @n: number of elements in the array + * @size: amount of memory (in bytes) requested for each element + * @flags: flags for page allocation + * + * Executes pmalloc, if it has a chance to succeed. + * + * Return: + * * the pmalloc result - success + * * NULL - error + */ +static inline void *pmalloc_array(struct gen_pool *pool, size_t n, + size_t size, gfp_t flags) +{ + if (unlikely(!(pool && n && size))) + return NULL; + return pmalloc(pool, n * size, flags); +} + +/** + * pcalloc() - allocates a 0-initialized array according to the parameters + * @pool: handle to the pool to be used for memory allocation + * @n: number of elements in the array + * @size: amount of memory (in bytes) requested + * @flags: flags for page allocation + * + * Executes pmalloc_array, if it has a chance to succeed. + * + * Return: + * * the pmalloc result - success + * * NULL - error + */ +static inline void *pcalloc(struct gen_pool *pool, size_t n, + size_t size, gfp_t flags) +{ + return pmalloc_array(pool, n, size, flags | __GFP_ZERO); +} + +/** + * pstrdup() - duplicate a string, using pmalloc as allocator + * @pool: handle to the pool to be used for memory allocation + * @s: string to duplicate + * @gfp: flags for page allocation + * + * Generates a copy of the given string, allocating sufficient memory + * from the given pmalloc pool. + * + * Return: + * * pointer to the replica - success + * * NULL - error + */ +static inline char *pstrdup(struct gen_pool *pool, const char *s, gfp_t gfp) +{ + size_t len; + char *buf; + + if (unlikely(pool == NULL || s == NULL)) + return NULL; + + len = strlen(s) + 1; + buf = pmalloc(pool, len, gfp); + if (likely(buf)) + strncpy(buf, s, len); + return buf; +} + +/** + * pmalloc_protect_pool() - turn a read/write pool read-only + * @pool: the pool to protect + * + * Write-protects all the memory chunks assigned to the pool. + * This prevents any further allocation. + * + * Return: + * * 0 - success + * * -EINVAL - error + */ +int pmalloc_protect_pool(struct gen_pool *pool); + +/** + * pfree() - mark as unused memory that was previously in use + * @pool: handle to the pool to be used for memory allocation + * @addr: the beginning of the memory area to be freed + * + * The behavior of pfree is different, depending on the state of the + * protection. + * If the pool is not yet protected, the memory is marked as unused and + * will be available for further allocations. + * If the pool is already protected, the memory is marked as unused, but + * it will still be impossible to perform further allocation, because of + * the existing protection. + * The freed memory, in this case, will be truly released only when the + * pool is destroyed. + */ +static inline void pfree(struct gen_pool *pool, const void *addr) +{ + gen_pool_free(pool, (unsigned long)addr, 0); +} + +/** + * pmalloc_destroy_pool() - destroys a pool and all the associated memory + * @pool: the pool to destroy + * + * All the memory that was allocated through pmalloc in the pool will be freed. 
+ * + * Return: + * * 0 - success + * * -EINVAL - error + */ +int pmalloc_destroy_pool(struct gen_pool *pool); + +#endif diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h index 1e5d8c392f15..116d280cca53 100644 --- a/include/linux/vmalloc.h +++ b/include/linux/vmalloc.h @@ -20,6 +20,7 @@ struct notifier_block; /* in notifier.h */ #define VM_UNINITIALIZED 0x00000020 /* vm_struct is not fully initialized */ #define VM_NO_GUARD 0x00000040 /* don't add guard page */ #define VM_KASAN 0x00000080 /* has allocated kasan shadow memory */ +#define VM_PMALLOC 0x00000100 /* pmalloc area - see docs */ /* bits [20..32] reserved for arch specific ioremap internals */ /* diff --git a/lib/genalloc.c b/lib/genalloc.c index 87f62f31b52f..24ed35035095 100644 --- a/lib/genalloc.c +++ b/lib/genalloc.c @@ -625,6 +625,33 @@ void gen_pool_free(struct gen_pool *pool, unsigned long addr, size_t size) } EXPORT_SYMBOL(gen_pool_free); + +/** + * gen_pool_flush_chunk() - drops all the allocations from a specific chunk + * @pool: the generic memory pool + * @chunk: The chunk to wipe clear. + * + * This is meant to be called only while destroying a pool. It's up to the + * caller to avoid races, but really, at this point the pool should have + * already been retired and it should have become unavailable for any other + * sort of operation. + */ +void gen_pool_flush_chunk(struct gen_pool *pool, + struct gen_pool_chunk *chunk) +{ + size_t size; + + if (unlikely(!(pool && chunk))) + return; + + size = chunk->end_addr + 1 - chunk->start_addr; + memset(chunk->entries, 0, + DIV_ROUND_UP((size >> pool->min_alloc_order) * BITS_PER_ENTRY, + BITS_PER_BYTE)); + atomic_long_set(&chunk->avail, size); +} + + /** * gen_pool_for_each_chunk() - call func for every chunk of generic memory pool * @pool: the generic memory pool diff --git a/mm/Kconfig b/mm/Kconfig index c782e8fb7235..be578fbdce6d 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -760,3 +760,9 @@ config GUP_BENCHMARK performance of get_user_pages_fast(). See tools/testing/selftests/vm/gup_benchmark.c + +config PROTECTABLE_MEMORY + bool + depends on ARCH_HAS_SET_MEMORY + select GENERIC_ALLOCATOR + default y diff --git a/mm/Makefile b/mm/Makefile index e669f02c5a54..959fdbdac118 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -65,6 +65,7 @@ obj-$(CONFIG_SPARSEMEM) += sparse.o obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o obj-$(CONFIG_SLOB) += slob.o obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o +obj-$(CONFIG_PROTECTABLE_MEMORY) += pmalloc.o obj-$(CONFIG_KSM) += ksm.o obj-$(CONFIG_PAGE_POISONING) += page_poison.o obj-$(CONFIG_SLAB) += slab.o diff --git a/mm/pmalloc.c b/mm/pmalloc.c new file mode 100644 index 000000000000..abddba90a9f6 --- /dev/null +++ b/mm/pmalloc.c @@ -0,0 +1,499 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * pmalloc.c: Protectable Memory Allocator + * + * (C) Copyright 2017 Huawei Technologies Co. Ltd. + * Author: Igor Stoppa <igor.stoppa@huawei.com> + */ + +#include <linux/printk.h> +#include <linux/init.h> +#include <linux/mm.h> +#include <linux/vmalloc.h> +#include <linux/genalloc.h> +#include <linux/kernel.h> +#include <linux/log2.h> +#include <linux/slab.h> +#include <linux/device.h> +#include <linux/atomic.h> +#include <linux/rculist.h> +#include <linux/set_memory.h> +#include <asm/cacheflush.h> +#include <asm/page.h> + +#include <linux/pmalloc.h> +/* + * pmalloc_data contains the data specific to a pmalloc pool, + * in a format compatible with the design of genalloc. 
+ * Some of the fields are used for exposing the corresponding parameter + * to userspace, through sysfs. + */ +struct pmalloc_data { + struct gen_pool *pool; /* Link back to the associated pool. */ + bool protected; /* Status of the pool: RO or RW. */ + struct kobj_attribute attr_protected; /* Sysfs attribute. */ + struct kobj_attribute attr_avail; /* Sysfs attribute. */ + struct kobj_attribute attr_size; /* Sysfs attribute. */ + struct kobj_attribute attr_chunks; /* Sysfs attribute. */ + struct kobject *pool_kobject; + struct list_head node; /* list of pools */ +}; + +static LIST_HEAD(pmalloc_final_list); +static LIST_HEAD(pmalloc_tmp_list); +static struct list_head *pmalloc_list = &pmalloc_tmp_list; +static DEFINE_MUTEX(pmalloc_mutex); +static struct kobject *pmalloc_kobject; + +static ssize_t pmalloc_pool_show_protected(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_protected); + if (data->protected) + return sprintf(buf, "protected\n"); + else + return sprintf(buf, "unprotected\n"); +} + +static ssize_t pmalloc_pool_show_avail(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_avail); + return sprintf(buf, "%lu\n", + (unsigned long)gen_pool_avail(data->pool)); +} + +static ssize_t pmalloc_pool_show_size(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_size); + return sprintf(buf, "%lu\n", + (unsigned long)gen_pool_size(data->pool)); +} + +static void pool_chunk_number(struct gen_pool *pool, + struct gen_pool_chunk *chunk, void *data) +{ + unsigned long *counter = data; + + (*counter)++; +} + +static ssize_t pmalloc_pool_show_chunks(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + unsigned long chunks_num = 0; + + data = container_of(attr, struct pmalloc_data, attr_chunks); + gen_pool_for_each_chunk(data->pool, pool_chunk_number, &chunks_num); + return sprintf(buf, "%lu\n", chunks_num); +} + +/* Exposes the pool and its attributes through sysfs. */ +static struct kobject *pmalloc_connect(struct pmalloc_data *data) +{ + const struct attribute *attrs[] = { + &data->attr_protected.attr, + &data->attr_avail.attr, + &data->attr_size.attr, + &data->attr_chunks.attr, + NULL + }; + struct kobject *kobj; + + kobj = kobject_create_and_add(data->pool->name, pmalloc_kobject); + if (unlikely(!kobj)) + return NULL; + + if (unlikely(sysfs_create_files(kobj, attrs) < 0)) { + kobject_put(kobj); + kobj = NULL; + } + return kobj; +} + +/* Removes the pool and its attributes from sysfs. */ +static void pmalloc_disconnect(struct pmalloc_data *data, + struct kobject *kobj) +{ + const struct attribute *attrs[] = { + &data->attr_protected.attr, + &data->attr_avail.attr, + &data->attr_size.attr, + &data->attr_chunks.attr, + NULL + }; + + sysfs_remove_files(kobj, attrs); + kobject_put(kobj); +} + +/* Declares an attribute of the pool. 
*/ +#define pmalloc_attr_init(data, attr_name) \ +do { \ + sysfs_attr_init(&data->attr_##attr_name.attr); \ + data->attr_##attr_name.attr.name = #attr_name; \ + data->attr_##attr_name.attr.mode = VERIFY_OCTAL_PERMISSIONS(0400); \ + data->attr_##attr_name.show = pmalloc_pool_show_##attr_name; \ +} while (0) + +struct gen_pool *pmalloc_create_pool(const char *name, int min_alloc_order) +{ + struct gen_pool *pool; + const char *pool_name; + struct pmalloc_data *data; + + if (!name) { + WARN_ON(1); + return NULL; + } + + if (min_alloc_order < 0) + min_alloc_order = ilog2(sizeof(unsigned long)); + + pool = gen_pool_create(min_alloc_order, NUMA_NO_NODE); + if (unlikely(!pool)) + return NULL; + + mutex_lock(&pmalloc_mutex); + list_for_each_entry(data, pmalloc_list, node) + if (!strcmp(name, data->pool->name)) + goto same_name_err; + + pool_name = kstrdup(name, GFP_KERNEL); + if (unlikely(!pool_name)) + goto name_alloc_err; + + data = kzalloc(sizeof(struct pmalloc_data), GFP_KERNEL); + if (unlikely(!data)) + goto data_alloc_err; + + data->protected = false; + data->pool = pool; + pmalloc_attr_init(data, protected); + pmalloc_attr_init(data, avail); + pmalloc_attr_init(data, size); + pmalloc_attr_init(data, chunks); + pool->data = data; + pool->name = pool_name; + + list_add(&data->node, pmalloc_list); + if (pmalloc_list == &pmalloc_final_list) + data->pool_kobject = pmalloc_connect(data); + mutex_unlock(&pmalloc_mutex); + return pool; + +data_alloc_err: + kfree(pool_name); +name_alloc_err: +same_name_err: + mutex_unlock(&pmalloc_mutex); + gen_pool_destroy(pool); + return NULL; +} + +static inline int check_alloc_params(struct gen_pool *pool, size_t req_size) +{ + struct pmalloc_data *data; + + if (unlikely(!req_size || !pool)) + return -1; + + data = pool->data; + + if (data == NULL) + return -1; + + if (unlikely(data->protected)) { + WARN_ON(1); + return -1; + } + return 0; +} + + +static inline bool chunk_tagging(void *chunk, bool tag) +{ + struct vm_struct *area; + struct page *page; + + if (!is_vmalloc_addr(chunk)) + return false; + + page = vmalloc_to_page(chunk); + if (unlikely(!page)) + return false; + + area = page->area; + if (tag) + area->flags |= VM_PMALLOC; + else + area->flags &= ~VM_PMALLOC; + return true; +} + + +static inline bool tag_chunk(void *chunk) +{ + return chunk_tagging(chunk, true); +} + + +static inline bool untag_chunk(void *chunk) +{ + return chunk_tagging(chunk, false); +} + +enum { + INVALID_PMALLOC_OBJECT = -1, + NOT_PMALLOC_OBJECT = 0, + VALID_PMALLOC_OBJECT = 1, +}; + +int is_pmalloc_object(const void *ptr, const unsigned long n) +{ + struct vm_struct *area; + struct page *page; + unsigned long area_start; + unsigned long area_end; + unsigned long object_start; + unsigned long object_end; + + + /* + * is_pmalloc_object gets called pretty late, so chances are high + * that the object is indeed of vmalloc type + */ + if (unlikely(!is_vmalloc_addr(ptr))) + return NOT_PMALLOC_OBJECT; + + page = vmalloc_to_page(ptr); + if (unlikely(!page)) + return NOT_PMALLOC_OBJECT; + + area = page->area; + + if (likely(!(area->flags & VM_PMALLOC))) + return NOT_PMALLOC_OBJECT; + + area_start = (unsigned long)area->addr; + area_end = area_start + area->nr_pages * PAGE_SIZE - 1; + object_start = (unsigned long)ptr; + object_end = object_start + n - 1; + + if (likely((area_start <= object_start) && + (object_end <= area_end))) + return VALID_PMALLOC_OBJECT; + else + return INVALID_PMALLOC_OBJECT; +} + + +bool pmalloc_prealloc(struct gen_pool *pool, size_t size) +{ + void *chunk; + 
size_t chunk_size; + bool add_error; + + if (check_alloc_params(pool, size)) + return false; + + /* Expand pool */ + chunk_size = roundup(size, PAGE_SIZE); + chunk = vmalloc(chunk_size); + if (unlikely(chunk == NULL)) + return false; + + /* Locking is already done inside gen_pool_add */ + add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size, + NUMA_NO_NODE); + if (unlikely(add_error != 0)) + goto abort; + + return true; +abort: + vfree_atomic(chunk); + return false; + +} + +void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp) +{ + void *chunk; + size_t chunk_size; + bool add_error; + unsigned long retval; + + if (check_alloc_params(pool, size)) + return NULL; + +retry_alloc_from_pool: + retval = gen_pool_alloc(pool, size); + if (retval) + goto return_allocation; + + if (unlikely((gfp & __GFP_ATOMIC))) { + if (unlikely((gfp & __GFP_NOFAIL))) + goto retry_alloc_from_pool; + else + return NULL; + } + + /* Expand pool */ + chunk_size = roundup(size, PAGE_SIZE); + chunk = vmalloc(chunk_size); + if (unlikely(!chunk)) { + if (unlikely((gfp & __GFP_NOFAIL))) + goto retry_alloc_from_pool; + else + return NULL; + } + if (unlikely(!tag_chunk(chunk))) + goto free; + + /* Locking is already done inside gen_pool_add */ + add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size, + NUMA_NO_NODE); + if (unlikely(add_error)) + goto abort; + + retval = gen_pool_alloc(pool, size); + if (retval) { +return_allocation: + *(size_t *)retval = size; + if (gfp & __GFP_ZERO) + memset((void *)retval, 0, size); + return (void *)retval; + } + /* + * Here there is no test for __GFP_NO_FAIL because, in case of + * concurrent allocation, one thread might add a chunk to the + * pool and this memory could be allocated by another thread, + * before the first thread gets a chance to use it. + * As long as vmalloc succeeds, it's ok to retry. 
+ */ + goto retry_alloc_from_pool; +abort: + untag_chunk(chunk); +free: + vfree_atomic(chunk); + return NULL; +} + +static void pmalloc_chunk_set_protection(struct gen_pool *pool, + + struct gen_pool_chunk *chunk, + void *data) +{ + const bool *flag = data; + size_t chunk_size = chunk->end_addr + 1 - chunk->start_addr; + unsigned long pages = chunk_size / PAGE_SIZE; + + BUG_ON(chunk_size & (PAGE_SIZE - 1)); + + if (*flag) + set_memory_ro(chunk->start_addr, pages); + else + set_memory_rw(chunk->start_addr, pages); +} + +static int pmalloc_pool_set_protection(struct gen_pool *pool, bool protection) +{ + struct pmalloc_data *data; + struct gen_pool_chunk *chunk; + + if (unlikely(!pool)) + return -EINVAL; + + data = pool->data; + + if (unlikely(!data)) + return -EINVAL; + + if (unlikely(data->protected == protection)) { + WARN_ON(1); + return 0; + } + + data->protected = protection; + list_for_each_entry(chunk, &(pool)->chunks, next_chunk) + pmalloc_chunk_set_protection(pool, chunk, &protection); + return 0; +} + +int pmalloc_protect_pool(struct gen_pool *pool) +{ + return pmalloc_pool_set_protection(pool, true); +} + + +static void pmalloc_chunk_free(struct gen_pool *pool, + struct gen_pool_chunk *chunk, void *data) +{ + untag_chunk(chunk); + gen_pool_flush_chunk(pool, chunk); + vfree_atomic((void *)chunk->start_addr); +} + + +int pmalloc_destroy_pool(struct gen_pool *pool) +{ + struct pmalloc_data *data; + + if (unlikely(pool == NULL)) + return -EINVAL; + + data = pool->data; + + if (unlikely(data == NULL)) + return -EINVAL; + + mutex_lock(&pmalloc_mutex); + list_del(&data->node); + mutex_unlock(&pmalloc_mutex); + + if (likely(data->pool_kobject)) + pmalloc_disconnect(data, data->pool_kobject); + + pmalloc_pool_set_protection(pool, false); + gen_pool_for_each_chunk(pool, pmalloc_chunk_free, NULL); + gen_pool_destroy(pool); + kfree(data); + return 0; +} + +/* + * When the sysfs is ready to receive registrations, connect all the + * pools previously created. Also enable further pools to be connected + * right away. 
+ */ +static int __init pmalloc_late_init(void) +{ + struct pmalloc_data *data, *n; + + pmalloc_kobject = kobject_create_and_add("pmalloc", kernel_kobj); + + mutex_lock(&pmalloc_mutex); + pmalloc_list = &pmalloc_final_list; + + if (likely(pmalloc_kobject != NULL)) { + list_for_each_entry_safe(data, n, &pmalloc_tmp_list, node) { + list_move(&data->node, &pmalloc_final_list); + pmalloc_connect(data); + } + } + mutex_unlock(&pmalloc_mutex); + return 0; +} +late_initcall(pmalloc_late_init); diff --git a/mm/usercopy.c b/mm/usercopy.c index e9e9325f7638..946ce051e296 100644 --- a/mm/usercopy.c +++ b/mm/usercopy.c @@ -240,6 +240,36 @@ static inline void check_heap_object(const void *ptr, unsigned long n, } } +#ifdef CONFIG_PROTECTABLE_MEMORY + +int is_pmalloc_object(const void *ptr, const unsigned long n); + +static void check_pmalloc_object(const void *ptr, unsigned long n, + bool to_user) +{ + int retv; + + retv = is_pmalloc_object(ptr, n); + if (unlikely(retv)) { + if (unlikely(!to_user)) + usercopy_abort("pmalloc", + "trying to write to pmalloc object", + to_user, (const unsigned long)ptr, n); + if (retv < 0) + usercopy_abort("pmalloc", + "invalid pmalloc object", + to_user, (const unsigned long)ptr, n); + } +} + +#else + +static void check_pmalloc_object(const void *ptr, unsigned long n, + bool to_user) +{ +} +#endif + /* * Validates that the given object is: * - not bogus address @@ -277,5 +307,8 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user) /* Check for object in kernel to avoid text exposure. */ check_kernel_text_object((const unsigned long)ptr, n, to_user); + + /* Check if object is from a pmalloc chunk. */ + check_pmalloc_object(ptr, n, to_user); } EXPORT_SYMBOL(__check_object_size); -- 2.14.1 ^ permalink raw reply related [flat|nested] 84+ messages in thread
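One behavior of pmalloc() above is worth spelling out: when __GFP_ATOMIC is set, the allocator never calls vmalloc() to grow the pool, so an atomic-context request can only be served from chunks that already exist. pmalloc_prealloc() is there precisely to set that up. A sketch of the pattern, with an invented pool name and worst-case size:

#include <linux/mm.h>		/* PAGE_SIZE */
#include <linux/pmalloc.h>

static struct gen_pool *rules_pool;

/* Process context: sleeping is allowed, so the pool may be grown here. */
static int rules_setup(void)
{
	rules_pool = pmalloc_create_pool("rules",
					 PMALLOC_DEFAULT_ALLOC_ORDER);
	if (!rules_pool)
		return -ENOMEM;

	/* Reserve one worst-case chunk with a single vmalloc call. */
	if (!pmalloc_prealloc(rules_pool, 16 * PAGE_SIZE)) {
		pmalloc_destroy_pool(rules_pool);
		return -ENOMEM;
	}
	return 0;
}

/* Possibly atomic context: only carves from the preallocated chunk. */
static void *rules_add(size_t size)
{
	return pmalloc(rules_pool, size, GFP_ATOMIC);
}

If the reserved chunk runs out, the GFP_ATOMIC request fails with NULL instead of sleeping inside vmalloc().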
* [RFC PATCH v15 0/6] mm: security: ro protection for dynamic data @ 2018-02-11 3:19 Igor Stoppa 2018-02-11 3:19 ` Igor Stoppa 0 siblings, 1 reply; 84+ messages in thread From: Igor Stoppa @ 2018-02-11 3:19 UTC (permalink / raw) To: willy, rdunlap, corbet, keescook, mhocko, labbott, jglisse, hch Cc: cl, linux-security-module, linux-mm, linux-kernel, kernel-hardening, Igor Stoppa This patch-set introduces the possibility of protecting memory that has been allocated dynamically. The memory is managed in pools: when a memory pool is turned into R/O, all the memory that is part of it, will become R/O. A R/O pool can be destroyed, to recover its memory, but it cannot be turned back into R/W mode. This is intentional. This feature is meant for data that doesn't need further modifications after initialization. However the data might need to be released, for example as part of module unloading. To do this, the memory must first be freed, then the pool can be destroyed. An example is provided, in the form of self-testing. Changes since v14: [http://www.openwall.com/lists/kernel-hardening/2018/02/04/2] - fix various warnings from sparse - multiline comments - fix naming of headers guards - fix compilation of individual patches, for bisect - split genalloc documentation about bitmap for allocation - fix headers to match kerneldoc format for "Return:" field - fix variable naming according to coding guidelines - fix wrong default value for pmalloc Kconfig option - refreshed integration of pmalloc with hardened usercopy - removed unnecessary include that was causing compilation failures - changed license of pmalloc documentation from GPL 2.0 to CC-BY-SA-4.0 Igor Stoppa (6): genalloc: track beginning of allocations genalloc: selftest struct page: add field for vm_struct Protectable Memory Pmalloc: self-test Documentation for Pmalloc Documentation/core-api/index.rst | 1 + Documentation/core-api/pmalloc.rst | 114 ++++++++ include/linux/genalloc-selftest.h | 26 ++ include/linux/genalloc.h | 7 +- include/linux/mm_types.h | 1 + include/linux/pmalloc.h | 222 +++++++++++++++ include/linux/vmalloc.h | 1 + init/main.c | 2 + lib/Kconfig | 15 + lib/Makefile | 1 + lib/genalloc-selftest.c | 400 ++++++++++++++++++++++++++ lib/genalloc.c | 554 +++++++++++++++++++++++++++---------- mm/Kconfig | 15 + mm/Makefile | 2 + mm/pmalloc-selftest.c | 63 +++++ mm/pmalloc-selftest.h | 24 ++ mm/pmalloc.c | 499 +++++++++++++++++++++++++++++++++ mm/usercopy.c | 33 +++ mm/vmalloc.c | 18 +- 19 files changed, 1852 insertions(+), 146 deletions(-) create mode 100644 Documentation/core-api/pmalloc.rst create mode 100644 include/linux/genalloc-selftest.h create mode 100644 include/linux/pmalloc.h create mode 100644 lib/genalloc-selftest.c create mode 100644 mm/pmalloc-selftest.c create mode 100644 mm/pmalloc-selftest.h create mode 100644 mm/pmalloc.c -- 2.14.1 ^ permalink raw reply [flat|nested] 84+ messages in thread
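The series also gives each pool a userspace view: pmalloc_connect() in the patch below registers every pool under /sys/kernel/pmalloc, with read-only (mode 0400) attributes named protected, avail, size and chunks. A trivial userspace sketch for checking the state of a pool, assuming one was created with the invented name "example":

#include <stdio.h>

int main(void)
{
	char state[32];
	FILE *f = fopen("/sys/kernel/pmalloc/example/protected", "r");

	if (!f)
		return 1;
	/* The attribute reads back "protected" or "unprotected". */
	if (fgets(state, sizeof(state), f))
		printf("pool state: %s", state);
	fclose(f);
	return 0;
}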
* [PATCH 4/6] Protectable Memory 2018-02-11 3:19 [RFC PATCH v15 0/6] mm: security: ro protection for dynamic data Igor Stoppa 2018-02-11 3:19 ` Igor Stoppa (?) @ 2018-02-11 3:19 ` Igor Stoppa 0 siblings, 0 replies; 84+ messages in thread From: Igor Stoppa @ 2018-02-11 3:19 UTC (permalink / raw) To: willy, rdunlap, corbet, keescook, mhocko, labbott, jglisse, hch Cc: cl, linux-security-module, linux-mm, linux-kernel, kernel-hardening, Igor Stoppa The MMU available in many systems running Linux can often provide R/O protection to the memory pages it handles. However, the MMU-based protection works efficiently only when said pages contain exclusively data that will not need further modifications. Statically allocated variables can be segregated into a dedicated section, but this does not sit very well with dynamically allocated ones. Dynamic allocation does not currently provide any means for grouping variables in memory pages that would contain exclusively data suitable for conversion to read-only access mode. The allocator provided here (pmalloc - protectable memory allocator) introduces the concept of pools of protectable memory. A module can request a pool and then refer any allocation request to the pool handle it has received. Once all the chunks of memory associated with a specific pool are initialized, the pool can be protected. After this point, the pool can only be destroyed (it is up to the module to avoid any further references to the memory from the pool, after the destruction is invoked). The latter case is mainly meant for releasing memory when a module is unloaded. A module can have as many pools as needed, for example to support the protection of data that is initialized in sufficiently distinct phases. Signed-off-by: Igor Stoppa <igor.stoppa@huawei.com> --- include/linux/genalloc.h | 3 + include/linux/pmalloc.h | 222 +++++++++++++++ include/linux/vmalloc.h | 1 + lib/genalloc.c | 27 +++ mm/Kconfig | 6 + mm/Makefile | 1 + mm/pmalloc.c | 497 +++++++++++++++++++++++++++++++++++++++ mm/usercopy.c | 33 ++++ 8 files changed, 790 insertions(+) create mode 100644 include/linux/pmalloc.h create mode 100644 mm/pmalloc.c diff --git a/include/linux/genalloc.h b/include/linux/genalloc.h index dcaa33e74b1c..b6c4cea9fbd8 100644 --- a/include/linux/genalloc.h +++ b/include/linux/genalloc.h @@ -121,6 +121,9 @@ extern unsigned long gen_pool_alloc_algo(struct gen_pool *, size_t, extern void *gen_pool_dma_alloc(struct gen_pool *pool, size_t size, dma_addr_t *dma); extern void gen_pool_free(struct gen_pool *, unsigned long, size_t); + +extern void gen_pool_flush_chunk(struct gen_pool *pool, + struct gen_pool_chunk *chunk); extern void gen_pool_for_each_chunk(struct gen_pool *, void (*)(struct gen_pool *, struct gen_pool_chunk *, void *), void *); extern size_t gen_pool_avail(struct gen_pool *); diff --git a/include/linux/pmalloc.h b/include/linux/pmalloc.h new file mode 100644 index 000000000000..624379a937c5 --- /dev/null +++ b/include/linux/pmalloc.h @@ -0,0 +1,222 @@ +/* SPDX-License-Identifier: GPL-2.0 + * + * pmalloc.h: Header for Protectable Memory Allocator + * + * (C) Copyright 2017 Huawei Technologies Co. Ltd. + * Author: Igor Stoppa <igor.stoppa@huawei.com> + */ + +#ifndef _LINUX_PMALLOC_H +#define _LINUX_PMALLOC_H + + +#include <linux/genalloc.h> +#include <linux/string.h> + +#define PMALLOC_DEFAULT_ALLOC_ORDER (-1) + +/* + * Library for dynamic allocation of pools of memory that can be, + * after initialization, marked as read-only. 
+ * + * This is intended to complement __ro_after_init, for those cases + * where either it is not possible to know the initialization value before + * init is completed, or the amount of data is variable and can be + * determined only at run-time. + * + * ***WARNING*** + * The user of the API is expected to synchronize: + * 1) allocation, + * 2) writes to the allocated memory, + * 3) write protection of the pool, + * 4) freeing of the allocated memory, and + * 5) destruction of the pool. + * + * For a non-threaded scenario, this type of locking is not even required. + * + * Even if the library were to provide support for locking, point 2) + * would still depend on the user taking the lock. + */ + + +/** + * pmalloc_create_pool - create a new protectable memory pool + * @name: the name of the pool, enforced to be unique + * @min_alloc_order: log2 of the minimum allocation size obtainable + * from the pool + * + * Creates a new (empty) memory pool for allocation of protectable + * memory. Memory will be allocated upon request (through pmalloc). + * + * Return: pointer to the new pool upon success, otherwise NULL. + */ +struct gen_pool *pmalloc_create_pool(const char *name, + int min_alloc_order); + +/** + * is_pmalloc_object - validates the existence of an alleged object + * @ptr: address of the object + * @n: size of the object, in bytes + * + * Return: 0 if the object does not belong to pmalloc, 1 if it belongs to + * pmalloc, -1 if it partially overlaps pmalloc memory, but incorrectly. + */ +int is_pmalloc_object(const void *ptr, const unsigned long n); + +/** + * pmalloc_prealloc - tries to allocate a memory chunk of the requested size + * @pool: handle to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * + * Prepares a chunk of the requested size. + * This is intended to both minimize latency in later memory requests and + * avoid sleeping during allocation. + * Memory allocated with prealloc is stored in one single chunk, as + * opposed to what is allocated on-demand when pmalloc runs out of free + * space already existing in the pool and has to invoke vmalloc. + * One additional advantage of pre-allocating larger chunks of memory is + * that the total slack tends to be smaller. + * + * Return: true if the vmalloc call was successful, false otherwise. + */ +bool pmalloc_prealloc(struct gen_pool *pool, size_t size); + +/** + * pmalloc - allocate protectable memory from a pool + * @pool: handle to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * @gfp: flags for page allocation + * + * Allocates memory from an unprotected pool. If the pool doesn't have + * enough memory, and the request did not include GFP_ATOMIC, an attempt + * is made to add a new chunk of memory to the pool + * (a multiple of PAGE_SIZE), in order to fit the new request. + * Otherwise, NULL is returned. + * + * Return: pointer to the memory requested upon success, NULL otherwise + * (either no memory available or pool already read-only). + */ +void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp); + + +/** + * pzalloc - zero-initialized version of pmalloc + * @pool: handle to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * @gfp: flags for page allocation + * + * Executes pmalloc, initializing the memory requested to 0, + * before returning the pointer to it. 
+ * + * Return: pointer to the zeroed memory requested, upon success, NULL + * otherwise (either no memory available or pool already read-only). + */ +static inline void *pzalloc(struct gen_pool *pool, size_t size, gfp_t gfp) +{ + return pmalloc(pool, size, gfp | __GFP_ZERO); +} + +/** + * pmalloc_array - allocates an array according to the parameters + * @pool: handle to the pool to be used for memory allocation + * @n: number of elements in the array + * @size: amount of memory (in bytes) requested for each element + * @flags: flags for page allocation + * + * Executes pmalloc, if it has a chance to succeed. + * + * Return: either NULL or the pmalloc result. + */ +static inline void *pmalloc_array(struct gen_pool *pool, size_t n, + size_t size, gfp_t flags) +{ + if (unlikely(!(pool && n && size))) + return NULL; + return pmalloc(pool, n * size, flags); +} + +/** + * pcalloc - allocates a 0-initialized array according to the parameters + * @pool: handle to the pool to be used for memory allocation + * @n: number of elements in the array + * @size: amount of memory (in bytes) requested + * @flags: flags for page allocation + * + * Executes pmalloc_array, if it has a chance to succeed. + * + * Return: either NULL or the pmalloc result. + */ +static inline void *pcalloc(struct gen_pool *pool, size_t n, + size_t size, gfp_t flags) +{ + return pmalloc_array(pool, n, size, flags | __GFP_ZERO); +} + +/** + * pstrdup - duplicate a string, using pmalloc as allocator + * @pool: handle to the pool to be used for memory allocation + * @s: string to duplicate + * @gfp: flags for page allocation + * + * Generates a copy of the given string, allocating sufficient memory + * from the given pmalloc pool. + * + * Return: pointer to the replica, NULL in case of error. + */ +static inline char *pstrdup(struct gen_pool *pool, const char *s, gfp_t gfp) +{ + size_t len; + char *buf; + + if (unlikely(pool == NULL || s == NULL)) + return NULL; + + len = strlen(s) + 1; + buf = pmalloc(pool, len, gfp); + if (likely(buf)) + strncpy(buf, s, len); + return buf; +} + +/** + * pmalloc_protect_pool - turn a read/write pool read-only + * @pool: the pool to protect + * + * Write-protects all the memory chunks assigned to the pool. + * This prevents any further allocation. + * + * Return: 0 upon success, -EINVAL in abnormal cases. + */ +int pmalloc_protect_pool(struct gen_pool *pool); + +/** + * pfree - mark as unused memory that was previously in use + * @pool: handle to the pool to be used for memory allocation + * @addr: the beginning of the memory area to be freed + * + * The behavior of pfree is different, depending on the state of the + * protection. + * If the pool is not yet protected, the memory is marked as unused and + * will be available for further allocations. + * If the pool is already protected, the memory is marked as unused, but + * it will still be impossible to perform further allocation, because of + * the existing protection. + * The freed memory, in this case, will be truly released only when the + * pool is destroyed. + */ +static inline void pfree(struct gen_pool *pool, const void *addr) +{ + gen_pool_free(pool, (unsigned long)addr, 0); +} + +/** + * pmalloc_destroy_pool - destroys a pool and all the associated memory + * @pool: the pool to destroy + * + * All the memory that was allocated through pmalloc in the pool will be freed. + * + * Returns 0 upon success, -EINVAL in abnormal cases. 
+ */ +int pmalloc_destroy_pool(struct gen_pool *pool); + +#endif diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h index 1e5d8c392f15..116d280cca53 100644 --- a/include/linux/vmalloc.h +++ b/include/linux/vmalloc.h @@ -20,6 +20,7 @@ struct notifier_block; /* in notifier.h */ #define VM_UNINITIALIZED 0x00000020 /* vm_struct is not fully initialized */ #define VM_NO_GUARD 0x00000040 /* don't add guard page */ #define VM_KASAN 0x00000080 /* has allocated kasan shadow memory */ +#define VM_PMALLOC 0x00000100 /* pmalloc area - see docs */ /* bits [20..32] reserved for arch specific ioremap internals */ /* diff --git a/lib/genalloc.c b/lib/genalloc.c index 044347163acb..e40a5db89439 100644 --- a/lib/genalloc.c +++ b/lib/genalloc.c @@ -598,6 +598,33 @@ void gen_pool_free(struct gen_pool *pool, unsigned long addr, size_t size) } EXPORT_SYMBOL(gen_pool_free); + +/** + * gen_pool_flush_chunk - drops all the allocations from a specific chunk + * @pool: the generic memory pool + * @chunk: The chunk to wipe clear. + * + * This is meant to be called only while destroying a pool. It's up to the + * caller to avoid races, but really, at this point the pool should have + * already been retired and have become unavailable for any other sort of + * operation. + */ +void gen_pool_flush_chunk(struct gen_pool *pool, + struct gen_pool_chunk *chunk) +{ + size_t size; + + if (unlikely(!(pool && chunk))) + return; + + size = chunk->end_addr + 1 - chunk->start_addr; + memset(chunk->entries, 0, + DIV_ROUND_UP((size >> pool->min_alloc_order) * BITS_PER_ENTRY, + BITS_PER_BYTE)); + atomic_long_set(&chunk->avail, size); +} + + /** * gen_pool_for_each_chunk - call func for every chunk of generic memory pool + @pool: the generic memory pool diff --git a/mm/Kconfig b/mm/Kconfig index c782e8fb7235..be578fbdce6d 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -760,3 +760,9 @@ config GUP_BENCHMARK performance of get_user_pages_fast(). See tools/testing/selftests/vm/gup_benchmark.c + +config PROTECTABLE_MEMORY + bool + depends on ARCH_HAS_SET_MEMORY + select GENERIC_ALLOCATOR + default y diff --git a/mm/Makefile b/mm/Makefile index e669f02c5a54..959fdbdac118 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -65,6 +65,7 @@ obj-$(CONFIG_SPARSEMEM) += sparse.o obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o obj-$(CONFIG_SLOB) += slob.o obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o +obj-$(CONFIG_PROTECTABLE_MEMORY) += pmalloc.o obj-$(CONFIG_KSM) += ksm.o obj-$(CONFIG_PAGE_POISONING) += page_poison.o obj-$(CONFIG_SLAB) += slab.o diff --git a/mm/pmalloc.c b/mm/pmalloc.c new file mode 100644 index 000000000000..e94bfb407c92 --- /dev/null +++ b/mm/pmalloc.c @@ -0,0 +1,497 @@ +/* SPDX-License-Identifier: GPL-2.0 + * + * pmalloc.c: Protectable Memory Allocator + * + * (C) Copyright 2017 Huawei Technologies Co. Ltd. + * Author: Igor Stoppa <igor.stoppa@huawei.com> + */ + +#include <linux/printk.h> +#include <linux/init.h> +#include <linux/mm.h> +#include <linux/vmalloc.h> +#include <linux/genalloc.h> +#include <linux/kernel.h> +#include <linux/log2.h> +#include <linux/slab.h> +#include <linux/device.h> +#include <linux/atomic.h> +#include <linux/rculist.h> +#include <linux/set_memory.h> +#include <asm/cacheflush.h> +#include <asm/page.h> + +#include <linux/pmalloc.h> +/* + * pmalloc_data contains the data specific to a pmalloc pool, + * in a format compatible with the design of genalloc. + * Some of the fields are used for exposing the corresponding parameter + * to userspace, through sysfs. 
+ */ +struct pmalloc_data { + struct gen_pool *pool; /* Link back to the associated pool. */ + bool protected; /* Status of the pool: RO or RW. */ + struct kobj_attribute attr_protected; /* Sysfs attribute. */ + struct kobj_attribute attr_avail; /* Sysfs attribute. */ + struct kobj_attribute attr_size; /* Sysfs attribute. */ + struct kobj_attribute attr_chunks; /* Sysfs attribute. */ + struct kobject *pool_kobject; + struct list_head node; /* list of pools */ +}; + +static LIST_HEAD(pmalloc_final_list); +static LIST_HEAD(pmalloc_tmp_list); +static struct list_head *pmalloc_list = &pmalloc_tmp_list; +static DEFINE_MUTEX(pmalloc_mutex); +static struct kobject *pmalloc_kobject; + +static ssize_t pmalloc_pool_show_protected(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_protected); + if (data->protected) + return sprintf(buf, "protected\n"); + else + return sprintf(buf, "unprotected\n"); +} + +static ssize_t pmalloc_pool_show_avail(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_avail); + return sprintf(buf, "%lu\n", + (unsigned long)gen_pool_avail(data->pool)); +} + +static ssize_t pmalloc_pool_show_size(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_size); + return sprintf(buf, "%lu\n", + (unsigned long)gen_pool_size(data->pool)); +} + +static void pool_chunk_number(struct gen_pool *pool, + struct gen_pool_chunk *chunk, void *data) +{ + unsigned long *counter = data; + + (*counter)++; +} + +static ssize_t pmalloc_pool_show_chunks(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + unsigned long chunks_num = 0; + + data = container_of(attr, struct pmalloc_data, attr_chunks); + gen_pool_for_each_chunk(data->pool, pool_chunk_number, &chunks_num); + return sprintf(buf, "%lu\n", chunks_num); +} + +/* Exposes the pool and its attributes through sysfs. */ +static struct kobject *pmalloc_connect(struct pmalloc_data *data) +{ + const struct attribute *attrs[] = { + &data->attr_protected.attr, + &data->attr_avail.attr, + &data->attr_size.attr, + &data->attr_chunks.attr, + NULL + }; + struct kobject *kobj; + + kobj = kobject_create_and_add(data->pool->name, pmalloc_kobject); + if (unlikely(!kobj)) + return NULL; + + if (unlikely(sysfs_create_files(kobj, attrs) < 0)) { + kobject_put(kobj); + kobj = NULL; + } + return kobj; +} + +/* Removes the pool and its attributes from sysfs. */ +static void pmalloc_disconnect(struct pmalloc_data *data, + struct kobject *kobj) +{ + const struct attribute *attrs[] = { + &data->attr_protected.attr, + &data->attr_avail.attr, + &data->attr_size.attr, + &data->attr_chunks.attr, + NULL + }; + + sysfs_remove_files(kobj, attrs); + kobject_put(kobj); +} + +/* Declares an attribute of the pool. 
*/ +#define pmalloc_attr_init(data, attr_name) \ +do { \ + sysfs_attr_init(&data->attr_##attr_name.attr); \ + data->attr_##attr_name.attr.name = #attr_name; \ + data->attr_##attr_name.attr.mode = VERIFY_OCTAL_PERMISSIONS(0400); \ + data->attr_##attr_name.show = pmalloc_pool_show_##attr_name; \ +} while (0) + +struct gen_pool *pmalloc_create_pool(const char *name, int min_alloc_order) +{ + struct gen_pool *pool; + const char *pool_name; + struct pmalloc_data *data; + + if (!name) { + WARN_ON(1); + return NULL; + } + + if (min_alloc_order < 0) + min_alloc_order = ilog2(sizeof(unsigned long)); + + pool = gen_pool_create(min_alloc_order, NUMA_NO_NODE); + if (unlikely(!pool)) + return NULL; + + mutex_lock(&pmalloc_mutex); + list_for_each_entry(data, pmalloc_list, node) + if (!strcmp(name, data->pool->name)) + goto same_name_err; + + pool_name = kstrdup(name, GFP_KERNEL); + if (unlikely(!pool_name)) + goto name_alloc_err; + + data = kzalloc(sizeof(struct pmalloc_data), GFP_KERNEL); + if (unlikely(!data)) + goto data_alloc_err; + + data->protected = false; + data->pool = pool; + pmalloc_attr_init(data, protected); + pmalloc_attr_init(data, avail); + pmalloc_attr_init(data, size); + pmalloc_attr_init(data, chunks); + pool->data = data; + pool->name = pool_name; + + list_add(&data->node, pmalloc_list); + if (pmalloc_list == &pmalloc_final_list) + data->pool_kobject = pmalloc_connect(data); + mutex_unlock(&pmalloc_mutex); + return pool; + +data_alloc_err: + kfree(pool_name); +name_alloc_err: +same_name_err: + mutex_unlock(&pmalloc_mutex); + gen_pool_destroy(pool); + return NULL; +} + +static inline int check_alloc_params(struct gen_pool *pool, size_t req_size) +{ + struct pmalloc_data *data; + + if (unlikely(!req_size || !pool)) + return -1; + + data = pool->data; + + if (data == NULL) + return -1; + + if (unlikely(data->protected)) { + WARN_ON(1); + return -1; + } + return 0; +} + + +static inline bool chunk_tagging(void *chunk, bool tag) +{ + struct vm_struct *area; + struct page *page; + + if (!is_vmalloc_addr(chunk)) + return false; + + page = vmalloc_to_page(chunk); + if (unlikely(!page)) + return false; + + area = page->area; + if (tag) + area->flags |= VM_PMALLOC; + else + area->flags &= ~VM_PMALLOC; + return true; +} + + +static inline bool tag_chunk(void *chunk) +{ + return chunk_tagging(chunk, true); +} + + +static inline bool untag_chunk(void *chunk) +{ + return chunk_tagging(chunk, false); +} + +enum { + INVALID_PMALLOC_OBJECT = -1, + NOT_PMALLOC_OBJECT = 0, + VALID_PMALLOC_OBJECT = 1, +}; + +int is_pmalloc_object(const void *ptr, const unsigned long n) +{ + struct vm_struct *area; + struct page *page; + unsigned long area_start; + unsigned long area_end; + unsigned long object_start; + unsigned long object_end; + + + /* is_pmalloc_object gets called pretty late, so chances are high + * that the object is indeed of vmalloc type + */ + if (unlikely(!is_vmalloc_addr(ptr))) + return NOT_PMALLOC_OBJECT; + + page = vmalloc_to_page(ptr); + if (unlikely(!page)) + return NOT_PMALLOC_OBJECT; + + area = page->area; + + if (likely(!(area->flags & VM_PMALLOC))) + return NOT_PMALLOC_OBJECT; + + area_start = (unsigned long)area->addr; + area_end = area_start + area->nr_pages * PAGE_SIZE - 1; + object_start = (unsigned long)ptr; + object_end = object_start + n - 1; + + if (likely((area_start <= object_start) && + (object_end <= area_end))) + return VALID_PMALLOC_OBJECT; + else + return INVALID_PMALLOC_OBJECT; +} + + +bool pmalloc_prealloc(struct gen_pool *pool, size_t size) +{ + void *chunk; + size_t 
chunk_size; + int add_error; + + if (check_alloc_params(pool, size)) + return false; + + /* Expand pool */ + chunk_size = roundup(size, PAGE_SIZE); + chunk = vmalloc(chunk_size); + if (unlikely(chunk == NULL)) + return false; + + /* Locking is already done inside gen_pool_add */ + add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size, + NUMA_NO_NODE); + if (unlikely(add_error != 0)) + goto abort; + + return true; +abort: + vfree_atomic(chunk); + return false; +} + +void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp) +{ + void *chunk; + size_t chunk_size; + int add_error; + unsigned long retval; + + if (check_alloc_params(pool, size)) + return NULL; + +retry_alloc_from_pool: + retval = gen_pool_alloc(pool, size); + if (retval) + goto return_allocation; + + if (unlikely((gfp & __GFP_ATOMIC))) { + if (unlikely((gfp & __GFP_NOFAIL))) + goto retry_alloc_from_pool; + else + return NULL; + } + + /* Expand pool */ + chunk_size = roundup(size, PAGE_SIZE); + chunk = vmalloc(chunk_size); + if (unlikely(!chunk)) { + if (unlikely((gfp & __GFP_NOFAIL))) + goto retry_alloc_from_pool; + else + return NULL; + } + if (unlikely(!tag_chunk(chunk))) + goto free; + + /* Locking is already done inside gen_pool_add */ + add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size, + NUMA_NO_NODE); + if (unlikely(add_error)) + goto abort; + + retval = gen_pool_alloc(pool, size); + if (retval) { +return_allocation: + *(size_t *)retval = size; + if (gfp & __GFP_ZERO) + memset((void *)retval, 0, size); + return (void *)retval; + } + /* Here there is no test for __GFP_NOFAIL because, in case of + * concurrent allocation, one thread might add a chunk to the + * pool and this memory could be allocated by another thread + * before the first thread gets a chance to use it. + * As long as vmalloc succeeds, it's ok to retry.
+ */ + goto retry_alloc_from_pool; +abort: + untag_chunk(chunk); +free: + vfree_atomic(chunk); + return NULL; +} + +static void pmalloc_chunk_set_protection(struct gen_pool *pool, + struct gen_pool_chunk *chunk, + void *data) +{ + const bool *flag = data; + size_t chunk_size = chunk->end_addr + 1 - chunk->start_addr; + unsigned long pages = chunk_size / PAGE_SIZE; + + BUG_ON(chunk_size & (PAGE_SIZE - 1)); + + if (*flag) + set_memory_ro(chunk->start_addr, pages); + else + set_memory_rw(chunk->start_addr, pages); +} + +static int pmalloc_pool_set_protection(struct gen_pool *pool, bool protection) +{ + struct pmalloc_data *data; + struct gen_pool_chunk *chunk; + + if (unlikely(!pool)) + return -EINVAL; + + data = pool->data; + + if (unlikely(!data)) + return -EINVAL; + + if (unlikely(data->protected == protection)) { + WARN_ON(1); + return 0; + } + + data->protected = protection; + list_for_each_entry(chunk, &pool->chunks, next_chunk) + pmalloc_chunk_set_protection(pool, chunk, &protection); + return 0; +} + +int pmalloc_protect_pool(struct gen_pool *pool) +{ + return pmalloc_pool_set_protection(pool, true); +} + + +static void pmalloc_chunk_free(struct gen_pool *pool, + struct gen_pool_chunk *chunk, void *data) +{ + untag_chunk((void *)chunk->start_addr); + gen_pool_flush_chunk(pool, chunk); + vfree_atomic((void *)chunk->start_addr); +} + + +int pmalloc_destroy_pool(struct gen_pool *pool) +{ + struct pmalloc_data *data; + + if (unlikely(!pool)) + return -EINVAL; + + data = pool->data; + + if (unlikely(!data)) + return -EINVAL; + + mutex_lock(&pmalloc_mutex); + list_del(&data->node); + mutex_unlock(&pmalloc_mutex); + + if (likely(data->pool_kobject)) + pmalloc_disconnect(data, data->pool_kobject); + + pmalloc_pool_set_protection(pool, false); + gen_pool_for_each_chunk(pool, pmalloc_chunk_free, NULL); + gen_pool_destroy(pool); + kfree(data); + return 0; +} + +/* + * When sysfs is ready to receive registrations, connect all the + * pools previously created. Also enable further pools to be connected + * right away.
+ */ +static int __init pmalloc_late_init(void) +{ + struct pmalloc_data *data, *n; + + pmalloc_kobject = kobject_create_and_add("pmalloc", kernel_kobj); + + mutex_lock(&pmalloc_mutex); + pmalloc_list = &pmalloc_final_list; + + if (likely(pmalloc_kobject != NULL)) { + list_for_each_entry_safe(data, n, &pmalloc_tmp_list, node) { + list_move(&data->node, &pmalloc_final_list); + pmalloc_connect(data); + } + } + mutex_unlock(&pmalloc_mutex); + return 0; +} +late_initcall(pmalloc_late_init); diff --git a/mm/usercopy.c b/mm/usercopy.c index e9e9325f7638..946ce051e296 100644 --- a/mm/usercopy.c +++ b/mm/usercopy.c @@ -240,6 +240,36 @@ static inline void check_heap_object(const void *ptr, unsigned long n, } } +#ifdef CONFIG_PROTECTABLE_MEMORY + +int is_pmalloc_object(const void *ptr, const unsigned long n); + +static void check_pmalloc_object(const void *ptr, unsigned long n, + bool to_user) +{ + int retv; + + retv = is_pmalloc_object(ptr, n); + if (unlikely(retv)) { + if (unlikely(!to_user)) + usercopy_abort("pmalloc", + "trying to write to pmalloc object", + to_user, (const unsigned long)ptr, n); + if (retv < 0) + usercopy_abort("pmalloc", + "invalid pmalloc object", + to_user, (const unsigned long)ptr, n); + } +} + +#else + +static void check_pmalloc_object(const void *ptr, unsigned long n, + bool to_user) +{ +} +#endif + /* * Validates that the given object is: * - not bogus address @@ -277,5 +307,8 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user) /* Check for object in kernel to avoid text exposure. */ check_kernel_text_object((const unsigned long)ptr, n, to_user); + + /* Check if object is from a pmalloc chunk. */ + check_pmalloc_object(ptr, n, to_user); } EXPORT_SYMBOL(__check_object_size); -- 2.14.1 ^ permalink raw reply related [flat|nested] 84+ messages in thread
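For context, a minimal usage sketch of the API introduced by this patch, written as a hypothetical module (the pool name, the struct and the sizes are illustrative, not part of the patch). The lifecycle is: create a pool, allocate and initialize while the pool is still R/W, protect it, then free and destroy on unload.

#include <linux/init.h>
#include <linux/module.h>
#include <linux/string.h>
#include <linux/pmalloc.h>

/* Illustrative data: written once during init, then read-only. */
struct my_conf {
	int mode;
	char tag[16];
};

static struct gen_pool *conf_pool;
static struct my_conf *conf;

static int __init my_init(void)
{
	conf_pool = pmalloc_create_pool("my_conf_pool",
					PMALLOC_DEFAULT_ALLOC_ORDER);
	if (!conf_pool)
		return -ENOMEM;

	conf = pzalloc(conf_pool, sizeof(*conf), GFP_KERNEL);
	if (!conf) {
		pmalloc_destroy_pool(conf_pool);
		return -ENOMEM;
	}

	/* Write while the pool is still unprotected... */
	conf->mode = 1;
	strscpy(conf->tag, "example", sizeof(conf->tag));

	/* ...then seal it: every chunk of the pool becomes R/O. */
	return pmalloc_protect_pool(conf_pool);
}

static void __exit my_exit(void)
{
	/*
	 * Protection is one-way: reclaiming the memory means freeing
	 * the allocations and then destroying the whole pool.
	 */
	pfree(conf_pool, conf);
	pmalloc_destroy_pool(conf_pool);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");

After pmalloc_protect_pool(), any further pmalloc() on the pool fails (check_alloc_params() sees data->protected) and any write through *conf faults; the only remaining legitimate operations are reads, pfree() and pmalloc_destroy_pool().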
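The memset size in gen_pool_flush_chunk() above is easy to misread, so here is the arithmetic worked through as a small standalone C program. It assumes BITS_PER_ENTRY is 2, i.e. one busy bit plus one start-of-allocation bit per allocation unit, as kept by the genalloc tracking patch earlier in this series; without the parentheses around the shift, the expression would instead shift by min_alloc_order * BITS_PER_ENTRY and undersize the cleared area.

#include <stdio.h>
#include <stddef.h>

/* Assumed to match the genalloc tracking patch in this series:
 * two bitmap bits per allocation unit.
 */
#define BITS_PER_ENTRY	2
#define BITS_PER_BYTE	8
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	size_t chunk_size = 4096;		/* one 4 KiB page */
	unsigned int min_alloc_order = 3;	/* 8-byte granularity */

	/* Allocation units covered by the chunk's bitmap. */
	size_t entries = chunk_size >> min_alloc_order;

	/* Bytes wiped by gen_pool_flush_chunk(): shift first, then
	 * multiply, then round up to whole bytes.
	 */
	size_t bitmap_bytes = DIV_ROUND_UP(entries * BITS_PER_ENTRY,
					   BITS_PER_BYTE);

	/* Prints: 512 entries -> 128 bitmap bytes */
	printf("%zu entries -> %zu bitmap bytes\n", entries, bitmap_bytes);
	return 0;
}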
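Finally, a sketch of what the mm/usercopy.c hook means for code using such a pool, with CONFIG_HARDENED_USERCOPY enabled (hypothetical file_operations handlers; 'conf' is assumed to point into a pmalloc pool as in the sketch above). Copying a valid pmalloc object to userspace passes __check_object_size(), while any usercopy that writes into pmalloc memory aborts, whether or not the pool has been protected yet.

#include <linux/fs.h>
#include <linux/uaccess.h>

static ssize_t my_read(struct file *file, char __user *ubuf,
		       size_t len, loff_t *off)
{
	if (len < sizeof(*conf))
		return -EINVAL;

	/* to_user == true: a valid pmalloc object passes the check. */
	if (copy_to_user(ubuf, conf, sizeof(*conf)))
		return -EFAULT;
	return sizeof(*conf);
}

static ssize_t my_write(struct file *file, const char __user *ubuf,
			size_t len, loff_t *off)
{
	/*
	 * to_user == false: check_pmalloc_object() calls usercopy_abort()
	 * here, because writing into pmalloc memory through a usercopy is
	 * never legitimate - pmalloc data is written only by the kernel,
	 * while the pool is still unprotected.
	 */
	if (copy_from_user(conf, ubuf, min(len, sizeof(*conf))))
		return -EFAULT;
	return len;
}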
* [PATCH 4/6] Protectable Memory @ 2018-02-11 3:19 ` Igor Stoppa 0 siblings, 0 replies; 84+ messages in thread From: Igor Stoppa @ 2018-02-11 3:19 UTC (permalink / raw) To: willy, rdunlap, corbet, keescook, mhocko, labbott, jglisse, hch Cc: cl, linux-security-module, linux-mm, linux-kernel, kernel-hardening, Igor Stoppa The MMU available in many systems running Linux can often provide R/O protection to the memory pages it handles. However, the MMU-based protection works efficiently only when said pages contain exclusively data that will not need further modifications. Statically allocated variables can be segregated into a dedicated section, but this does not sit very well with dynamically allocated ones. Dynamic allocation does not provide, currently, any means for grouping variables in memory pages that would contain exclusively data suitable for conversion to read only access mode. The allocator here provided (pmalloc - protectable memory allocator) introduces the concept of pools of protectable memory. A module can request a pool and then refer any allocation request to the pool handler it has received. Once all the chunks of memory associated to a specific pool are initialized, the pool can be protected. After this point, the pool can only be destroyed (it is up to the module to avoid any further references to the memory from the pool, after the destruction is invoked). The latter case is mainly meant for releasing memory, when a module is unloaded. A module can have as many pools as needed, for example to support the protection of data that is initialized in sufficiently distinct phases. Signed-off-by: Igor Stoppa <igor.stoppa@huawei.com> --- include/linux/genalloc.h | 3 + include/linux/pmalloc.h | 222 +++++++++++++++++++++ include/linux/vmalloc.h | 1 + lib/genalloc.c | 27 +++ mm/Kconfig | 6 + mm/Makefile | 1 + mm/pmalloc.c | 497 +++++++++++++++++++++++++++++++++++++++++++++++ mm/usercopy.c | 33 ++++ 8 files changed, 790 insertions(+) create mode 100644 include/linux/pmalloc.h create mode 100644 mm/pmalloc.c diff --git a/include/linux/genalloc.h b/include/linux/genalloc.h index dcaa33e74b1c..b6c4cea9fbd8 100644 --- a/include/linux/genalloc.h +++ b/include/linux/genalloc.h @@ -121,6 +121,9 @@ extern unsigned long gen_pool_alloc_algo(struct gen_pool *, size_t, extern void *gen_pool_dma_alloc(struct gen_pool *pool, size_t size, dma_addr_t *dma); extern void gen_pool_free(struct gen_pool *, unsigned long, size_t); + +extern void gen_pool_flush_chunk(struct gen_pool *pool, + struct gen_pool_chunk *chunk); extern void gen_pool_for_each_chunk(struct gen_pool *, void (*)(struct gen_pool *, struct gen_pool_chunk *, void *), void *); extern size_t gen_pool_avail(struct gen_pool *); diff --git a/include/linux/pmalloc.h b/include/linux/pmalloc.h new file mode 100644 index 000000000000..624379a937c5 --- /dev/null +++ b/include/linux/pmalloc.h @@ -0,0 +1,222 @@ +/* SPDX-License-Identifier: GPL-2.0 + * + * pmalloc.h: Header for Protectable Memory Allocator + * + * (C) Copyright 2017 Huawei Technologies Co. Ltd. + * Author: Igor Stoppa <igor.stoppa@huawei.com> + */ + +#ifndef _LINUX_PMALLOC_H +#define _LINUX_PMALLOC_H + + +#include <linux/genalloc.h> +#include <linux/string.h> + +#define PMALLOC_DEFAULT_ALLOC_ORDER (-1) + +/* + * Library for dynamic allocation of pools of memory that can be, + * after initialization, marked as read-only. 
+ * + * This is intended to complement __read_only_after_init, for those cases + * where either it is not possible to know the initialization value before + * init is completed, or the amount of data is variable and can be + * determined only at run-time. + * + * ***WARNING*** + * The user of the API is expected to synchronize: + * 1) allocation, + * 2) writes to the allocated memory, + * 3) write protection of the pool, + * 4) freeing of the allocated memory, and + * 5) destruction of the pool. + * + * For a non-threaded scenario, this type of locking is not even required. + * + * Even if the library were to provide support for locking, point 2) + * would still depend on the user taking the lock. + */ + + +/** + * pmalloc_create_pool - create a new protectable memory pool + * @name: the name of the pool, enforced to be unique + * @min_alloc_order: log2 of the minimum allocation size obtainable + * from the pool + * + * Creates a new (empty) memory pool for allocation of protectable + * memory. Memory will be allocated upon request (through pmalloc). + * + * Return: pointer to the new pool upon success, otherwise a NULL. + */ +struct gen_pool *pmalloc_create_pool(const char *name, + int min_alloc_order); + +/** + * is_pmalloc_object - validates the existence of an alleged object + * @ptr: address of the object + * @n: size of the object, in bytes + * + * Return: 0 if the object does not belong to pmalloc, 1 if it belongs to + * pmalloc, -1 if it partially overlaps pmalloc meory, but incorectly. + */ +int is_pmalloc_object(const void *ptr, const unsigned long n); + +/** + * pmalloc_prealloc - tries to allocate a memory chunk of the requested size + * @pool: handle to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * + * Prepares a chunk of the requested size. + * This is intended to both minimize latency in later memory requests and + * avoid sleeping during allocation. + * Memory allocated with prealloc is stored in one single chunk, as + * opposed to what is allocated on-demand when pmalloc runs out of free + * space already existing in the pool and has to invoke vmalloc. + * One additional advantage of pre-allocating larger chunks of memory is + * that the total slack tends to be smaller. + * + * Return: true if the vmalloc call was successful, false otherwise. + */ +bool pmalloc_prealloc(struct gen_pool *pool, size_t size); + +/** + * pmalloc - allocate protectable memory from a pool + * @pool: handle to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * @gfp: flags for page allocation + * + * Allocates memory from an unprotected pool. If the pool doesn't have + * enough memory, and the request did not include GFP_ATOMIC, an attempt + * is made to add a new chunk of memory to the pool + * (a multiple of PAGE_SIZE), in order to fit the new request. + * Otherwise, NULL is returned. + * + * Return: pointer to the memory requested upon success, NULL otherwise + * (either no memory available or pool already read-only). + */ +void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp); + + +/** + * pzalloc - zero-initialized version of pmalloc + * @pool: handle to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * @gfp: flags for page allocation + * + * Executes pmalloc, initializing the memory requested to 0, + * before returning the pointer to it. 
+ * + * Return: pointer to the zeroed memory requested, upon success, NULL + * otherwise (either no memory available or pool already read-only). + */ +static inline void *pzalloc(struct gen_pool *pool, size_t size, gfp_t gfp) +{ + return pmalloc(pool, size, gfp | __GFP_ZERO); +} + +/** + * pmalloc_array - allocates an array according to the parameters + * @pool: handle to the pool to be used for memory allocation + * @n: number of elements in the array + * @size: amount of memory (in bytes) requested for each element + * @flags: flags for page allocation + * + * Executes pmalloc, if it has a chance to succeed. + * + * Return: either NULL or the pmalloc result. + */ +static inline void *pmalloc_array(struct gen_pool *pool, size_t n, + size_t size, gfp_t flags) +{ + if (unlikely(!(pool && n && size))) + return NULL; + return pmalloc(pool, n * size, flags); +} + +/** + * pcalloc - allocates a 0-initialized array according to the parameters + * @pool: handle to the pool to be used for memory allocation + * @n: number of elements in the array + * @size: amount of memory (in bytes) requested + * @flags: flags for page allocation + * + * Executes pmalloc_array, if it has a chance to succeed. + * + * Return: either NULL or the pmalloc result. + */ +static inline void *pcalloc(struct gen_pool *pool, size_t n, + size_t size, gfp_t flags) +{ + return pmalloc_array(pool, n, size, flags | __GFP_ZERO); +} + +/** + * pstrdup - duplicate a string, using pmalloc as allocator + * @pool: handle to the pool to be used for memory allocation + * @s: string to duplicate + * @gfp: flags for page allocation + * + * Generates a copy of the given string, allocating sufficient memory + * from the given pmalloc pool. + * + * Return: pointer to the replica, NULL in case of error. + */ +static inline char *pstrdup(struct gen_pool *pool, const char *s, gfp_t gfp) +{ + size_t len; + char *buf; + + if (unlikely(pool == NULL || s == NULL)) + return NULL; + + len = strlen(s) + 1; + buf = pmalloc(pool, len, gfp); + if (likely(buf)) + strncpy(buf, s, len); + return buf; +} + +/** + * pmalloc_protect_pool - turn a read/write pool read-only + * @pool: the pool to protect + * + * Write-protects all the memory chunks assigned to the pool. + * This prevents any further allocation. + * + * Return: 0 upon success, -EINVAL in abnormal cases. + */ +int pmalloc_protect_pool(struct gen_pool *pool); + +/** + * pfree - mark as unused memory that was previously in use + * @pool: handle to the pool to be used for memory allocation + * @addr: the beginning of the memory area to be freed + * + * The behavior of pfree is different, depending on the state of the + * protection. + * If the pool is not yet protected, the memory is marked as unused and + * will be available for further allocations. + * If the pool is already protected, the memory is marked as unused, but + * it will still be impossible to perform further allocation, because of + * the existing protection. + * The freed memory, in this case, will be truly released only when the + * pool is destroyed. + */ +static inline void pfree(struct gen_pool *pool, const void *addr) +{ + gen_pool_free(pool, (unsigned long)addr, 0); +} + +/** + * pmalloc_destroy_pool - destroys a pool and all the associated memory + * @pool: the pool to destroy + * + * All the memory that was allocated through pmalloc in the pool will be freed. + * + * Returns 0 upon success, -EINVAL in abnormal cases. 
+ */ +int pmalloc_destroy_pool(struct gen_pool *pool); + +#endif diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h index 1e5d8c392f15..116d280cca53 100644 --- a/include/linux/vmalloc.h +++ b/include/linux/vmalloc.h @@ -20,6 +20,7 @@ struct notifier_block; /* in notifier.h */ #define VM_UNINITIALIZED 0x00000020 /* vm_struct is not fully initialized */ #define VM_NO_GUARD 0x00000040 /* don't add guard page */ #define VM_KASAN 0x00000080 /* has allocated kasan shadow memory */ +#define VM_PMALLOC 0x00000100 /* pmalloc area - see docs */ /* bits [20..32] reserved for arch specific ioremap internals */ /* diff --git a/lib/genalloc.c b/lib/genalloc.c index 044347163acb..e40a5db89439 100644 --- a/lib/genalloc.c +++ b/lib/genalloc.c @@ -598,6 +598,33 @@ void gen_pool_free(struct gen_pool *pool, unsigned long addr, size_t size) } EXPORT_SYMBOL(gen_pool_free); + +/** + * gen_pool_flush_chunk - drops all the allocations from a specific chunk + * @pool: the generic memory pool + * @chunk: The chunk to wipe clear. + * + * This is meant to be called only while destroying a pool. It's up to the + * caller to avoid races, but really, at this point the pool should have + * already been retired and have become unavailable for any other sort of + * operation. + */ +void gen_pool_flush_chunk(struct gen_pool *pool, + struct gen_pool_chunk *chunk) +{ + size_t size; + + if (unlikely(!(pool && chunk))) + return; + + size = chunk->end_addr + 1 - chunk->start_addr; + memset(chunk->entries, 0, + DIV_ROUND_UP(size >> pool->min_alloc_order * BITS_PER_ENTRY, + BITS_PER_BYTE)); + atomic_long_set(&chunk->avail, size); +} + + /** * gen_pool_for_each_chunk - call func for every chunk of generic memory pool * @pool: the generic memory pool diff --git a/mm/Kconfig b/mm/Kconfig index c782e8fb7235..be578fbdce6d 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -760,3 +760,9 @@ config GUP_BENCHMARK performance of get_user_pages_fast(). See tools/testing/selftests/vm/gup_benchmark.c + +config PROTECTABLE_MEMORY + bool + depends on ARCH_HAS_SET_MEMORY + select GENERIC_ALLOCATOR + default y diff --git a/mm/Makefile b/mm/Makefile index e669f02c5a54..959fdbdac118 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -65,6 +65,7 @@ obj-$(CONFIG_SPARSEMEM) += sparse.o obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o obj-$(CONFIG_SLOB) += slob.o obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o +obj-$(CONFIG_PROTECTABLE_MEMORY) += pmalloc.o obj-$(CONFIG_KSM) += ksm.o obj-$(CONFIG_PAGE_POISONING) += page_poison.o obj-$(CONFIG_SLAB) += slab.o diff --git a/mm/pmalloc.c b/mm/pmalloc.c new file mode 100644 index 000000000000..e94bfb407c92 --- /dev/null +++ b/mm/pmalloc.c @@ -0,0 +1,497 @@ +/* SPDX-License-Identifier: GPL-2.0 + * + * pmalloc.c: Protectable Memory Allocator + * + * (C) Copyright 2017 Huawei Technologies Co. Ltd. + * Author: Igor Stoppa <igor.stoppa@huawei.com> + */ + +#include <linux/printk.h> +#include <linux/init.h> +#include <linux/mm.h> +#include <linux/vmalloc.h> +#include <linux/genalloc.h> +#include <linux/kernel.h> +#include <linux/log2.h> +#include <linux/slab.h> +#include <linux/device.h> +#include <linux/atomic.h> +#include <linux/rculist.h> +#include <linux/set_memory.h> +#include <asm/cacheflush.h> +#include <asm/page.h> + +#include <linux/pmalloc.h> +/* + * pmalloc_data contains the data specific to a pmalloc pool, + * in a format compatible with the design of gen_alloc. + * Some of the fields are used for exposing the corresponding parameter + * to userspace, through sysfs. 
+ */ +struct pmalloc_data { + struct gen_pool *pool; /* Link back to the associated pool. */ + bool protected; /* Status of the pool: RO or RW. */ + struct kobj_attribute attr_protected; /* Sysfs attribute. */ + struct kobj_attribute attr_avail; /* Sysfs attribute. */ + struct kobj_attribute attr_size; /* Sysfs attribute. */ + struct kobj_attribute attr_chunks; /* Sysfs attribute. */ + struct kobject *pool_kobject; + struct list_head node; /* list of pools */ +}; + +static LIST_HEAD(pmalloc_final_list); +static LIST_HEAD(pmalloc_tmp_list); +static struct list_head *pmalloc_list = &pmalloc_tmp_list; +static DEFINE_MUTEX(pmalloc_mutex); +static struct kobject *pmalloc_kobject; + +static ssize_t pmalloc_pool_show_protected(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_protected); + if (data->protected) + return sprintf(buf, "protected\n"); + else + return sprintf(buf, "unprotected\n"); +} + +static ssize_t pmalloc_pool_show_avail(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_avail); + return sprintf(buf, "%lu\n", + (unsigned long)gen_pool_avail(data->pool)); +} + +static ssize_t pmalloc_pool_show_size(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_size); + return sprintf(buf, "%lu\n", + (unsigned long)gen_pool_size(data->pool)); +} + +static void pool_chunk_number(struct gen_pool *pool, + struct gen_pool_chunk *chunk, void *data) +{ + unsigned long *counter = data; + + (*counter)++; +} + +static ssize_t pmalloc_pool_show_chunks(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + unsigned long chunks_num = 0; + + data = container_of(attr, struct pmalloc_data, attr_chunks); + gen_pool_for_each_chunk(data->pool, pool_chunk_number, &chunks_num); + return sprintf(buf, "%lu\n", chunks_num); +} + +/* Exposes the pool and its attributes through sysfs. */ +static struct kobject *pmalloc_connect(struct pmalloc_data *data) +{ + const struct attribute *attrs[] = { + &data->attr_protected.attr, + &data->attr_avail.attr, + &data->attr_size.attr, + &data->attr_chunks.attr, + NULL + }; + struct kobject *kobj; + + kobj = kobject_create_and_add(data->pool->name, pmalloc_kobject); + if (unlikely(!kobj)) + return NULL; + + if (unlikely(sysfs_create_files(kobj, attrs) < 0)) { + kobject_put(kobj); + kobj = NULL; + } + return kobj; +} + +/* Removes the pool and its attributes from sysfs. */ +static void pmalloc_disconnect(struct pmalloc_data *data, + struct kobject *kobj) +{ + const struct attribute *attrs[] = { + &data->attr_protected.attr, + &data->attr_avail.attr, + &data->attr_size.attr, + &data->attr_chunks.attr, + NULL + }; + + sysfs_remove_files(kobj, attrs); + kobject_put(kobj); +} + +/* Declares an attribute of the pool. 
*/ +#define pmalloc_attr_init(data, attr_name) \ +do { \ + sysfs_attr_init(&data->attr_##attr_name.attr); \ + data->attr_##attr_name.attr.name = #attr_name; \ + data->attr_##attr_name.attr.mode = VERIFY_OCTAL_PERMISSIONS(0400); \ + data->attr_##attr_name.show = pmalloc_pool_show_##attr_name; \ +} while (0) + +struct gen_pool *pmalloc_create_pool(const char *name, int min_alloc_order) +{ + struct gen_pool *pool; + const char *pool_name; + struct pmalloc_data *data; + + if (!name) { + WARN_ON(1); + return NULL; + } + + if (min_alloc_order < 0) + min_alloc_order = ilog2(sizeof(unsigned long)); + + pool = gen_pool_create(min_alloc_order, NUMA_NO_NODE); + if (unlikely(!pool)) + return NULL; + + mutex_lock(&pmalloc_mutex); + list_for_each_entry(data, pmalloc_list, node) + if (!strcmp(name, data->pool->name)) + goto same_name_err; + + pool_name = kstrdup(name, GFP_KERNEL); + if (unlikely(!pool_name)) + goto name_alloc_err; + + data = kzalloc(sizeof(struct pmalloc_data), GFP_KERNEL); + if (unlikely(!data)) + goto data_alloc_err; + + data->protected = false; + data->pool = pool; + pmalloc_attr_init(data, protected); + pmalloc_attr_init(data, avail); + pmalloc_attr_init(data, size); + pmalloc_attr_init(data, chunks); + pool->data = data; + pool->name = pool_name; + + list_add(&data->node, pmalloc_list); + if (pmalloc_list == &pmalloc_final_list) + data->pool_kobject = pmalloc_connect(data); + mutex_unlock(&pmalloc_mutex); + return pool; + +data_alloc_err: + kfree(pool_name); +name_alloc_err: +same_name_err: + mutex_unlock(&pmalloc_mutex); + gen_pool_destroy(pool); + return NULL; +} + +static inline int check_alloc_params(struct gen_pool *pool, size_t req_size) +{ + struct pmalloc_data *data; + + if (unlikely(!req_size || !pool)) + return -1; + + data = pool->data; + + if (data == NULL) + return -1; + + if (unlikely(data->protected)) { + WARN_ON(1); + return -1; + } + return 0; +} + + +static inline bool chunk_tagging(void *chunk, bool tag) +{ + struct vm_struct *area; + struct page *page; + + if (!is_vmalloc_addr(chunk)) + return false; + + page = vmalloc_to_page(chunk); + if (unlikely(!page)) + return false; + + area = page->area; + if (tag) + area->flags |= VM_PMALLOC; + else + area->flags &= ~VM_PMALLOC; + return true; +} + + +static inline bool tag_chunk(void *chunk) +{ + return chunk_tagging(chunk, true); +} + + +static inline bool untag_chunk(void *chunk) +{ + return chunk_tagging(chunk, false); +} + +enum { + INVALID_PMALLOC_OBJECT = -1, + NOT_PMALLOC_OBJECT = 0, + VALID_PMALLOC_OBJECT = 1, +}; + +int is_pmalloc_object(const void *ptr, const unsigned long n) +{ + struct vm_struct *area; + struct page *page; + unsigned long area_start; + unsigned long area_end; + unsigned long object_start; + unsigned long object_end; + + + /* is_pmalloc_object gets called pretty late, so chances are high + * that the object is indeed of vmalloc type + */ + if (unlikely(!is_vmalloc_addr(ptr))) + return NOT_PMALLOC_OBJECT; + + page = vmalloc_to_page(ptr); + if (unlikely(!page)) + return NOT_PMALLOC_OBJECT; + + area = page->area; + + if (likely(!(area->flags & VM_PMALLOC))) + return NOT_PMALLOC_OBJECT; + + area_start = (unsigned long)area->addr; + area_end = area_start + area->nr_pages * PAGE_SIZE - 1; + object_start = (unsigned long)ptr; + object_end = object_start + n - 1; + + if (likely((area_start <= object_start) && + (object_end <= area_end))) + return VALID_PMALLOC_OBJECT; + else + return INVALID_PMALLOC_OBJECT; +} + + +bool pmalloc_prealloc(struct gen_pool *pool, size_t size) +{ + void *chunk; + size_t 
chunk_size; + bool add_error; + + if (check_alloc_params(pool, size)) + return false; + + /* Expand pool */ + chunk_size = roundup(size, PAGE_SIZE); + chunk = vmalloc(chunk_size); + if (unlikely(chunk == NULL)) + return false; + + /* Locking is already done inside gen_pool_add */ + add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size, + NUMA_NO_NODE); + if (unlikely(add_error != 0)) + goto abort; + + return true; +abort: + vfree_atomic(chunk); + return false; + +} + +void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp) +{ + void *chunk; + size_t chunk_size; + bool add_error; + unsigned long retval; + + if (check_alloc_params(pool, size)) + return NULL; + +retry_alloc_from_pool: + retval = gen_pool_alloc(pool, size); + if (retval) + goto return_allocation; + + if (unlikely((gfp & __GFP_ATOMIC))) { + if (unlikely((gfp & __GFP_NOFAIL))) + goto retry_alloc_from_pool; + else + return NULL; + } + + /* Expand pool */ + chunk_size = roundup(size, PAGE_SIZE); + chunk = vmalloc(chunk_size); + if (unlikely(!chunk)) { + if (unlikely((gfp & __GFP_NOFAIL))) + goto retry_alloc_from_pool; + else + return NULL; + } + if (unlikely(!tag_chunk(chunk))) + goto free; + + /* Locking is already done inside gen_pool_add */ + add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size, + NUMA_NO_NODE); + if (unlikely(add_error)) + goto abort; + + retval = gen_pool_alloc(pool, size); + if (retval) { +return_allocation: + *(size_t *)retval = size; + if (gfp & __GFP_ZERO) + memset((void *)retval, 0, size); + return (void *)retval; + } + /* Here there is no test for __GFP_NO_FAIL because, in case of + * concurrent allocation, one thread might add a chunk to the + * pool and this memory could be allocated by another thread, + * before the first thread gets a chance to use it. + * As long as vmalloc succeeds, it's ok to retry. 
+ */ + goto retry_alloc_from_pool; +abort: + untag_chunk(chunk); +free: + vfree_atomic(chunk); + return NULL; +} + +static void pmalloc_chunk_set_protection(struct gen_pool *pool, + struct gen_pool_chunk *chunk, + void *data) +{ + const bool *flag = data; + size_t chunk_size = chunk->end_addr + 1 - chunk->start_addr; + unsigned long pages = chunk_size / PAGE_SIZE; + + BUG_ON(chunk_size & (PAGE_SIZE - 1)); + + if (*flag) + set_memory_ro(chunk->start_addr, pages); + else + set_memory_rw(chunk->start_addr, pages); +} + +static int pmalloc_pool_set_protection(struct gen_pool *pool, bool protection) +{ + struct pmalloc_data *data; + struct gen_pool_chunk *chunk; + + if (unlikely(!pool)) + return -EINVAL; + + data = pool->data; + + if (unlikely(!data)) + return -EINVAL; + + if (unlikely(data->protected == protection)) { + WARN_ON(1); + return 0; + } + + data->protected = protection; + list_for_each_entry(chunk, &pool->chunks, next_chunk) + pmalloc_chunk_set_protection(pool, chunk, &protection); + return 0; +} + +int pmalloc_protect_pool(struct gen_pool *pool) +{ + return pmalloc_pool_set_protection(pool, true); +} + + +static void pmalloc_chunk_free(struct gen_pool *pool, + struct gen_pool_chunk *chunk, void *data) +{ + untag_chunk((void *)chunk->start_addr); + gen_pool_flush_chunk(pool, chunk); + vfree_atomic((void *)chunk->start_addr); +} + + +int pmalloc_destroy_pool(struct gen_pool *pool) +{ + struct pmalloc_data *data; + + if (unlikely(pool == NULL)) + return -EINVAL; + + data = pool->data; + + if (unlikely(data == NULL)) + return -EINVAL; + + mutex_lock(&pmalloc_mutex); + list_del(&data->node); + mutex_unlock(&pmalloc_mutex); + + if (likely(data->pool_kobject)) + pmalloc_disconnect(data, data->pool_kobject); + + pmalloc_pool_set_protection(pool, false); + gen_pool_for_each_chunk(pool, pmalloc_chunk_free, NULL); + gen_pool_destroy(pool); + kfree(data); + return 0; +} + +/* + * When the sysfs is ready to receive registrations, connect all the + * pools previously created. Also enable further pools to be connected + * right away.
+ */ +static int __init pmalloc_late_init(void) +{ + struct pmalloc_data *data, *n; + + pmalloc_kobject = kobject_create_and_add("pmalloc", kernel_kobj); + + mutex_lock(&pmalloc_mutex); + pmalloc_list = &pmalloc_final_list; + + if (likely(pmalloc_kobject != NULL)) { + list_for_each_entry_safe(data, n, &pmalloc_tmp_list, node) { + list_move(&data->node, &pmalloc_final_list); + pmalloc_connect(data); + } + } + mutex_unlock(&pmalloc_mutex); + return 0; +} +late_initcall(pmalloc_late_init); diff --git a/mm/usercopy.c b/mm/usercopy.c index e9e9325f7638..946ce051e296 100644 --- a/mm/usercopy.c +++ b/mm/usercopy.c @@ -240,6 +240,36 @@ static inline void check_heap_object(const void *ptr, unsigned long n, } } +#ifdef CONFIG_PROTECTABLE_MEMORY + +int is_pmalloc_object(const void *ptr, const unsigned long n); + +static void check_pmalloc_object(const void *ptr, unsigned long n, + bool to_user) +{ + int retv; + + retv = is_pmalloc_object(ptr, n); + if (unlikely(retv)) { + if (unlikely(!to_user)) + usercopy_abort("pmalloc", + "trying to write to pmalloc object", + to_user, (const unsigned long)ptr, n); + if (retv < 0) + usercopy_abort("pmalloc", + "invalid pmalloc object", + to_user, (const unsigned long)ptr, n); + } +} + +#else + +static void check_pmalloc_object(const void *ptr, unsigned long n, + bool to_user) +{ +} +#endif + /* * Validates that the given object is: * - not bogus address @@ -277,5 +307,8 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user) /* Check for object in kernel to avoid text exposure. */ check_kernel_text_object((const unsigned long)ptr, n, to_user); + + /* Check if object is from a pmalloc chunk. */ + check_pmalloc_object(ptr, n, to_user); } EXPORT_SYMBOL(__check_object_size); -- 2.14.1 ^ permalink raw reply related [flat|nested] 84+ messages in thread
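[Editor's note] The API above composes into a simple write-once lifecycle: create a pool, allocate and initialize, protect, then destroy on teardown. Below is a minimal sketch of a client module, based only on the calls shown in this patch; the fw_* identifiers, the pool name and the struct layout are invented for illustration and are not part of the series.

#include <linux/module.h>
#include <linux/pmalloc.h>

struct fw_config {
	unsigned long features;
	char *version;
};

static struct gen_pool *fw_pool;
static struct fw_config *cfg;

static int __init fw_example_init(void)
{
	fw_pool = pmalloc_create_pool("fw_cfg", PMALLOC_DEFAULT_ALLOC_ORDER);
	if (!fw_pool)
		return -ENOMEM;

	cfg = pzalloc(fw_pool, sizeof(*cfg), GFP_KERNEL);
	if (!cfg)
		goto err;

	/* Initialization phase: the pool is still R/W. */
	cfg->features = 0xff;
	cfg->version = pstrdup(fw_pool, "1.0", GFP_KERNEL);
	if (!cfg->version)
		goto err;

	/* From here on, any write to *cfg or cfg->version faults. */
	pmalloc_protect_pool(fw_pool);
	return 0;
err:
	pmalloc_destroy_pool(fw_pool);
	return -ENOMEM;
}

static void __exit fw_example_exit(void)
{
	/* Drop all references first: destruction releases the chunks. */
	pmalloc_destroy_pool(fw_pool);
}

module_init(fw_example_init);
module_exit(fw_example_exit);
MODULE_LICENSE("GPL");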
* [PATCH 4/6] Protectable Memory @ 2018-02-11 3:19 ` Igor Stoppa 0 siblings, 0 replies; 84+ messages in thread From: Igor Stoppa @ 2018-02-11 3:19 UTC (permalink / raw) To: linux-security-module The MMU available in many systems running Linux can often provide R/O protection to the memory pages it handles. However, the MMU-based protection works efficiently only when said pages contain exclusively data that will not need further modifications. Statically allocated variables can be segregated into a dedicated section, but this approach does not extend to dynamically allocated ones. Dynamic allocation currently provides no means of grouping variables in memory pages that would contain exclusively data suitable for conversion to read-only access mode. The allocator provided here (pmalloc - protectable memory allocator) introduces the concept of pools of protectable memory. A module can request a pool and then direct any allocation request to the pool handle it has received. Once all the chunks of memory associated with a specific pool are initialized, the pool can be protected. After this point, the pool can only be destroyed (it is up to the module to avoid any further references to the memory from the pool, once destruction has been invoked). The latter case is mainly meant for releasing memory when a module is unloaded. A module can have as many pools as needed, for example to support the protection of data that is initialized in sufficiently distinct phases. Signed-off-by: Igor Stoppa <igor.stoppa@huawei.com> --- include/linux/genalloc.h | 3 + include/linux/pmalloc.h | 222 +++++++++++++++++++++ include/linux/vmalloc.h | 1 + lib/genalloc.c | 27 +++ mm/Kconfig | 6 + mm/Makefile | 1 + mm/pmalloc.c | 497 +++++++++++++++++++++++++++++++++++++++++++++++ mm/usercopy.c | 33 ++++ 8 files changed, 790 insertions(+) create mode 100644 include/linux/pmalloc.h create mode 100644 mm/pmalloc.c diff --git a/include/linux/genalloc.h b/include/linux/genalloc.h index dcaa33e74b1c..b6c4cea9fbd8 100644 --- a/include/linux/genalloc.h +++ b/include/linux/genalloc.h @@ -121,6 +121,9 @@ extern unsigned long gen_pool_alloc_algo(struct gen_pool *, size_t, extern void *gen_pool_dma_alloc(struct gen_pool *pool, size_t size, dma_addr_t *dma); extern void gen_pool_free(struct gen_pool *, unsigned long, size_t); + +extern void gen_pool_flush_chunk(struct gen_pool *pool, + struct gen_pool_chunk *chunk); extern void gen_pool_for_each_chunk(struct gen_pool *, void (*)(struct gen_pool *, struct gen_pool_chunk *, void *), void *); extern size_t gen_pool_avail(struct gen_pool *); diff --git a/include/linux/pmalloc.h b/include/linux/pmalloc.h new file mode 100644 index 000000000000..624379a937c5 --- /dev/null +++ b/include/linux/pmalloc.h @@ -0,0 +1,222 @@ +/* SPDX-License-Identifier: GPL-2.0 + * + * pmalloc.h: Header for Protectable Memory Allocator + * + * (C) Copyright 2017 Huawei Technologies Co. Ltd. + * Author: Igor Stoppa <igor.stoppa@huawei.com> + */ + +#ifndef _LINUX_PMALLOC_H +#define _LINUX_PMALLOC_H + + +#include <linux/genalloc.h> +#include <linux/string.h> + +#define PMALLOC_DEFAULT_ALLOC_ORDER (-1) + +/* + * Library for dynamic allocation of pools of memory that can be, + * after initialization, marked as read-only. + * + * This is intended to complement __ro_after_init, for those cases + * where either it is not possible to know the initialization value before + * init is completed, or the amount of data is variable and can be + * determined only at run-time.
+ * + * ***WARNING*** + * The user of the API is expected to synchronize: + * 1) allocation, + * 2) writes to the allocated memory, + * 3) write protection of the pool, + * 4) freeing of the allocated memory, and + * 5) destruction of the pool. + * + * For a non-threaded scenario, this type of locking is not even required. + * + * Even if the library were to provide support for locking, point 2) + * would still depend on the user taking the lock. + */ + + +/** + * pmalloc_create_pool - create a new protectable memory pool + * @name: the name of the pool, enforced to be unique + * @min_alloc_order: log2 of the minimum allocation size obtainable + * from the pool + * + * Creates a new (empty) memory pool for allocation of protectable + * memory. Memory will be allocated upon request (through pmalloc). + * + * Return: pointer to the new pool upon success, otherwise NULL. + */ +struct gen_pool *pmalloc_create_pool(const char *name, + int min_alloc_order); + +/** + * is_pmalloc_object - validates the existence of an alleged object + * @ptr: address of the object + * @n: size of the object, in bytes + * + * Return: 0 if the object does not belong to pmalloc, 1 if it belongs to + * pmalloc, -1 if it partially overlaps pmalloc memory, but incorrectly. + */ +int is_pmalloc_object(const void *ptr, const unsigned long n); + +/** + * pmalloc_prealloc - tries to allocate a memory chunk of the requested size + * @pool: handle to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * + * Prepares a chunk of the requested size. + * This is intended to both minimize latency in later memory requests and + * avoid sleeping during allocation. + * Memory allocated with prealloc is stored in a single chunk, as + * opposed to what is allocated on-demand when pmalloc runs out of free + * space already existing in the pool and has to invoke vmalloc. + * One additional advantage of pre-allocating larger chunks of memory is + * that the total slack tends to be smaller. + * + * Return: true if the vmalloc call was successful, false otherwise. + */ +bool pmalloc_prealloc(struct gen_pool *pool, size_t size); + +/** + * pmalloc - allocate protectable memory from a pool + * @pool: handle to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * @gfp: flags for page allocation + * + * Allocates memory from an unprotected pool. If the pool doesn't have + * enough memory, and the request did not include GFP_ATOMIC, an attempt + * is made to add a new chunk of memory to the pool + * (a multiple of PAGE_SIZE), in order to fit the new request. + * Otherwise, NULL is returned. + * + * Return: pointer to the memory requested upon success, NULL otherwise + * (either no memory available or pool already read-only). + */ +void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp); + + +/** + * pzalloc - zero-initialized version of pmalloc + * @pool: handle to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * @gfp: flags for page allocation + * + * Executes pmalloc, initializing the memory requested to 0, + * before returning the pointer to it. + * + * Return: pointer to the zeroed memory requested, upon success, NULL + * otherwise (either no memory available or pool already read-only).
+ */ +static inline void *pzalloc(struct gen_pool *pool, size_t size, gfp_t gfp) +{ + return pmalloc(pool, size, gfp | __GFP_ZERO); +} + +/** + * pmalloc_array - allocates an array according to the parameters + * @pool: handle to the pool to be used for memory allocation + * @n: number of elements in the array + * @size: amount of memory (in bytes) requested for each element + * @flags: flags for page allocation + * + * Executes pmalloc, if it has a chance to succeed. + * + * Return: either NULL or the pmalloc result. + */ +static inline void *pmalloc_array(struct gen_pool *pool, size_t n, + size_t size, gfp_t flags) +{ + if (unlikely(!(pool && n && size))) + return NULL; + return pmalloc(pool, n * size, flags); +} + +/** + * pcalloc - allocates a 0-initialized array according to the parameters + * @pool: handle to the pool to be used for memory allocation + * @n: number of elements in the array + * @size: amount of memory (in bytes) requested for each element + * @flags: flags for page allocation + * + * Executes pmalloc_array, if it has a chance to succeed. + * + * Return: either NULL or the pmalloc result. + */ +static inline void *pcalloc(struct gen_pool *pool, size_t n, + size_t size, gfp_t flags) +{ + return pmalloc_array(pool, n, size, flags | __GFP_ZERO); +} + +/** + * pstrdup - duplicate a string, using pmalloc as allocator + * @pool: handle to the pool to be used for memory allocation + * @s: string to duplicate + * @gfp: flags for page allocation + * + * Generates a copy of the given string, allocating sufficient memory + * from the given pmalloc pool. + * + * Return: pointer to the replica, NULL in case of error. + */ +static inline char *pstrdup(struct gen_pool *pool, const char *s, gfp_t gfp) +{ + size_t len; + char *buf; + + if (unlikely(pool == NULL || s == NULL)) + return NULL; + + len = strlen(s) + 1; + buf = pmalloc(pool, len, gfp); + if (likely(buf)) + strncpy(buf, s, len); + return buf; +} + +/** + * pmalloc_protect_pool - turn a read/write pool read-only + * @pool: the pool to protect + * + * Write-protects all the memory chunks assigned to the pool. + * This prevents any further allocation. + * + * Return: 0 upon success, -EINVAL in abnormal cases. + */ +int pmalloc_protect_pool(struct gen_pool *pool); + +/** + * pfree - mark as unused memory that was previously in use + * @pool: handle to the pool to be used for memory allocation + * @addr: the beginning of the memory area to be freed + * + * The behavior of pfree depends on the protection state of the pool. + * If the pool is not yet protected, the memory is marked as unused and + * will be available for further allocations. + * If the pool is already protected, the memory is marked as unused, but + * it will still be impossible to perform further allocation, because of + * the existing protection. + * The freed memory, in this case, will be truly released only when the + * pool is destroyed. + */ +static inline void pfree(struct gen_pool *pool, const void *addr) +{ + gen_pool_free(pool, (unsigned long)addr, 0); +} + +/** + * pmalloc_destroy_pool - destroys a pool and all the associated memory + * @pool: the pool to destroy + * + * All the memory that was allocated through pmalloc in the pool will be freed. + * + * Return: 0 upon success, -EINVAL in abnormal cases.
+ */ +int pmalloc_destroy_pool(struct gen_pool *pool); + +#endif diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h index 1e5d8c392f15..116d280cca53 100644 --- a/include/linux/vmalloc.h +++ b/include/linux/vmalloc.h @@ -20,6 +20,7 @@ struct notifier_block; /* in notifier.h */ #define VM_UNINITIALIZED 0x00000020 /* vm_struct is not fully initialized */ #define VM_NO_GUARD 0x00000040 /* don't add guard page */ #define VM_KASAN 0x00000080 /* has allocated kasan shadow memory */ +#define VM_PMALLOC 0x00000100 /* pmalloc area - see docs */ /* bits [20..32] reserved for arch specific ioremap internals */ /* diff --git a/lib/genalloc.c b/lib/genalloc.c index 044347163acb..e40a5db89439 100644 --- a/lib/genalloc.c +++ b/lib/genalloc.c @@ -598,6 +598,33 @@ void gen_pool_free(struct gen_pool *pool, unsigned long addr, size_t size) } EXPORT_SYMBOL(gen_pool_free); + +/** + * gen_pool_flush_chunk - drops all the allocations from a specific chunk + * @pool: the generic memory pool + * @chunk: The chunk to wipe clear. + * + * This is meant to be called only while destroying a pool. It is up to + * the caller to avoid races; by this point the pool should already have + * been retired and become unavailable for any other sort of operation. + */ +void gen_pool_flush_chunk(struct gen_pool *pool, + struct gen_pool_chunk *chunk) +{ + size_t size; + + if (unlikely(!(pool && chunk))) + return; + + size = chunk->end_addr + 1 - chunk->start_addr; + memset(chunk->entries, 0, + DIV_ROUND_UP((size >> pool->min_alloc_order) * BITS_PER_ENTRY, + BITS_PER_BYTE)); + atomic_long_set(&chunk->avail, size); +} + + /** * gen_pool_for_each_chunk - call func for every chunk of generic memory pool * @pool: the generic memory pool diff --git a/mm/Kconfig b/mm/Kconfig index c782e8fb7235..be578fbdce6d 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -760,3 +760,9 @@ config GUP_BENCHMARK performance of get_user_pages_fast(). See tools/testing/selftests/vm/gup_benchmark.c + +config PROTECTABLE_MEMORY + bool + depends on ARCH_HAS_SET_MEMORY + select GENERIC_ALLOCATOR + default y diff --git a/mm/Makefile b/mm/Makefile index e669f02c5a54..959fdbdac118 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -65,6 +65,7 @@ obj-$(CONFIG_SPARSEMEM) += sparse.o obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o obj-$(CONFIG_SLOB) += slob.o obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o +obj-$(CONFIG_PROTECTABLE_MEMORY) += pmalloc.o obj-$(CONFIG_KSM) += ksm.o obj-$(CONFIG_PAGE_POISONING) += page_poison.o obj-$(CONFIG_SLAB) += slab.o diff --git a/mm/pmalloc.c b/mm/pmalloc.c new file mode 100644 index 000000000000..e94bfb407c92 --- /dev/null +++ b/mm/pmalloc.c @@ -0,0 +1,497 @@ +/* SPDX-License-Identifier: GPL-2.0 + * + * pmalloc.c: Protectable Memory Allocator + * + * (C) Copyright 2017 Huawei Technologies Co. Ltd. + * Author: Igor Stoppa <igor.stoppa@huawei.com> + */ + +#include <linux/printk.h> +#include <linux/init.h> +#include <linux/mm.h> +#include <linux/vmalloc.h> +#include <linux/genalloc.h> +#include <linux/kernel.h> +#include <linux/log2.h> +#include <linux/slab.h> +#include <linux/device.h> +#include <linux/atomic.h> +#include <linux/rculist.h> +#include <linux/set_memory.h> +#include <asm/cacheflush.h> +#include <asm/page.h> + +#include <linux/pmalloc.h> +/* + * pmalloc_data contains the data specific to a pmalloc pool, + * in a format compatible with the design of genalloc. + * Some of the fields are used for exposing the corresponding parameter + * to userspace, through sysfs.
+ */ +struct pmalloc_data { + struct gen_pool *pool; /* Link back to the associated pool. */ + bool protected; /* Status of the pool: RO or RW. */ + struct kobj_attribute attr_protected; /* Sysfs attribute. */ + struct kobj_attribute attr_avail; /* Sysfs attribute. */ + struct kobj_attribute attr_size; /* Sysfs attribute. */ + struct kobj_attribute attr_chunks; /* Sysfs attribute. */ + struct kobject *pool_kobject; + struct list_head node; /* list of pools */ +}; + +static LIST_HEAD(pmalloc_final_list); +static LIST_HEAD(pmalloc_tmp_list); +static struct list_head *pmalloc_list = &pmalloc_tmp_list; +static DEFINE_MUTEX(pmalloc_mutex); +static struct kobject *pmalloc_kobject; + +static ssize_t pmalloc_pool_show_protected(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_protected); + if (data->protected) + return sprintf(buf, "protected\n"); + else + return sprintf(buf, "unprotected\n"); +} + +static ssize_t pmalloc_pool_show_avail(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_avail); + return sprintf(buf, "%lu\n", + (unsigned long)gen_pool_avail(data->pool)); +} + +static ssize_t pmalloc_pool_show_size(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_size); + return sprintf(buf, "%lu\n", + (unsigned long)gen_pool_size(data->pool)); +} + +static void pool_chunk_number(struct gen_pool *pool, + struct gen_pool_chunk *chunk, void *data) +{ + unsigned long *counter = data; + + (*counter)++; +} + +static ssize_t pmalloc_pool_show_chunks(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + unsigned long chunks_num = 0; + + data = container_of(attr, struct pmalloc_data, attr_chunks); + gen_pool_for_each_chunk(data->pool, pool_chunk_number, &chunks_num); + return sprintf(buf, "%lu\n", chunks_num); +} + +/* Exposes the pool and its attributes through sysfs. */ +static struct kobject *pmalloc_connect(struct pmalloc_data *data) +{ + const struct attribute *attrs[] = { + &data->attr_protected.attr, + &data->attr_avail.attr, + &data->attr_size.attr, + &data->attr_chunks.attr, + NULL + }; + struct kobject *kobj; + + kobj = kobject_create_and_add(data->pool->name, pmalloc_kobject); + if (unlikely(!kobj)) + return NULL; + + if (unlikely(sysfs_create_files(kobj, attrs) < 0)) { + kobject_put(kobj); + kobj = NULL; + } + return kobj; +} + +/* Removes the pool and its attributes from sysfs. */ +static void pmalloc_disconnect(struct pmalloc_data *data, + struct kobject *kobj) +{ + const struct attribute *attrs[] = { + &data->attr_protected.attr, + &data->attr_avail.attr, + &data->attr_size.attr, + &data->attr_chunks.attr, + NULL + }; + + sysfs_remove_files(kobj, attrs); + kobject_put(kobj); +} + +/* Declares an attribute of the pool. 
*/ +#define pmalloc_attr_init(data, attr_name) \ +do { \ + sysfs_attr_init(&data->attr_##attr_name.attr); \ + data->attr_##attr_name.attr.name = #attr_name; \ + data->attr_##attr_name.attr.mode = VERIFY_OCTAL_PERMISSIONS(0400); \ + data->attr_##attr_name.show = pmalloc_pool_show_##attr_name; \ +} while (0) + +struct gen_pool *pmalloc_create_pool(const char *name, int min_alloc_order) +{ + struct gen_pool *pool; + const char *pool_name; + struct pmalloc_data *data; + + if (!name) { + WARN_ON(1); + return NULL; + } + + if (min_alloc_order < 0) + min_alloc_order = ilog2(sizeof(unsigned long)); + + pool = gen_pool_create(min_alloc_order, NUMA_NO_NODE); + if (unlikely(!pool)) + return NULL; + + mutex_lock(&pmalloc_mutex); + list_for_each_entry(data, pmalloc_list, node) + if (!strcmp(name, data->pool->name)) + goto same_name_err; + + pool_name = kstrdup(name, GFP_KERNEL); + if (unlikely(!pool_name)) + goto name_alloc_err; + + data = kzalloc(sizeof(struct pmalloc_data), GFP_KERNEL); + if (unlikely(!data)) + goto data_alloc_err; + + data->protected = false; + data->pool = pool; + pmalloc_attr_init(data, protected); + pmalloc_attr_init(data, avail); + pmalloc_attr_init(data, size); + pmalloc_attr_init(data, chunks); + pool->data = data; + pool->name = pool_name; + + list_add(&data->node, pmalloc_list); + if (pmalloc_list == &pmalloc_final_list) + data->pool_kobject = pmalloc_connect(data); + mutex_unlock(&pmalloc_mutex); + return pool; + +data_alloc_err: + kfree(pool_name); +name_alloc_err: +same_name_err: + mutex_unlock(&pmalloc_mutex); + gen_pool_destroy(pool); + return NULL; +} + +static inline int check_alloc_params(struct gen_pool *pool, size_t req_size) +{ + struct pmalloc_data *data; + + if (unlikely(!req_size || !pool)) + return -1; + + data = pool->data; + + if (data == NULL) + return -1; + + if (unlikely(data->protected)) { + WARN_ON(1); + return -1; + } + return 0; +} + + +static inline bool chunk_tagging(void *chunk, bool tag) +{ + struct vm_struct *area; + struct page *page; + + if (!is_vmalloc_addr(chunk)) + return false; + + page = vmalloc_to_page(chunk); + if (unlikely(!page)) + return false; + + area = page->area; + if (tag) + area->flags |= VM_PMALLOC; + else + area->flags &= ~VM_PMALLOC; + return true; +} + + +static inline bool tag_chunk(void *chunk) +{ + return chunk_tagging(chunk, true); +} + + +static inline bool untag_chunk(void *chunk) +{ + return chunk_tagging(chunk, false); +} + +enum { + INVALID_PMALLOC_OBJECT = -1, + NOT_PMALLOC_OBJECT = 0, + VALID_PMALLOC_OBJECT = 1, +}; + +int is_pmalloc_object(const void *ptr, const unsigned long n) +{ + struct vm_struct *area; + struct page *page; + unsigned long area_start; + unsigned long area_end; + unsigned long object_start; + unsigned long object_end; + + + /* is_pmalloc_object gets called pretty late, so chances are high + * that the object is indeed of vmalloc type + */ + if (unlikely(!is_vmalloc_addr(ptr))) + return NOT_PMALLOC_OBJECT; + + page = vmalloc_to_page(ptr); + if (unlikely(!page)) + return NOT_PMALLOC_OBJECT; + + area = page->area; + + if (likely(!(area->flags & VM_PMALLOC))) + return NOT_PMALLOC_OBJECT; + + area_start = (unsigned long)area->addr; + area_end = area_start + area->nr_pages * PAGE_SIZE - 1; + object_start = (unsigned long)ptr; + object_end = object_start + n - 1; + + if (likely((area_start <= object_start) && + (object_end <= area_end))) + return VALID_PMALLOC_OBJECT; + else + return INVALID_PMALLOC_OBJECT; +} + + +bool pmalloc_prealloc(struct gen_pool *pool, size_t size) +{ + void *chunk; + size_t 
chunk_size; + int add_error; + + if (check_alloc_params(pool, size)) + return false; + + /* Expand pool */ + chunk_size = roundup(size, PAGE_SIZE); + chunk = vmalloc(chunk_size); + if (unlikely(chunk == NULL)) + return false; + + /* Locking is already done inside gen_pool_add */ + add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size, + NUMA_NO_NODE); + if (unlikely(add_error != 0)) + goto abort; + + return true; +abort: + vfree_atomic(chunk); + return false; +} + +void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp) +{ + void *chunk; + size_t chunk_size; + int add_error; + unsigned long retval; + + if (check_alloc_params(pool, size)) + return NULL; + +retry_alloc_from_pool: + retval = gen_pool_alloc(pool, size); + if (retval) + goto return_allocation; + + if (unlikely((gfp & __GFP_ATOMIC))) { + if (unlikely((gfp & __GFP_NOFAIL))) + goto retry_alloc_from_pool; + else + return NULL; + } + + /* Expand pool */ + chunk_size = roundup(size, PAGE_SIZE); + chunk = vmalloc(chunk_size); + if (unlikely(!chunk)) { + if (unlikely((gfp & __GFP_NOFAIL))) + goto retry_alloc_from_pool; + else + return NULL; + } + if (unlikely(!tag_chunk(chunk))) + goto free; + + /* Locking is already done inside gen_pool_add */ + add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size, + NUMA_NO_NODE); + if (unlikely(add_error)) + goto abort; + + retval = gen_pool_alloc(pool, size); + if (retval) { +return_allocation: + *(size_t *)retval = size; + if (gfp & __GFP_ZERO) + memset((void *)retval, 0, size); + return (void *)retval; + } + /* Here there is no test for __GFP_NOFAIL because, in case of + * concurrent allocation, one thread might add a chunk to the + * pool and this memory could be allocated by another thread, + * before the first thread gets a chance to use it. + * As long as vmalloc succeeds, it's ok to retry.
+ */ + goto retry_alloc_from_pool; +abort: + untag_chunk(chunk); +free: + vfree_atomic(chunk); + return NULL; +} + +static void pmalloc_chunk_set_protection(struct gen_pool *pool, + struct gen_pool_chunk *chunk, + void *data) +{ + const bool *flag = data; + size_t chunk_size = chunk->end_addr + 1 - chunk->start_addr; + unsigned long pages = chunk_size / PAGE_SIZE; + + BUG_ON(chunk_size & (PAGE_SIZE - 1)); + + if (*flag) + set_memory_ro(chunk->start_addr, pages); + else + set_memory_rw(chunk->start_addr, pages); +} + +static int pmalloc_pool_set_protection(struct gen_pool *pool, bool protection) +{ + struct pmalloc_data *data; + struct gen_pool_chunk *chunk; + + if (unlikely(!pool)) + return -EINVAL; + + data = pool->data; + + if (unlikely(!data)) + return -EINVAL; + + if (unlikely(data->protected == protection)) { + WARN_ON(1); + return 0; + } + + data->protected = protection; + list_for_each_entry(chunk, &pool->chunks, next_chunk) + pmalloc_chunk_set_protection(pool, chunk, &protection); + return 0; +} + +int pmalloc_protect_pool(struct gen_pool *pool) +{ + return pmalloc_pool_set_protection(pool, true); +} + + +static void pmalloc_chunk_free(struct gen_pool *pool, + struct gen_pool_chunk *chunk, void *data) +{ + untag_chunk((void *)chunk->start_addr); + gen_pool_flush_chunk(pool, chunk); + vfree_atomic((void *)chunk->start_addr); +} + + +int pmalloc_destroy_pool(struct gen_pool *pool) +{ + struct pmalloc_data *data; + + if (unlikely(pool == NULL)) + return -EINVAL; + + data = pool->data; + + if (unlikely(data == NULL)) + return -EINVAL; + + mutex_lock(&pmalloc_mutex); + list_del(&data->node); + mutex_unlock(&pmalloc_mutex); + + if (likely(data->pool_kobject)) + pmalloc_disconnect(data, data->pool_kobject); + + pmalloc_pool_set_protection(pool, false); + gen_pool_for_each_chunk(pool, pmalloc_chunk_free, NULL); + gen_pool_destroy(pool); + kfree(data); + return 0; +} + +/* + * When the sysfs is ready to receive registrations, connect all the + * pools previously created. Also enable further pools to be connected + * right away.
+ */ +static int __init pmalloc_late_init(void) +{ + struct pmalloc_data *data, *n; + + pmalloc_kobject = kobject_create_and_add("pmalloc", kernel_kobj); + + mutex_lock(&pmalloc_mutex); + pmalloc_list = &pmalloc_final_list; + + if (likely(pmalloc_kobject != NULL)) { + list_for_each_entry_safe(data, n, &pmalloc_tmp_list, node) { + list_move(&data->node, &pmalloc_final_list); + pmalloc_connect(data); + } + } + mutex_unlock(&pmalloc_mutex); + return 0; +} +late_initcall(pmalloc_late_init); diff --git a/mm/usercopy.c b/mm/usercopy.c index e9e9325f7638..946ce051e296 100644 --- a/mm/usercopy.c +++ b/mm/usercopy.c @@ -240,6 +240,36 @@ static inline void check_heap_object(const void *ptr, unsigned long n, } } +#ifdef CONFIG_PROTECTABLE_MEMORY + +int is_pmalloc_object(const void *ptr, const unsigned long n); + +static void check_pmalloc_object(const void *ptr, unsigned long n, + bool to_user) +{ + int retv; + + retv = is_pmalloc_object(ptr, n); + if (unlikely(retv)) { + if (unlikely(!to_user)) + usercopy_abort("pmalloc", + "trying to write to pmalloc object", + to_user, (const unsigned long)ptr, n); + if (retv < 0) + usercopy_abort("pmalloc", + "invalid pmalloc object", + to_user, (const unsigned long)ptr, n); + } +} + +#else + +static void check_pmalloc_object(const void *ptr, unsigned long n, + bool to_user) +{ +} +#endif + /* * Validates that the given object is: * - not bogus address @@ -277,5 +307,8 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user) /* Check for object in kernel to avoid text exposure. */ check_kernel_text_object((const unsigned long)ptr, n, to_user); + + /* Check if object is from a pmalloc chunk. */ + check_pmalloc_object(ptr, n, to_user); } EXPORT_SYMBOL(__check_object_size); -- 2.14.1 ^ permalink raw reply related [flat|nested] 84+ messages in thread
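[Editor's note] The usercopy hunk above is the enforcement half of the design: with CONFIG_HARDENED_USERCOPY enabled, reads from pmalloc memory to userspace remain permitted, while any copy_from_user() into a pmalloc area is rejected with usercopy_abort() before the write can land. A brief sketch of what this means for a driver follows; the fw_* names and the ioctl command numbers are invented for illustration.

#include <linux/fs.h>
#include <linux/uaccess.h>
#include <linux/pmalloc.h>

/* Hypothetical command numbers, for illustration only. */
#define FW_GET_CONFIG 0
#define FW_SET_CONFIG 1

struct fw_config {
	unsigned long features;
	char *version;
};

static struct fw_config *cfg;	/* allocated from a pmalloc pool */

static long fw_example_ioctl(struct file *f, unsigned int cmd,
			     unsigned long arg)
{
	void __user *uptr = (void __user *)arg;

	switch (cmd) {
	case FW_GET_CONFIG:
		/*
		 * to_user == true: is_pmalloc_object() returns 1 and the
		 * hardened check passes, as long as the copy does not
		 * spill past the tagged vmalloc area.
		 */
		if (copy_to_user(uptr, cfg, sizeof(*cfg)))
			return -EFAULT;
		return 0;
	case FW_SET_CONFIG:
		/*
		 * to_user == false: check_pmalloc_object() aborts for any
		 * pmalloc-tagged target, whether or not the pool has been
		 * protected yet, so this path would never be reached.
		 */
		if (copy_from_user(cfg, uptr, sizeof(*cfg)))
			return -EFAULT;
		return 0;
	}
	return -ENOTTY;
}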
* Re: [PATCH 4/6] Protectable Memory 2018-02-11 3:19 ` Igor Stoppa (?) @ 2018-02-11 12:37 ` Mike Rapoport -1 siblings, 0 replies; 84+ messages in thread From: Mike Rapoport @ 2018-02-11 12:37 UTC (permalink / raw) To: Igor Stoppa Cc: willy, rdunlap, corbet, keescook, mhocko, labbott, jglisse, hch, cl, linux-security-module, linux-mm, linux-kernel, kernel-hardening On Sun, Feb 11, 2018 at 05:19:18AM +0200, Igor Stoppa wrote: > The MMU available in many systems running Linux can often provide R/O > protection to the memory pages it handles. > > However, the MMU-based protection works efficiently only when said pages > contain exclusively data that will not need further modifications. > > Statically allocated variables can be segregated into a dedicated > section, but this does not sit very well with dynamically allocated > ones. > > Dynamic allocation does not provide, currently, any means for grouping > variables in memory pages that would contain exclusively data suitable > for conversion to read only access mode. > > The allocator here provided (pmalloc - protectable memory allocator) > introduces the concept of pools of protectable memory. > > A module can request a pool and then refer any allocation request to the > pool handler it has received. > > Once all the chunks of memory associated to a specific pool are > initialized, the pool can be protected. > > After this point, the pool can only be destroyed (it is up to the module > to avoid any further references to the memory from the pool, after > the destruction is invoked). > > The latter case is mainly meant for releasing memory, when a module is > unloaded. > > A module can have as many pools as needed, for example to support the > protection of data that is initialized in sufficiently distinct phases. > > Signed-off-by: Igor Stoppa <igor.stoppa@huawei.com> > --- > include/linux/genalloc.h | 3 + > include/linux/pmalloc.h | 222 +++++++++++++++++++++ > include/linux/vmalloc.h | 1 + > lib/genalloc.c | 27 +++ > mm/Kconfig | 6 + > mm/Makefile | 1 + > mm/pmalloc.c | 497 +++++++++++++++++++++++++++++++++++++++++++++++ > mm/usercopy.c | 33 ++++ > 8 files changed, 790 insertions(+) > create mode 100644 include/linux/pmalloc.h > create mode 100644 mm/pmalloc.c > > diff --git a/include/linux/genalloc.h b/include/linux/genalloc.h > index dcaa33e74b1c..b6c4cea9fbd8 100644 > --- a/include/linux/genalloc.h > +++ b/include/linux/genalloc.h > @@ -121,6 +121,9 @@ extern unsigned long gen_pool_alloc_algo(struct gen_pool *, size_t, > extern void *gen_pool_dma_alloc(struct gen_pool *pool, size_t size, > dma_addr_t *dma); > extern void gen_pool_free(struct gen_pool *, unsigned long, size_t); > + > +extern void gen_pool_flush_chunk(struct gen_pool *pool, > + struct gen_pool_chunk *chunk); > extern void gen_pool_for_each_chunk(struct gen_pool *, > void (*)(struct gen_pool *, struct gen_pool_chunk *, void *), void *); > extern size_t gen_pool_avail(struct gen_pool *); > diff --git a/include/linux/pmalloc.h b/include/linux/pmalloc.h > new file mode 100644 > index 000000000000..624379a937c5 > --- /dev/null > +++ b/include/linux/pmalloc.h > @@ -0,0 +1,222 @@ > +/* SPDX-License-Identifier: GPL-2.0 > + * > + * pmalloc.h: Header for Protectable Memory Allocator > + * > + * (C) Copyright 2017 Huawei Technologies Co. Ltd. 
> + * Author: Igor Stoppa <igor.stoppa@huawei.com> > + */ > + > +#ifndef _LINUX_PMALLOC_H > +#define _LINUX_PMALLOC_H > + > + > +#include <linux/genalloc.h> > +#include <linux/string.h> > + > +#define PMALLOC_DEFAULT_ALLOC_ORDER (-1) > + > +/* > + * Library for dynamic allocation of pools of memory that can be, > + * after initialization, marked as read-only. > + * > + * This is intended to complement __read_only_after_init, for those cases > + * where either it is not possible to know the initialization value before > + * init is completed, or the amount of data is variable and can be > + * determined only at run-time. > + * > + * ***WARNING*** > + * The user of the API is expected to synchronize: > + * 1) allocation, > + * 2) writes to the allocated memory, > + * 3) write protection of the pool, > + * 4) freeing of the allocated memory, and > + * 5) destruction of the pool. > + * > + * For a non-threaded scenario, this type of locking is not even required. > + * > + * Even if the library were to provide support for locking, point 2) > + * would still depend on the user taking the lock. > + */ > + > + > +/** > + * pmalloc_create_pool - create a new protectable memory pool > + * @name: the name of the pool, enforced to be unique > + * @min_alloc_order: log2 of the minimum allocation size obtainable > + * from the pool > + * > + * Creates a new (empty) memory pool for allocation of protectable > + * memory. Memory will be allocated upon request (through pmalloc). > + * > + * Return: pointer to the new pool upon success, otherwise a NULL. > + */ > +struct gen_pool *pmalloc_create_pool(const char *name, > + int min_alloc_order); > + > +/** > + * is_pmalloc_object - validates the existence of an alleged object > + * @ptr: address of the object > + * @n: size of the object, in bytes > + * > + * Return: 0 if the object does not belong to pmalloc, 1 if it belongs to > + * pmalloc, -1 if it partially overlaps pmalloc meory, but incorectly. typo: ^ memory > + */ > +int is_pmalloc_object(const void *ptr, const unsigned long n); > + > +/** > + * pmalloc_prealloc - tries to allocate a memory chunk of the requested size > + * @pool: handle to the pool to be used for memory allocation > + * @size: amount of memory (in bytes) requested > + * > + * Prepares a chunk of the requested size. > + * This is intended to both minimize latency in later memory requests and > + * avoid sleeping during allocation. > + * Memory allocated with prealloc is stored in one single chunk, as > + * opposed to what is allocated on-demand when pmalloc runs out of free > + * space already existing in the pool and has to invoke vmalloc. > + * One additional advantage of pre-allocating larger chunks of memory is > + * that the total slack tends to be smaller. > + * > + * Return: true if the vmalloc call was successful, false otherwise. > + */ > +bool pmalloc_prealloc(struct gen_pool *pool, size_t size); > + > +/** > + * pmalloc - allocate protectable memory from a pool > + * @pool: handle to the pool to be used for memory allocation > + * @size: amount of memory (in bytes) requested > + * @gfp: flags for page allocation > + * > + * Allocates memory from an unprotected pool. If the pool doesn't have > + * enough memory, and the request did not include GFP_ATOMIC, an attempt > + * is made to add a new chunk of memory to the pool > + * (a multiple of PAGE_SIZE), in order to fit the new request. > + * Otherwise, NULL is returned. 
> + * > + * Return: pointer to the memory requested upon success, NULL otherwise > + * (either no memory available or pool already read-only). > + */ > +void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp); > + > + > +/** > + * pzalloc - zero-initialized version of pmalloc > + * @pool: handle to the pool to be used for memory allocation > + * @size: amount of memory (in bytes) requested > + * @gfp: flags for page allocation > + * > + * Executes pmalloc, initializing the memory requested to 0, > + * before returning the pointer to it. > + * > + * Return: pointer to the zeroed memory requested, upon success, NULL > + * otherwise (either no memory available or pool already read-only). > + */ > +static inline void *pzalloc(struct gen_pool *pool, size_t size, gfp_t gfp) > +{ > + return pmalloc(pool, size, gfp | __GFP_ZERO); > +} > + > +/** > + * pmalloc_array - allocates an array according to the parameters > + * @pool: handle to the pool to be used for memory allocation > + * @n: number of elements in the array > + * @size: amount of memory (in bytes) requested for each element > + * @flags: flags for page allocation > + * > + * Executes pmalloc, if it has a chance to succeed. > + * > + * Return: either NULL or the pmalloc result. > + */ > +static inline void *pmalloc_array(struct gen_pool *pool, size_t n, > + size_t size, gfp_t flags) > +{ > + if (unlikely(!(pool && n && size))) > + return NULL; > + return pmalloc(pool, n * size, flags); > +} > + > +/** > + * pcalloc - allocates a 0-initialized array according to the parameters > + * @pool: handle to the pool to be used for memory allocation > + * @n: number of elements in the array > + * @size: amount of memory (in bytes) requested > + * @flags: flags for page allocation > + * > + * Executes pmalloc_array, if it has a chance to succeed. > + * > + * Return: either NULL or the pmalloc result. > + */ > +static inline void *pcalloc(struct gen_pool *pool, size_t n, > + size_t size, gfp_t flags) > +{ > + return pmalloc_array(pool, n, size, flags | __GFP_ZERO); > +} > + > +/** > + * pstrdup - duplicate a string, using pmalloc as allocator > + * @pool: handle to the pool to be used for memory allocation > + * @s: string to duplicate > + * @gfp: flags for page allocation > + * > + * Generates a copy of the given string, allocating sufficient memory > + * from the given pmalloc pool. > + * > + * Return: pointer to the replica, NULL in case of error. > + */ > +static inline char *pstrdup(struct gen_pool *pool, const char *s, gfp_t gfp) > +{ > + size_t len; > + char *buf; > + > + if (unlikely(pool == NULL || s == NULL)) > + return NULL; > + > + len = strlen(s) + 1; > + buf = pmalloc(pool, len, gfp); > + if (likely(buf)) > + strncpy(buf, s, len); > + return buf; > +} > + > +/** > + * pmalloc_protect_pool - turn a read/write pool read-only > + * @pool: the pool to protect > + * > + * Write-protects all the memory chunks assigned to the pool. > + * This prevents any further allocation. > + * > + * Return: 0 upon success, -EINVAL in abnormal cases. > + */ > +int pmalloc_protect_pool(struct gen_pool *pool); > + > +/** > + * pfree - mark as unused memory that was previously in use > + * @pool: handle to the pool to be used for memory allocation > + * @addr: the beginning of the memory area to be freed > + * > + * The behavior of pfree is different, depending on the state of the > + * protection. > + * If the pool is not yet protected, the memory is marked as unused and > + * will be available for further allocations. 
> + * If the pool is already protected, the memory is marked as unused, but > + * it will still be impossible to perform further allocation, because of > + * the existing protection. > + * The freed memory, in this case, will be truly released only when the > + * pool is destroyed. > + */ > +static inline void pfree(struct gen_pool *pool, const void *addr) > +{ > + gen_pool_free(pool, (unsigned long)addr, 0); > +} > + > +/** > + * pmalloc_destroy_pool - destroys a pool and all the associated memory > + * @pool: the pool to destroy > + * > + * All the memory that was allocated through pmalloc in the pool will be freed. > + * > + * Returns 0 upon success, -EINVAL in abnormal cases. > + */ > +int pmalloc_destroy_pool(struct gen_pool *pool); > + > +#endif > diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h > index 1e5d8c392f15..116d280cca53 100644 > --- a/include/linux/vmalloc.h > +++ b/include/linux/vmalloc.h > @@ -20,6 +20,7 @@ struct notifier_block; /* in notifier.h */ > #define VM_UNINITIALIZED 0x00000020 /* vm_struct is not fully initialized */ > #define VM_NO_GUARD 0x00000040 /* don't add guard page */ > #define VM_KASAN 0x00000080 /* has allocated kasan shadow memory */ > +#define VM_PMALLOC 0x00000100 /* pmalloc area - see docs */ > /* bits [20..32] reserved for arch specific ioremap internals */ > > /* > diff --git a/lib/genalloc.c b/lib/genalloc.c > index 044347163acb..e40a5db89439 100644 > --- a/lib/genalloc.c > +++ b/lib/genalloc.c > @@ -598,6 +598,33 @@ void gen_pool_free(struct gen_pool *pool, unsigned long addr, size_t size) > } > EXPORT_SYMBOL(gen_pool_free); > > + > +/** > + * gen_pool_flush_chunk - drops all the allocations from a specific chunk > + * @pool: the generic memory pool > + * @chunk: The chunk to wipe clear. > + * > + * This is meant to be called only while destroying a pool. It's up to the > + * caller to avoid races, but really, at this point the pool should have > + * already been retired and have become unavailable for any other sort of > + * operation. > + */ > +void gen_pool_flush_chunk(struct gen_pool *pool, > + struct gen_pool_chunk *chunk) > +{ > + size_t size; > + > + if (unlikely(!(pool && chunk))) > + return; > + > + size = chunk->end_addr + 1 - chunk->start_addr; > + memset(chunk->entries, 0, > + DIV_ROUND_UP(size >> pool->min_alloc_order * BITS_PER_ENTRY, > + BITS_PER_BYTE)); > + atomic_long_set(&chunk->avail, size); > +} > + > + > /** > * gen_pool_for_each_chunk - call func for every chunk of generic memory pool > * @pool: the generic memory pool > diff --git a/mm/Kconfig b/mm/Kconfig > index c782e8fb7235..be578fbdce6d 100644 > --- a/mm/Kconfig > +++ b/mm/Kconfig > @@ -760,3 +760,9 @@ config GUP_BENCHMARK > performance of get_user_pages_fast(). 
> > See tools/testing/selftests/vm/gup_benchmark.c > + > +config PROTECTABLE_MEMORY > + bool > + depends on ARCH_HAS_SET_MEMORY > + select GENERIC_ALLOCATOR > + default y > diff --git a/mm/Makefile b/mm/Makefile > index e669f02c5a54..959fdbdac118 100644 > --- a/mm/Makefile > +++ b/mm/Makefile > @@ -65,6 +65,7 @@ obj-$(CONFIG_SPARSEMEM) += sparse.o > obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o > obj-$(CONFIG_SLOB) += slob.o > obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o > +obj-$(CONFIG_PROTECTABLE_MEMORY) += pmalloc.o > obj-$(CONFIG_KSM) += ksm.o > obj-$(CONFIG_PAGE_POISONING) += page_poison.o > obj-$(CONFIG_SLAB) += slab.o > diff --git a/mm/pmalloc.c b/mm/pmalloc.c > new file mode 100644 > index 000000000000..e94bfb407c92 > --- /dev/null > +++ b/mm/pmalloc.c > @@ -0,0 +1,497 @@ > +/* SPDX-License-Identifier: GPL-2.0 > + * > + * pmalloc.c: Protectable Memory Allocator > + * > + * (C) Copyright 2017 Huawei Technologies Co. Ltd. > + * Author: Igor Stoppa <igor.stoppa@huawei.com> > + */ > + > +#include <linux/printk.h> > +#include <linux/init.h> > +#include <linux/mm.h> > +#include <linux/vmalloc.h> > +#include <linux/genalloc.h> > +#include <linux/kernel.h> > +#include <linux/log2.h> > +#include <linux/slab.h> > +#include <linux/device.h> > +#include <linux/atomic.h> > +#include <linux/rculist.h> > +#include <linux/set_memory.h> > +#include <asm/cacheflush.h> > +#include <asm/page.h> > + > +#include <linux/pmalloc.h> > +/* > + * pmalloc_data contains the data specific to a pmalloc pool, > + * in a format compatible with the design of gen_alloc. > + * Some of the fields are used for exposing the corresponding parameter > + * to userspace, through sysfs. > + */ > +struct pmalloc_data { > + struct gen_pool *pool; /* Link back to the associated pool. */ > + bool protected; /* Status of the pool: RO or RW. */ > + struct kobj_attribute attr_protected; /* Sysfs attribute. */ > + struct kobj_attribute attr_avail; /* Sysfs attribute. */ > + struct kobj_attribute attr_size; /* Sysfs attribute. */ > + struct kobj_attribute attr_chunks; /* Sysfs attribute. 
*/ > + struct kobject *pool_kobject; > + struct list_head node; /* list of pools */ > +}; > + > +static LIST_HEAD(pmalloc_final_list); > +static LIST_HEAD(pmalloc_tmp_list); > +static struct list_head *pmalloc_list = &pmalloc_tmp_list; > +static DEFINE_MUTEX(pmalloc_mutex); > +static struct kobject *pmalloc_kobject; > + > +static ssize_t pmalloc_pool_show_protected(struct kobject *dev, > + struct kobj_attribute *attr, > + char *buf) > +{ > + struct pmalloc_data *data; > + > + data = container_of(attr, struct pmalloc_data, attr_protected); > + if (data->protected) > + return sprintf(buf, "protected\n"); > + else > + return sprintf(buf, "unprotected\n"); > +} > + > +static ssize_t pmalloc_pool_show_avail(struct kobject *dev, > + struct kobj_attribute *attr, > + char *buf) > +{ > + struct pmalloc_data *data; > + > + data = container_of(attr, struct pmalloc_data, attr_avail); > + return sprintf(buf, "%lu\n", > + (unsigned long)gen_pool_avail(data->pool)); > +} > + > +static ssize_t pmalloc_pool_show_size(struct kobject *dev, > + struct kobj_attribute *attr, > + char *buf) > +{ > + struct pmalloc_data *data; > + > + data = container_of(attr, struct pmalloc_data, attr_size); > + return sprintf(buf, "%lu\n", > + (unsigned long)gen_pool_size(data->pool)); > +} > + > +static void pool_chunk_number(struct gen_pool *pool, > + struct gen_pool_chunk *chunk, void *data) > +{ > + unsigned long *counter = data; > + > + (*counter)++; > +} > + > +static ssize_t pmalloc_pool_show_chunks(struct kobject *dev, > + struct kobj_attribute *attr, > + char *buf) > +{ > + struct pmalloc_data *data; > + unsigned long chunks_num = 0; > + > + data = container_of(attr, struct pmalloc_data, attr_chunks); > + gen_pool_for_each_chunk(data->pool, pool_chunk_number, &chunks_num); > + return sprintf(buf, "%lu\n", chunks_num); > +} > + > +/* Exposes the pool and its attributes through sysfs. */ > +static struct kobject *pmalloc_connect(struct pmalloc_data *data) > +{ > + const struct attribute *attrs[] = { > + &data->attr_protected.attr, > + &data->attr_avail.attr, > + &data->attr_size.attr, > + &data->attr_chunks.attr, > + NULL > + }; > + struct kobject *kobj; > + > + kobj = kobject_create_and_add(data->pool->name, pmalloc_kobject); > + if (unlikely(!kobj)) > + return NULL; > + > + if (unlikely(sysfs_create_files(kobj, attrs) < 0)) { > + kobject_put(kobj); > + kobj = NULL; > + } > + return kobj; > +} > + > +/* Removes the pool and its attributes from sysfs. */ > +static void pmalloc_disconnect(struct pmalloc_data *data, > + struct kobject *kobj) > +{ > + const struct attribute *attrs[] = { > + &data->attr_protected.attr, > + &data->attr_avail.attr, > + &data->attr_size.attr, > + &data->attr_chunks.attr, > + NULL > + }; > + > + sysfs_remove_files(kobj, attrs); > + kobject_put(kobj); > +} > + > +/* Declares an attribute of the pool. 
*/ > +#define pmalloc_attr_init(data, attr_name) \ > +do { \ > + sysfs_attr_init(&data->attr_##attr_name.attr); \ > + data->attr_##attr_name.attr.name = #attr_name; \ > + data->attr_##attr_name.attr.mode = VERIFY_OCTAL_PERMISSIONS(0400); \ > + data->attr_##attr_name.show = pmalloc_pool_show_##attr_name; \ > +} while (0) > + > +struct gen_pool *pmalloc_create_pool(const char *name, int min_alloc_order) > +{ > + struct gen_pool *pool; > + const char *pool_name; > + struct pmalloc_data *data; > + > + if (!name) { > + WARN_ON(1); > + return NULL; > + } > + > + if (min_alloc_order < 0) > + min_alloc_order = ilog2(sizeof(unsigned long)); > + > + pool = gen_pool_create(min_alloc_order, NUMA_NO_NODE); > + if (unlikely(!pool)) > + return NULL; > + > + mutex_lock(&pmalloc_mutex); > + list_for_each_entry(data, pmalloc_list, node) > + if (!strcmp(name, data->pool->name)) > + goto same_name_err; > + > + pool_name = kstrdup(name, GFP_KERNEL); > + if (unlikely(!pool_name)) > + goto name_alloc_err; > + > + data = kzalloc(sizeof(struct pmalloc_data), GFP_KERNEL); > + if (unlikely(!data)) > + goto data_alloc_err; > + > + data->protected = false; > + data->pool = pool; > + pmalloc_attr_init(data, protected); > + pmalloc_attr_init(data, avail); > + pmalloc_attr_init(data, size); > + pmalloc_attr_init(data, chunks); > + pool->data = data; > + pool->name = pool_name; > + > + list_add(&data->node, pmalloc_list); > + if (pmalloc_list == &pmalloc_final_list) > + data->pool_kobject = pmalloc_connect(data); > + mutex_unlock(&pmalloc_mutex); > + return pool; > + > +data_alloc_err: > + kfree(pool_name); > +name_alloc_err: > +same_name_err: > + mutex_unlock(&pmalloc_mutex); > + gen_pool_destroy(pool); > + return NULL; > +} > + > +static inline int check_alloc_params(struct gen_pool *pool, size_t req_size) > +{ > + struct pmalloc_data *data; > + > + if (unlikely(!req_size || !pool)) > + return -1; > + > + data = pool->data; > + > + if (data == NULL) > + return -1; > + > + if (unlikely(data->protected)) { > + WARN_ON(1); > + return -1; > + } > + return 0; > +} > + > + > +static inline bool chunk_tagging(void *chunk, bool tag) > +{ > + struct vm_struct *area; > + struct page *page; > + > + if (!is_vmalloc_addr(chunk)) > + return false; > + > + page = vmalloc_to_page(chunk); > + if (unlikely(!page)) > + return false; > + > + area = page->area; > + if (tag) > + area->flags |= VM_PMALLOC; > + else > + area->flags &= ~VM_PMALLOC; > + return true; > +} > + > + > +static inline bool tag_chunk(void *chunk) > +{ > + return chunk_tagging(chunk, true); > +} > + > + > +static inline bool untag_chunk(void *chunk) > +{ > + return chunk_tagging(chunk, false); > +} > + > +enum { > + INVALID_PMALLOC_OBJECT = -1, > + NOT_PMALLOC_OBJECT = 0, > + VALID_PMALLOC_OBJECT = 1, > +}; > + > +int is_pmalloc_object(const void *ptr, const unsigned long n) > +{ > + struct vm_struct *area; > + struct page *page; > + unsigned long area_start; > + unsigned long area_end; > + unsigned long object_start; > + unsigned long object_end; > + > + > + /* is_pmalloc_object gets called pretty late, so chances are high > + * that the object is indeed of vmalloc type > + */ > + if (unlikely(!is_vmalloc_addr(ptr))) > + return NOT_PMALLOC_OBJECT; > + > + page = vmalloc_to_page(ptr); > + if (unlikely(!page)) > + return NOT_PMALLOC_OBJECT; > + > + area = page->area; > + > + if (likely(!(area->flags & VM_PMALLOC))) > + return NOT_PMALLOC_OBJECT; > + > + area_start = (unsigned long)area->addr; > + area_end = area_start + area->nr_pages * PAGE_SIZE - 1; > + object_start = 
(unsigned long)ptr; > + object_end = object_start + n - 1; > + > + if (likely((area_start <= object_start) && > + (object_end <= area_end))) > + return VALID_PMALLOC_OBJECT; > + else > + return INVALID_PMALLOC_OBJECT; > +} > + > + > +bool pmalloc_prealloc(struct gen_pool *pool, size_t size) > +{ > + void *chunk; > + size_t chunk_size; > + bool add_error; > + > + if (check_alloc_params(pool, size)) > + return false; > + > + /* Expand pool */ > + chunk_size = roundup(size, PAGE_SIZE); > + chunk = vmalloc(chunk_size); > + if (unlikely(chunk == NULL)) > + return false; > + > + /* Locking is already done inside gen_pool_add */ > + add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size, > + NUMA_NO_NODE); > + if (unlikely(add_error != 0)) > + goto abort; > + > + return true; > +abort: > + vfree_atomic(chunk); > + return false; > + > +} > + > +void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp) > +{ > + void *chunk; > + size_t chunk_size; > + bool add_error; > + unsigned long retval; > + > + if (check_alloc_params(pool, size)) > + return NULL; > + > +retry_alloc_from_pool: > + retval = gen_pool_alloc(pool, size); > + if (retval) > + goto return_allocation; > + > + if (unlikely((gfp & __GFP_ATOMIC))) { > + if (unlikely((gfp & __GFP_NOFAIL))) > + goto retry_alloc_from_pool; > + else > + return NULL; > + } > + > + /* Expand pool */ > + chunk_size = roundup(size, PAGE_SIZE); > + chunk = vmalloc(chunk_size); > + if (unlikely(!chunk)) { > + if (unlikely((gfp & __GFP_NOFAIL))) > + goto retry_alloc_from_pool; > + else > + return NULL; > + } > + if (unlikely(!tag_chunk(chunk))) > + goto free; > + > + /* Locking is already done inside gen_pool_add */ > + add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size, > + NUMA_NO_NODE); > + if (unlikely(add_error)) > + goto abort; > + > + retval = gen_pool_alloc(pool, size); > + if (retval) { > +return_allocation: > + *(size_t *)retval = size; > + if (gfp & __GFP_ZERO) > + memset((void *)retval, 0, size); > + return (void *)retval; > + } > + /* Here there is no test for __GFP_NO_FAIL because, in case of > + * concurrent allocation, one thread might add a chunk to the > + * pool and this memory could be allocated by another thread, > + * before the first thread gets a chance to use it. > + * As long as vmalloc succeeds, it's ok to retry. 
> + */ > + goto retry_alloc_from_pool; > +abort: > + untag_chunk(chunk); > +free: > + vfree_atomic(chunk); > + return NULL; > +} > + > +static void pmalloc_chunk_set_protection(struct gen_pool *pool, > + > + struct gen_pool_chunk *chunk, > + void *data) > +{ > + const bool *flag = data; > + size_t chunk_size = chunk->end_addr + 1 - chunk->start_addr; > + unsigned long pages = chunk_size / PAGE_SIZE; > + > + BUG_ON(chunk_size & (PAGE_SIZE - 1)); > + > + if (*flag) > + set_memory_ro(chunk->start_addr, pages); > + else > + set_memory_rw(chunk->start_addr, pages); > +} > + > +static int pmalloc_pool_set_protection(struct gen_pool *pool, bool protection) > +{ > + struct pmalloc_data *data; > + struct gen_pool_chunk *chunk; > + > + if (unlikely(!pool)) > + return -EINVAL; > + > + data = pool->data; > + > + if (unlikely(!data)) > + return -EINVAL; > + > + if (unlikely(data->protected == protection)) { > + WARN_ON(1); > + return 0; > + } > + > + data->protected = protection; > + list_for_each_entry(chunk, &(pool)->chunks, next_chunk) > + pmalloc_chunk_set_protection(pool, chunk, &protection); > + return 0; > +} > + > +int pmalloc_protect_pool(struct gen_pool *pool) > +{ > + return pmalloc_pool_set_protection(pool, true); > +} > + > + > +static void pmalloc_chunk_free(struct gen_pool *pool, > + struct gen_pool_chunk *chunk, void *data) > +{ > + untag_chunk(chunk); > + gen_pool_flush_chunk(pool, chunk); > + vfree_atomic((void *)chunk->start_addr); > +} > + > + > +int pmalloc_destroy_pool(struct gen_pool *pool) > +{ > + struct pmalloc_data *data; > + > + if (unlikely(pool == NULL)) > + return -EINVAL; > + > + data = pool->data; > + > + if (unlikely(data == NULL)) > + return -EINVAL; > + > + mutex_lock(&pmalloc_mutex); > + list_del(&data->node); > + mutex_unlock(&pmalloc_mutex); > + > + if (likely(data->pool_kobject)) > + pmalloc_disconnect(data, data->pool_kobject); > + > + pmalloc_pool_set_protection(pool, false); > + gen_pool_for_each_chunk(pool, pmalloc_chunk_free, NULL); > + gen_pool_destroy(pool); > + kfree(data); > + return 0; > +} > + > +/** > + * When the sysfs is ready to receive registrations, connect all the > + * pools previously created. Also enable further pools to be connected > + * right away. > + */ This does not seem as kernel-doc comment. Please either remove the second * from the opening comment mark or reformat the comment. 
> +static int __init pmalloc_late_init(void) > +{ > + struct pmalloc_data *data, *n; > + > + pmalloc_kobject = kobject_create_and_add("pmalloc", kernel_kobj); > + > + mutex_lock(&pmalloc_mutex); > + pmalloc_list = &pmalloc_final_list; > + > + if (likely(pmalloc_kobject != NULL)) { > + list_for_each_entry_safe(data, n, &pmalloc_tmp_list, node) { > + list_move(&data->node, &pmalloc_final_list); > + pmalloc_connect(data); > + } > + } > + mutex_unlock(&pmalloc_mutex); > + return 0; > +} > +late_initcall(pmalloc_late_init); > diff --git a/mm/usercopy.c b/mm/usercopy.c > index e9e9325f7638..946ce051e296 100644 > --- a/mm/usercopy.c > +++ b/mm/usercopy.c > @@ -240,6 +240,36 @@ static inline void check_heap_object(const void *ptr, unsigned long n, > } > } > > +#ifdef CONFIG_PROTECTABLE_MEMORY > + > +int is_pmalloc_object(const void *ptr, const unsigned long n); > + > +static void check_pmalloc_object(const void *ptr, unsigned long n, > + bool to_user) > +{ > + int retv; > + > + retv = is_pmalloc_object(ptr, n); > + if (unlikely(retv)) { > + if (unlikely(!to_user)) > + usercopy_abort("pmalloc", > + "trying to write to pmalloc object", > + to_user, (const unsigned long)ptr, n); > + if (retv < 0) > + usercopy_abort("pmalloc", > + "invalid pmalloc object", > + to_user, (const unsigned long)ptr, n); > + } > +} > + > +#else > + > +static void check_pmalloc_object(const void *ptr, unsigned long n, > + bool to_user) > +{ > +} > +#endif > + > /* > * Validates that the given object is: > * - not bogus address > @@ -277,5 +307,8 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user) > > /* Check for object in kernel to avoid text exposure. */ > check_kernel_text_object((const unsigned long)ptr, n, to_user); > + > + /* Check if object is from a pmalloc chunk. */ > + check_pmalloc_object(ptr, n, to_user); > } > EXPORT_SYMBOL(__check_object_size); > -- > 2.14.1 > > -- > To unsubscribe, send a message with 'unsubscribe linux-mm' in > the body to majordomo@kvack.org. For more info on Linux MM, > see: http://www.linux-mm.org/ . > Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a> > -- Sincerely yours, Mike. ^ permalink raw reply [flat|nested] 84+ messages in thread
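[Editor's note] One point the review leaves with the caller is the ***WARNING*** in the header: pmalloc itself performs no serialization between allocation, writes, protection and destruction. Below is a sketch of caller-side locking for a pool shared between contexts, assuming the API of this patch; all acl_* names are invented for illustration, and the rule array itself could equally live in the pool.

#include <linux/kernel.h>
#include <linux/mutex.h>
#include <linux/pmalloc.h>

static DEFINE_MUTEX(acl_lock);
static struct gen_pool *acl_pool;
static bool acl_sealed;
static const char *acl_rules[16];
static unsigned int acl_nr;

int acl_add_rule(const char *rule)
{
	int ret = 0;

	mutex_lock(&acl_lock);
	if (acl_sealed) {
		ret = -EPERM;		/* pool already read-only */
	} else if (acl_nr == ARRAY_SIZE(acl_rules)) {
		ret = -ENOSPC;
	} else {
		/* 1) allocation and 2) writes both happen under the lock. */
		const char *copy = pstrdup(acl_pool, rule, GFP_KERNEL);

		if (copy)
			acl_rules[acl_nr++] = copy;
		else
			ret = -ENOMEM;
	}
	mutex_unlock(&acl_lock);
	return ret;
}

void acl_seal(void)
{
	mutex_lock(&acl_lock);
	if (!acl_sealed) {
		acl_sealed = true;
		/* 3) protection is serialized against late writers. */
		pmalloc_protect_pool(acl_pool);
	}
	mutex_unlock(&acl_lock);
}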
* Re: [PATCH 4/6] Protectable Memory 2018-02-11 12:37 ` Mike Rapoport @ 2018-02-12 11:26 ` Igor Stoppa 0 siblings, 0 replies; 84+ messages in thread From: Igor Stoppa @ 2018-02-12 11:26 UTC (permalink / raw) To: Mike Rapoport Cc: willy, rdunlap, corbet, keescook, mhocko, labbott, jglisse, hch, cl, linux-security-module, linux-mm, linux-kernel, kernel-hardening On 11/02/18 14:37, Mike Rapoport wrote: > On Sun, Feb 11, 2018 at 05:19:18AM +0200, Igor Stoppa wrote: >> + * Return: 0 if the object does not belong to pmalloc, 1 if it belongs to >> + * pmalloc, -1 if it partially overlaps pmalloc meory, but incorectly. > > typo: ^ memory thanks :-( [...] >> +/** >> + * When the sysfs is ready to receive registrations, connect all the >> + * pools previously created. Also enable further pools to be connected >> + * right away. >> + */ > > This does not seem as kernel-doc comment. Please either remove the second * > from the opening comment mark or reformat the comment. For this too, I thought I had caught them all, but I was wrong ... I didn't find any mention of automated checking for comments. Is there such a tool? -- thanks, igor ^ permalink raw reply [flat|nested] 84+ messages in thread
* Re: [PATCH 4/6] Protectable Memory 2018-02-12 11:26 ` Igor Stoppa @ 2018-02-12 11:43 ` Mike Rapoport 0 siblings, 0 replies; 84+ messages in thread From: Mike Rapoport @ 2018-02-12 11:43 UTC (permalink / raw) To: Igor Stoppa Cc: willy, rdunlap, corbet, keescook, mhocko, labbott, jglisse, hch, cl, linux-security-module, linux-mm, linux-kernel, kernel-hardening On Mon, Feb 12, 2018 at 01:26:28PM +0200, Igor Stoppa wrote: > On 11/02/18 14:37, Mike Rapoport wrote: > > On Sun, Feb 11, 2018 at 05:19:18AM +0200, Igor Stoppa wrote: > > >> + * Return: 0 if the object does not belong to pmalloc, 1 if it belongs to > >> + * pmalloc, -1 if it partially overlaps pmalloc meory, but incorectly. > > > > typo: ^ memory > > thanks :-( > > [...] > > >> +/** > >> + * When the sysfs is ready to receive registrations, connect all the > >> + * pools previously created. Also enable further pools to be connected > >> + * right away. > >> + */ > > > > This does not seem as kernel-doc comment. Please either remove the second * > > from the opening comment mark or reformat the comment. > > For this too, I thought I had caught them all, but I was wrong ... > > I didn't find any mention of automated checking for comments. > Is there such a tool? I don't know if there is a tool. I couldn't find anything in scripts, maybe somebody has such a tool out of tree. For now, I've added mm-api.rst that includes all mm .c files and run 'make htmldocs' which spits plenty of warnings and errors. > -- > thanks, igor > -- Sincerely yours, Mike. ^ permalink raw reply [flat|nested] 84+ messages in thread
* Re: [PATCH 4/6] Protectable Memory 2018-02-12 11:43 ` Mike Rapoport @ 2018-02-12 12:53 ` Mike Rapoport 0 siblings, 0 replies; 84+ messages in thread From: Mike Rapoport @ 2018-02-12 12:53 UTC (permalink / raw) To: Igor Stoppa Cc: willy, rdunlap, corbet, keescook, mhocko, labbott, jglisse, hch, cl, linux-security-module, linux-mm, linux-kernel, kernel-hardening On Mon, Feb 12, 2018 at 01:43:11PM +0200, Mike Rapoport wrote: > On Mon, Feb 12, 2018 at 01:26:28PM +0200, Igor Stoppa wrote: > > On 11/02/18 14:37, Mike Rapoport wrote: > > > On Sun, Feb 11, 2018 at 05:19:18AM +0200, Igor Stoppa wrote: > > > > >> + * Return: 0 if the object does not belong to pmalloc, 1 if it belongs to > > >> + * pmalloc, -1 if it partially overlaps pmalloc meory, but incorectly. > > > > > > typo: ^ memory > > > > thanks :-( > > > > [...] > > > > >> +/** > > >> + * When the sysfs is ready to receive registrations, connect all the > > >> + * pools previously created. Also enable further pools to be connected > > >> + * right away. > > >> + */ > > > > > > This does not seem as kernel-doc comment. Please either remove the second * > > > from the opening comment mark or reformat the comment. > > > > For this too, I thought I had caught them all, but I was wrong ... > > > > I didn't find any mention of automated checking for comments. > > Is there such a tool? > > I don't know if there is a tool. I couldn't find anything in scripts, maybe > somebody has such a tool out of tree. > > For now, I've added mm-api.rst that includes all mm .c files and run 'make > htmldocs' which spits plenty of warnings and errors. Actually, you can run 'scripts/kernel-doc -v -none <filename>' to check comments starting with '/**'. I'm afraid it won't catch formatted blocks that start with '/*'. -- Sincerely yours, Mike. ^ permalink raw reply [flat|nested] 84+ messages in thread
* Re: [PATCH 4/6] Protectable Memory 2018-02-12 12:53 ` Mike Rapoport @ 2018-02-12 13:41 ` Igor Stoppa 0 siblings, 0 replies; 84+ messages in thread From: Igor Stoppa @ 2018-02-12 13:41 UTC (permalink / raw) To: Mike Rapoport Cc: willy, rdunlap, corbet, keescook, mhocko, labbott, jglisse, hch, cl, linux-security-module, linux-mm, linux-kernel, kernel-hardening On 12/02/18 14:53, Mike Rapoport wrote: > 'scripts/kernel-doc -v -none That has quite an interesting behavior. I ran it on genalloc.c while I am in the process of adding the brackets to the function names in the kernel-doc description. The brackets confuse the script and it fails to output the name of the function in the log: lib/genalloc.c:123: info: Scanning doc for get_bitmap_entry lib/genalloc.c:139: info: Scanning doc for lib/genalloc.c:152: info: Scanning doc for lib/genalloc.c:164: info: Scanning doc for The first function does not have the brackets. The others do. So what should I do with the missing brackets? Add them, according to the kernel docs, or leave them out? I'd lean toward adding them. -- igor ^ permalink raw reply [flat|nested] 84+ messages in thread
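For illustration, this is the bracketed kernel-doc style under discussion; a minimal sketch, in which sample_func and @arg are placeholder names, not taken from the patch:

/**
 * sample_func() - one-line summary of the function
 * @arg: description of the parameter
 *
 * Return: description of the return value.
 */

Documentation/doc-guide/kernel-doc suggests the bracketed form; as the log above shows, the unpatched script scans such comments but drops the name from its verbose output, while the bracket-less variant ("sample_func - one-line summary") is reported correctly.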
* Re: [PATCH 4/6] Protectable Memory 2018-02-12 13:41 ` Igor Stoppa @ 2018-02-12 15:31 ` Mike Rapoport 0 siblings, 0 replies; 84+ messages in thread From: Mike Rapoport @ 2018-02-12 15:31 UTC (permalink / raw) To: Igor Stoppa Cc: willy, rdunlap, corbet, keescook, mhocko, labbott, jglisse, hch, cl, linux-security-module, linux-mm, linux-kernel, kernel-hardening On Mon, Feb 12, 2018 at 03:41:57PM +0200, Igor Stoppa wrote: > > > On 12/02/18 14:53, Mike Rapoport wrote: > > 'scripts/kernel-doc -v -none > > That has quite an interesting behavior. > > I ran it on genalloc.c while I am in the process of adding the brackets > to the function names in the kernel-doc description. > > The brackets confuse the script and it fails to output the name of the > function in the log: > > lib/genalloc.c:123: info: Scanning doc for get_bitmap_entry > lib/genalloc.c:139: info: Scanning doc for > lib/genalloc.c:152: info: Scanning doc for > lib/genalloc.c:164: info: Scanning doc for > > The first function does not have the brackets. > The others do. So what should I do with the missing brackets? > Add them, according to the kernel docs, or leave them out? Seems that kernel-doc does not consider () as a valid match for the identifier :) Can you please check with the below patch? > I'd lean toward adding them. > > -- > igor -- Sincerely yours, Mike. From 35255bc2d7d2a63be4f78a7bf4eec83ab0dc4f3f Mon Sep 17 00:00:00 2001 From: Mike Rapoport <rppt@linux.vnet.ibm.com> Date: Mon, 12 Feb 2018 17:19:04 +0200 Subject: [PATCH] scripts: kernel_doc: fixup reporting of function identifiers When function description includes brackets after the function name as suggested by Documentation/doc-guide/kernel-doc, the kernel-doc script omits the function name from "Scanning doc for" report. Extending match for identifier name with optional brackets fixes this issue. Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com> --- scripts/kernel-doc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/scripts/kernel-doc b/scripts/kernel-doc index fee8952037b1..a6a9a8ef116c 100755 --- a/scripts/kernel-doc +++ b/scripts/kernel-doc @@ -1873,7 +1873,7 @@ sub process_file($) { } elsif (/$doc_decl/o) { $identifier = $1; - if (/\s*([\w\s]+?)\s*-/) { + if (/\s*([\w\s]+?)(\(\))?\s*-/) { $identifier = $1; } -- 2.7.4 ^ permalink raw reply related [flat|nested] 84+ messages in thread
* Re: [PATCH 4/6] Protectable Memory 2018-02-12 15:31 ` Mike Rapoport @ 2018-02-12 15:41 ` Igor Stoppa 0 siblings, 0 replies; 84+ messages in thread From: Igor Stoppa @ 2018-02-12 15:41 UTC (permalink / raw) To: Mike Rapoport Cc: willy, rdunlap, corbet, keescook, mhocko, labbott, jglisse, hch, cl, linux-security-module, linux-mm, linux-kernel, kernel-hardening On 12/02/18 17:31, Mike Rapoport wrote: [...] > Seems that kernel-doc does not consider () as a valid match for the > identifier :) > > Can you please check with the below patch? yes, it works now, thank you! -- igor ^ permalink raw reply [flat|nested] 84+ messages in thread
* [RFC PATCH v14 0/6] mm: security: ro protection for dynamic data @ 2018-02-04 16:47 Igor Stoppa 2018-02-04 16:47 ` Igor Stoppa 0 siblings, 1 reply; 84+ messages in thread From: Igor Stoppa @ 2018-02-04 16:47 UTC (permalink / raw) To: jglisse, keescook, mhocko, labbott, hch, willy Cc: cl, linux-security-module, linux-mm, linux-kernel, kernel-hardening, Igor Stoppa This patch-set introduces the possibility of protecting memory that has been allocated dynamically. The memory is managed in pools: when a memory pool is turned into R/O, all the memory that is part of it, will become R/O. A R/O pool can be destroyed, to recover its memory, but it cannot be turned back into R/W mode. This is intentional. This feature is meant for data that doesn't need further modifications after initialization. However the data might need to be released, for example as part of module unloading. To do this, the memory must first be freed, then the pool can be destroyed. An example is provided, in the form of self-testing. Changes since v13: [http://www.openwall.com/lists/kernel-hardening/2018/02/03/3] - fixed warnings from "make htmldocs" - added documentation to core-api index Igor Stoppa (6): genalloc: track beginning of allocations genalloc: selftest struct page: add field for vm_struct Protectable Memory Pmalloc: self-test Documentation for Pmalloc Documentation/core-api/index.rst | 1 + Documentation/core-api/pmalloc.rst | 114 ++++++++ include/linux/genalloc-selftest.h | 30 +++ include/linux/genalloc.h | 7 +- include/linux/mm_types.h | 1 + include/linux/pmalloc.h | 213 +++++++++++++++ include/linux/vmalloc.h | 1 + init/main.c | 2 + lib/Kconfig | 15 ++ lib/Makefile | 1 + lib/genalloc-selftest.c | 402 +++++++++++++++++++++++++++++ lib/genalloc.c | 444 ++++++++++++++++++++++---------- mm/Kconfig | 9 + mm/Makefile | 2 + mm/pmalloc-selftest.c | 61 +++++ mm/pmalloc-selftest.h | 26 ++ mm/pmalloc.c | 514 +++++++++++++++++++++++++++++++++++++ mm/usercopy.c | 25 +- mm/vmalloc.c | 18 +- 19 files changed, 1745 insertions(+), 141 deletions(-) create mode 100644 Documentation/core-api/pmalloc.rst create mode 100644 include/linux/genalloc-selftest.h create mode 100644 include/linux/pmalloc.h create mode 100644 lib/genalloc-selftest.c create mode 100644 mm/pmalloc-selftest.c create mode 100644 mm/pmalloc-selftest.h create mode 100644 mm/pmalloc.c -- 2.16.0 ^ permalink raw reply [flat|nested] 84+ messages in thread
* [PATCH 4/6] Protectable Memory 2018-02-04 16:47 [RFC PATCH v14 0/6] mm: security: ro protection for dynamic data Igor Stoppa @ 2018-02-04 16:47 ` Igor Stoppa 0 siblings, 0 replies; 84+ messages in thread From: Igor Stoppa @ 2018-02-04 16:47 UTC (permalink / raw) To: jglisse, keescook, mhocko, labbott, hch, willy Cc: cl, linux-security-module, linux-mm, linux-kernel, kernel-hardening, Igor Stoppa The MMU available in many systems running Linux can often provide R/O protection to the memory pages it handles. However, the MMU-based protection works efficiently only when said pages contain exclusively data that will not need further modifications. Statically allocated variables can be segregated into a dedicated section, but this does not sit very well with dynamically allocated ones. Dynamic allocation currently does not provide any means for grouping variables in memory pages that would contain exclusively data suitable for conversion to read-only access mode. The allocator provided here (pmalloc - protectable memory allocator) introduces the concept of pools of protectable memory. A module can request a pool and then refer any allocation request to the pool handle it has received. Once all the chunks of memory associated with a specific pool are initialized, the pool can be protected. After this point, the pool can only be destroyed (it is up to the module to avoid any further references to the memory from the pool, after the destruction is invoked). The latter case is mainly meant for releasing memory when a module is unloaded. A module can have as many pools as needed, for example to support the protection of data that is initialized in sufficiently distinct phases. Signed-off-by: Igor Stoppa <igor.stoppa@huawei.com> --- include/linux/genalloc.h | 3 + include/linux/pmalloc.h | 213 ++++++++++++++++++++ include/linux/vmalloc.h | 1 + lib/genalloc.c | 27 +++ mm/Makefile | 1 + mm/pmalloc.c | 514 +++++++++++++++++++++++++++++++++++++++++++++++ mm/usercopy.c | 25 ++- 7 files changed, 780 insertions(+), 4 deletions(-) create mode 100644 include/linux/pmalloc.h create mode 100644 mm/pmalloc.c diff --git a/include/linux/genalloc.h b/include/linux/genalloc.h index dcaa33e74b1c..b6c4cea9fbd8 100644 --- a/include/linux/genalloc.h +++ b/include/linux/genalloc.h @@ -121,6 +121,9 @@ extern unsigned long gen_pool_alloc_algo(struct gen_pool *, size_t, extern void *gen_pool_dma_alloc(struct gen_pool *pool, size_t size, dma_addr_t *dma); extern void gen_pool_free(struct gen_pool *, unsigned long, size_t); + +extern void gen_pool_flush_chunk(struct gen_pool *pool, + struct gen_pool_chunk *chunk); extern void gen_pool_for_each_chunk(struct gen_pool *, void (*)(struct gen_pool *, struct gen_pool_chunk *, void *), void *); extern size_t gen_pool_avail(struct gen_pool *); diff --git a/include/linux/pmalloc.h b/include/linux/pmalloc.h new file mode 100644 index 000000000000..5fa8a78be819 --- /dev/null +++ b/include/linux/pmalloc.h @@ -0,0 +1,213 @@ +/* SPDX-License-Identifier: GPL-2.0 + * + * pmalloc.h: Header for Protectable Memory Allocator + * + * (C) Copyright 2017 Huawei Technologies Co. Ltd. + * Author: Igor Stoppa <igor.stoppa@huawei.com> + */ + +#ifndef _PMALLOC_H +#define _PMALLOC_H + + +#include <linux/genalloc.h> +#include <linux/string.h> + +#define PMALLOC_DEFAULT_ALLOC_ORDER (-1) + +/* + * Library for dynamic allocation of pools of memory that can be, + * after initialization, marked as read-only.
+ * + * This is intended to complement __ro_after_init, for those cases + * where either it is not possible to know the initialization value before + * init is completed, or the amount of data is variable and can be + * determined only at run-time. + * + * ***WARNING*** + * The user of the API is expected to synchronize: + * 1) allocation, + * 2) writes to the allocated memory, + * 3) write protection of the pool, + * 4) freeing of the allocated memory, and + * 5) destruction of the pool. + * + * For a non-threaded scenario, this type of locking is not even required. + * + * Even if the library were to provide support for locking, point 2) + * would still depend on the user taking the lock. + */ + + +/** + * pmalloc_create_pool - create a new protectable memory pool + * @name: the name of the pool, must be unique + * @min_alloc_order: log2 of the minimum allocation size obtainable + * from the pool + * + * Creates a new (empty) memory pool for allocation of protectable + * memory. Memory will be allocated upon request (through pmalloc). + * + * Returns a pointer to the new pool upon success, otherwise NULL. + */ +struct gen_pool *pmalloc_create_pool(const char *name, + int min_alloc_order); + + +int is_pmalloc_object(const void *ptr, const unsigned long n); + +/** + * pmalloc_prealloc - tries to allocate a memory chunk of the requested size + * @pool: handle to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * + * Prepares a chunk of the requested size. + * This is intended to both minimize latency in later memory requests and + * avoid sleeping during allocation. + * Memory allocated with prealloc is stored in one single chunk, as + * opposed to what is allocated on-demand when pmalloc runs out of free + * space already existing in the pool and has to invoke vmalloc. + * + * Returns true if the vmalloc call was successful, false otherwise. + */ +bool pmalloc_prealloc(struct gen_pool *pool, size_t size); + +/** + * pmalloc - allocate protectable memory from a pool + * @pool: handle to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * @gfp: flags for page allocation + * + * Allocates memory from an unprotected pool. If the pool doesn't have + * enough memory, and the request did not include GFP_ATOMIC, an attempt + * is made to add a new chunk of memory to the pool + * (a multiple of PAGE_SIZE), in order to fit the new request. + * Otherwise, NULL is returned. + * + * Returns the pointer to the memory requested upon success, + * NULL otherwise (either no memory available or pool already read-only). + */ +void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp); + + +/** + * pzalloc - zero-initialized version of pmalloc + * @pool: handle to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * @gfp: flags for page allocation + * + * Executes pmalloc, initializing the memory requested to 0, + * before returning the pointer to it. + * + * Returns the pointer to the zeroed memory requested, upon success, + * NULL otherwise (either no memory available or pool already read-only).
+ */ +static inline void *pzalloc(struct gen_pool *pool, size_t size, gfp_t gfp) +{ + return pmalloc(pool, size, gfp | __GFP_ZERO); +} + +/** + * pmalloc_array - allocates an array according to the parameters + * @pool: handle to the pool to be used for memory allocation + * @n: number of elements in the array + * @size: amount of memory (in bytes) requested for each element + * @flags: flags for page allocation + * + * Executes pmalloc, if it has a chance to succeed. + * + * Returns either NULL or the pmalloc result. + */ +static inline void *pmalloc_array(struct gen_pool *pool, size_t n, + size_t size, gfp_t flags) +{ + if (unlikely(!(pool && n && size))) + return NULL; + return pmalloc(pool, n * size, flags); +} + +/** + * pcalloc - allocates a 0-initialized array according to the parameters + * @pool: handle to the pool to be used for memory allocation + * @n: number of elements in the array + * @size: amount of memory (in bytes) requested + * @flags: flags for page allocation + * + * Executes pmalloc_array, if it has a chance to succeed. + * + * Returns either NULL or the pmalloc result. + */ +static inline void *pcalloc(struct gen_pool *pool, size_t n, + size_t size, gfp_t flags) +{ + return pmalloc_array(pool, n, size, flags | __GFP_ZERO); +} + +/** + * pstrdup - duplicate a string, using pmalloc as allocator + * @pool: handle to the pool to be used for memory allocation + * @s: string to duplicate + * @gfp: flags for page allocation + * + * Generates a copy of the given string, allocating sufficient memory + * from the given pmalloc pool. + * + * Returns a pointer to the replica, NULL in case of recoverable error. + */ +static inline char *pstrdup(struct gen_pool *pool, const char *s, gfp_t gfp) +{ + size_t len; + char *buf; + + if (unlikely(pool == NULL || s == NULL)) + return NULL; + + len = strlen(s) + 1; + buf = pmalloc(pool, len, gfp); + if (likely(buf)) + strncpy(buf, s, len); + return buf; +} + +/** + * pmalloc_protect_pool - turn a read/write pool read-only + * @pool: the pool to protect + * + * Write-protects all the memory chunks assigned to the pool. + * This prevents any further allocation. + * + * Returns 0 upon success, -EINVAL in abnormal cases. + */ +int pmalloc_protect_pool(struct gen_pool *pool); + +/** + * pfree - mark as unused memory that was previously in use + * @pool: handle to the pool to be used for memory allocation + * @addr: the beginning of the memory area to be freed + * + * The behavior of pfree is different, depending on the state of the + * protection. + * If the pool is not yet protected, the memory is marked as unused and + * will be available for further allocations. + * If the pool is already protected, the memory is marked as unused, but + * it will still be impossible to perform further allocation, because of + * the existing protection. + * The freed memory, in this case, will be truly released only when the + * pool is destroyed. + */ +static inline void pfree(struct gen_pool *pool, const void *addr) +{ + gen_pool_free(pool, (unsigned long)addr, 0); +} + +/** + * pmalloc_destroy_pool - destroys a pool and all the associated memory + * @pool: the pool to destroy + * + * All the memory that was allocated through pmalloc in the pool will be freed. + * + * Returns 0 upon success, -EINVAL in abnormal cases.
+ */ +int pmalloc_destroy_pool(struct gen_pool *pool); + +#endif diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h index 1e5d8c392f15..116d280cca53 100644 --- a/include/linux/vmalloc.h +++ b/include/linux/vmalloc.h @@ -20,6 +20,7 @@ struct notifier_block; /* in notifier.h */ #define VM_UNINITIALIZED 0x00000020 /* vm_struct is not fully initialized */ #define VM_NO_GUARD 0x00000040 /* don't add guard page */ #define VM_KASAN 0x00000080 /* has allocated kasan shadow memory */ +#define VM_PMALLOC 0x00000100 /* pmalloc area - see docs */ /* bits [20..32] reserved for arch specific ioremap internals */ /* diff --git a/lib/genalloc.c b/lib/genalloc.c index dde78307b093..7ba2ec96c360 100644 --- a/lib/genalloc.c +++ b/lib/genalloc.c @@ -519,6 +519,33 @@ void gen_pool_free(struct gen_pool *pool, unsigned long addr, size_t size) } EXPORT_SYMBOL(gen_pool_free); + +/** + * gen_pool_flush_chunk - drops all the allocations from a specific chunk + * @pool: the generic memory pool + * @chunk: The chunk to wipe clear. + * + * This is meant to be called only while destroying a pool. It's up to the + * caller to avoid races, but really, at this point the pool should have + * already been retired and have become unavailable for any other sort of + * operation. + */ +void gen_pool_flush_chunk(struct gen_pool *pool, + struct gen_pool_chunk *chunk) +{ + size_t size; + + if (unlikely(!(pool && chunk))) + return; + + size = chunk->end_addr + 1 - chunk->start_addr; + memset(chunk->entries, 0, + DIV_ROUND_UP(size >> pool->min_alloc_order * BITS_PER_ENTRY, + BITS_PER_BYTE)); + atomic_long_set(&chunk->avail, size); +} + + /** * gen_pool_for_each_chunk - call func for every chunk of generic memory pool * @pool: the generic memory pool diff --git a/mm/Makefile b/mm/Makefile index e669f02c5a54..a6a47e1b6e66 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -65,6 +65,7 @@ obj-$(CONFIG_SPARSEMEM) += sparse.o obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o obj-$(CONFIG_SLOB) += slob.o obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o +obj-$(CONFIG_ARCH_HAS_SET_MEMORY) += pmalloc.o obj-$(CONFIG_KSM) += ksm.o obj-$(CONFIG_PAGE_POISONING) += page_poison.o obj-$(CONFIG_SLAB) += slab.o diff --git a/mm/pmalloc.c b/mm/pmalloc.c new file mode 100644 index 000000000000..11daca252589 --- /dev/null +++ b/mm/pmalloc.c @@ -0,0 +1,514 @@ +/* SPDX-License-Identifier: GPL-2.0 + * + * pmalloc.c: Protectable Memory Allocator + * + * (C) Copyright 2017 Huawei Technologies Co. Ltd. + * Author: Igor Stoppa <igor.stoppa@huawei.com> + */ + +#include <linux/printk.h> +#include <linux/init.h> +#include <linux/mm.h> +#include <linux/vmalloc.h> +#include <linux/genalloc.h> +#include <linux/kernel.h> +#include <linux/log2.h> +#include <linux/slab.h> +#include <linux/device.h> +#include <linux/atomic.h> +#include <linux/rculist.h> +#include <linux/set_memory.h> +#include <asm/cacheflush.h> +#include <asm/page.h> + +#include "pmalloc-selftest.h" + +/** + * pmalloc_data contains the data specific to a pmalloc pool, + * in a format compatible with the design of gen_alloc. + * Some of the fields are used for exposing the corresponding parameter + * to userspace, through sysfs. + */ +struct pmalloc_data { + struct gen_pool *pool; /* Link back to the associated pool. */ + bool protected; /* Status of the pool: RO or RW. */ + struct kobj_attribute attr_protected; /* Sysfs attribute. */ + struct kobj_attribute attr_avail; /* Sysfs attribute. */ + struct kobj_attribute attr_size; /* Sysfs attribute. 
*/ + struct kobj_attribute attr_chunks; /* Sysfs attribute. */ + struct kobject *pool_kobject; + struct list_head node; /* list of pools */ +}; + +static LIST_HEAD(pmalloc_final_list); +static LIST_HEAD(pmalloc_tmp_list); +static struct list_head *pmalloc_list = &pmalloc_tmp_list; +static DEFINE_MUTEX(pmalloc_mutex); +static struct kobject *pmalloc_kobject; + +static ssize_t pmalloc_pool_show_protected(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_protected); + if (data->protected) + return sprintf(buf, "protected\n"); + else + return sprintf(buf, "unprotected\n"); +} + +static ssize_t pmalloc_pool_show_avail(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_avail); + return sprintf(buf, "%lu\n", + (unsigned long)gen_pool_avail(data->pool)); +} + +static ssize_t pmalloc_pool_show_size(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_size); + return sprintf(buf, "%lu\n", + (unsigned long)gen_pool_size(data->pool)); +} + +static void pool_chunk_number(struct gen_pool *pool, + struct gen_pool_chunk *chunk, void *data) +{ + unsigned long *counter = data; + + (*counter)++; +} + +static ssize_t pmalloc_pool_show_chunks(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + unsigned long chunks_num = 0; + + data = container_of(attr, struct pmalloc_data, attr_chunks); + gen_pool_for_each_chunk(data->pool, pool_chunk_number, &chunks_num); + return sprintf(buf, "%lu\n", chunks_num); +} + +/** + * Exposes the pool and its attributes through sysfs. + */ +static struct kobject *pmalloc_connect(struct pmalloc_data *data) +{ + const struct attribute *attrs[] = { + &data->attr_protected.attr, + &data->attr_avail.attr, + &data->attr_size.attr, + &data->attr_chunks.attr, + NULL + }; + struct kobject *kobj; + + kobj = kobject_create_and_add(data->pool->name, pmalloc_kobject); + if (unlikely(!kobj)) + return NULL; + + if (unlikely(sysfs_create_files(kobj, attrs) < 0)) { + kobject_put(kobj); + kobj = NULL; + } + return kobj; +} + +/** + * Removes the pool and its attributes from sysfs. + */ +static void pmalloc_disconnect(struct pmalloc_data *data, + struct kobject *kobj) +{ + const struct attribute *attrs[] = { + &data->attr_protected.attr, + &data->attr_avail.attr, + &data->attr_size.attr, + &data->attr_chunks.attr, + NULL + }; + + sysfs_remove_files(kobj, attrs); + kobject_put(kobj); +} + +/** + * Declares an attribute of the pool. 
+ */ + +#define pmalloc_attr_init(data, attr_name) \ +do { \ + sysfs_attr_init(&data->attr_##attr_name.attr); \ + data->attr_##attr_name.attr.name = #attr_name; \ + data->attr_##attr_name.attr.mode = VERIFY_OCTAL_PERMISSIONS(0400); \ + data->attr_##attr_name.show = pmalloc_pool_show_##attr_name; \ +} while (0) + +struct gen_pool *pmalloc_create_pool(const char *name, int min_alloc_order) +{ + struct gen_pool *pool; + const char *pool_name; + struct pmalloc_data *data; + + if (!name) { + WARN_ON(1); + return NULL; + } + + if (min_alloc_order < 0) + min_alloc_order = ilog2(sizeof(unsigned long)); + + pool = gen_pool_create(min_alloc_order, NUMA_NO_NODE); + if (unlikely(!pool)) + return NULL; + + mutex_lock(&pmalloc_mutex); + list_for_each_entry(data, pmalloc_list, node) + if (!strcmp(name, data->pool->name)) + goto same_name_err; + + pool_name = kstrdup(name, GFP_KERNEL); + if (unlikely(!pool_name)) + goto name_alloc_err; + + data = kzalloc(sizeof(struct pmalloc_data), GFP_KERNEL); + if (unlikely(!data)) + goto data_alloc_err; + + data->protected = false; + data->pool = pool; + pmalloc_attr_init(data, protected); + pmalloc_attr_init(data, avail); + pmalloc_attr_init(data, size); + pmalloc_attr_init(data, chunks); + pool->data = data; + pool->name = pool_name; + + list_add(&data->node, pmalloc_list); + if (pmalloc_list == &pmalloc_final_list) + data->pool_kobject = pmalloc_connect(data); + mutex_unlock(&pmalloc_mutex); + return pool; + +data_alloc_err: + kfree(pool_name); +name_alloc_err: +same_name_err: + mutex_unlock(&pmalloc_mutex); + gen_pool_destroy(pool); + return NULL; +} + +static inline int check_alloc_params(struct gen_pool *pool, size_t req_size) +{ + struct pmalloc_data *data; + unsigned int order; + + if (unlikely(!req_size || !pool)) + return -1; + + order = (unsigned int)pool->min_alloc_order; + data = pool->data; + + if (data == NULL) + return -1; + + if (unlikely(data->protected)) { + WARN_ON(1); + return -1; + } + return 0; +} + + +static inline bool chunk_tagging(void *chunk, bool tag) +{ + struct vm_struct *area; + struct page *page; + + if (!is_vmalloc_addr(chunk)) + return false; + + page = vmalloc_to_page(chunk); + if (unlikely(!page)) + return false; + + area = page->area; + if (tag) + area->flags |= VM_PMALLOC; + else + area->flags &= ~VM_PMALLOC; + return true; +} + + +static inline bool tag_chunk(void *chunk) +{ + return chunk_tagging(chunk, true); +} + + +static inline bool untag_chunk(void *chunk) +{ + return chunk_tagging(chunk, false); +} + +enum { + INVALID_PMALLOC_OBJECT = -1, + NOT_PMALLOC_OBJECT = 0, + VALID_PMALLOC_OBJECT = 1, +}; + +int is_pmalloc_object(const void *ptr, const unsigned long n) +{ + struct vm_struct *area; + struct page *page; + unsigned long area_start; + unsigned long area_end; + unsigned long object_start; + unsigned long object_end; + + + /* is_pmalloc_object gets called pretty late, so chances are high + * that the object is indeed of vmalloc type + */ + if (unlikely(!is_vmalloc_addr(ptr))) + return NOT_PMALLOC_OBJECT; + + page = vmalloc_to_page(ptr); + if (unlikely(!page)) + return NOT_PMALLOC_OBJECT; + + area = page->area; + + if (likely(!(area->flags & VM_PMALLOC))) + return NOT_PMALLOC_OBJECT; + + area_start = (unsigned long)area->addr; + area_end = area_start + area->nr_pages * PAGE_SIZE - 1; + object_start = (unsigned long)ptr; + object_end = object_start + n - 1; + + if (likely((area_start <= object_start) && + (object_end <= area_end))) + return VALID_PMALLOC_OBJECT; + else + return INVALID_PMALLOC_OBJECT; +} + + +bool 
pmalloc_prealloc(struct gen_pool *pool, size_t size) +{ + void *chunk; + size_t chunk_size; + bool add_error; + unsigned int order; + + if (check_alloc_params(pool, size)) + return false; + + order = (unsigned int)pool->min_alloc_order; + + /* Expand pool */ + chunk_size = roundup(size, PAGE_SIZE); + chunk = vmalloc(chunk_size); + if (unlikely(chunk == NULL)) + return false; + + /* Locking is already done inside gen_pool_add */ + add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size, + NUMA_NO_NODE); + if (unlikely(add_error != 0)) + goto abort; + + return true; +abort: + vfree_atomic(chunk); + return false; + +} + +void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp) +{ + void *chunk; + size_t chunk_size; + bool add_error; + unsigned long retval; + unsigned int order; + + if (check_alloc_params(pool, size)) + return NULL; + + order = (unsigned int)pool->min_alloc_order; + +retry_alloc_from_pool: + retval = gen_pool_alloc(pool, size); + if (retval) + goto return_allocation; + + if (unlikely((gfp & __GFP_ATOMIC))) { + if (unlikely((gfp & __GFP_NOFAIL))) + goto retry_alloc_from_pool; + else + return NULL; + } + + /* Expand pool */ + chunk_size = roundup(size, PAGE_SIZE); + chunk = vmalloc(chunk_size); + if (unlikely(!chunk)) { + if (unlikely((gfp & __GFP_NOFAIL))) + goto retry_alloc_from_pool; + else + return NULL; + } + if (unlikely(!tag_chunk(chunk))) + goto free; + + /* Locking is already done inside gen_pool_add */ + add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size, + NUMA_NO_NODE); + if (unlikely(add_error)) + goto abort; + + retval = gen_pool_alloc(pool, size); + if (retval) { +return_allocation: + *(size_t *)retval = size; + if (gfp & __GFP_ZERO) + memset((void *)retval, 0, size); + return (void *)retval; + } + /* Here there is no test for __GFP_NOFAIL because, in case of + * concurrent allocation, one thread might add a chunk to the + * pool and this memory could be allocated by another thread, + * before the first thread gets a chance to use it. + * As long as vmalloc succeeds, it's ok to retry.
+ */ + goto retry_alloc_from_pool; +abort: + untag_chunk(chunk); +free: + vfree_atomic(chunk); + return NULL; +} + +static void pmalloc_chunk_set_protection(struct gen_pool *pool, + + struct gen_pool_chunk *chunk, + void *data) +{ + const bool *flag = data; + size_t chunk_size = chunk->end_addr + 1 - chunk->start_addr; + unsigned long pages = chunk_size / PAGE_SIZE; + + BUG_ON(chunk_size & (PAGE_SIZE - 1)); + + if (*flag) + set_memory_ro(chunk->start_addr, pages); + else + set_memory_rw(chunk->start_addr, pages); +} + +static int pmalloc_pool_set_protection(struct gen_pool *pool, bool protection) +{ + struct pmalloc_data *data; + struct gen_pool_chunk *chunk; + + if (unlikely(!pool)) + return -EINVAL; + + data = pool->data; + + if (unlikely(!data)) + return -EINVAL; + + if (unlikely(data->protected == protection)) { + WARN_ON(1); + return 0; + } + + data->protected = protection; + list_for_each_entry(chunk, &(pool)->chunks, next_chunk) + pmalloc_chunk_set_protection(pool, chunk, &protection); + return 0; +} + +int pmalloc_protect_pool(struct gen_pool *pool) +{ + return pmalloc_pool_set_protection(pool, true); +} + + +static void pmalloc_chunk_free(struct gen_pool *pool, + struct gen_pool_chunk *chunk, void *data) +{ + untag_chunk(chunk); + gen_pool_flush_chunk(pool, chunk); + vfree_atomic((void *)chunk->start_addr); +} + + +int pmalloc_destroy_pool(struct gen_pool *pool) +{ + struct pmalloc_data *data; + + if (unlikely(pool == NULL)) + return -EINVAL; + + data = pool->data; + + if (unlikely(data == NULL)) + return -EINVAL; + + mutex_lock(&pmalloc_mutex); + list_del(&data->node); + mutex_unlock(&pmalloc_mutex); + + if (likely(data->pool_kobject)) + pmalloc_disconnect(data, data->pool_kobject); + + pmalloc_pool_set_protection(pool, false); + gen_pool_for_each_chunk(pool, pmalloc_chunk_free, NULL); + gen_pool_destroy(pool); + kfree(data); + return 0; +} + +/** + * When the sysfs is ready to receive registrations, connect all the + * pools previously created. Also enable further pools to be connected + * right away. + */ +static int __init pmalloc_late_init(void) +{ + struct pmalloc_data *data, *n; + + pmalloc_kobject = kobject_create_and_add("pmalloc", kernel_kobj); + + mutex_lock(&pmalloc_mutex); + pmalloc_list = &pmalloc_final_list; + + if (likely(pmalloc_kobject != NULL)) { + list_for_each_entry_safe(data, n, &pmalloc_tmp_list, node) { + list_move(&data->node, &pmalloc_final_list); + pmalloc_connect(data); + } + } + mutex_unlock(&pmalloc_mutex); + pmalloc_selftest(); + return 0; +} +late_initcall(pmalloc_late_init); diff --git a/mm/usercopy.c b/mm/usercopy.c index a9852b24715d..c3b10298d808 100644 --- a/mm/usercopy.c +++ b/mm/usercopy.c @@ -15,6 +15,7 @@ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt #include <linux/mm.h> +#include <linux/pmalloc.h> #include <linux/slab.h> #include <linux/sched.h> #include <linux/sched/task.h> @@ -222,6 +223,7 @@ static inline const char *check_heap_object(const void *ptr, unsigned long n, void __check_object_size(const void *ptr, unsigned long n, bool to_user) { const char *err; + int retv; /* Skip all tests if size is zero. */ if (!n) @@ -229,12 +231,12 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user) /* Check for invalid addresses. */ err = check_bogus_address(ptr, n); - if (err) + if (unlikely(err)) goto report; /* Check for bad heap object. */ err = check_heap_object(ptr, n, to_user); - if (err) + if (unlikely(err)) goto report; /* Check for bad stack object. 
diff --git a/mm/usercopy.c b/mm/usercopy.c
index a9852b24715d..c3b10298d808 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -15,6 +15,7 @@
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

 #include <linux/mm.h>
+#include <linux/pmalloc.h>
 #include <linux/slab.h>
 #include <linux/sched.h>
 #include <linux/sched/task.h>
@@ -222,6 +223,7 @@ static inline const char *check_heap_object(const void *ptr, unsigned long n,
 void __check_object_size(const void *ptr, unsigned long n, bool to_user)
 {
 	const char *err;
+	int retv;

 	/* Skip all tests if size is zero. */
 	if (!n)
@@ -229,12 +231,12 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user)

 	/* Check for invalid addresses. */
 	err = check_bogus_address(ptr, n);
-	if (err)
+	if (unlikely(err))
 		goto report;

 	/* Check for bad heap object. */
 	err = check_heap_object(ptr, n, to_user);
-	if (err)
+	if (unlikely(err))
 		goto report;

 	/* Check for bad stack object. */
@@ -257,8 +259,23 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user)

 	/* Check for object in kernel to avoid text exposure. */
 	err = check_kernel_text_object(ptr, n);
-	if (!err)
-		return;
+	if (unlikely(err))
+		goto report;
+
+	/* Check if the object is from a pmalloc chunk. */
+	retv = is_pmalloc_object(ptr, n);
+	if (unlikely(retv)) {
+		if (unlikely(!to_user)) {
+			err = "<trying to write to pmalloc object>";
+			goto report;
+		}
+		if (retv < 0) {
+			err = "<invalid pmalloc object>";
+			goto report;
+		}
+	}
+	return;
 report:
 	report_usercopy(ptr, n, to_user, err);
--
2.16.0

^ permalink raw reply related	[flat|nested] 84+ messages in thread
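For a concrete picture of what the pmalloc check in __check_object_size()
prevents, consider a minimal sketch (hypothetical code, not part of the
series): a copy_from_user() into an object living in a protected pool
reaches the check with to_user == false, is_pmalloc_object() returns
non-zero, and the copy is reported before any write can land.

/* Hypothetical sketch: a write the new usercopy check rejects. */
static long demo_store(const char __user *ubuf, size_t n)
{
	/* demo_value points into a pool already made read-only. */
	if (copy_from_user(demo_value, ubuf, n))
		return -EFAULT;
	return 0;	/* not reached: report_usercopy() fires first */
}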
* [PATCH 4/6] Protectable Memory
@ 2018-02-04 16:47 ` Igor Stoppa
  0 siblings, 0 replies; 84+ messages in thread
From: Igor Stoppa @ 2018-02-04 16:47 UTC (permalink / raw)
  To: linux-security-module

The MMU available in many systems running Linux can often provide R/O
protection to the memory pages it handles. However, the MMU-based
protection works efficiently only when said pages contain exclusively
data that will not need further modifications.

Statically allocated variables can be segregated into a dedicated
section, but this does not sit very well with dynamically allocated
ones. Dynamic allocation currently provides no means of grouping
variables into memory pages that contain exclusively data suitable for
conversion to read-only mode.

The allocator provided here (pmalloc: protectable memory allocator)
introduces the concept of pools of protectable memory. A module can
request a pool and then direct any allocation request to the pool
handle it has received. Once all the memory chunks associated with a
specific pool are initialized, the pool can be protected.

After this point, the pool can only be destroyed (it is up to the
module to avoid any further reference to the memory from the pool, once
destruction has been invoked). The latter case is mainly meant for
releasing memory when a module is unloaded.

A module can have as many pools as needed, for example to support the
protection of data that is initialized in sufficiently distinct phases.

Signed-off-by: Igor Stoppa <igor.stoppa@huawei.com>
---
 include/linux/genalloc.h |   3 +
 include/linux/pmalloc.h  | 213 ++++++++++++++++++++
 include/linux/vmalloc.h  |   1 +
 lib/genalloc.c           |  27 +++
 mm/Makefile              |   1 +
 mm/pmalloc.c             | 514 +++++++++++++++++++++++++++++++++++++++++++++++
 mm/usercopy.c            |  25 ++-
 7 files changed, 780 insertions(+), 4 deletions(-)
 create mode 100644 include/linux/pmalloc.h
 create mode 100644 mm/pmalloc.c

diff --git a/include/linux/genalloc.h b/include/linux/genalloc.h
index dcaa33e74b1c..b6c4cea9fbd8 100644
--- a/include/linux/genalloc.h
+++ b/include/linux/genalloc.h
@@ -121,6 +121,9 @@ extern unsigned long gen_pool_alloc_algo(struct gen_pool *, size_t,
 extern void *gen_pool_dma_alloc(struct gen_pool *pool, size_t size,
 		dma_addr_t *dma);
 extern void gen_pool_free(struct gen_pool *, unsigned long, size_t);
+
+extern void gen_pool_flush_chunk(struct gen_pool *pool,
+				 struct gen_pool_chunk *chunk);
 extern void gen_pool_for_each_chunk(struct gen_pool *,
 	void (*)(struct gen_pool *, struct gen_pool_chunk *, void *), void *);
 extern size_t gen_pool_avail(struct gen_pool *);
diff --git a/include/linux/pmalloc.h b/include/linux/pmalloc.h
new file mode 100644
index 000000000000..5fa8a78be819
--- /dev/null
+++ b/include/linux/pmalloc.h
@@ -0,0 +1,213 @@
+/* SPDX-License-Identifier: GPL-2.0
+ *
+ * pmalloc.h: Header for Protectable Memory Allocator
+ *
+ * (C) Copyright 2017 Huawei Technologies Co. Ltd.
+ * Author: Igor Stoppa <igor.stoppa@huawei.com>
+ */
+
+#ifndef _PMALLOC_H
+#define _PMALLOC_H
+
+
+#include <linux/genalloc.h>
+#include <linux/string.h>
+
+#define PMALLOC_DEFAULT_ALLOC_ORDER (-1)
+
+/*
+ * Library for dynamic allocation of pools of memory that can be,
+ * after initialization, marked as read-only.
+ *
+ * This is intended to complement __ro_after_init, for those cases where
+ * either it is not possible to know the initialization value before init
+ * is completed, or the amount of data is variable and can be determined
+ * only at run-time.
+ *
+ * ***WARNING***
+ * The user of the API is expected to synchronize:
+ * 1) allocation,
+ * 2) writes to the allocated memory,
+ * 3) write protection of the pool,
+ * 4) freeing of the allocated memory, and
+ * 5) destruction of the pool.
+ *
+ * For a non-threaded scenario, this type of locking is not even required.
+ *
+ * Even if the library were to provide support for locking, point 2)
+ * would still depend on the user taking the lock.
+ */
+
+
+/**
+ * pmalloc_create_pool - create a new protectable memory pool
+ * @name: the name of the pool, must be unique
+ * @min_alloc_order: log2 of the minimum allocation size obtainable
+ *                   from the pool; a negative value (such as
+ *                   PMALLOC_DEFAULT_ALLOC_ORDER) selects the default
+ *
+ * Creates a new (empty) memory pool for allocation of protectable
+ * memory. Memory will be allocated upon request (through pmalloc).
+ *
+ * Returns a pointer to the new pool upon success, otherwise NULL.
+ */
+struct gen_pool *pmalloc_create_pool(const char *name,
+				     int min_alloc_order);
+
+
+int is_pmalloc_object(const void *ptr, const unsigned long n);
+
+/**
+ * pmalloc_prealloc - try to allocate a memory chunk of the requested size
+ * @pool: handle to the pool to be used for memory allocation
+ * @size: amount of memory (in bytes) requested
+ *
+ * Prepares a chunk of the requested size.
+ * This is intended both to minimize latency in later memory requests and
+ * to avoid sleeping during allocation.
+ * Memory allocated with prealloc is stored in one single chunk, as
+ * opposed to what is allocated on-demand when pmalloc runs out of free
+ * space already existing in the pool and has to invoke vmalloc.
+ *
+ * Returns true if the vmalloc call was successful, false otherwise.
+ */
+bool pmalloc_prealloc(struct gen_pool *pool, size_t size);
+
+/**
+ * pmalloc - allocate protectable memory from a pool
+ * @pool: handle to the pool to be used for memory allocation
+ * @size: amount of memory (in bytes) requested
+ * @gfp: flags for page allocation
+ *
+ * Allocates memory from an unprotected pool. If the pool does not have
+ * enough memory, and the request did not include GFP_ATOMIC, an attempt
+ * is made to add a new chunk of memory to the pool (a multiple of
+ * PAGE_SIZE), in order to fit the new request. Otherwise, NULL is
+ * returned.
+ *
+ * Returns the pointer to the memory requested upon success,
+ * NULL otherwise (either no memory available or pool already read-only).
+ */
+void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp);
+
+
+/**
+ * pzalloc - zero-initialized version of pmalloc
+ * @pool: handle to the pool to be used for memory allocation
+ * @size: amount of memory (in bytes) requested
+ * @gfp: flags for page allocation
+ *
+ * Executes pmalloc, initializing the memory requested to 0 before
+ * returning the pointer to it.
+ *
+ * Returns the pointer to the zeroed memory requested upon success,
+ * NULL otherwise (either no memory available or pool already read-only).
+ */
+static inline void *pzalloc(struct gen_pool *pool, size_t size, gfp_t gfp)
+{
+	return pmalloc(pool, size, gfp | __GFP_ZERO);
+}
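/*
 * A minimal sketch of the caller-side locking described in the warning
 * above (illustration only, not part of the patch; the "cfg" names and
 * struct are hypothetical). One caller-owned mutex serializes the
 * allocation, the writes and the write protection:
 */
#include <linux/mutex.h>
#include <linux/pmalloc.h>

struct cfg { unsigned long magic; };

static DEFINE_MUTEX(cfg_lock);
static struct gen_pool *cfg_pool;	/* created via pmalloc_create_pool() */

static int cfg_commit(const struct cfg *src)
{
	struct cfg *dst;

	mutex_lock(&cfg_lock);
	dst = pmalloc(cfg_pool, sizeof(*dst), GFP_KERNEL);	/* 1) allocate */
	if (!dst) {
		mutex_unlock(&cfg_lock);
		return -ENOMEM;
	}
	*dst = *src;				/* 2) write while still R/W */
	pmalloc_protect_pool(cfg_pool);		/* 3) one-way R/O transition */
	mutex_unlock(&cfg_lock);
	return 0;
}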
+ */ +static inline void *pmalloc_array(struct gen_pool *pool, size_t n, + size_t size, gfp_t flags) +{ + if (unlikely(!(pool && n && size))) + return NULL; + return pmalloc(pool, n * size, flags); +} + +/** + * pcalloc - allocates a 0-initialized array according to the parameters + * @pool: handler to the pool to be used for memory allocation + * @n: number of elements in the array + * @size: amount of memory (in bytes) requested + * @flags: flags for page allocation + * + * Executes pmalloc_array, if it has a chance to succeed. + * + * Returns either NULL or the pmalloc result. + */ +static inline void *pcalloc(struct gen_pool *pool, size_t n, + size_t size, gfp_t flags) +{ + return pmalloc_array(pool, n, size, flags | __GFP_ZERO); +} + +/** + * pstrdup - duplicate a string, using pmalloc as allocator + * @pool: handler to the pool to be used for memory allocation + * @s: string to duplicate + * @gfp: flags for page allocation + * + * Generates a copy of the given string, allocating sufficient memory + * from the given pmalloc pool. + * + * Returns a pointer to the replica, NULL in case of recoverable error. + */ +static inline char *pstrdup(struct gen_pool *pool, const char *s, gfp_t gfp) +{ + size_t len; + char *buf; + + if (unlikely(pool == NULL || s == NULL)) + return NULL; + + len = strlen(s) + 1; + buf = pmalloc(pool, len, gfp); + if (likely(buf)) + strncpy(buf, s, len); + return buf; +} + +/** + * pmalloc_protect_pool - turn a read/write pool read-only + * @pool: the pool to protect + * + * Write-protects all the memory chunks assigned to the pool. + * This prevents any further allocation. + * + * Returns 0 upon success, -EINVAL in abnormal cases. + */ +int pmalloc_protect_pool(struct gen_pool *pool); + +/** + * pfree - mark as unused memory that was previously in use + * @pool: handler to the pool to be used for memory allocation + * @addr: the beginning of the memory area to be freed + * + * The behavior of pfree is different, depending on the state of the + * protection. + * If the pool is not yet protected, the memory is marked as unused and + * will be availabel for further allocations. + * If the pool is already protected, the memory is marked as unused, but + * it will still be impossible to perform further allocation, because of + * the existing protection. + * The freed memory, in this case, will be truly released only when the + * pool is destroyed. + */ +static inline void pfree(struct gen_pool *pool, const void *addr) +{ + gen_pool_free(pool, (unsigned long)addr, 0); +} + +/** + * pmalloc_destroy_pool - destroys a pool and all the associated memory + * @pool: the pool to destroy + * + * All the memory that was allocated through pmalloc in the pool will be freed. + * + * Returns 0 upon success, -EINVAL in abnormal cases. 
+ */ +int pmalloc_destroy_pool(struct gen_pool *pool); + +#endif diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h index 1e5d8c392f15..116d280cca53 100644 --- a/include/linux/vmalloc.h +++ b/include/linux/vmalloc.h @@ -20,6 +20,7 @@ struct notifier_block; /* in notifier.h */ #define VM_UNINITIALIZED 0x00000020 /* vm_struct is not fully initialized */ #define VM_NO_GUARD 0x00000040 /* don't add guard page */ #define VM_KASAN 0x00000080 /* has allocated kasan shadow memory */ +#define VM_PMALLOC 0x00000100 /* pmalloc area - see docs */ /* bits [20..32] reserved for arch specific ioremap internals */ /* diff --git a/lib/genalloc.c b/lib/genalloc.c index dde78307b093..7ba2ec96c360 100644 --- a/lib/genalloc.c +++ b/lib/genalloc.c @@ -519,6 +519,33 @@ void gen_pool_free(struct gen_pool *pool, unsigned long addr, size_t size) } EXPORT_SYMBOL(gen_pool_free); + +/** + * gen_pool_flush_chunk - drops all the allocations from a specific chunk + * @pool: the generic memory pool + * @chunk: the chunk to wipe clear + * + * This is meant to be called only while destroying a pool. It's up to the + * caller to avoid races, but really, at this point the pool should have + * already been retired and have become unavailable for any other sort of + * operation. + */ +void gen_pool_flush_chunk(struct gen_pool *pool, + struct gen_pool_chunk *chunk) +{ + size_t size; + + if (unlikely(!(pool && chunk))) + return; + + size = chunk->end_addr + 1 - chunk->start_addr; + memset(chunk->entries, 0, + DIV_ROUND_UP((size >> pool->min_alloc_order) * BITS_PER_ENTRY, + BITS_PER_BYTE)); + atomic_long_set(&chunk->avail, size); +} + + /** * gen_pool_for_each_chunk - call func for every chunk of generic memory pool + * @pool: the generic memory pool diff --git a/mm/Makefile b/mm/Makefile index e669f02c5a54..a6a47e1b6e66 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -65,6 +65,7 @@ obj-$(CONFIG_SPARSEMEM) += sparse.o obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o obj-$(CONFIG_SLOB) += slob.o obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o +obj-$(CONFIG_ARCH_HAS_SET_MEMORY) += pmalloc.o obj-$(CONFIG_KSM) += ksm.o obj-$(CONFIG_PAGE_POISONING) += page_poison.o obj-$(CONFIG_SLAB) += slab.o diff --git a/mm/pmalloc.c b/mm/pmalloc.c new file mode 100644 index 000000000000..11daca252589 --- /dev/null +++ b/mm/pmalloc.c @@ -0,0 +1,514 @@ +/* SPDX-License-Identifier: GPL-2.0 + * + * pmalloc.c: Protectable Memory Allocator + * + * (C) Copyright 2017 Huawei Technologies Co. Ltd. + * Author: Igor Stoppa <igor.stoppa@huawei.com> + */ + +#include <linux/printk.h> +#include <linux/init.h> +#include <linux/mm.h> +#include <linux/vmalloc.h> +#include <linux/genalloc.h> +#include <linux/kernel.h> +#include <linux/log2.h> +#include <linux/slab.h> +#include <linux/device.h> +#include <linux/atomic.h> +#include <linux/rculist.h> +#include <linux/set_memory.h> +#include <asm/cacheflush.h> +#include <asm/page.h> + +#include "pmalloc-selftest.h" + +/** + * pmalloc_data contains the data specific to a pmalloc pool, + * in a format compatible with the design of gen_alloc. + * Some of the fields are used for exposing the corresponding parameter + * to userspace, through sysfs. + */ +struct pmalloc_data { + struct gen_pool *pool; /* Link back to the associated pool. */ + bool protected; /* Status of the pool: RO or RW. */ + struct kobj_attribute attr_protected; /* Sysfs attribute. */ + struct kobj_attribute attr_avail; /* Sysfs attribute. */ + struct kobj_attribute attr_size; /* Sysfs attribute.
*/ + struct kobj_attribute attr_chunks; /* Sysfs attribute. */ + struct kobject *pool_kobject; + struct list_head node; /* list of pools */ +}; + +static LIST_HEAD(pmalloc_final_list); +static LIST_HEAD(pmalloc_tmp_list); +static struct list_head *pmalloc_list = &pmalloc_tmp_list; +static DEFINE_MUTEX(pmalloc_mutex); +static struct kobject *pmalloc_kobject; + +static ssize_t pmalloc_pool_show_protected(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_protected); + if (data->protected) + return sprintf(buf, "protected\n"); + else + return sprintf(buf, "unprotected\n"); +} + +static ssize_t pmalloc_pool_show_avail(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_avail); + return sprintf(buf, "%lu\n", + (unsigned long)gen_pool_avail(data->pool)); +} + +static ssize_t pmalloc_pool_show_size(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_size); + return sprintf(buf, "%lu\n", + (unsigned long)gen_pool_size(data->pool)); +} + +static void pool_chunk_number(struct gen_pool *pool, + struct gen_pool_chunk *chunk, void *data) +{ + unsigned long *counter = data; + + (*counter)++; +} + +static ssize_t pmalloc_pool_show_chunks(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + unsigned long chunks_num = 0; + + data = container_of(attr, struct pmalloc_data, attr_chunks); + gen_pool_for_each_chunk(data->pool, pool_chunk_number, &chunks_num); + return sprintf(buf, "%lu\n", chunks_num); +} + +/** + * Exposes the pool and its attributes through sysfs. + */ +static struct kobject *pmalloc_connect(struct pmalloc_data *data) +{ + const struct attribute *attrs[] = { + &data->attr_protected.attr, + &data->attr_avail.attr, + &data->attr_size.attr, + &data->attr_chunks.attr, + NULL + }; + struct kobject *kobj; + + kobj = kobject_create_and_add(data->pool->name, pmalloc_kobject); + if (unlikely(!kobj)) + return NULL; + + if (unlikely(sysfs_create_files(kobj, attrs) < 0)) { + kobject_put(kobj); + kobj = NULL; + } + return kobj; +} + +/** + * Removes the pool and its attributes from sysfs. + */ +static void pmalloc_disconnect(struct pmalloc_data *data, + struct kobject *kobj) +{ + const struct attribute *attrs[] = { + &data->attr_protected.attr, + &data->attr_avail.attr, + &data->attr_size.attr, + &data->attr_chunks.attr, + NULL + }; + + sysfs_remove_files(kobj, attrs); + kobject_put(kobj); +} + +/** + * Declares an attribute of the pool. 
+ */ + +#define pmalloc_attr_init(data, attr_name) \ +do { \ + sysfs_attr_init(&data->attr_##attr_name.attr); \ + data->attr_##attr_name.attr.name = #attr_name; \ + data->attr_##attr_name.attr.mode = VERIFY_OCTAL_PERMISSIONS(0400); \ + data->attr_##attr_name.show = pmalloc_pool_show_##attr_name; \ +} while (0) + +struct gen_pool *pmalloc_create_pool(const char *name, int min_alloc_order) +{ + struct gen_pool *pool; + const char *pool_name; + struct pmalloc_data *data; + + if (!name) { + WARN_ON(1); + return NULL; + } + + if (min_alloc_order < 0) + min_alloc_order = ilog2(sizeof(unsigned long)); + + pool = gen_pool_create(min_alloc_order, NUMA_NO_NODE); + if (unlikely(!pool)) + return NULL; + + mutex_lock(&pmalloc_mutex); + list_for_each_entry(data, pmalloc_list, node) + if (!strcmp(name, data->pool->name)) + goto same_name_err; + + pool_name = kstrdup(name, GFP_KERNEL); + if (unlikely(!pool_name)) + goto name_alloc_err; + + data = kzalloc(sizeof(struct pmalloc_data), GFP_KERNEL); + if (unlikely(!data)) + goto data_alloc_err; + + data->protected = false; + data->pool = pool; + pmalloc_attr_init(data, protected); + pmalloc_attr_init(data, avail); + pmalloc_attr_init(data, size); + pmalloc_attr_init(data, chunks); + pool->data = data; + pool->name = pool_name; + + list_add(&data->node, pmalloc_list); + if (pmalloc_list == &pmalloc_final_list) + data->pool_kobject = pmalloc_connect(data); + mutex_unlock(&pmalloc_mutex); + return pool; + +data_alloc_err: + kfree(pool_name); +name_alloc_err: +same_name_err: + mutex_unlock(&pmalloc_mutex); + gen_pool_destroy(pool); + return NULL; +} + +static inline int check_alloc_params(struct gen_pool *pool, size_t req_size) +{ + struct pmalloc_data *data; + unsigned int order; + + if (unlikely(!req_size || !pool)) + return -1; + + order = (unsigned int)pool->min_alloc_order; + data = pool->data; + + if (data == NULL) + return -1; + + if (unlikely(data->protected)) { + WARN_ON(1); + return -1; + } + return 0; +} + + +static inline bool chunk_tagging(void *chunk, bool tag) +{ + struct vm_struct *area; + struct page *page; + + if (!is_vmalloc_addr(chunk)) + return false; + + page = vmalloc_to_page(chunk); + if (unlikely(!page)) + return false; + + area = page->area; + if (tag) + area->flags |= VM_PMALLOC; + else + area->flags &= ~VM_PMALLOC; + return true; +} + + +static inline bool tag_chunk(void *chunk) +{ + return chunk_tagging(chunk, true); +} + + +static inline bool untag_chunk(void *chunk) +{ + return chunk_tagging(chunk, false); +} + +enum { + INVALID_PMALLOC_OBJECT = -1, + NOT_PMALLOC_OBJECT = 0, + VALID_PMALLOC_OBJECT = 1, +}; + +int is_pmalloc_object(const void *ptr, const unsigned long n) +{ + struct vm_struct *area; + struct page *page; + unsigned long area_start; + unsigned long area_end; + unsigned long object_start; + unsigned long object_end; + + + /* is_pmalloc_object gets called pretty late, so chances are high + * that the object is indeed of vmalloc type + */ + if (unlikely(!is_vmalloc_addr(ptr))) + return NOT_PMALLOC_OBJECT; + + page = vmalloc_to_page(ptr); + if (unlikely(!page)) + return NOT_PMALLOC_OBJECT; + + area = page->area; + + if (likely(!(area->flags & VM_PMALLOC))) + return NOT_PMALLOC_OBJECT; + + area_start = (unsigned long)area->addr; + area_end = area_start + area->nr_pages * PAGE_SIZE - 1; + object_start = (unsigned long)ptr; + object_end = object_start + n - 1; + + if (likely((area_start <= object_start) && + (object_end <= area_end))) + return VALID_PMALLOC_OBJECT; + else + return INVALID_PMALLOC_OBJECT; +} + + +bool 
pmalloc_prealloc(struct gen_pool *pool, size_t size) +{ + void *chunk; + size_t chunk_size; + bool add_error; + unsigned int order; + + if (check_alloc_params(pool, size)) + return false; + + order = (unsigned int)pool->min_alloc_order; + + /* Expand pool */ + chunk_size = roundup(size, PAGE_SIZE); + chunk = vmalloc(chunk_size); + if (unlikely(chunk == NULL)) + return false; + + /* Locking is already done inside gen_pool_add */ + add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size, + NUMA_NO_NODE); + if (unlikely(add_error != 0)) + goto abort; + + return true; +abort: + vfree_atomic(chunk); + return false; + +} + +void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp) +{ + void *chunk; + size_t chunk_size; + bool add_error; + unsigned long retval; + unsigned int order; + + if (check_alloc_params(pool, size)) + return NULL; + + order = (unsigned int)pool->min_alloc_order; + +retry_alloc_from_pool: + retval = gen_pool_alloc(pool, size); + if (retval) + goto return_allocation; + + if (unlikely((gfp & __GFP_ATOMIC))) { + if (unlikely((gfp & __GFP_NOFAIL))) + goto retry_alloc_from_pool; + else + return NULL; + } + + /* Expand pool */ + chunk_size = roundup(size, PAGE_SIZE); + chunk = vmalloc(chunk_size); + if (unlikely(!chunk)) { + if (unlikely((gfp & __GFP_NOFAIL))) + goto retry_alloc_from_pool; + else + return NULL; + } + if (unlikely(!tag_chunk(chunk))) + goto free; + + /* Locking is already done inside gen_pool_add */ + add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size, + NUMA_NO_NODE); + if (unlikely(add_error)) + goto abort; + + retval = gen_pool_alloc(pool, size); + if (retval) { +return_allocation: + *(size_t *)retval = size; + if (gfp & __GFP_ZERO) + memset((void *)retval, 0, size); + return (void *)retval; + } + /* Here there is no test for __GFP_NO_FAIL because, in case of + * concurrent allocation, one thread might add a chunk to the + * pool and this memory could be allocated by another thread, + * before the first thread gets a chance to use it. + * As long as vmalloc succeeds, it's ok to retry. 
+ */ + goto retry_alloc_from_pool; +abort: + untag_chunk(chunk); +free: + vfree_atomic(chunk); + return NULL; +} + +static void pmalloc_chunk_set_protection(struct gen_pool *pool, + struct gen_pool_chunk *chunk, + void *data) +{ + const bool *flag = data; + size_t chunk_size = chunk->end_addr + 1 - chunk->start_addr; + unsigned long pages = chunk_size / PAGE_SIZE; + + BUG_ON(chunk_size & (PAGE_SIZE - 1)); + + if (*flag) + set_memory_ro(chunk->start_addr, pages); + else + set_memory_rw(chunk->start_addr, pages); +} + +static int pmalloc_pool_set_protection(struct gen_pool *pool, bool protection) +{ + struct pmalloc_data *data; + struct gen_pool_chunk *chunk; + + if (unlikely(!pool)) + return -EINVAL; + + data = pool->data; + + if (unlikely(!data)) + return -EINVAL; + + if (unlikely(data->protected == protection)) { + WARN_ON(1); + return 0; + } + + data->protected = protection; + list_for_each_entry(chunk, &(pool)->chunks, next_chunk) + pmalloc_chunk_set_protection(pool, chunk, &protection); + return 0; +} + +int pmalloc_protect_pool(struct gen_pool *pool) +{ + return pmalloc_pool_set_protection(pool, true); +} + + +static void pmalloc_chunk_free(struct gen_pool *pool, + struct gen_pool_chunk *chunk, void *data) +{ + untag_chunk(chunk); + gen_pool_flush_chunk(pool, chunk); + vfree_atomic((void *)chunk->start_addr); +} + + +int pmalloc_destroy_pool(struct gen_pool *pool) +{ + struct pmalloc_data *data; + + if (unlikely(pool == NULL)) + return -EINVAL; + + data = pool->data; + + if (unlikely(data == NULL)) + return -EINVAL; + + mutex_lock(&pmalloc_mutex); + list_del(&data->node); + mutex_unlock(&pmalloc_mutex); + + if (likely(data->pool_kobject)) + pmalloc_disconnect(data, data->pool_kobject); + + pmalloc_pool_set_protection(pool, false); + gen_pool_for_each_chunk(pool, pmalloc_chunk_free, NULL); + gen_pool_destroy(pool); + kfree(data); + return 0; +} + +/** + * When the sysfs is ready to receive registrations, connect all the + * pools previously created. Also enable further pools to be connected + * right away. + */ +static int __init pmalloc_late_init(void) +{ + struct pmalloc_data *data, *n; + + pmalloc_kobject = kobject_create_and_add("pmalloc", kernel_kobj); + + mutex_lock(&pmalloc_mutex); + pmalloc_list = &pmalloc_final_list; + + if (likely(pmalloc_kobject != NULL)) { + list_for_each_entry_safe(data, n, &pmalloc_tmp_list, node) { + list_move(&data->node, &pmalloc_final_list); + pmalloc_connect(data); + } + } + mutex_unlock(&pmalloc_mutex); + pmalloc_selftest(); + return 0; +} +late_initcall(pmalloc_late_init); diff --git a/mm/usercopy.c b/mm/usercopy.c index a9852b24715d..c3b10298d808 100644 --- a/mm/usercopy.c +++ b/mm/usercopy.c @@ -15,6 +15,7 @@ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt #include <linux/mm.h> +#include <linux/pmalloc.h> #include <linux/slab.h> #include <linux/sched.h> #include <linux/sched/task.h> @@ -222,6 +223,7 @@ static inline const char *check_heap_object(const void *ptr, unsigned long n, void __check_object_size(const void *ptr, unsigned long n, bool to_user) { const char *err; + int retv; /* Skip all tests if size is zero. */ if (!n) return; @@ -229,12 +231,12 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user) /* Check for invalid addresses. */ err = check_bogus_address(ptr, n); - if (err) + if (unlikely(err)) goto report; /* Check for bad heap object. */ err = check_heap_object(ptr, n, to_user); - if (err) + if (unlikely(err)) goto report; /* Check for bad stack object.
*/ @@ -257,8 +259,23 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user) /* Check for object in kernel to avoid text exposure. */ err = check_kernel_text_object(ptr, n); - if (!err) - return; + if (unlikely(err)) + goto report; + + /* Check if object is from a pmalloc chunk. + */ + retv = is_pmalloc_object(ptr, n); + if (unlikely(retv)) { + if (unlikely(!to_user)) { + err = "<trying to write to pmalloc object>"; + goto report; + } + if (retv < 0) { + err = "<invalid pmalloc object>"; + goto report; + } + } + return; report: report_usercopy(ptr, n, to_user, err); -- 2.16.0 ^ permalink raw reply related [flat|nested] 84+ messages in thread
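[Editorial illustration] To make the intended lifecycle of the API above concrete, here is a minimal sketch of a hypothetical consumer module. The "foo" names and sizes are illustrative only and not part of the patch; error handling is reduced to the bare minimum. It uses only calls declared in include/linux/pmalloc.h: create the pool, allocate and initialize while the pool is still R/W, protect it, and eventually tear it down.

#include <linux/module.h>
#include <linux/pmalloc.h>

static struct gen_pool *foo_pool;
static unsigned long *foo_cfg;

static int __init foo_init(void)
{
	/* New, empty, R/W pool, with the default minimum allocation order. */
	foo_pool = pmalloc_create_pool("foo", PMALLOC_DEFAULT_ALLOC_ORDER);
	if (!foo_pool)
		return -ENOMEM;

	/* Allocate and initialize the data while the pool is still R/W. */
	foo_cfg = pzalloc(foo_pool, 16 * sizeof(*foo_cfg), GFP_KERNEL);
	if (!foo_cfg) {
		pmalloc_destroy_pool(foo_pool);
		return -ENOMEM;
	}
	foo_cfg[0] = 42;

	/* From here on the pool is R/O; further pmalloc calls will fail. */
	pmalloc_protect_pool(foo_pool);
	return 0;
}

static void __exit foo_exit(void)
{
	/* On a protected pool, pfree only marks the memory as unused... */
	pfree(foo_pool, foo_cfg);
	/* ...it is actually released when the pool itself is destroyed. */
	pmalloc_destroy_pool(foo_pool);
}

module_init(foo_init);
module_exit(foo_exit);
MODULE_LICENSE("GPL");

As the header comment warns, in a threaded scenario the caller is responsible for serializing these steps against each other; in a plain init/exit sequence like the above, no extra locking is needed.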
* Re: [PATCH 4/6] Protectable Memory 2018-02-04 16:47 ` Igor Stoppa (?) @ 2018-02-04 22:06 ` Randy Dunlap -1 siblings, 0 replies; 84+ messages in thread From: Randy Dunlap @ 2018-02-04 22:06 UTC (permalink / raw) To: Igor Stoppa, jglisse, keescook, mhocko, labbott, hch, willy Cc: cl, linux-security-module, linux-mm, linux-kernel, kernel-hardening On 02/04/2018 08:47 AM, Igor Stoppa wrote: > The MMU available in many systems running Linux can often provide R/O > protection to the memory pages it handles. > > However, the MMU-based protection works efficiently only when said pages > contain exclusively data that will not need further modifications. > > Statically allocated variables can be segregated into a dedicated > section, but this does not sit very well with dynamically allocated > ones. > > Dynamic allocation does not provide, currently, any means for grouping > variables in memory pages that would contain exclusively data suitable > for conversion to read only access mode. > > The allocator here provided (pmalloc - protectable memory allocator) > introduces the concept of pools of protectable memory. > > A module can request a pool and then refer any allocation request to the > pool handler it has received. > > Once all the chunks of memory associated to a specific pool are > initialized, the pool can be protected. > > After this point, the pool can only be destroyed (it is up to the module > to avoid any further references to the memory from the pool, after > the destruction is invoked). > > The latter case is mainly meant for releasing memory, when a module is > unloaded. > > A module can have as many pools as needed, for example to support the > protection of data that is initialized in sufficiently distinct phases. > > Signed-off-by: Igor Stoppa <igor.stoppa@huawei.com> > --- > include/linux/genalloc.h | 3 + > include/linux/pmalloc.h | 213 ++++++++++++++++++++ > include/linux/vmalloc.h | 1 + > lib/genalloc.c | 27 +++ > mm/Makefile | 1 + > mm/pmalloc.c | 514 +++++++++++++++++++++++++++++++++++++++++++++++ > mm/usercopy.c | 25 ++- > 7 files changed, 780 insertions(+), 4 deletions(-) > create mode 100644 include/linux/pmalloc.h > create mode 100644 mm/pmalloc.c > > diff --git a/include/linux/genalloc.h b/include/linux/genalloc.h > index dcaa33e74b1c..b6c4cea9fbd8 100644 > --- a/include/linux/genalloc.h > +++ b/include/linux/genalloc.h > @@ -121,6 +121,9 @@ extern unsigned long gen_pool_alloc_algo(struct gen_pool *, size_t, > extern void *gen_pool_dma_alloc(struct gen_pool *pool, size_t size, > dma_addr_t *dma); > extern void gen_pool_free(struct gen_pool *, unsigned long, size_t); > + > +extern void gen_pool_flush_chunk(struct gen_pool *pool, > + struct gen_pool_chunk *chunk); > extern void gen_pool_for_each_chunk(struct gen_pool *, > void (*)(struct gen_pool *, struct gen_pool_chunk *, void *), void *); > extern size_t gen_pool_avail(struct gen_pool *); > diff --git a/include/linux/pmalloc.h b/include/linux/pmalloc.h > new file mode 100644 > index 000000000000..5fa8a78be819 > --- /dev/null > +++ b/include/linux/pmalloc.h > @@ -0,0 +1,213 @@ > +/* SPDX-License-Identifier: GPL-2.0 > + * > + * pmalloc.h: Header for Protectable Memory Allocator > + * > + * (C) Copyright 2017 Huawei Technologies Co. Ltd. 
> + * Author: Igor Stoppa <igor.stoppa@huawei.com> > + */ > + > +#ifndef _PMALLOC_H > +#define _PMALLOC_H use _LINUX_PMALLOC_H_ > + > + > +#include <linux/genalloc.h> > +#include <linux/string.h> > + > +#define PMALLOC_DEFAULT_ALLOC_ORDER (-1) > + > +/* > + * Library for dynamic allocation of pools of memory that can be, > + * after initialization, marked as read-only. > + * > + * This is intended to complement __read_only_after_init, for those cases > + * where either it is not possible to know the initialization value before > + * init is completed, or the amount of data is variable and can be > + * determined only at run-time. > + * > + * ***WARNING*** > + * The user of the API is expected to synchronize: > + * 1) allocation, > + * 2) writes to the allocated memory, > + * 3) write protection of the pool, > + * 4) freeing of the allocated memory, and > + * 5) destruction of the pool. > + * > + * For a non-threaded scenario, this type of locking is not even required. > + * > + * Even if the library were to provide support for locking, point 2) > + * would still depend on the user taking the lock. > + */ > + > + > +/** > + * pmalloc_create_pool - create a new protectable memory pool - Drop trailing " -". > + * @name: the name of the pool, must be unique Is that enforced? Will return NULL if @name is duplicated? > + * @min_alloc_order: log2 of the minimum allocation size obtainable > + * from the pool > + * > + * Creates a new (empty) memory pool for allocation of protectable > + * memory. Memory will be allocated upon request (through pmalloc). > + * > + * Returns a pointer to the new pool upon success, otherwise a NULL. > + */ > +struct gen_pool *pmalloc_create_pool(const char *name, > + int min_alloc_order); > + > + > +int is_pmalloc_object(const void *ptr, const unsigned long n); > + > +/** > + * pmalloc_prealloc - tries to allocate a memory chunk of the requested size > + * @pool: handler to the pool to be used for memory allocation handle (I think) > + * @size: amount of memory (in bytes) requested > + * > + * Prepares a chunk of the requested size. > + * This is intended to both minimize latency in later memory requests and > + * avoid sleping during allocation. sleeping > + * Memory allocated with prealloc is stored in one single chunk, as with pmalloc_prealloc() > + * opposite to what is allocated on-demand when pmalloc runs out of free opposed to > + * space already existing in the pool and has to invoke vmalloc. > + * > + * Returns true if the vmalloc call was successful, false otherwise. Where is the allocated memory (pointer)? I.e., how does the caller know where that memory is? Oh, that memory isn't yet available to the caller until it calls pmalloc(), right? > + */ > +bool pmalloc_prealloc(struct gen_pool *pool, size_t size); > + > +/** > + * pmalloc - allocate protectable memory from a pool > + * @pool: handler to the pool to be used for memory allocation handle (?) > + * @size: amount of memory (in bytes) requested > + * @gfp: flags for page allocation > + * > + * Allocates memory from an unprotected pool. If the pool doesn't have > + * enough memory, and the request did not include GFP_ATOMIC, an attempt > + * is made to add a new chunk of memory to the pool > + * (a multiple of PAGE_SIZE), in order to fit the new request. fill What if @size is > PAGE_SIZE? > + * Otherwise, NULL is returned. > + * > + * Returns the pointer to the memory requested upon success, > + * NULL otherwise (either no memory available or pool already read-only). 
It would be good to use the * Return: kernel-doc notation for return values. > + */ > +void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp); > + > + > +/** > + * pzalloc - zero-initialized version of pmalloc > + * @pool: handler to the pool to be used for memory allocation handle (?) > + * @size: amount of memory (in bytes) requested > + * @gfp: flags for page allocation > + * > + * Executes pmalloc, initializing the memory requested to 0, > + * before returning the pointer to it. > + * > + * Returns the pointer to the zeroed memory requested, upon success, > + * NULL otherwise (either no memory available or pool already read-only). > + */ > +static inline void *pzalloc(struct gen_pool *pool, size_t size, gfp_t gfp) > +{ > + return pmalloc(pool, size, gfp | __GFP_ZERO); > +} > + > +/** > + * pmalloc_array - allocates an array according to the parameters > + * @pool: handler to the pool to be used for memory allocation handle > + * @n: number of elements in the array > + * @size: amount of memory (in bytes) requested for each element > + * @flags: flags for page allocation > + * > + * Executes pmalloc, if it has a chance to succeed. > + * > + * Returns either NULL or the pmalloc result. > + */ > +static inline void *pmalloc_array(struct gen_pool *pool, size_t n, > + size_t size, gfp_t flags) > +{ > + if (unlikely(!(pool && n && size))) > + return NULL; > + return pmalloc(pool, n * size, flags); > +} > + > +/** > + * pcalloc - allocates a 0-initialized array according to the parameters > + * @pool: handler to the pool to be used for memory allocation handle > + * @n: number of elements in the array > + * @size: amount of memory (in bytes) requested > + * @flags: flags for page allocation > + * > + * Executes pmalloc_array, if it has a chance to succeed. > + * > + * Returns either NULL or the pmalloc result. > + */ > +static inline void *pcalloc(struct gen_pool *pool, size_t n, > + size_t size, gfp_t flags) > +{ > + return pmalloc_array(pool, n, size, flags | __GFP_ZERO); > +} > + > +/** > + * pstrdup - duplicate a string, using pmalloc as allocator > + * @pool: handler to the pool to be used for memory allocation handle > + * @s: string to duplicate > + * @gfp: flags for page allocation > + * > + * Generates a copy of the given string, allocating sufficient memory > + * from the given pmalloc pool. > + * > + * Returns a pointer to the replica, NULL in case of recoverable error. > + */ > +static inline char *pstrdup(struct gen_pool *pool, const char *s, gfp_t gfp) > +{ > + size_t len; > + char *buf; > + > + if (unlikely(pool == NULL || s == NULL)) > + return NULL; > + > + len = strlen(s) + 1; > + buf = pmalloc(pool, len, gfp); > + if (likely(buf)) > + strncpy(buf, s, len); > + return buf; > +} > + > +/** > + * pmalloc_protect_pool - turn a read/write pool read-only > + * @pool: the pool to protect > + * > + * Write-protects all the memory chunks assigned to the pool. > + * This prevents any further allocation. > + * > + * Returns 0 upon success, -EINVAL in abnormal cases. > + */ > +int pmalloc_protect_pool(struct gen_pool *pool); > + > +/** > + * pfree - mark as unused memory that was previously in use > + * @pool: handler to the pool to be used for memory allocation handle > + * @addr: the beginning of the memory area to be freed > + * > + * The behavior of pfree is different, depending on the state of the > + * protection. > + * If the pool is not yet protected, the memory is marked as unused and > + * will be availabel for further allocations. 
available > + * If the pool is already protected, the memory is marked as unused, but > + * it will still be impossible to perform further allocation, because of > + * the existing protection. > + * The freed memory, in this case, will be truly released only when the > + * pool is destroyed. > + */ > +static inline void pfree(struct gen_pool *pool, const void *addr) > +{ > + gen_pool_free(pool, (unsigned long)addr, 0); > +} > + > +/** > + * pmalloc_destroy_pool - destroys a pool and all the associated memory > + * @pool: the pool to destroy > + * > + * All the memory that was allocated through pmalloc in the pool will be freed. > + * > + * Returns 0 upon success, -EINVAL in abnormal cases. > + */ > +int pmalloc_destroy_pool(struct gen_pool *pool); > + > +#endif > diff --git a/mm/pmalloc.c b/mm/pmalloc.c > new file mode 100644 > index 000000000000..11daca252589 > --- /dev/null > +++ b/mm/pmalloc.c > @@ -0,0 +1,514 @@ > +/* SPDX-License-Identifier: GPL-2.0 > + * > + * pmalloc.c: Protectable Memory Allocator > + * > + * (C) Copyright 2017 Huawei Technologies Co. Ltd. > + * Author: Igor Stoppa <igor.stoppa@huawei.com> > + */ > + > +#include <linux/printk.h> > +#include <linux/init.h> > +#include <linux/mm.h> > +#include <linux/vmalloc.h> > +#include <linux/genalloc.h> > +#include <linux/kernel.h> > +#include <linux/log2.h> > +#include <linux/slab.h> > +#include <linux/device.h> > +#include <linux/atomic.h> > +#include <linux/rculist.h> > +#include <linux/set_memory.h> > +#include <asm/cacheflush.h> > +#include <asm/page.h> > + > +#include "pmalloc-selftest.h" > + > +/** /** means that the following comments are kernel-doc notation, but these comments are not, so just use /* there, please. > + * pmalloc_data contains the data specific to a pmalloc pool, > + * in a format compatible with the design of gen_alloc. > + * Some of the fields are used for exposing the corresponding parameter > + * to userspace, through sysfs. > + */ > +struct pmalloc_data { > + struct gen_pool *pool; /* Link back to the associated pool. */ > + bool protected; /* Status of the pool: RO or RW. */ > + struct kobj_attribute attr_protected; /* Sysfs attribute. */ > + struct kobj_attribute attr_avail; /* Sysfs attribute. */ > + struct kobj_attribute attr_size; /* Sysfs attribute. */ > + struct kobj_attribute attr_chunks; /* Sysfs attribute. */ > + struct kobject *pool_kobject; > + struct list_head node; /* list of pools */ > +}; > + > +static LIST_HEAD(pmalloc_final_list); > +static LIST_HEAD(pmalloc_tmp_list); > +static struct list_head *pmalloc_list = &pmalloc_tmp_list; > +static DEFINE_MUTEX(pmalloc_mutex); > +static struct kobject *pmalloc_kobject; [snip] > +/** Just use /* since this is not kernel-doc notation. > + * Exposes the pool and its attributes through sysfs. > + */ > +static struct kobject *pmalloc_connect(struct pmalloc_data *data) > +{ > + const struct attribute *attrs[] = { > + &data->attr_protected.attr, > + &data->attr_avail.attr, > + &data->attr_size.attr, > + &data->attr_chunks.attr, > + NULL > + }; > + struct kobject *kobj; > + > + kobj = kobject_create_and_add(data->pool->name, pmalloc_kobject); > + if (unlikely(!kobj)) > + return NULL; > + > + if (unlikely(sysfs_create_files(kobj, attrs) < 0)) { > + kobject_put(kobj); > + kobj = NULL; > + } > + return kobj; > +} > + > +/** Ditto. > + * Removes the pool and its attributes from sysfs. 
> + */ > +static void pmalloc_disconnect(struct pmalloc_data *data, > + struct kobject *kobj) > +{ > + const struct attribute *attrs[] = { > + &data->attr_protected.attr, > + &data->attr_avail.attr, > + &data->attr_size.attr, > + &data->attr_chunks.attr, > + NULL > + }; > + > + sysfs_remove_files(kobj, attrs); > + kobject_put(kobj); > +} > + > +/** Same. > + * Declares an attribute of the pool. > + */ > + > +#define pmalloc_attr_init(data, attr_name) \ > +do { \ > + sysfs_attr_init(&data->attr_##attr_name.attr); \ > + data->attr_##attr_name.attr.name = #attr_name; \ > + data->attr_##attr_name.attr.mode = VERIFY_OCTAL_PERMISSIONS(0400); \ > + data->attr_##attr_name.show = pmalloc_pool_show_##attr_name; \ > +} while (0) [snip] > +int is_pmalloc_object(const void *ptr, const unsigned long n) > +{ > + struct vm_struct *area; > + struct page *page; > + unsigned long area_start; > + unsigned long area_end; > + unsigned long object_start; > + unsigned long object_end; > + > + > + /* is_pmalloc_object gets called pretty late, so chances are high > + * that the object is indeed of vmalloc type > + */ Multi-line comment style is /* * comment1 * comment..N */ > + if (unlikely(!is_vmalloc_addr(ptr))) > + return NOT_PMALLOC_OBJECT; > + > + page = vmalloc_to_page(ptr); > + if (unlikely(!page)) > + return NOT_PMALLOC_OBJECT; > + > + area = page->area; > + > + if (likely(!(area->flags & VM_PMALLOC))) > + return NOT_PMALLOC_OBJECT; > + > + area_start = (unsigned long)area->addr; > + area_end = area_start + area->nr_pages * PAGE_SIZE - 1; > + object_start = (unsigned long)ptr; > + object_end = object_start + n - 1; > + > + if (likely((area_start <= object_start) && > + (object_end <= area_end))) > + return VALID_PMALLOC_OBJECT; > + else > + return INVALID_PMALLOC_OBJECT; > +} > + > + > +bool pmalloc_prealloc(struct gen_pool *pool, size_t size) > +{ > + void *chunk; > + size_t chunk_size; > + bool add_error; > + unsigned int order; > + > + if (check_alloc_params(pool, size)) > + return false; > + > + order = (unsigned int)pool->min_alloc_order; > + > + /* Expand pool */ > + chunk_size = roundup(size, PAGE_SIZE); > + chunk = vmalloc(chunk_size); > + if (unlikely(chunk == NULL)) > + return false; > + > + /* Locking is already done inside gen_pool_add */ > + add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size, > + NUMA_NO_NODE); > + if (unlikely(add_error != 0)) > + goto abort; > + > + return true; > +abort: > + vfree_atomic(chunk); > + return false; > + > +} > + > +void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp) > +{ > + void *chunk; > + size_t chunk_size; > + bool add_error; > + unsigned long retval; > + unsigned int order; > + > + if (check_alloc_params(pool, size)) > + return NULL; > + > + order = (unsigned int)pool->min_alloc_order; > + > +retry_alloc_from_pool: > + retval = gen_pool_alloc(pool, size); > + if (retval) > + goto return_allocation; > + > + if (unlikely((gfp & __GFP_ATOMIC))) { > + if (unlikely((gfp & __GFP_NOFAIL))) > + goto retry_alloc_from_pool; > + else > + return NULL; > + } > + > + /* Expand pool */ > + chunk_size = roundup(size, PAGE_SIZE); > + chunk = vmalloc(chunk_size); > + if (unlikely(!chunk)) { > + if (unlikely((gfp & __GFP_NOFAIL))) > + goto retry_alloc_from_pool; > + else > + return NULL; > + } > + if (unlikely(!tag_chunk(chunk))) > + goto free; > + > + /* Locking is already done inside gen_pool_add */ > + add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size, > + NUMA_NO_NODE); > + if (unlikely(add_error)) > + goto abort; > + > + retval = 
gen_pool_alloc(pool, size); > + if (retval) { > +return_allocation: > + *(size_t *)retval = size; > + if (gfp & __GFP_ZERO) > + memset((void *)retval, 0, size); > + return (void *)retval; > + } > + /* Here there is no test for __GFP_NO_FAIL because, in case of > + * concurrent allocation, one thread might add a chunk to the > + * pool and this memory could be allocated by another thread, > + * before the first thread gets a chance to use it. > + * As long as vmalloc succeeds, it's ok to retry. > + */ Fix multi-line comment style. > + goto retry_alloc_from_pool; > +abort: > + untag_chunk(chunk); > +free: > + vfree_atomic(chunk); > + return NULL; > +} [snip] > +/** Just use /* > + * When the sysfs is ready to receive registrations, connect all the > + * pools previously created. Also enable further pools to be connected > + * right away. > + */ > +static int __init pmalloc_late_init(void) > +{ > + struct pmalloc_data *data, *n; > + > + pmalloc_kobject = kobject_create_and_add("pmalloc", kernel_kobj); > + > + mutex_lock(&pmalloc_mutex); > + pmalloc_list = &pmalloc_final_list; > + > + if (likely(pmalloc_kobject != NULL)) { > + list_for_each_entry_safe(data, n, &pmalloc_tmp_list, node) { > + list_move(&data->node, &pmalloc_final_list); > + pmalloc_connect(data); > + } > + } > + mutex_unlock(&pmalloc_mutex); > + pmalloc_selftest(); > + return 0; > +} > +late_initcall(pmalloc_late_init); > diff --git a/mm/usercopy.c b/mm/usercopy.c > index a9852b24715d..c3b10298d808 100644 > --- a/mm/usercopy.c > +++ b/mm/usercopy.c > @@ -15,6 +15,7 @@ > #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt > > #include <linux/mm.h> > +#include <linux/pmalloc.h> > #include <linux/slab.h> > #include <linux/sched.h> > #include <linux/sched/task.h> > @@ -222,6 +223,7 @@ static inline const char *check_heap_object(const void *ptr, unsigned long n, > void __check_object_size(const void *ptr, unsigned long n, bool to_user) > { > const char *err; > + int retv; > > /* Skip all tests if size is zero. */ > if (!n) > @@ -229,12 +231,12 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user) > > /* Check for invalid addresses. */ > err = check_bogus_address(ptr, n); > - if (err) > + if (unlikely(err)) > goto report; > > /* Check for bad heap object. */ > err = check_heap_object(ptr, n, to_user); > - if (err) > + if (unlikely(err)) > goto report; > > /* Check for bad stack object. */ > @@ -257,8 +259,23 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user) > > /* Check for object in kernel to avoid text exposure. */ > err = check_kernel_text_object(ptr, n); > - if (!err) > - return; > + if (unlikely(err)) > + goto report; > + > + /* Check if object is from a pmalloc chunk. > + */ Use kernel multi-line comment style. > + retv = is_pmalloc_object(ptr, n); > + if (unlikely(retv)) { > + if (unlikely(!to_user)) { > + err = "<trying to write to pmalloc object>"; > + goto report; > + } > + if (retv < 0) { > + err = "<invalid pmalloc object>"; > + goto report; > + } > + } > + return; > > report: > report_usercopy(ptr, n, to_user, err); > -- ~Randy ^ permalink raw reply [flat|nested] 84+ messages in thread
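[Editorial illustration] Pulling together the style remarks from the review above ("handle" instead of "handler", kernel-doc "Return:" notation, and the kernel's multi-line comment style), the kernel-doc for pmalloc() could end up roughly as follows. This is one possible rewording shown only to illustrate the requested changes; the final text is the author's call.

/**
 * pmalloc - allocate protectable memory from a pool
 * @pool: handle to the pool to be used for memory allocation
 * @size: amount of memory (in bytes) requested
 * @gfp: flags for page allocation
 *
 * Allocates memory from an unprotected pool. If the pool does not have
 * enough memory, and the request did not include GFP_ATOMIC, an attempt
 * is made to add to the pool a new chunk of memory (a multiple of
 * PAGE_SIZE), big enough to fit the request.
 *
 * Return: pointer to the memory requested upon success,
 * NULL otherwise (either no memory available or pool already read-only).
 */

Likewise, an in-function comment such as the one in is_pmalloc_object() would become:

	/*
	 * is_pmalloc_object() gets called pretty late, so chances are high
	 * that the object is indeed of vmalloc type.
	 */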
> + * @size: amount of memory (in bytes) requested > + * @gfp: flags for page allocation > + * > + * Executes pmalloc, initializing the memory requested to 0, > + * before returning the pointer to it. > + * > + * Returns the pointer to the zeroed memory requested, upon success, > + * NULL otherwise (either no memory available or pool already read-only). > + */ > +static inline void *pzalloc(struct gen_pool *pool, size_t size, gfp_t gfp) > +{ > + return pmalloc(pool, size, gfp | __GFP_ZERO); > +} > + > +/** > + * pmalloc_array - allocates an array according to the parameters > + * @pool: handler to the pool to be used for memory allocation handle > + * @n: number of elements in the array > + * @size: amount of memory (in bytes) requested for each element > + * @flags: flags for page allocation > + * > + * Executes pmalloc, if it has a chance to succeed. > + * > + * Returns either NULL or the pmalloc result. > + */ > +static inline void *pmalloc_array(struct gen_pool *pool, size_t n, > + size_t size, gfp_t flags) > +{ > + if (unlikely(!(pool && n && size))) > + return NULL; > + return pmalloc(pool, n * size, flags); > +} > + > +/** > + * pcalloc - allocates a 0-initialized array according to the parameters > + * @pool: handler to the pool to be used for memory allocation handle > + * @n: number of elements in the array > + * @size: amount of memory (in bytes) requested > + * @flags: flags for page allocation > + * > + * Executes pmalloc_array, if it has a chance to succeed. > + * > + * Returns either NULL or the pmalloc result. > + */ > +static inline void *pcalloc(struct gen_pool *pool, size_t n, > + size_t size, gfp_t flags) > +{ > + return pmalloc_array(pool, n, size, flags | __GFP_ZERO); > +} > + > +/** > + * pstrdup - duplicate a string, using pmalloc as allocator > + * @pool: handler to the pool to be used for memory allocation handle > + * @s: string to duplicate > + * @gfp: flags for page allocation > + * > + * Generates a copy of the given string, allocating sufficient memory > + * from the given pmalloc pool. > + * > + * Returns a pointer to the replica, NULL in case of recoverable error. > + */ > +static inline char *pstrdup(struct gen_pool *pool, const char *s, gfp_t gfp) > +{ > + size_t len; > + char *buf; > + > + if (unlikely(pool == NULL || s == NULL)) > + return NULL; > + > + len = strlen(s) + 1; > + buf = pmalloc(pool, len, gfp); > + if (likely(buf)) > + strncpy(buf, s, len); > + return buf; > +} > + > +/** > + * pmalloc_protect_pool - turn a read/write pool read-only > + * @pool: the pool to protect > + * > + * Write-protects all the memory chunks assigned to the pool. > + * This prevents any further allocation. > + * > + * Returns 0 upon success, -EINVAL in abnormal cases. > + */ > +int pmalloc_protect_pool(struct gen_pool *pool); > + > +/** > + * pfree - mark as unused memory that was previously in use > + * @pool: handler to the pool to be used for memory allocation handle > + * @addr: the beginning of the memory area to be freed > + * > + * The behavior of pfree is different, depending on the state of the > + * protection. > + * If the pool is not yet protected, the memory is marked as unused and > + * will be availabel for further allocations. available > + * If the pool is already protected, the memory is marked as unused, but > + * it will still be impossible to perform further allocation, because of > + * the existing protection. > + * The freed memory, in this case, will be truly released only when the > + * pool is destroyed. 
> + */ > +static inline void pfree(struct gen_pool *pool, const void *addr) > +{ > + gen_pool_free(pool, (unsigned long)addr, 0); > +} > + > +/** > + * pmalloc_destroy_pool - destroys a pool and all the associated memory > + * @pool: the pool to destroy > + * > + * All the memory that was allocated through pmalloc in the pool will be freed. > + * > + * Returns 0 upon success, -EINVAL in abnormal cases. > + */ > +int pmalloc_destroy_pool(struct gen_pool *pool); > + > +#endif > diff --git a/mm/pmalloc.c b/mm/pmalloc.c > new file mode 100644 > index 000000000000..11daca252589 > --- /dev/null > +++ b/mm/pmalloc.c > @@ -0,0 +1,514 @@ > +/* SPDX-License-Identifier: GPL-2.0 > + * > + * pmalloc.c: Protectable Memory Allocator > + * > + * (C) Copyright 2017 Huawei Technologies Co. Ltd. > + * Author: Igor Stoppa <igor.stoppa@huawei.com> > + */ > + > +#include <linux/printk.h> > +#include <linux/init.h> > +#include <linux/mm.h> > +#include <linux/vmalloc.h> > +#include <linux/genalloc.h> > +#include <linux/kernel.h> > +#include <linux/log2.h> > +#include <linux/slab.h> > +#include <linux/device.h> > +#include <linux/atomic.h> > +#include <linux/rculist.h> > +#include <linux/set_memory.h> > +#include <asm/cacheflush.h> > +#include <asm/page.h> > + > +#include "pmalloc-selftest.h" > + > +/** /** means that the following comments are kernel-doc notation, but these comments are not, so just use /* there, please. > + * pmalloc_data contains the data specific to a pmalloc pool, > + * in a format compatible with the design of gen_alloc. > + * Some of the fields are used for exposing the corresponding parameter > + * to userspace, through sysfs. > + */ > +struct pmalloc_data { > + struct gen_pool *pool; /* Link back to the associated pool. */ > + bool protected; /* Status of the pool: RO or RW. */ > + struct kobj_attribute attr_protected; /* Sysfs attribute. */ > + struct kobj_attribute attr_avail; /* Sysfs attribute. */ > + struct kobj_attribute attr_size; /* Sysfs attribute. */ > + struct kobj_attribute attr_chunks; /* Sysfs attribute. */ > + struct kobject *pool_kobject; > + struct list_head node; /* list of pools */ > +}; > + > +static LIST_HEAD(pmalloc_final_list); > +static LIST_HEAD(pmalloc_tmp_list); > +static struct list_head *pmalloc_list = &pmalloc_tmp_list; > +static DEFINE_MUTEX(pmalloc_mutex); > +static struct kobject *pmalloc_kobject; [snip] > +/** Just use /* since this is not kernel-doc notation. > + * Exposes the pool and its attributes through sysfs. > + */ > +static struct kobject *pmalloc_connect(struct pmalloc_data *data) > +{ > + const struct attribute *attrs[] = { > + &data->attr_protected.attr, > + &data->attr_avail.attr, > + &data->attr_size.attr, > + &data->attr_chunks.attr, > + NULL > + }; > + struct kobject *kobj; > + > + kobj = kobject_create_and_add(data->pool->name, pmalloc_kobject); > + if (unlikely(!kobj)) > + return NULL; > + > + if (unlikely(sysfs_create_files(kobj, attrs) < 0)) { > + kobject_put(kobj); > + kobj = NULL; > + } > + return kobj; > +} > + > +/** Ditto. > + * Removes the pool and its attributes from sysfs. > + */ > +static void pmalloc_disconnect(struct pmalloc_data *data, > + struct kobject *kobj) > +{ > + const struct attribute *attrs[] = { > + &data->attr_protected.attr, > + &data->attr_avail.attr, > + &data->attr_size.attr, > + &data->attr_chunks.attr, > + NULL > + }; > + > + sysfs_remove_files(kobj, attrs); > + kobject_put(kobj); > +} > + > +/** Same. > + * Declares an attribute of the pool. 
> + */ > + > +#define pmalloc_attr_init(data, attr_name) \ > +do { \ > + sysfs_attr_init(&data->attr_##attr_name.attr); \ > + data->attr_##attr_name.attr.name = #attr_name; \ > + data->attr_##attr_name.attr.mode = VERIFY_OCTAL_PERMISSIONS(0400); \ > + data->attr_##attr_name.show = pmalloc_pool_show_##attr_name; \ > +} while (0) [snip] > +int is_pmalloc_object(const void *ptr, const unsigned long n) > +{ > + struct vm_struct *area; > + struct page *page; > + unsigned long area_start; > + unsigned long area_end; > + unsigned long object_start; > + unsigned long object_end; > + > + > + /* is_pmalloc_object gets called pretty late, so chances are high > + * that the object is indeed of vmalloc type > + */ Multi-line comment style is /* * comment1 * comment..N */ > + if (unlikely(!is_vmalloc_addr(ptr))) > + return NOT_PMALLOC_OBJECT; > + > + page = vmalloc_to_page(ptr); > + if (unlikely(!page)) > + return NOT_PMALLOC_OBJECT; > + > + area = page->area; > + > + if (likely(!(area->flags & VM_PMALLOC))) > + return NOT_PMALLOC_OBJECT; > + > + area_start = (unsigned long)area->addr; > + area_end = area_start + area->nr_pages * PAGE_SIZE - 1; > + object_start = (unsigned long)ptr; > + object_end = object_start + n - 1; > + > + if (likely((area_start <= object_start) && > + (object_end <= area_end))) > + return VALID_PMALLOC_OBJECT; > + else > + return INVALID_PMALLOC_OBJECT; > +} > + > + > +bool pmalloc_prealloc(struct gen_pool *pool, size_t size) > +{ > + void *chunk; > + size_t chunk_size; > + bool add_error; > + unsigned int order; > + > + if (check_alloc_params(pool, size)) > + return false; > + > + order = (unsigned int)pool->min_alloc_order; > + > + /* Expand pool */ > + chunk_size = roundup(size, PAGE_SIZE); > + chunk = vmalloc(chunk_size); > + if (unlikely(chunk == NULL)) > + return false; > + > + /* Locking is already done inside gen_pool_add */ > + add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size, > + NUMA_NO_NODE); > + if (unlikely(add_error != 0)) > + goto abort; > + > + return true; > +abort: > + vfree_atomic(chunk); > + return false; > + > +} > + > +void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp) > +{ > + void *chunk; > + size_t chunk_size; > + bool add_error; > + unsigned long retval; > + unsigned int order; > + > + if (check_alloc_params(pool, size)) > + return NULL; > + > + order = (unsigned int)pool->min_alloc_order; > + > +retry_alloc_from_pool: > + retval = gen_pool_alloc(pool, size); > + if (retval) > + goto return_allocation; > + > + if (unlikely((gfp & __GFP_ATOMIC))) { > + if (unlikely((gfp & __GFP_NOFAIL))) > + goto retry_alloc_from_pool; > + else > + return NULL; > + } > + > + /* Expand pool */ > + chunk_size = roundup(size, PAGE_SIZE); > + chunk = vmalloc(chunk_size); > + if (unlikely(!chunk)) { > + if (unlikely((gfp & __GFP_NOFAIL))) > + goto retry_alloc_from_pool; > + else > + return NULL; > + } > + if (unlikely(!tag_chunk(chunk))) > + goto free; > + > + /* Locking is already done inside gen_pool_add */ > + add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size, > + NUMA_NO_NODE); > + if (unlikely(add_error)) > + goto abort; > + > + retval = gen_pool_alloc(pool, size); > + if (retval) { > +return_allocation: > + *(size_t *)retval = size; > + if (gfp & __GFP_ZERO) > + memset((void *)retval, 0, size); > + return (void *)retval; > + } > + /* Here there is no test for __GFP_NO_FAIL because, in case of > + * concurrent allocation, one thread might add a chunk to the > + * pool and this memory could be allocated by another thread, > + * 
before the first thread gets a chance to use it. > + * As long as vmalloc succeeds, it's ok to retry. > + */ Fix multi-line comment style. > + goto retry_alloc_from_pool; > +abort: > + untag_chunk(chunk); > +free: > + vfree_atomic(chunk); > + return NULL; > +} [snip] > +/** Just use /* > + * When the sysfs is ready to receive registrations, connect all the > + * pools previously created. Also enable further pools to be connected > + * right away. > + */ > +static int __init pmalloc_late_init(void) > +{ > + struct pmalloc_data *data, *n; > + > + pmalloc_kobject = kobject_create_and_add("pmalloc", kernel_kobj); > + > + mutex_lock(&pmalloc_mutex); > + pmalloc_list = &pmalloc_final_list; > + > + if (likely(pmalloc_kobject != NULL)) { > + list_for_each_entry_safe(data, n, &pmalloc_tmp_list, node) { > + list_move(&data->node, &pmalloc_final_list); > + pmalloc_connect(data); > + } > + } > + mutex_unlock(&pmalloc_mutex); > + pmalloc_selftest(); > + return 0; > +} > +late_initcall(pmalloc_late_init); > diff --git a/mm/usercopy.c b/mm/usercopy.c > index a9852b24715d..c3b10298d808 100644 > --- a/mm/usercopy.c > +++ b/mm/usercopy.c > @@ -15,6 +15,7 @@ > #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt > > #include <linux/mm.h> > +#include <linux/pmalloc.h> > #include <linux/slab.h> > #include <linux/sched.h> > #include <linux/sched/task.h> > @@ -222,6 +223,7 @@ static inline const char *check_heap_object(const void *ptr, unsigned long n, > void __check_object_size(const void *ptr, unsigned long n, bool to_user) > { > const char *err; > + int retv; > > /* Skip all tests if size is zero. */ > if (!n) > @@ -229,12 +231,12 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user) > > /* Check for invalid addresses. */ > err = check_bogus_address(ptr, n); > - if (err) > + if (unlikely(err)) > goto report; > > /* Check for bad heap object. */ > err = check_heap_object(ptr, n, to_user); > - if (err) > + if (unlikely(err)) > goto report; > > /* Check for bad stack object. */ > @@ -257,8 +259,23 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user) > > /* Check for object in kernel to avoid text exposure. */ > err = check_kernel_text_object(ptr, n); > - if (!err) > - return; > + if (unlikely(err)) > + goto report; > + > + /* Check if object is from a pmalloc chunk. > + */ Use kernel multi-line comment style. > + retv = is_pmalloc_object(ptr, n); > + if (unlikely(retv)) { > + if (unlikely(!to_user)) { > + err = "<trying to write to pmalloc object>"; > + goto report; > + } > + if (retv < 0) { > + err = "<invalid pmalloc object>"; > + goto report; > + } > + } > + return; > > report: > report_usercopy(ptr, n, to_user, err); > -- ~Randy -- To unsubscribe from this list: send the line "unsubscribe linux-security-module" in the body of a message to majordomo at vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html ^ permalink raw reply [flat|nested] 84+ messages in thread
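Taking the API documented in the header above end to end, this is a minimal sketch of the intended lifecycle from a module's point of view. Error handling is trimmed to the essentials, and the pool name, structure and identifiers (my_conf and friends) are invented for the example:

#include <linux/module.h>
#include <linux/gfp.h>
#include <linux/pmalloc.h>

static struct gen_pool *conf_pool;

static struct my_conf {
	long threshold;
	char *label;
} *conf;

static int __init my_conf_init(void)
{
	/* An empty pool; memory is grabbed only when pmalloc needs it. */
	conf_pool = pmalloc_create_pool("my_conf", PMALLOC_DEFAULT_ALLOC_ORDER);
	if (!conf_pool)
		return -ENOMEM;

	conf = pzalloc(conf_pool, sizeof(*conf), GFP_KERNEL);
	if (!conf)
		goto fail;
	conf->threshold = 42;
	conf->label = pstrdup(conf_pool, "sample", GFP_KERNEL);
	if (!conf->label)
		goto fail;

	/* One-way switch: every chunk in the pool becomes R/O. */
	pmalloc_protect_pool(conf_pool);
	return 0;
fail:
	pmalloc_destroy_pool(conf_pool);
	return -ENOMEM;
}

static void __exit my_conf_exit(void)
{
	/* pfree() on a protected pool only marks the memory as unused; */
	pfree(conf_pool, conf->label);
	pfree(conf_pool, conf);
	/* the pages are really given back only when the pool dies. */
	pmalloc_destroy_pool(conf_pool);
}

module_init(my_conf_init);
module_exit(my_conf_exit);
MODULE_LICENSE("GPL");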
* Re: [PATCH 4/6] Protectable Memory 2018-02-04 22:06 ` Randy Dunlap (?) (?) @ 2018-02-11 1:04 ` Igor Stoppa -1 siblings, 0 replies; 84+ messages in thread From: Igor Stoppa @ 2018-02-11 1:04 UTC (permalink / raw) To: Randy Dunlap, jglisse, keescook, mhocko, labbott, hch, willy Cc: cl, linux-security-module, linux-mm, linux-kernel, kernel-hardening On 05/02/18 00:06, Randy Dunlap wrote: > On 02/04/2018 08:47 AM, Igor Stoppa wrote: [...] >> + * pmalloc_create_pool - create a new protectable memory pool - > > Drop trailing " -". yes >> + * @name: the name of the pool, must be unique > > Is that enforced? Will return NULL if @name is duplicated? ok, I'll state it more clearly that it's enforced [...] >> + * @pool: handler to the pool to be used for memory allocation > > handle (I think) yes, also for all the other ones [...] >> + * avoid sleping during allocation. > > sleeping yes [...] >> + * opposite to what is allocated on-demand when pmalloc runs out of free > > opposed to yes >> + * space already existing in the pool and has to invoke vmalloc. >> + * >> + * Returns true if the vmalloc call was successful, false otherwise. > > Where is the allocated memory (pointer)? I.e., how does the caller know > where that memory is? > Oh, that memory isn't yet available to the caller until it calls pmalloc(), right? yes, it's a way to: - preemptively beef up the pool, before entering atomic context (unlikely that it will be needed, but possible), so that there is no need to allocate extra pages (assuming one can estimate the max memory that will be requested) - avoid fragmentation caused by allocating smaller groups of pages I'll add explanation for this. [...] >> + * @size: amount of memory (in bytes) requested >> + * @gfp: flags for page allocation >> + * >> + * Allocates memory from an unprotected pool. If the pool doesn't have >> + * enough memory, and the request did not include GFP_ATOMIC, an attempt >> + * is made to add a new chunk of memory to the pool >> + * (a multiple of PAGE_SIZE), in order to fit the new request. > > fill > What if @size is > PAGE_SIZE? Nothing special, it gets rounded up to the nearest multiple of PAGE_SIZE. vmalloc doesn't have only drawbacks ;-) [...] >> + * Returns the pointer to the memory requested upon success, >> + * NULL otherwise (either no memory available or pool already read-only). > > It would be good to use the > * Return: > kernel-doc notation for return values. yes, good point, I'm fixing it everywhere in the patchset [...] >> + * will be availabel for further allocations. > > available yes [...] >> +/** > > /** means that the following comments are kernel-doc notation, but these > comments are not, so just use /* there, please. yes, also to the others [...] >> + /* is_pmalloc_object gets called pretty late, so chances are high >> + * that the object is indeed of vmalloc type >> + */ > > Multi-line comment style is > /* > * comment1 > * comment..N > */ yes, also to the others -- igor ^ permalink raw reply [flat|nested] 84+ messages in thread
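Igor's two bullet points about pmalloc_prealloc() translate into code roughly as follows; a sketch under the assumption that the caller can bound its worst-case demand (the lock and function names are made up for the example):

#include <linux/spinlock.h>
#include <linux/gfp.h>
#include <linux/pmalloc.h>

static DEFINE_SPINLOCK(obj_lock);

static void *alloc_from_atomic_path(struct gen_pool *pool, size_t worst_case)
{
	void *obj;

	/*
	 * Grow the pool while sleeping is still allowed: prealloc may
	 * call vmalloc, so it cannot run under the spinlock. One single
	 * chunk also avoids the fragmentation of piecemeal expansion.
	 */
	if (!pmalloc_prealloc(pool, worst_case))
		return NULL;

	spin_lock(&obj_lock);
	/*
	 * GFP_ATOMIC carries __GFP_ATOMIC, so pmalloc() only carves from
	 * chunks already in the pool and never falls back to vmalloc.
	 */
	obj = pmalloc(pool, worst_case, GFP_ATOMIC);
	spin_unlock(&obj_lock);
	return obj;
}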
* Re: [PATCH 4/6] Protectable Memory 2018-02-04 16:47 ` Igor Stoppa (?) @ 2018-02-07 10:03 ` kbuild test robot -1 siblings, 0 replies; 84+ messages in thread From: kbuild test robot @ 2018-02-07 10:03 UTC (permalink / raw) To: Igor Stoppa Cc: kbuild-all, jglisse, keescook, mhocko, labbott, hch, willy, cl, linux-security-module, linux-mm, linux-kernel, kernel-hardening, Igor Stoppa [-- Attachment #1: Type: text/plain, Size: 1327 bytes --] Hi Igor, Thank you for the patch! Yet something to improve: [auto build test ERROR on kees/for-next/pstore] [also build test ERROR on v4.15] [cannot apply to linus/master mmotm/master next-20180206] [if your patch is applied to the wrong git tree, please drop us a note to help improve the system] url: https://github.com/0day-ci/linux/commits/Igor-Stoppa/mm-security-ro-protection-for-dynamic-data/20180207-171252 base: https://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git for-next/pstore config: i386-tinyconfig (attached as .config) compiler: gcc-7 (Debian 7.3.0-1) 7.3.0 reproduce: # save the attached .config to linux build tree make ARCH=i386 Note: the linux-review/Igor-Stoppa/mm-security-ro-protection-for-dynamic-data/20180207-171252 HEAD 99d0cb7905216da7595ef08a781a9be16a8ce687 builds fine. It only hurts bisectibility. All errors (new ones prefixed by >>): >> mm/pmalloc.c:24:10: fatal error: pmalloc-selftest.h: No such file or directory #include "pmalloc-selftest.h" ^~~~~~~~~~~~~~~~~~~~ compilation terminated. vim +24 mm/pmalloc.c 23 > 24 #include "pmalloc-selftest.h" 25 --- 0-DAY kernel test infrastructure Open Source Technology Center https://lists.01.org/pipermail/kbuild-all Intel Corporation [-- Attachment #2: .config.gz --] [-- Type: application/gzip, Size: 6806 bytes --] ^ permalink raw reply [flat|nested] 84+ messages in thread
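The failure the robot reports is purely an ordering problem: mm/pmalloc.c includes pmalloc-selftest.h before the patch that introduces that header. One conventional way to keep every step of a series building is to let the header degrade to a no-op when the selftest is not configured; a sketch only, with the Kconfig symbol name being a guess rather than something taken from the series:

/* mm/pmalloc-selftest.h (hypothetical fallback shape) */
#ifndef __MM_PMALLOC_SELFTEST_H
#define __MM_PMALLOC_SELFTEST_H

#ifdef CONFIG_PROTECTABLE_MEMORY_SELFTEST
void pmalloc_selftest(void);
#else
static inline void pmalloc_selftest(void) {}
#endif

#endif

The other obvious option is simply to move the header, or at least the #include, into the same patch that first uses it.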
* Re: [PATCH 4/6] Protectable Memory 2018-02-04 16:47 ` Igor Stoppa (?) @ 2018-02-07 22:21 ` kbuild test robot -1 siblings, 0 replies; 84+ messages in thread From: kbuild test robot @ 2018-02-07 22:21 UTC (permalink / raw) To: Igor Stoppa Cc: kbuild-all, jglisse, keescook, mhocko, labbott, hch, willy, cl, linux-security-module, linux-mm, linux-kernel, kernel-hardening, Igor Stoppa [-- Attachment #1: Type: text/plain, Size: 2474 bytes --] Hi Igor, Thank you for the patch! Yet something to improve: [auto build test ERROR on kees/for-next/pstore] [also build test ERROR on v4.15] [cannot apply to linus/master mmotm/master next-20180207] [if your patch is applied to the wrong git tree, please drop us a note to help improve the system] url: https://github.com/0day-ci/linux/commits/Igor-Stoppa/mm-security-ro-protection-for-dynamic-data/20180207-171252 base: https://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git for-next/pstore config: um-allmodconfig (attached as .config) compiler: gcc-7 (Debian 7.3.0-1) 7.3.0 reproduce: # save the attached .config to linux build tree make ARCH=um All errors (new ones prefixed by >>): arch/um/drivers/vde.o: In function `vde_open_real': (.text+0x951): warning: Using 'getgrnam' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking (.text+0x79c): warning: Using 'getpwuid' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking (.text+0xab5): warning: Using 'getaddrinfo' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking arch/um/drivers/pcap.o: In function `pcap_nametoaddr': (.text+0xdee5): warning: Using 'gethostbyname' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking arch/um/drivers/pcap.o: In function `pcap_nametonetaddr': (.text+0xdf85): warning: Using 'getnetbyname' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking arch/um/drivers/pcap.o: In function `pcap_nametoproto': (.text+0xe1a5): warning: Using 'getprotobyname' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking arch/um/drivers/pcap.o: In function `pcap_nametoport': (.text+0xdfd7): warning: Using 'getservbyname' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking mm/usercopy.o: In function `__check_object_size': >> (.text+0x3aa): undefined reference to `is_pmalloc_object' >> collect2: error: ld returned 1 exit status --- 0-DAY kernel test infrastructure Open Source Technology Center https://lists.01.org/pipermail/kbuild-all Intel Corporation [-- Attachment #2: .config.gz --] [-- Type: application/gzip, Size: 20060 bytes --] ^ permalink raw reply [flat|nested] 84+ messages in thread
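Here the link breaks because mm/usercopy.c calls is_pmalloc_object() unconditionally, while the mm/Makefile hunk in this series builds pmalloc.o only under CONFIG_ARCH_HAS_SET_MEMORY, which um does not select. A hedged sketch of one possible shape for a fix in the header, assuming NOT_PMALLOC_OBJECT is 0 so that the stub means "never a pmalloc object"; this is not taken from the series:

/* include/linux/pmalloc.h (hypothetical fallback, for illustration) */
#ifdef CONFIG_ARCH_HAS_SET_MEMORY
int is_pmalloc_object(const void *ptr, const unsigned long n);
#else
static inline int is_pmalloc_object(const void *ptr, const unsigned long n)
{
	return 0;	/* no pmalloc on this arch: nothing is ever pmalloc */
}
#endif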
* [RFC PATCH v12 0/6] mm: security: ro protection for dynamic data @ 2018-01-30 15:14 Igor Stoppa 2018-01-30 15:14 ` Igor Stoppa 0 siblings, 1 reply; 84+ messages in thread From: Igor Stoppa @ 2018-01-30 15:14 UTC (permalink / raw) To: jglisse, keescook, mhocko, labbott, hch, willy Cc: cl, linux-security-module, linux-mm, linux-kernel, kernel-hardening, Igor Stoppa This patch-set introduces the possibility of protecting memory that has been allocated dynamically. The memory is managed in pools: when a memory pool is turned into R/O, all the memory that is part of it, will become R/O. A R/O pool can be destroyed, to recover its memory, but it cannot be turned back into R/W mode. This is intentional. This feature is meant for data that doesn't need further modifications after initialization. However the data might need to be released, for example as part of module unloading. To do this, the memory must first be freed, then the pool can be destroyed. An example is provided, in the form of self-testing. Changes since the v11 version: [http://www.openwall.com/lists/kernel-hardening/2018/01/24/4] - restricted access to sysfs entries created (444 -> 400) - more explicit reference to documentation - couple of typos Igor Stoppa (6): genalloc: track beginning of allocations genalloc: selftest struct page: add field for vm_struct Protectable Memory Documentation for Pmalloc Pmalloc: self-test Documentation/core-api/pmalloc.txt | 104 ++++++++ include/linux/genalloc-selftest.h | 30 +++ include/linux/genalloc.h | 5 +- include/linux/mm_types.h | 1 + include/linux/pmalloc.h | 216 ++++++++++++++++ include/linux/vmalloc.h | 1 + init/main.c | 2 + lib/Kconfig | 15 ++ lib/Makefile | 1 + lib/genalloc-selftest.c | 402 +++++++++++++++++++++++++++++ lib/genalloc.c | 444 +++++++++++++++++++++---------- mm/Kconfig | 7 + mm/Makefile | 2 + mm/pmalloc-selftest.c | 65 +++++ mm/pmalloc-selftest.h | 30 +++ mm/pmalloc.c | 516 +++++++++++++++++++++++++++++++++++++ mm/usercopy.c | 25 +- mm/vmalloc.c | 18 +- 18 files changed, 1744 insertions(+), 140 deletions(-) create mode 100644 Documentation/core-api/pmalloc.txt create mode 100644 include/linux/genalloc-selftest.h create mode 100644 include/linux/pmalloc.h create mode 100644 lib/genalloc-selftest.c create mode 100644 mm/pmalloc-selftest.c create mode 100644 mm/pmalloc-selftest.h create mode 100644 mm/pmalloc.c -- 2.9.3 ^ permalink raw reply [flat|nested] 84+ messages in thread
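The "R/O is one-way" rule stated in the cover letter is worth seeing in miniature before reading the patch. An illustrative sketch built on the API the patch below introduces; the identifiers are invented for the example:

#include <linux/bug.h>
#include <linux/gfp.h>
#include <linux/pmalloc.h>

static void pmalloc_oneway_demo(void)
{
	struct gen_pool *pool;
	int *p;

	pool = pmalloc_create_pool("demo", PMALLOC_DEFAULT_ALLOC_ORDER);
	if (!pool)
		return;

	p = pmalloc(pool, sizeof(*p), GFP_KERNEL);
	if (!p)
		goto out;
	*p = 1;				/* fine: the pool is still R/W */

	pmalloc_protect_pool(pool);	/* every chunk is now R/O */

	/* "*p = 2;" here would fault, and new allocations fail: */
	WARN_ON(pmalloc(pool, sizeof(*p), GFP_KERNEL) != NULL);
out:
	/* The only way back is tearing the whole pool down. */
	pmalloc_destroy_pool(pool);
}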
* [PATCH 4/6] Protectable Memory 2018-01-30 15:14 [RFC PATCH v12 0/6] mm: security: ro protection for dynamic data Igor Stoppa 2018-01-30 15:14 ` Igor Stoppa (?) @ 2018-01-30 15:14 ` Igor Stoppa 0 siblings, 0 replies; 84+ messages in thread From: Igor Stoppa @ 2018-01-30 15:14 UTC (permalink / raw) To: jglisse, keescook, mhocko, labbott, hch, willy Cc: cl, linux-security-module, linux-mm, linux-kernel, kernel-hardening, Igor Stoppa The MMU available in many systems running Linux can often provide R/O protection to the memory pages it handles. However, the MMU-based protection works efficiently only when said pages contain exclusively data that will not need further modifications. Statically allocated variables can be segregated into a dedicated section, but this does not sit very well with dynamically allocated ones. Dynamic allocation does not provide, currently, any means for grouping variables in memory pages that would contain exclusively data suitable for conversion to read only access mode. The allocator here provided (pmalloc - protectable memory allocator) introduces the concept of pools of protectable memory. A module can request a pool and then refer any allocation request to the pool handler it has received. Once all the chunks of memory associated to a specific pool are initialized, the pool can be protected. After this point, the pool can only be destroyed (it is up to the module to avoid any further references to the memory from the pool, after the destruction is invoked). The latter case is mainly meant for releasing memory, when a module is unloaded. A module can have as many pools as needed, for example to support the protection of data that is initialized in sufficiently distinct phases. Signed-off-by: Igor Stoppa <igor.stoppa@huawei.com> --- include/linux/genalloc.h | 3 + include/linux/pmalloc.h | 216 ++++++++++++++++++++ include/linux/vmalloc.h | 1 + lib/genalloc.c | 27 +++ mm/Makefile | 1 + mm/pmalloc.c | 513 +++++++++++++++++++++++++++++++++++++++++++++++ mm/usercopy.c | 25 ++- 7 files changed, 782 insertions(+), 4 deletions(-) create mode 100644 include/linux/pmalloc.h create mode 100644 mm/pmalloc.c diff --git a/include/linux/genalloc.h b/include/linux/genalloc.h index 0377681..a486a26 100644 --- a/include/linux/genalloc.h +++ b/include/linux/genalloc.h @@ -121,6 +121,9 @@ extern unsigned long gen_pool_alloc_algo(struct gen_pool *, size_t, extern void *gen_pool_dma_alloc(struct gen_pool *pool, size_t size, dma_addr_t *dma); extern void gen_pool_free(struct gen_pool *, unsigned long, size_t); + +extern void gen_pool_flush_chunk(struct gen_pool *pool, + struct gen_pool_chunk *chunk); extern void gen_pool_for_each_chunk(struct gen_pool *, void (*)(struct gen_pool *, struct gen_pool_chunk *, void *), void *); extern size_t gen_pool_avail(struct gen_pool *); diff --git a/include/linux/pmalloc.h b/include/linux/pmalloc.h new file mode 100644 index 0000000..ad7d557 --- /dev/null +++ b/include/linux/pmalloc.h @@ -0,0 +1,216 @@ +/* + * pmalloc.h: Header for Protectable Memory Allocator + * + * (C) Copyright 2017 Huawei Technologies Co. Ltd. + * Author: Igor Stoppa <igor.stoppa@huawei.com> + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; version 2 + * of the License. 
+ */ + +#ifndef _PMALLOC_H +#define _PMALLOC_H + + +#include <linux/genalloc.h> +#include <linux/string.h> +#include <linux/gfp.h> + +#define PMALLOC_DEFAULT_ALLOC_ORDER (-1) + +/* + * Library for dynamic allocation of pools of memory that can be, + * after initialization, marked as read-only. + * + * This is intended to complement __read_only_after_init, for those cases + * where either it is not possible to know the initialization value before + * init is completed, or the amount of data is variable and can be + * determined only at run-time. + * + * ***WARNING*** + * The user of the API is expected to synchronize: + * 1) allocation, + * 2) writes to the allocated memory, + * 3) write protection of the pool, + * 4) freeing of the allocated memory, and + * 5) destruction of the pool. + * + * For a non-threaded scenario, this type of locking is not even required. + * + * Even if the library were to provide support for locking, point 2) + * would still depend on the user taking the lock. + */ + + +/** + * pmalloc_create_pool - create a new protectable memory pool - + * @name: the name of the pool, must be unique + * @min_alloc_order: log2 of the minimum allocation size obtainable + * from the pool + * + * Creates a new (empty) memory pool for allocation of protectable + * memory. Memory will be allocated upon request (through pmalloc). + * + * Returns a pointer to the new pool upon success, otherwise a NULL. + */ +struct gen_pool *pmalloc_create_pool(const char *name, + int min_alloc_order); + + +int is_pmalloc_object(const void *ptr, const unsigned long n); + +/** + * pmalloc_prealloc - tries to allocate a memory chunk of the requested size + * @pool: handler to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * + * Prepares a chunk of the requested size. + * This is intended to both minimize latency in later memory requests and + * avoid sleping during allocation. + * Memory allocated with prealloc is stored in one single chunk, as + * opposite to what is allocated on-demand when pmalloc runs out of free + * space already existing in the pool and has to invoke vmalloc. + * + * Returns true if the vmalloc call was successful, false otherwise. + */ +bool pmalloc_prealloc(struct gen_pool *pool, size_t size); + +/** + * pmalloc - allocate protectable memory from a pool + * @pool: handler to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * @gfp: flags for page allocation + * + * Allocates memory from an unprotected pool. If the pool doesn't have + * enough memory, and the request did not include GFP_ATOMIC, an attempt + * is made to add a new chunk of memory to the pool + * (a multiple of PAGE_SIZE), in order to fit the new request. + * Otherwise, NULL is returned. + * + * Returns the pointer to the memory requested upon success, + * NULL otherwise (either no memory available or pool already read-only). + */ +void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp); + + +/** + * pzalloc - zero-initialized version of pmalloc + * @pool: handler to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * @gfp: flags for page allocation + * + * Executes pmalloc, initializing the memory requested to 0, + * before returning the pointer to it. + * + * Returns the pointer to the zeroed memory requested, upon success, + * NULL otherwise (either no memory available or pool already read-only). 
+ */ +static inline void *pzalloc(struct gen_pool *pool, size_t size, gfp_t gfp) +{ + return pmalloc(pool, size, gfp | __GFP_ZERO); +} + +/** + * pmalloc_array - allocates an array according to the parameters + * @pool: handler to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * @gfp: flags for page allocation + * + * Executes pmalloc, if it has a chance to succeed. + * + * Returns either NULL or the pmalloc result. + */ +static inline void *pmalloc_array(struct gen_pool *pool, size_t n, + size_t size, gfp_t flags) +{ + if (unlikely(!(pool && n && size))) + return NULL; + return pmalloc(pool, n * size, flags); +} + +/** + * pcalloc - allocates a 0-initialized array according to the parameters + * @pool: handler to the pool to be used for memory allocation + * @size: amount of memory (in bytes) requested + * @gfp: flags for page allocation + * + * Executes pmalloc, if it has a chance to succeed. + * + * Returns either NULL or the pmalloc result. + */ +static inline void *pcalloc(struct gen_pool *pool, size_t n, + size_t size, gfp_t flags) +{ + return pmalloc_array(pool, n, size, flags | __GFP_ZERO); +} + +/** + * pstrdup - duplicate a string, using pmalloc as allocator + * @pool: handler to the pool to be used for memory allocation + * @s: string to duplicate + * @gfp: flags for page allocation + * + * Generates a copy of the given string, allocating sufficient memory + * from the given pmalloc pool. + * + * Returns a pointer to the replica, NULL in case of recoverable error. + */ +static inline char *pstrdup(struct gen_pool *pool, const char *s, gfp_t gfp) +{ + size_t len; + char *buf; + + if (unlikely(pool == NULL || s == NULL)) + return NULL; + + len = strlen(s) + 1; + buf = pmalloc(pool, len, gfp); + if (likely(buf)) + strncpy(buf, s, len); + return buf; +} + +/** + * pmalloc_protect_pool - turn a read/write pool read-only + * @pool: the pool to protect + * + * Write-protects all the memory chunks assigned to the pool. + * This prevents any further allocation. + * + * Returns 0 upon success, -EINVAL in abnormal cases. + */ +int pmalloc_protect_pool(struct gen_pool *pool); + +/** + * pfree - mark as unused memory that was previously in use + * @pool: handler to the pool to be used for memory allocation + * @addr: the beginning of the memory area to be freed + * + * The behavior of pfree is different, depending on the state of the + * protection. + * If the pool is not yet protected, the memory is marked as unused and + * will be availabel for further allocations. + * If the pool is already protected, the memory is marked as unused, but + * it will still be impossible to perform further allocation, because of + * the existing protection. + * The freed memory, in this case, will be truly released only when the + * pool is destroyed. + */ +static inline void pfree(struct gen_pool *pool, const void *addr) +{ + gen_pool_free(pool, (unsigned long)addr, 0); +} + +/** + * pmalloc_destroy_pool - destroys a pool and all the associated memory + * @pool: the pool to destroy + * + * All the memory that was allocated through pmalloc in the pool will be freed. + * + * Returns 0 upon success, -EINVAL in abnormal cases. 
+ */
+int pmalloc_destroy_pool(struct gen_pool *pool);
+
+#endif
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 1e5d8c3..e8171b6 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -20,6 +20,7 @@ struct notifier_block;		/* in notifier.h */
 #define VM_UNINITIALIZED	0x00000020	/* vm_struct is not fully initialized */
 #define VM_NO_GUARD		0x00000040	/* don't add guard page */
 #define VM_KASAN		0x00000080	/* has allocated kasan shadow memory */
+#define VM_PMALLOC		0x00000100	/* pmalloc area - see pmalloc.txt */
 /* bits [20..32] reserved for arch specific ioremap internals */
 
 /*
diff --git a/lib/genalloc.c b/lib/genalloc.c
index dde7830..62f69b3 100644
--- a/lib/genalloc.c
+++ b/lib/genalloc.c
@@ -519,6 +519,33 @@ void gen_pool_free(struct gen_pool *pool, unsigned long addr, size_t size)
 }
 EXPORT_SYMBOL(gen_pool_free);
 
+
+/**
+ * gen_pool_flush_chunk - drops all the allocations from a specific chunk
+ * @pool: the generic memory pool
+ * @chunk: The chunk to wipe clear.
+ *
+ * This is meant to be called only while destroying a pool. It's up to the
+ * caller to avoid races, but really, at this point the pool should have
+ * already been retired and have become unavailable for any other sort of
+ * operation.
+ */
+void gen_pool_flush_chunk(struct gen_pool *pool,
+			  struct gen_pool_chunk *chunk)
+{
+	size_t size;
+
+	if (unlikely(!(pool && chunk)))
+		return;
+
+	size = chunk->end_addr + 1 - chunk->start_addr;
+	memset(chunk->entries, 0,
+	       DIV_ROUND_UP((size >> pool->min_alloc_order) * BITS_PER_ENTRY,
+			    BITS_PER_BYTE));
+	atomic_set(&chunk->avail, size);
+}
+
+
 /**
  * gen_pool_for_each_chunk - call func for every chunk of generic memory pool
  * @pool:	the generic memory pool
diff --git a/mm/Makefile b/mm/Makefile
index e669f02..a6a47e1 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -65,6 +65,7 @@ obj-$(CONFIG_SPARSEMEM) += sparse.o
 obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
 obj-$(CONFIG_SLOB) += slob.o
 obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o
+obj-$(CONFIG_ARCH_HAS_SET_MEMORY) += pmalloc.o
 obj-$(CONFIG_KSM) += ksm.o
 obj-$(CONFIG_PAGE_POISONING) += page_poison.o
 obj-$(CONFIG_SLAB) += slab.o
diff --git a/mm/pmalloc.c b/mm/pmalloc.c
new file mode 100644
index 0000000..a64ac49
--- /dev/null
+++ b/mm/pmalloc.c
@@ -0,0 +1,513 @@
+/*
+ * pmalloc.c: Protectable Memory Allocator
+ *
+ * (C) Copyright 2017 Huawei Technologies Co. Ltd.
+ * Author: Igor Stoppa <igor.stoppa@huawei.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; version 2
+ * of the License.
+ */
+
+#include <linux/printk.h>
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/vmalloc.h>
+#include <linux/genalloc.h>
+#include <linux/kernel.h>
+#include <linux/log2.h>
+#include <linux/slab.h>
+#include <linux/device.h>
+#include <linux/atomic.h>
+#include <linux/rculist.h>
+#include <linux/set_memory.h>
+#include <asm/cacheflush.h>
+#include <asm/page.h>
+
+/**
+ * pmalloc_data contains the data specific to a pmalloc pool,
+ * in a format compatible with the design of genalloc.
+ * Some of the fields are used for exposing the corresponding parameter
+ * to userspace, through sysfs.
+ */
+struct pmalloc_data {
+	struct gen_pool *pool;	/* Link back to the associated pool. */
+	bool protected;		/* Status of the pool: RO or RW. */
+	struct kobj_attribute attr_protected;	/* Sysfs attribute. */
+	struct kobj_attribute attr_avail;	/* Sysfs attribute. */
+	struct kobj_attribute attr_size;	/* Sysfs attribute. */
+	struct kobj_attribute attr_chunks;	/* Sysfs attribute. */
+	struct kobject *pool_kobject;
+	struct list_head node; /* list of pools */
+};
+
+static LIST_HEAD(pmalloc_final_list);
+static LIST_HEAD(pmalloc_tmp_list);
+static struct list_head *pmalloc_list = &pmalloc_tmp_list;
+static DEFINE_MUTEX(pmalloc_mutex);
+static struct kobject *pmalloc_kobject;
+
+static ssize_t pmalloc_pool_show_protected(struct kobject *dev,
+					   struct kobj_attribute *attr,
+					   char *buf)
+{
+	struct pmalloc_data *data;
+
+	data = container_of(attr, struct pmalloc_data, attr_protected);
+	if (data->protected)
+		return sprintf(buf, "protected\n");
+	else
+		return sprintf(buf, "unprotected\n");
+}
+
+static ssize_t pmalloc_pool_show_avail(struct kobject *dev,
+				       struct kobj_attribute *attr,
+				       char *buf)
+{
+	struct pmalloc_data *data;
+
+	data = container_of(attr, struct pmalloc_data, attr_avail);
+	return sprintf(buf, "%lu\n", gen_pool_avail(data->pool));
+}
+
+static ssize_t pmalloc_pool_show_size(struct kobject *dev,
+				      struct kobj_attribute *attr,
+				      char *buf)
+{
+	struct pmalloc_data *data;
+
+	data = container_of(attr, struct pmalloc_data, attr_size);
+	return sprintf(buf, "%lu\n", gen_pool_size(data->pool));
+}
+
+static void pool_chunk_number(struct gen_pool *pool,
+			      struct gen_pool_chunk *chunk, void *data)
+{
+	unsigned long *counter = data;
+
+	(*counter)++;
+}
+
+static ssize_t pmalloc_pool_show_chunks(struct kobject *dev,
+					struct kobj_attribute *attr,
+					char *buf)
+{
+	struct pmalloc_data *data;
+	unsigned long chunks_num = 0;
+
+	data = container_of(attr, struct pmalloc_data, attr_chunks);
+	gen_pool_for_each_chunk(data->pool, pool_chunk_number, &chunks_num);
+	return sprintf(buf, "%lu\n", chunks_num);
+}
+
+/**
+ * Exposes the pool and its attributes through sysfs.
+ */
+static struct kobject *pmalloc_connect(struct pmalloc_data *data)
+{
+	const struct attribute *attrs[] = {
+		&data->attr_protected.attr,
+		&data->attr_avail.attr,
+		&data->attr_size.attr,
+		&data->attr_chunks.attr,
+		NULL
+	};
+	struct kobject *kobj;
+
+	kobj = kobject_create_and_add(data->pool->name, pmalloc_kobject);
+	if (unlikely(!kobj))
+		return NULL;
+
+	if (unlikely(sysfs_create_files(kobj, attrs) < 0)) {
+		kobject_put(kobj);
+		kobj = NULL;
+	}
+	return kobj;
+}
+
+/**
+ * Removes the pool and its attributes from sysfs.
+ */
+static void pmalloc_disconnect(struct pmalloc_data *data,
+			       struct kobject *kobj)
+{
+	const struct attribute *attrs[] = {
+		&data->attr_protected.attr,
+		&data->attr_avail.attr,
+		&data->attr_size.attr,
+		&data->attr_chunks.attr,
+		NULL
+	};
+
+	sysfs_remove_files(kobj, attrs);
+	kobject_put(kobj);
+}
+
+/**
+ * Declares an attribute of the pool.
+ */
+
+#define pmalloc_attr_init(data, attr_name) \
+do { \
+	sysfs_attr_init(&data->attr_##attr_name.attr); \
+	data->attr_##attr_name.attr.name = #attr_name; \
+	data->attr_##attr_name.attr.mode = VERIFY_OCTAL_PERMISSIONS(0444); \
+	data->attr_##attr_name.show = pmalloc_pool_show_##attr_name; \
+} while (0)
+
+struct gen_pool *pmalloc_create_pool(const char *name, int min_alloc_order)
+{
+	struct gen_pool *pool;
+	const char *pool_name;
+	struct pmalloc_data *data;
+
+	if (!name) {
+		WARN_ON(1);
+		return NULL;
+	}
+
+	if (min_alloc_order < 0)
+		min_alloc_order = ilog2(sizeof(unsigned long));
+
+	pool = gen_pool_create(min_alloc_order, NUMA_NO_NODE);
+	if (unlikely(!pool))
+		return NULL;
+
+	mutex_lock(&pmalloc_mutex);
+	list_for_each_entry(data, pmalloc_list, node)
+		if (!strcmp(name, data->pool->name))
+			goto same_name_err;
+
+	pool_name = kstrdup(name, GFP_KERNEL);
+	if (unlikely(!pool_name))
+		goto name_alloc_err;
+
+	data = kzalloc(sizeof(struct pmalloc_data), GFP_KERNEL);
+	if (unlikely(!data))
+		goto data_alloc_err;
+
+	data->protected = false;
+	data->pool = pool;
+	pmalloc_attr_init(data, protected);
+	pmalloc_attr_init(data, avail);
+	pmalloc_attr_init(data, size);
+	pmalloc_attr_init(data, chunks);
+	pool->data = data;
+	pool->name = pool_name;
+
+	list_add(&data->node, pmalloc_list);
+	if (pmalloc_list == &pmalloc_final_list)
+		data->pool_kobject = pmalloc_connect(data);
+	mutex_unlock(&pmalloc_mutex);
+	return pool;
+
+data_alloc_err:
+	kfree(pool_name);
+name_alloc_err:
+same_name_err:
+	mutex_unlock(&pmalloc_mutex);
+	gen_pool_destroy(pool);
+	return NULL;
+}
+
+static inline int check_alloc_params(struct gen_pool *pool, size_t req_size)
+{
+	struct pmalloc_data *data;
+	unsigned int order;
+
+	if (unlikely(!req_size || !pool))
+		return -1;
+
+	order = (unsigned int)pool->min_alloc_order;
+	data = pool->data;
+
+	if (data == NULL)
+		return -1;
+
+	if (unlikely(data->protected)) {
+		WARN_ON(1);
+		return -1;
+	}
+	return 0;
+}
+
+
+static inline bool chunk_tagging(void *chunk, bool tag)
+{
+	struct vm_struct *area;
+	struct page *page;
+
+	if (!is_vmalloc_addr(chunk))
+		return false;
+
+	page = vmalloc_to_page(chunk);
+	if (unlikely(!page))
+		return false;
+
+	area = page->area;
+	if (tag)
+		area->flags |= VM_PMALLOC;
+	else
+		area->flags &= ~VM_PMALLOC;
+	return true;
+}
+
+
+static inline bool tag_chunk(void *chunk)
+{
+	return chunk_tagging(chunk, true);
+}
+
+
+static inline bool untag_chunk(void *chunk)
+{
+	return chunk_tagging(chunk, false);
+}
+
+enum {
+	INVALID_PMALLOC_OBJECT = -1,
+	NOT_PMALLOC_OBJECT = 0,
+	VALID_PMALLOC_OBJECT = 1,
+};
+
+int is_pmalloc_object(const void *ptr, const unsigned long n)
+{
+	struct vm_struct *area;
+	struct page *page;
+	unsigned long area_start;
+	unsigned long area_end;
+	unsigned long object_start;
+	unsigned long object_end;
+
+
+	/* is_pmalloc_object gets called pretty late, so chances are high
+	 * that the object is indeed of vmalloc type
+	 */
+	if (unlikely(!is_vmalloc_addr(ptr)))
+		return NOT_PMALLOC_OBJECT;
+
+	page = vmalloc_to_page(ptr);
+	if (unlikely(!page))
+		return NOT_PMALLOC_OBJECT;
+
+	area = page->area;
+
+	if (likely(!(area->flags & VM_PMALLOC)))
+		return NOT_PMALLOC_OBJECT;
+
+	area_start = (unsigned long)area->addr;
+	area_end = area_start + area->nr_pages * PAGE_SIZE - 1;
+	object_start = (unsigned long)ptr;
+	object_end = object_start + n - 1;
+
+	if (likely((area_start <= object_start) &&
+		   (object_end <= area_end)))
+		return VALID_PMALLOC_OBJECT;
+	else
+		return INVALID_PMALLOC_OBJECT;
+}
+
+
+bool pmalloc_prealloc(struct gen_pool *pool, size_t size)
+{
+	void *chunk;
+	size_t chunk_size;
+	bool add_error;
+	unsigned int order;
+
+	if (check_alloc_params(pool, size))
+		return false;
+
+	order = (unsigned int)pool->min_alloc_order;
+
+	/* Expand pool */
+	chunk_size = roundup(size, PAGE_SIZE);
+	chunk = vmalloc(chunk_size);
+	if (unlikely(chunk == NULL))
+		return false;
+
+	/* Locking is already done inside gen_pool_add */
+	add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size,
+				 NUMA_NO_NODE);
+	if (unlikely(add_error != 0))
+		goto abort;
+
+	return true;
+abort:
+	vfree(chunk);
+	return false;
+}
+
+void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp)
+{
+	void *chunk;
+	size_t chunk_size;
+	bool add_error;
+	unsigned long retval;
+	unsigned int order;
+
+	if (check_alloc_params(pool, size))
+		return NULL;
+
+	order = (unsigned int)pool->min_alloc_order;
+
+retry_alloc_from_pool:
+	retval = gen_pool_alloc(pool, size);
+	if (retval)
+		goto return_allocation;
+
+	if (unlikely(gfp & __GFP_ATOMIC)) {
+		if (unlikely(gfp & __GFP_NOFAIL))
+			goto retry_alloc_from_pool;
+		else
+			return NULL;
+	}
+
+	/* Expand pool */
+	chunk_size = roundup(size, PAGE_SIZE);
+	chunk = vmalloc(chunk_size);
+	if (unlikely(!chunk)) {
+		if (unlikely(gfp & __GFP_NOFAIL))
+			goto retry_alloc_from_pool;
+		else
+			return NULL;
+	}
+	if (unlikely(!tag_chunk(chunk)))
+		goto free;
+
+	/* Locking is already done inside gen_pool_add */
+	add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size,
+				 NUMA_NO_NODE);
+	if (unlikely(add_error))
+		goto abort;
+
+	retval = gen_pool_alloc(pool, size);
+	if (retval) {
+return_allocation:
+		*(size_t *)retval = size;
+		if (gfp & __GFP_ZERO)
+			memset((void *)retval, 0, size);
+		return (void *)retval;
+	}
+	/* Here there is no test for __GFP_NOFAIL because, in case of
+	 * concurrent allocation, one thread might add a chunk to the
+	 * pool and this memory could be allocated by another thread,
+	 * before the first thread gets a chance to use it.
+	 * As long as vmalloc succeeds, it's ok to retry.
+	 */
+	goto retry_alloc_from_pool;
+abort:
+	untag_chunk(chunk);
+free:
+	vfree(chunk);
+	return NULL;
+}
+
+static void pmalloc_chunk_set_protection(struct gen_pool *pool,
+					 struct gen_pool_chunk *chunk,
+					 void *data)
+{
+	const bool *flag = data;
+	size_t chunk_size = chunk->end_addr + 1 - chunk->start_addr;
+	unsigned long pages = chunk_size / PAGE_SIZE;
+
+	BUG_ON(chunk_size & (PAGE_SIZE - 1));
+
+	if (*flag)
+		set_memory_ro(chunk->start_addr, pages);
+	else
+		set_memory_rw(chunk->start_addr, pages);
+}
+
+static int pmalloc_pool_set_protection(struct gen_pool *pool, bool protection)
+{
+	struct pmalloc_data *data;
+	struct gen_pool_chunk *chunk;
+
+	if (unlikely(!pool))
+		return -EINVAL;
+
+	data = pool->data;
+
+	if (unlikely(!data))
+		return -EINVAL;
+
+	if (unlikely(data->protected == protection)) {
+		WARN_ON(1);
+		return 0;
+	}
+
+	data->protected = protection;
+	list_for_each_entry(chunk, &pool->chunks, next_chunk)
+		pmalloc_chunk_set_protection(pool, chunk, &protection);
+	return 0;
+}
+
+int pmalloc_protect_pool(struct gen_pool *pool)
+{
+	return pmalloc_pool_set_protection(pool, true);
+}
+
+
+static void pmalloc_chunk_free(struct gen_pool *pool,
+			       struct gen_pool_chunk *chunk, void *data)
+{
+	untag_chunk(chunk);
+	gen_pool_flush_chunk(pool, chunk);
+	vfree_atomic((void *)chunk->start_addr);
+}
+
+
+int pmalloc_destroy_pool(struct gen_pool *pool)
+{
+	struct pmalloc_data *data;
+
+	if (unlikely(pool == NULL))
+		return -EINVAL;
+
+	data = pool->data;
+
+	if (unlikely(data == NULL))
+		return -EINVAL;
+
+	mutex_lock(&pmalloc_mutex);
+	list_del(&data->node);
+	mutex_unlock(&pmalloc_mutex);
+
+	if (likely(data->pool_kobject))
+		pmalloc_disconnect(data, data->pool_kobject);
+
+	pmalloc_pool_set_protection(pool, false);
+	gen_pool_for_each_chunk(pool, pmalloc_chunk_free, NULL);
+	gen_pool_destroy(pool);
+	kfree(data);
+	return 0;
+}
+
+/**
+ * When sysfs is ready to receive registrations, connect all the
+ * pools previously created. Also enable further pools to be connected
+ * right away.
+ */
+static int __init pmalloc_late_init(void)
+{
+	struct pmalloc_data *data, *n;
+
+	pmalloc_kobject = kobject_create_and_add("pmalloc", kernel_kobj);
+
+	mutex_lock(&pmalloc_mutex);
+	pmalloc_list = &pmalloc_final_list;
+
+	if (likely(pmalloc_kobject != NULL)) {
+		list_for_each_entry_safe(data, n, &pmalloc_tmp_list, node) {
+			list_move(&data->node, &pmalloc_final_list);
+			pmalloc_connect(data);
+		}
+	}
+	mutex_unlock(&pmalloc_mutex);
+	return 0;
+}
+late_initcall(pmalloc_late_init);
diff --git a/mm/usercopy.c b/mm/usercopy.c
index a9852b2..c3b1029 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -15,6 +15,7 @@
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
 #include <linux/mm.h>
+#include <linux/pmalloc.h>
 #include <linux/slab.h>
 #include <linux/sched.h>
 #include <linux/sched/task.h>
@@ -222,6 +223,7 @@ static inline const char *check_heap_object(const void *ptr, unsigned long n,
 void __check_object_size(const void *ptr, unsigned long n, bool to_user)
 {
 	const char *err;
+	int retv;
 
 	/* Skip all tests if size is zero. */
 	if (!n)
@@ -229,12 +231,12 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user)
 
 	/* Check for invalid addresses. */
 	err = check_bogus_address(ptr, n);
-	if (err)
+	if (unlikely(err))
 		goto report;
 
 	/* Check for bad heap object. */
 	err = check_heap_object(ptr, n, to_user);
-	if (err)
+	if (unlikely(err))
 		goto report;
 
 	/* Check for bad stack object. */
@@ -257,8 +259,23 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user)
 
 	/* Check for object in kernel to avoid text exposure. */
 	err = check_kernel_text_object(ptr, n);
-	if (!err)
-		return;
+	if (unlikely(err))
+		goto report;
+
+	/* Check if object is from a pmalloc chunk. */
+	retv = is_pmalloc_object(ptr, n);
+	if (unlikely(retv)) {
+		if (unlikely(!to_user)) {
+			err = "<trying to write to pmalloc object>";
+			goto report;
+		}
+		if (retv < 0) {
+			err = "<invalid pmalloc object>";
+			goto report;
+		}
+	}
+	return;
 
 report:
 	report_usercopy(ptr, n, to_user, err);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 84+ messages in thread
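
To make the intended lifecycle concrete, here is a minimal usage sketch built only on the API declared in the patch above (pmalloc_create_pool, pzalloc, pstrdup, pmalloc_protect_pool, pfree, pmalloc_destroy_pool). The pool name, the example_cfg structure and the module wiring are illustrative assumptions, not part of the patch, and error handling is reduced to the essentials.

#include <linux/module.h>
#include <linux/pmalloc.h>

/* Hypothetical configuration data; purely for illustration. */
struct example_cfg {
	int mode;
	char *banner;
};

static struct gen_pool *example_pool;
static struct example_cfg *cfg;

static int __init example_init(void)
{
	/* One pool per module, with the default minimum allocation order. */
	example_pool = pmalloc_create_pool("example",
					   PMALLOC_DEFAULT_ALLOC_ORDER);
	if (!example_pool)
		return -ENOMEM;

	cfg = pzalloc(example_pool, sizeof(*cfg), GFP_KERNEL);
	if (!cfg)
		goto err;

	/* Writes are possible only while the pool is still unprotected. */
	cfg->mode = 1;
	cfg->banner = pstrdup(example_pool, "example banner", GFP_KERNEL);
	if (!cfg->banner)
		goto err;

	/* From here on, cfg and its banner are read-only. */
	pmalloc_protect_pool(example_pool);
	return 0;
err:
	pmalloc_destroy_pool(example_pool);
	return -ENOMEM;
}

static void __exit example_exit(void)
{
	/* pfree only marks the memory unused; destroying the pool is what
	 * actually releases it, since the pool is already protected. */
	pfree(example_pool, cfg);
	pmalloc_destroy_pool(example_pool);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");
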
* [PATCH 4/6] Protectable Memory
@ 2018-01-30 15:14 ` Igor Stoppa
  0 siblings, 0 replies; 84+ messages in thread
From: Igor Stoppa @ 2018-01-30 15:14 UTC (permalink / raw)
  To: jglisse, keescook, mhocko, labbott, hch, willy
  Cc: cl, linux-security-module, linux-mm, linux-kernel,
      kernel-hardening, Igor Stoppa

The MMU available in many systems running Linux can often provide R/O
protection to the memory pages it handles. However, the MMU-based
protection works efficiently only when said pages contain exclusively
data that will not need further modifications.

Statically allocated variables can be segregated into a dedicated
section, but this does not sit very well with dynamically allocated
ones. Dynamic allocation currently provides no means of grouping
variables in memory pages that would contain exclusively data suitable
for conversion to read-only access mode.

The allocator provided here (pmalloc - protectable memory allocator)
introduces the concept of pools of protectable memory. A module can
request a pool and then direct any allocation request to the pool
handle it has received. Once all the chunks of memory associated with a
specific pool are initialized, the pool can be protected.

After this point, the pool can only be destroyed (it is up to the
module to avoid any further references to the memory from the pool,
after the destruction is invoked). The latter case is mainly meant for
releasing memory when a module is unloaded.

A module can have as many pools as needed, for example to support the
protection of data that is initialized in sufficiently distinct phases.

Signed-off-by: Igor Stoppa <igor.stoppa@huawei.com>
---
 include/linux/genalloc.h |   3 +
 include/linux/pmalloc.h  | 216 ++++++++++++++++++++
 include/linux/vmalloc.h  |   1 +
 lib/genalloc.c           |  27 +++
 mm/Makefile              |   1 +
 mm/pmalloc.c             | 513 +++++++++++++++++++++++++++++++++++++++++++++++
 mm/usercopy.c            |  25 ++-
 7 files changed, 782 insertions(+), 4 deletions(-)
 create mode 100644 include/linux/pmalloc.h
 create mode 100644 mm/pmalloc.c

diff --git a/include/linux/genalloc.h b/include/linux/genalloc.h
index 0377681..a486a26 100644
--- a/include/linux/genalloc.h
+++ b/include/linux/genalloc.h
@@ -121,6 +121,9 @@ extern unsigned long gen_pool_alloc_algo(struct gen_pool *, size_t,
 extern void *gen_pool_dma_alloc(struct gen_pool *pool, size_t size,
 		dma_addr_t *dma);
 extern void gen_pool_free(struct gen_pool *, unsigned long, size_t);
+
+extern void gen_pool_flush_chunk(struct gen_pool *pool,
+				 struct gen_pool_chunk *chunk);
 extern void gen_pool_for_each_chunk(struct gen_pool *,
 	void (*)(struct gen_pool *, struct gen_pool_chunk *, void *), void *);
 extern size_t gen_pool_avail(struct gen_pool *);
diff --git a/include/linux/pmalloc.h b/include/linux/pmalloc.h
new file mode 100644
index 0000000..ad7d557
--- /dev/null
+++ b/include/linux/pmalloc.h
@@ -0,0 +1,216 @@
+/*
+ * pmalloc.h: Header for Protectable Memory Allocator
+ *
+ * (C) Copyright 2017 Huawei Technologies Co. Ltd.
+ * Author: Igor Stoppa <igor.stoppa@huawei.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; version 2
+ * of the License.
+ */
+
+#ifndef _PMALLOC_H
+#define _PMALLOC_H
+
+
+#include <linux/genalloc.h>
+#include <linux/string.h>
+#include <linux/gfp.h>
+
+#define PMALLOC_DEFAULT_ALLOC_ORDER (-1)
+
+/*
+ * Library for dynamic allocation of pools of memory that can be,
+ * after initialization, marked as read-only.
+ *
+ * This is intended to complement __read_only_after_init, for those cases
+ * where either it is not possible to know the initialization value before
+ * init is completed, or the amount of data is variable and can be
+ * determined only at run-time.
+ *
+ * ***WARNING***
+ * The user of the API is expected to synchronize:
+ * 1) allocation,
+ * 2) writes to the allocated memory,
+ * 3) write protection of the pool,
+ * 4) freeing of the allocated memory, and
+ * 5) destruction of the pool.
+ *
+ * For a non-threaded scenario, this type of locking is not even required.
+ *
+ * Even if the library were to provide support for locking, point 2)
+ * would still depend on the user taking the lock.
+ */
+
+
+/**
+ * pmalloc_create_pool - create a new protectable memory pool
+ * @name: the name of the pool, must be unique
+ * @min_alloc_order: log2 of the minimum allocation size obtainable
+ *                   from the pool
+ *
+ * Creates a new (empty) memory pool for allocation of protectable
+ * memory. Memory will be allocated upon request (through pmalloc).
+ *
+ * Returns a pointer to the new pool upon success, otherwise NULL.
+ */
+struct gen_pool *pmalloc_create_pool(const char *name,
+				     int min_alloc_order);
+
+
+int is_pmalloc_object(const void *ptr, const unsigned long n);
+
+/**
+ * pmalloc_prealloc - tries to allocate a memory chunk of the requested size
+ * @pool: handle to the pool to be used for memory allocation
+ * @size: amount of memory (in bytes) requested
+ *
+ * Prepares a chunk of the requested size.
+ * This is intended to both minimize latency in later memory requests and
+ * avoid sleeping during allocation.
+ * Memory allocated with prealloc is stored in one single chunk, as
+ * opposed to what is allocated on-demand when pmalloc runs out of free
+ * space already existing in the pool and has to invoke vmalloc.
+ *
+ * Returns true if the vmalloc call was successful, false otherwise.
+ */
+bool pmalloc_prealloc(struct gen_pool *pool, size_t size);
+
+/**
+ * pmalloc - allocate protectable memory from a pool
+ * @pool: handle to the pool to be used for memory allocation
+ * @size: amount of memory (in bytes) requested
+ * @gfp: flags for page allocation
+ *
+ * Allocates memory from an unprotected pool. If the pool does not have
+ * enough memory, and the request did not include GFP_ATOMIC, an attempt
+ * is made to add to the pool a new chunk of memory (a multiple of
+ * PAGE_SIZE) that can fit the new request; otherwise, NULL is returned.
+ *
+ * Returns the pointer to the memory requested upon success,
+ * NULL otherwise (either no memory available or pool already read-only).
+ */
+static inline void *pzalloc(struct gen_pool *pool, size_t size, gfp_t gfp)
+{
+	return pmalloc(pool, size, gfp | __GFP_ZERO);
+}
+
+/**
+ * pmalloc_array - allocates an array according to the parameters
+ * @pool: handle to the pool to be used for memory allocation
+ * @n: number of elements in the array
+ * @size: size (in bytes) of each element
+ * @flags: flags for page allocation
+ *
+ * Executes pmalloc for n * size bytes, if the parameters give it a
+ * chance to succeed.
+ *
+ * Returns either NULL or the pmalloc result.
+ */
+static inline void *pmalloc_array(struct gen_pool *pool, size_t n,
+				  size_t size, gfp_t flags)
+{
+	if (unlikely(!(pool && n && size)))
+		return NULL;
+	return pmalloc(pool, n * size, flags);
+}
+
+/**
+ * pcalloc - allocates a 0-initialized array according to the parameters
+ * @pool: handle to the pool to be used for memory allocation
+ * @n: number of elements in the array
+ * @size: size (in bytes) of each element
+ * @flags: flags for page allocation
+ *
+ * Executes pmalloc_array, adding __GFP_ZERO to the allocation flags.
+ *
+ * Returns either NULL or the pmalloc result.
+ */
+static inline void *pcalloc(struct gen_pool *pool, size_t n,
+			    size_t size, gfp_t flags)
+{
+	return pmalloc_array(pool, n, size, flags | __GFP_ZERO);
+}
+
+/**
+ * pstrdup - duplicate a string, using pmalloc as allocator
+ * @pool: handle to the pool to be used for memory allocation
+ * @s: string to duplicate
+ * @gfp: flags for page allocation
+ *
+ * Generates a copy of the given string, allocating sufficient memory
+ * from the given pmalloc pool.
+ *
+ * Returns a pointer to the copy, or NULL in case of error.
+ */
+static inline char *pstrdup(struct gen_pool *pool, const char *s, gfp_t gfp)
+{
+	size_t len;
+	char *buf;
+
+	if (unlikely(pool == NULL || s == NULL))
+		return NULL;
+
+	len = strlen(s) + 1;
+	buf = pmalloc(pool, len, gfp);
+	if (likely(buf))
+		strncpy(buf, s, len);
+	return buf;
+}
+
+/**
+ * pmalloc_protect_pool - turn a read/write pool read-only
+ * @pool: the pool to protect
+ *
+ * Write-protects all the memory chunks assigned to the pool.
+ * This prevents any further allocation.
+ *
+ * Returns 0 upon success, -EINVAL in abnormal cases.
+ */
+int pmalloc_protect_pool(struct gen_pool *pool);
+
+/**
+ * pfree - mark as unused memory that was previously in use
+ * @pool: handle to the pool from which the memory was allocated
+ * @addr: the beginning of the memory area to be freed
+ *
+ * The behavior of pfree depends on the state of the protection.
+ * If the pool is not yet protected, the memory is marked as unused and
+ * will be available for further allocations.
+ * If the pool is already protected, the memory is marked as unused, but
+ * it will still be impossible to perform further allocations, because
+ * of the existing protection.
+ * The freed memory, in this case, will be truly released only when the
+ * pool is destroyed.
+ */
+static inline void pfree(struct gen_pool *pool, const void *addr)
+{
+	gen_pool_free(pool, (unsigned long)addr, 0);
+}
+
+/**
+ * pmalloc_destroy_pool - destroys a pool and all the associated memory
+ * @pool: the pool to destroy
+ *
+ * All the memory that was allocated through pmalloc in the pool will be freed.
+ *
+ * Returns 0 upon success, -EINVAL in abnormal cases.
+ */
+int pmalloc_destroy_pool(struct gen_pool *pool);
+
+#endif
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 1e5d8c3..e8171b6 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -20,6 +20,7 @@ struct notifier_block;		/* in notifier.h */
 #define VM_UNINITIALIZED	0x00000020	/* vm_struct is not fully initialized */
 #define VM_NO_GUARD		0x00000040	/* don't add guard page */
 #define VM_KASAN		0x00000080	/* has allocated kasan shadow memory */
+#define VM_PMALLOC		0x00000100	/* pmalloc area - see pmalloc.txt */
 /* bits [20..32] reserved for arch specific ioremap internals */
 
 /*
diff --git a/lib/genalloc.c b/lib/genalloc.c
index dde7830..62f69b3 100644
--- a/lib/genalloc.c
+++ b/lib/genalloc.c
@@ -519,6 +519,33 @@ void gen_pool_free(struct gen_pool *pool, unsigned long addr, size_t size)
 }
 EXPORT_SYMBOL(gen_pool_free);
 
+
+/**
+ * gen_pool_flush_chunk - drops all the allocations from a specific chunk
+ * @pool: the generic memory pool
+ * @chunk: The chunk to wipe clear.
+ *
+ * This is meant to be called only while destroying a pool. It's up to the
+ * caller to avoid races, but really, at this point the pool should have
+ * already been retired and have become unavailable for any other sort of
+ * operation.
+ */
+void gen_pool_flush_chunk(struct gen_pool *pool,
+			  struct gen_pool_chunk *chunk)
+{
+	size_t size;
+
+	if (unlikely(!(pool && chunk)))
+		return;
+
+	size = chunk->end_addr + 1 - chunk->start_addr;
+	memset(chunk->entries, 0,
+	       DIV_ROUND_UP((size >> pool->min_alloc_order) * BITS_PER_ENTRY,
+			    BITS_PER_BYTE));
+	atomic_set(&chunk->avail, size);
+}
+
+
 /**
  * gen_pool_for_each_chunk - call func for every chunk of generic memory pool
  * @pool:	the generic memory pool
diff --git a/mm/Makefile b/mm/Makefile
index e669f02..a6a47e1 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -65,6 +65,7 @@ obj-$(CONFIG_SPARSEMEM) += sparse.o
 obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
 obj-$(CONFIG_SLOB) += slob.o
 obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o
+obj-$(CONFIG_ARCH_HAS_SET_MEMORY) += pmalloc.o
 obj-$(CONFIG_KSM) += ksm.o
 obj-$(CONFIG_PAGE_POISONING) += page_poison.o
 obj-$(CONFIG_SLAB) += slab.o
diff --git a/mm/pmalloc.c b/mm/pmalloc.c
new file mode 100644
index 0000000..a64ac49
--- /dev/null
+++ b/mm/pmalloc.c
@@ -0,0 +1,513 @@
+/*
+ * pmalloc.c: Protectable Memory Allocator
+ *
+ * (C) Copyright 2017 Huawei Technologies Co. Ltd.
+ * Author: Igor Stoppa <igor.stoppa@huawei.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; version 2
+ * of the License.
+ */
+
+#include <linux/printk.h>
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/vmalloc.h>
+#include <linux/genalloc.h>
+#include <linux/kernel.h>
+#include <linux/log2.h>
+#include <linux/slab.h>
+#include <linux/device.h>
+#include <linux/atomic.h>
+#include <linux/rculist.h>
+#include <linux/set_memory.h>
+#include <asm/cacheflush.h>
+#include <asm/page.h>
+
+/**
+ * pmalloc_data contains the data specific to a pmalloc pool,
+ * in a format compatible with the design of genalloc.
+ * Some of the fields are used for exposing the corresponding parameter
+ * to userspace, through sysfs.
+ */
+struct pmalloc_data {
+	struct gen_pool *pool;	/* Link back to the associated pool. */
+	bool protected;		/* Status of the pool: RO or RW. */
+	struct kobj_attribute attr_protected;	/* Sysfs attribute. */
+	struct kobj_attribute attr_avail;	/* Sysfs attribute. */
+	struct kobj_attribute attr_size;	/* Sysfs attribute. */
+	struct kobj_attribute attr_chunks;	/* Sysfs attribute. */
+	struct kobject *pool_kobject;
+	struct list_head node; /* list of pools */
+};
+
+static LIST_HEAD(pmalloc_final_list);
+static LIST_HEAD(pmalloc_tmp_list);
+static struct list_head *pmalloc_list = &pmalloc_tmp_list;
+static DEFINE_MUTEX(pmalloc_mutex);
+static struct kobject *pmalloc_kobject;
+
+static ssize_t pmalloc_pool_show_protected(struct kobject *dev,
+					   struct kobj_attribute *attr,
+					   char *buf)
+{
+	struct pmalloc_data *data;
+
+	data = container_of(attr, struct pmalloc_data, attr_protected);
+	if (data->protected)
+		return sprintf(buf, "protected\n");
+	else
+		return sprintf(buf, "unprotected\n");
+}
+
+static ssize_t pmalloc_pool_show_avail(struct kobject *dev,
+				       struct kobj_attribute *attr,
+				       char *buf)
+{
+	struct pmalloc_data *data;
+
+	data = container_of(attr, struct pmalloc_data, attr_avail);
+	return sprintf(buf, "%lu\n", gen_pool_avail(data->pool));
+}
+
+static ssize_t pmalloc_pool_show_size(struct kobject *dev,
+				      struct kobj_attribute *attr,
+				      char *buf)
+{
+	struct pmalloc_data *data;
+
+	data = container_of(attr, struct pmalloc_data, attr_size);
+	return sprintf(buf, "%lu\n", gen_pool_size(data->pool));
+}
+
+static void pool_chunk_number(struct gen_pool *pool,
+			      struct gen_pool_chunk *chunk, void *data)
+{
+	unsigned long *counter = data;
+
+	(*counter)++;
+}
+
+static ssize_t pmalloc_pool_show_chunks(struct kobject *dev,
+					struct kobj_attribute *attr,
+					char *buf)
+{
+	struct pmalloc_data *data;
+	unsigned long chunks_num = 0;
+
+	data = container_of(attr, struct pmalloc_data, attr_chunks);
+	gen_pool_for_each_chunk(data->pool, pool_chunk_number, &chunks_num);
+	return sprintf(buf, "%lu\n", chunks_num);
+}
+
+/**
+ * Exposes the pool and its attributes through sysfs.
+ */
+static struct kobject *pmalloc_connect(struct pmalloc_data *data)
+{
+	const struct attribute *attrs[] = {
+		&data->attr_protected.attr,
+		&data->attr_avail.attr,
+		&data->attr_size.attr,
+		&data->attr_chunks.attr,
+		NULL
+	};
+	struct kobject *kobj;
+
+	kobj = kobject_create_and_add(data->pool->name, pmalloc_kobject);
+	if (unlikely(!kobj))
+		return NULL;
+
+	if (unlikely(sysfs_create_files(kobj, attrs) < 0)) {
+		kobject_put(kobj);
+		kobj = NULL;
+	}
+	return kobj;
+}
+
+/**
+ * Removes the pool and its attributes from sysfs.
+ */
+static void pmalloc_disconnect(struct pmalloc_data *data,
+			       struct kobject *kobj)
+{
+	const struct attribute *attrs[] = {
+		&data->attr_protected.attr,
+		&data->attr_avail.attr,
+		&data->attr_size.attr,
+		&data->attr_chunks.attr,
+		NULL
+	};
+
+	sysfs_remove_files(kobj, attrs);
+	kobject_put(kobj);
+}
+
+/**
+ * Declares an attribute of the pool.
+ */
+
+#define pmalloc_attr_init(data, attr_name) \
+do { \
+	sysfs_attr_init(&data->attr_##attr_name.attr); \
+	data->attr_##attr_name.attr.name = #attr_name; \
+	data->attr_##attr_name.attr.mode = VERIFY_OCTAL_PERMISSIONS(0444); \
+	data->attr_##attr_name.show = pmalloc_pool_show_##attr_name; \
+} while (0)
+
+struct gen_pool *pmalloc_create_pool(const char *name, int min_alloc_order)
+{
+	struct gen_pool *pool;
+	const char *pool_name;
+	struct pmalloc_data *data;
+
+	if (!name) {
+		WARN_ON(1);
+		return NULL;
+	}
+
+	if (min_alloc_order < 0)
+		min_alloc_order = ilog2(sizeof(unsigned long));
+
+	pool = gen_pool_create(min_alloc_order, NUMA_NO_NODE);
+	if (unlikely(!pool))
+		return NULL;
+
+	mutex_lock(&pmalloc_mutex);
+	list_for_each_entry(data, pmalloc_list, node)
+		if (!strcmp(name, data->pool->name))
+			goto same_name_err;
+
+	pool_name = kstrdup(name, GFP_KERNEL);
+	if (unlikely(!pool_name))
+		goto name_alloc_err;
+
+	data = kzalloc(sizeof(struct pmalloc_data), GFP_KERNEL);
+	if (unlikely(!data))
+		goto data_alloc_err;
+
+	data->protected = false;
+	data->pool = pool;
+	pmalloc_attr_init(data, protected);
+	pmalloc_attr_init(data, avail);
+	pmalloc_attr_init(data, size);
+	pmalloc_attr_init(data, chunks);
+	pool->data = data;
+	pool->name = pool_name;
+
+	list_add(&data->node, pmalloc_list);
+	if (pmalloc_list == &pmalloc_final_list)
+		data->pool_kobject = pmalloc_connect(data);
+	mutex_unlock(&pmalloc_mutex);
+	return pool;
+
+data_alloc_err:
+	kfree(pool_name);
+name_alloc_err:
+same_name_err:
+	mutex_unlock(&pmalloc_mutex);
+	gen_pool_destroy(pool);
+	return NULL;
+}
+
+static inline int check_alloc_params(struct gen_pool *pool, size_t req_size)
+{
+	struct pmalloc_data *data;
+	unsigned int order;
+
+	if (unlikely(!req_size || !pool))
+		return -1;
+
+	order = (unsigned int)pool->min_alloc_order;
+	data = pool->data;
+
+	if (data == NULL)
+		return -1;
+
+	if (unlikely(data->protected)) {
+		WARN_ON(1);
+		return -1;
+	}
+	return 0;
+}
+
+
+static inline bool chunk_tagging(void *chunk, bool tag)
+{
+	struct vm_struct *area;
+	struct page *page;
+
+	if (!is_vmalloc_addr(chunk))
+		return false;
+
+	page = vmalloc_to_page(chunk);
+	if (unlikely(!page))
+		return false;
+
+	area = page->area;
+	if (tag)
+		area->flags |= VM_PMALLOC;
+	else
+		area->flags &= ~VM_PMALLOC;
+	return true;
+}
+
+
+static inline bool tag_chunk(void *chunk)
+{
+	return chunk_tagging(chunk, true);
+}
+
+
+static inline bool untag_chunk(void *chunk)
+{
+	return chunk_tagging(chunk, false);
+}
+
+enum {
+	INVALID_PMALLOC_OBJECT = -1,
+	NOT_PMALLOC_OBJECT = 0,
+	VALID_PMALLOC_OBJECT = 1,
+};
+
+int is_pmalloc_object(const void *ptr, const unsigned long n)
+{
+	struct vm_struct *area;
+	struct page *page;
+	unsigned long area_start;
+	unsigned long area_end;
+	unsigned long object_start;
+	unsigned long object_end;
+
+
+	/* is_pmalloc_object gets called pretty late, so chances are high
+	 * that the object is indeed of vmalloc type
+	 */
+	if (unlikely(!is_vmalloc_addr(ptr)))
+		return NOT_PMALLOC_OBJECT;
+
+	page = vmalloc_to_page(ptr);
+	if (unlikely(!page))
+		return NOT_PMALLOC_OBJECT;
+
+	area = page->area;
+
+	if (likely(!(area->flags & VM_PMALLOC)))
+		return NOT_PMALLOC_OBJECT;
+
+	area_start = (unsigned long)area->addr;
+	area_end = area_start + area->nr_pages * PAGE_SIZE - 1;
+	object_start = (unsigned long)ptr;
+	object_end = object_start + n - 1;
+
+	if (likely((area_start <= object_start) &&
+		   (object_end <= area_end)))
+		return VALID_PMALLOC_OBJECT;
+	else
+		return INVALID_PMALLOC_OBJECT;
+}
+
+
+bool pmalloc_prealloc(struct gen_pool *pool, size_t size)
+{
+	void *chunk;
+	size_t chunk_size;
+	bool add_error;
+	unsigned int order;
+
+	if (check_alloc_params(pool, size))
+		return false;
+
+	order = (unsigned int)pool->min_alloc_order;
+
+	/* Expand pool */
+	chunk_size = roundup(size, PAGE_SIZE);
+	chunk = vmalloc(chunk_size);
+	if (unlikely(chunk == NULL))
+		return false;
+
+	/* Locking is already done inside gen_pool_add */
+	add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size,
+				 NUMA_NO_NODE);
+	if (unlikely(add_error != 0))
+		goto abort;
+
+	return true;
+abort:
+	vfree(chunk);
+	return false;
+}
+
+void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp)
+{
+	void *chunk;
+	size_t chunk_size;
+	bool add_error;
+	unsigned long retval;
+	unsigned int order;
+
+	if (check_alloc_params(pool, size))
+		return NULL;
+
+	order = (unsigned int)pool->min_alloc_order;
+
+retry_alloc_from_pool:
+	retval = gen_pool_alloc(pool, size);
+	if (retval)
+		goto return_allocation;
+
+	if (unlikely(gfp & __GFP_ATOMIC)) {
+		if (unlikely(gfp & __GFP_NOFAIL))
+			goto retry_alloc_from_pool;
+		else
+			return NULL;
+	}
+
+	/* Expand pool */
+	chunk_size = roundup(size, PAGE_SIZE);
+	chunk = vmalloc(chunk_size);
+	if (unlikely(!chunk)) {
+		if (unlikely(gfp & __GFP_NOFAIL))
+			goto retry_alloc_from_pool;
+		else
+			return NULL;
+	}
+	if (unlikely(!tag_chunk(chunk)))
+		goto free;
+
+	/* Locking is already done inside gen_pool_add */
+	add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size,
+				 NUMA_NO_NODE);
+	if (unlikely(add_error))
+		goto abort;
+
+	retval = gen_pool_alloc(pool, size);
+	if (retval) {
+return_allocation:
+		*(size_t *)retval = size;
+		if (gfp & __GFP_ZERO)
+			memset((void *)retval, 0, size);
+		return (void *)retval;
+	}
+	/* Here there is no test for __GFP_NOFAIL because, in case of
+	 * concurrent allocation, one thread might add a chunk to the
+	 * pool and this memory could be allocated by another thread,
+	 * before the first thread gets a chance to use it.
+	 * As long as vmalloc succeeds, it's ok to retry.
+	 */
+	goto retry_alloc_from_pool;
+abort:
+	untag_chunk(chunk);
+free:
+	vfree(chunk);
+	return NULL;
+}
+
+static void pmalloc_chunk_set_protection(struct gen_pool *pool,
+					 struct gen_pool_chunk *chunk,
+					 void *data)
+{
+	const bool *flag = data;
+	size_t chunk_size = chunk->end_addr + 1 - chunk->start_addr;
+	unsigned long pages = chunk_size / PAGE_SIZE;
+
+	BUG_ON(chunk_size & (PAGE_SIZE - 1));
+
+	if (*flag)
+		set_memory_ro(chunk->start_addr, pages);
+	else
+		set_memory_rw(chunk->start_addr, pages);
+}
+
+static int pmalloc_pool_set_protection(struct gen_pool *pool, bool protection)
+{
+	struct pmalloc_data *data;
+	struct gen_pool_chunk *chunk;
+
+	if (unlikely(!pool))
+		return -EINVAL;
+
+	data = pool->data;
+
+	if (unlikely(!data))
+		return -EINVAL;
+
+	if (unlikely(data->protected == protection)) {
+		WARN_ON(1);
+		return 0;
+	}
+
+	data->protected = protection;
+	list_for_each_entry(chunk, &pool->chunks, next_chunk)
+		pmalloc_chunk_set_protection(pool, chunk, &protection);
+	return 0;
+}
+
+int pmalloc_protect_pool(struct gen_pool *pool)
+{
+	return pmalloc_pool_set_protection(pool, true);
+}
+
+
+static void pmalloc_chunk_free(struct gen_pool *pool,
+			       struct gen_pool_chunk *chunk, void *data)
+{
+	untag_chunk(chunk);
+	gen_pool_flush_chunk(pool, chunk);
+	vfree_atomic((void *)chunk->start_addr);
+}
+
+
+int pmalloc_destroy_pool(struct gen_pool *pool)
+{
+	struct pmalloc_data *data;
+
+	if (unlikely(pool == NULL))
+		return -EINVAL;
+
+	data = pool->data;
+
+	if (unlikely(data == NULL))
+		return -EINVAL;
+
+	mutex_lock(&pmalloc_mutex);
+	list_del(&data->node);
+	mutex_unlock(&pmalloc_mutex);
+
+	if (likely(data->pool_kobject))
+		pmalloc_disconnect(data, data->pool_kobject);
+
+	pmalloc_pool_set_protection(pool, false);
+	gen_pool_for_each_chunk(pool, pmalloc_chunk_free, NULL);
+	gen_pool_destroy(pool);
+	kfree(data);
+	return 0;
+}
+
+/**
+ * When sysfs is ready to receive registrations, connect all the
+ * pools previously created. Also enable further pools to be connected
+ * right away.
+ */
+static int __init pmalloc_late_init(void)
+{
+	struct pmalloc_data *data, *n;
+
+	pmalloc_kobject = kobject_create_and_add("pmalloc", kernel_kobj);
+
+	mutex_lock(&pmalloc_mutex);
+	pmalloc_list = &pmalloc_final_list;
+
+	if (likely(pmalloc_kobject != NULL)) {
+		list_for_each_entry_safe(data, n, &pmalloc_tmp_list, node) {
+			list_move(&data->node, &pmalloc_final_list);
+			pmalloc_connect(data);
+		}
+	}
+	mutex_unlock(&pmalloc_mutex);
+	return 0;
+}
+late_initcall(pmalloc_late_init);
diff --git a/mm/usercopy.c b/mm/usercopy.c
index a9852b2..c3b1029 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -15,6 +15,7 @@
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
 #include <linux/mm.h>
+#include <linux/pmalloc.h>
 #include <linux/slab.h>
 #include <linux/sched.h>
 #include <linux/sched/task.h>
@@ -222,6 +223,7 @@ static inline const char *check_heap_object(const void *ptr, unsigned long n,
 void __check_object_size(const void *ptr, unsigned long n, bool to_user)
 {
 	const char *err;
+	int retv;
 
 	/* Skip all tests if size is zero. */
 	if (!n)
@@ -229,12 +231,12 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user)
 
 	/* Check for invalid addresses. */
 	err = check_bogus_address(ptr, n);
-	if (err)
+	if (unlikely(err))
 		goto report;
 
 	/* Check for bad heap object. */
 	err = check_heap_object(ptr, n, to_user);
-	if (err)
+	if (unlikely(err))
 		goto report;
 
 	/* Check for bad stack object. */
@@ -257,8 +259,23 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user)
 
 	/* Check for object in kernel to avoid text exposure. */
 	err = check_kernel_text_object(ptr, n);
-	if (!err)
-		return;
+	if (unlikely(err))
+		goto report;
+
+	/* Check if object is from a pmalloc chunk. */
+	retv = is_pmalloc_object(ptr, n);
+	if (unlikely(retv)) {
+		if (unlikely(!to_user)) {
+			err = "<trying to write to pmalloc object>";
+			goto report;
+		}
+		if (retv < 0) {
+			err = "<invalid pmalloc object>";
+			goto report;
+		}
+	}
+	return;
 
 report:
	report_usercopy(ptr, n, to_user, err);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 84+ messages in thread
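
A note on the bitmap arithmetic in the gen_pool_flush_chunk hunk above: the entries bitmap holds BITS_PER_ENTRY bits per minimum-order allocation unit, so its length in bytes is DIV_ROUND_UP((size >> min_alloc_order) * BITS_PER_ENTRY, BITS_PER_BYTE), and the shift has to be parenthesized because '*' binds more tightly than '>>'. The standalone sketch below (userspace C, with BITS_PER_ENTRY assumed to be 2 purely for illustration) shows how far off the unparenthesized form would be.

#include <stdio.h>
#include <stddef.h>

#define BITS_PER_ENTRY 2	/* assumed value, for illustration only */
#define BITS_PER_BYTE  8
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	size_t size = 4096;	/* one PAGE_SIZE chunk */
	int order = 3;		/* min_alloc_order: 8-byte allocation units */

	/* Correct: units first, then bits, then round up to bytes. */
	size_t ok = DIV_ROUND_UP((size >> order) * BITS_PER_ENTRY,
				 BITS_PER_BYTE);

	/* Unparenthesized: '*' binds tighter than '>>', so the shift
	 * amount becomes order * BITS_PER_ENTRY, shrinking the result. */
	size_t bad = DIV_ROUND_UP(size >> order * BITS_PER_ENTRY,
				  BITS_PER_BYTE);

	/* Prints: bitmap bytes: correct=128, unparenthesized=8 */
	printf("bitmap bytes: correct=%zu, unparenthesized=%zu\n", ok, bad);
	return 0;
}
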
+ */ + goto retry_alloc_from_pool; +abort: + untag_chunk(chunk); +free: + vfree(chunk); + return NULL; +} + +static void pmalloc_chunk_set_protection(struct gen_pool *pool, + + struct gen_pool_chunk *chunk, + void *data) +{ + const bool *flag = data; + size_t chunk_size = chunk->end_addr + 1 - chunk->start_addr; + unsigned long pages = chunk_size / PAGE_SIZE; + + BUG_ON(chunk_size & (PAGE_SIZE - 1)); + + if (*flag) + set_memory_ro(chunk->start_addr, pages); + else + set_memory_rw(chunk->start_addr, pages); +} + +static int pmalloc_pool_set_protection(struct gen_pool *pool, bool protection) +{ + struct pmalloc_data *data; + struct gen_pool_chunk *chunk; + + if (unlikely(!pool)) + return -EINVAL; + + data = pool->data; + + if (unlikely(!data)) + return -EINVAL; + + if (unlikely(data->protected == protection)) { + WARN_ON(1); + return 0; + } + + data->protected = protection; + list_for_each_entry(chunk, &(pool)->chunks, next_chunk) + pmalloc_chunk_set_protection(pool, chunk, &protection); + return 0; +} + +int pmalloc_protect_pool(struct gen_pool *pool) +{ + return pmalloc_pool_set_protection(pool, true); +} + + +static void pmalloc_chunk_free(struct gen_pool *pool, + struct gen_pool_chunk *chunk, void *data) +{ + untag_chunk(chunk); + gen_pool_flush_chunk(pool, chunk); + vfree_atomic((void *)chunk->start_addr); +} + + +int pmalloc_destroy_pool(struct gen_pool *pool) +{ + struct pmalloc_data *data; + + if (unlikely(pool == NULL)) + return -EINVAL; + + data = pool->data; + + if (unlikely(data == NULL)) + return -EINVAL; + + mutex_lock(&pmalloc_mutex); + list_del(&data->node); + mutex_unlock(&pmalloc_mutex); + + if (likely(data->pool_kobject)) + pmalloc_disconnect(data, data->pool_kobject); + + pmalloc_pool_set_protection(pool, false); + gen_pool_for_each_chunk(pool, pmalloc_chunk_free, NULL); + gen_pool_destroy(pool); + kfree(data); + return 0; +} + +/** + * When the sysfs is ready to receive registrations, connect all the + * pools previously created. Also enable further pools to be connected + * right away. + */ +static int __init pmalloc_late_init(void) +{ + struct pmalloc_data *data, *n; + + pmalloc_kobject = kobject_create_and_add("pmalloc", kernel_kobj); + + mutex_lock(&pmalloc_mutex); + pmalloc_list = &pmalloc_final_list; + + if (likely(pmalloc_kobject != NULL)) { + list_for_each_entry_safe(data, n, &pmalloc_tmp_list, node) { + list_move(&data->node, &pmalloc_final_list); + pmalloc_connect(data); + } + } + mutex_unlock(&pmalloc_mutex); + return 0; +} +late_initcall(pmalloc_late_init); diff --git a/mm/usercopy.c b/mm/usercopy.c index a9852b2..c3b1029 100644 --- a/mm/usercopy.c +++ b/mm/usercopy.c @@ -15,6 +15,7 @@ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt #include <linux/mm.h> +#include <linux/pmalloc.h> #include <linux/slab.h> #include <linux/sched.h> #include <linux/sched/task.h> @@ -222,6 +223,7 @@ static inline const char *check_heap_object(const void *ptr, unsigned long n, void __check_object_size(const void *ptr, unsigned long n, bool to_user) { const char *err; + int retv; /* Skip all tests if size is zero. */ if (!n) @@ -229,12 +231,12 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user) /* Check for invalid addresses. */ err = check_bogus_address(ptr, n); - if (err) + if (unlikely(err)) goto report; /* Check for bad heap object. */ err = check_heap_object(ptr, n, to_user); - if (err) + if (unlikely(err)) goto report; /* Check for bad stack object. 
 */
@@ -257,8 +259,23 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user)
 
 	/* Check for object in kernel to avoid text exposure. */
 	err = check_kernel_text_object(ptr, n);
-	if (!err)
-		return;
+	if (unlikely(err))
+		goto report;
+
+	/* Check if object is from a pmalloc chunk.
+	 */
+	retv = is_pmalloc_object(ptr, n);
+	if (unlikely(retv)) {
+		if (unlikely(!to_user)) {
+			err = "<trying to write to pmalloc object>";
+			goto report;
+		}
+		if (retv < 0) {
+			err = "<invalid pmalloc object>";
+			goto report;
+		}
+	}
+	return;
 
 report:
 	report_usercopy(ptr, n, to_user, err);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 84+ messages in thread
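For orientation: the API in the patch above composes into a write-once
pattern. The sketch below is not part of the series; the pool name, the
struct and its value are invented for illustration, and only the pmalloc
calls are taken from the patch:

#include <linux/init.h>
#include <linux/pmalloc.h>

struct demo_cfg {			/* invented example payload */
	unsigned long magic;
};

static struct gen_pool *demo_pool;
static struct demo_cfg *cfg;

static int __init demo_init(void)
{
	/* "demo" is an arbitrary, unique pool name */
	demo_pool = pmalloc_create_pool("demo",
					PMALLOC_DEFAULT_ALLOC_ORDER);
	if (!demo_pool)
		return -ENOMEM;

	cfg = pzalloc(demo_pool, sizeof(*cfg), GFP_KERNEL);
	if (!cfg) {
		pmalloc_destroy_pool(demo_pool);
		return -ENOMEM;
	}

	cfg->magic = 0xbadcafe;			/* last write ... */
	return pmalloc_protect_pool(demo_pool);	/* ... then R/O */
}

After pmalloc_protect_pool() the pool accepts no further pmalloc() calls
and writes through the mapping fault; the only remaining transition is
destruction.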
* [PATCH 4/6] Protectable Memory
@ 2018-01-30 15:14 ` Igor Stoppa
  0 siblings, 0 replies; 84+ messages in thread
From: Igor Stoppa @ 2018-01-30 15:14 UTC (permalink / raw)
To: linux-security-module

The MMU available in many systems running Linux can often provide R/O
protection to the memory pages it handles.
However, the MMU-based protection works efficiently only when said pages
contain exclusively data that will not need further modifications.

Statically allocated variables can be segregated into a dedicated
section, but this does not sit very well with dynamically allocated ones.
Dynamic allocation currently provides no means of grouping variables in
memory pages that contain exclusively data suitable for conversion to
read-only access mode.

The allocator provided here (pmalloc - protectable memory allocator)
introduces the concept of pools of protectable memory.
A module can request a pool and then direct any allocation request to
the pool handler it has received.
Once all the chunks of memory associated with a specific pool are
initialized, the pool can be protected.
After this point, the pool can only be destroyed (it is up to the module
to avoid any further references to the memory from the pool after the
destruction is invoked).
The latter case is mainly meant for releasing memory when a module is
unloaded.

A module can have as many pools as needed, for example to support the
protection of data that is initialized in sufficiently distinct phases.

Signed-off-by: Igor Stoppa <igor.stoppa@huawei.com>
---
 include/linux/genalloc.h |   3 +
 include/linux/pmalloc.h  | 216 ++++++++++++++++
 include/linux/vmalloc.h  |   1 +
 lib/genalloc.c           |  27 +++
 mm/Makefile              |   1 +
 mm/pmalloc.c             | 513 +++++++++++++++++++++++++++++++++++++++++++++++
 mm/usercopy.c            |  25 ++-
 7 files changed, 782 insertions(+), 4 deletions(-)
 create mode 100644 include/linux/pmalloc.h
 create mode 100644 mm/pmalloc.c

diff --git a/include/linux/genalloc.h b/include/linux/genalloc.h
index 0377681..a486a26 100644
--- a/include/linux/genalloc.h
+++ b/include/linux/genalloc.h
@@ -121,6 +121,9 @@ extern unsigned long gen_pool_alloc_algo(struct gen_pool *, size_t,
 extern void *gen_pool_dma_alloc(struct gen_pool *pool, size_t size,
 		dma_addr_t *dma);
 extern void gen_pool_free(struct gen_pool *, unsigned long, size_t);
+
+extern void gen_pool_flush_chunk(struct gen_pool *pool,
+				 struct gen_pool_chunk *chunk);
 extern void gen_pool_for_each_chunk(struct gen_pool *,
 	void (*)(struct gen_pool *, struct gen_pool_chunk *, void *), void *);
 extern size_t gen_pool_avail(struct gen_pool *);
diff --git a/include/linux/pmalloc.h b/include/linux/pmalloc.h
new file mode 100644
index 0000000..ad7d557
--- /dev/null
+++ b/include/linux/pmalloc.h
@@ -0,0 +1,216 @@
+/*
+ * pmalloc.h: Header for Protectable Memory Allocator
+ *
+ * (C) Copyright 2017 Huawei Technologies Co. Ltd.
+ * Author: Igor Stoppa <igor.stoppa@huawei.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; version 2
+ * of the License.
+ */
+
+#ifndef _PMALLOC_H
+#define _PMALLOC_H
+
+
+#include <linux/genalloc.h>
+#include <linux/string.h>
+#include <linux/gfp.h>
+
+#define PMALLOC_DEFAULT_ALLOC_ORDER (-1)
+
+/*
+ * Library for dynamic allocation of pools of memory that can be,
+ * after initialization, marked as read-only.
+ *
+ * This is intended to complement __ro_after_init, for those cases
+ * where either it is not possible to know the initialization value before
+ * init is completed, or the amount of data is variable and can be
+ * determined only at run-time.
+ *
+ * ***WARNING***
+ * The user of the API is expected to synchronize:
+ * 1) allocation,
+ * 2) writes to the allocated memory,
+ * 3) write protection of the pool,
+ * 4) freeing of the allocated memory, and
+ * 5) destruction of the pool.
+ *
+ * For a non-threaded scenario, this type of locking is not even required.
+ *
+ * Even if the library were to provide support for locking, point 2)
+ * would still depend on the user taking the lock.
+ */
+
+
+/**
+ * pmalloc_create_pool - create a new protectable memory pool
+ * @name: the name of the pool, must be unique
+ * @min_alloc_order: log2 of the minimum allocation size obtainable
+ *                   from the pool
+ *
+ * Creates a new (empty) memory pool for allocation of protectable
+ * memory. Memory will be allocated upon request (through pmalloc).
+ *
+ * Returns a pointer to the new pool upon success, otherwise NULL.
+ */
+struct gen_pool *pmalloc_create_pool(const char *name,
+				     int min_alloc_order);
+
+
+int is_pmalloc_object(const void *ptr, const unsigned long n);
+
+/**
+ * pmalloc_prealloc - tries to allocate a memory chunk of the requested size
+ * @pool: handler to the pool to be used for memory allocation
+ * @size: amount of memory (in bytes) requested
+ *
+ * Prepares a chunk of the requested size.
+ * This is intended to both minimize latency in later memory requests and
+ * avoid sleeping during allocation.
+ * Memory allocated with prealloc is stored in one single chunk, as
+ * opposed to what is allocated on-demand when pmalloc runs out of free
+ * space already existing in the pool and has to invoke vmalloc.
+ *
+ * Returns true if the vmalloc call was successful, false otherwise.
+ */
+bool pmalloc_prealloc(struct gen_pool *pool, size_t size);
+
+/**
+ * pmalloc - allocate protectable memory from a pool
+ * @pool: handler to the pool to be used for memory allocation
+ * @size: amount of memory (in bytes) requested
+ * @gfp: flags for page allocation
+ *
+ * Allocates memory from an unprotected pool. If the pool doesn't have
+ * enough memory, and the request did not include GFP_ATOMIC, an attempt
+ * is made to add a new chunk of memory to the pool
+ * (a multiple of PAGE_SIZE), in order to fit the new request.
+ * Otherwise, NULL is returned.
+ *
+ * Returns the pointer to the memory requested upon success,
+ * NULL otherwise (either no memory available or pool already read-only).
+ */
+static inline void *pzalloc(struct gen_pool *pool, size_t size, gfp_t gfp)
+{
+	return pmalloc(pool, size, gfp | __GFP_ZERO);
+}
+
+/**
+ * pmalloc_array - allocates an array according to the parameters
+ * @pool: handler to the pool to be used for memory allocation
+ * @n: number of elements in the array, each of @size bytes
+ * @flags: flags for page allocation
+ *
+ * Executes pmalloc, if it has a chance to succeed.
+ *
+ * Returns either NULL or the pmalloc result.
+ */
+static inline void *pmalloc_array(struct gen_pool *pool, size_t n,
+				  size_t size, gfp_t flags)
+{
+	if (unlikely(!(pool && n && size)))
+		return NULL;
+	return pmalloc(pool, n * size, flags);
+}
+
+/**
+ * pcalloc - allocates a 0-initialized array according to the parameters
+ * @pool: handler to the pool to be used for memory allocation
+ * @n: number of elements in the array, each of @size bytes
+ * @flags: flags for page allocation
+ *
+ * Executes pmalloc, if it has a chance to succeed.
+ *
+ * Returns either NULL or the pmalloc result.
+ */
+static inline void *pcalloc(struct gen_pool *pool, size_t n,
+			    size_t size, gfp_t flags)
+{
+	return pmalloc_array(pool, n, size, flags | __GFP_ZERO);
+}
+
+/**
+ * pstrdup - duplicate a string, using pmalloc as allocator
+ * @pool: handler to the pool to be used for memory allocation
+ * @s: string to duplicate
+ * @gfp: flags for page allocation
+ *
+ * Generates a copy of the given string, allocating sufficient memory
+ * from the given pmalloc pool.
+ *
+ * Returns a pointer to the replica, NULL in case of recoverable error.
+ */
+static inline char *pstrdup(struct gen_pool *pool, const char *s, gfp_t gfp)
+{
+	size_t len;
+	char *buf;
+
+	if (unlikely(pool == NULL || s == NULL))
+		return NULL;
+
+	len = strlen(s) + 1;
+	buf = pmalloc(pool, len, gfp);
+	if (likely(buf))
+		strncpy(buf, s, len);
+	return buf;
+}
+
+/**
+ * pmalloc_protect_pool - turn a read/write pool read-only
+ * @pool: the pool to protect
+ *
+ * Write-protects all the memory chunks assigned to the pool.
+ * This prevents any further allocation.
+ *
+ * Returns 0 upon success, -EINVAL in abnormal cases.
+ */
+int pmalloc_protect_pool(struct gen_pool *pool);
+
+/**
+ * pfree - mark as unused memory that was previously in use
+ * @pool: handler to the pool to be used for memory allocation
+ * @addr: the beginning of the memory area to be freed
+ *
+ * The behavior of pfree is different, depending on the protection state
+ * of the pool.
+ * If the pool is not yet protected, the memory is marked as unused and
+ * will be available for further allocations.
+ * If the pool is already protected, the memory is marked as unused, but
+ * it will still be impossible to perform further allocation, because of
+ * the existing protection.
+ * The freed memory, in this case, will be truly released only when the
+ * pool is destroyed.
+ */
+static inline void pfree(struct gen_pool *pool, const void *addr)
+{
+	gen_pool_free(pool, (unsigned long)addr, 0);
+}
+
+/**
+ * pmalloc_destroy_pool - destroys a pool and all the associated memory
+ * @pool: the pool to destroy
+ *
+ * All the memory that was allocated through pmalloc in the pool will be freed.
+ *
+ * Returns 0 upon success, -EINVAL in abnormal cases.
+ */
+int pmalloc_destroy_pool(struct gen_pool *pool);
+
+#endif
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 1e5d8c3..e8171b6 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -20,6 +20,7 @@ struct notifier_block;		/* in notifier.h */
 #define VM_UNINITIALIZED	0x00000020	/* vm_struct is not fully initialized */
 #define VM_NO_GUARD		0x00000040	/* don't add guard page */
 #define VM_KASAN		0x00000080	/* has allocated kasan shadow memory */
+#define VM_PMALLOC		0x00000100	/* pmalloc area - see pmalloc.txt */
 /* bits [20..32] reserved for arch specific ioremap internals */
 
 /*
diff --git a/lib/genalloc.c b/lib/genalloc.c
index dde7830..62f69b3 100644
--- a/lib/genalloc.c
+++ b/lib/genalloc.c
@@ -519,6 +519,33 @@ void gen_pool_free(struct gen_pool *pool, unsigned long addr, size_t size)
 }
 EXPORT_SYMBOL(gen_pool_free);
 
+
+/**
+ * gen_pool_flush_chunk - drops all the allocations from a specific chunk
+ * @pool: the generic memory pool
+ * @chunk: the chunk to wipe clear
+ *
+ * This is meant to be called only while destroying a pool. It's up to the
+ * caller to avoid races, but really, at this point the pool should have
+ * already been retired and have become unavailable for any other sort of
+ * operation.
+ */
+void gen_pool_flush_chunk(struct gen_pool *pool,
+			  struct gen_pool_chunk *chunk)
+{
+	size_t size;
+
+	if (unlikely(!(pool && chunk)))
+		return;
+
+	size = chunk->end_addr + 1 - chunk->start_addr;
+	memset(chunk->entries, 0,
+	       DIV_ROUND_UP((size >> pool->min_alloc_order) * BITS_PER_ENTRY,
+			    BITS_PER_BYTE));
+	atomic_set(&chunk->avail, size);
+}
+
+
 /**
  * gen_pool_for_each_chunk - call func for every chunk of generic memory pool
  * @pool:	the generic memory pool
diff --git a/mm/Makefile b/mm/Makefile
index e669f02..a6a47e1 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -65,6 +65,7 @@ obj-$(CONFIG_SPARSEMEM)	+= sparse.o
 obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
 obj-$(CONFIG_SLOB) += slob.o
 obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o
+obj-$(CONFIG_ARCH_HAS_SET_MEMORY) += pmalloc.o
 obj-$(CONFIG_KSM) += ksm.o
 obj-$(CONFIG_PAGE_POISONING) += page_poison.o
 obj-$(CONFIG_SLAB) += slab.o
diff --git a/mm/pmalloc.c b/mm/pmalloc.c
new file mode 100644
index 0000000..a64ac49
--- /dev/null
+++ b/mm/pmalloc.c
@@ -0,0 +1,513 @@
+/*
+ * pmalloc.c: Protectable Memory Allocator
+ *
+ * (C) Copyright 2017 Huawei Technologies Co. Ltd.
+ * Author: Igor Stoppa <igor.stoppa@huawei.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; version 2
+ * of the License.
+ */
+
+#include <linux/printk.h>
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/vmalloc.h>
+#include <linux/genalloc.h>
+#include <linux/kernel.h>
+#include <linux/log2.h>
+#include <linux/slab.h>
+#include <linux/device.h>
+#include <linux/atomic.h>
+#include <linux/rculist.h>
+#include <linux/set_memory.h>
+#include <asm/cacheflush.h>
+#include <asm/page.h>
+
+/**
+ * pmalloc_data contains the data specific to a pmalloc pool,
+ * in a format compatible with the design of genalloc.
+ * Some of the fields are used for exposing the corresponding parameter
+ * to userspace, through sysfs.
+ */
+struct pmalloc_data {
+	struct gen_pool *pool;  /* Link back to the associated pool. */
+	bool protected;     /* Status of the pool: RO or RW. */
+	struct kobj_attribute attr_protected; /* Sysfs attribute. */
+	struct kobj_attribute attr_avail;     /* Sysfs attribute.
*/ + struct kobj_attribute attr_size; /* Sysfs attribute. */ + struct kobj_attribute attr_chunks; /* Sysfs attribute. */ + struct kobject *pool_kobject; + struct list_head node; /* list of pools */ +}; + +static LIST_HEAD(pmalloc_final_list); +static LIST_HEAD(pmalloc_tmp_list); +static struct list_head *pmalloc_list = &pmalloc_tmp_list; +static DEFINE_MUTEX(pmalloc_mutex); +static struct kobject *pmalloc_kobject; + +static ssize_t pmalloc_pool_show_protected(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_protected); + if (data->protected) + return sprintf(buf, "protected\n"); + else + return sprintf(buf, "unprotected\n"); +} + +static ssize_t pmalloc_pool_show_avail(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_avail); + return sprintf(buf, "%lu\n", gen_pool_avail(data->pool)); +} + +static ssize_t pmalloc_pool_show_size(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_size); + return sprintf(buf, "%lu\n", gen_pool_size(data->pool)); +} + +static void pool_chunk_number(struct gen_pool *pool, + struct gen_pool_chunk *chunk, void *data) +{ + unsigned long *counter = data; + + (*counter)++; +} + +static ssize_t pmalloc_pool_show_chunks(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + unsigned long chunks_num = 0; + + data = container_of(attr, struct pmalloc_data, attr_chunks); + gen_pool_for_each_chunk(data->pool, pool_chunk_number, &chunks_num); + return sprintf(buf, "%lu\n", chunks_num); +} + +/** + * Exposes the pool and its attributes through sysfs. + */ +static struct kobject *pmalloc_connect(struct pmalloc_data *data) +{ + const struct attribute *attrs[] = { + &data->attr_protected.attr, + &data->attr_avail.attr, + &data->attr_size.attr, + &data->attr_chunks.attr, + NULL + }; + struct kobject *kobj; + + kobj = kobject_create_and_add(data->pool->name, pmalloc_kobject); + if (unlikely(!kobj)) + return NULL; + + if (unlikely(sysfs_create_files(kobj, attrs) < 0)) { + kobject_put(kobj); + kobj = NULL; + } + return kobj; +} + +/** + * Removes the pool and its attributes from sysfs. + */ +static void pmalloc_disconnect(struct pmalloc_data *data, + struct kobject *kobj) +{ + const struct attribute *attrs[] = { + &data->attr_protected.attr, + &data->attr_avail.attr, + &data->attr_size.attr, + &data->attr_chunks.attr, + NULL + }; + + sysfs_remove_files(kobj, attrs); + kobject_put(kobj); +} + +/** + * Declares an attribute of the pool. 
+ */ + +#define pmalloc_attr_init(data, attr_name) \ +do { \ + sysfs_attr_init(&data->attr_##attr_name.attr); \ + data->attr_##attr_name.attr.name = #attr_name; \ + data->attr_##attr_name.attr.mode = VERIFY_OCTAL_PERMISSIONS(0444); \ + data->attr_##attr_name.show = pmalloc_pool_show_##attr_name; \ +} while (0) + +struct gen_pool *pmalloc_create_pool(const char *name, int min_alloc_order) +{ + struct gen_pool *pool; + const char *pool_name; + struct pmalloc_data *data; + + if (!name) { + WARN_ON(1); + return NULL; + } + + if (min_alloc_order < 0) + min_alloc_order = ilog2(sizeof(unsigned long)); + + pool = gen_pool_create(min_alloc_order, NUMA_NO_NODE); + if (unlikely(!pool)) + return NULL; + + mutex_lock(&pmalloc_mutex); + list_for_each_entry(data, pmalloc_list, node) + if (!strcmp(name, data->pool->name)) + goto same_name_err; + + pool_name = kstrdup(name, GFP_KERNEL); + if (unlikely(!pool_name)) + goto name_alloc_err; + + data = kzalloc(sizeof(struct pmalloc_data), GFP_KERNEL); + if (unlikely(!data)) + goto data_alloc_err; + + data->protected = false; + data->pool = pool; + pmalloc_attr_init(data, protected); + pmalloc_attr_init(data, avail); + pmalloc_attr_init(data, size); + pmalloc_attr_init(data, chunks); + pool->data = data; + pool->name = pool_name; + + list_add(&data->node, pmalloc_list); + if (pmalloc_list == &pmalloc_final_list) + data->pool_kobject = pmalloc_connect(data); + mutex_unlock(&pmalloc_mutex); + return pool; + +data_alloc_err: + kfree(pool_name); +name_alloc_err: +same_name_err: + mutex_unlock(&pmalloc_mutex); + gen_pool_destroy(pool); + return NULL; +} + +static inline int check_alloc_params(struct gen_pool *pool, size_t req_size) +{ + struct pmalloc_data *data; + unsigned int order; + + if (unlikely(!req_size || !pool)) + return -1; + + order = (unsigned int)pool->min_alloc_order; + data = pool->data; + + if (data == NULL) + return -1; + + if (unlikely(data->protected)) { + WARN_ON(1); + return -1; + } + return 0; +} + + +static inline bool chunk_tagging(void *chunk, bool tag) +{ + struct vm_struct *area; + struct page *page; + + if (!is_vmalloc_addr(chunk)) + return false; + + page = vmalloc_to_page(chunk); + if (unlikely(!page)) + return false; + + area = page->area; + if (tag) + area->flags |= VM_PMALLOC; + else + area->flags &= ~VM_PMALLOC; + return true; +} + + +static inline bool tag_chunk(void *chunk) +{ + return chunk_tagging(chunk, true); +} + + +static inline bool untag_chunk(void *chunk) +{ + return chunk_tagging(chunk, false); +} + +enum { + INVALID_PMALLOC_OBJECT = -1, + NOT_PMALLOC_OBJECT = 0, + VALID_PMALLOC_OBJECT = 1, +}; + +int is_pmalloc_object(const void *ptr, const unsigned long n) +{ + struct vm_struct *area; + struct page *page; + unsigned long area_start; + unsigned long area_end; + unsigned long object_start; + unsigned long object_end; + + + /* is_pmalloc_object gets called pretty late, so chances are high + * that the object is indeed of vmalloc type + */ + if (unlikely(!is_vmalloc_addr(ptr))) + return NOT_PMALLOC_OBJECT; + + page = vmalloc_to_page(ptr); + if (unlikely(!page)) + return NOT_PMALLOC_OBJECT; + + area = page->area; + + if (likely(!(area->flags & VM_PMALLOC))) + return NOT_PMALLOC_OBJECT; + + area_start = (unsigned long)area->addr; + area_end = area_start + area->nr_pages * PAGE_SIZE - 1; + object_start = (unsigned long)ptr; + object_end = object_start + n - 1; + + if (likely((area_start <= object_start) && + (object_end <= area_end))) + return VALID_PMALLOC_OBJECT; + else + return INVALID_PMALLOC_OBJECT; +} + + +bool 
pmalloc_prealloc(struct gen_pool *pool, size_t size) +{ + void *chunk; + size_t chunk_size; + bool add_error; + unsigned int order; + + if (check_alloc_params(pool, size)) + return false; + + order = (unsigned int)pool->min_alloc_order; + + /* Expand pool */ + chunk_size = roundup(size, PAGE_SIZE); + chunk = vmalloc(chunk_size); + if (unlikely(chunk == NULL)) + return false; + + /* Locking is already done inside gen_pool_add */ + add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size, + NUMA_NO_NODE); + if (unlikely(add_error != 0)) + goto abort; + + return true; +abort: + vfree(chunk); + return false; + +} + +void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp) +{ + void *chunk; + size_t chunk_size; + bool add_error; + unsigned long retval; + unsigned int order; + + if (check_alloc_params(pool, size)) + return NULL; + + order = (unsigned int)pool->min_alloc_order; + +retry_alloc_from_pool: + retval = gen_pool_alloc(pool, size); + if (retval) + goto return_allocation; + + if (unlikely((gfp & __GFP_ATOMIC))) { + if (unlikely((gfp & __GFP_NOFAIL))) + goto retry_alloc_from_pool; + else + return NULL; + } + + /* Expand pool */ + chunk_size = roundup(size, PAGE_SIZE); + chunk = vmalloc(chunk_size); + if (unlikely(!chunk)) { + if (unlikely((gfp & __GFP_NOFAIL))) + goto retry_alloc_from_pool; + else + return NULL; + } + if (unlikely(!tag_chunk(chunk))) + goto free; + + /* Locking is already done inside gen_pool_add */ + add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size, + NUMA_NO_NODE); + if (unlikely(add_error)) + goto abort; + + retval = gen_pool_alloc(pool, size); + if (retval) { +return_allocation: + *(size_t *)retval = size; + if (gfp & __GFP_ZERO) + memset((void *)retval, 0, size); + return (void *)retval; + } + /* Here there is no test for __GFP_NO_FAIL because, in case of + * concurrent allocation, one thread might add a chunk to the + * pool and this memory could be allocated by another thread, + * before the first thread gets a chance to use it. + * As long as vmalloc succeeds, it's ok to retry. 
+ */ + goto retry_alloc_from_pool; +abort: + untag_chunk(chunk); +free: + vfree(chunk); + return NULL; +} + +static void pmalloc_chunk_set_protection(struct gen_pool *pool, + + struct gen_pool_chunk *chunk, + void *data) +{ + const bool *flag = data; + size_t chunk_size = chunk->end_addr + 1 - chunk->start_addr; + unsigned long pages = chunk_size / PAGE_SIZE; + + BUG_ON(chunk_size & (PAGE_SIZE - 1)); + + if (*flag) + set_memory_ro(chunk->start_addr, pages); + else + set_memory_rw(chunk->start_addr, pages); +} + +static int pmalloc_pool_set_protection(struct gen_pool *pool, bool protection) +{ + struct pmalloc_data *data; + struct gen_pool_chunk *chunk; + + if (unlikely(!pool)) + return -EINVAL; + + data = pool->data; + + if (unlikely(!data)) + return -EINVAL; + + if (unlikely(data->protected == protection)) { + WARN_ON(1); + return 0; + } + + data->protected = protection; + list_for_each_entry(chunk, &(pool)->chunks, next_chunk) + pmalloc_chunk_set_protection(pool, chunk, &protection); + return 0; +} + +int pmalloc_protect_pool(struct gen_pool *pool) +{ + return pmalloc_pool_set_protection(pool, true); +} + + +static void pmalloc_chunk_free(struct gen_pool *pool, + struct gen_pool_chunk *chunk, void *data) +{ + untag_chunk(chunk); + gen_pool_flush_chunk(pool, chunk); + vfree_atomic((void *)chunk->start_addr); +} + + +int pmalloc_destroy_pool(struct gen_pool *pool) +{ + struct pmalloc_data *data; + + if (unlikely(pool == NULL)) + return -EINVAL; + + data = pool->data; + + if (unlikely(data == NULL)) + return -EINVAL; + + mutex_lock(&pmalloc_mutex); + list_del(&data->node); + mutex_unlock(&pmalloc_mutex); + + if (likely(data->pool_kobject)) + pmalloc_disconnect(data, data->pool_kobject); + + pmalloc_pool_set_protection(pool, false); + gen_pool_for_each_chunk(pool, pmalloc_chunk_free, NULL); + gen_pool_destroy(pool); + kfree(data); + return 0; +} + +/** + * When the sysfs is ready to receive registrations, connect all the + * pools previously created. Also enable further pools to be connected + * right away. + */ +static int __init pmalloc_late_init(void) +{ + struct pmalloc_data *data, *n; + + pmalloc_kobject = kobject_create_and_add("pmalloc", kernel_kobj); + + mutex_lock(&pmalloc_mutex); + pmalloc_list = &pmalloc_final_list; + + if (likely(pmalloc_kobject != NULL)) { + list_for_each_entry_safe(data, n, &pmalloc_tmp_list, node) { + list_move(&data->node, &pmalloc_final_list); + pmalloc_connect(data); + } + } + mutex_unlock(&pmalloc_mutex); + return 0; +} +late_initcall(pmalloc_late_init); diff --git a/mm/usercopy.c b/mm/usercopy.c index a9852b2..c3b1029 100644 --- a/mm/usercopy.c +++ b/mm/usercopy.c @@ -15,6 +15,7 @@ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt #include <linux/mm.h> +#include <linux/pmalloc.h> #include <linux/slab.h> #include <linux/sched.h> #include <linux/sched/task.h> @@ -222,6 +223,7 @@ static inline const char *check_heap_object(const void *ptr, unsigned long n, void __check_object_size(const void *ptr, unsigned long n, bool to_user) { const char *err; + int retv; /* Skip all tests if size is zero. */ if (!n) @@ -229,12 +231,12 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user) /* Check for invalid addresses. */ err = check_bogus_address(ptr, n); - if (err) + if (unlikely(err)) goto report; /* Check for bad heap object. */ err = check_heap_object(ptr, n, to_user); - if (err) + if (unlikely(err)) goto report; /* Check for bad stack object. 
 */
@@ -257,8 +259,23 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user)
 
 	/* Check for object in kernel to avoid text exposure. */
 	err = check_kernel_text_object(ptr, n);
-	if (!err)
-		return;
+	if (unlikely(err))
+		goto report;
+
+	/* Check if object is from a pmalloc chunk.
+	 */
+	retv = is_pmalloc_object(ptr, n);
+	if (unlikely(retv)) {
+		if (unlikely(!to_user)) {
+			err = "<trying to write to pmalloc object>";
+			goto report;
+		}
+		if (retv < 0) {
+			err = "<invalid pmalloc object>";
+			goto report;
+		}
+	}
+	return;
 
 report:
 	report_usercopy(ptr, n, to_user, err);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 84+ messages in thread
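What the usercopy hunk buys is easiest to see from the consumer side.
The function below is hypothetical (names and layout invented); on a
CONFIG_HARDENED_USERCOPY kernel with the hunk applied, the copy into
pmalloc memory is refused by __check_object_size() rather than performed,
and, as the hunk reads, this happens whether or not the pool has been
protected yet:

#include <linux/uaccess.h>
#include <linux/pmalloc.h>

static int demo_update(struct gen_pool *pool, const void __user *arg,
		       size_t len)
{
	void *obj = pmalloc(pool, len, GFP_KERNEL);

	if (!obj)
		return -ENOMEM;
	/*
	 * to_user is false for copy_from_user(), and is_pmalloc_object()
	 * flags obj, so the hardened check reports
	 * "<trying to write to pmalloc object>" instead of copying.
	 */
	if (copy_from_user(obj, arg, len))
		return -EFAULT;
	return 0;
}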
* Re: [PATCH 4/6] Protectable Memory
  2018-01-30 15:14 ` Igor Stoppa
@ 2018-02-02  5:41   ` kbuild test robot
  0 siblings, 0 replies; 84+ messages in thread
From: kbuild test robot @ 2018-02-02 5:41 UTC (permalink / raw)
To: Igor Stoppa
Cc: kbuild-all, jglisse, keescook, mhocko, labbott, hch, willy, cl,
    linux-security-module, linux-mm, linux-kernel, kernel-hardening,
    Igor Stoppa

[-- Attachment #1: Type: text/plain, Size: 1898 bytes --]

Hi Igor,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on linus/master]
[also build test WARNING on v4.15]
[cannot apply to next-20180201]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url: https://github.com/0day-ci/linux/commits/Igor-Stoppa/mm-security-ro-protection-for-dynamic-data/20180202-123437
config: i386-randconfig-x071-201804 (attached as .config)
compiler: gcc-7 (Debian 7.2.0-12) 7.2.1 20171025
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386

All warnings (new ones prefixed by >>):

   mm/pmalloc.c: In function 'pmalloc_pool_show_avail':
>> mm/pmalloc.c:71:25: warning: format '%lu' expects argument of type 'long unsigned int', but argument 3 has type 'size_t {aka unsigned int}' [-Wformat=]
     return sprintf(buf, "%lu\n", gen_pool_avail(data->pool));
                          ~~^     ~~~~~~~~~~~~~~~~~~~~~~~~~~
                          %u
   mm/pmalloc.c: In function 'pmalloc_pool_show_size':
   mm/pmalloc.c:81:25: warning: format '%lu' expects argument of type 'long unsigned int', but argument 3 has type 'size_t {aka unsigned int}' [-Wformat=]
     return sprintf(buf, "%lu\n", gen_pool_size(data->pool));
                          ~~^     ~~~~~~~~~~~~~~~~~~~~~~~~~
                          %u

vim +71 mm/pmalloc.c

    63	
    64	static ssize_t pmalloc_pool_show_avail(struct kobject *dev,
    65					       struct kobj_attribute *attr,
    66					       char *buf)
    67	{
    68		struct pmalloc_data *data;
    69	
    70		data = container_of(attr, struct pmalloc_data, attr_avail);
  > 71		return sprintf(buf, "%lu\n", gen_pool_avail(data->pool));
    72	}
    73	

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 31940 bytes --]

^ permalink raw reply	[flat|nested] 84+ messages in thread
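The warning is mechanical to resolve: as the genalloc.h hunk in the patch
shows, gen_pool_avail() and gen_pool_size() return size_t, and the
portable printf specifier for size_t is "%zu". A corrected show function
would read:

static ssize_t pmalloc_pool_show_avail(struct kobject *dev,
				       struct kobj_attribute *attr,
				       char *buf)
{
	struct pmalloc_data *data;

	data = container_of(attr, struct pmalloc_data, attr_avail);
	/* %zu matches size_t on both 32-bit and 64-bit targets */
	return sprintf(buf, "%zu\n", gen_pool_avail(data->pool));
}

with the same substitution in pmalloc_pool_show_size().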
* Re: [PATCH 4/6] Protectable Memory
  2018-01-30 15:14 ` Igor Stoppa
@ 2018-02-02  5:53   ` kbuild test robot
  0 siblings, 0 replies; 84+ messages in thread
From: kbuild test robot @ 2018-02-02 5:53 UTC (permalink / raw)
To: Igor Stoppa
Cc: kbuild-all, jglisse, keescook, mhocko, labbott, hch, willy, cl,
    linux-security-module, linux-mm, linux-kernel, kernel-hardening,
    Igor Stoppa

[-- Attachment #1: Type: text/plain, Size: 2034 bytes --]

Hi Igor,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.15]
[cannot apply to next-20180201]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url: https://github.com/0day-ci/linux/commits/Igor-Stoppa/mm-security-ro-protection-for-dynamic-data/20180202-123437
config: i386-tinyconfig (attached as .config)
compiler: gcc-7 (Debian 7.2.0-12) 7.2.1 20171025
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386

All errors (new ones prefixed by >>):

   mm/pmalloc.o: In function `pmalloc_pool_show_chunks':
>> pmalloc.c:(.text+0x50): undefined reference to `gen_pool_for_each_chunk'
   mm/pmalloc.o: In function `pmalloc_pool_show_size':
>> pmalloc.c:(.text+0x6e): undefined reference to `gen_pool_size'
   mm/pmalloc.o: In function `pmalloc_pool_show_avail':
>> pmalloc.c:(.text+0x8a): undefined reference to `gen_pool_avail'
   mm/pmalloc.o: In function `pmalloc_chunk_free':
>> pmalloc.c:(.text+0x171): undefined reference to `gen_pool_flush_chunk'
   mm/pmalloc.o: In function `pmalloc_create_pool':
>> pmalloc.c:(.text+0x19b): undefined reference to `gen_pool_create'
>> pmalloc.c:(.text+0x2bb): undefined reference to `gen_pool_destroy'
   mm/pmalloc.o: In function `pmalloc_prealloc':
>> pmalloc.c:(.text+0x350): undefined reference to `gen_pool_add_virt'
   mm/pmalloc.o: In function `pmalloc':
>> pmalloc.c:(.text+0x3a7): undefined reference to `gen_pool_alloc'
   pmalloc.c:(.text+0x3f1): undefined reference to `gen_pool_add_virt'
   pmalloc.c:(.text+0x401): undefined reference to `gen_pool_alloc'
   mm/pmalloc.o: In function `pmalloc_destroy_pool':
   pmalloc.c:(.text+0x4a1): undefined reference to `gen_pool_for_each_chunk'
   pmalloc.c:(.text+0x4a8): undefined reference to `gen_pool_destroy'

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 6802 bytes --]

^ permalink raw reply	[flat|nested] 84+ messages in thread
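These link failures point at Kconfig rather than at the code: the
undefined symbols are all from lib/genalloc.c, so this configuration
evidently enables ARCH_HAS_SET_MEMORY (which makes mm/Makefile build
pmalloc.o) without anything selecting GENERIC_ALLOCATOR. One plausible
shape for the fix - a sketch, not taken from the posted series - is a
dedicated symbol that carries the dependency, with mm/Makefile keying
pmalloc.o off it instead of ARCH_HAS_SET_MEMORY directly:

config PROTECTABLE_MEMORY
	bool
	depends on ARCH_HAS_SET_MEMORY
	select GENERIC_ALLOCATOR
	default y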
* [kernel-hardening] [RFC PATCH v11 0/6] mm: security: ro protection for dynamic data
@ 2018-01-24 17:56 Igor Stoppa
2018-01-24 17:56 ` Igor Stoppa
0 siblings, 1 reply; 84+ messages in thread
From: Igor Stoppa @ 2018-01-24 17:56 UTC (permalink / raw)
To: jglisse, keescook, mhocko, labbott, hch, willy
Cc: cl, linux-security-module, linux-mm, linux-kernel,
kernel-hardening, Igor Stoppa
This patch-set introduces the possibility of protecting memory that has
been allocated dynamically.
The memory is managed in pools: when a memory pool is turned into R/O,
all the memory that is part of it will become R/O.
An R/O pool can be destroyed to recover its memory, but it cannot be
turned back into R/W mode.
This is intentional. This feature is meant for data that doesn't need
further modifications after initialization.
However, the data might need to be released, for example as part of module
unloading.
To do this, the memory must first be freed; then the pool can be destroyed.
An example is provided, in the form of self-testing.
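In code terms, that teardown sequence is two calls in a fixed order. A
sketch for a hypothetical module exit path (the names are invented; only
the pmalloc API is from this series):

#include <linux/module.h>
#include <linux/pmalloc.h>

static struct gen_pool *demo_pool;	/* protected after init */
static void *demo_data;			/* allocated from demo_pool */

static void __exit demo_exit(void)
{
	/*
	 * Free first, then destroy: pfree() only marks the memory as
	 * unused, while pmalloc_destroy_pool() drops the R/O protection
	 * and returns the underlying chunks to vmalloc.
	 */
	pfree(demo_pool, demo_data);
	pmalloc_destroy_pool(demo_pool);
}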
Changes since the v10 version:
Initially I tried to provide support for hardening the LSM hooks, but
the LSM code was too much in flux for that work to have a chance of
being merged.
Added several drop-in replacements for kmalloc-based functions, for
example kzalloc.
From this perspective I have also modified genalloc, making its free
functionality more closely follow kfree, which does not need to be told
the size of the allocation being released. This was sent out for review
twice, but it has not received any feedback so far.
Also genalloc now comes with self-testing.
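The practical effect of that genalloc change, as I read the series, is
that the caller may pass a zero size to gen_pool_free() and the tracked
allocation boundaries supply the length - which is exactly what lets
pfree() take no size argument, kfree-style. A sketch (pool setup elided;
the size is arbitrary):

	unsigned long addr = gen_pool_alloc(pool, 713);

	/* ... use the allocation ... */

	/* size 0: the patched genalloc recovers the length itself */
	gen_pool_free(pool, addr, 0);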
The latest version can also be found here:
https://www.spinics.net/lists/kernel/msg2696152.html
The need to integrate with hardened usercopy has driven an optimization
in the management of vmap_areas: each struct page in a vmalloc area now
holds a reference to its vm_struct, saving the search through the
various areas.
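Condensed, the optimized lookup is two dereferences instead of a list
walk. This is the shape of the check that the series' is_pmalloc_object()
builds on (a sketch with error handling elided; page->area is the field
added by the "struct page: add field for vm_struct" patch):

static bool in_pmalloc_area(const void *ptr)
{
	struct page *page;

	if (!is_vmalloc_addr(ptr))
		return false;
	page = vmalloc_to_page(ptr);
	/* page->area points straight back at the owning vm_struct */
	return page && (page->area->flags & VM_PMALLOC);
}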
I was planning - and still can - to provide hardening for some IMA
data, but in the meantime it seems that the XFS developers might be
interested in this functionality:
http://www.openwall.com/lists/kernel-hardening/2018/01/24/1
So I'm sending it out as a preview.
Igor Stoppa (6):
genalloc: track beginning of allocations
genalloc: selftest
struct page: add field for vm_struct
Protectable Memory
Documentation for Pmalloc
Pmalloc: self-test
Documentation/core-api/pmalloc.txt | 104 ++++++++
include/linux/genalloc-selftest.h | 30 +++
include/linux/genalloc.h | 6 +-
include/linux/mm_types.h | 1 +
include/linux/pmalloc.h | 215 ++++++++++++++++
include/linux/vmalloc.h | 1 +
init/main.c | 2 +
lib/Kconfig | 15 ++
lib/Makefile | 1 +
lib/genalloc-selftest.c | 402 +++++++++++++++++++++++++++++
lib/genalloc.c | 444 +++++++++++++++++++++----------
mm/Kconfig | 7 +
mm/Makefile | 2 +
mm/pmalloc-selftest.c | 65 +++++
mm/pmalloc-selftest.h | 30 +++
mm/pmalloc.c | 516 +++++++++++++++++++++++++++++++++++++
mm/usercopy.c | 25 +-
mm/vmalloc.c | 18 +-
18 files changed, 1744 insertions(+), 140 deletions(-)
create mode 100644 Documentation/core-api/pmalloc.txt
create mode 100644 include/linux/genalloc-selftest.h
create mode 100644 include/linux/pmalloc.h
create mode 100644 lib/genalloc-selftest.c
create mode 100644 mm/pmalloc-selftest.c
create mode 100644 mm/pmalloc-selftest.h
create mode 100644 mm/pmalloc.c
--
2.9.3
^ permalink raw reply [flat|nested] 84+ messages in thread
* [PATCH 4/6] Protectable Memory
  2018-01-24 17:56 [kernel-hardening] [RFC PATCH v11 0/6] mm: security: ro protection for dynamic data Igor Stoppa
@ 2018-01-24 17:56 ` Igor Stoppa
  0 siblings, 0 replies; 84+ messages in thread
From: Igor Stoppa @ 2018-01-24 17:56 UTC (permalink / raw)
To: linux-security-module

The MMU available in many systems running Linux can often provide R/O
protection to the memory pages it handles.
However, the MMU-based protection works efficiently only when said pages
contain exclusively data that will not need further modifications.

Statically allocated variables can be segregated into a dedicated
section, but this does not sit very well with dynamically allocated ones.
Dynamic allocation currently provides no means of grouping variables in
memory pages that contain exclusively data suitable for conversion to
read-only access mode.

The allocator provided here (pmalloc - protectable memory allocator)
introduces the concept of pools of protectable memory.
A module can request a pool and then direct any allocation request to
the pool handler it has received.
Once all the chunks of memory associated with a specific pool are
initialized, the pool can be protected.
After this point, the pool can only be destroyed (it is up to the module
to avoid any further references to the memory from the pool after the
destruction is invoked).
The latter case is mainly meant for releasing memory when a module is
unloaded.

A module can have as many pools as needed, for example to support the
protection of data that is initialized in sufficiently distinct phases.

Signed-off-by: Igor Stoppa <igor.stoppa@huawei.com>
---
 include/linux/genalloc.h |   3 +
 include/linux/pmalloc.h  | 215 ++++++++++++++++++++
 include/linux/vmalloc.h  |   1 +
 lib/genalloc.c           |  27 +++
 mm/Makefile              |   1 +
 mm/pmalloc.c             | 513 +++++++++++++++++++++++++++++++++++++++
 mm/usercopy.c            |  25 ++-
 7 files changed, 781 insertions(+), 4 deletions(-)
 create mode 100644 include/linux/pmalloc.h
 create mode 100644 mm/pmalloc.c

diff --git a/include/linux/genalloc.h b/include/linux/genalloc.h
index a8fdabf..9f2974f 100644
--- a/include/linux/genalloc.h
+++ b/include/linux/genalloc.h
@@ -121,6 +121,9 @@ extern unsigned long gen_pool_alloc_algo(struct gen_pool *, size_t,
 extern void *gen_pool_dma_alloc(struct gen_pool *pool, size_t size,
 		dma_addr_t *dma);
 extern void gen_pool_free(struct gen_pool *, unsigned long, size_t);
+
+extern void gen_pool_flush_chunk(struct gen_pool *pool,
+				 struct gen_pool_chunk *chunk);
 extern void gen_pool_for_each_chunk(struct gen_pool *,
 	void (*)(struct gen_pool *, struct gen_pool_chunk *, void *), void *);
 extern size_t gen_pool_avail(struct gen_pool *);
diff --git a/include/linux/pmalloc.h b/include/linux/pmalloc.h
new file mode 100644
index 0000000..cb18739
--- /dev/null
+++ b/include/linux/pmalloc.h
@@ -0,0 +1,215 @@
+/*
+ * pmalloc.h: Header for Protectable Memory Allocator
+ *
+ * (C) Copyright 2017 Huawei Technologies Co. Ltd.
+ * Author: Igor Stoppa <igor.stoppa@huawei.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; version 2
+ * of the License.
+ */
+
+#ifndef _PMALLOC_H
+#define _PMALLOC_H
+
+
+#include <linux/genalloc.h>
+#include <linux/string.h>
+
+#define PMALLOC_DEFAULT_ALLOC_ORDER (-1)
+
+/*
+ * Library for dynamic allocation of pools of memory that can be,
+ * after initialization, marked as read-only.
+ *
+ * This is intended to complement __ro_after_init, for those cases
+ * where either it is not possible to know the initialization value before
+ * init is completed, or the amount of data is variable and can be
+ * determined only at run-time.
+ *
+ * ***WARNING***
+ * The user of the API is expected to synchronize:
+ * 1) allocation,
+ * 2) writes to the allocated memory,
+ * 3) write protection of the pool,
+ * 4) freeing of the allocated memory, and
+ * 5) destruction of the pool.
+ *
+ * For a non-threaded scenario, this type of locking is not even required.
+ *
+ * Even if the library were to provide support for locking, point 2)
+ * would still depend on the user taking the lock.
+ */
+
+
+/**
+ * pmalloc_create_pool - create a new protectable memory pool
+ * @name: the name of the pool, must be unique
+ * @min_alloc_order: log2 of the minimum allocation size obtainable
+ *                   from the pool
+ *
+ * Creates a new (empty) memory pool for allocation of protectable
+ * memory. Memory will be allocated upon request (through pmalloc).
+ *
+ * Returns a pointer to the new pool upon success, otherwise NULL.
+ */
+struct gen_pool *pmalloc_create_pool(const char *name,
+				     int min_alloc_order);
+
+
+int is_pmalloc_object(const void *ptr, const unsigned long n);
+
+/**
+ * pmalloc_prealloc - tries to allocate a memory chunk of the requested size
+ * @pool: handler to the pool to be used for memory allocation
+ * @size: amount of memory (in bytes) requested
+ *
+ * Prepares a chunk of the requested size.
+ * This is intended to both minimize latency in later memory requests and
+ * avoid sleeping during allocation.
+ * Memory allocated with prealloc is stored in one single chunk, as
+ * opposed to what is allocated on-demand when pmalloc runs out of free
+ * space already existing in the pool and has to invoke vmalloc.
+ *
+ * Returns true if the vmalloc call was successful, false otherwise.
+ */
+bool pmalloc_prealloc(struct gen_pool *pool, size_t size);
+
+/**
+ * pmalloc - allocate protectable memory from a pool
+ * @pool: handler to the pool to be used for memory allocation
+ * @size: amount of memory (in bytes) requested
+ * @gfp: flags for page allocation
+ *
+ * Allocates memory from an unprotected pool. If the pool doesn't have
+ * enough memory, and the request did not include GFP_ATOMIC, an attempt
+ * is made to add a new chunk of memory to the pool
+ * (a multiple of PAGE_SIZE), in order to fit the new request.
+ * Otherwise, NULL is returned.
+ *
+ * Returns the pointer to the memory requested upon success,
+ * NULL otherwise (either no memory available or pool already read-only).
+ */ +static inline void *pzalloc(struct gen_pool *pool, size_t size, gfp_t gfp) +{ + return pmalloc(pool, size, gfp | __GFP_ZERO); +} + +/** + * pmalloc_array - allocates an array according to the parameters + * @pool: handle to the pool to be used for memory allocation + * @n: number of elements in the array + * @size: size (in bytes) of each element + * @flags: flags for page allocation + * + * Computes the total size of the array and executes pmalloc, provided + * that the parameters are valid and the multiplication cannot overflow. + * + * Returns either NULL or the pmalloc result. + */ +static inline void *pmalloc_array(struct gen_pool *pool, size_t n, + size_t size, gfp_t flags) +{ + if (unlikely(!(pool && n && size))) + return NULL; + /* Reject requests where n * size would overflow. */ + if (unlikely(n > (size_t)-1 / size)) + return NULL; + return pmalloc(pool, n * size, flags); +} + +/** + * pcalloc - allocates a 0-initialized array according to the parameters + * @pool: handle to the pool to be used for memory allocation + * @n: number of elements in the array + * @size: size (in bytes) of each element + * @flags: flags for page allocation + * + * Executes pmalloc_array, adding __GFP_ZERO to the allocation flags. + * + * Returns either NULL or the pmalloc result. + */ +static inline void *pcalloc(struct gen_pool *pool, size_t n, + size_t size, gfp_t flags) +{ + return pmalloc_array(pool, n, size, flags | __GFP_ZERO); +} + +/** + * pstrdup - duplicate a string, using pmalloc as allocator + * @pool: handle to the pool to be used for memory allocation + * @s: string to duplicate + * @gfp: flags for page allocation + * + * Generates a copy of the given string, allocating sufficient memory + * from the given pmalloc pool. + * + * Returns a pointer to the replica, NULL in case of recoverable error. + */ +static inline char *pstrdup(struct gen_pool *pool, const char *s, gfp_t gfp) +{ + size_t len; + char *buf; + + if (unlikely(pool == NULL || s == NULL)) + return NULL; + + len = strlen(s) + 1; + buf = pmalloc(pool, len, gfp); + if (likely(buf)) + strncpy(buf, s, len); + return buf; +} + +/** + * pmalloc_protect_pool - turn a read/write pool read-only + * @pool: the pool to protect + * + * Write-protects all the memory chunks assigned to the pool. + * This prevents any further allocation. + * + * Returns 0 upon success, -EINVAL in abnormal cases. + */ +int pmalloc_protect_pool(struct gen_pool *pool); + +/** + * pfree - mark as unused memory that was previously in use + * @pool: handle to the pool from which the memory was allocated + * @addr: the beginning of the memory area to be freed + * + * The behavior of pfree is different, depending on the state of the + * protection. + * If the pool is not yet protected, the memory is marked as unused and + * will be available for further allocations. + * If the pool is already protected, the memory is marked as unused, but + * it will still be impossible to perform further allocation, because of + * the existing protection. + * The freed memory, in this case, will be truly released only when the + * pool is destroyed. + */ +static inline void pfree(struct gen_pool *pool, const void *addr) +{ + gen_pool_free(pool, (unsigned long)addr, 0); +} + +/** + * pmalloc_destroy_pool - destroys a pool and all the associated memory + * @pool: the pool to destroy + * + * All the memory that was allocated through pmalloc in the pool will be freed. + * + * Returns 0 upon success, -EINVAL in abnormal cases.
+ */ +int pmalloc_destroy_pool(struct gen_pool *pool); + +#endif diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h index 1e5d8c3..116d280 100644 --- a/include/linux/vmalloc.h +++ b/include/linux/vmalloc.h @@ -20,6 +20,7 @@ struct notifier_block; /* in notifier.h */ #define VM_UNINITIALIZED 0x00000020 /* vm_struct is not fully initialized */ #define VM_NO_GUARD 0x00000040 /* don't add guard page */ #define VM_KASAN 0x00000080 /* has allocated kasan shadow memory */ +#define VM_PMALLOC 0x00000100 /* pmalloc area - see Documentation/core-api/pmalloc.rst */ /* bits [20..32] reserved for arch specific ioremap internals */ /* diff --git a/lib/genalloc.c b/lib/genalloc.c index 13bc8cf..8ce616fb 100644 --- a/lib/genalloc.c +++ b/lib/genalloc.c @@ -519,6 +519,33 @@ void gen_pool_free(struct gen_pool *pool, unsigned long addr, size_t size) } EXPORT_SYMBOL(gen_pool_free); + +/** + * gen_pool_flush_chunk - drops all the allocations from a specific chunk + * @pool: the generic memory pool + * @chunk: the chunk to wipe clear + * + * This is meant to be called only while destroying a pool. It's up to the + * caller to avoid races, but really, at this point the pool should have + * already been retired and have become unavailable for any other sort of + * operation. + */ +void gen_pool_flush_chunk(struct gen_pool *pool, + struct gen_pool_chunk *chunk) +{ + size_t size; + + if (unlikely(!(pool && chunk))) + return; + + size = chunk->end_addr + 1 - chunk->start_addr; + memset(chunk->entries, 0, + DIV_ROUND_UP((size >> pool->min_alloc_order) * BITS_PER_ENTRY, + BITS_PER_BYTE)); + atomic_set(&chunk->avail, size); +} + + /** * gen_pool_for_each_chunk - call func for every chunk of generic memory pool * @pool: the generic memory pool diff --git a/mm/Makefile b/mm/Makefile index e669f02..a6a47e1 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -65,6 +65,7 @@ obj-$(CONFIG_SPARSEMEM) += sparse.o obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o obj-$(CONFIG_SLOB) += slob.o obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o +obj-$(CONFIG_ARCH_HAS_SET_MEMORY) += pmalloc.o obj-$(CONFIG_KSM) += ksm.o obj-$(CONFIG_PAGE_POISONING) += page_poison.o obj-$(CONFIG_SLAB) += slab.o diff --git a/mm/pmalloc.c b/mm/pmalloc.c new file mode 100644 index 0000000..a64ac49 --- /dev/null +++ b/mm/pmalloc.c @@ -0,0 +1,513 @@ +/* + * pmalloc.c: Protectable Memory Allocator + * + * (C) Copyright 2017 Huawei Technologies Co. Ltd. + * Author: Igor Stoppa <igor.stoppa@huawei.com> + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; version 2 + * of the License. + */ + +#include <linux/printk.h> +#include <linux/init.h> +#include <linux/mm.h> +#include <linux/vmalloc.h> +#include <linux/genalloc.h> +#include <linux/kernel.h> +#include <linux/log2.h> +#include <linux/slab.h> +#include <linux/device.h> +#include <linux/atomic.h> +#include <linux/rculist.h> +#include <linux/set_memory.h> +#include <asm/cacheflush.h> +#include <asm/page.h> + +/** + * pmalloc_data contains the data specific to a pmalloc pool, + * in a format compatible with the design of genalloc. + * Some of the fields are used for exposing the corresponding parameter + * to userspace, through sysfs. + */ +struct pmalloc_data { + struct gen_pool *pool; /* Link back to the associated pool. */ + bool protected; /* Status of the pool: RO or RW. */ + struct kobj_attribute attr_protected; /* Sysfs attribute. */ + struct kobj_attribute attr_avail; /* Sysfs attribute.
*/ + struct kobj_attribute attr_size; /* Sysfs attribute. */ + struct kobj_attribute attr_chunks; /* Sysfs attribute. */ + struct kobject *pool_kobject; + struct list_head node; /* list of pools */ +}; + +static LIST_HEAD(pmalloc_final_list); +static LIST_HEAD(pmalloc_tmp_list); +static struct list_head *pmalloc_list = &pmalloc_tmp_list; +static DEFINE_MUTEX(pmalloc_mutex); +static struct kobject *pmalloc_kobject; + +static ssize_t pmalloc_pool_show_protected(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_protected); + if (data->protected) + return sprintf(buf, "protected\n"); + else + return sprintf(buf, "unprotected\n"); +} + +static ssize_t pmalloc_pool_show_avail(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_avail); + return sprintf(buf, "%lu\n", gen_pool_avail(data->pool)); +} + +static ssize_t pmalloc_pool_show_size(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + + data = container_of(attr, struct pmalloc_data, attr_size); + return sprintf(buf, "%lu\n", gen_pool_size(data->pool)); +} + +static void pool_chunk_number(struct gen_pool *pool, + struct gen_pool_chunk *chunk, void *data) +{ + unsigned long *counter = data; + + (*counter)++; +} + +static ssize_t pmalloc_pool_show_chunks(struct kobject *dev, + struct kobj_attribute *attr, + char *buf) +{ + struct pmalloc_data *data; + unsigned long chunks_num = 0; + + data = container_of(attr, struct pmalloc_data, attr_chunks); + gen_pool_for_each_chunk(data->pool, pool_chunk_number, &chunks_num); + return sprintf(buf, "%lu\n", chunks_num); +} + +/** + * Exposes the pool and its attributes through sysfs. + */ +static struct kobject *pmalloc_connect(struct pmalloc_data *data) +{ + const struct attribute *attrs[] = { + &data->attr_protected.attr, + &data->attr_avail.attr, + &data->attr_size.attr, + &data->attr_chunks.attr, + NULL + }; + struct kobject *kobj; + + kobj = kobject_create_and_add(data->pool->name, pmalloc_kobject); + if (unlikely(!kobj)) + return NULL; + + if (unlikely(sysfs_create_files(kobj, attrs) < 0)) { + kobject_put(kobj); + kobj = NULL; + } + return kobj; +} + +/** + * Removes the pool and its attributes from sysfs. + */ +static void pmalloc_disconnect(struct pmalloc_data *data, + struct kobject *kobj) +{ + const struct attribute *attrs[] = { + &data->attr_protected.attr, + &data->attr_avail.attr, + &data->attr_size.attr, + &data->attr_chunks.attr, + NULL + }; + + sysfs_remove_files(kobj, attrs); + kobject_put(kobj); +} + +/** + * Declares an attribute of the pool. 
+ */ + +#define pmalloc_attr_init(data, attr_name) \ +do { \ + sysfs_attr_init(&data->attr_##attr_name.attr); \ + data->attr_##attr_name.attr.name = #attr_name; \ + data->attr_##attr_name.attr.mode = VERIFY_OCTAL_PERMISSIONS(0444); \ + data->attr_##attr_name.show = pmalloc_pool_show_##attr_name; \ +} while (0) + +struct gen_pool *pmalloc_create_pool(const char *name, int min_alloc_order) +{ + struct gen_pool *pool; + const char *pool_name; + struct pmalloc_data *data; + + if (!name) { + WARN_ON(1); + return NULL; + } + + if (min_alloc_order < 0) + min_alloc_order = ilog2(sizeof(unsigned long)); + + pool = gen_pool_create(min_alloc_order, NUMA_NO_NODE); + if (unlikely(!pool)) + return NULL; + + mutex_lock(&pmalloc_mutex); + list_for_each_entry(data, pmalloc_list, node) + if (!strcmp(name, data->pool->name)) + goto same_name_err; + + pool_name = kstrdup(name, GFP_KERNEL); + if (unlikely(!pool_name)) + goto name_alloc_err; + + data = kzalloc(sizeof(struct pmalloc_data), GFP_KERNEL); + if (unlikely(!data)) + goto data_alloc_err; + + data->protected = false; + data->pool = pool; + pmalloc_attr_init(data, protected); + pmalloc_attr_init(data, avail); + pmalloc_attr_init(data, size); + pmalloc_attr_init(data, chunks); + pool->data = data; + pool->name = pool_name; + + list_add(&data->node, pmalloc_list); + if (pmalloc_list == &pmalloc_final_list) + data->pool_kobject = pmalloc_connect(data); + mutex_unlock(&pmalloc_mutex); + return pool; + +data_alloc_err: + kfree(pool_name); +name_alloc_err: +same_name_err: + mutex_unlock(&pmalloc_mutex); + gen_pool_destroy(pool); + return NULL; +} + +static inline int check_alloc_params(struct gen_pool *pool, size_t req_size) +{ + struct pmalloc_data *data; + unsigned int order; + + if (unlikely(!req_size || !pool)) + return -1; + + order = (unsigned int)pool->min_alloc_order; + data = pool->data; + + if (data == NULL) + return -1; + + if (unlikely(data->protected)) { + WARN_ON(1); + return -1; + } + return 0; +} + + +static inline bool chunk_tagging(void *chunk, bool tag) +{ + struct vm_struct *area; + struct page *page; + + if (!is_vmalloc_addr(chunk)) + return false; + + page = vmalloc_to_page(chunk); + if (unlikely(!page)) + return false; + + area = page->area; + if (tag) + area->flags |= VM_PMALLOC; + else + area->flags &= ~VM_PMALLOC; + return true; +} + + +static inline bool tag_chunk(void *chunk) +{ + return chunk_tagging(chunk, true); +} + + +static inline bool untag_chunk(void *chunk) +{ + return chunk_tagging(chunk, false); +} + +enum { + INVALID_PMALLOC_OBJECT = -1, + NOT_PMALLOC_OBJECT = 0, + VALID_PMALLOC_OBJECT = 1, +}; + +int is_pmalloc_object(const void *ptr, const unsigned long n) +{ + struct vm_struct *area; + struct page *page; + unsigned long area_start; + unsigned long area_end; + unsigned long object_start; + unsigned long object_end; + + + /* is_pmalloc_object gets called pretty late, so chances are high + * that the object is indeed of vmalloc type + */ + if (unlikely(!is_vmalloc_addr(ptr))) + return NOT_PMALLOC_OBJECT; + + page = vmalloc_to_page(ptr); + if (unlikely(!page)) + return NOT_PMALLOC_OBJECT; + + area = page->area; + + if (likely(!(area->flags & VM_PMALLOC))) + return NOT_PMALLOC_OBJECT; + + area_start = (unsigned long)area->addr; + area_end = area_start + area->nr_pages * PAGE_SIZE - 1; + object_start = (unsigned long)ptr; + object_end = object_start + n - 1; + + if (likely((area_start <= object_start) && + (object_end <= area_end))) + return VALID_PMALLOC_OBJECT; + else + return INVALID_PMALLOC_OBJECT; +} + + +bool 
pmalloc_prealloc(struct gen_pool *pool, size_t size) +{ + void *chunk; + size_t chunk_size; + int add_error; + unsigned int order; + + if (check_alloc_params(pool, size)) + return false; + + order = (unsigned int)pool->min_alloc_order; + + /* Expand pool */ + chunk_size = roundup(size, PAGE_SIZE); + chunk = vmalloc(chunk_size); + if (unlikely(chunk == NULL)) + return false; + + /* Tag the chunk, so that the usercopy checks can recognize it. */ + if (unlikely(!tag_chunk(chunk))) + goto free; + + /* Locking is already done inside gen_pool_add */ + add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size, + NUMA_NO_NODE); + if (unlikely(add_error != 0)) + goto abort; + + return true; +abort: + untag_chunk(chunk); +free: + vfree(chunk); + return false; +} + +void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp) +{ + void *chunk; + size_t chunk_size; + int add_error; + unsigned long retval; + unsigned int order; + + if (check_alloc_params(pool, size)) + return NULL; + + order = (unsigned int)pool->min_alloc_order; + +retry_alloc_from_pool: + retval = gen_pool_alloc(pool, size); + if (retval) + goto return_allocation; + + if (unlikely((gfp & __GFP_ATOMIC))) { + if (unlikely((gfp & __GFP_NOFAIL))) + goto retry_alloc_from_pool; + else + return NULL; + } + + /* Expand pool */ + chunk_size = roundup(size, PAGE_SIZE); + chunk = vmalloc(chunk_size); + if (unlikely(!chunk)) { + if (unlikely((gfp & __GFP_NOFAIL))) + goto retry_alloc_from_pool; + else + return NULL; + } + if (unlikely(!tag_chunk(chunk))) + goto free; + + /* Locking is already done inside gen_pool_add */ + add_error = gen_pool_add(pool, (unsigned long)chunk, chunk_size, + NUMA_NO_NODE); + if (unlikely(add_error)) + goto abort; + + retval = gen_pool_alloc(pool, size); + if (retval) { +return_allocation: + *(size_t *)retval = size; + if (gfp & __GFP_ZERO) + memset((void *)retval, 0, size); + return (void *)retval; + } + /* Here there is no test for __GFP_NOFAIL because, in case of + * concurrent allocation, one thread might add a chunk to the + * pool and this memory could be allocated by another thread, + * before the first thread gets a chance to use it. + * As long as vmalloc succeeds, it's ok to retry.
+ */ + goto retry_alloc_from_pool; +abort: + untag_chunk(chunk); +free: + vfree(chunk); + return NULL; +} + +static void pmalloc_chunk_set_protection(struct gen_pool *pool, + struct gen_pool_chunk *chunk, + void *data) +{ + const bool *flag = data; + size_t chunk_size = chunk->end_addr + 1 - chunk->start_addr; + unsigned long pages = chunk_size / PAGE_SIZE; + + BUG_ON(chunk_size & (PAGE_SIZE - 1)); + + if (*flag) + set_memory_ro(chunk->start_addr, pages); + else + set_memory_rw(chunk->start_addr, pages); +} + +static int pmalloc_pool_set_protection(struct gen_pool *pool, bool protection) +{ + struct pmalloc_data *data; + struct gen_pool_chunk *chunk; + + if (unlikely(!pool)) + return -EINVAL; + + data = pool->data; + + if (unlikely(!data)) + return -EINVAL; + + if (unlikely(data->protected == protection)) { + WARN_ON(1); + return 0; + } + + data->protected = protection; + list_for_each_entry(chunk, &pool->chunks, next_chunk) + pmalloc_chunk_set_protection(pool, chunk, &protection); + return 0; +} + +int pmalloc_protect_pool(struct gen_pool *pool) +{ + return pmalloc_pool_set_protection(pool, true); +} + + +static void pmalloc_chunk_free(struct gen_pool *pool, + struct gen_pool_chunk *chunk, void *data) +{ + untag_chunk(chunk); + gen_pool_flush_chunk(pool, chunk); + vfree_atomic((void *)chunk->start_addr); +} + + +int pmalloc_destroy_pool(struct gen_pool *pool) +{ + struct pmalloc_data *data; + + if (unlikely(pool == NULL)) + return -EINVAL; + + data = pool->data; + + if (unlikely(data == NULL)) + return -EINVAL; + + mutex_lock(&pmalloc_mutex); + list_del(&data->node); + mutex_unlock(&pmalloc_mutex); + + if (likely(data->pool_kobject)) + pmalloc_disconnect(data, data->pool_kobject); + + pmalloc_pool_set_protection(pool, false); + gen_pool_for_each_chunk(pool, pmalloc_chunk_free, NULL); + gen_pool_destroy(pool); + kfree(data); + return 0; +} + +/** + * When the sysfs is ready to receive registrations, connect all the + * pools previously created. Also enable further pools to be connected + * right away. + */ +static int __init pmalloc_late_init(void) +{ + struct pmalloc_data *data, *n; + + pmalloc_kobject = kobject_create_and_add("pmalloc", kernel_kobj); + + mutex_lock(&pmalloc_mutex); + pmalloc_list = &pmalloc_final_list; + + if (likely(pmalloc_kobject != NULL)) { + list_for_each_entry_safe(data, n, &pmalloc_tmp_list, node) { + list_move(&data->node, &pmalloc_final_list); + pmalloc_connect(data); + } + } + mutex_unlock(&pmalloc_mutex); + return 0; +} +late_initcall(pmalloc_late_init); diff --git a/mm/usercopy.c b/mm/usercopy.c index a9852b2..c3b1029 100644 --- a/mm/usercopy.c +++ b/mm/usercopy.c @@ -15,6 +15,7 @@ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt #include <linux/mm.h> +#include <linux/pmalloc.h> #include <linux/slab.h> #include <linux/sched.h> #include <linux/sched/task.h> @@ -222,6 +223,7 @@ static inline const char *check_heap_object(const void *ptr, unsigned long n, void __check_object_size(const void *ptr, unsigned long n, bool to_user) { const char *err; + int retv; /* Skip all tests if size is zero. */ if (!n) @@ -229,12 +231,12 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user) /* Check for invalid addresses. */ err = check_bogus_address(ptr, n); - if (err) + if (unlikely(err)) goto report; /* Check for bad heap object. */ err = check_heap_object(ptr, n, to_user); - if (err) + if (unlikely(err)) goto report; /* Check for bad stack object.
*/ @@ -257,8 +259,23 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user) /* Check for object in kernel to avoid text exposure. */ err = check_kernel_text_object(ptr, n); - if (!err) - return; + if (unlikely(err)) + goto report; + + /* Check if object is from a pmalloc chunk. + */ + retv = is_pmalloc_object(ptr, n); + if (unlikely(retv)) { + if (unlikely(!to_user)) { + err = "<trying to write to pmalloc object>"; + goto report; + } + if (retv < 0) { + err = "<invalid pmalloc object>"; + goto report; + } + } + return; report: report_usercopy(ptr, n, to_user, err); -- 2.9.3 ^ permalink raw reply related [flat|nested] 84+ messages in thread
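The calling sequence implied by the commit message and the kernel-doc above can be sketched as follows. This is an illustration only, not code from the patch: the module, the pool name, the struct and its fields are invented for the example.

#include <linux/module.h>
#include <linux/pmalloc.h>

static struct gen_pool *example_pool;

struct example_conf {
	int param;		/* set once during init, then read-only */
	void (*handler)(void);	/* function pointers benefit most from R/O */
};

static struct example_conf *conf;

static int __init example_init(void)
{
	/* Create the pool; it starts out unprotected (R/W). */
	example_pool = pmalloc_create_pool("example_pool",
					   PMALLOC_DEFAULT_ALLOC_ORDER);
	if (unlikely(!example_pool))
		return -ENOMEM;

	/* Allocate and initialize the data while the pool is still R/W. */
	conf = pzalloc(example_pool, sizeof(*conf), GFP_KERNEL);
	if (unlikely(!conf)) {
		pmalloc_destroy_pool(example_pool);
		return -ENOMEM;
	}
	conf->param = 42;

	/* Write-protect all the chunks of the pool. From here on, no
	 * further writes or allocations are possible; the pool can only
	 * be destroyed.
	 */
	pmalloc_protect_pool(example_pool);
	return 0;
}

static void __exit example_exit(void)
{
	/* First mark the allocation as unused, then recover the memory
	 * by destroying the pool as a whole.
	 */
	pfree(example_pool, conf);
	pmalloc_destroy_pool(example_pool);
}

module_init(example_init);
module_exit(example_exit);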
* Re: [kernel-hardening] [PATCH 4/6] Protectable Memory 2018-01-24 17:56 ` Igor Stoppa (?) @ 2018-01-24 19:10 ` Jann Horn 2018-01-26 5:35 ` Matthew Wilcox -1 siblings, 1 reply; 84+ messages in thread From: Jann Horn @ 2018-01-24 19:10 UTC (permalink / raw) To: Igor Stoppa Cc: jglisse, Kees Cook, Michal Hocko, Laura Abbott, Christoph Hellwig, Matthew Wilcox, Christoph Lameter, linux-security-module, linux-mm, kernel list, Kernel Hardening On Wed, Jan 24, 2018 at 6:56 PM, Igor Stoppa <igor.stoppa@huawei.com> wrote: > The MMU available in many systems running Linux can often provide R/O > protection to the memory pages it handles. > > However, the MMU-based protection works efficiently only when said pages > contain exclusively data that will not need further modifications. > > Statically allocated variables can be segregated into a dedicated > section, but this does not sit very well with dynamically allocated > ones. > > Dynamic allocation does not provide, currently, any means for grouping > variables in memory pages that would contain exclusively data suitable > for conversion to read only access mode. > > The allocator here provided (pmalloc - protectable memory allocator) > introduces the concept of pools of protectable memory. > > A module can request a pool and then refer any allocation request to the > pool handler it has received. > > Once all the chunks of memory associated to a specific pool are > initialized, the pool can be protected. I'm not entirely convinced by the approach of marking small parts of kernel memory as readonly for hardening. Comments on some details are inline. > diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h > index 1e5d8c3..116d280 100644 > --- a/include/linux/vmalloc.h > +++ b/include/linux/vmalloc.h > @@ -20,6 +20,7 @@ struct notifier_block; /* in notifier.h */ > #define VM_UNINITIALIZED 0x00000020 /* vm_struct is not fully initialized */ > #define VM_NO_GUARD 0x00000040 /* don't add guard page */ > #define VM_KASAN 0x00000080 /* has allocated kasan shadow memory */ > +#define VM_PMALLOC 0x00000100 /* pmalloc area - see docs */ Is "see docs" specific enough to actually guide the reader to the right documentation? > +#define pmalloc_attr_init(data, attr_name) \ > +do { \ > + sysfs_attr_init(&data->attr_##attr_name.attr); \ > + data->attr_##attr_name.attr.name = #attr_name; \ > + data->attr_##attr_name.attr.mode = VERIFY_OCTAL_PERMISSIONS(0444); \ > + data->attr_##attr_name.show = pmalloc_pool_show_##attr_name; \ > +} while (0) Is there a good reason for making all these files mode 0444 (as opposed to setting them to 0400 and then allowing userspace to make them accessible if desired)? /proc/slabinfo contains vaguely similar data and is mode 0400 (or mode 0600, depending on the kernel config) AFAICS. > +void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp) > +{ [...] > + /* Expand pool */ > + chunk_size = roundup(size, PAGE_SIZE); > + chunk = vmalloc(chunk_size); You're allocating with vmalloc(), which, as far as I know, establishes a second mapping in the vmalloc area for pages that are already mapped as RW through the physmap. AFAICS, later, when you're trying to make pages readonly, you're only changing the protections on the second mapping in the vmalloc area, therefore leaving the memory writable through the physmap. Is that correct? If so, please either document the reasoning why this is okay or change it. ^ permalink raw reply [flat|nested] 84+ messages in thread
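To make the aliasing concern above concrete, here is a rough sketch (illustrative only, not code from the patch), assuming lowmem pages and the vm_struct backing a pmalloc chunk: every backing page stays reachable through a second, still writable, mapping in the linear map, which set_memory_ro() on the vmalloc range does not touch.

static void show_linear_aliases(struct vm_struct *area)
{
	unsigned int i;

	for (i = 0; i < area->nr_pages; i++) {
		struct page *page = area->pages[i];

		/* Same page frame, two virtual addresses: protecting
		 * the first does not protect the second.
		 */
		pr_info("vmalloc alias %p, linear (physmap) alias %p\n",
			area->addr + i * PAGE_SIZE, page_address(page));
	}
}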
* Re: [kernel-hardening] [PATCH 4/6] Protectable Memory 2018-01-24 19:10 ` [kernel-hardening] " Jann Horn @ 2018-01-26 5:35 ` Matthew Wilcox 2018-02-02 18:39 ` Christopher Lameter 0 siblings, 1 reply; 84+ messages in thread From: Matthew Wilcox @ 2018-01-26 5:35 UTC (permalink / raw) To: Jann Horn Cc: Igor Stoppa, jglisse, Kees Cook, Michal Hocko, Laura Abbott, Christoph Hellwig, Christoph Lameter, linux-security-module, linux-mm, kernel list, Kernel Hardening On Wed, Jan 24, 2018 at 08:10:53PM +0100, Jann Horn wrote: > I'm not entirely convinced by the approach of marking small parts of > kernel memory as readonly for hardening. It depends on how significant the data stored in there are. For example, storing function pointers in read-only memory provides significant hardening. > You're allocating with vmalloc(), which, as far as I know, establishes > a second mapping in the vmalloc area for pages that are already mapped > as RW through the physmap. AFAICS, later, when you're trying to make > pages readonly, you're only changing the protections on the second > mapping in the vmalloc area, therefore leaving the memory writable > through the physmap. Is that correct? If so, please either document > the reasoning why this is okay or change it. Yes, this is still vulnerable to attacks through the physmap. That's also true for marking structs as const. We should probably fix that at some point, but at least they're not vulnerable to heap overruns by small amounts ... you have to be able to overrun some other array by terabytes. It's worth having a discussion about whether we want the pmalloc API or whether we want a slab-based API. We can have a separate discussion about an API to remove pages from the physmap. ^ permalink raw reply [flat|nested] 84+ messages in thread
* Re: [kernel-hardening] [PATCH 4/6] Protectable Memory 2018-01-26 5:35 ` Matthew Wilcox @ 2018-02-02 18:39 ` Christopher Lameter 2018-02-03 15:38 ` Igor Stoppa 0 siblings, 1 reply; 84+ messages in thread From: Christopher Lameter @ 2018-02-02 18:39 UTC (permalink / raw) To: Matthew Wilcox Cc: Jann Horn, Igor Stoppa, jglisse, Kees Cook, Michal Hocko, Laura Abbott, Christoph Hellwig, linux-security-module, linux-mm, kernel list, Kernel Hardening On Thu, 25 Jan 2018, Matthew Wilcox wrote: > It's worth having a discussion about whether we want the pmalloc API > or whether we want a slab-based API. We can have a separate discussion > about an API to remove pages from the physmap. We could even do this in a more thorough way. Can we use a ring 1 / 2 distinction to create a hardened OS core that polices the rest of the ever-expanding kernel, with all its modules and this and that feature? I think that will, in the long term, be a better approach and allow more than the current hardening approaches can get you. It seems that we are willing to tolerate significant performance regressions now. So let's use the protection mechanisms that the hardware offers. ^ permalink raw reply [flat|nested] 84+ messages in thread
* Re: [kernel-hardening] [PATCH 4/6] Protectable Memory 2018-02-02 18:39 ` Christopher Lameter @ 2018-02-03 15:38 ` Igor Stoppa 2018-02-03 19:57 ` Igor Stoppa 0 siblings, 1 reply; 84+ messages in thread From: Igor Stoppa @ 2018-02-03 15:38 UTC (permalink / raw) To: Christopher Lameter, Matthew Wilcox, Boris Lukashev Cc: Jann Horn, jglisse, Kees Cook, Michal Hocko, Laura Abbott, Christoph Hellwig, linux-security-module, linux-mm, kernel list, Kernel Hardening +Boris Lukashev On 02/02/18 20:39, Christopher Lameter wrote: > On Thu, 25 Jan 2018, Matthew Wilcox wrote: > >> It's worth having a discussion about whether we want the pmalloc API >> or whether we want a slab-based API. We can have a separate discussion >> about an API to remove pages from the physmap. > > We could even do this in a more thorough way. Can we use a ring 1 / 2 > distinction to create a hardened OS core that polices the rest of > the ever-expanding kernel, with all its modules and this and that feature? What would be the differentiating criteria? Furthermore, what are the chances of invalidating the entire concept, because there is already a hypervisor using the higher level features? That is what you are proposing, if I understand correctly. But more on this below ... > I think that will, in the long term, be a better approach and allow more than the > current hardening approaches can get you. It seems that we are willing to > tolerate significant performance regressions now. So let's use the > protection mechanisms that the hardware offers. I would rather *not* propose significant performance regression :-P There might be some one-off case or anyway rare event which is penalized, but my preference goes to not introducing any significant performance penalty during regular use. After all, the lower the penalty, the wider the (potential) adoption. More in detail: there are 2 major cases for wanting some form of read-only protection. 1) extra safeguard against accidental corruption The kernel provides many debugging tools and they can detect lots of errors during development, but they require time and knowledge to use, which are not always available. Furthermore, it is objectively true that not all the code has the same level of maturity, especially when non-upstream code is used in some custom product. It's not my main goal, but it would be nice if that case too could be addressed by the protection. Corruption *can* happen. Having live guards against it will definitely help in spotting bugs or, at the very least, crash/reboot a device before it can cause permanent data corruption. Protection against accidental corruption should be used as widely as possible, therefore it cannot have a high price tag in terms of lost performance. Otherwise, there's the risk that it will be just a debug feature, more like lockdep or ubsan. 2) protection against malicious attacks This is harder, of course, but what is realistically to be expected? If an attacker can gain full control of the kernel, the only way to do damage control is to have HW and/or higher-privilege SW that can somehow limit the reach of the attacker. To make it work for real, it should be mandated that either these extra HW/SW means can tell apart legitimate kernel activity from rogue actions, or they operate so independently from the kernel that a compromised kernel cannot use any API to influence them.
The consensus seems to be to put aside (for now) this concern and instead focus on what is a typical scenario: - some bug is found that allows reading/writing kernel memory - some other bug is found, which leaks the address of a well-known variable, effectively revealing the randomized offset of each symbol placed in linear memory, once their relative location is known. What is described above is a toolkit that effectively allows one - with patience - to attack anything that is writable by the kernel, including page tables and permissions. However the typical attack is more like: "let's flip some bit(s)". This is where __ro_after_init finds its purpose. My proposal is to extend the same sort of protection also to variables allocated dynamically: * make the pages read-only, once the data is initialized * use vmalloc, so that exfiltrating the address of an unrelated variable cannot easily give away the location of the real target, thanks to the individual page mapping vs linear mapping. Boris Lukashev proposed additional hardening, when accessing a certain variable, in the form of a hash/checksum, but I could not come up with an implementation that did not have too much overhead. Re-considering this, one option would be to have a function "pool_validate()" - probably expensive - that could be invoked by a piece of code before using the data from the pool. Not perfect, because it would not be atomic, but it could be used once, at the beginning of a function, without adding overhead to each access to the pool that the function would perform. An attacker would have to time the attack so that the corruption of the data would happen after the pool is validated and before the data is read from it. Possible, but way trickier than the current unprotected situation. What I am trying to say is that, even with a multi-ring implementation (which would be more dependent on HW features), there would still be the problem of validating the legitimacy of the use of the API that such an implementation would expose. I'd rather try to preserve performance and still provide a defense against the more trivial attacks, since other types of attacks are much harder to perform in the wild. Of course, I'm interested in alternatives. (I'll comment separately on the compound pages.) The way pmalloc is designed is to take advantage of any page provider. So far, vmalloc seems to me the best option, but something else might emerge that works better. Yet the pmalloc API is, I think, what would still be needed to let the rest of the kernel take advantage of this feature. -- igor ^ permalink raw reply [flat|nested] 84+ messages in thread
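For reference, a sketch of what the pool_validate() mentioned above could look like. It is entirely hypothetical: the saved_digest field does not exist in struct gen_pool_chunk, and a real implementation would have to compute and store one digest per chunk when the pool is protected.

#include <linux/crc32.h>

static u32 pmalloc_chunk_digest(struct gen_pool_chunk *chunk)
{
	size_t size = chunk->end_addr + 1 - chunk->start_addr;

	return crc32(0, (void *)chunk->start_addr, size);
}

/* To be called by the user of a protected pool, before trusting its
 * content. Relatively expensive, and not atomic with the reads that
 * follow it, so it raises the bar rather than closing the window.
 */
bool pool_validate(struct gen_pool *pool)
{
	struct gen_pool_chunk *chunk;

	list_for_each_entry(chunk, &pool->chunks, next_chunk)
		if (pmalloc_chunk_digest(chunk) != chunk->saved_digest)
			return false;	/* corruption detected */
	return true;
}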
* Re: [kernel-hardening] [PATCH 4/6] Protectable Memory
  2018-02-03 15:38 ` Igor Stoppa
@ 2018-02-03 19:57 ` Igor Stoppa
  2018-02-03 20:12 ` Boris Lukashev
  0 siblings, 1 reply; 84+ messages in thread
From: Igor Stoppa @ 2018-02-03 19:57 UTC
To: Christopher Lameter, Matthew Wilcox, Boris Lukashev
Cc: Jann Horn, jglisse, Kees Cook, Michal Hocko, Laura Abbott, Christoph Hellwig, linux-security-module, linux-mm, kernel list, Kernel Hardening

>> On Thu, 25 Jan 2018, Matthew Wilcox wrote:
>>> It's worth having a discussion about whether we want the pmalloc API
>>> or whether we want a slab-based API.

I'd love to have some feedback specifically about the API.

I also have some ideas about userspace and how to extend the pmalloc
concept to it:

http://www.openwall.com/lists/kernel-hardening/2018/01/30/20

I'll be AFK intermittently for about 2 weeks, so I might not be able to
reply immediately, but from my perspective this would be just the
beginning of a broader hardening of both kernel and userspace that I'd
like to pursue.

--
igor
* Re: [kernel-hardening] [PATCH 4/6] Protectable Memory
  2018-02-03 19:57 ` Igor Stoppa
@ 2018-02-03 20:12 ` Boris Lukashev
  2018-02-03 20:32 ` Igor Stoppa
  0 siblings, 1 reply; 84+ messages in thread
From: Boris Lukashev @ 2018-02-03 20:12 UTC
To: Igor Stoppa
Cc: Christopher Lameter, Matthew Wilcox, Jann Horn, Jerome Glisse, Kees Cook, Michal Hocko, Laura Abbott, Christoph Hellwig, linux-security-module, Linux-MM, kernel list, Kernel Hardening

On Sat, Feb 3, 2018 at 2:57 PM, Igor Stoppa <igor.stoppa@huawei.com> wrote:
>>> On Thu, 25 Jan 2018, Matthew Wilcox wrote:
>
>>>> It's worth having a discussion about whether we want the pmalloc API
>>>> or whether we want a slab-based API.
> I'd love to have some feedback specifically about the API.
>
> I also have some ideas about userspace and how to extend the pmalloc
> concept to it:
>
> http://www.openwall.com/lists/kernel-hardening/2018/01/30/20
>
> I'll be AFK intermittently for about 2 weeks, so I might not be able to
> reply immediately, but from my perspective this would be just the
> beginning of a broader hardening of both kernel and userspace that I'd
> like to pursue.
>
> --
> igor

Regarding the notion of validated protected memory, is there a method
by which the resulting checksum could be used in a lookup
table/function to resolve the location of the protected data?
Effectively a hash table of protected allocations, with the benefit of
dedup, since any data matching the same key would be the same data
(multiple identical cred structs being pushed around). Should leave the
resolver address/csum in recent memory to check against, right?

--
Boris Lukashev
Systems Architect
Semper Victus
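A rough sketch of the content-addressed lookup Boris seems to describe,
purely to pin the idea down - none of these names exist in the posted
series, and crc32 stands in for whatever checksum would actually be
chosen:

#include <linux/crc32.h>
#include <linux/hashtable.h>
#include <linux/string.h>
#include <linux/types.h>

struct prot_entry {
	u32 key;		/* checksum of the payload */
	const void *data;	/* the single, shared protected copy */
	size_t size;
	struct hlist_node node;
};

static DEFINE_HASHTABLE(prot_table, 8);

/*
 * Resolve a payload to its protected copy: identical contents hash to
 * the same key, so duplicates (e.g. identical cred structs) dedup to
 * one entry. The memcmp() guards against checksum collisions.
 */
static const void *prot_resolve(const void *payload, size_t size)
{
	u32 key = crc32(0, payload, size);
	struct prot_entry *e;

	hash_for_each_possible(prot_table, e, node, key)
		if (e->key == key && e->size == size &&
		    !memcmp(e->data, payload, size))
			return e->data;	/* dedup hit */
	return NULL;	/* miss: the caller would insert a new entry */
}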
* Re: [kernel-hardening] [PATCH 4/6] Protectable Memory
  2018-02-03 20:12 ` Boris Lukashev
@ 2018-02-03 20:32 ` Igor Stoppa
  2018-02-03 22:29 ` Boris Lukashev
  0 siblings, 1 reply; 84+ messages in thread
From: Igor Stoppa @ 2018-02-03 20:32 UTC
To: Boris Lukashev
Cc: Christopher Lameter, Matthew Wilcox, Jann Horn, Jerome Glisse, Kees Cook, Michal Hocko, Laura Abbott, Christoph Hellwig, linux-security-module, Linux-MM, kernel list, Kernel Hardening

On 03/02/18 22:12, Boris Lukashev wrote:

> Regarding the notion of validated protected memory, is there a method
> by which the resulting checksum could be used in a lookup
> table/function to resolve the location of the protected data?

What I have in mind is a checksum at page/vmap_area level, so there
would be no 1:1 mapping between a specific allocation and the checksum.

An extreme case would be one where an allocation crosses one or more
page boundaries, while the checksum refers to a (partially) overlapping
memory area.

Code accessing a pool could perform one (relatively expensive)
validation, but subverting it would still require a more sophisticated
attack.

> Effectively a hash table of protected allocations, with the benefit of
> dedup, since any data matching the same key would be the same data
> (multiple identical cred structs being pushed around). Should leave
> the resolver address/csum in recent memory to check against, right?

I see where you are trying to land, but I do not see how it would work
without a further intermediate step.

pmalloc dishes out virtual memory addresses when called.

It doesn't know what the user of the allocation will put in it.
The user, on the other hand, has the direct address of the memory it
got.

What you are suggesting, if I have understood it correctly, is that,
when the pool is protected, the addresses already given out will become
traps that get resolved through a lookup table that is built based on
the content of each allocation.

That seems to generate a lot of overhead, not to mention the fact that
it might not play very well with the MMU.

If I misunderstood, then I'd need a step-by-step description of what
happens, because it's not clear to me how else the data would be
accessed, if not through the address that was obtained when pmalloc was
invoked.

--
igor
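To make the contrast concrete, a sketch of the page/vmap_area-level
checksum described here - hypothetical names throughout, with struct
parea standing in for whatever per-area metadata would hold the
checksum:

#include <linux/crc32.h>
#include <linux/types.h>

struct parea {			/* stand-in for a vmap area of the pool */
	const void *base;
	size_t size;
	u32 csum;
};

/* Run once at protect time, as the area transitions to R/O. */
static void parea_seal(struct parea *a)
{
	a->csum = crc32(0, a->base, a->size);
}

/*
 * The "relatively expensive" validation: a caller would run it once on
 * entry, then use data from the pool. As noted above, it is not atomic
 * - a window remains between validation and use.
 */
static bool parea_validate(const struct parea *a)
{
	return crc32(0, a->base, a->size) == a->csum;
}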
* Re: [kernel-hardening] [PATCH 4/6] Protectable Memory
  2018-02-03 20:32 ` Igor Stoppa
@ 2018-02-03 22:29 ` Boris Lukashev
  2018-02-04 15:05 ` Igor Stoppa
  0 siblings, 1 reply; 84+ messages in thread
From: Boris Lukashev @ 2018-02-03 22:29 UTC
To: Igor Stoppa
Cc: Christopher Lameter, Matthew Wilcox, Jann Horn, Jerome Glisse, Kees Cook, Michal Hocko, Laura Abbott, Christoph Hellwig, linux-security-module, Linux-MM, kernel list, Kernel Hardening

On Sat, Feb 3, 2018 at 3:32 PM, Igor Stoppa <igor.stoppa@huawei.com> wrote:
>
> On 03/02/18 22:12, Boris Lukashev wrote:
>
>> Regarding the notion of validated protected memory, is there a method
>> by which the resulting checksum could be used in a lookup
>> table/function to resolve the location of the protected data?
>
> What I have in mind is a checksum at page/vmap_area level, so there
> would be no 1:1 mapping between a specific allocation and the checksum.
>
> An extreme case would be one where an allocation crosses one or more
> page boundaries, while the checksum refers to a (partially) overlapping
> memory area.
>
> Code accessing a pool could perform one (relatively expensive)
> validation, but subverting it would still require a more sophisticated
> attack.
>
>> Effectively a hash table of protected allocations, with the benefit of
>> dedup, since any data matching the same key would be the same data
>> (multiple identical cred structs being pushed around). Should leave
>> the resolver address/csum in recent memory to check against, right?
>
> I see where you are trying to land, but I do not see how it would work
> without a further intermediate step.
>
> pmalloc dishes out virtual memory addresses when called.
>
> It doesn't know what the user of the allocation will put in it.
> The user, on the other hand, has the direct address of the memory it
> got.
>
> What you are suggesting, if I have understood it correctly, is that,
> when the pool is protected, the addresses already given out will become
> traps that get resolved through a lookup table that is built based on
> the content of each allocation.
>
> That seems to generate a lot of overhead, not to mention the fact that
> it might not play very well with the MMU.

That is effectively what I'm suggesting - as a form of protection for
consumers against direct reads of data which may have been corrupted by
some irrelevant means. In the context of pmalloc, it would probably be
a separate type of ro+verified pool, which consumers would explicitly
opt into. Say there's a maintenance cycle on a <name some scary thing
controlled by Linux> and it wants to make sure that the instructions it
read in are what they should have been, before running them; those
consumers might well take the penalty, if it keeps <said scary big
thing> from doing <the thing we're scared of it doing>.

If such a resolver could be implemented in a manner which doesn't break
all the things (including acceptable performance for at least a
significant number of workloads), it might be useful as a general tool
for handing out memory to userspace, even in RW, as it provides an
execution context in which other requirements can be forcibly resolved,
preventing unauthorized access to pages the consumer shouldn't get, in
a very generic way. Spectre comes to mind as a potential class of
issues to be addressed this way, since speculative loads could be
prevented if the resolution were to fail.
> If I misunderstood, then I'd need a step-by-step description of what
> happens, because it's not clear to me how else the data would be
> accessed, if not through the address that was obtained when pmalloc
> was invoked.
>
> --
> igor

--
Boris Lukashev
Systems Architect
Semper Victus
* Re: [kernel-hardening] [PATCH 4/6] Protectable Memory
  2018-02-03 22:29 ` Boris Lukashev
@ 2018-02-04 15:05 ` Igor Stoppa
  2018-02-12 23:27 ` Kees Cook
  0 siblings, 1 reply; 84+ messages in thread
From: Igor Stoppa @ 2018-02-04 15:05 UTC
To: Boris Lukashev
Cc: Christopher Lameter, Matthew Wilcox, Jann Horn, Jerome Glisse, Kees Cook, Michal Hocko, Laura Abbott, Christoph Hellwig, linux-security-module, Linux-MM, kernel list, Kernel Hardening

On 04/02/18 00:29, Boris Lukashev wrote:
> On Sat, Feb 3, 2018 at 3:32 PM, Igor Stoppa <igor.stoppa@huawei.com> wrote:

[...]

>> What you are suggesting, if I have understood it correctly, is that,
>> when the pool is protected, the addresses already given out will become
>> traps that get resolved through a lookup table that is built based on
>> the content of each allocation.
>>
>> That seems to generate a lot of overhead, not to mention the fact that
>> it might not play very well with the MMU.
>
> That is effectively what I'm suggesting - as a form of protection for
> consumers against direct reads of data which may have been corrupted
> by some irrelevant means. In the context of pmalloc, it would probably
> be a separate type of ro+verified pool

ok, that seems more like an extension, though.

ATM I am having problems gaining traction to get even the basics merged :-)

I would consider this a possibility for future work, unless it is
deemed necessary for pmalloc to be accepted ...

--
igor
* Re: [kernel-hardening] [PATCH 4/6] Protectable Memory
  2018-02-04 15:05 ` Igor Stoppa
@ 2018-02-12 23:27 ` Kees Cook
  2018-02-13 0:40 ` Laura Abbott
  0 siblings, 1 reply; 84+ messages in thread
From: Kees Cook @ 2018-02-12 23:27 UTC
To: Igor Stoppa
Cc: Boris Lukashev, Christopher Lameter, Matthew Wilcox, Jann Horn, Jerome Glisse, Michal Hocko, Laura Abbott, Christoph Hellwig, linux-security-module, Linux-MM, kernel list, Kernel Hardening

On Sun, Feb 4, 2018 at 7:05 AM, Igor Stoppa <igor.stoppa@huawei.com> wrote:
> On 04/02/18 00:29, Boris Lukashev wrote:
>> On Sat, Feb 3, 2018 at 3:32 PM, Igor Stoppa <igor.stoppa@huawei.com> wrote:
>
> [...]
>
>>> What you are suggesting, if I have understood it correctly, is that,
>>> when the pool is protected, the addresses already given out will become
>>> traps that get resolved through a lookup table that is built based on
>>> the content of each allocation.
>>>
>>> That seems to generate a lot of overhead, not to mention the fact that
>>> it might not play very well with the MMU.
>>
>> That is effectively what I'm suggesting - as a form of protection for
>> consumers against direct reads of data which may have been corrupted
>> by some irrelevant means. In the context of pmalloc, it would probably
>> be a separate type of ro+verified pool
>
> ok, that seems more like an extension, though.
>
> ATM I am having problems gaining traction to get even the basics merged :-)
>
> I would consider this a possibility for future work, unless it is
> deemed necessary for pmalloc to be accepted ...

I would agree: let's get basic functionality in first. Both
verification and the physmap part can be done separately, IMO.

-Kees

--
Kees Cook
Pixel Security
* Re: [kernel-hardening] [PATCH 4/6] Protectable Memory
  2018-02-12 23:27 ` Kees Cook
@ 2018-02-13 0:40 ` Laura Abbott
  2018-02-13 15:20 ` Igor Stoppa
  0 siblings, 1 reply; 84+ messages in thread
From: Laura Abbott @ 2018-02-13 0:40 UTC
To: Kees Cook, Igor Stoppa
Cc: Boris Lukashev, Christopher Lameter, Matthew Wilcox, Jann Horn, Jerome Glisse, Michal Hocko, Christoph Hellwig, linux-security-module, Linux-MM, kernel list, Kernel Hardening

On 02/12/2018 03:27 PM, Kees Cook wrote:
> On Sun, Feb 4, 2018 at 7:05 AM, Igor Stoppa <igor.stoppa@huawei.com> wrote:
>> On 04/02/18 00:29, Boris Lukashev wrote:
>>> On Sat, Feb 3, 2018 at 3:32 PM, Igor Stoppa <igor.stoppa@huawei.com> wrote:
>>
>> [...]
>>
>>>> What you are suggesting, if I have understood it correctly, is that,
>>>> when the pool is protected, the addresses already given out will become
>>>> traps that get resolved through a lookup table that is built based on
>>>> the content of each allocation.
>>>>
>>>> That seems to generate a lot of overhead, not to mention the fact that
>>>> it might not play very well with the MMU.
>>>
>>> That is effectively what I'm suggesting - as a form of protection for
>>> consumers against direct reads of data which may have been corrupted
>>> by some irrelevant means. In the context of pmalloc, it would probably
>>> be a separate type of ro+verified pool
>> ok, that seems more like an extension, though.
>>
>> ATM I am having problems gaining traction to get even the basics merged :-)
>>
>> I would consider this a possibility for future work, unless it is
>> deemed necessary for pmalloc to be accepted ...
>
> I would agree: let's get basic functionality in first. Both
> verification and the physmap part can be done separately, IMO.

Skipping over physmap leaves a pretty big area of exposure that could
be difficult to solve later. I appreciate this might block basic
functionality but I don't think we should just gloss over it without
at least some idea of what we would do.

Thanks,
Laura
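For readers who have not followed the earlier physmap discussion, a
sketch of the exposure Laura refers to - an illustration, not an
exploit, and assuming a !HIGHMEM configuration where page_address() is
valid for every page:

#include <linux/mm.h>
#include <linux/vmalloc.h>

/*
 * The same physical page is reachable both through the (now R/O)
 * vmalloc mapping and through its linear-map ("physmap") alias. If the
 * architecture keeps the linear map writable, a write through the
 * alias bypasses the vmalloc-side protection entirely.
 */
static void physmap_alias_write(const void *protected_addr, char value)
{
	struct page *page = vmalloc_to_page(protected_addr);
	char *alias = page_address(page);	/* linear-map alias */

	alias[offset_in_page(protected_addr)] = value;
}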
* RE: [PATCH 4/6] Protectable Memory
  2018-02-13 0:40 ` Laura Abbott
@ 2018-02-13 15:20 ` Igor Stoppa
  0 siblings, 0 replies; 84+ messages in thread
From: Igor Stoppa @ 2018-02-13 15:20 UTC
To: Laura Abbott, Kees Cook
Cc: Boris Lukashev, Christopher Lameter, Matthew Wilcox, Jann Horn, Jerome Glisse, Michal Hocko, Christoph Hellwig, linux-security-module, Linux-MM, kernel list, Kernel Hardening

Hi,

Apologies for (probably) breaking any email etiquette, but I'm
travelling and have available only the corporate mail client. I'll
reply more extensively to all the comments next week, when I'm back in
the office.

In the meanwhile, I would like to point out that I had already
addressed this in a past thread, but got no reply. To recap:

1) vmalloc'ed memory is harder to attack than kmalloc'ed memory,
because it requires the attacker to also figure out the physical
address. Currently it's sufficient to identify the randomized base
address and the offset in memory of the victim. I have not seen
comments about this statement I made. Is it incorrect?

2) This patchset is about protecting something that right now is not
protected at all. That should be the starting point for comparison. If
it were possible to have a separate section, like const or
__ro_after_init, the situation would be different, but I was told that
it's not possible. Furthermore, it would require reserving a fixed-size
"zone", I think.

3) What is the attack we want to make harder to perform? Even const
data can be attacked, if we assume that the attacker can alter page
mappings. In reality, the only safe way would be to have one-way-only
protection, but we do not have it. Why are alterations of page
properties not considered a risk, while the physmap is? And how would
it be easier (I suppose) to attack the latter?

I'm all for hardening what is possible, but I feel I do not have full
understanding of some of the assumptions being made here. Getting some
answers to my questions above might help me see the point being made.

--
thanks, igor

--------------------------------------------------
Igor Stoppa
E: igor.stoppa@huawei.com
2012 Laboratories - Helsinki Research Center

From: Laura Abbott
To: Kees Cook, Igor Stoppa
Cc: Boris Lukashev, Christopher Lameter, Matthew Wilcox, Jann Horn, Jerome Glisse, Michal Hocko, Christoph Hellwig, linux-security-module, Linux-MM, kernel list, Kernel Hardening
Date: 2018-02-13 00:40:54
Subject: Re: [kernel-hardening] [PATCH 4/6] Protectable Memory

On 02/12/2018 03:27 PM, Kees Cook wrote:
> On Sun, Feb 4, 2018 at 7:05 AM, Igor Stoppa <igor.stoppa@huawei.com> wrote:
>> On 04/02/18 00:29, Boris Lukashev wrote:
>>> On Sat, Feb 3, 2018 at 3:32 PM, Igor Stoppa <igor.stoppa@huawei.com> wrote:
>>
>> [...]
>>
>>>> What you are suggesting, if I have understood it correctly, is that,
>>>> when the pool is protected, the addresses already given out will become
>>>> traps that get resolved through a lookup table that is built based on
>>>> the content of each allocation.
>>>>
>>>> That seems to generate a lot of overhead, not to mention the fact that
>>>> it might not play very well with the MMU.
>>>
>>> That is effectively what I'm suggesting - as a form of protection for
>>> consumers against direct reads of data which may have been corrupted
>>> by some irrelevant means.
>>> In the context of pmalloc, it would probably
>>> be a separate type of ro+verified pool
>> ok, that seems more like an extension, though.
>>
>> ATM I am having problems gaining traction to get even the basics merged :-)
>>
>> I would consider this a possibility for future work, unless it is
>> deemed necessary for pmalloc to be accepted ...
>
> I would agree: let's get basic functionality in first. Both
> verification and the physmap part can be done separately, IMO.

Skipping over physmap leaves a pretty big area of exposure that could
be difficult to solve later. I appreciate this might block basic
functionality but I don't think we should just gloss over it without
at least some idea of what we would do.

Thanks,
Laura
* Re: [PATCH 4/6] Protectable Memory
  2018-01-24 17:56 ` Igor Stoppa
@ 2018-01-26 19:41 ` Igor Stoppa
  0 siblings, 0 replies; 84+ messages in thread
From: Igor Stoppa @ 2018-01-26 19:41 UTC
To: jglisse, keescook, mhocko, labbott, hch, willy
Cc: cl, linux-security-module, linux-mm, linux-kernel, kernel-hardening

On 24/01/18 19:56, Igor Stoppa wrote:

[...]
> +bool pmalloc_prealloc(struct gen_pool *pool, size_t size)
> +{
[...]
> +abort:
> +	vfree(chunk);

this should be vfree_atomic()

[...]
> +void *pmalloc(struct gen_pool *pool, size_t size, gfp_t gfp)
> +{
[...]
> +free:
> +	vfree(chunk);

and this one too.

I will fix them in the next iteration. I am waiting to see if any more
comments arrive; otherwise, I'll send it out, probably next Tuesday.

--
igor
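For context on the fix: vfree() may sleep, so it cannot be used on a
path that can run in atomic context, while vfree_atomic() defers the
actual freeing instead. A condensed sketch of the corrected error path
follows, with add_chunk_to_pool() as a placeholder for whatever
bookkeeping can fail in the real code:

#include <linux/types.h>
#include <linux/vmalloc.h>

static int add_chunk_to_pool(void *chunk)
{
	return -ENOMEM;		/* placeholder failure */
}

static void *alloc_protectable_chunk(size_t size)
{
	void *chunk = vmalloc(size);

	if (!chunk)
		return NULL;

	if (add_chunk_to_pool(chunk)) {
		/*
		 * Was vfree(chunk): unsafe if this path can run in
		 * atomic context, since vfree() may sleep there.
		 * vfree_atomic() queues the free to happen later.
		 */
		vfree_atomic(chunk);
		return NULL;
	}
	return chunk;
}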