From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753368AbbDCBHX (ORCPT);
	Thu, 2 Apr 2015 21:07:23 -0400
Received: from mail-ie0-f170.google.com ([209.85.223.170]:36331 "EHLO
	mail-ie0-f170.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752646AbbDCBHT (ORCPT);
	Thu, 2 Apr 2015 21:07:19 -0400
Date: Thu, 2 Apr 2015 18:07:16 -0700 (PDT)
From: David Rientjes
X-X-Sender: rientjes@chino.kir.corp.google.com
To: Andrew Morton
cc: Andrey Ryabinin, Dave Kleikamp, Christoph Hellwig, Sebastian Ott,
    Mikulas Patocka, Catalin Marinas, LKML, "linux-mm@kvack.org",
    jfs-discussion@lists.sourceforge.net
Subject: [patch -mm] mm, mempool: poison elements backed by page allocator fix fix
In-Reply-To:
Message-ID:
References: <551A861B.7020701@samsung.com>
User-Agent: Alpine 2.10 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Elements backed by the page allocator might not be directly mapped into
lowmem, so do k{,un}map_atomic() before poisoning and verifying contents
to map into lowmem and return the virtual address.

Reported-by: Andrey Ryabinin
Signed-off-by: David Rientjes
---
 mm/mempool.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/mempool.c b/mm/mempool.c
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -61,9 +61,10 @@ static void check_element(mempool_t *pool, void *element)
 	/* Mempools backed by page allocator */
 	if (pool->free == mempool_free_pages) {
 		int order = (int)(long)pool->pool_data;
-		void *addr = page_address(element);
+		void *addr = kmap_atomic((struct page *)element);
 
 		__check_element(pool, addr, 1UL << (PAGE_SHIFT + order));
+		kunmap_atomic(addr);
 	}
 }
 
@@ -84,9 +85,10 @@ static void poison_element(mempool_t *pool, void *element)
 	/* Mempools backed by page allocator */
 	if (pool->alloc == mempool_alloc_pages) {
 		int order = (int)(long)pool->pool_data;
-		void *addr = page_address(element);
+		void *addr = kmap_atomic((struct page *)element);
 
 		__poison_element(addr, 1UL << (PAGE_SHIFT + order));
+		kunmap_atomic(addr);
 	}
 }
 #else /* CONFIG_DEBUG_SLAB || CONFIG_SLUB_DEBUG_ON */
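
For context on the pattern the patch relies on: with CONFIG_HIGHMEM, a page
from the page allocator may have no permanent kernel mapping, in which case
page_address() returns NULL, so the element has to be mapped temporarily with
kmap_atomic() before its contents can be written or checked. Below is a
minimal sketch of that pattern; mempool_poison_page_demo is a hypothetical
illustration, not code from this patch.

#include <linux/highmem.h>
#include <linux/poison.h>
#include <linux/string.h>

/*
 * Hypothetical illustration of the kmap_atomic() pattern used above:
 * map a possibly-highmem page into lowmem, write the poison pattern,
 * then drop the temporary mapping.  Not part of the patch itself.
 */
static void mempool_poison_page_demo(struct page *page)
{
	void *addr = kmap_atomic(page);		/* temporary lowmem mapping */

	memset(addr, POISON_FREE, PAGE_SIZE);	/* poison one page worth of data */
	kunmap_atomic(addr);			/* unmap; no sleeping in between */
}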