From: Jiri Slaby
To: stable@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Minchan Kim, Sergey Senozhatsky,
	Andrew Morton, Linus Torvalds, Jiri Slaby
Subject: [PATCH 3.12 34/86] zram: do not use copy_page with non-page aligned address
Date: Thu, 4 May 2017 11:03:59 +0200
Message-Id:
X-Mailer: git-send-email 2.12.2
In-Reply-To: <13a6a971c9165237531c2870da03084a6becc905.1493888632.git.jslaby@suse.cz>
References: <13a6a971c9165237531c2870da03084a6becc905.1493888632.git.jslaby@suse.cz>
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

From: Minchan Kim

3.12-stable review patch.  If anyone has any objections, please let me know.

===============

commit d72e9a7a93e4f8e9e52491921d99e0c8aa89eb4e upstream.

copy_page() is an optimized memcpy for page-aligned addresses.  If it is
used with a non-page-aligned address, it can corrupt memory, which means
system corruption.

With zram, this can happen with:

1. a 64K page size architecture
2. partial IO
3. slub debug

Partial IO needs to allocate a page and zram allocates it via kmalloc.
With slub debug, kmalloc(PAGE_SIZE) doesn't return a page-size aligned
address, and finally copy_page(mem, cmem) corrupts memory.  So, this
patch changes it to memcpy.

Actually, we don't need to change the zram_bvec_write part because
zsmalloc returns a page-aligned address in the case of the PAGE_SIZE
class, but it's not good to rely on the internals of zsmalloc.

Note:
When this patch is merged to stable, clear_page should be fixed, too.
Unfortunately, recent zram removes it by the "same page merge" feature,
so it's hard to backport this patch to the -stable tree.  I will handle
it when I receive the mail from the stable tree maintainer about
merging this patch for backporting.
Fixes: 42e99bd ("zram: optimize memory operations with clear_page()/copy_page()")
Link: http://lkml.kernel.org/r/1492042622-12074-2-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim
Cc: Sergey Senozhatsky
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Jiri Slaby
---
 drivers/staging/zram/zram_drv.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/staging/zram/zram_drv.c b/drivers/staging/zram/zram_drv.c
index 162e01a27d40..f893a902a534 100644
--- a/drivers/staging/zram/zram_drv.c
+++ b/drivers/staging/zram/zram_drv.c
@@ -321,13 +321,13 @@ static int zram_decompress_page(struct zram *zram, char *mem, u32 index)
 	unsigned long handle = meta->table[index].handle;
 
 	if (!handle || zram_test_flag(meta, index, ZRAM_ZERO)) {
-		clear_page(mem);
+		memset(mem, 0, PAGE_SIZE);
 		return 0;
 	}
 
 	cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_RO);
 	if (meta->table[index].size == PAGE_SIZE)
-		copy_page(mem, cmem);
+		memcpy(mem, cmem, PAGE_SIZE);
 	else
 		ret = lzo1x_decompress_safe(cmem, meta->table[index].size,
 						mem, &clen);
@@ -482,7 +482,7 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
 
 	if ((clen == PAGE_SIZE) && !is_partial_io(bvec)) {
 		src = kmap_atomic(page);
-		copy_page(cmem, src);
+		memcpy(cmem, src, PAGE_SIZE);
 		kunmap_atomic(src);
 	} else {
 		memcpy(cmem, src, clen);
-- 
2.12.2
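For readers without a 64K-page or slub-debug setup at hand, here is a minimal
userspace sketch of the failure mode the changelog describes.  It is not part
of the patch: PAGE_SIZE, copy_page_like() and the 64-byte "debug" offset are
invented for illustration and merely stand in for the kernel's copy_page()
and for the pointer shift that slub red-zoning can introduce.

#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096UL	/* illustrative page size */

/*
 * Stand-in for copy_page(): it requires page-aligned buffers, but here it
 * refuses to run on unaligned ones instead of silently corrupting memory.
 */
static int copy_page_like(void *dst, const void *src)
{
	if (((uintptr_t)dst | (uintptr_t)src) & (PAGE_SIZE - 1)) {
		fprintf(stderr, "copy_page_like: buffer not page aligned\n");
		return -1;
	}
	memcpy(dst, src, PAGE_SIZE);
	return 0;
}

int main(void)
{
	unsigned char *raw, *unaligned, *aligned;

	/* Simulate kmalloc(PAGE_SIZE) under slub debug: the returned pointer
	 * is shifted by metadata and no longer page aligned. */
	raw = malloc(2 * PAGE_SIZE);
	if (!raw)
		return 1;
	unaligned = raw + 64;	/* hypothetical red-zone offset */

	if (posix_memalign((void **)&aligned, PAGE_SIZE, PAGE_SIZE)) {
		free(raw);
		return 1;
	}
	memset(aligned, 0xaa, PAGE_SIZE);

	/* What the patch switches to: safe for any alignment. */
	memcpy(unaligned, aligned, PAGE_SIZE);
	printf("memcpy into unaligned buffer: ok\n");

	/* What the old code did: only correct when both sides are aligned. */
	if (copy_page_like(unaligned, aligned))
		printf("copy_page_like rejected the unaligned buffer\n");

	free(aligned);
	free(raw);
	return 0;
}

Built with plain gcc, the sketch prints the successful memcpy and the rejected
aligned-only copy, which is exactly the distinction the patch relies on.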