From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 22 Dec 2020 12:00:35 -0800
From: Andrew Morton
To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com,
 Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com,
 elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com,
 kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org,
 torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com
Subject: [patch 09/60] kasan: define KASAN_MEMORY_PER_SHADOW_PAGE
Message-ID: <20201222200035.5ghw6cTXt%akpm@linux-foundation.org>
In-Reply-To: <20201222115844.d30aaef7df6f5b120d3e0c3d@linux-foundation.org>
User-Agent: s-nail v14.8.16
Sender: owner-linux-mm@kvack.org
Precedence: bulk

From: Andrey Konovalov
Subject: kasan: define KASAN_MEMORY_PER_SHADOW_PAGE

Define KASAN_MEMORY_PER_SHADOW_PAGE as (KASAN_GRANULE_SIZE << PAGE_SHIFT),
which is the same as (KASAN_GRANULE_SIZE * PAGE_SIZE) for software modes
that use shadow memory, and use it across KASAN code to simplify it.

Link: https://lkml.kernel.org/r/8329391cfe14b5cffd3decf3b5c535b6ce21eef6.1606161801.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov
Signed-off-by: Vincenzo Frascino
Reviewed-by: Marco Elver
Reviewed-by: Alexander Potapenko
Tested-by: Vincenzo Frascino
Cc: Andrey Ryabinin
Cc: Branislav Rankov
Cc: Catalin Marinas
Cc: Dmitry Vyukov
Cc: Evgenii Stepanov
Cc: Kevin Brodsky
Cc: Vasily Gorbik
Cc: Will Deacon
Signed-off-by: Andrew Morton
---

 mm/kasan/init.c   |   10 ++++------
 mm/kasan/kasan.h  |    2 ++
 mm/kasan/shadow.c |   16 +++++++---------
 3 files changed, 13 insertions(+), 15 deletions(-)

--- a/mm/kasan/init.c~kasan-define-kasan_memory_per_shadow_page
+++ a/mm/kasan/init.c
@@ -441,9 +441,8 @@ void kasan_remove_zero_shadow(void *star
 	addr = (unsigned long)kasan_mem_to_shadow(start);
 	end = addr + (size >> KASAN_SHADOW_SCALE_SHIFT);
 
-	if (WARN_ON((unsigned long)start %
-			(KASAN_GRANULE_SIZE * PAGE_SIZE)) ||
-	    WARN_ON(size % (KASAN_GRANULE_SIZE * PAGE_SIZE)))
+	if (WARN_ON((unsigned long)start % KASAN_MEMORY_PER_SHADOW_PAGE) ||
+	    WARN_ON(size % KASAN_MEMORY_PER_SHADOW_PAGE))
 		return;
 
 	for (; addr < end; addr = next) {
@@ -476,9 +475,8 @@ int kasan_add_zero_shadow(void *start, u
 	shadow_start = kasan_mem_to_shadow(start);
 	shadow_end = shadow_start + (size >> KASAN_SHADOW_SCALE_SHIFT);
 
-	if (WARN_ON((unsigned long)start %
-			(KASAN_GRANULE_SIZE * PAGE_SIZE)) ||
-	    WARN_ON(size % (KASAN_GRANULE_SIZE * PAGE_SIZE)))
+	if (WARN_ON((unsigned long)start % KASAN_MEMORY_PER_SHADOW_PAGE) ||
+	    WARN_ON(size % KASAN_MEMORY_PER_SHADOW_PAGE))
 		return -EINVAL;
 
 	ret = kasan_populate_early_shadow(shadow_start, shadow_end);
--- a/mm/kasan/kasan.h~kasan-define-kasan_memory_per_shadow_page
+++ a/mm/kasan/kasan.h
@@ -8,6 +8,8 @@
 #define KASAN_GRANULE_SIZE	(1UL << KASAN_SHADOW_SCALE_SHIFT)
 #define KASAN_GRANULE_MASK	(KASAN_GRANULE_SIZE - 1)
 
+#define KASAN_MEMORY_PER_SHADOW_PAGE	(KASAN_GRANULE_SIZE << PAGE_SHIFT)
+
 #define KASAN_TAG_KERNEL	0xFF /* native kernel pointers tag */
 #define KASAN_TAG_INVALID	0xFE /* inaccessible memory tag */
 #define KASAN_TAG_MAX		0xFD /* maximum value for random tags */
--- a/mm/kasan/shadow.c~kasan-define-kasan_memory_per_shadow_page
+++ a/mm/kasan/shadow.c
@@ -161,7 +161,7 @@ static int __meminit kasan_mem_notifier(
 	shadow_end = shadow_start + shadow_size;
 
 	if (WARN_ON(mem_data->nr_pages % KASAN_GRANULE_SIZE) ||
-	    WARN_ON(start_kaddr % (KASAN_GRANULE_SIZE << PAGE_SHIFT)))
+	    WARN_ON(start_kaddr % KASAN_MEMORY_PER_SHADOW_PAGE))
 		return NOTIFY_BAD;
 
 	switch (action) {
@@ -432,22 +432,20 @@ void kasan_release_vmalloc(unsigned long
 		unsigned long region_start, region_end;
 		unsigned long size;
 
-		region_start = ALIGN(start, PAGE_SIZE * KASAN_GRANULE_SIZE);
-		region_end = ALIGN_DOWN(end, PAGE_SIZE * KASAN_GRANULE_SIZE);
+		region_start = ALIGN(start, KASAN_MEMORY_PER_SHADOW_PAGE);
+		region_end = ALIGN_DOWN(end, KASAN_MEMORY_PER_SHADOW_PAGE);
 
-		free_region_start = ALIGN(free_region_start,
-					  PAGE_SIZE * KASAN_GRANULE_SIZE);
+		free_region_start = ALIGN(free_region_start, KASAN_MEMORY_PER_SHADOW_PAGE);
 
 		if (start != region_start &&
 		    free_region_start < region_start)
-			region_start -= PAGE_SIZE * KASAN_GRANULE_SIZE;
+			region_start -= KASAN_MEMORY_PER_SHADOW_PAGE;
 
-		free_region_end = ALIGN_DOWN(free_region_end,
-					     PAGE_SIZE * KASAN_GRANULE_SIZE);
+		free_region_end = ALIGN_DOWN(free_region_end, KASAN_MEMORY_PER_SHADOW_PAGE);
 
 		if (end != region_end &&
 		    free_region_end > region_end)
-			region_end += PAGE_SIZE * KASAN_GRANULE_SIZE;
+			region_end += KASAN_MEMORY_PER_SHADOW_PAGE;
 
 		shadow_start = kasan_mem_to_shadow((void *)region_start);
 		shadow_end = kasan_mem_to_shadow((void *)region_end);
_
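
As a sanity check, here is a minimal userspace sketch of the arithmetic
behind the new macro. The constants below (KASAN_SHADOW_SCALE_SHIFT == 3,
PAGE_SHIFT == 12) are assumptions mirroring generic KASAN with 4 KB pages
and are restated locally for illustration only, not pulled from kernel
headers:

#include <assert.h>
#include <stdio.h>

/* Illustrative values, assumed to match generic KASAN on a 4 KB-page system. */
#define KASAN_SHADOW_SCALE_SHIFT	3
#define PAGE_SHIFT			12
#define PAGE_SIZE			(1UL << PAGE_SHIFT)

#define KASAN_GRANULE_SIZE		(1UL << KASAN_SHADOW_SCALE_SHIFT)
/* The macro introduced by this patch. */
#define KASAN_MEMORY_PER_SHADOW_PAGE	(KASAN_GRANULE_SIZE << PAGE_SHIFT)

int main(void)
{
	/* Shifting by PAGE_SHIFT is the same as multiplying by PAGE_SIZE. */
	assert(KASAN_MEMORY_PER_SHADOW_PAGE == KASAN_GRANULE_SIZE * PAGE_SIZE);

	/* 4096 shadow bytes per shadow page, each covering an 8-byte granule. */
	printf("%lu bytes of memory per shadow page\n",
	       KASAN_MEMORY_PER_SHADOW_PAGE);
	return 0;
}

With these values one 4 KB shadow page covers 32 KB of kernel memory, which
is why the WARN_ON() checks above require start and size to be multiples of
KASAN_MEMORY_PER_SHADOW_PAGE and why kasan_release_vmalloc() rounds its
region to that boundary: shadow memory can only be populated or released a
whole shadow page at a time.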