From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on
	aws-us-west-2-korg-lkml-1.web.codeaurora.org
Received: from vger.kernel.org (vger.kernel.org [23.128.96.18])
	by smtp.lore.kernel.org (Postfix) with ESMTP id A8686C433EF
	for ; Fri, 27 May 2022 08:57:14 +0000 (UTC)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1348743AbiE0I5M (ORCPT); Fri, 27 May 2022 04:57:12 -0400
Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33162 "EHLO
	lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1350522AbiE0Iz2 (ORCPT); Fri, 27 May 2022 04:55:28 -0400
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
	by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 09E7949CA6;
	Fri, 27 May 2022 01:54:10 -0700 (PDT)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by dfw.source.kernel.org (Postfix) with ESMTPS id 6495A61C01;
	Fri, 27 May 2022 08:54:09 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 405D4C385A9;
	Fri, 27 May 2022 08:54:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org;
	s=korg; t=1653641648;
	bh=Ubsnt854yE+1eQJjtO06DtISei0SzeBhKGcpKSrE0Ac=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=16z7yi94XU1qeXadMBYW5XqA3zHc+PKGPvOVsZZx3/tWq7xz+4MweIz5n47SdABl1
	 2NxdJkEPIpufMNvkKTpX8BTMUxuMoUH4EdzhMWhsCBCSCxVa9kMHHNXBhqgwwGJNUA
	 7+afB8Jv7IrfRA1o3w1qAOtttcN2qzScDBZoGhsE=
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Andrew Morton,
	"Jason A. Donenfeld"
Subject: [PATCH 5.18 40/47] random: move randomize_page() into mm where it belongs
Date: Fri, 27 May 2022 10:50:20 +0200
Message-Id: <20220527084807.895697456@linuxfoundation.org>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220527084801.223648383@linuxfoundation.org>
References: <20220527084801.223648383@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

From: "Jason A. Donenfeld"

commit 5ad7dd882e45d7fe432c32e896e2aaa0b21746ea upstream.

randomize_page is an mm function. It is documented like one. It contains
the history of one. It has the naming convention of one. It looks just
like another very similar function in mm, randomize_stack_top(). And it
has always been maintained and updated by mm people. There is no need
for it to be in random.c. In the "which shape does not look like the
other ones" test, pointing to randomize_page() is correct.

So move randomize_page() into mm/util.c, right next to the similar
randomize_stack_top() function.

This commit contains no actual code changes.

Cc: Andrew Morton
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
---
 drivers/char/random.c  |   32 --------------------------------
 include/linux/mm.h     |    1 +
 include/linux/random.h |    2 --
 mm/util.c              |   32 ++++++++++++++++++++++++++++++++
 4 files changed, 33 insertions(+), 34 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -622,38 +622,6 @@ int __cold random_prepare_cpu(unsigned i
 }
 #endif
 
-/**
- * randomize_page - Generate a random, page aligned address
- * @start:	The smallest acceptable address the caller will take.
- * @range:	The size of the area, starting at @start, within which the
- *		random address must fall.
- *
- * If @start + @range would overflow, @range is capped.
- *
- * NOTE: Historical use of randomize_range, which this replaces, presumed that
- * @start was already page aligned. We now align it regardless.
- *
- * Return: A page aligned address within [start, start + range). On error,
- * @start is returned.
- */
-unsigned long randomize_page(unsigned long start, unsigned long range)
-{
-	if (!PAGE_ALIGNED(start)) {
-		range -= PAGE_ALIGN(start) - start;
-		start = PAGE_ALIGN(start);
-	}
-
-	if (start > ULONG_MAX - range)
-		range = ULONG_MAX - start;
-
-	range >>= PAGE_SHIFT;
-
-	if (range == 0)
-		return start;
-
-	return start + (get_random_long() % range << PAGE_SHIFT);
-}
-
 /*
  * This function will use the architecture-specific hardware random
  * number generator if it is available. It is not recommended for
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2677,6 +2677,7 @@ extern int install_special_mapping(struc
 			   unsigned long flags, struct page **pages);
 
 unsigned long randomize_stack_top(unsigned long stack_top);
+unsigned long randomize_page(unsigned long start, unsigned long range);
 
 extern unsigned long get_unmapped_area(struct file *, unsigned long, unsigned long, unsigned long, unsigned long);
 
--- a/include/linux/random.h
+++ b/include/linux/random.h
@@ -73,8 +73,6 @@ static inline unsigned long get_random_c
 	return get_random_long() & CANARY_MASK;
 }
 
-unsigned long randomize_page(unsigned long start, unsigned long range);
-
 int __init random_init(const char *command_line);
 bool rng_is_initialized(void);
 int wait_for_random_bytes(void);
--- a/mm/util.c
+++ b/mm/util.c
@@ -343,6 +343,38 @@ unsigned long randomize_stack_top(unsign
 #endif
 }
 
+/**
+ * randomize_page - Generate a random, page aligned address
+ * @start:	The smallest acceptable address the caller will take.
+ * @range:	The size of the area, starting at @start, within which the
+ *		random address must fall.
+ *
+ * If @start + @range would overflow, @range is capped.
+ *
+ * NOTE: Historical use of randomize_range, which this replaces, presumed that
+ * @start was already page aligned. We now align it regardless.
+ *
+ * Return: A page aligned address within [start, start + range). On error,
+ * @start is returned.
+ */
+unsigned long randomize_page(unsigned long start, unsigned long range)
+{
+	if (!PAGE_ALIGNED(start)) {
+		range -= PAGE_ALIGN(start) - start;
+		start = PAGE_ALIGN(start);
+	}
+
+	if (start > ULONG_MAX - range)
+		range = ULONG_MAX - start;
+
+	range >>= PAGE_SHIFT;
+
+	if (range == 0)
+		return start;
+
+	return start + (get_random_long() % range << PAGE_SHIFT);
+}
+
 #ifdef CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
 unsigned long arch_randomize_brk(struct mm_struct *mm)
 {
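
For readers who want to see what the moved helper computes, the small
standalone userspace sketch below reproduces the same alignment-and-capping
arithmetic as randomize_page(). The PAGE_SHIFT/PAGE_ALIGN/PAGE_ALIGNED
definitions and the rand()-based get_random_long() stub are stand-ins for
illustration only, not the kernel implementations.

#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))
#define PAGE_ALIGNED(x)	(((x) & (PAGE_SIZE - 1)) == 0)

/* Crude stand-in for the kernel's get_random_long(); not cryptographically sound. */
static unsigned long get_random_long(void)
{
	return ((unsigned long)rand() << 16) ^ (unsigned long)rand();
}

/* Same arithmetic as randomize_page() in the patch above, reproduced for illustration. */
static unsigned long randomize_page(unsigned long start, unsigned long range)
{
	if (!PAGE_ALIGNED(start)) {
		range -= PAGE_ALIGN(start) - start;	/* shrink the window by the alignment slack */
		start = PAGE_ALIGN(start);		/* round start up to the next page boundary */
	}

	if (start > ULONG_MAX - range)			/* cap range so start + range cannot wrap */
		range = ULONG_MAX - start;

	range >>= PAGE_SHIFT;				/* number of whole pages in the window */

	if (range == 0)
		return start;

	return start + (get_random_long() % range << PAGE_SHIFT);
}

int main(void)
{
	/* Pick a page-aligned address in [0x10001, 0x10001 + 1 MiB); note the unaligned start. */
	printf("%#lx\n", randomize_page(0x10001UL, 1UL << 20));
	return 0;
}

Built with any C compiler, this prints one page-aligned address inside the
requested window; an unaligned @start is first rounded up and the window
shrunk by the same amount, exactly as documented in the kernel-doc comment.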