From: Arnd Bergmann
To: Andrew Morton
Cc: Arnd Bergmann, Catalin Marinas, Will Deacon, Thomas Bogendoerfer,
 "James E.J. Bottomley", Helge Deller, Michael Ellerman,
 Benjamin Herrenschmidt, Paul Mackerras, Heiko Carstens, Vasily Gorbik,
 Christian Borntraeger, "David S. Miller", Thomas Gleixner, Ingo Molnar,
 Borislav Petkov, x86@kernel.org, "H. Peter Anvin", Al Viro,
 "Eric W. Biederman", Christoph Hellwig, Feng Tang,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org,
 sparclinux@vger.kernel.org, linux-arch@vger.kernel.org,
 linux-api@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v5 3/6] mm: simplify compat_sys_move_pages
Date: Tue, 27 Jul 2021 16:48:56 +0200
Message-Id: <20210727144859.4150043-4-arnd@kernel.org>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20210727144859.4150043-1-arnd@kernel.org>
References: <20210727144859.4150043-1-arnd@kernel.org>

From: Arnd Bergmann

The compat move_pages() implementation uses compat_alloc_user_space()
for converting the pointer array. Moving the compat handling into the
function itself is a bit simpler and lets us avoid the
compat_alloc_user_space() call.
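
For illustration only (the example below is not part of this patch, and
the build/runtime details are assumptions): move_pages(2) takes an array
of page pointers from userspace, so a 32-bit caller on a 64-bit kernel
supplies 32-bit entries that the compat path has to widen one by one. A
minimal caller, assuming libnuma's move_pages() wrapper from <numaif.h>
(link with -lnuma) and a machine with a NUMA node 0, could look like:

/*
 * Hypothetical userspace example: when built as a 32-bit binary and
 * run on a 64-bit kernel, "pages" is an array of 32-bit pointers,
 * which is exactly what the compat entry point has to convert.
 */
#include <numaif.h>	/* move_pages(), MPOL_MF_MOVE */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	void *pages[1];
	int nodes[1] = { 0 };	/* ask for NUMA node 0 */
	int status[1];

	if (posix_memalign(&pages[0], 4096, 4096))
		return 1;
	memset(pages[0], 0, 4096);	/* fault the page in first */

	/* pid 0 means "the calling process" */
	if (move_pages(0, 1, pages, nodes, status, MPOL_MF_MOVE) == -1) {
		perror("move_pages");
		return 1;
	}
	printf("page now on node %d\n", status[0]);
	return 0;
}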
Reviewed-by: Christoph Hellwig
Signed-off-by: Arnd Bergmann
---
 mm/migrate.c | 45 ++++++++++++++++++++++++++++++---------------
 1 file changed, 30 insertions(+), 15 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 34a9ad3e0a4f..60634d2653b9 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1820,6 +1820,23 @@ static void do_pages_stat_array(struct mm_struct *mm, unsigned long nr_pages,
 	mmap_read_unlock(mm);
 }
 
+static int get_compat_pages_array(const void __user *chunk_pages[],
+				  const void __user * __user *pages,
+				  unsigned long chunk_nr)
+{
+	compat_uptr_t __user *pages32 = (compat_uptr_t __user *)pages;
+	compat_uptr_t p;
+	int i;
+
+	for (i = 0; i < chunk_nr; i++) {
+		if (get_user(p, pages32 + i))
+			return -EFAULT;
+		chunk_pages[i] = compat_ptr(p);
+	}
+
+	return 0;
+}
+
 /*
  * Determine the nodes of a user array of pages and store it in
  * a user array of status.
@@ -1839,8 +1856,15 @@ static int do_pages_stat(struct mm_struct *mm, unsigned long nr_pages,
 		if (chunk_nr > DO_PAGES_STAT_CHUNK_NR)
 			chunk_nr = DO_PAGES_STAT_CHUNK_NR;
 
-		if (copy_from_user(chunk_pages, pages, chunk_nr * sizeof(*chunk_pages)))
-			break;
+		if (in_compat_syscall()) {
+			if (get_compat_pages_array(chunk_pages, pages,
+						   chunk_nr))
+				break;
+		} else {
+			if (copy_from_user(chunk_pages, pages,
+				      chunk_nr * sizeof(*chunk_pages)))
+				break;
+		}
 
 		do_pages_stat_array(mm, chunk_nr, chunk_pages, chunk_status);
 
@@ -1945,23 +1969,14 @@ SYSCALL_DEFINE6(move_pages, pid_t, pid, unsigned long, nr_pages,
 
 #ifdef CONFIG_COMPAT
 COMPAT_SYSCALL_DEFINE6(move_pages, pid_t, pid, compat_ulong_t, nr_pages,
-		       compat_uptr_t __user *, pages32,
+		       compat_uptr_t __user *, pages,
 		       const int __user *, nodes,
 		       int __user *, status,
 		       int, flags)
 {
-	const void __user * __user *pages;
-	int i;
-
-	pages = compat_alloc_user_space(nr_pages * sizeof(void *));
-	for (i = 0; i < nr_pages; i++) {
-		compat_uptr_t p;
-
-		if (get_user(p, pages32 + i) ||
-		    put_user(compat_ptr(p), pages + i))
-			return -EFAULT;
-	}
-	return kernel_move_pages(pid, nr_pages, pages, nodes, status, flags);
+	return kernel_move_pages(pid, nr_pages,
+				 (const void __user *__user *)pages,
+				 nodes, status, flags);
 }
 #endif /* CONFIG_COMPAT */
-- 
2.29.2