From: Arnd Bergmann
To: linux-arch@vger.kernel.org
Cc: Arnd Bergmann, Alexander Viro, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Brian Gerst, Christoph Hellwig, Eric Biederman,
	Ingo Molnar,
	H. Peter Anvin, Thomas Gleixner,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, kexec@lists.infradead.org
Subject: [PATCH v2 3/4] mm: simplify compat numa syscalls
Date: Mon, 2 Nov 2020 13:31:50 +0100
Message-Id: <20201102123151.2860165-4-arnd@kernel.org>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20201102123151.2860165-1-arnd@kernel.org>
References: <20201102123151.2860165-1-arnd@kernel.org>
MIME-Version: 1.0

From: Arnd Bergmann

The compat implementations for mbind, get_mempolicy, set_mempolicy and
migrate_pages are just there to handle the subtly different layout of
bitmaps on 32-bit hosts.

The compat implementation, however, lacks some of the checks that are
present in the native one, in particular the check that the extra bits
are all zero when user space passes a larger mask size than the kernel
supports. Worse, those extra bits do not get cleared when copying in or
out of the kernel, which can lead to incorrect data as well.

Unify the implementation to handle the compat bitmap layout directly in
the get_nodes() and copy_nodes_to_user() helpers. Splitting out the
get_bitmap() helper from get_nodes() also helps readability of the
native case.

On x86, this addresses two additional problems: compat tasks can pass a
bitmap at the end of a mapping, causing a fault when reading across the
page boundary for a 64-bit word, and x32 tasks might run into problems
with get_mempolicy corrupting data when an odd number of 32-bit words
gets passed.

On parisc the migrate_pages() system call apparently had the wrong
calling convention, as big-endian architectures expect the words inside
a bitmap to be swapped. This is not a problem though since parisc has
no NUMA support.

Signed-off-by: Arnd Bergmann
---
 mm/mempolicy.c | 174 +++++++++++++++----------------------------------
 1 file changed, 53 insertions(+), 121 deletions(-)
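To illustrate the layout difference that get_bitmap() hides: a compat
nodemask is an array of 32-bit words, while the native mask on a 64-bit
kernel uses 64-bit longs, so each native word corresponds to two
consecutive compat words. On little-endian machines the two layouts are
byte-identical and a plain copy works; on big-endian they are not,
which is the parisc issue mentioned above. The stand-alone user-space
sketch below (not kernel code; pack_compat_pair() and mask_tail() are
made-up names for the example) shows the word packing and the clearing
of bits above maxnode that get_bitmap() performs conceptually:

/*
 * Sketch of the compat bitmap packing: bit i of the bitmap is
 * bit (i % 32) of 32-bit word (i / 32); the native 64-bit word is
 * built from two consecutive compat words.
 */
#include <stdint.h>
#include <stdio.h>

/* Pack two consecutive 32-bit compat words into one 64-bit native word. */
static uint64_t pack_compat_pair(uint32_t bits_0_31, uint32_t bits_32_63)
{
	return ((uint64_t)bits_32_63 << 32) | bits_0_31;
}

/* Clear the bits at and above 'maxnode' in the last word. */
static uint64_t mask_tail(uint64_t word, unsigned int maxnode)
{
	if (maxnode % 64)
		word &= (1ULL << (maxnode % 64)) - 1;
	return word;
}

int main(void)
{
	/* A compat task asking for nodes 0 and 33 with maxnode = 40. */
	uint32_t compat_mask[2] = { 0x1, 0x2 };
	uint64_t native = mask_tail(pack_compat_pair(compat_mask[0],
						     compat_mask[1]), 40);

	printf("native mask: %#llx\n", (unsigned long long)native); /* 0x200000001 */
	return 0;
}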
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 3fde772ef5ef..fb5e533667f6 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1371,16 +1371,32 @@ static long do_mbind(unsigned long start, unsigned long len,
 /*
  * User space interface with variable sized bitmaps for nodelists.
  */
+static int get_bitmap(unsigned long *mask, const unsigned long __user *nmask,
+		      unsigned long maxnode)
+{
+	unsigned long nlongs = BITS_TO_LONGS(maxnode);
+	int ret;
+
+	if (in_compat_syscall())
+		ret = compat_get_bitmap(mask,
+					(const compat_ulong_t __user *)nmask,
+					maxnode);
+	else
+		ret = copy_from_user(mask, nmask, nlongs*sizeof(unsigned long));
+
+	if (ret)
+		return -EFAULT;
+
+	if (maxnode % BITS_PER_LONG)
+		mask[nlongs-1] &= (1UL << (maxnode % BITS_PER_LONG)) - 1;
+
+	return 0;
+}
 
 /* Copy a node mask from user space. */
 static int get_nodes(nodemask_t *nodes, const unsigned long __user *nmask,
 		     unsigned long maxnode)
 {
-	unsigned long k;
-	unsigned long t;
-	unsigned long nlongs;
-	unsigned long endmask;
-
 	--maxnode;
 	nodes_clear(*nodes);
 	if (maxnode == 0 || !nmask)
@@ -1388,49 +1404,29 @@ static int get_nodes(nodemask_t *nodes, const unsigned long __user *nmask,
 	if (maxnode > PAGE_SIZE*BITS_PER_BYTE)
 		return -EINVAL;
 
-	nlongs = BITS_TO_LONGS(maxnode);
-	if ((maxnode % BITS_PER_LONG) == 0)
-		endmask = ~0UL;
-	else
-		endmask = (1UL << (maxnode % BITS_PER_LONG)) - 1;
-
 	/*
 	 * When the user specified more nodes than supported just check
-	 * if the non supported part is all zero.
-	 *
-	 * If maxnode have more longs than MAX_NUMNODES, check
-	 * the bits in that area first. And then go through to
-	 * check the rest bits which equal or bigger than MAX_NUMNODES.
-	 * Otherwise, just check bits [MAX_NUMNODES, maxnode).
+	 * if the non supported part is all zero, one word at a time,
+	 * starting at the end.
 	 */
-	if (nlongs > BITS_TO_LONGS(MAX_NUMNODES)) {
-		for (k = BITS_TO_LONGS(MAX_NUMNODES); k < nlongs; k++) {
-			if (get_user(t, nmask + k))
-				return -EFAULT;
-			if (k == nlongs - 1) {
-				if (t & endmask)
-					return -EINVAL;
-			} else if (t)
-				return -EINVAL;
-		}
-		nlongs = BITS_TO_LONGS(MAX_NUMNODES);
-		endmask = ~0UL;
-	}
+	while (maxnode > MAX_NUMNODES) {
+		unsigned long bits = min_t(unsigned long, maxnode, BITS_PER_LONG);
+		unsigned long t;
 
-	if (maxnode > MAX_NUMNODES && MAX_NUMNODES % BITS_PER_LONG != 0) {
-		unsigned long valid_mask = endmask;
-
-		valid_mask &= ~((1UL << (MAX_NUMNODES % BITS_PER_LONG)) - 1);
-		if (get_user(t, nmask + nlongs - 1))
+		if (get_bitmap(&t, &nmask[maxnode / BITS_PER_LONG], bits))
 			return -EFAULT;
-		if (t & valid_mask)
+
+		if (maxnode - bits >= MAX_NUMNODES) {
+			maxnode -= bits;
+		} else {
+			maxnode = MAX_NUMNODES;
+			t &= ~((1UL << (MAX_NUMNODES % BITS_PER_LONG)) - 1);
+		}
+		if (t)
 			return -EINVAL;
 	}
 
-	if (copy_from_user(nodes_addr(*nodes), nmask, nlongs*sizeof(unsigned long)))
-		return -EFAULT;
-	nodes_addr(*nodes)[nlongs-1] &= endmask;
-	return 0;
+	return get_bitmap(nodes_addr(*nodes), nmask, maxnode);
 }
 
 /* Copy a kernel node mask to user space */
@@ -1439,6 +1435,10 @@ static int copy_nodes_to_user(unsigned long __user *mask, unsigned long maxnode,
 {
 	unsigned long copy = ALIGN(maxnode-1, 64) / 8;
 	unsigned int nbytes = BITS_TO_LONGS(nr_node_ids) * sizeof(long);
+	bool compat = in_compat_syscall();
+
+	if (compat)
+		nbytes = BITS_TO_COMPAT_LONGS(nr_node_ids) * sizeof(compat_long_t);
 
 	if (copy > nbytes) {
 		if (copy > PAGE_SIZE)
@@ -1447,6 +1447,11 @@ static int copy_nodes_to_user(unsigned long __user *mask, unsigned long maxnode,
 			return -EFAULT;
 		copy = nbytes;
 	}
+
+	if (compat)
+		return compat_put_bitmap((compat_ulong_t __user *)mask,
+					 nodes_addr(*nodes), maxnode);
+
 	return copy_to_user(mask, nodes_addr(*nodes), copy) ? -EFAULT : 0;
 }
 
@@ -1645,72 +1650,22 @@ COMPAT_SYSCALL_DEFINE5(get_mempolicy, int __user *, policy,
 		       compat_ulong_t, maxnode,
 		       compat_ulong_t, addr, compat_ulong_t, flags)
 {
-	long err;
-	unsigned long __user *nm = NULL;
-	unsigned long nr_bits, alloc_size;
-	DECLARE_BITMAP(bm, MAX_NUMNODES);
-
-	nr_bits = min_t(unsigned long, maxnode-1, nr_node_ids);
-	alloc_size = ALIGN(nr_bits, BITS_PER_LONG) / 8;
-
-	if (nmask)
-		nm = compat_alloc_user_space(alloc_size);
-
-	err = kernel_get_mempolicy(policy, nm, nr_bits+1, addr, flags);
-
-	if (!err && nmask) {
-		unsigned long copy_size;
-		copy_size = min_t(unsigned long, sizeof(bm), alloc_size);
-		err = copy_from_user(bm, nm, copy_size);
-		/* ensure entire bitmap is zeroed */
-		err |= clear_user(nmask, ALIGN(maxnode-1, 8) / 8);
-		err |= compat_put_bitmap(nmask, bm, nr_bits);
-	}
-
-	return err;
+	return kernel_get_mempolicy(policy, (unsigned long __user *)nmask,
+				    maxnode, addr, flags);
 }
 
 COMPAT_SYSCALL_DEFINE3(set_mempolicy, int, mode, compat_ulong_t __user *, nmask,
 		       compat_ulong_t, maxnode)
 {
-	unsigned long __user *nm = NULL;
-	unsigned long nr_bits, alloc_size;
-	DECLARE_BITMAP(bm, MAX_NUMNODES);
-
-	nr_bits = min_t(unsigned long, maxnode-1, MAX_NUMNODES);
-	alloc_size = ALIGN(nr_bits, BITS_PER_LONG) / 8;
-
-	if (nmask) {
-		if (compat_get_bitmap(bm, nmask, nr_bits))
-			return -EFAULT;
-		nm = compat_alloc_user_space(alloc_size);
-		if (copy_to_user(nm, bm, alloc_size))
-			return -EFAULT;
-	}
-
-	return kernel_set_mempolicy(mode, nm, nr_bits+1);
+	return kernel_set_mempolicy(mode, (unsigned long __user *)nmask, maxnode);
 }
 
 COMPAT_SYSCALL_DEFINE6(mbind, compat_ulong_t, start, compat_ulong_t, len,
 		       compat_ulong_t, mode, compat_ulong_t __user *, nmask,
 		       compat_ulong_t, maxnode, compat_ulong_t, flags)
 {
-	unsigned long __user *nm = NULL;
-	unsigned long nr_bits, alloc_size;
-	nodemask_t bm;
-
-	nr_bits = min_t(unsigned long, maxnode-1, MAX_NUMNODES);
-	alloc_size = ALIGN(nr_bits, BITS_PER_LONG) / 8;
-
-	if (nmask) {
-		if (compat_get_bitmap(nodes_addr(bm), nmask, nr_bits))
-			return -EFAULT;
-		nm = compat_alloc_user_space(alloc_size);
-		if (copy_to_user(nm, nodes_addr(bm), alloc_size))
-			return -EFAULT;
-	}
-
-	return kernel_mbind(start, len, mode, nm, nr_bits+1, flags);
+	return kernel_mbind(start, len, mode, (unsigned long __user *)nmask,
+			    maxnode, flags);
 }
 
 COMPAT_SYSCALL_DEFINE4(migrate_pages, compat_pid_t, pid,
@@ -1718,32 +1673,9 @@ COMPAT_SYSCALL_DEFINE4(migrate_pages, compat_pid_t, pid,
 		       const compat_ulong_t __user *, old_nodes,
 		       const compat_ulong_t __user *, new_nodes)
 {
-	unsigned long __user *old = NULL;
-	unsigned long __user *new = NULL;
-	nodemask_t tmp_mask;
-	unsigned long nr_bits;
-	unsigned long size;
-
-	nr_bits = min_t(unsigned long, maxnode - 1, MAX_NUMNODES);
-	size = ALIGN(nr_bits, BITS_PER_LONG) / 8;
-	if (old_nodes) {
-		if (compat_get_bitmap(nodes_addr(tmp_mask), old_nodes, nr_bits))
-			return -EFAULT;
-		old = compat_alloc_user_space(new_nodes ? size * 2 : size);
-		if (new_nodes)
-			new = old + size / sizeof(unsigned long);
-		if (copy_to_user(old, nodes_addr(tmp_mask), size))
-			return -EFAULT;
-	}
-	if (new_nodes) {
-		if (compat_get_bitmap(nodes_addr(tmp_mask), new_nodes, nr_bits))
-			return -EFAULT;
-		if (new == NULL)
-			new = compat_alloc_user_space(size);
-		if (copy_to_user(new, nodes_addr(tmp_mask), size))
-			return -EFAULT;
-	}
-	return kernel_migrate_pages(pid, nr_bits + 1, old, new);
+	return kernel_migrate_pages(pid, maxnode,
+				    (const unsigned long __user *)old_nodes,
+				    (const unsigned long __user *)new_nodes);
 }
 
 #endif /* CONFIG_COMPAT */
-- 
2.27.0
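For reference, the user-space calling convention that get_nodes() parses
looks like this; a minimal sketch, assuming a Linux system with
CONFIG_NUMA and the UAPI header <linux/mempolicy.h> installed. The
kernel examines maxnode - 1 bits (see the --maxnode in get_nodes()
above), so the sketch passes the mask width in bits plus one, and any
bits above the highest node supported by the kernel must be zero or the
call fails with EINVAL. A 32-bit binary making the same call on a
64-bit kernel now goes through the in_compat_syscall() path in
get_bitmap() instead of the removed compat_alloc_user_space() wrappers:

/*
 * Sketch: bind this task's memory allocations to NUMA node 0 via the
 * raw set_mempolicy() syscall.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/mempolicy.h>

int main(void)
{
	unsigned long mask[1] = { 1UL << 0 };		/* node 0 only */
	unsigned long maxnode = 8 * sizeof(mask) + 1;	/* bits passed + 1 */

	if (syscall(SYS_set_mempolicy, MPOL_BIND, mask, maxnode) != 0) {
		fprintf(stderr, "set_mempolicy: %s\n", strerror(errno));
		return 1;
	}
	puts("memory policy bound to node 0");
	return 0;
}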