From: Arnd Bergmann
To: Linus Torvalds, Christoph Hellwig, linux-arch@vger.kernel.org,
	linux-mm@kvack.org, linux-api@vger.kernel.org, arnd@arndb.de,
	linux-kernel@vger.kernel.org, viro@zeniv.linux.org.uk
Cc: linux@armlinux.org.uk, will@kernel.org, guoren@kernel.org,
	bcain@codeaurora.org, geert@linux-m68k.org, monstr@monstr.eu,
	tsbogend@alpha.franken.de, nickhu@andestech.com, green.hu@gmail.com,
	dinguyen@kernel.org, shorne@gmail.com, deller@gmx.de,
	mpe@ellerman.id.au, peterz@infradead.org, mingo@redhat.com,
	mark.rutland@arm.com, hca@linux.ibm.com, dalias@libc.org,
	davem@davemloft.net, richard@nod.at, x86@kernel.org,
	jcmvbkbc@gmail.com, ebiederm@xmission.com, akpm@linux-foundation.org,
	ardb@kernel.org, linux-alpha@vger.kernel.org,
	linux-snps-arc@lists.infradead.org, linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org,
	linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org,
	openrisc@lists.librecores.org, linux-parisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
	linux-xtensa@linux-xtensa.org
Subject: [PATCH v2 17/18] ia64: remove CONFIG_SET_FS support
Date: Wed, 16 Feb 2022 14:13:31 +0100
Message-Id: <20220216131332.1489939-18-arnd@kernel.org>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20220216131332.1489939-1-arnd@kernel.org>
References: <20220216131332.1489939-1-arnd@kernel.org>
MIME-Version: 1.0
From: Arnd Bergmann

ia64 only uses set_fs() in one file to handle unaligned access for both
user space and kernel instructions. Rewrite this to explicitly pass
around a flag about which one it is and drop the feature from the
architecture.

Signed-off-by: Arnd Bergmann
---
 arch/ia64/Kconfig                   |  1 -
 arch/ia64/include/asm/processor.h   |  4 --
 arch/ia64/include/asm/thread_info.h |  2 -
 arch/ia64/include/asm/uaccess.h     | 21 +++-------
 arch/ia64/kernel/unaligned.c        | 60 +++++++++++++++++++----------
 5 files changed, 45 insertions(+), 43 deletions(-)

diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index a7e01573abd8..6b6a35b3d959 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -61,7 +61,6 @@ config IA64
 	select NEED_SG_DMA_LENGTH
 	select NUMA if !FLATMEM
 	select PCI_MSI_ARCH_FALLBACKS if PCI_MSI
-	select SET_FS
 	select ZONE_DMA32
 	default y
 	help
diff --git a/arch/ia64/include/asm/processor.h b/arch/ia64/include/asm/processor.h
index 45365c2ef598..7cbce290f4e5 100644
--- a/arch/ia64/include/asm/processor.h
+++ b/arch/ia64/include/asm/processor.h
@@ -243,10 +243,6 @@ DECLARE_PER_CPU(struct cpuinfo_ia64, ia64_cpu_info);
 
 extern void print_cpu_info (struct cpuinfo_ia64 *);
 
-typedef struct {
-	unsigned long seg;
-} mm_segment_t;
-
 #define SET_UNALIGN_CTL(task,value)					\
 ({									\
 	(task)->thread.flags = (((task)->thread.flags & ~IA64_THREAD_UAC_MASK)	\
diff --git a/arch/ia64/include/asm/thread_info.h b/arch/ia64/include/asm/thread_info.h
index 51d20cb37706..ef83493e6778 100644
--- a/arch/ia64/include/asm/thread_info.h
+++ b/arch/ia64/include/asm/thread_info.h
@@ -27,7 +27,6 @@ struct thread_info {
 	__u32 cpu;			/* current CPU */
 	__u32 last_cpu;			/* Last CPU thread ran on */
 	__u32 status;			/* Thread synchronous flags */
-	mm_segment_t addr_limit;	/* user-level address space limit */
 	int preempt_count;		/* 0=premptable, <0=BUG; will also serve as bh-counter */
 #ifdef CONFIG_VIRT_CPU_ACCOUNTING_NATIVE
 	__u64 utime;
@@ -48,7 +47,6 @@ struct thread_info {
 	.task		= &tsk,			\
 	.flags		= 0,			\
 	.cpu		= 0,			\
-	.addr_limit	= KERNEL_DS,		\
 	.preempt_count	= INIT_PREEMPT_COUNT,	\
 }
 
diff --git a/arch/ia64/include/asm/uaccess.h b/arch/ia64/include/asm/uaccess.h
index e242a3cc1330..60adadeb3e9e 100644
--- a/arch/ia64/include/asm/uaccess.h
+++ b/arch/ia64/include/asm/uaccess.h
@@ -42,26 +42,17 @@
 #include
 
 /*
- * For historical reasons, the following macros are grossly misnamed:
- */
-#define KERNEL_DS	((mm_segment_t) { ~0UL })		/* cf. access_ok() */
-#define USER_DS		((mm_segment_t) { TASK_SIZE-1 })	/* cf. access_ok() */
-
-#define get_fs()  (current_thread_info()->addr_limit)
-#define set_fs(x) (current_thread_info()->addr_limit = (x))
-
-/*
- * When accessing user memory, we need to make sure the entire area really is in
- * user-level space.  In order to do this efficiently, we make sure that the page at
- * address TASK_SIZE is never valid.  We also need to make sure that the address doesn't
+ * When accessing user memory, we need to make sure the entire area really is
+ * in user-level space.  We also need to make sure that the address doesn't
  * point inside the virtually mapped linear page table.
  */
 static inline int
 __access_ok(const void __user *p, unsigned long size)
 {
+	unsigned long limit = TASK_SIZE;
 	unsigned long addr = (unsigned long)p;
-	unsigned long seg = get_fs().seg;
-	return likely(addr <= seg) &&
-		(seg == KERNEL_DS.seg || likely(REGION_OFFSET(addr) < RGN_MAP_LIMIT));
+
+	return likely((size <= limit) && (addr <= (limit - size)) &&
+		      likely(REGION_OFFSET(addr) < RGN_MAP_LIMIT));
 }
 #define __access_ok __access_ok
 #include
diff --git a/arch/ia64/kernel/unaligned.c b/arch/ia64/kernel/unaligned.c
index 6c1a8951dfbb..0acb5a0cd7ab 100644
--- a/arch/ia64/kernel/unaligned.c
+++ b/arch/ia64/kernel/unaligned.c
@@ -749,9 +749,25 @@ emulate_load_updates (update_t type, load_store_t ld, struct pt_regs *regs, unsi
 	}
 }
 
+static int emulate_store(unsigned long ifa, void *val, int len, bool kernel_mode)
+{
+	if (kernel_mode)
+		return copy_to_kernel_nofault((void *)ifa, val, len);
+
+	return copy_to_user((void __user *)ifa, val, len);
+}
+
+static int emulate_load(void *val, unsigned long ifa, int len, bool kernel_mode)
+{
+	if (kernel_mode)
+		return copy_from_kernel_nofault(val, (void *)ifa, len);
+
+	return copy_from_user(val, (void __user *)ifa, len);
+}
 
 static int
-emulate_load_int (unsigned long ifa, load_store_t ld, struct pt_regs *regs)
+emulate_load_int (unsigned long ifa, load_store_t ld, struct pt_regs *regs,
+		  bool kernel_mode)
 {
 	unsigned int len = 1 << ld.x6_sz;
 	unsigned long val = 0;
@@ -774,7 +790,7 @@ emulate_load_int (unsigned long ifa, load_store_t ld, struct pt_regs *regs)
 		return -1;
 	}
 	/* this assumes little-endian byte-order: */
-	if (copy_from_user(&val, (void __user *) ifa, len))
+	if (emulate_load(&val, ifa, len, kernel_mode))
 		return -1;
 	setreg(ld.r1, val, 0, regs);
 
@@ -872,7 +888,8 @@ emulate_load_int (unsigned long ifa, load_store_t ld, struct pt_regs *regs)
 }
 
 static int
-emulate_store_int (unsigned long ifa, load_store_t ld, struct pt_regs *regs)
+emulate_store_int (unsigned long ifa, load_store_t ld, struct pt_regs *regs,
+		   bool kernel_mode)
 {
 	unsigned long r2;
 	unsigned int len = 1 << ld.x6_sz;
@@ -901,7 +918,7 @@ emulate_store_int (unsigned long ifa, load_store_t ld, struct pt_regs *regs)
 	}
 
 	/* this assumes little-endian byte-order: */
-	if (copy_to_user((void __user *) ifa, &r2, len))
+	if (emulate_store(ifa, &r2, len, kernel_mode))
 		return -1;
 
 	/*
@@ -1021,7 +1038,7 @@ float2mem_double (struct ia64_fpreg *init, struct ia64_fpreg *final)
 }
 
 static int
-emulate_load_floatpair (unsigned long ifa, load_store_t ld, struct pt_regs *regs)
+emulate_load_floatpair (unsigned long ifa, load_store_t ld, struct pt_regs *regs, bool kernel_mode)
 {
 	struct ia64_fpreg fpr_init[2];
 	struct ia64_fpreg fpr_final[2];
@@ -1050,8 +1067,8 @@ emulate_load_floatpair (unsigned long ifa, load_store_t ld, struct pt_regs *regs
 	 * This assumes little-endian byte-order.  Note that there is no "ldfpe"
 	 * instruction:
 	 */
-	if (copy_from_user(&fpr_init[0], (void __user *) ifa, len)
-	    || copy_from_user(&fpr_init[1], (void __user *) (ifa + len), len))
+	if (emulate_load(&fpr_init[0], ifa, len, kernel_mode)
+	    || emulate_load(&fpr_init[1], (ifa + len), len, kernel_mode))
 		return -1;
 
 	DPRINT("ld.r1=%d ld.imm=%d x6_sz=%d\n", ld.r1, ld.imm, ld.x6_sz);
@@ -1126,7 +1143,8 @@ emulate_load_floatpair (unsigned long ifa, load_store_t ld, struct pt_regs *regs
 
 
 static int
-emulate_load_float (unsigned long ifa, load_store_t ld, struct pt_regs *regs)
+emulate_load_float (unsigned long ifa, load_store_t ld, struct pt_regs *regs,
+		    bool kernel_mode)
 {
 	struct ia64_fpreg fpr_init;
 	struct ia64_fpreg fpr_final;
@@ -1152,7 +1170,7 @@ emulate_load_float (unsigned long ifa, load_store_t ld, struct pt_regs *regs)
 	 * See comments in ldX for descriptions on how the various loads are handled.
 	 */
 	if (ld.x6_op != 0x2) {
-		if (copy_from_user(&fpr_init, (void __user *) ifa, len))
+		if (emulate_load(&fpr_init, ifa, len, kernel_mode))
 			return -1;
 
 		DPRINT("ld.r1=%d x6_sz=%d\n", ld.r1, ld.x6_sz);
@@ -1202,7 +1220,8 @@ emulate_load_float (unsigned long ifa, load_store_t ld, struct pt_regs *regs)
 
 
 static int
-emulate_store_float (unsigned long ifa, load_store_t ld, struct pt_regs *regs)
+emulate_store_float (unsigned long ifa, load_store_t ld, struct pt_regs *regs,
+		     bool kernel_mode)
 {
 	struct ia64_fpreg fpr_init;
 	struct ia64_fpreg fpr_final;
@@ -1244,7 +1263,7 @@ emulate_store_float (unsigned long ifa, load_store_t ld, struct pt_regs *regs)
 	DDUMP("fpr_init =", &fpr_init, len);
 	DDUMP("fpr_final =", &fpr_final, len);
 
-	if (copy_to_user((void __user *) ifa, &fpr_final, len))
+	if (emulate_store(ifa, &fpr_final, len, kernel_mode))
 		return -1;
 
 	/*
@@ -1295,7 +1314,6 @@ void
 ia64_handle_unaligned (unsigned long ifa, struct pt_regs *regs)
 {
 	struct ia64_psr *ipsr = ia64_psr(regs);
-	mm_segment_t old_fs = get_fs();
 	unsigned long bundle[2];
 	unsigned long opcode;
 	const struct exception_table_entry *eh = NULL;
@@ -1304,6 +1322,7 @@ ia64_handle_unaligned (unsigned long ifa, struct pt_regs *regs)
 		load_store_t insn;
 	} u;
 	int ret = -1;
+	bool kernel_mode = false;
 
 	if (ia64_psr(regs)->be) {
 		/* we don't support big-endian accesses */
@@ -1367,13 +1386,13 @@ ia64_handle_unaligned (unsigned long ifa, struct pt_regs *regs)
 			if (unaligned_dump_stack)
 				dump_stack();
 		}
-		set_fs(KERNEL_DS);
+		kernel_mode = true;
 	}
 
 	DPRINT("iip=%lx ifa=%lx isr=%lx (ei=%d, sp=%d)\n",
 	       regs->cr_iip, ifa, regs->cr_ipsr, ipsr->ri, ipsr->it);
 
-	if (__copy_from_user(bundle, (void __user *) regs->cr_iip, 16))
+	if (emulate_load(bundle, regs->cr_iip, 16, kernel_mode))
 		goto failure;
 
 	/*
@@ -1467,7 +1486,7 @@ ia64_handle_unaligned (unsigned long ifa, struct pt_regs *regs)
 	      case LDCCLR_IMM_OP:
 	      case LDCNC_IMM_OP:
 	      case LDCCLRACQ_IMM_OP:
-		ret = emulate_load_int(ifa, u.insn, regs);
+		ret = emulate_load_int(ifa, u.insn, regs, kernel_mode);
 		break;
 
 	      case ST_OP:
@@ -1478,7 +1497,7 @@ ia64_handle_unaligned (unsigned long ifa, struct pt_regs *regs)
 		fallthrough;
 	      case ST_IMM_OP:
 	      case STREL_IMM_OP:
-		ret = emulate_store_int(ifa, u.insn, regs);
+		ret = emulate_store_int(ifa, u.insn, regs, kernel_mode);
 		break;
 
 	      case LDF_OP:
@@ -1486,21 +1505,21 @@ ia64_handle_unaligned (unsigned long ifa, struct pt_regs *regs)
 	      case LDFCCLR_OP:
 	      case LDFCNC_OP:
 		if (u.insn.x)
-			ret = emulate_load_floatpair(ifa, u.insn, regs);
+			ret = emulate_load_floatpair(ifa, u.insn, regs, kernel_mode);
 		else
-			ret = emulate_load_float(ifa, u.insn, regs);
+			ret = emulate_load_float(ifa, u.insn, regs, kernel_mode);
 		break;
 
 	      case LDF_IMM_OP:
 	      case LDFA_IMM_OP:
 	      case LDFCCLR_IMM_OP:
 	      case LDFCNC_IMM_OP:
-		ret = emulate_load_float(ifa, u.insn, regs);
+		ret = emulate_load_float(ifa, u.insn, regs, kernel_mode);
 		break;
 
 	      case STF_OP:
 	      case STF_IMM_OP:
-		ret = emulate_store_float(ifa, u.insn, regs);
+		ret = emulate_store_float(ifa, u.insn, regs, kernel_mode);
 		break;
 
 	      default:
@@ -1521,7 +1540,6 @@ ia64_handle_unaligned (unsigned long ifa, struct pt_regs *regs)
 
 	DPRINT("ipsr->ri=%d iip=%lx\n", ipsr->ri, regs->cr_iip);
   done:
-	set_fs(old_fs);		/* restore original address limit */
 	return;
 
   failure:
-- 
2.29.2