From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751838AbdEESSK (ORCPT ); Fri, 5 May 2017 14:18:10 -0400
Received: from mga03.intel.com ([134.134.136.65]:46952 "EHLO mga03.intel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751709AbdEESSG (ORCPT ); Fri, 5 May 2017 14:18:06 -0400
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.38,293,1491289200"; d="scan'208";a="96140811"
From: Ricardo Neri
To: Ingo Molnar, Thomas Gleixner, "H. Peter Anvin", Andy Lutomirski,
	Borislav Petkov
Cc: Peter Zijlstra, Andrew Morton, Brian Gerst, Chris Metcalf,
	Dave Hansen, Paolo Bonzini, Liang Z Li, Masami Hiramatsu,
	Huang Rui, Jiri Slaby, Jonathan Corbet, "Michael S. Tsirkin",
	Paul Gortmaker, Vlastimil Babka, Chen Yucong, Alexandre Julliard,
	Stas Sergeev, Fenghua Yu, "Ravi V. Shankar", Shuah Khan,
	linux-kernel@vger.kernel.org, x86@kernel.org,
	linux-msdos@vger.kernel.org, wine-devel@winehq.org, Ricardo Neri,
	Adam Buchbinder, Colin Ian King, Lorenzo Stoakes, Qiaowei Ren,
	Arnaldo Carvalho de Melo, Adrian Hunter, Kees Cook, Thomas Garnier,
	Dmitry Vyukov
Subject: [PATCH v7 16/26] x86/insn-eval: Support both signed 32-bit and 64-bit effective addresses
Date: Fri, 5 May 2017 11:17:14 -0700
Message-Id: <20170505181724.55000-17-ricardo.neri-calderon@linux.intel.com>
X-Mailer: git-send-email 2.9.3
In-Reply-To: <20170505181724.55000-1-ricardo.neri-calderon@linux.intel.com>
References: <20170505181724.55000-1-ricardo.neri-calderon@linux.intel.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

The 32-bit and 64-bit address encodings are identical. This means that we
can use the same function in both cases. In order to reuse the function
for 32-bit address encodings, we must sign-extend our 32-bit signed
operands to 64-bit signed variables (only for 64-bit builds).
To decide whether sign extension is needed, we rely on the address size
given by the instruction structure. Once the effective address has been
computed, an extra step is needed for 32-bit processes: when running on a
64-bit kernel, such processes can address up to 4GB of memory, but the
sign extension described above would turn, for instance, an effective
address of 0xffff1234 into 0xffffffffffff1234. For this reason, the 4
most significant bytes must be truncated to obtain the true effective
address.

Lastly, before computing the linear address, we verify that the effective
address is within the limits of the segment. The check is kept for long
mode because in that case the limit is set to -1L, the largest possible
unsigned number, which is equivalent to a limit-less segment.

Cc: Dave Hansen
Cc: Adam Buchbinder
Cc: Colin Ian King
Cc: Lorenzo Stoakes
Cc: Qiaowei Ren
Cc: Arnaldo Carvalho de Melo
Cc: Masami Hiramatsu
Cc: Adrian Hunter
Cc: Kees Cook
Cc: Thomas Garnier
Cc: Peter Zijlstra
Cc: Borislav Petkov
Cc: Dmitry Vyukov
Cc: Ravi V. Shankar
Cc: x86@kernel.org
Signed-off-by: Ricardo Neri
---
 arch/x86/lib/insn-eval.c | 99 ++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 88 insertions(+), 11 deletions(-)

diff --git a/arch/x86/lib/insn-eval.c b/arch/x86/lib/insn-eval.c
index 1a5f5a6..c7c1239 100644
--- a/arch/x86/lib/insn-eval.c
+++ b/arch/x86/lib/insn-eval.c
@@ -688,6 +688,62 @@ int insn_get_modrm_rm_off(struct insn *insn, struct pt_regs *regs)
 	return get_reg_offset(insn, regs, REG_TYPE_RM);
 }
 
+/**
+ * _to_signed_long() - Cast an unsigned long into signed long
+ * @val:	A 32-bit or 64-bit unsigned long
+ * @long_bytes:	The number of bytes used to represent a long number
+ * @out:	The cast signed long
+ *
+ * Return: A signed long of either 32 or 64 bits, as per the build configuration
+ * of the kernel.
+ */
+static int _to_signed_long(unsigned long val, int long_bytes, long *out)
+{
+	if (!out)
+		return -EINVAL;
+
+#ifdef CONFIG_X86_64
+	if (long_bytes == 4) {
+		/* higher bytes should all be zero */
+		if (val & ~0xffffffff)
+			return -EINVAL;
+
+		/* sign-extend to a 64-bit long */
+		*out = (long)((int)(val));
+		return 0;
+	} else if (long_bytes == 8) {
+		*out = (long)val;
+		return 0;
+	} else {
+		return -EINVAL;
+	}
+#else
+	*out = (long)val;
+	return 0;
+#endif
+}
+
+/** get_mem_offset() - Obtain the memory offset indicated in operand register
+ * @regs:	Structure with register values as seen when entering kernel mode
+ * @reg_offset:	Offset from the base of pt_regs of the operand register
+ * @addr_size:	Address size of the code segment in use
+ *
+ * Obtain the offset (a signed number with size as specified in addr_size)
+ * indicated in the register used for register-indirect memory addressing.
+ *
+ * Return: A memory offset to be used in the computation of effective address.
+ */
+long get_mem_offset(struct pt_regs *regs, int reg_offset, int addr_size)
+{
+	int ret;
+	long offset = -1L;
+	unsigned long uoffset = regs_get_register(regs, reg_offset);
+
+	ret = _to_signed_long(uoffset, addr_size, &offset);
+	if (ret)
+		return -1L;
+	return offset;
+}
+
 /*
  * return the address being referenced be instruction
  * for rm=3 returning the content of the rm reg
@@ -697,18 +753,21 @@ void __user *insn_get_addr_ref(struct insn *insn, struct pt_regs *regs)
 {
 	unsigned long linear_addr, seg_base_addr, seg_limit;
 	long eff_addr, base, indx;
-	int addr_offset, base_offset, indx_offset;
+	int addr_offset, base_offset, indx_offset, addr_bytes;
 	insn_byte_t sib;
 
 	insn_get_modrm(insn);
 	insn_get_sib(insn);
 	sib = insn->sib.value;
+	addr_bytes = insn->addr_bytes;
 
 	if (X86_MODRM_MOD(insn->modrm.value) == 3) {
 		addr_offset = get_reg_offset(insn, regs, REG_TYPE_RM);
 		if (addr_offset < 0)
 			goto out_err;
-		eff_addr = regs_get_register(regs, addr_offset);
+		eff_addr = get_mem_offset(regs, addr_offset,
+					  addr_bytes);
+		if (eff_addr == -1L)
+			goto out_err;
 		seg_base_addr = insn_get_seg_base(regs, insn, addr_offset);
 		if (seg_base_addr == -1L)
 			goto out_err;
@@ -722,20 +781,28 @@ void __user *insn_get_addr_ref(struct insn *insn, struct pt_regs *regs)
 		 * in the address computation.
 		 */
 		base_offset = get_reg_offset(insn, regs, REG_TYPE_BASE);
-		if (base_offset == -EDOM)
+		if (base_offset == -EDOM) {
 			base = 0;
-		else if (base_offset < 0)
+		} else if (base_offset < 0) {
 			goto out_err;
-		else
-			base = regs_get_register(regs, base_offset);
+		} else {
+			base = get_mem_offset(regs, base_offset,
+					      addr_bytes);
+			if (base == -1L)
+				goto out_err;
+		}
 
 		indx_offset = get_reg_offset(insn, regs, REG_TYPE_INDEX);
-		if (indx_offset == -EDOM)
+		if (indx_offset == -EDOM) {
 			indx = 0;
-		else if (indx_offset < 0)
+		} else if (indx_offset < 0) {
 			goto out_err;
-		else
-			indx = regs_get_register(regs, indx_offset);
+		} else {
+			indx = get_mem_offset(regs, indx_offset,
+					      addr_bytes);
+			if (indx == -1L)
+				goto out_err;
+		}
 
 		eff_addr = base + indx * (1 << X86_SIB_SCALE(sib));
 		seg_base_addr = insn_get_seg_base(regs, insn,
@@ -758,7 +825,10 @@ void __user *insn_get_addr_ref(struct insn *insn, struct pt_regs *regs)
 		} else if (addr_offset < 0) {
 			goto out_err;
 		} else {
-			eff_addr = regs_get_register(regs, addr_offset);
+			eff_addr = get_mem_offset(regs, addr_offset,
+						  addr_bytes);
+			if (eff_addr == -1L)
+				goto out_err;
 		}
 		seg_base_addr = insn_get_seg_base(regs, insn, addr_offset);
@@ -771,6 +841,13 @@ void __user *insn_get_addr_ref(struct insn *insn, struct pt_regs *regs)
 	linear_addr = (unsigned long)eff_addr;
 
 	/*
+	 * If address size is 32-bit, truncate the 4 most significant bytes.
+	 * This is to avoid phony negative offsets.
+	 */
+	if (addr_bytes == 4)
+		linear_addr &= 0xffffffff;
+
+	/*
 	 * Make sure the effective address is within the limits of the
 	 * segment. In long mode, the limit is -1L. Thus, the second part
 	 * of the check always succeeds.
-- 
2.9.3