From: Denys Vlasenko
Date: Sun, 29 Mar 2015 21:36:33 +0200
Subject: Re: [PATCH] x86/asm/entry/64: better check for canonical address
To: Ingo Molnar
Cc: Denys Vlasenko, Brian Gerst, Andy Lutomirski, Borislav Petkov,
 the arch/x86 maintainers, Linux Kernel Mailing List, Linus Torvalds
X-Mailing-List: linux-kernel@vger.kernel.org

On Sat, Mar 28, 2015 at 10:11 AM, Ingo Molnar wrote:
>> >> $ ./timing_test64 iret
>> >> 10000 loops in 0.00344s = 343.90 nsec/loop for iret
>> >> 100000 loops in 0.01890s = 188.97 nsec/loop for iret
>> >> 1000000 loops in 0.08228s = 82.28 nsec/loop for iret
>> >> 10000000 loops in 0.77910s = 77.91 nsec/loop for iret
>> >>
>> >> This is the "same-ring interrupt return". ~230 cycles! :(
>> >
>> > Ugh, that's really expensive! Why is that so? Same-ring irqs are
>> > supposedly a lot simpler.
>>
>> Descriptor checks for the restored CS and SS,
>> checking canonical-ness of RIP,
>> supporting "return to TSS" (flags.NT bit),
>> "return to VM86" (flags.VM bit),
>> complex logic around restoring RFLAGS
>> ("don't allow CPL3 to be able to disable interrupts...
>> ...unless their flags.IOPL is 3." Gasp),
>> return to 16-bit code ("do not touch high 16 bits").
>>
>> All of this is a giant PITA to encode in microcode.
>
> I guess they could optimize it by adding a single "I am a modern OS
> executing regular userspace" flag to the descriptor [or expressing the
> same as a separate instruction], to avoid all that legacy crap that
> won't trigger on like 99.999999% of systems ...

Yes, that would be a useful addition: interrupt servicing on x86 takes a
non-negligible hit because of IRET slowness.

Specifically, I'm thinking of a CPL0-only IRET_FAST insn which uses the
same stack layout as IRET, but makes the following assumptions:

* The restored SS and CS are 0-based, 4G-limit segments
  (as usual, limits are ignored in 64-bit mode).
* CS is read/execute, SS is read/write.
* The CPL to return to is equal to (CS & 3).

This would mean that IRET_FAST would not need to read descriptors from the
GDT/LDT; it only needs to read values from the stack. It would be capable
of returning both to CPL0 and to CPL3 - IOW, usable for returning from
interrupts both to userspace and to kernelspace.

* FLAGS.NT is ignored (as if it is 0). IOW, no task returns.
* pt_regs->FLAGS.VM is not restored, but set to 0. IOW, no vm86.
* Extend this to other flags as well, if it makes the return faster.
  We can have separate code which restores AC, DF, IF, TF, RF and IOPL
  in the unlikely event they are "unusual", so it's okay if IRET_FAST
  just sets them to 0 (1 for IF).

The instruction would need a differentiator for whether the returned-to
code is 64-bit or 32-bit. It can probably use the same approach
SYSRET{O,L} uses: with REX.W, the return is to 64-bit code; without it,
to 32-bit code.

The interrupt return path can then check pt_regs->cs: use IRETL_FAST if
it is USER32_CS; use IRETQ_FAST if it is USER_CS or KERNEL_CS; otherwise,
fall back to the slow but "universal" IRETQ.

Do we have contacts at Intel to petition for this? :D