Date: Wed, 18 Oct 2017 18:11:39 +0000 (UTC)
From: Mathieu Desnoyers
To: Ben Maurer
Cc: "Paul E. McKenney", Boqun Feng, Peter Zijlstra, Paul Turner, Andrew Hunter,
 Andy Lutomirski, Dave Watson, Josh Triplett, Will Deacon, linux-kernel,
 Thomas Gleixner, Andi Kleen, Chris Lameter, Ingo Molnar, "H. Peter Anvin",
 rostedt, Linus Torvalds, Andrew Morton, Russell King, Catalin Marinas,
 Michael Kerrisk, Alexander Viro, linux-api
Subject: Re: [RFC PATCH v9 for 4.15 01/14] Restartable sequences system call
Message-ID: <515879378.43966.1508350299712.JavaMail.zimbra@efficios.com>
References: <20171012230326.19984-1-mathieu.desnoyers@efficios.com>
 <20171012230326.19984-2-mathieu.desnoyers@efficios.com>

----- On Oct 18, 2017, at 12:41 PM, Ben Maurer bmaurer@fb.com wrote:

>> The layout of struct rseq_cs is as follows:
>
>> start_ip
>>     Instruction pointer address of the first instruction of the
>>     sequence of consecutive assembly instructions.
>
>> post_commit_ip
>>     Instruction pointer address after the last instruction of
>>     the sequence of consecutive assembly instructions.
>
>> abort_ip
>>     Instruction pointer address where to move the execution
>>     flow in case of abort of the sequence of consecutive
>>     assembly instructions.
>
> Really minor performance thought here.
>
> 1) In the kernel at context switch time you'd need code like:
>
>     if (ip >= start_ip && ip <= post_commit_ip)
>
> This branch would be hard to predict because most instruction pointers
> would be either before or after. If post_commit_ip were relative to
> start_ip you could do this:
>
>     if (ip - start_ip <= post_commit_offset)
>
> which is a single branch that would be more predictable.

The actual context switch code only sets the "t->rseq_preempt" flag and
TIF_NOTIFY_RESUME. The user accesses happen when returning to user-space
with TIF_NOTIFY_RESUME set. Indeed, we can expect most context switches
out of a registered rseq thread to trigger one __rseq_handle_notify_resume
on return to user-space for that thread.

As you point out, the "common" case is *not* nested over a critical
section. This means t->rseq->rseq_cs is NULL, which effectively means
post_commit_ip and start_ip are NULL in rseq_ip_fixup when compared to
the current ip. The check is currently implemented like this:

        /* Handle potentially not being within a critical section. */
        if ((void __user *)instruction_pointer(regs) >= post_commit_ip ||
            (void __user *)instruction_pointer(regs) < start_ip)
                return 1;

So when not nested over a critical section, the first branch is
"ip >= NULL", which turns out to be true, and we therefore return 1 from
rseq_ip_fixup. I suspect we'd need to cast those pointers to
(unsigned long) to be strictly C standard compliant.
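Something along these lines, as a user-space sketch of the comparison
only (the helper name and the plain unsigned long parameters are made up
for the example; this is not the actual kernel code):

#include <stdbool.h>

/*
 * Sketch of the same range check done on unsigned long rather than on
 * unrelated pointers. With a cleared (NULL) rseq_cs, start_ip and
 * post_commit_ip are both 0, so the first comparison is always true and
 * we report "not within a critical section" right away.
 */
bool ip_in_critical_section(unsigned long ip, unsigned long start_ip,
			    unsigned long post_commit_ip)
{
	if (ip >= post_commit_ip || ip < start_ip)
		return false;	/* common case: not nested over a c.s. */
	return true;
}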
If we instead use a "post_commit_offset" relative to start_ip, the
non-nested common case would have start_ip = NULL and
post_commit_offset = 0. The check you propose for not being nested over
a critical section would look like:

        if (!((long)ip - (long)start_ip <= (long)post_commit_offset))
                return 1;

This introduces an issue: if "ip" is lower than "start_ip", we can
incorrectly think we are in a critical section when we are in fact not.

With the previous approach proposed by Paul Turner, this was not an
issue, because he was setting the equivalent of the rseq_cs pointer back
to NULL at the end of the assembly fast-path. However, I have a
fast-path optimization that only sets the rseq_cs pointer at the
beginning of the fast-path, without clearing it afterward. It's up to
the following critical section to overwrite the rseq_cs pointer, or to
the kernel to set it back to NULL if it finds out that it is
preempting/delivering a signal over an instruction pointer outside of
the current rseq_cs start_ip/post_commit_ip range (lazy clear).

Moreover, this modification would add a subtraction to the common case
(ip - start_ip), and would make the ABI slightly uglier.
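To make that failure mode concrete, here is a quick user-space sketch
with made-up addresses (illustration only, not kernel code):

#include <assert.h>

int main(void)
{
	/* Stale rseq_cs left in place by the lazy-clear scheme. */
	unsigned long start_ip = 0x1000;
	unsigned long post_commit_offset = 0x40;
	unsigned long ip = 0x500;	/* unrelated code, below start_ip */

	/* The signed offset check wrongly reports "nested"... */
	assert(((long)ip - (long)start_ip <= (long)post_commit_offset) == 1);

	/* ...while the absolute range check correctly reports "not nested". */
	assert((ip >= start_ip + post_commit_offset || ip < start_ip) == 1);

	return 0;
}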
> 2) In a shared library a rseq_cs structure would have to be relocated
> at runtime because at compilation time the final address of the library
> wouldn't be known. I'm not sure if this is important enough to address,
> but it could be solved by making the pointers relative to the address
> of rseq_cs. But this would make for an uglier API.

If I understand correctly, you are proposing to speed up .so load time
by removing the relocations of the pointers within rseq_cs, making them
relative to the address of rseq_cs. The downside is extra arithmetic on
return to user-space (__rseq_handle_notify_resume): the kernel would
have to compute the absolute start_ip and post_commit_ip from their
offsets and the address of rseq_cs, along the lines of the sketch below.
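A sketch of what that self-relative layout could look like (the struct
and field names are made up for the example; this is not a proposed
ABI):

#include <stdint.h>

/*
 * Hypothetical self-relative descriptor: user-space stores offsets from
 * the address of the descriptor itself, so no load-time relocation is
 * needed, but the kernel must rebuild absolute addresses on every
 * notify-resume.
 */
struct rseq_cs_rel {
	int64_t start_off;		/* start_ip - (uintptr_t)&this_struct */
	int64_t post_commit_off;	/* post_commit_ip - (uintptr_t)&this_struct */
	int64_t abort_off;		/* abort_ip - (uintptr_t)&this_struct */
};

static void resolve_ips(const struct rseq_cs_rel *cs,
			uintptr_t *start_ip, uintptr_t *post_commit_ip)
{
	/* The two extra additions on the notify-resume path: */
	*start_ip = (uintptr_t)cs + cs->start_off;
	*post_commit_ip = (uintptr_t)cs + cs->post_commit_off;
}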
Sure, we're only talking about two additions there, but I don't think
marginally speeding up library load justifies extra work in that kernel
path, nor the uglier ABI.

Thoughts?

Thanks,

Mathieu

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com